id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
11,135,083 | https://en.wikipedia.org/wiki/Vidix | VIDIX (video interface for *nix) is a portable programming interface for Unix-like operating systems that allows video card drivers running in user space to directly access the framebuffer through the Direct Graphics Access extension to the X Window System.
History
Nick Kurshev, the author of VIDIX, writes that his motivation in creating the interface was to resolve the issue reported by Vladimir Dergachev in his RFC for an alternative kernel multimedia API: Dergachev noted that existing multimedia interfaces were hard-coded for each device, and suggested that driver developers would have more flexibility with a layer of abstraction.
VIDIX was born as an alternative to the Linux kernel-based drivers from the MPlayer project. For a long time, VIDIX lived within the MPlayer project; later, it lived within the MPlayerXP project, a fork of MPlayer by Kurshev. During that time, Linux and many other Unix-like operating systems lacked quality drivers for the video subsystems. Almost all of the technical documentation for video hardware was under non-disclosure agreements at the time, and many programmers had to code their drivers blindly. Other developers became interested in using VIDIX for their own players, and they asked Kurshev to separate it from the MPlayer project.
VIDIX became an alternative set of device drivers, based on the idea of direct hardware access (similar to Microsoft's DirectX). These drivers mapped accelerated video memory directly, avoiding colour-space conversion and software scaling on the player side.
The X Window System now includes the Direct Rendering Infrastructure, which provides similar functionality with broad hardware support. Kurshev continued to develop VIDIX through 2007, when version 1.0.0 of the software was released.
Supported hardware
Trident Microsystems Cyberblade/i1
Hauppauge PVR350
ATI Technologies Mach64 and 3DRage chips
ATI Technologies Radeon and Rage128 chips:
Radeon R100 chip series
Radeon R200 chip series
Radeon R300 chip series
Radeon R420 chip series
Radeon R520 chip series
Matrox MGA G200/G4x0/G5x0 chips
Nvidia chips:
RIVA 128
RIVA TNT
RIVA TNT2
GeForce 256
GeForce 2 series
GeForce 3 series
GeForce 4 series
GeForce FX series
GeForce 6 series
GeForce 7 series
Some Quadro models
3Dlabs Permedia2, Permedia3, and GLINT R3
S3 Savage
Silicon Integrated Systems (SiS) 300 and 310/325 series chips
VIA Technologies CLE266 Unichrome
See also
Driver
Video
Framebuffer
Video card
References
External links
Home page of VIDIX
Mplayerxp
Device drivers
Computer peripherals | Vidix | [
"Technology"
] | 569 | [
"Computer peripherals",
"Components"
] |
11,135,100 | https://en.wikipedia.org/wiki/LOCOS | LOCOS, short for LOCal Oxidation of Silicon, is a microfabrication process where silicon dioxide is formed in selected areas on a silicon wafer having the Si-SiO2 interface at a lower point than the rest of the silicon surface. As of 2008 it was largely superseded by shallow trench isolation.
This technology was developed to insulate MOS transistors from each other and limit transistor cross-talk. The main goal is to create a silicon oxide insulating structure that penetrates under the surface of the wafer, so that the Si-SiO2 interface occurs at a lower point than the rest of the silicon surface. This cannot be easily achieved by etching field oxide. Thermal oxidation of selected regions surrounding transistors is used instead. The oxygen penetrates into the depth of the wafer, reacts with silicon and transforms it into silicon oxide. In this way, an immersed structure is formed. For process design and analysis purposes, the oxidation of silicon surfaces can be modeled effectively using the Deal–Grove model.
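For illustration, the Deal–Grove relation $x^2 + Ax = B(t + \tau)$ can be solved in closed form for the oxide thickness $x(t)$; the coefficients in the sketch below are placeholder values for the example, not measured process data:

```python
import math

def oxide_thickness(t, A=0.165, B=0.0117, tau=0.37):
    """Oxide thickness (um) after t hours, from x**2 + A*x = B*(t + tau):
    x(t) = (A/2) * (sqrt(1 + 4*B*(t + tau)/A**2) - 1)."""
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A**2) - 1.0)

for hours in (0.5, 1, 2, 4):
    print(hours, round(oxide_thickness(hours), 3))  # growth slows as the oxide thickens
```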
References
See also
Shallow trench isolation
Microtechnology
Semiconductor technology
"Materials_science",
"Engineering"
] | 222 | [
"Semiconductor technology",
"Materials science",
"Microtechnology"
] |
11,135,761 | https://en.wikipedia.org/wiki/Financial%20result | The financial result is the difference between earnings before interest and taxes and earnings before taxes. It is determined by the earnings or losses which result from financial activities.
Interpretation
For most industrial companies the financial result is negative, as the interest charged on borrowing generally exceeds income from investments (dividends). If a company records a positive financial result over several periods, one has to ask how much capital is invested at which interest rate, and whether this capital would not bear a greater yield if it were invested in the company's growth. In the case of consistently positive financial results, a company also has to deal with increasing demands for special distributions to its shareholders.
Calculation formula
In mathematical terms financial result is defined as follows:

$$\text{Financial result} = \text{EBT} - \text{EBIT}$$

where EBT denotes earnings before taxes and EBIT earnings before interest and taxes; the financial result is thus negative whenever financial expenses (chiefly interest) exceed financial income.
Advantages
The advantages of the use of financial result as a key performance indicator are:
The financial result provides information about financing costs.
Information may be gained about non-consolidated companies.
Disadvantages
The disadvantages of the use of financial result as a key performance indicator are:
Operating components may be included in the financial result (e.g.: the income from financing activities).
Investment income as a component of the financial result does not provide any information on the risk inherent in this investment.
The financial result may vary strongly over time.
References
Wiehle, Ulrich; Henryk Deter; Michael Rolf; Michael Diegelmann; Peter Noel Schomig: 100 IFRS Financial Ratios. Cometis AG, 2005.
Profit
Financial ratios | Financial result | [
"Mathematics"
] | 288 | [
"Financial ratios",
"Quantity",
"Metrics"
] |
11,135,785 | https://en.wikipedia.org/wiki/Clay%20modeling | Clay modeling (or clay model making) for automobile prototypes was first introduced in the 1930s by automobile designer Harley Earl, head of the General Motors styling studio (known initially as the Art and Color Section, and later as the Design and Styling Department).
Industrial plasticine, or "clay", which is used for this purpose, is a malleable material that can be easily shaped, thus enabling designers to create models to visualize a product. Clay modeling was soon adopted throughout the industry and remains in use today.
References
External links
General Motors – Car Design History
1930s introductions
Vehicle design
Modelling clay | Clay modeling | [
"Engineering"
] | 121 | [
"Vehicle design",
"Design"
] |
11,136,136 | https://en.wikipedia.org/wiki/CSPD%20%28molecule%29 | CSPD ([3-(1-chloro-3'-methoxyspiro[adamantane-4,4'-dioxetane]-3'-yl)phenyl] dihydrogen phosphate) is a chemical substance with formula C18H22ClO7P. It is a component of enhanced chemiluminescence enzyme-linked immunosorbent assay (ELISA) kits, used for the detection of minute amounts of various substances such as proteins.
Properties
The CSPD molecule contains the following notable functional groups: a phosphate group, a phenyl group, a spiro group, a methyl ether group, and a chlorine substituent. None of these groups carries a charge; a charged group would alter the compound's pH behaviour, 3D structure, mass and bond angles.
CSPD affects persister cell formation via the toxin MqsR (a crucial regulator of quorum sensing and biofilm formation, and a GCU-specific mRNA interferase in Escherichia coli); persister cells are cells that avoid stress and are characterized by reduced metabolism, among other factors.
References
Chemiluminescence
Adamantanes
Dioxetanes
Organic peroxides
Organochlorides
Organophosphates
Phenol esters
Spiro compounds | CSPD (molecule) | [
"Chemistry",
"Biology"
] | 288 | [
"Luminescence",
"Biotechnology stubs",
"Biochemistry stubs",
"Organic compounds",
"Chemiluminescence",
"Biochemistry",
"Organic peroxides",
"Spiro compounds"
] |
11,136,930 | https://en.wikipedia.org/wiki/Strategic%20uranium%20reserves | Strategic uranium reserves refer to uranium inventories held by the government of a particular country, as well as private industry, for the purpose of providing economic and national security during an energy crisis.
North America
In the early 1990s, the United States created a temporary strategic uranium reserve. The authorization for this reserve expired in 1998:
There is hereby established the National Strategic Uranium Reserve under the direction and control of the Secretary. The Reserve shall consist of natural uranium and uranium equivalents contained in stockpiles or inventories currently held by the United States for defense purposes. Effective on October 24, 1992, and for 6 years thereafter, use of the Reserve shall be restricted to military purposes and government research. Use of the Department of Energy’s stockpile of enrichment tails existing on October 24, 1992, shall be restricted to military purposes for 6 years thereafter.
Recently, due to increases in the price of uranium, the Department of Energy has considered the creation of a permanent strategic uranium reserve along the lines of the U.S. strategic petroleum reserve.
Asia
China has announced the creation of a strategic uranium reserve to complement its strategic petroleum reserves.
Japan has also shown interest in creating its own strategic reserve of uranium.
See also
Nuclear power
Global strategic petroleum reserves
Strategic Petroleum Reserve
List of uranium mines
Uranium reserves
Uranium
United States Department of Energy
Energy policy | Strategic uranium reserves | [
"Environmental_science"
] | 265 | [
"Environmental social science",
"Energy policy"
] |
11,136,939 | https://en.wikipedia.org/wiki/Biochemical%20systems%20theory | Biochemical systems theory is a mathematical modelling framework for biochemical systems, based on ordinary differential equations (ODE), in which biochemical processes are represented using power-law expansions in the variables of the system.
This framework, which became known as Biochemical Systems Theory, has been developed since the 1960s by Michael Savageau, Eberhard Voit and others for the systems analysis of biochemical processes. According to Cornish-Bowden (2007) they "regarded this as a general theory of metabolic control, which includes both metabolic control analysis and flux-oriented theory as special cases".
Representation
The dynamics of a species is represented by a differential equation with the structure:

$$\frac{dX_i}{dt} = \sum_{j=1}^{n_f} \mu_{ij} \, \gamma_j \prod_{k=1}^{n_d} X_k^{f_{jk}}$$

where $X_i$ represents one of the $n_d$ variables of the model (metabolite concentrations, protein concentrations or levels of gene expression), $j$ represents the $n_f$ biochemical processes affecting the dynamics of the species, and $\mu_{ij}$ (stoichiometric coefficients), $\gamma_j$ (rate constants) and $f_{jk}$ (kinetic orders) are the parameters defining the dynamics of the system.
The principal difference of power-law models with respect to other ODE models used in biochemical systems is that the kinetic orders can be non-integer numbers. A kinetic order can have even negative value when inhibition is modeled. In this way, power-law models have a higher flexibility to reproduce the non-linearity of biochemical systems.
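For illustration, below is a minimal simulation sketch of a hypothetical two-variable S-system (a common special case of this framework, with one net production and one net degradation power-law term per variable). All parameter values are made up for the example, and the negative kinetic order models inhibition:

```python
from scipy.integrate import solve_ivp

# Illustrative parameters, not from any published model:
g1, b1, g2, b2 = 2.0, 1.0, 1.5, 1.2        # rate constants (gamma)
f12, h11, f21, h22 = -0.5, 0.6, 0.8, 0.4   # kinetic orders (non-integer)

def s_system(t, x):
    """dX1/dt = g1*X2**f12 - b1*X1**h11,  dX2/dt = g2*X1**f21 - b2*X2**h22.
    f12 < 0 means X2 inhibits the production of X1."""
    x1, x2 = x
    return [g1 * x2**f12 - b1 * x1**h11,
            g2 * x1**f21 - b2 * x2**h22]

sol = solve_ivp(s_system, (0.0, 30.0), [0.5, 0.5])
print(sol.y[:, -1])  # the concentrations settle toward a steady state
```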
Models using power-law expansions have been used during the last 35 years to model and analyze several kinds of biochemical systems including metabolic networks, genetic networks and recently in cell signalling.
See also
Dynamical systems
Ludwig von Bertalanffy
Systems theory
References
Literature
Books:
M.A. Savageau, Biochemical systems analysis: a study of function and design in molecular biology, Reading, MA, Addison–Wesley, 1976.
E.O. Voit (ed), Canonical Nonlinear Modeling. S-System Approach to Understanding Complexity, Van Nostrand Reinhold, NY, 1991.
E.O. Voit, Computational Analysis of Biochemical Systems. A Practical Guide for Biochemists and Molecular Biologists, Cambridge University Press, Cambridge, U.K., 2000.
N.V. Torres and E.O. Voit, Pathway Analysis and Optimization in Metabolic Engineering, Cambridge University Press, Cambridge, U.K., 2002.
Scientific articles:
M.A. Savageau, Biochemical systems analysis: I. Some mathematical properties of the rate law for the component enzymatic reactions in: J. Theor. Biol. 25, pp. 365–369, 1969.
M.A. Savageau, Development of fractal kinetic theory for enzyme-catalysed reactions and implications for the design of biochemical pathways in: Biosystems 47(1-2), pp. 9–36, 1998.
M.R. Atkinson et al., Design of gene circuits using power-law models, in: Cell 113, pp. 597–607, 2003.
F. Alvarez-Vasquez et al., Simulation and validation of modelled sphingolipid metabolism in Saccharomyces cerevisiae, in: Nature 433(7024), pp. 425–430, 2005.
J. Vera et al., Power-law models of signal transduction pathways, in: Cellular Signalling, 2007.
Eberhard O. Voit, Applications of Biochemical Systems Theory, 2006.
External links
Savageau Lab at UC Davis
Voit Lab at GA Tech
Systems biology | Biochemical systems theory | [
"Biology"
] | 724 | [
"Systems biology"
] |
11,137,122 | https://en.wikipedia.org/wiki/Mason%27s%20invariant | In electronics, Mason's invariant, named after Samuel Jefferson Mason, is a measure of the quality of transistors.
"When trying to solve a seemingly difficult problem, Sam said to concentrate on the easier ones first; the rest, including the hardest ones, will follow," recalled Andrew Viterbi, co-founder and former vice-president of Qualcomm. He had been a thesis advisee under Samuel Mason at MIT, and this was one lesson he especially remembered from his professor. A few years earlier, Mason had heeded his own advice when he defined a unilateral power gain for a linear two-port device, or U. After concentrating on easier problems with power gain in feedback amplifiers, a figure of merit for all three-terminal devices followed that is still used today as Mason's Invariant.
Origin
In 1953, transistors were only five years old, and they were the only successful solid-state three-terminal active device. They were beginning to be used for RF applications, and they were limited to VHF frequencies and below. Mason wanted to find a figure of merit to compare transistors, and this led him to discover that the unilateral power gain of a linear two-port device was an invariant figure of merit.
In his paper Power Gain in Feedback Amplifiers published in 1953, Mason stated in his introduction, "A vacuum tube, very often represented as a simple transconductance driving a passive impedance, may lead to relatively simple amplifier designs in which the input impedance (and hence the power gain) is effectively infinite, the voltage gain is the quantity of interest, and the input circuit is isolated from the load. The transistor, however, usually cannot be characterized so easily." He wanted to find a metric to characterize and measure the quality of transistors since up until then, no such measure existed. His discovery turned out to have applications beyond transistors.
Derivation of U
Mason first defined the device being studied with the three constraints listed below.
The device has only two ports (at which power can be transferred between it and outside devices).
The device is linear (in its relationships of currents and voltages at the two ports).
The device is used in a specified manner (connected as an amplifier between a linear one-port source and a linear one-port load).
Then, according to Madhu Gupta in Power Gain in Feedback Amplifiers, a Classic Revisited, Mason defined the problem as "being the search for device properties that are invariant with respect to transformations as represented by an embedding network" that satisfy the four constraints listed below.
The embedding network is a four-port.
The embedding network is linear.
The embedding network is lossless.
The embedding network is reciprocal.
He next showed that all transformations that satisfy the above constraints can be accomplished with just three simple transformations performed sequentially. Equivalently, this is the same as representing an embedding network by a set of three embedding networks nested within one another. The three mathematical expressions, written here in terms of the two-port impedance matrix $\mathbf{Z}$, can be seen below.

1. Reactance padding:

$$\mathbf{Z}' = \mathbf{Z} + j\mathbf{X}, \qquad \mathbf{X}~\text{real and symmetric}$$

2. Real transformations:

$$\mathbf{Z}' = \mathbf{A}^{\mathrm{T}} \mathbf{Z} \mathbf{A}, \qquad \mathbf{A}~\text{real}$$

3. Inversion:

$$\mathbf{Z}' = \mathbf{Z}^{-1}$$
Mason then considered which quantities remained invariant under each of these three transformations. His conclusions, listed respectively to the transformations above, are shown below. Each transformation left the values below unchanged.
1. Reactance padding:

$$\mathbf{Z} - \mathbf{Z}^{\mathrm{T}}$$

and

$$\mathbf{Z} + \mathbf{Z}^{*}$$

2. Real transformations:

$$\frac{\det\left(\mathbf{Z} - \mathbf{Z}^{\mathrm{T}}\right)}{\det\left(\mathbf{Z} + \mathbf{Z}^{*}\right)}$$

and the sign of $\det\left(\mathbf{Z} + \mathbf{Z}^{*}\right)$, since both determinants are multiplied by the same positive factor $(\det \mathbf{A})^{2}$.

3. Inversion:

The magnitudes of the two determinants and the sign of the denominator in the above fraction remain unchanged in the inversion transformation. Consequently, the quantity invariant under all three conditions is:

$$U = \frac{\left|\det\left(\mathbf{Z} - \mathbf{Z}^{\mathrm{T}}\right)\right|}{\det\left(\mathbf{Z} + \mathbf{Z}^{*}\right)} = \frac{\left|z_{12} - z_{21}\right|^{2}}{4\left(\operatorname{Re} z_{11}\operatorname{Re} z_{22} - \operatorname{Re} z_{12}\operatorname{Re} z_{21}\right)}$$
Importance
Mason's Invariant, or U, is the only device characteristic that is invariant under lossless, reciprocal embeddings. In other words, U can be used as a figure of merit to compare any two-port active device (which includes three-terminal devices used as two-ports). For example, a factory producing BJTs can calculate U of the transistors it is producing and compare their quality to the other BJTs on the market. Furthermore, U can be used as an indicator of activity. If U is greater than one, the two-port device is active; otherwise, that device is passive. This is especially useful in the microwave engineering community. Though originally published in a circuit theory journal, Mason's paper becomes especially relevant to microwave engineers since U is usually slightly greater than or equal to one in the microwave frequency range. When U is smaller than or considerably larger than one, it becomes relatively useless.
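As a concrete illustration (ours, not from Mason's paper), U can be evaluated directly from measured two-port impedance parameters using the closed form derived above; the Z-matrix below is a made-up example:

```python
import numpy as np

def mason_u(z):
    """Mason's unilateral power gain U from a 2x2 impedance matrix,
    U = |z12 - z21|^2 / (4 * (Re z11 * Re z22 - Re z12 * Re z21))."""
    num = abs(z[0, 1] - z[1, 0])**2
    r = z.real
    den = 4.0 * (r[0, 0] * r[1, 1] - r[0, 1] * r[1, 0])
    return num / den

# Hypothetical Z-parameters of a transistor at a single frequency:
z = np.array([[25 + 30j,  1 +  2j],
              [80 + 40j, 40 + 60j]])
print(mason_u(z))  # about 2.1 here; U > 1 indicates an active device
```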
While Mason's Invariant can be used as a figure of merit across all operating frequencies, its value at ƒmax is especially useful. ƒmax is the maximum oscillation frequency of a device, and it is the frequency at which U = 1. This frequency is also the frequency at which the maximum stable gain Gms and the maximum available gain Gma of the device become one. Consequently, ƒmax is a characteristic of the device, and it has the significance that it is the maximum frequency of oscillation in a circuit where only one active device is present, the device is embedded in a passive network, and only single sinusoidal signals are of interest.
Conclusion
In his revisit of Mason's paper, Gupta states, "Perhaps the most convincing evidence of the utility of the concept of a unilateral power gain as a device figure of merit is the fact that for the last three decades, practically every new, active, two-port device developed for high frequency use has been carefully scrutinized for the achievable value of U..." This assumption is appropriate because "Umax" or "maximum unilateral gain" is still listed on transistor specification sheets, and Mason's Invariant is still taught in some undergraduate electrical engineering curricula. Though now it has been over five decades, Mason's finding of an invariant device characteristic still plays a significant role in transistor design.
See also
Scattering parameters
References
Electronic engineering
Two-port networks | Mason's invariant | [
"Technology",
"Engineering"
] | 1,238 | [
"Electrical engineering",
"Two-port networks",
"Electronic engineering",
"Computer engineering"
] |
11,137,263 | https://en.wikipedia.org/wiki/GF%282%29 | GF(2) (also denoted $\mathbb{F}_2$ or $\mathbb{Z}/2\mathbb{Z}$) is the finite field with two elements.
GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual.
The elements of GF(2) may be identified with the two possible values of a bit and with the Boolean values true and false. It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations.
Definition
GF(2) is the unique field with two elements, with its additive and multiplicative identities respectively denoted 0 and 1.
Its addition is defined as the usual addition of integers but modulo 2 and corresponds to the table below:

| + | 0 | 1 |
|---|---|---|
| 0 | 0 | 1 |
| 1 | 1 | 0 |
If the elements of GF(2) are seen as Boolean values, then the addition is the same as that of the logical XOR operation.
Since each element equals its opposite, subtraction is thus the same operation as addition.
The multiplication of GF(2) is again the usual multiplication modulo 2 (see the table below), and on Boolean variables it corresponds to the logical AND operation:

| × | 0 | 1 |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
GF(2) can be identified with the field of the integers modulo 2, that is, the quotient ring of the ring of integers Z by the ideal 2Z of all even numbers: GF(2) = Z/2Z.
The notations Z2 and $\mathbb{Z}_2$ may be encountered, although they can be confused with the notation of 2-adic integers.
Properties
Because GF(2) is a field, many of the familiar properties of number systems such as the rational numbers and real numbers are retained:
addition has an identity element (0) and an inverse for every element;
multiplication has an identity element (1) and an inverse for every element but 0;
addition and multiplication are commutative and associative;
multiplication is distributive over addition.
Properties that are not familiar from the real numbers include:
every element x of GF(2) satisfies x + x = 0 and therefore −x = x; this means that the characteristic of GF(2) is 2;
every element x of GF(2) satisfies x² = x (i.e. is idempotent with respect to multiplication); this is an instance of Fermat's little theorem. GF(2) is the only field with this property (Proof: if x² = x, then either x = 0 or x ≠ 0. In the latter case, x must have a multiplicative inverse, in which case dividing both sides of the equation by x gives x = 1. All larger fields contain elements other than 0 and 1, and those elements cannot satisfy this property).
Applications
Because of the algebraic properties above, many familiar and powerful tools of mathematics work in GF(2) just as well as other fields. For example, matrix operations, including matrix inversion, can be applied to matrices with elements in GF(2) (see matrix ring).
Any group (V,+) with the property v + v = 0 for every v in V is necessarily abelian and can be turned into a vector space over GF(2) in a natural fashion, by defining 0v = 0 and 1v = v for all v in V. This vector space will have a basis, implying that the number of elements of V must be a power of 2 (or infinite).
In modern computers, data are represented with bit strings of a fixed length, called machine words. These are endowed with the structure of a vector space over GF(2). The addition of this vector space is the bitwise operation called XOR (exclusive or). The bitwise AND is another operation on this vector space, which makes it a Boolean algebra, a structure that underlies all computer science. These spaces can also be augmented with a multiplication operation that makes them into a field GF(2^n), but the multiplication operation cannot be a bitwise operation. When n is itself a power of two, the multiplication operation can be nim-multiplication; alternatively, for any n, one can use multiplication of polynomials over GF(2) modulo an irreducible polynomial (as for instance for the field GF(2^8) in the description of the Advanced Encryption Standard cipher).
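The sketch below illustrates this polynomial representation for GF(2^8) with the AES irreducible polynomial x^8 + x^4 + x^3 + x + 1 (bit-encoded as 0x11B); field elements are encoded as integers whose bits are polynomial coefficients, and the function name is ours. The test values {57} × {83} = {c1} are the worked example from the AES specification:

```python
def gf2n_mul(a, b, mod_poly, n):
    """Multiply two elements of GF(2^n) given as bit-encoded polynomials
    over GF(2), reducing modulo the irreducible polynomial mod_poly."""
    # Carry-less (XOR-based) polynomial multiplication
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    # Reduce modulo the irreducible polynomial
    for shift in range(result.bit_length() - 1, n - 1, -1):
        if result & (1 << shift):
            result ^= mod_poly << (shift - n)
    return result

print(hex(gf2n_mul(0x57, 0x83, 0x11B, 8)))  # 0xc1, matching the AES spec
```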
Vector spaces and polynomial rings over GF(2) are widely used in coding theory, and in particular in error correcting codes and modern cryptography. For example, many common error correcting codes (such as BCH codes) are linear codes over GF(2) (codes defined from vector spaces over GF(2)), or polynomial codes (codes defined as quotients of polynomial rings over GF(2)).
Algebraic closure
Like any field, GF(2) has an algebraic closure. This is a field F which contains GF(2) as a subfield, which is algebraic over GF(2) (i.e. every element of F is a root of a polynomial with coefficients in GF(2)), and which is algebraically closed (any non-constant polynomial with coefficients in F has a root in F). The field F is uniquely determined by these properties, up to a field automorphism (i.e. essentially up to the notation of its elements).
F contains a single copy of each of the finite fields GF(2^n); the copy of GF(2^n) is contained in the copy of GF(2^m) if and only if n divides m. The field F is countable and is the union of all these finite fields.
Conway realized that F can be identified with the ordinal number $\omega^{\omega^{\omega}}$, where the addition and multiplication operations are defined in a natural manner by transfinite induction (these operations are however different from the standard addition and multiplication of ordinal numbers). The addition in this field is simple to perform and is akin to Nim-addition; Lenstra has shown that the multiplication can also be performed efficiently.
See also
Field with one element
References
Finite fields
2 (number)
Binary arithmetic | GF(2) | [
"Mathematics"
] | 1,215 | [
"Arithmetic",
"Binary arithmetic"
] |
11,138,566 | https://en.wikipedia.org/wiki/Rachel%20Carson%20Prize%20%28environmentalist%20award%29 | The Rachel Carson Prize (Rachel Carson-prisen) is an international environmental award, established in Stavanger, Norway in 1991 to commemorate the achievements of environmentalist Rachel Carson and to award efforts in her spirit. The prize is awarded to a woman who has distinguished herself in outstanding work for the environment in Norway or internationally.
The prize was established spontaneously during a 1989 meeting in Stavanger, on the initiative of speaker Berit Ås. The prize consists of money and the sculpture The Cormorant by artist Irma Bruun Hodne.
Awardees
1991: Sidsel Mørck, Norwegian author and activist
1993: Bergljot Børresen, Norwegian veterinarian
1995: Anne Grieg, Norwegian psychiatrist
1997: Berit Ås, Norwegian feminist and professor in social psychology
1999: Theo Colborn, American zoologist
2001: Renate Künast, German Federal Minister of Consumer Protection, Food and Agriculture
2003: Åshild Dale, Norwegian farmer
2005: Malin Falkenmark, Swedish professor in hydrology
2007: Sheila Watt-Cloutier, Canadian Inuit climate activist
2009: Marie-Monique Robin, French journalist
2011: Marilyn Mehlmann, Swedish environmentalist and writer
2013: Sam Fanshawe, British marine conservationist
2015: Mozhgan Savabieasfahani, Iranian environmental toxicologist
2016: Gabrielle Hecht
2017: Sylvia Earle
2019: Greta Thunberg, Swedish climate activist
2021: Maja Lunde, Norwegian author
See also
Women in science
List of prizes, medals, and awards for women in science
List of environmental awards
References
External links
Official website (English)
Environmental awards
Awards established in 1991
Norwegian awards
Science awards honoring women
Stavanger
1991 establishments in Norway
Prize | Rachel Carson Prize (environmentalist award) | [
"Technology"
] | 351 | [
"Science and technology awards",
"Science awards honoring women"
] |
11,138,712 | https://en.wikipedia.org/wiki/Accordion%20%28GUI%29 | The accordion is a graphical control element comprising a vertically stacked list of items, such as labels or thumbnails. Each item can be "expanded" or "collapsed" to reveal the content associated with that item. There can be zero expanded items, exactly one, or more than one item expanded at a time, depending on the configuration.
The term stems from the musical accordion in which sections of the bellows can be expanded by pulling outward.
A common example of an accordion is the Show/Hide operation of a box region, but extended to have multiple sections in a list.
An accordion is similar in purpose to a tabbed interface, a list of items where exactly one item is expanded into a panel (i.e. list items are shortcuts to access separate panels).
User definition
Several windows are stacked on top of one another. All of them are "shaded", so only their captions are visible. When one of them is clicked to make it active, it is "unshaded" or "maximized", and the other windows in the accordion are displaced toward the top or bottom edge.
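The sketch below turns this description into a minimal, hypothetical accordion widget using Python's tkinter (an illustration only, not a standard widget): clicking a caption "unshades" that section and collapses all the others.

```python
import tkinter as tk

class Accordion(tk.Frame):
    """One-at-a-time accordion: a stack of caption buttons, each with a
    collapsible body packed directly beneath it."""
    def __init__(self, master, sections):
        super().__init__(master)
        self.bodies = []
        for index, (title, content) in enumerate(sections):
            section = tk.Frame(self)
            section.pack(fill="x")
            tk.Button(section, text=title, anchor="w",
                      command=lambda i=index: self.expand(i)).pack(fill="x")
            body = tk.Label(section, text=content, anchor="w",
                            justify="left", relief="sunken")
            self.bodies.append(body)  # starts collapsed (not packed)

    def expand(self, index):
        for i, body in enumerate(self.bodies):
            if i == index:
                body.pack(fill="both", expand=True)  # unshade the clicked item
            else:
                body.pack_forget()                   # shade everything else

root = tk.Tk()
Accordion(root, [("Section 1", "First panel"),
                 ("Section 2", "Second panel"),
                 ("Section 3", "Third panel")]).pack(fill="both", expand=True)
root.mainloop()
```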
Examples
SlideVerse is an accordion interface providing access to web content.
The list view of Google Reader also features this.
In an early example, Apple's download page used roll-over accordions in 2008. In this example, captured in the Wayback Machine in the Internet Archive, the left column of the page includes three categories that expand on roll-over: "All Downloads", "Top Apple Downloads", and "Top Downloads".
See also
Code folding, a similar technique applied to text
References
External links
jQuery UI accordion widget
YourHead
mootools Tutorial (where the effect is called sliding shelf) on MONFX
Accordion Interface Demo of an accordion script
Graphical user interface elements
Graphical control elements | Accordion (GUI) | [
"Technology"
] | 396 | [
"Components",
"Graphical user interface elements"
] |
11,138,871 | https://en.wikipedia.org/wiki/Louisville%20Water%20Tower | The Louisville Water Tower, located east of downtown Louisville, Kentucky, near the riverfront, is the oldest ornamental water tower in the world, having been built before the more famous Chicago Water Tower. Both the actual water tower and its pumping station are a designated National Historic Landmark for their architecture. As with the Fairmount Water Works of Philadelphia (designed 1812, built 1819–22), the industrial nature of its pumping station was disguised in the form of a Roman temple complex.
In 2014, the Louisville WaterWorks Museum opened on the premises.
History
Unknown to residents at the time, the lack of a safe water supply presented a significant health risk to the city. After the arrival of the second cholera pandemic in the United States (1832), Louisville in the 1830s and 1840s gained the nickname "graveyard of the west", due to the polluted local water giving Louisville residents cholera and typhoid at epidemic levels. This was because residents used the water of tainted private wells; the linkage was not discovered until 1854, by the English physician John Snow, and not accepted as fact until decades later. Due to the water project's completion in 1866, Louisville was free of cholera during the epidemic of 1873.
After several devastating fires in the 1850s, Louisvillians were convinced of the importance of the project. The decision was made by the Kentucky Legislature to form the Louisville Water Company on March 6, 1854. Private investors showed little interest and so after only 55 shares had been sold and the failure of a first attempt to secure voter approval to buy shares, the project was widely promoted. In 1856 voters approved purchase of 5500 shares in 1856, and another 2200 shares in 1859, transforming it into an almost completely government-owned corporation.
The inspiration for the architecture of Louisville's Water Tower came from the French architect Claude Nicolas Ledoux, who merged "architectural beauty with industrial efficiency". It was decided to render the water station an ornament to the city, to make skeptical Louisvillians more accepting of a water company. Theodore Scowden and his assistant Charles Hermany were the architects of the structures. They chose an area just outside town, on a hill overlooking the Ohio River, which provided excellent elevation. The location also meant that coal boats could easily deliver the coal necessary to operate the station. The main column, of the Doric order, rises out of a Corinthian portico surrounding its base. The portico is surmounted by a wooden balustrade with ten pedestals also constructed of wood, originally supporting painted cast-zinc statues from J. W. Fiske & Company, ornamental cast-iron manufacturers of New York. Even the reservoir's gatehouse on the riverfront invoked the castles along the Rhine.
The water tower began operations on October 16, 1860. The tower was not just pretty; it was effective. In 24 hours the station could produce 12 million US gallons (45,000 m3) of water. This water, in turn, flowed through 26 miles (42 km) of pipe.
A tornado on March 27, 1890, irreparably changed the Water Tower. The original water tower had an iron pipe protected by a wood-paneled shaft, but after the tornado destroyed it, it was replaced with cast iron. The tornado also destroyed all but two of the ten statues that were on the pedestals. Shortly thereafter, a new pumping station and reservoirs were built in Crescent Hill, and the original water tower ceased pumping operations in 1909. The pumping station was renovated in 2010.
In January 2013, extensive renovations of the water tower property, including the addition of the Louisville WaterWorks Museum, began, and the museum opened on March 1, 2014.
Statues
There are ten zinc statues above the first level's balustrade, each standing on a pedestal over a Corinthian column. They are listed clockwise below with identifiable features:
An Indian hunter: a tomahawk and a dog on a leash. He possibly represents the element earth.
A Danaide: emptying a large amphora on her raised leg. She represents "tasks that are never complete".
Mercury: winged helmet.
Winter: headscarf, censer of flame in hand. (The four seasons here are all women.)
Hebe: raising a small jug above her head, a cup in the other hand.
Neptune: a trident.
Spring: a flower bud in one hand, a basket in another.
Flora: a wreath in her hand.
Summer: shielding her eyes from the sun with her hand.
Autumn: a plate of harvest, grapes in her hair.
The statues were originally urns in the plans. The first set of statues included Ceres, Diana, and a girl in a bonnet.
Gallery
See also
Crescent Hill Reservoir
Cardinal Hill Reservoir
List of attractions and events in the Louisville metropolitan area
National Register of Historic Places listings in Jefferson County, Kentucky
References
External links
Louisville Visual Arts Association website
History of Water Tower
Infrastructure completed in 1860
Towers completed in 1860
19th-century buildings and structures in Louisville, Kentucky
Water towers on the National Register of Historic Places
Historic American Engineering Record in Kentucky
National Historic Landmarks in Kentucky
National Register of Historic Places in Louisville, Kentucky
Water towers in Kentucky
Historic Civil Engineering Landmarks
Infrastructure in Louisville, Kentucky
Tourist attractions in Louisville, Kentucky
Former pumping stations
1860 establishments in Kentucky | Louisville Water Tower | [
"Engineering"
] | 1,074 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
11,139,487 | https://en.wikipedia.org/wiki/Frictionless%20plane | The frictionless plane is a concept from the writings of Galileo Galilei. In his 1638 The Two New Sciences, Galileo presented a formula that predicted the motion of an object moving down an inclined plane. His formula was based upon his past experimentation with free-falling bodies. However, his model was not based upon experimentation with objects moving down an inclined plane, but from his conceptual modeling of the forces acting upon the object. Galileo understood the mechanics of the inclined plane as the combination of horizontal and vertical vectors; the result of gravity acting upon the object, diverted by the slope of the plane.
However, Galileo's equations do not account for friction, and therefore do not perfectly predict the results of an actual experiment. This is because some energy is always lost when one mass applies a non-zero normal force to another. Therefore, the observed speed, acceleration and distance traveled should be less than Galileo predicts. This energy is lost in forms like sound and heat. However, from Galileo's predictions of an object moving down an inclined plane in a frictionless environment, he created the theoretical foundation for extremely fruitful real-world experimental prediction.
Frictionless planes do not exist in the real world. However, if they did, one can be almost certain that objects on them would behave exactly as Galileo predicts. Despite their nonexistence, they have considerable value in the design of engines, motors, roadways, and even tow-truck beds, to name a few examples.
The effect of friction on an object moving down an inclined plane can be calculated as

$$F_f = \mu_k F_N$$

where $F_f$ is the force of friction exerted by the object and the inclined plane on each other, parallel to the surface of the plane, $F_N$ is the normal force exerted by the object and the plane on each other, directed perpendicular to the plane, and $\mu_k$ is the coefficient of kinetic friction.
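A short numerical sketch of this relation (slope angle and coefficient chosen arbitrarily): combining $F_f = \mu_k F_N$ with the normal force $F_N = mg\cos\theta$ on an incline gives an acceleration $a = g(\sin\theta - \mu_k\cos\theta)$ down the plane, versus Galileo's frictionless $a = g\sin\theta$:

```python
import math

g, theta, mu_k = 9.81, math.radians(30), 0.2   # illustrative values

a_frictionless = g * math.sin(theta)                              # Galileo's ideal plane
a_with_friction = g * (math.sin(theta) - mu_k * math.cos(theta))  # using F_f = mu_k * F_N

print(a_frictionless, a_with_friction)  # friction reduces the predicted acceleration
```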
Unless the inclined plane is in a vacuum, a (usually) small amount of potential energy is also lost to air drag.
See also
Atwood machine
Spherical cow
References
Abstraction
Physics education | Frictionless plane | [
"Physics"
] | 411 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
11,139,555 | https://en.wikipedia.org/wiki/Lactation%20room | A lactation room (or lactorium) is a private space where a nursing mother can use a breast pump. The development is mostly confined to the United States, which is unique among developed countries in providing minimal maternity leave.
Purpose and description
Lactation rooms provide breastfeeding mothers with a private space to pump or nurse. While lactation spaces existed prior to the 2010 Patient Protection and Affordable Care Act, the amended Section 4207 of the Fair Labor Standards Act requires employers with 50 employees or more to provide nursing mothers with a private space that is not a bathroom.
Generally, a lactation room includes a refrigerator, sink, cleaning supplies, table, and comfortable chair. The ability to pump throughout the day allows mothers to keep up their milk supply and enables them to save and take home the nutrient-rich milk they have pumped.
Popularity
Lactation rooms have become widely popular in the US business setting. The reason for this development is that
mothers are the fastest-growing segment of the U.S. labor force. Approximately 70% of employed mothers with children younger than 3 years work full time. One-third of these mothers return to work within 3 months after giving birth and two-thirds return within 6 months. Working outside the home is related to a shorter duration of breastfeeding, and intentions to work full-time are significantly associated with lower rates of breastfeeding initiation and shorter duration.
Benefits
In addition, breastfeeding benefits employers as breastfeeding results in decreased health claims, increased productivity, and fewer days missed from work to care for sick children.
One example of the benefits provided to businesses and employees by establishing a corporate lactation program is that of CIGNA, a US employee benefits company. In 1995, CIGNA established the “Working Well Moms” program, which provided lactation education program and lactation rooms. In 2000, CIGNA and UCLA conducted a study of 343 breastfeeding women who were taking part in CIGNA’s program. The study revealed a savings of $240,000 annually in health care expenses for breastfeeding mothers and their children, and a savings of $60,000 annually through reduced absenteeism among breastfeeding mothers at CIGNA. In addition, the study found that "breastfeeding duration for women enrolled in the Working Well Moms program is 72.5% at six months compared to a 21.1 percent national average of employed new mothers."
Resources
A variety of resources exist for breastfeeding mother and employers on how to establish and promote a lactation room or lactation support program. The following are currently available:
American Institute of Architects' Lactation Room Design
Center for Disease Control’s Healthy Workplace Initiative
US Dept. of Health and Human Services’ Healthy People 2010
In addition, the US Department of Health and Human Services, Maternal and Child Health Bureau is currently developing a toolkit to promote breastfeeding in the workplace called “The Business Case for Breastfeeding”.
Notes
See also
Break room
Breastmilk storage and handling
Breastfeeding
Rooms
Women-only spaces | Lactation room | [
"Engineering"
] | 618 | [
"Rooms",
"Architecture"
] |
11,140,843 | https://en.wikipedia.org/wiki/Townsend%20%28unit%29 | The townsend (symbol Td) is a physical unit of the reduced electric field (the ratio E/N), where E is the electric field and N is the concentration of neutral particles.
It is named after John Sealy Townsend, who conducted early research into gas ionisation.
Definition
It is defined by the relation

$$1~\mathrm{Td} = 10^{-21}~\mathrm{V{\cdot}m^{2}}$$

For example, an electric field of $E = 2.5 \times 10^{4}~\mathrm{V/m}$ in a medium with the density of an ideal gas at 1 atm and 0 °C, the Loschmidt constant

$$N = n_{0} = 2.6868 \times 10^{25}~\mathrm{m^{-3}},$$

gives

$$E/N = 9.3 \times 10^{-22}~\mathrm{V{\cdot}m^{2}},$$

which corresponds to $E/N \approx 1~\mathrm{Td}$.
Uses
This unit is important in gas discharge physics, where it serves as a scaling parameter because the mean energy of electrons (and therefore many other properties of the discharge) is typically a function of E/N over a broad range of E and N.
The concentration N, which in an ideal gas is simply related to pressure and temperature ($N = p/k_B T$), controls the mean free path and collision frequency. The electric field E governs the energy gained between two successive collisions.
The reduced electric field being a scaling factor effectively means that increasing the electric field intensity E by some factor q has the same consequences as lowering the gas density N by the factor q.
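A small conversion sketch (the function name and inputs are illustrative), combining the ideal-gas relation $N = p/(k_B T)$ with the definition $1~\mathrm{Td} = 10^{-21}~\mathrm{V{\cdot}m^2}$:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def reduced_field_td(e_field, pressure, temperature):
    """E/N in townsends for an ideal gas, given E in V/m,
    pressure in Pa and temperature in K."""
    n = pressure / (K_B * temperature)   # neutral particle density, m^-3
    return e_field / n / 1e-21           # 1 Td = 1e-21 V*m^2

print(reduced_field_td(2.5e4, 101325, 273.15))  # ~0.93 Td, as in the example above
```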
See also
Electric glow discharge
Vacuum arc
References
A. Banković, S. Dujko, R. D. White, J. P. Marler, S. J. Buckman, S. Marjanović, G. Malović, G. García and Z. Lj. Petrović, Positron transport in water vapour, New J. Phys. 14, 035003 (2012).
Electrical breakdown | Townsend (unit) | [
"Physics"
] | 286 | [
"Physical phenomena",
"Electrical phenomena",
"Electrical breakdown"
] |
11,141,222 | https://en.wikipedia.org/wiki/Tetrad%20formalism | The tetrad formalism is an approach to general relativity that generalizes the choice of basis for the tangent bundle from a coordinate basis to the less restrictive choice of a local basis, i.e. a locally defined set of four linearly independent vector fields called a tetrad or vierbein. It is a special case of the more general idea of a vielbein formalism, which is set in (pseudo-)Riemannian geometry. This article as currently written makes frequent mention of general relativity; however, almost everything it says is equally applicable to (pseudo-)Riemannian manifolds in general, and even to spin manifolds. Most statements hold simply by substituting an arbitrary dimension n for 4. In German, "vier" translates to "four", "viel" to "many", and "Bein" to "leg".
The general idea is to write the metric tensor as the product of two vielbeins, one on the left, and one on the right. The effect of the vielbeins is to change the coordinate system used on the tangent manifold to one that is simpler or more suitable for calculations. It is frequently the case that the vielbein coordinate system is orthonormal, as that is generally the easiest to use. Most tensors become simple or even trivial in this coordinate system; thus the complexity of most expressions is revealed to be an artifact of the choice of coordinates, rather than an innate property or physical effect. That is, as a formalism, it does not alter predictions; it is rather a calculational technique.
The advantage of the tetrad formalism over the standard coordinate-based approach to general relativity lies in the ability to choose the tetrad basis to reflect important physical aspects of the spacetime. The abstract index notation denotes tensors as if they were represented by their coefficients with respect to a fixed local tetrad. Compared to a completely coordinate free notation, which is often conceptually clearer, it allows an easy and computationally explicit way to denote contractions.
The significance of the tetradic formalism appears in the Einstein–Cartan formulation of general relativity. The tetradic formalism of the theory is more fundamental than its metric formulation, as one cannot convert between the tetradic and metric formulations of the fermionic actions, despite this being possible for bosonic actions. This is effectively because Weyl spinors can be very naturally defined on a Riemannian manifold and their natural setting leads to the spin connection. Those spinors take form in the vielbein coordinate system, and not in the manifold coordinate system.
The privileged tetradic formalism also appears in the deconstruction of higher dimensional Kaluza–Klein gravity theories and massive gravity theories, in which the extra dimension(s) is/are replaced by a series of N lattice sites such that the higher dimensional metric is replaced by a set of interacting metrics that depend only on the 4D components. Vielbeins commonly appear in other general settings in physics and mathematics. Vielbeins can be understood as solder forms.
Mathematical formulation
The tetrad formulation is a special case of a more general formulation, known as the vielbein or n-bein formulation, with n = 4. Make note of the spelling: in German, "viel" means "many", not to be confused with "vier", meaning "four".
In the vielbein formalism, an open cover of the spacetime manifold $M$ and a local basis for each of those open sets is chosen: a set of $n$ independent vector fields

$$e_a = e_a{}^{\mu} \partial_{\mu}, \qquad a = 1, \dots, n$$

that together span the $n$-dimensional tangent bundle at each point in the set. Dually, a vielbein (or tetrad in 4 dimensions) determines (and is determined by) a dual co-vielbein (co-tetrad) — a set of $n$ independent 1-forms

$$e^a = e^a{}_{\mu} \, dx^{\mu}$$

such that

$$e^a(e_b) = \delta^a_b,$$

where $\delta^a_b$ is the Kronecker delta. A vielbein is usually specified by its coefficients $e_a{}^{\mu}$ with respect to a coordinate basis, despite the choice of a set of (local) coordinates being unnecessary for the specification of a tetrad. Each covector $e^a$ is a solder form.
From the point of view of the differential geometry of fiber bundles, the $n$ vector fields $\{e_a\}_{a=1,\dots,n}$ define a section of the frame bundle, i.e. a parallelization of $U \subseteq M$, which is equivalent to an isomorphism $TU \cong U \times \mathbb{R}^{n}$. Since not every manifold is parallelizable, a vielbein can generally only be chosen locally (i.e. only on a coordinate chart $U$ and not all of $M$).
All tensors of the theory can be expressed in the vector and covector basis, by expressing them as linear combinations of members of the (co)vielbein. For example, the spacetime metric tensor can be transformed from a coordinate basis to the tetrad basis.
Popular tetrad bases in general relativity include orthonormal tetrads and null tetrads. Null tetrads are composed of four null vectors, so are used frequently in problems dealing with radiation, and are the basis of the Newman–Penrose formalism and the GHP formalism.
Relation to standard formalism
The standard formalism of differential geometry (and general relativity) consists simply of using the coordinate tetrad in the tetrad formalism. The coordinate tetrad is the canonical set of vectors associated with the coordinate chart. The coordinate tetrad is commonly denoted $\{\partial_\mu\}$ whereas the dual cotetrad is denoted $\{dx^\mu\}$. These tangent vectors are usually defined as directional derivative operators: given a chart $\varphi = (x^1, \dots, x^n)$ which maps a subset of the manifold into coordinate space $\mathbb{R}^n$, and any scalar field $f$, the coordinate vectors are such that:

$$\partial_\mu[f] \equiv \frac{\partial (f \circ \varphi^{-1})}{\partial x^\mu}$$
The definition of the cotetrad uses the usual abuse of notation to define covectors (1-forms) $dx^\mu$ on $M$. The involvement of the coordinate tetrad is not usually made explicit in the standard formalism. In the tetrad formalism, instead of writing tensor equations out fully (including tetrad elements and tensor products as above) only components of the tensors are mentioned. For example, the metric is written as "$g_{\mu\nu}$". When the tetrad is unspecified this becomes a matter of specifying the type of the tensor, called abstract index notation. It allows one to easily specify contraction between tensors by repeating indices as in the Einstein summation convention.
Changing tetrads is a routine operation in the standard formalism, as it is involved in every coordinate transformation (i.e., changing from one coordinate tetrad basis to another). Switching between multiple coordinate charts is necessary because, except in trivial cases, it is not possible for a single coordinate chart to cover the entire manifold. Changing to and between general tetrads is very similar and equally necessary (except for parallelizable manifolds). Any tensor can locally be written in terms of this coordinate tetrad or a general (co)tetrad.
For example, the metric tensor $\mathbf{g}$ can be expressed as:

$$\mathbf{g} = g_{\mu\nu} \, dx^{\mu} \otimes dx^{\nu}$$

(Here we use the Einstein summation convention). Likewise, the metric can be expressed with respect to an arbitrary (co)tetrad as

$$\mathbf{g} = g_{ab} \, e^{a} \otimes e^{b}$$

Here, we use the choice of alphabet (Latin and Greek) for the index variables to distinguish the applicable basis.

We can translate from a general co-tetrad to the coordinate co-tetrad by expanding the covector $e^{a} = e^{a}{}_{\mu} \, dx^{\mu}$. We then get

$$\mathbf{g} = g_{ab} \, e^{a}{}_{\mu} e^{b}{}_{\nu} \, dx^{\mu} \otimes dx^{\nu}$$

from which it follows that $g_{\mu\nu} = g_{ab} \, e^{a}{}_{\mu} e^{b}{}_{\nu}$. Likewise expanding $dx^{\mu} = e_{a}{}^{\mu} \, e^{a}$ with respect to the general tetrad, we get

$$\mathbf{g} = g_{\mu\nu} \, e_{a}{}^{\mu} e_{b}{}^{\nu} \, e^{a} \otimes e^{b}$$

which shows that $g_{ab} = g_{\mu\nu} \, e_{a}{}^{\mu} e_{b}{}^{\nu}$.
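As a numeric sanity check of these relations, the sketch below recovers the coordinate metric components via $g_{\mu\nu} = e^{a}{}_{\mu} \, \eta_{ab} \, e^{b}{}_{\nu}$ for an orthonormal co-tetrad, here the standard static co-tetrad of the Schwarzschild metric (parameter values are illustrative):

```python
import numpy as np

M, r, theta = 1.0, 4.0, np.pi / 2
f = 1.0 - 2.0 * M / r

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat metric in the tetrad basis
e = np.diag([np.sqrt(f),               # e^0_t
             1.0 / np.sqrt(f),         # e^1_r
             r,                        # e^2_theta
             r * np.sin(theta)])       # e^3_phi

g = e.T @ eta @ e                      # g_{mu nu} = e^a_mu eta_ab e^b_nu
print(np.diag(g))  # [-(1-2M/r), 1/(1-2M/r), r^2, r^2 sin^2(theta)]
```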
Manipulation of indices
The manipulation with tetrad coefficients shows that abstract index formulas can, in principle, be obtained from tensor formulas with respect to a coordinate tetrad by "replacing Greek by Latin indices". However care must be taken that a coordinate tetrad formula defines a genuine tensor when differentiation is involved. Since the coordinate vector fields have vanishing Lie bracket (i.e. commute: $[\partial_\mu, \partial_\nu] = 0$), naive substitutions of formulas that correctly compute tensor coefficients with respect to a coordinate tetrad may not correctly define a tensor with respect to a general tetrad, because the Lie bracket is generally non-vanishing: $[e_a, e_b] \neq 0$. Thus, it is sometimes said that tetrad coordinates provide a non-holonomic basis.
For example, the Riemann curvature tensor is defined for general vector fields $X$, $Y$ by

$$R(X, Y) = \left(\nabla_{X}\nabla_{Y} - \nabla_{Y}\nabla_{X} - \nabla_{[X, Y]}\right).$$

In a coordinate tetrad this gives tensor coefficients

$$R^{\mu}{}_{\nu\sigma\tau} = \partial_{\sigma}\Gamma^{\mu}{}_{\tau\nu} - \partial_{\tau}\Gamma^{\mu}{}_{\sigma\nu} + \Gamma^{\mu}{}_{\sigma\lambda}\Gamma^{\lambda}{}_{\tau\nu} - \Gamma^{\mu}{}_{\tau\lambda}\Gamma^{\lambda}{}_{\sigma\nu}.$$

The naive "Greek to Latin" substitution of the latter expression

$$R^{a}{}_{bcd} = \partial_{c}\Gamma^{a}{}_{db} - \partial_{d}\Gamma^{a}{}_{cb} + \Gamma^{a}{}_{ce}\Gamma^{e}{}_{db} - \Gamma^{a}{}_{de}\Gamma^{e}{}_{cb} \qquad \text{(wrong!)}$$

is incorrect because for fixed c and d, $\left(\partial_{c}\Gamma^{a}{}_{db} - \partial_{d}\Gamma^{a}{}_{cb}\right)$ is, in general, a first order differential operator rather than a zeroth order operator which defines a tensor coefficient. Substituting a general tetrad basis in the abstract formula we find the proper definition of the curvature in abstract index notation, however:

$$R^{a}{}_{bcd} = e_{c}\left(\Gamma^{a}{}_{db}\right) - e_{d}\left(\Gamma^{a}{}_{cb}\right) + \Gamma^{a}{}_{ce}\Gamma^{e}{}_{db} - \Gamma^{a}{}_{de}\Gamma^{e}{}_{cb} - f_{cd}{}^{e}\,\Gamma^{a}{}_{eb}$$

where $\nabla_{e_{d}} e_{b} = \Gamma^{a}{}_{db}\, e_{a}$ and $[e_{c}, e_{d}] = f_{cd}{}^{e}\, e_{e}$. Note that the expression on the right-hand side is indeed a zeroth order operator, hence (the (c d)-component of) a tensor. Since it agrees with the coordinate expression for the curvature when specialised to a coordinate tetrad it is clear, even without using the abstract definition of the curvature, that it defines the same tensor as the coordinate basis expression.
Example: Lie groups
Given a vector (or covector) in the tangent (or cotangent) manifold, the exponential map describes the corresponding geodesic of that tangent vector. Writing $c = e^{X}$, the parallel transport of a differential corresponds to

$$e^{-X} \, de^{X} = dX - \frac{1}{2!}\left[X, dX\right] + \frac{1}{3!}\left[X, \left[X, dX\right]\right] - \cdots$$

The above can be readily verified simply by taking $X$ to be a matrix.
For the special case of a Lie algebra, the $X$ can be taken to be an element of the algebra, the exponential is the exponential map of a Lie group, and group elements correspond to the geodesics of the tangent vector. Choosing a basis $e_i$ for the Lie algebra and writing $X = x^{i} e_{i}$ for some functions $x^{i}$, the commutators can be explicitly written out. One readily computes that

$$e^{-X} de^{X} = e_{i}\, dx^{i} - \frac{1}{2!}\, x^{i}\, dx^{j}\, f_{ij}{}^{k}\, e_{k} + \frac{1}{3!}\, x^{i} x^{j} dx^{k} f_{jk}{}^{l} f_{il}{}^{m}\, e_{m} - \cdots$$

for $f_{ij}{}^{k}$ the structure constants of the Lie algebra. The series can be written more compactly as

$$e^{-X} de^{X} = e_{i}\, W^{i}{}_{j}\, dx^{j}$$

with the infinite series

$$W = \sum_{n=0}^{\infty} \frac{(-M)^{n}}{(n+1)!} = \left(I - e^{-M}\right) M^{-1}.$$

Here, $M$ is a matrix whose matrix elements are $M^{i}{}_{j} = x^{k} f_{kj}{}^{i}$. The matrix $W$ is then the vielbein; it expresses the differential $dx^{j}$ in terms of the "flat coordinates" (orthonormal, at that) $e_{i}$.
Given some map $N \to G$ from some manifold $N$ to some Lie group $G$, the metric tensor on the manifold $N$ becomes the pullback of the metric tensor $B_{mn}$ on the Lie group $G$:

$$g_{\mu\nu} = W^{m}{}_{\mu}\, B_{mn}\, W^{n}{}_{\nu}$$

The metric tensor $B_{mn}$ on the Lie group is the Cartan metric, aka the Killing form. Note that, as a matrix, the second W is the transpose. For a (pseudo-)Riemannian manifold, the metric is a (pseudo-)Riemannian metric. The above generalizes to the case of symmetric spaces. These vielbeins are used to perform calculations in sigma models, of which the supergravity theories are a special case.
See also
Frame bundle
Orthonormal frame bundle
Principal bundle
Spin bundle
Connection (mathematics)
G-structure
Spin manifold
Spin structure
Dirac equation in curved spacetime
Notes
Citations
References
External links
General Relativity with Tetrads
Differential geometry
Theory of relativity
Mathematical notation | Tetrad formalism | [
"Physics",
"Mathematics"
] | 2,204 | [
"nan",
"Theory of relativity"
] |
11,141,258 | https://en.wikipedia.org/wiki/Saddle%20soap | Saddle soap is a compound used for cleaning, conditioning, and protecting leather. It typically contains mild soap, softening ingredients such as lanolin, and preservatives such as beeswax. It is commonly used on leather footwear, saddles, and other items of horse tack, hence its name.
See also
Dubbin
Neatsfoot oil
Mink oil
Shoe polish
References
Cleaning products
Saddles
Leather | Saddle soap | [
"Physics",
"Chemistry"
] | 84 | [
"Products of chemical industry",
"Materials stubs",
"Cleaning products",
"Materials",
"Matter"
] |
11,141,813 | https://en.wikipedia.org/wiki/Peter%20L.%20Hagelstein | Peter L. Hagelstein is an associate professor of electrical engineering at the Massachusetts Institute of Technology (MIT), affiliated with the Research Laboratory of Electronics (RLE).
Hagelstein received a B.S. and M.S. in 1976 and Ph.D. in electrical engineering in 1981, from MIT.
Hagelstein began his career at the Lawrence Livermore National Laboratory, working on high-energy laser and plasma physics from 1981 to 1985. While working in the Lawrence Livermore National Laboratory, he pioneered the work that later produced the first X-ray laser, which would later become important for the US Strategic Defense Initiative, popularly referred to as the "Star Wars" program. His work on X-ray lasers was honored with the Ernest Orlando Lawrence Award in 1984. Following this time, he took up an academic appointment at MIT in 1986.
In 1989, he started investigating cold fusion (also called low-energy nuclear reactions) with the hope of making a breakthrough similar to the X-ray laser. In the period between 1989 and 2004, the field became discredited in the eyes of many scientists. Hagelstein continued his research activity in the field, chairing the Tenth International Conference on Cold Fusion in 2003. On November 14, 2017, he gave a 90-minute presentation reviewing relevant experiments and describing possible mechanisms.
Following the cold fusion episode, his primary research has shifted to solid-state physics, including the development of new thermoelectric materials. In addition, he is active in education, writing a textbook on quantum and statistical mechanics.
References
Bibliography
Hagelstein's profile at MIT
Living people
MIT School of Engineering faculty
Cold fusion
American electrical engineers
Year of birth missing (living people)
MIT School of Engineering alumni | Peter L. Hagelstein | [
"Physics",
"Chemistry"
] | 353 | [
"Nuclear fusion",
"Cold fusion",
"Nuclear physics"
] |
11,142,338 | https://en.wikipedia.org/wiki/Engineering%20Leadership%20Award | The Engineering Leadership Award Scheme was created by the Royal Academy of Engineering (RAEng) and comprises two types of national award for engineering undergraduates: the Standard Award and the Advanced Award.
Up to forty Engineering Leadership Advanced Awards and a few hundred Standard Awards are distributed each year. Holders of the advanced award must be engineering undergraduates studying for an MEng at British universities with outstanding academic ability and marked leadership potential. Standard award holders must be engineering students who are UK nationals, or who have participated in the following pre-university schemes: Young Engineers, Headstart, the Engineering Education Scheme, The Smallpeice Trust, and the Year in Industry.
The scheme is predominantly sponsored by the Department for Business, Innovation & Skills and engineering firms, and aims to guide and accelerate the personal development of Britain's future engineering leaders.
History
The award scheme was started in 1996. Each year's advanced award holders are collectively known as a “Cohort”. Cohort 18 was announced in April 2013.
Selection
The selection procedure for the advanced award consists of two stages:
1) Complete a written application, which must be supported by an academic and an industrial reference. This must be received by the RAEng by early December.
2) If the application is successful, the applicant will be asked to attend a two-day selection event in March. During the selection event applicants are interviewed by a fellow of the RAEng and a Sainsbury Management Fellow.
The award winners are usually announced a fortnight after the selection event.
Benefits
Advanced award holders receive the following benefits:
Up to £5,000 over three years for activities that develop their leadership potential
Seminars and workshops which provide management training and career development advice.
Access to a mentor panel of Sainsbury Management Fellows.
Standard award holders have the opportunity to take part in a number of development activities and courses free of charge.
See also
List of engineering awards
External links
Royal Academy of Engineering website
Engineering Leadership Award website
Royal Academy of Engineering's BEST Programme website
Awards established in 1996
British science and technology awards
Engineering awards
Engineering education in the United Kingdom
Royal Academy of Engineering | Engineering Leadership Award | [
"Technology",
"Engineering"
] | 423 | [
"Science and technology awards",
"Royal Academy of Engineering",
"National academies of engineering",
"Engineering awards"
] |
11,143,173 | https://en.wikipedia.org/wiki/Encyclopedia%20of%20Life | The Encyclopedia of Life (EOL) is a free, online encyclopedia intended to document all of the 1.9 million living species known to science. It aggregates content to form "pages" for every known species. Content is compiled from existing trusted databases which are curated by experts and it calls on the assistance of non-experts throughout the world. It includes video, sound, images, graphics, information on characteristics, as well as text. In addition, the Encyclopedia incorporates species-related content from the Biodiversity Heritage Library, which digitizes millions of pages of printed literature from the world's major natural history libraries. The BHL digital content is indexed with the names of organisms using taxonomic indexing software developed by the Global Names project. The EOL project was initially backed by a US$50 million funding commitment, led by the MacArthur Foundation and the Sloan Foundation, who provided US$20 million and US$5 million, respectively. The additional US$25 million came from five cornerstone institutions—the Field Museum, Harvard University, the Marine Biological Laboratory, the Missouri Botanical Garden, and the Smithsonian Institution. The project was initially led by Jim Edwards and the development team by David Patterson. Today, participating institutions and individual donors continue to support EOL through financial contributions.
Overview
EOL went live on 26 February 2008 with 30,000 entries. The site immediately proved to be extremely popular, and temporarily had to revert to demonstration pages for two days when over 11 million views of it were requested.
The site relaunched on 5 September 2011 with a redesigned interface and tools. The new version – referred to as EOLv2 – was developed in response to requests from the general public, citizen scientists, educators and professional biologists for a site that was more engaging, accessible and personal. EOLv2 was redesigned to enhance usability and encourage contributions and interactions among users. It is also internationalized, with interfaces provided for English, German, Spanish, French, Galician, Serbian, Macedonian, Arabic, Chinese, Korean and Ukrainian language speakers. On 16 January 2014, EOL launched TraitBank, a searchable, open digital repository for organism traits, measurements, interactions and other facts for all taxa.
The initiative's executive committee includes senior officers from the Atlas of Living Australia, the Biodiversity Heritage Library consortium, the Chinese Academy of Sciences, CONABIO, Field Museum, Harvard University, the Bibliotheca Alexandrina (Library of Alexandria), MacArthur Foundation, Marine Biological Laboratory, Missouri Botanical Garden, Sloan Foundation, and the Smithsonian Institution.
Intention
Information about many species is already available from a variety of sources, in particular about the megafauna. Gathering currently available data on all 1.9 million species will take about 10 years. EOL had information on more than 700,000 species available, along with more than 600,000 photos and millions of pages of scanned literature. The initiative relies on indexing information compiled by other efforts, including the Species 2000 and ITIS Catalogue of Life, FishBase and the Assembling Tree of Life project of NSF, AmphibiaWeb, Mushroom explorer, micro*scope, etc. The initial focus has been on living species but will later include extinct species. As the discovery of new species is expected to continue (currently at about 20,000 per year), the encyclopedia will continue to grow. As taxonomy finds new ways to include species discovered by molecular techniques, the rate of new additions will increase, particularly with respect to the microbial work of (eu)bacteria, archaebacteria and viruses. EOL's goal is to serve as a resource for the general public, enthusiastic amateurs, educators, students and professional scientists from around the world.
Resources and collaborations
The Encyclopedia of Life is an aggregative environment, that collects data from other on-line data sources. It provides full provenance for information through citations from its trusted databases. Professional researchers publishing academic research should cite directly to the underlying data. Users may not currently edit EOL's entries directly but may register for the site to join specialist expert communities to discuss relevant information, questions, possible corrections, sources, and potential updates, contribute images and sound, or volunteer for technical support services. Its interface is translated at translatewiki.net.
EOL was made distinctive by its incorporation of 'taxonomic intelligence', a growing array of algorithms that sought to emulate the practices of taxonomists. These tools included names resolution so that data entered into different databases using different names for organisms could be combined. Components of hierarchical classification systems could be used to drill down or to expand data searches. Common components of different classification schemes were used to allow users to navigate using multiple classifications and to meander among schemes. This initiative overcame a major problem of many biological databases, that of having rigid and singular classification structures that were unable to reflect the diversity of views, or evolving concepts of how names of species and other taxa should be interpreted. The names management systems continue to be developed by the Global Names project.
See also
All Species Foundation
Biodiversity Heritage Library
List of online encyclopedias
Encyclopedia of Earth
Wikispecies
References
External links
Encyclopedia of Life at the National Museum of Natural History
21st-century encyclopedias
American online encyclopedias
Arabic-language encyclopedias
Biodiversity databases
Biological literature
Biology websites
Chinese encyclopedias
Encyclopedias of science
English-language encyclopedias
Online encyclopedias
Online taxonomy databases
French-language encyclopedias
German-language encyclopedias
Internet properties established in 2008
Korean-language encyclopedias
Missouri Botanical Garden
Multilingual websites
Phylogenetics
Serbian-language encyclopedias
Spanish encyclopedias
Taxonomy (biology)
Zoological nomenclature | Encyclopedia of Life | [
"Biology",
"Environmental_science"
] | 1,134 | [
"Zoological nomenclature",
"Biological nomenclature",
"Taxonomy (biology)",
"Bioinformatics",
"Biodiversity",
"Environmental science databases",
"Phylogenetics",
"Biodiversity databases"
] |
11,143,722 | https://en.wikipedia.org/wiki/Mef2 | In the field of molecular biology, myocyte enhancer factor-2 (Mef2) proteins are a family of transcription factors which through control of gene expression are important regulators of cellular differentiation and consequently play a critical role in embryonic development. In adult organisms, Mef2 proteins mediate the stress response in some tissues. Mef2 proteins contain both MADS-box and Mef2 DNA-binding domains.
Discovery
Mef2 was originally identified as a transcription factor complex through promoter analysis of the muscle creatine kinase (mck) gene to identify nuclear factors interacting with the mck enhancer region during muscle differentiation. Three human mRNA coding sequences designated RSRF (Related to Serum Response Factor) were cloned and shown to dimerize, bind a consensus sequence similar to the one present in the MCK enhancer region, and drive transcription. RSRFs were subsequently demonstrated to encode human genes now named Mef2A, Mef2B and Mef2D.
Species distribution
The Mef2 gene is widely expressed in all branches of eukaryotes from yeast to humans. While Drosophila has a single Mef2 gene, vertebrates have at least four versions of the Mef2 gene (human versions are denoted as MEF2A, MEF2B, MEF2C, and MEF2D), all expressed in distinct but overlapping patterns during embryogenesis through adulthood.
Sequence and structure
All of the mammalian Mef2 genes share approximately 50% overall amino acid identity and about 95% similarity throughout the highly conserved N-terminal MADS-box and Mef2 domains; however, their sequences diverge in their C-terminal transactivation domain.
The MADS-box serves as the minimal DNA-binding domain, however an adjacent 29-amino acid extension called the Mef2 domain is required for high affinity DNA-binding and dimerization. Through an interaction with the MADS-box, Mef2 transcription factors have the ability to homo- and heterodimerize, and a classic nuclear localization sequence (NLS) in the C-terminus of Mef2A, -C, and – D ensures nuclear localization of the protein. D-Mef2 and human MEF2B lack this conserved NLS but are still found in the nucleus.
Function
Development
In Drosophila, Mef2 regulates muscle development. Mammalian Mef2 can cooperate with bHLH transcription factors to turn non-muscle cells in culture into muscle. bHLH factors can activate Mef2c expression, which then acts to maintain its own expression.
Loss of Mef2c in neural crest cells results in craniofacial defects in the developing embryo and neonatal death caused by blocking of the upper airway passages. Mef2c upregulates the expression of the homeodomain transcription factors DLX5 and DLX6, two transcription factors that are necessary for craniofacial development.
Stress response
In adult tissues, Mef2 proteins regulate the stress-response during cardiac hypertrophy and tissue remodeling in cardiac and skeletal muscle.
Cardiovascular system
Mef2 is a critical regulator in heart development and cardiac gene expression. In vertebrates, there are four genes in the Mef2 transcription factor family: Mef2a, Mef2b, Mef2c, and Mef2d. Each is expressed at specific times during development. Mef2c, the first gene to be expressed in the heart, is necessary for the development of the anterior (secondary) heart field (AHF), which helps to form components of the cardiac outflow tract and most of the right ventricle. In addition, Mef2 genes are implicated in activating gene expression to aid in sprouting angiogenesis, the formation of new blood vessels from existing vessels.
Knockout studies
In mice, knockout studies of Mef2c have demonstrated the crucial role that it plays in heart development. Mice lacking Mef2c die at embryonic day 9.5–10 with major heart defects, including improper looping, outflow tract abnormalities, and complete lack of the right ventricle. This indicates improper differentiation of the anterior heart field. When Mef2c is knocked out specifically in the AHF, the mice die at birth with a range of outflow tract defects and severe cyanosis. Thus, Mef2 is necessary for many aspects of heart development, specifically by regulating the anterior heart field.
Additional Information
MEF2, Myocyte Enhancer Factor 2, is a transcription factor family with four members: MEF2A, MEF2B, MEF2C, and MEF2D. Each MEF2 gene is located on a specific chromosome. MEF2 is known to be involved in the development and the looping of the heart (Chen). MEF2 is necessary for myocyte differentiation and gene activation (Black). Both roles contribute to heart structure, and disruption of MEF2 activity in embryonic development can lead to two phenotypic problems (Karamboulas). The type-I phenotype can cause severe malformations of the heart, while the type-II phenotype, although it looks normal, has a thin-walled myocardium that can cause cardiac insufficiency.
Another problem can arise from knockout of the MEF2C gene. MEF2C is known to be directly related to congenital heart disease when associated with Tdgf1 (teratocarcinoma-derived growth factor 1). If MEF2C improperly regulates Tdgf1, developmental defects arise, especially within the embryonic development of the heart (Chen). MEF2C interacts with the protein Tdgf1 through the Ca2+ signaling pathway, which is required to regulate different mechanisms. MicroRNAs, small non-coding RNAs, also play a specific role in regulating MEF2C. The expression of congenital heart disease is upregulated due to the downregulation of the microRNA miR-29c (Chen). A few other known diseases associated with the MEF2 family are liver fibrosis, cancers, and neurodegenerative diseases (Chen).
References
Black, Brian L., and Richard M. Cripps. “Myocyte Enhancer Factor 2 Transcription Factors in Heart Development and Disease.” Heart Development and Regeneration, 2010, pp. 673–699., doi:10.1016/b978-0-12-381332-9.00030-x.
Chen, Xiao, et al. “MEF2 Signaling and Human Diseases.” Oncotarget, vol. 8, no. 67, 2017, pp. 112152–112165., doi:10.18632/oncotarget.22899.
Karamboulas, C., et al. “Disruption of MEF2 Activity in Cardiomyoblasts Inhibits Cardiomyogenesis.” Journal of Cell Science, vol. 120, no. 1, 2006, pp. 4315–4318., doi:10.1242/jcs.03369.
External links
OrthoDB Orthology in all Eukaryotes
Drosophila melanogaster genes
Transcription factors | Mef2 | [
"Chemistry",
"Biology"
] | 1,535 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,143,727 | https://en.wikipedia.org/wiki/Tower%20Mounted%20Amplifier | A Tower Mounted Amplifier (TMA), or Mast Head Amplifier (MHA), is a low-noise amplifier (LNA) mounted as close as practical to the antenna in mobile masts or base transceiver stations. A TMA reduces the base transceiver station noise figure (NF) and therefore improves its overall sensitivity; in other words the mobile mast is able to receive weaker signals.
The power to feed the amplifier (in the top of the mast) is usually a DC component on the same coaxial cable that feeds the antenna, otherwise an extra power cable has to be run to the TMA/MHA to supply it with power.
Benefits in mobile communications
In two-way communications systems, there are occasions when one way, one link, is weaker than the other; these are normally referred to as unbalanced links. This can be fixed by making the transmitter on that link stronger or the receiver more sensitive to weaker signals.
TMAs are used in mobile networks to improve the sensitivity of the uplink in mobile phone masts, since the transmitter in a mobile phone cannot easily be modified to transmit stronger signals. Improving the uplink translates into a combination of better coverage and the mobile transmitting at less power, which in turn implies a lower drain on its battery, and thus a longer battery charge.
There are occasions when the cable between the antenna and the receiver is so lossy (too thin or too long) that the signal weakens considerably before reaching the receiver; therefore it may be decided to install TMAs from the start to make the system viable. In other words, the TMA can only partially correct, or palliate, the link imbalance.
Drawbacks/pitfalls
If the received signal is not weak, installing a TMA will not deliver its intended benefit.
If the received signal is strong enough, it may cause the TMA to create its own interference which is passed on to the receiver.
In some mobile networks (e.g. IS-95 or WCDMA, aka European 3G), it is not simple to detect and correct unbalanced links, since the link balance is not constant; it changes with traffic load. However, other mobile networks (e.g. GSM) have a constant link balance, so it is possible to analyse call records and establish where TMAs are needed.
There might be practical room restrictions, visual, or structural weight restrictions to install a TMA at the top of a phone mast.
If the TMA fails, it may render the system unusable until serviced, unless it can be bypassed.
Servicing TMAs is harder than servicing receivers - and thus more expensive - as the TMA may be dangerously close to the antenna (in its near field) and high up a tower. The receiver may alternatively be housed in a cabinet or hut at the base of the tower.
Mathematical principles
In a receiver, the receiving path starts with the signal originating at the antenna. Then the signal is amplified in further stages within the receiver. It is actually not amplified all at once but in stages, with some stages producing other changes (like changing the signal's frequency).
The principle can be demonstrated mathematically; the receiver's noise figure is calculated by modularly assessing each amplifier stage. Each stage consists of a noise figure (F) and an amount of amplification, or gain (G). So amplifier number 1 will be right after the antenna and described by F_1 and G_1. The relationship of the stages is known as the Friis formula:

F_\text{total} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \cdots + \frac{F_N - 1}{G_1 G_2 \cdots G_{N-1}}
Note that:
The first amplifier will set the noise temperature (T_1); nothing reduces its contribution to the total.
The second amplifier's temperature (T_2) will also influence the total, but it is reduced (divided) by the gain of the first amplifier, G_1.
The third amplifier's temperature (T_3) influences the total even less, as it is reduced by the preceding amplifier gains G_1 and G_2.
And so on until N stages.
Applying the Friis formula to TMAs
Typical receiver without TMA
Start with a typical receiver: Antenna - Connecting Cable (stage 1) - Receiver (stage 2).
The first stage after the antenna is actually the connecting cable. Therefore:
Stage 1: F_1 is equal to the loss of the cable and will increase with ambient temperature.
Stage 2: the receiver's contribution, (F_2 − 1)/G_1, will depend on the lossiness of the cable. Since the cable element is lossy, G_1 is less than one; in other words, it will increase F_total. The more loss, the closer G_1 is to zero and the more F_total will increase.
What can be done to improve the receiver to pick up very weak signals? It must have a lower noise figure; that is when the TMA comes into use.
Typical receiver with TMA
It is a chain of 4 modules: antenna - short connecting cable (stage 1) - TMA (stage 2) - longer connecting cable (stage 3) - receiver (stage 4)
Stage 1: By using the shortest, least lossy connecting cable between the antenna and the TMA, F_1 is kept low and G_1 is nearly one.
Stage 2: The TMA, of noise figure F_2 and gain G_2.
Stage 3: Then comes the next cable (F_3 and G_3), but this time its noise addition (F_3 − 1) is reduced by G_1·G_2.
Stage 4: Then comes the receiver, whose noise figure F_4 is less downgraded by the cables, as its contribution is divided by G_1·G_2·G_3, where G_2 is from the TMA and G_3 from the second cable. So G_2 will counteract the effects of G_3.
Updating the Friis formula with this case, the noise figure is now:

F_\text{total} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \frac{F_4 - 1}{G_1 G_2 G_3}
In this way, the cable losses are now negligible and do not significantly affect the system noise figure.
This number is normally expressed in decibels (dB), thus:

NF_\text{dB} = 10 \log_{10} F_\text{total}
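As an illustrative sketch (not part of the original text), the effect of a TMA on the cascaded noise figure can be computed numerically from the Friis formula; all component values below are hypothetical:

import math

def friis(stages):
    """Cascaded noise factor for a list of (F, G) stages (linear units, not dB)."""
    total, gain = 0.0, 1.0
    for i, (f, g) in enumerate(stages):
        total += f if i == 0 else (f - 1.0) / gain
        gain *= g
    return total

def to_db(x):
    return 10.0 * math.log10(x)

def from_db(x_db):
    return 10.0 ** (x_db / 10.0)

# Hypothetical values: a 3 dB feeder cable, a receiver with a 2 dB noise
# figure, a TMA with a 1 dB noise figure and 12 dB gain, and a 0.2 dB
# jumper cable between the antenna and the TMA.
cable    = (from_db(3.0), from_db(-3.0))   # passive loss L: F = L, G = 1/L
receiver = (from_db(2.0), from_db(20.0))
jumper   = (from_db(0.2), from_db(-0.2))
tma      = (from_db(1.0), from_db(12.0))

print(to_db(friis([cable, receiver])))               # ~5.0 dB without TMA
print(to_db(friis([jumper, tma, cable, receiver])))  # ~1.6 dB with TMA

As the sketch shows, the TMA gain in front of the lossy feeder cable nearly removes the cable's contribution to the system noise figure.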
See also
Low-noise block converter
References
External links
TMA test equipment
Electronic amplifiers
Mobile technology | Tower Mounted Amplifier | [
"Technology"
] | 1,161 | [
"Electronic amplifiers",
"Amplifiers",
"nan"
] |
11,144,232 | https://en.wikipedia.org/wiki/Gulf%20Publishing%20Company | Gulf Publishing Company is an international publishing and events business dedicated to the hydrocarbon energy sector. In mid-2018 it rebranded as Gulf Energy Information. Founded in 1916 by Ray Lofton Dudley, Gulf Energy Information produces and distributes publications in print and web formats, online news, webcasts and databases; hosts conferences and events designed for the energy industry. The company was a subsidiary of Euromoney Institutional Investor from 2001 until a 2016 management buyout by CEO John Royall and Texas investors. The business and strategy publication Petroleum Economist also transferred to the company in May 2016. In mid-2017 the company acquired 109-year old Oildom Publishing.
The company's flagship magazines, World Oil, Hydrocarbon Processing, Pipeline & Gas Journal, and the Petroleum Economist are published monthly. Gulf is headquartered in Houston, Texas, with sales staff and columnists around the world, due to expansion efforts by William G. Dudley, Sr. The Petroleum Economist publishing and map cartography staff are based in London, UK. Gulf Energy Info's Data Services staff support on-line Energy Web Atlas energy data visualization, and Construction Boxscore downstream project database, from Houston, London and Mumbai.
Since 1916 World Oil has covered the upstream oil and gas industry for conventional, shale, offshore, exploration and production technology in oil and gas.
Since 1922, Hydrocarbon Processing has provided job-relevant information to technical staff, operations, maintenance and management in petroleum refining, gas processing facilities, petrochemical and engineer/constructor companies throughout the world. Bi-monthly supplement Gas Processing & LNG was added in 2012.
Since 1934, the Petroleum Economist has written about oil, its politics and economics - explained some of the industry's biggest disruptions: such as the 1973 oil crisis, the Gulf Wars, the rise of China, the Arab uprisings, and the more recent supply-side shocks from North America's unconventional energy sector.
Since 1859, Pipeline & Gas Journal has been the essential resource for technology and trends in the midstream industry; written and edited to be of service to those involved in moving, marketing and managing hydrocarbons from wellheads to ultimate consumers.
The company formerly published trade books, but spun off the division as TaylorWilson (now part of Taylor Trade) in 2000; sold its professional book list to Elsevier in 2013.
References
External links
Magazine publishing companies of the United States
Publishing companies established in 1916
Companies based in Houston
Petroleum industry | Gulf Publishing Company | [
"Chemistry"
] | 497 | [
"Chemical process engineering",
"Petroleum",
"Petroleum industry"
] |
11,144,386 | https://en.wikipedia.org/wiki/Uzel%20%28computer%29 | Uzel was the Soviet Union's first digital computer used on submarines, to assist in tracking multiple targets and calculate torpedo solutions. Uzel's design team was headed by two American defectors to the Soviet Union, Alfred Sarant (a.k.a. Philip Staros) and Joel Barr (a.k.a. Joseph Berg). An upgraded version of the Uzel computer is still in use on the Kilo class submarine today.
References
History of computing
Soviet computer systems
Soviet Union–United States relations
Military computers | Uzel (computer) | [
"Technology"
] | 109 | [
"Computer systems",
"Soviet computer systems",
"Computing stubs",
"Computers",
"History of computing"
] |
11,144,474 | https://en.wikipedia.org/wiki/Light-gated%20ion%20channel | Light-gated ion channels are a family of ion channels regulated by electromagnetic radiation. Other gating mechanisms for ion channels include voltage-gated ion channels, ligand-gated ion channels, mechanosensitive ion channels, and temperature-gated ion channels. Most light-gated ion channels have been synthesized in the laboratory for study, although two naturally occurring examples, channelrhodopsin and anion-conducting channelrhodopsin, are currently known. Photoreceptor proteins, which act in a similar manner to light-gated ion channels, are generally classified instead as G protein-coupled receptors.
Mechanism
Light-gated ion channels function in a similar manner to other gated ion channels. Such transmembrane proteins form pores through lipid bilayers to facilitate the passage of ions. These ions move from one side of the membrane to another under the influence of an electrochemical gradient. When exposed to a stimulus, a conformational change occurs in the transmembrane region of the protein to open or close the ion channel. In the specific case of light-gated ion channels, the transmembrane proteins are usually coupled with a smaller molecule that acts as a photoswitch, whereby photons bind to the switching molecule, to then alter the conformation of the proteins, so that the pore changes from a closed state to an open state, or vice versa, thereby increasing or decreasing ion conductance. Retinal is a good example of a molecular photoswitch and is found in the naturally occurring channelrhodopsins.
Synthetic isoforms
Once channelrhodopsin had been identified and characterized, the channel's ion selectivity was modified in order to control membrane potential optogenetically. Directed mutations of the channel changed the charges lining the pore, resulting in a pore which instead excluded cations in favor of anions.
Other types of gated ion channels, ligand-gated and voltage-gated, have been synthesized with a light-gated component in an attempt to better understand their nature and properties. By the addition of a light-gated section, the kinetics and mechanisms of operation can be studied in depth. For example, the addition of a light-gated component allows many highly similar ligands to be introduced to the binding site of a ligand-gated ion channel to assist in the determination of the mechanism.
Such ion channels have been modified by binding a photoswitch to confer photosensitivity on the ion channel. This is done through careful selection of a tether which can lengthen or shorten through photoisomerization. One side of the tether is bound to the ion channel protein and the other end of the tether is bound to a blocking group, which has a high binding affinity for an exposed portion of the pore. When the tether is lengthened, it allows the blocking section to bind to the pore and prevent ionic current. When the tether is shortened, it disrupts this obstruction and opens the pore. Kinetic studies have demonstrated that fine temporal and spatial control can be achieved in this manner.
Azobenzene is a common choice for the functional portion of a tether for synthetically-developed light-gated ion channels because of its well documented length change as either cis or trans isomers, as well as the excitation wavelength needed to induce photoisomerization. Azobenzene converts to its longer trans-isomer at a wavelength of λ=500 nm and to its cis-isomer at λ=380 nm.
In 1980, the first ion channel to be adapted for study with a light-gated mechanism was the nicotinic acetylcholine receptor. This receptor was well-known at the time, and so was aptly suited to adaptation, and allowed for a study of the kinetics as not allowed before.
The expression of light-gated ion channels in a specific cell type through promoter control allows for the regulation of cell potential by either depolarizing the membrane to 0 mV for cation-permeant channelrhodopsin or by holding the voltage at –67 mV for anion-conducting channelrhodopsin. Depolarization can conduct a current in the range of 5 fA per channel and occurs on the timescale of action potentials and neurotransmitter exocytosis. They have an advantage over other types of ion channel regulation in that they provide non-invasive, reversible membrane potential changes with fine temporal and spatial control granted by induction through laser stimuli. They reliably stimulate single action potentials with rapid depolarization and can be utilized in vivo because they do not require high intensity illumination to maintain function, unlike other techniques like light-activated proton pumps and photoactivatable probes.
Examples
Examples of light-gated ion channels occur in both natural and synthetic environments. These include:
Naturally occurring
Channelrhodopsins were the first discovered family of light-gated ion channels.
Channelrhodopsin-1 (from Chlamydomonas reinhardtii and Volvox)
Channelrhodopsin-2
Anion-conducting channelrhodopsin
Synthetically adapted
Nicotinic acetylcholine receptor was the first ion channel to be synthetically adapted with a light-gated mechanism.
Light-activated potassium channels have been engineered from bacterial K+ channels with the goal of inhibiting neuronal activity upon illumination. A second strategy is to combine a cyclic nucleotide gated K+ channel with a photoactivated adenylyl cyclase to inhibit neuronal activity at very low light levels.
Many other fully synthetic, light-gated channels have been produced as well.
References
Ion channels | Light-gated ion channel | [
"Chemistry"
] | 1,190 | [
"Neurochemistry",
"Ion channels"
] |
11,145,013 | https://en.wikipedia.org/wiki/Toxic%20cough%20syrup | Since the 1990s, several mass poisonings from toxic cough syrup have occurred in developing countries. In these cases, an ingredient in cough syrup, glycerine (glycerol), was replaced with diethylene glycol, a cheaper alternative to glycerine for industrial applications. Diethylene glycol is nephrotoxic and can result in multiple organ dysfunction syndrome (MODS), especially in children.
History
There have been poisonings in Bangladesh, Indonesia, Marshall Islands, Pakistan, Panama, The Gambia, India (twice), Uzbekistan, and Cameroon between 1992 and 2023, due to contaminated cough syrup and other medications that incorporated inexpensive diethylene glycol instead of glycerine.
Bangladesh
Discovering and tracing a toxic syrup to its source has been difficult for health care providers and governmental agencies due to difficult communication between the governments of developed countries and developing countries. For example, Michael L. Bennish, an American pediatrician who works in developing countries, had been volunteering in Bangladesh as a physician and had noticed a number of deaths that seemed to coincide with the distribution of the government-issued cough syrup. The government rebuffed his attempts at investigating the medication. In response, Bennish smuggled bottles of the syrup in his suitcase when returning to the United States, allowing pharmaceutical laboratories in Massachusetts to identify the poisonous diethylene glycol, which can appear very similar to the less dangerous glycerine. Bennish went on to author a 1995 article in the British Medical Journal about his experience, writing that, given the amount of medication prescribed, death tolls "must [already] be in the tens of thousands".
Indonesia
In 2022, the deaths of nearly 100 children in Indonesia were reported to be linked to cough syrup and liquid medication. The syrup contained "unacceptable amounts" of diethylene glycol and ethylene glycol, linked to acute kidney injuries (AKI). In October, health officials reported around 200 cases of AKI in children, most of whom were aged under five. Indonesia temporarily banned the sale and prescription of all syrup and liquid medicines as it was not clear whether these medicines were imported or locally produced.
In November 2023, Afi Farma's chief executive and three other officials, whose cough syrup was linked to the deaths, were sentenced to two-year prison terms and fined 1bn Indonesian rupiah ($63,029; £51,713) each.
Marshall Islands and Micronesia
In April 2023, World Health Organization (WHO) reported that, Guaifenesin TG syrup manufactured by QP Pharmachem Ltd in Punjab, India, had been found to contain "unacceptable amounts of diethylene glycol and ethylene glycol" in tested samples. Sudhir Pathak, managing director of QP Pharmachem, claimed that the batch of 18,346 bottles had been exported to Cambodia after obtaining all necessary regulatory approvals and that he was unaware of how the product had ended up in the Marshall Islands and Micronesia.
Pakistan
In December 2012, toxic cough syrup led to a death toll of between 16 and 30 in Gujranwala, while in November of that year, at least 19 individuals in Lahore suffered the same fate. Following an inquiry, Tyno cough syrup, produced and distributed by Reko Pharma in Lahore, was identified as the cause of the fatalities in Lahore. Many of the victims from the two incidents were drug addicts seeking intoxication. The syrup was later found to contain too much dextromethorphan, a cough suppressant.
Panama
In May 2007, 365 deaths were reported in Panama. The diethylene glycol originated from a Chinese manufacturer, which exported it as industrial "TD glycerin" under a shelf life of one year. The letters "TD" were shorthand for "substitute" in Chinese. When Panama-based Medicom received the product from a Spanish trader, it changed the name to "glycerine" and the expiration date to four years before selling it to the government of Panama. Neither the trading companies involved nor the government lab in Panama that processed the ingredient tested the substance for verification. Chinese authorities said they would no longer allow the name "TD glycerin" to be used. One of the country's officials overseeing food and drug safety was sentenced to death in late May on charges related to the scandal. The Panama government detained several officials as well as employees of Medicom and set up a $6-million fund for the victims.
The Gambia
In October 2022, the WHO announced a link between four paediatric cough syrups from one Indian pharmaceutical company and the deaths of 66 children in The Gambia from kidney failure. The products (Promethazine Oral Solution, Kofexmalin Baby Cough Syrup, Makoff Baby Cough Syrup, and Magrip N Cold Syrup) are believed to be contaminated with diethylene glycol and/or ethylene glycol. The products involved were manufactured by Maiden Pharmaceuticals of India in December 2021.
This has led to Maiden Pharmaceuticals' products being banned in The Gambia, a probe by the CDSCO, and volunteers from health agencies in The Gambia going door to door in an urgent recall.
In December 2022, a parliamentary committee in The Gambia recommended prosecution of the Indian company, Maiden Pharmaceuticals. It also recommended banning all products by the firm in the country.
Indian authorities started conducting an inquiry into an April 2023 allegation that a pharmaceutical regulator in Haryana state, who holds a senior position in the state health department, accepted a bribe and switched samples of contaminated cough syrup before the state government laboratory tested them. The cough syrup in question was produced by Maiden Pharmaceuticals, and it has been implicated in child deaths in Gambia. Tests conducted by two independent laboratories on behalf of the WHO confirmed the presence of lethal toxins—ethylene glycol and diethylene glycol in the syrup. Indian authorities, however, did not find any toxins, but did identify labeling issues with Maiden Pharmaceuticals' cough syrup. Naresh Kumar Goyal, the founder of Maiden Pharmaceuticals, has previously denied any wrongdoing in the production of the syrup.
Uzbekistan
In December 2022, Uzbekistan's health ministry said that 18 children died from renal problems and acute respiratory disease after drinking cough syrup manufactured by Indian drug maker Marion Biotech. The statement did not specify over what time period the deaths occurred. As a result, Marion Biotech was suspended from Pharmexcil, an Indian government-linked trade group, and state security police in Uzbekistan arrested four people.
Sources told Reuters that Marion purchased industrial-grade propylene glycol as an ingredient from Maya Chemtech India, which is not licensed to sell pharmaceutical-grade materials. Maya is not facing charges but the investigation is ongoing. Marion did not test the ingredient it purchased.
The Indian government has mandated that after June 2023, cough syrup manufacturers must have their products tested before exporting them. These companies are required to obtain a certificate of analysis from a government-approved laboratory. A list of approved laboratories, both at the central and state government level, was provided where the samples can be tested.
Cameroon
The Naturcold brand of cough syrup from India was associated with the deaths of multiple children in Cameroon. WHO testing on June 27, 2023, revealed alarming levels of diethylene glycol in Naturcold, reaching as high as 28.6% – over 200 times the acceptable limit, which should not exceed 0.1%. This highly toxic solvent, normally used in air-conditioners and fridges, can lead to severe symptoms, including acute kidney injury and even death if ingested.
The packaging of the deadly medicine falsely claimed that it was produced by a British company called Fraken International (England), but no such company exists in the UK. The actual manufacturer was Riemann Private Ltd, an Indian company based in Indore, and it appeared to be exported to global markets, including Cameroon, by another Indian company, Wellona Pharma, based in Surat, Gujarat. The UK's Medicines and Healthcare products Regulatory Agency monitors counterfeit claims of UK origin made by foreign pharmaceutical companies, as such claims are used to add credibility to otherwise adulterated, unlicensed, or substandard medicines.
Riemann Pvt Ltd is under investigation and faces potential disciplinary action from the Indian drug regulator, the Central Drugs Standard Control Organisation. Despite the ongoing investigation, the company continues its operations and drug manufacturing activities.
Worldwide
The World Health Organization (WHO) is addressing the global threat of toxic cough syrups that have caused the deaths of more than 300 children across multiple countries in 2022 and 2023. The WHO is working with six additional countries, bringing the total to 15 countries, to track these dangerous medicines. The WHO team lead said that tainted syrups are an ongoing risk. He cautioned that the presence of contaminated medicines could persist for several years, as warehouses may still contain barrels of adulterated propylene glycol. The manufacturers that exported the syrup to other countries in the current spate of incidents are four Indian manufacturers (Maiden Pharmaceuticals, Marion Biotech, QP Pharmachem, and Synercar), one Chinese manufacturer (Fraken Group) and one Pakistani manufacturer (Pharmix Laboratories). Safety alerts have been issued by government agencies in the affected countries, as well as by countries conducting tests on their behalf and the WHO, while investigations into the matter continue. The WHO has urged countries to enhance surveillance and offer support to countries lacking testing resources.
See also
List of medicine contamination incidents
References
2007 in Panama
2007 health disasters
Medical scandals
Health in Panama
Antitussives
Drug safety
Health disasters in North America
Adulteration
Mass poisoning
2022 in Indonesia
2022 in Uzbekistan
2022 health disasters
2022 in the Gambia | Toxic cough syrup | [
"Chemistry"
] | 2,033 | [
"Adulteration",
"Drug safety"
] |
11,145,095 | https://en.wikipedia.org/wiki/Subcarrier%20multiplexing | Subcarrier Multiplexing (SCM) is a method for combining (multiplexing) many different communications signals so that they can be transmitted along a single optical fiber. SCM (also known as SCMA, SubCarrier Multiple Access) is used in passive optical network (PON) access infrastructures as a variant of wavelength division multiplexing (WDM).
SCM follows a different approach compared to WDM. In WDM an optical carrier is modulated with a baseband signal of typically hundreds of Mbit/s. In an SCMA infrastructure, the baseband data is first modulated onto a GHz-range subcarrier, which is subsequently modulated onto the optical carrier. This way each signal occupies a different portion of the optical spectrum surrounding the centre frequency of the optical carrier. At the receiving side, as normally happens in a commercial radio service, the receiver is tuned to the correct subcarrier frequency, filtering out the other subcarriers.
The operation of multiplexing and demultiplexing the single subcarriers is carried out electronically. The conversion onto the optical carrier is done at the multiplexer side. This gives an advantage over pure WDM access, due to the lower cost of the electrical components compared with an optical multiplexer.
SCM has the disadvantage of being limited in maximum subcarrier frequencies and data rates by the available bandwidth of the electrical and optical components. Therefore, SCM must be used in conjunction with WDM in order to take advantage of most of the available fiber bandwidth, but it can be used effectively for lower-speed, lower-cost multiuser systems.
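As a rough, hypothetical sketch (the subcarrier frequencies, bit rate and modulation below are illustrative assumptions, not values from the text), the composite electrical signal that drives the optical modulator in an SCM link can be formed as a sum of data-modulated RF subcarriers:

import numpy as np

fs = 50e9                       # simulation sample rate, Hz (arbitrary)
t = np.arange(0, 2e-6, 1 / fs)
subcarriers = [2e9, 4e9, 6e9]   # hypothetical subcarrier frequencies, Hz
bit_rate = 100e6                # per-channel baseband bit rate, bit/s

rng = np.random.default_rng(0)
composite = np.zeros_like(t)
for fc in subcarriers:
    bits = rng.integers(0, 2, size=int(t[-1] * bit_rate) + 1)
    nrz = 2.0 * bits[(t * bit_rate).astype(int)] - 1.0   # NRZ +/-1 baseband
    composite += nrz * np.cos(2 * np.pi * fc * t)        # BPSK on subcarrier fc

# 'composite' would then intensity-modulate a single optical carrier; each
# receiver recovers one channel by tuning to its subcarrier frequency and
# filtering out the others.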
References
External links
WDM - Wavelength Division Multiplexing
Subcarrier Multiplexing for High-Speed Optical Transmission
Multiplexing
Digital television
Digital radio
Broadcast engineering
Physical layer protocols | Subcarrier multiplexing | [
"Engineering"
] | 368 | [
"Broadcast engineering",
"Electronic engineering"
] |
11,145,154 | https://en.wikipedia.org/wiki/Photodisintegration | Photodisintegration (also called phototransmutation, or a photonuclear reaction) is a nuclear process in which an atomic nucleus absorbs a high-energy gamma ray, enters an excited state, and immediately decays by emitting a subatomic particle. The incoming gamma ray effectively knocks one or more neutrons, protons, or an alpha particle out of the nucleus. The reactions are called (γ,n), (γ,p), and (γ,α), respectively.
Photodisintegration is endothermic (energy absorbing) for atomic nuclei lighter than iron and sometimes exothermic (energy releasing) for atomic nuclei heavier than iron. Photodisintegration is responsible for the nucleosynthesis of at least some heavy, proton-rich elements via the p-process in supernovae of type Ib, Ic, or II.
In these events, this process contributes to the fusion of iron-group nuclei into heavier elements.
Photodisintegration of deuterium
A photon carrying 2.22 MeV or more energy can photodisintegrate an atom of deuterium:
²H + γ → ¹H + n
James Chadwick and Maurice Goldhaber used this reaction to measure the proton-neutron mass difference. This experiment proves that a neutron is not a bound state of a proton and an electron, as had been proposed by Ernest Rutherford.
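As a quick check of the 2.22 MeV threshold quoted above (a minimal sketch using rounded standard rest energies, not figures from the text):

# Rest energies in MeV (rounded standard values)
m_p = 938.272    # proton
m_n = 939.565    # neutron
m_d = 1875.613   # deuteron

binding_energy = m_p + m_n - m_d    # energy needed to split the deuteron
print(round(binding_energy, 3))     # ~2.224 MeV, the photodisintegration threshold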
Photodisintegration of beryllium
A photon carrying 1.67 MeV or more energy can photodisintegrate an atom of beryllium-9 (100% of natural beryllium, its only stable isotope):
⁹Be + γ → 2 ⁴He + n
Antimony-124 is assembled with beryllium to make laboratory neutron sources and startup neutron sources. Antimony-124 (half-life 60.20 days) emits β− and 1.690 MeV gamma rays (also 0.602 MeV and 9 fainter emissions from 0.645 to 2.090 MeV), yielding stable tellurium-124. Gamma rays from antimony-124 split beryllium-9 into two alpha particles and a neutron with an average kinetic energy of 24 keV (a so-called intermediate neutron in terms of energy):
¹²⁴Sb → ¹²⁴Te + β− + γ
Other isotopes have higher thresholds for photoneutron production, as high as 18.72 MeV for carbon-12.
Hypernovae
In explosions of very large stars (250 or more solar masses), photodisintegration is a major factor in the supernova event. As the star reaches the end of its life, it reaches temperatures and pressures where photodisintegration's energy-absorbing effects temporarily reduce pressure and temperature within the star's core. This causes the core to start to collapse as energy is taken away by photodisintegration, and the collapsing core leads to the formation of a black hole. A portion of mass escapes in the form of relativistic jets, which could have "sprayed" the first metals into the universe.
Photodisintegration in lightning
Terrestrial lightning produces high-speed electrons that create bursts of gamma rays as bremsstrahlung. The energy of these rays is sometimes sufficient to start photonuclear reactions resulting in emitted neutrons. One such reaction, ¹⁴N(γ,n)¹³N, is the only natural process other than those induced by cosmic rays in which ¹³N is produced on Earth. The unstable isotopes remaining from the reaction may subsequently emit positrons by β+ decay.
Photofission
Photofission is a similar but distinct process, in which a nucleus, after absorbing a gamma ray, undergoes nuclear fission (splits into two fragments of nearly equal mass).
See also
Pair-instability supernova
Silicon-burning process
References
Nuclear physics
Nucleosynthesis
Neutron sources | Photodisintegration | [
"Physics",
"Chemistry"
] | 879 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
11,146,329 | https://en.wikipedia.org/wiki/Cosmolabe | The cosmolabe was an ancient astronomical instrument resembling the astrolabe, formerly used for measuring the angles between heavenly bodies. It is also called pantacosm. Jacques Besson also uses this name, or universal instrument, for his invention described in Le cosmolabe (1567), which could be used for astrometry, cartography, navigation, and surveying.
Notes
References
Astronomical instruments
Astrometry
Astrological aspects
History of astrology | Cosmolabe | [
"Astronomy"
] | 93 | [
"History of astronomy",
"Astrometry",
"Astronomy stubs",
"Astronomical instruments",
"History of astrology",
"Astronomical sub-disciplines"
] |
11,146,362 | https://en.wikipedia.org/wiki/Coprecipitation | In chemistry, coprecipitation (CPT) or co-precipitation is the carrying down by a precipitate of substances normally soluble under the conditions employed. Analogously, in medicine, coprecipitation (referred to as immunoprecipitation) is specifically "an assay designed to purify a single antigen from a complex mixture using a specific antibody attached to a beaded support".
Coprecipitation is an important topic in chemical analysis, where it can be undesirable, but can also be usefully exploited. In gravimetric analysis, which consists of precipitating the analyte and measuring its mass to determine its concentration or purity, coprecipitation is a problem because undesired impurities often coprecipitate with the analyte, resulting in excess mass. This problem can often be mitigated by "digestion" (waiting for the precipitate to equilibrate and form larger and purer particles) or by redissolving the sample and precipitating it again.
On the other hand, in the analysis of trace elements, as is often the case in radiochemistry, coprecipitation is often the only way of separating an element. Since the trace element is too dilute (sometimes less than a part per trillion) to precipitate by conventional means, it is typically coprecipitated with a carrier, a substance that has a similar crystalline structure that can incorporate the desired element. An example is the separation of francium from other radioactive elements by coprecipitating it with caesium salts such as caesium perchlorate. Otto Hahn is credited for promoting the use of coprecipitation in radiochemistry.
There are three main mechanisms of coprecipitation: inclusion, occlusion, and adsorption. An inclusion (incorporation in the crystal lattice) occurs when the impurity occupies a lattice site in the crystal structure of the carrier, resulting in a crystallographic defect; this can happen when the ionic radius and charge of the impurity are similar to those of the carrier. An adsorbate is an impurity that is weakly, or strongly, bound (adsorbed) to the surface of the precipitate. An occlusion occurs when an adsorbed impurity gets physically trapped inside the crystal as it grows.
Besides its applications in chemical analysis and in radiochemistry, coprecipitation is also important to many environmental issues related to water resources, including acid mine drainage, radionuclide migration around waste repositories, toxic heavy metal transport at industrial and defense sites, metal concentrations in aquatic systems, and wastewater treatment technology.
Coprecipitation is also used as a method of magnetic nanoparticle synthesis.
Distribution between precipitate and solution
There are two models describing the distribution of the tracer compound between the two phases (the precipitate and the solution):
Doerner-Hoskins law (logarithmic):

\ln\frac{a}{a-x} = \lambda \, \ln\frac{b}{b-y}

Berthelot-Nernst law:

\frac{x}{a-x} = D \, \frac{y}{b-y}
where:
a and b are the initial concentrations of the tracer and carrier, respectively;
a − x and b − y are the concentrations of tracer and carrier after separation;
x and y are the amounts of the tracer and carrier on the precipitate;
D and λ are the distribution coefficients.
For D and λ greater than 1, the precipitate is enriched in the tracer.
Depending on the co-precipitation system and conditions either λ or D may be constant.
The derivation of the Doerner-Hoskins law assumes that there is no mass exchange between the interior of the precipitating crystals and the solution. When this assumption is fulfilled, then the content of the tracer in the crystal is non-uniform (the crystals are said to be heterogeneous). When the Berthelot-Nernst law applies, then the concentration of the tracer in the interior of the crystal is uniform (and the crystals are said to be homogeneous). This is the case when diffusion in the interior is possible (like in the liquids) or when the initial small crystals are allowed to recrystallize. Kinetic effects (like speed of crystallization and presence of mixing) play a role.
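A minimal numerical sketch (not from the original text; the coefficient values and the precipitated carrier fraction are arbitrary) comparing the tracer fraction carried down under the two laws:

def doerner_hoskins(a, b, y, lam):
    """Tracer amount x on the precipitate under the logarithmic law."""
    return a * (1.0 - ((b - y) / b) ** lam)

def berthelot_nernst(a, b, y, D):
    """Tracer amount x on the precipitate under the homogeneous law."""
    r = y / (b - y)               # carrier ratio: precipitate / solution
    return a * D * r / (1.0 + D * r)

a, b = 1.0e-9, 1.0    # trace amount of tracer, macro amount of carrier
y = 0.5 * b           # half of the carrier precipitated
for c in (0.5, 1.0, 2.0):         # coefficient used as lambda or D
    print(c, doerner_hoskins(a, b, y, c) / a, berthelot_nernst(a, b, y, c) / a)
# For coefficients greater than 1, more than half of the tracer is carried
# down with half of the carrier, i.e. the precipitate is enriched.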
See also
Fajans–Paneth–Hahn Law
References
Chemical processes
Analytical chemistry
Radiochemistry | Coprecipitation | [
"Chemistry"
] | 917 | [
"Chemical processes",
"nan",
"Radiochemistry",
"Chemical process engineering",
"Radioactivity"
] |
11,146,484 | https://en.wikipedia.org/wiki/Saclofen | Saclofen is a competitive antagonist for the GABAB receptor. This drug is an analogue of the GABAB agonist baclofen. The GABAB receptor is heptahelical receptor, expressed as an obligate heterodimer, which couples to the Gi/o class of heterotrimeric G-proteins. The action of saclofen on the central nervous system is understandably modest, because G-proteins rely on an enzyme cascade to alter cell behavior while ionotropic receptors immediately change the ionic permeability of the neuronal plasma membrane, thus changing its firing patterns. These particular receptors, presynaptically inhibit N- and P/Q- voltage-gated calcium channels (VGCCs) via a direct interaction of the dissociated beta gamma subunit of the g-protein with the intracellular loop between the 1st and 2nd domain of the VGCC's alpha-subunit; postsynaptically, these potentiate Kir currents. Both result in inhibitory effects.
However, in animal experiments, saclofen is paradoxically observed to have an antiepileptic effect. This is probably because the GABAB effect is coupled to excitation in the thalamo-cortical circuits: Kir coupling via Gβγ subunits is so strong that it lowers the threshold for T-type Ca2+ channel opening enough to elicit their activation, and thus an excitation in this circuit. Since thalamo-cortical circuit overfiring is seen in types of epilepsy involving absence seizures (ethosuximide, a T-type Ca2+ channel blocker, is used in the treatment of this), the unexpected antiepileptic effects of saclofen may thus be explained (unexpected, as the GABA receptors are inhibitory, and antagonizing them should lead to hyperactivity of the affected neurons). Possible therapeutic uses of saclofen are currently being researched.
Saclofen has two enantiomeric forms. The (R)-stereoisomer is the one that binds to the GABAB receptor, whereas the (S)-stereoisomer does not.
References
4-Chlorophenyl compounds
GABAB receptor antagonists
Sulfonic acids | Saclofen | [
"Chemistry"
] | 474 | [
"Functional groups",
"Sulfonic acids"
] |
11,146,921 | https://en.wikipedia.org/wiki/Acousto-optic%20deflector | An acousto-optic deflector (AOD) is a device that uses the interaction between sound waves and light waves to deflect or redirect a laser beam. AODs are essentially the same as acousto-optic modulators (AOMs). In both an AOM and an AOD, the amplitude and frequency of different orders are adjusted as light is diffracted.
Operation
In the operation of an acousto-optic deflector the power driving the acoustic transducer is kept on, at a constant level, while the acoustic frequency is varied to deflect the beam to different angular positions. The acousto-optic deflector makes use of the acoustic-frequency-dependent diffraction angle, where the change in the deflection angle Δθ_d as a function of the change in acoustic frequency Δf is given as:

\Delta\theta_d = \frac{\lambda}{\nu} \, \Delta f

where λ is the optical wavelength and ν is the velocity of the acoustic wave.
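As a quick numerical sketch (the laser wavelength, acoustic velocity and sweep range below are illustrative assumptions, not values from the text):

wavelength = 633e-9    # HeNe laser wavelength, m (illustrative)
v_acoustic = 4200.0    # acoustic velocity in the crystal, m/s (illustrative)
delta_f = 50e6         # RF drive frequency sweep, Hz (illustrative)

delta_theta = wavelength * delta_f / v_acoustic
print(round(delta_theta * 1e3, 2), "mrad")   # ~7.54 mrad of angular scan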
Impact
AOM technology has made Bose–Einstein condensation practical, for which the 2001 Nobel Prize in Physics was awarded to Eric A. Cornell, Wolfgang Ketterle and Carl E. Wieman. Another application of acousto-optic deflection is optical trapping of small molecules.
See also
Acousto-optic modulator
Acousto-optics
Acousto-optical spectrometer
Nonlinear optics
Sonoluminescence
References
Acoustics | Acousto-optic deflector | [
"Physics"
] | 276 | [
"Classical mechanics",
"Acoustics"
] |
11,147,109 | https://en.wikipedia.org/wiki/Kempner%20function | In number theory, the Kempner function is defined for a given positive integer to be the smallest number such that divides the For example, the number does not divide , , but does
This function has the property that it has a highly inconsistent growth rate: it grows linearly on the prime numbers but only grows sublogarithmically at the factorial numbers.
History
This function was first considered by François Édouard Anatole Lucas in 1883, followed by Joseph Jean Baptiste Neuberg in 1887. In 1918, A. J. Kempner gave the first correct algorithm for computing S(n).
The Kempner function is also sometimes called the Smarandache function, following Florentin Smarandache's rediscovery of the function in 1980.
Properties
Since n divides n!, S(n) is always at most n. A number n greater than 4 is a prime number if and only if S(n) = n. That is, the numbers n for which S(n) is as large as possible relative to n are the primes. In the other direction, the numbers for which S(n) is as small as possible are the factorials: S(k!) = k, for all k ≥ 1.
S(n) is the smallest possible degree of a monic polynomial with integer coefficients whose values over the integers are all divisible by n.
For instance, the fact that S(6) = 3 means that there is a cubic polynomial whose values are all zero modulo 6, for instance the polynomial
x(x − 1)(x − 2) = x³ − 3x² + 2x,
but that all quadratic or linear polynomials (with leading coefficient one) are nonzero modulo 6 at some integers.
In one of the advanced problems in The American Mathematical Monthly, set in 1991 and solved in 1994, Paul Erdős pointed out that the function S(n) coincides with the largest prime factor of n for "almost all" n (in the sense that the asymptotic density of the set of exceptions is zero).
Computational complexity
The Kempner function of an arbitrary number n is the maximum, over the prime powers p^e dividing n, of S(p^e).
When n is itself a prime power p^e, its Kempner function may be found in polynomial time by sequentially scanning the multiples of p until finding the first one whose factorial contains enough multiples of p. The same algorithm can be extended to any n whose prime factorization is already known, by applying it separately to each prime power in the factorization and choosing the one that leads to the largest value.
For a number of the form n = px, where p is prime and x is less than p, the Kempner function of n is p. It follows from this that computing the Kempner function of a semiprime (a product of two primes) is computationally equivalent to finding its prime factorization, believed to be a difficult problem. More generally, whenever n is a composite number, the greatest common divisor of S(n) and n will necessarily be a nontrivial divisor of n, allowing n to be factored by repeated evaluations of the Kempner function. Therefore, computing the Kempner function can in general be no easier than factoring composite numbers.
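A minimal brute-force sketch of the definition (fine for small n; not one of the optimized algorithms discussed above): track k! modulo n and return the first k at which it vanishes.

def kempner(n):
    """Smallest k such that n divides k! (for n >= 1)."""
    k, fact_mod = 1, 1 % n
    while fact_mod != 0:
        k += 1
        fact_mod = (fact_mod * k) % n
    return k

assert kempner(8) == 4                    # 8 divides 4! = 24 but not 3! = 6
assert kempner(6) == 3                    # matches the cubic-polynomial example
assert [kempner(p) for p in (5, 7, 11)] == [5, 7, 11]   # S(p) = p for primes
assert kempner(720) == 6                  # S(k!) = k at factorials: 720 = 6!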
References and notes
Factorial and binomial topics | Kempner function | [
"Mathematics"
] | 572 | [
"Factorial and binomial topics",
"Combinatorics"
] |
11,147,298 | https://en.wikipedia.org/wiki/LOLITA | LOLITA is a natural language processing system developed by Durham University between 1986 and 2000. The name is an acronym for "Large-scale, Object-based, Linguistic Interactor, Translator and Analyzer".
LOLITA was developed by Roberto Garigliano and colleagues between 1986 and 2000. It was designed as a general-purpose tool for processing unrestricted text that could be the basis of a wide variety of applications. At its core was a semantic network containing some 90,000 interlinked concepts. Text could be parsed and analysed then incorporated into the semantic net, where it could be reasoned about (Long and Garigliano, 1993). Fragments of the semantic net could also be rendered back to English or Spanish.
Several applications were built using the system, including financial information analysers and information extraction tools for DARPA's “Message Understanding Conference” competitions (MUC-6 and MUC-7). The latter involved processing original Wall Street Journal articles, to perform tasks such as identifying key job changes in businesses and summarising articles. LOLITA was one of a small number of systems worldwide to compete in all sections of the tasks. A system description and an analysis of the MUC-6 results were written by Callaghan (Callaghan, 1998).
LOLITA was an early example of a substantial application written in a functional language: it consisted of around 50,000 lines of Haskell, with around 6,000 lines of C. It was also a complex and demanding application, in which many aspects of Haskell were invaluable in development.
LOLITA was designed to handle unrestricted text, so that ambiguity at various levels was unavoidable and significant. Laziness was essential in handling the explosion of syntactic ambiguity resulting from a large grammar, and it was much used with semantic ambiguity too. The system used multiple "domain-specific embedded languages" for semantic and pragmatic processing and for generation of natural language text from the semantic net. Also important was the ability to work with complex abstractions and to prototype new analysis algorithms quickly.
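The role laziness plays here can be illustrated with a small sketch (in Python generators rather than Haskell, and purely illustrative; none of this is LOLITA code): the number of parses grows combinatorially with sentence length, but a demand-driven enumeration only pays for the parses actually inspected.

```python
from itertools import islice

def parses(tokens):
    """Lazily enumerate all binary bracketings of a phrase. Their
    number grows as the Catalan numbers, so producing them on demand
    (analogous to lazy evaluation in Haskell) avoids materialising
    the full ambiguity explosion."""
    if len(tokens) == 1:
        yield tokens[0]
        return
    for i in range(1, len(tokens)):
        for left in parses(tokens[:i]):
            for right in parses(tokens[i:]):
                yield (left, right)

# Only the first three of the 14 bracketings of a 5-token phrase
# are ever constructed:
first_three = list(islice(parses(list("abcde")), 3))
```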
Later systems based on the same design include Concepts and SenseGraph.
See also
Computational linguistics
References
External links
Lolita Progress Report #1 1992
A collection of papers on parallelism in Haskell, with Lolita frequently among the primary test cases
Belief Modeling for Discourse Plans (Garagani 1997)
Computational linguistics
Haskell software
Natural language processing software
Durham University | LOLITA | [
"Technology"
] | 496 | [
"Natural language and computing",
"Computational linguistics"
] |
9,659,164 | https://en.wikipedia.org/wiki/Aging%20in%20dogs | Aging in dogs varies from breed to breed, and affects the dog's health and physical ability. As with humans, advanced years often bring changes in a dog's ability to hear, see, and move about easily. Skin condition, appetite, and energy levels often degrade with geriatric age. Medical conditions such as cancer, kidney failure, arthritis, dementia, and joint conditions, and other signs of old age may appear.
The aging profile of dogs varies according to their adult size (often determined by their breed): smaller breeds have an average lifespan of 10–15 years, with some even exceeding 18 years in age; medium breeds typically live for 10 to 13 years; and giant dog breeds have the lowest minimum lifespan, with an overall average of 8 to 13 years. The latter reach maturity at a slightly older age than smaller breeds, with giant breeds reaching adulthood at around two years old compared to the norm of around 13–15 months for other breeds. The accelerated rate of growth required by the drastic change in size exhibited in giant breeds is speculated by scientists at the American Kennel Club to lead to a higher risk of abnormal cell growth and cancer.
Terminology
The terms dog years and human years are frequently used when describing the age of a dog. However, there are two diametrically opposed ways in which the terms are defined:
One common nomenclature uses "human years" to represent a strict calendar basis (365 days) and a "dog year" to be the equivalent portion of a dog's lifetime, as a calendar year would be for a human being. Under this system, a 6-year-old dog would be described as having an age of 6 human years or 40–50 (depending on the breed) dog years.
The other common system defines "dog years" to be the actual calendar years (365 days each) of a dog's life, and "human years" to be the equivalent age of a human being. By this terminology, the age of a 6-year-old dog is described as 6 dog years or 40–50 human years, a reversal from the previous definition.
However, regardless of which set of terminology is used, the relationship between dog years and human years is not linear, as the following section explains.
Aging profile
Dog age concepts can be summarized into three types:
Popular myth — It is popularly believed that one human year equals seven dog years. This is inaccurate because dogs often reproduce at age 1 while humans virtually never reproduce at age 7.
One size fits all — A general rule of thumb is that the first year of a dog's life is equivalent to 15 human years, the second year equivalent to 9 human years, and each subsequent year about 5 human years. So, a dog age 2 is equivalent to a human age 24, while a dog age 10 is equivalent to a human age 64. This is more accurate but still fails to account for size/breed, which is a significant factor; a small calculator implementing this rule appears after this list.
Size- or breed-specific calculators — These try to factor in the size or breed as well. These are the most accurate types. They typically work either by expected adult weight or by categorizing the dog as "small", "medium", or "large".
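A minimal sketch of the "one size fits all" rule above (illustrative only; the function name and the linear interpolation within the first two years are assumptions, not part of any published calculator):

```python
def dog_to_human_years(dog_years):
    """Rule of thumb from the text: first year = 15 human years,
    second year = 9 more, each later year about 5 more.
    Size/breed, a significant factor, is deliberately ignored."""
    if dog_years <= 0:
        return 0.0
    if dog_years <= 1:
        return 15.0 * dog_years
    if dog_years <= 2:
        return 15.0 + 9.0 * (dog_years - 1)
    return 24.0 + 5.0 * (dog_years - 2)

assert dog_to_human_years(2) == 24   # example from the text
assert dog_to_human_years(10) == 64  # example from the text
```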
No one formula for dog-to-human age conversion is scientifically agreed on, although within fairly close limits the proposals show great similarities. Researchers suggest that dog age can be tracked through DNA methylation, which is an epigenetic process; epigenetic changes occur nonlinearly in dogs compared to humans.
Oxidative stress appears to be a significant determinant of longevity in small breed compared to large breed dogs. Oxidative damage to DNA can be measured by assessing the level of 8-Oxo-2'-deoxyguanosine in DNA. Oxidative DNA damage measured in puppies was found to be higher in larger dog breeds with shorter lifespans than in smaller breed dogs with longer life spans. This result suggested that DNA repair mechanisms fail earlier in larger breed dogs so that more DNA damage is accumulated sooner in these breeds leading to reduced longevity.
Emotional maturity occurs, as with humans, over an extended period of time and in stages. As in other areas, development of giant breeds is slightly delayed compared to other breeds, and, as with humans, there is a difference between adulthood and full maturity (compare humans age 20 and age 40 for example). In all but large breeds, sociosexual interest arises around 6–9 months, becoming emotionally adult around 15–18 months and fully mature around 3–4 years, although as with humans learning and refinement continue thereafter.
According to the UC Davis Book of Dogs, small-breed dogs (such as small terriers) become geriatric at about 11 years; medium-breed dogs (such as larger spaniels) at 10 years; large-breed dogs (such as German Shepherd Dogs) at 8 years; and giant-breed dogs (such as Great Danes) at 7 years.
Life expectancy by breed
Life expectancy usually varies within a range. For example, a Beagle (average life expectancy 13.3 years) usually lives to around 12–15 years, and a Scottish Terrier (average life expectancy 12 years) usually lives to around 10–16 years. The longest living verified dog is Bluey, an Australian Cattle Dog who died at 29 years. Bobi, a male purebred Rafeiro do Alentejo, was claimed to have died at age 31 in 2023.
Two of the longest living dogs on record, "Bluey" and "Chilla", were Australian Cattle Dogs. This has prompted a study of the longevity of the Australian Cattle Dog to examine if the breed might have exceptional longevity. The 100-dog survey yielded a mean longevity of 13.41 years with a standard deviation of 2.36 years. The study concluded that while Australian Cattle Dogs are a healthy breed and do live on average almost a year longer than most dogs of other breeds in the same weight class, record ages such as Bluey's or Chilla's should be regarded as uncharacteristic exceptions rather than as indicators of common exceptional longevity for the entire breed.
A random-bred dog (also known as a mongrel or a mutt) has an average life expectancy of 13.2 years in the Western world.
Some attempts have been made to determine the causes for breed variation in life expectancy.
Sorted by breed or life expectancy
The following data is from a 2024 study published in Scientific Reports. The total sample size for this study was 584,734 unique dogs located in the UK, of which 284,734 were deceased.
Factors affecting life expectancy
Apart from breed, several factors influence life expectancy:
Frequency of feeding — Researchers associated with the Dog Aging Project report that dogs that are fed just once daily are healthier on average than dogs fed more frequently. Dogs that received one meal per day had fewer disorders of their dental, gastrointestinal, musculoskeletal, kidney, and urinary systems.
Diet — There are some disagreements regarding the ideal diet. Commonly, senior dogs are fed commercially manufactured senior dog food diets. However, at least two dogs died at 27 years old with non-traditional diets: a Border Collie who was fed a purely vegetarian diet, and a bull terrier cross fed primarily kangaroo and emu meat. They died only 2 years and 5 months younger than the second oldest reported dog, Bluey.
Spaying and neutering — According to a study by the British Veterinary Association (author A. R. Michell, president of the Royal College of Veterinary Surgeons), "Neutered females lived longest of dogs dying of all causes, though entire females lived longest of dogs dying of natural causes, with neutered males having the shortest lifespan in each category." Neutering reduces or eliminates the risk of some causes of early death, for example pyometra in females and testicular cancer in males, as well as indirect causes of early death such as accident and euthanasia (intact dogs roam and tend to be more aggressive), but it might increase the risk of death from other conditions in males: the cited paper showed an increase only in the risk of prostate cancer, a finding that has not been repeated in subsequent papers, and neutered males might have a higher rate of urinary tract cancers such as transitional cell carcinoma and prostatic adenocarcinoma. Caution should be used when interpreting the results of these studies, especially when weighing the frequency of transitional cell carcinoma and prostate carcinoma in male dogs against the chance that an intact male dog will die from roaming (being hit by a car or other injuries), from benign hyperplasia of the prostate causing prostatic abscesses or inability to urinate (leading to euthanasia if this does not resolve with therapy), or from euthanasia due to fighting or aggression.
Another study showed that spayed females live longer than intact females (0.8 years more on average) but, unlike the previous study, found no difference between neutered and intact males; both male groups lived 0.4 years longer than intact females.
For more information, see Health effects of neutering.
A major study of dog longevity, which considered both natural and other factors affecting life expectancy, concluded that:
"The mean age at death (all breeds, all causes) was 11 years and 1 month, but in dogs dying of natural causes it was 12 years and 8 months. Only 8 percent of dogs lived beyond 15, and 64 percent of dogs died of disease or were euthanized as a result of disease. Nearly 16 percent of deaths were attributed to cancer, twice as many as to heart disease. [...] In neutered males the importance of cancer as a cause of death was similar to heart disease. [...] The results also include breed differences in lifespan, susceptibility to cancer, road accidents and behavioral problems as a cause of euthanasia."
In 2024, a study published in the journal Scientific Reports involving 584,734 British dogs across over 150 breeds revealed that larger breeds and those with flattened faces tended to have shorter average lifespans compared to smaller dogs and breeds with elongated snouts. Female dogs were found to live slightly longer than male dogs.
Effects of aging
In general, dogs age in a manner similar to humans. Their bodies begin to develop problems that are less common at younger ages, they are more prone to serious or fatal conditions such as cancer, stroke, etc. They become less physically active and less mobile and may develop joint problems such as arthritis. They also become less able to handle change, including wide climatic or temperature variation, and may develop dietary or skin problems or go deaf. In some cases incontinence may develop and breathing difficulties may appear.
"Aging begins at birth, but its manifestations are not noticeable for several years. The first sign of aging is a general decrease in activity level, including a tendency to sleep longer and more soundly, a waning of enthusiasm for long walks and games of catch, and a loss of interest in the goings on in the home."
In studies of cognitive abilities in aging dogs, it has been shown that qualities such as problem-solving, boldness and playfulness tend to decline with age. However, in tasks involving high motivation and low physical demands, older dogs have learned to perform a new task just as well as younger ones. In old age dogs may develop dementia, which is associated with amyloid-beta, a misfolded protein that has been observed in both dogs and humans.
The most common effects of aging are:
Loss of hearing
Loss of vision (cataracts)
Decreased activity, more sleeping, and reduced energy (in part due to reduced lung function)
Weight gain (calorie needs can be 30–40% lower in older dogs)
Weakening of immune system leading to infections
Skin changes (thickening or darkening of skin, dryness leading to reduced elasticity, loss or whitening of hair)
Change in feet and nails (thicker and more brittle nails makes trimming harder)
Arthritis, dysplasia and other joint problems
Loss of teeth
Gastrointestinal upset (stomach lining, diseases of the pancreas, constipation)
Weakness in muscles and bones
Urinary issues (incontinence in both genders, and prostatitis/straining to urinate in males)
Mammary cysts and tumors in females
Dementia
Heart murmurs
Diabetes
Importance of diet in aging
By changing the nutrition of a dog's diet as it ages, certain ailments and side effects of aging can be prevented or slowed.
Some important nutrients and ingredients in senior dog diets include:
Good sources of protein to meet higher protein requirements
Glucosamine and chondroitin sulfate to help maintain joint and bone health
Omega-3 fatty acids for joint and bone health as well as maintaining immune system health
Calcium and phosphorus for maintenance of bone structure
Beet pulp and flaxseed for gastrointestinal health
Fructooligosaccharides and mannanoligosaccharides work to improve the health of the gastrointestinal tract by increasing the number of "good" bacteria and decreasing the amount of "bad" bacteria
Appropriate levels of vitamin E and addition of L-carnitine to support brain and cognitive health
Dietary antioxidants such as vitamin E.
See also
Aging
List of oldest dogs
Old age
Pet loss
Dog year
Bobi
References
Dog health
Senescence in non-human organisms | Aging in dogs | [
"Biology"
] | 2,789 | [
"Senescence",
"Senescence in non-human organisms"
] |
9,659,484 | https://en.wikipedia.org/wiki/Young%20measure | In mathematical analysis, a Young measure is a parameterized measure that is associated with certain subsequences of a given bounded sequence of measurable functions. They are a quantification of the oscillation effect of the sequence in the limit. Young measures have applications in the calculus of variations, especially models from material science, and the study of nonlinear partial differential equations, as well as in various optimization (or optimal control problems). They are named after Laurence Chisholm Young who invented them, already in 1937 in one dimension (curves) and later in higher dimensions in 1942.
Young measures provide a solution to Hilbert’s twentieth problem, as a broad class of problems in the calculus of variations have solutions in the form of Young measures.
Definition
Intuition
Young constructed the Young measure in order to complete sets of ordinary curves in the calculus of variations. That is, Young measures are "generalized curves".
Consider the problem of minimizing I(f) = ∫₀¹ ((f′(x)² − 1)² + f(x)²) dx, where f is a continuously differentiable function such that f(0) = f(1) = 0. It is clear that we should pick f to have value close to zero, and its slope close to ±1. That is, the curve should be a tight jagged line hugging close to the x-axis. No function can reach the minimum value of 0, but we can construct a sequence of functions f₁, f₂, … that are increasingly jagged, such that I(fₙ) → 0.
The pointwise limit of fₙ is identically zero, but the pointwise limit of the derivatives fₙ′ does not exist. Instead, it is a fine mist that has half of its weight on +1, and the other half on −1.
Suppose that F is a functional defined by F(f) = ∫₀¹ g(f′(x)) dx, where g is continuous; then lim F(fₙ) = ½ g(−1) + ½ g(+1), so in the weak sense we can define lim fₙ to be a "function" whose value is zero and whose derivative is the mist ½ δ₋₁ + ½ δ₊₁. In particular, it would mean that F(lim fₙ) = lim F(fₙ).
Motivation
The definition of Young measures is motivated by the following theorem: Let m, n be arbitrary positive integers, let U be an open bounded subset of ℝⁿ and (f_k) be a bounded sequence in Lᵖ(U; ℝᵐ). Then there exist a subsequence (f_{k_j}) and, for almost every x ∈ U, a Borel probability measure ν_x on ℝᵐ such that for each F ∈ C(ℝᵐ) we have
F ∘ f_{k_j} ⇀ ∫_{ℝᵐ} F(y) dν_x(y)
weakly in Lᵖ(U) if the limit exists (or weakly* in L^∞(U) in case of p = ∞). The measures ν_x are called the Young measures generated by the sequence (f_k).
A partial converse is also true: if for each x ∈ U we have a Borel probability measure ν_x on ℝᵐ such that ∫_U ∫_{ℝᵐ} |y|ᵖ dν_x(y) dx < ∞, then there exists a sequence (f_k), bounded in Lᵖ(U; ℝᵐ), that has the same weak convergence property as above.
More generally, for any Carathéodory function G : U × ℝᵐ → ℝ, the limit
lim_{j→∞} ∫_U G(x, f_j(x)) dx,
if it exists, will be given by
∫_U ∫_{ℝᵐ} G(x, y) dν_x(y) dx.
Young's original idea in the case G ∈ C₀(U × ℝᵐ) was to consider, for each integer j ≥ 1, the uniform measure Γ_j concentrated on the graph of the function f_j (here, the underlying measure is the restriction of the Lebesgue measure on U). By taking the weak* limit of these measures as elements of C₀(U × ℝᵐ)*, we have
⟨Γ_j, G⟩ = ∫_U G(x, f_j(x)) dx → ⟨Γ, G⟩,
where Γ is the mentioned weak* limit. After a disintegration of the measure Γ on the product space U × ℝᵐ, we get the parameterized measure ν_x.
General definition
Let m, n be arbitrary positive integers, let U be an open and bounded subset of ℝⁿ, and let p ≥ 1. A Young measure (with finite p-moments) is a family of Borel probability measures (ν_x)_{x ∈ U} on ℝᵐ such that ∫_U ∫_{ℝᵐ} |y|ᵖ dν_x(y) dx < ∞.
Examples
Pointwise converging sequence
A trivial example of a Young measure is when the sequence (f_n) is bounded in L^∞(U; ℝᵐ) and converges pointwise almost everywhere in U to a function f. The Young measure is then the Dirac measure
ν_x = δ_{f(x)}, x ∈ U.
Indeed, by the dominated convergence theorem, F ∘ f_n converges weakly* in L^∞(U) to
∫_{ℝᵐ} F(y) dν_x(y) = F(f(x))
for any F ∈ C(ℝᵐ).
Sequence of sines
A less trivial example is the sequence
f_n(x) = sin(n x), x ∈ (0, 2π).
The corresponding Young measure satisfies
ν_x(E) = (1/π) ∫_{E ∩ [−1, 1]} dy/√(1 − y²),
for any measurable set E, independent of x ∈ (0, 2π).
In other words, for any F ∈ C(ℝ):
F ∘ f_n ⇀* (1/π) ∫_{−1}^{1} F(y)/√(1 − y²) dy
in L^∞((0, 2π)). Here, the Young measure does not depend on x and so the weak* limit is always a constant.
To see this intuitively, consider that at the limit of large n, a small rectangle [x, x + δ] × [y, y + δ] would capture a part of the curve of f_n. Take that captured part, and project it down to the x-axis. The length of that projection is proportional to 1/√(1 − y²), which means that the Young measure should look like a fine mist that has probability density 1/(π√(1 − y²)) at all y ∈ (−1, 1).
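A quick numerical sanity check of this example (an illustrative sketch; the test function F(y) = y² and all code details are assumptions, not from the article). For F(y) = y², the Young-measure average (1/π) ∫_{−1}^{1} y²/√(1 − y²) dy equals 1/2, and the spatial averages of F(sin(nx)) match it:

```python
import numpy as np

# Weak* convergence in the sine example: averages of F(sin(n x)) over
# (0, 2*pi) approach the Young-measure average, which is 1/2 for F(y) = y**2.
x = np.linspace(0.0, 2.0 * np.pi, 1_000_001)
for n in (1, 10, 1000):
    avg = np.mean(np.sin(n * x) ** 2)
    assert abs(avg - 0.5) < 1e-3, (n, avg)
```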
Minimizing sequence
For every asymptotically minimizing sequence (u_n) of
I(u) = ∫₀¹ ((u′(x)² − 1)² + u(x)²) dx
subject to u(0) = u(1) = 0 (that is, the sequence satisfies I(u_n) → 0), and perhaps after passing to a subsequence, the sequence of derivatives (u_n′) generates Young measures of the form ν_x = ½ δ₋₁ + ½ δ₊₁. This captures the essential features of all minimizing sequences to this problem, namely, their derivatives u_n′(x) will tend to concentrate along the minima {−1, +1} of the integrand.
If we take the generalized limit of (u_n), then it has value zero and derivative ν_x = ½ δ₋₁ + ½ δ₊₁, which means I(lim u_n) = 0 = lim I(u_n).
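A numerical illustration of such a minimizing sequence (a sketch with an assumed discretization; the sawtooth construction is the standard one, but the code is not from the article):

```python
import numpy as np

def sawtooth(x, n):
    """Piecewise-linear tent wave with n teeth on [0, 1]: slope is
    +1 or -1 almost everywhere, amplitude 1/(2n), f(0) = f(1) = 0."""
    t = (x * n) % 1.0
    return np.minimum(t, 1.0 - t) / n

x = np.linspace(0.0, 1.0, 1_000_001)
for n in (1, 10, 100):
    f = sawtooth(x, n)
    df = np.gradient(f, x)  # equals +/-1 except near the kinks
    I_n = np.mean((df**2 - 1.0) ** 2 + f**2)  # domain has length 1
    print(n, I_n)  # shrinks like 1/(12 n**2); no single f attains 0
```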
See also
Convex compactification
References
L. C. Young's 1937 memoir, presented by Stanisław Saks at the session of 16 December 1937 of the Warsaw Society of Sciences and Letters. A free PDF copy is made available by RCIN, the Digital Repository of the Scientific Institutes.
External links
Measures (measure theory) | Young measure | [
"Physics",
"Mathematics"
] | 960 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
9,659,677 | https://en.wikipedia.org/wiki/John%20Price%20%28New%20South%20Wales%20politician%29 | John Charles Price (born 14 May 1939) is an Australian politician, elected as a member of the New South Wales Legislative Assembly from 1984 to his retirement in 2007, including as the first Deputy Speaker of the Legislative Assembly from 1999 to 2007.
Early life and career
Price was born in Mayfield, New South Wales and was educated at Mayfield East Public School and Newcastle Technical High School. Price later obtained certificates in marine engineering technology and structural engineering from Newcastle Technical College before gaining a second class certificate of engineering competency (steam) from the Commonwealth Department of Shipping and Transport. Price began a fitter and machinist apprenticeship with the State Dockyard in 1956 before spending many years as a draughtsman, marine engineer and manager in the shipbuilding industry.
Political career
Price was first elected as an Alderman of the Newcastle City Council, serving from 1977 to 1984. He also served as a Newcastle delegate Councillor on the Shortland County Council (1977–1980).
Price represented Waratah from 1984 to 1999 and Maitland from 1999 to 2007 for the Labor Party. Price served on various committees, including as Chairman of the Standing Committee on Ethics (1999–2007) and as Chairman of the Parliamentary Committee for the Centenary of Federation Celebration (1999–2001), for which he received the Centenary Medal. He was the first Deputy Speaker, serving from 1999 to 2007; the position replaced that of Chairman of Committees, which he had held since 1995. Price retired at the March 2007 election.
Later life and honours
Price was made a Member of the Order of Australia (AM) in the 2009 Australia Day Honours List for "service to the Parliament of New South Wales, and to the community through executive roles with youth, educational, church and broadcasting organisations."
Price was a longstanding Member of the Council of the University of Newcastle, serving as a Parliamentary appointee from 1984 to 1991 and 1995 to 2004, and from 2004 to 2014 as a City Council nominee. Price later served as Deputy Chancellor of the university and was acting Chancellor from October 2012 to June 2013 after the death in office of Chancellor Dr Ken Moss. Price retired as Deputy Chancellor in March 2014 and from the University Council in May 2014. In November 2014, the University of Newcastle awarded him an honorary degree of Doctor of Letters (Hon.D.Litt.).
Notes
Living people
1939 births
Politicians from Newcastle, New South Wales
Australian Labor Party members of the Parliament of New South Wales
Members of the New South Wales Legislative Assembly
Deputy and assistant speakers of the New South Wales Legislative Assembly
Australian Labor Party councillors
21st-century Australian politicians
Australian engineers
Draughtsmen
Australian academic administrators
Academic staff of the University of Newcastle (Australia)
Chancellors of the University of Newcastle (Australia)
Members of the Order of Australia | John Price (New South Wales politician) | [
"Engineering"
] | 538 | [
"Design engineering",
"Draughtsmen"
] |
9,659,923 | https://en.wikipedia.org/wiki/Interferon%20type%20I | The type-I interferons (IFN) are cytokines which play essential roles in inflammation, immunoregulation, tumor cells recognition, and T-cell responses. In the human genome, a cluster of thirteen functional IFN genes is located at the 9p21.3 cytoband over approximately 400 kb including coding genes for IFNα (IFNA1, IFNA2, IFNA4, IFNA5, IFNA6, IFNA7, IFNA8, IFNA10, IFNA13, IFNA14, IFNA16, IFNA17 and IFNA21), IFNω (IFNW1), IFNɛ (IFNE), IFNк (IFNK) and IFNβ (IFNB1), plus 11 IFN pseudogenes.
Interferons bind to interferon receptors. All type I IFNs bind to a specific cell surface receptor complex known as the IFN-α receptor (IFNAR) that consists of IFNAR1 and IFNAR2 chains.
Type I IFNs are found in all mammals, and homologous (similar) molecules have been found in birds, reptiles, amphibians and fish species.
Sources and functions
IFN-α and IFN-β are secreted by many cell types including lymphocytes (NK cells, B-cells and T-cells), macrophages, fibroblasts, endothelial cells, osteoblasts and others. They stimulate both macrophages and NK cells to elicit an anti-viral response, involving IRF3/IRF7 antiviral pathways, and are also active against tumors. Plasmacytoid dendritic cells have been identified as the most potent producers of type I IFNs in response to antigen, and have thus been termed natural IFN-producing cells.
IFN-ω is released by leukocytes at the site of viral infection or tumors.
IFN-α acts as a pyrogenic factor by altering the activity of thermosensitive neurons in the hypothalamus thus causing fever. It does this by binding to opioid receptors and eliciting the release of prostaglandin-E2 (PGE2).
A similar mechanism is used by IFN-α to reduce pain; IFN-α interacts with the μ-opioid receptor to act as an analgesic.
In mice, IFN-β inhibits immune cell production of growth factors, thereby slowing tumor growth, and inhibits other cells from producing vessel-producing growth factors, thereby blocking tumor angiogenesis and hindering the tumor from connecting into the blood vessel system.
In both mice and humans, negative regulation of type I interferon signaling is known to be important. A few endogenous regulators have been found to elicit this important regulatory function, such as SOCS1 and aryl hydrocarbon receptor interacting protein (AIP).
Mammalian types
The mammalian types are designated IFN-α (alpha), IFN-β (beta), IFN-κ (kappa), IFN-δ (delta), IFN-ε (epsilon), IFN-τ (tau), IFN-ω (omega), and IFN-ζ (zeta, also known as limitin). Of these types, IFN-α, IFN-ω, and IFN-τ can work across species.
IFN-α
The IFN-α proteins are produced mainly by plasmacytoid dendritic cells (pDCs). They are mainly involved in innate immunity against viral infection. The genes responsible for their synthesis come in 13 subtypes that are called IFNA1, IFNA2, IFNA4, IFNA5, IFNA6, IFNA7, IFNA8, IFNA10, IFNA13, IFNA14, IFNA16, IFNA17, IFNA21. These genes are found together in a cluster on chromosome 9.
IFN-α is also made synthetically as a medication for hairy cell leukemia. The International Nonproprietary Name (INN) for the product is interferon alfa. The recombinant type is interferon alfacon-1. The pegylated types are pegylated interferon alfa-2a and pegylated interferon alfa-2b.
Recombinant feline interferon omega is a form of cat IFN-α (not ω) for veterinary use.
IFN-β
The IFN-β proteins are produced in large quantities by fibroblasts. They have antiviral activity that is involved mainly in innate immune response. Two types of IFN-β have been described, IFN-β1 (IFNB1) and IFN-β3 (IFNB3) (a gene designated IFN-β2 is actually IL-6).
IFN-ε, -κ, -τ, -δ and -ζ
IFN-ε, -κ, -τ, and -ζ appear, at this time, to come in a single isoform in humans, IFNK. Only ruminants encode IFN-τ, a variant of IFN-ω. So far, IFN-ζ has only been found in mice, while a structural homolog, IFN-δ, is found in a diverse array of non-primate and non-rodent placental mammals. Most but not all placental mammals encode functional IFN-ε and IFN-κ genes.
IFN-ω
IFN-ω, although having only one functional form described to date (IFNW1), has several pseudogenes in humans. Many non-primate placental mammals express multiple IFN-ω subtypes.
IFN-ν
This subtype of type I IFN was recently described as a pseudogene in human, but potentially functional in the domestic cat genome. In all other genomes of non-feline placental mammals, IFN-ν is a pseudogene; in some species, the pseudogene is well preserved, while in others, it is badly mutilated or is undetectable. Moreover, in the cat genome, the IFN-ν promoter is deleteriously mutated. It is likely that the IFN-ν gene family was rendered useless prior to mammalian diversification. Its presence on the edge of the type I IFN locus in mammals may have shielded it from obliteration, allowing its detection.
Interferon type I in cancer
Copy number alteration of the interferon gene cluster in cancer
A large individual patient data meta-analysis using 9937 patients obtained from cBioportal indicates that copy number alteration of the IFN gene cluster is prevalent among 24 cancer types. Notably deletion of this cluster is significantly associated with increased mortality in many cancer types particularly uterus, kidney, and brain cancers. The Cancer Genome Atlas PanCancer analysis also showed that copy number alteration of the IFN gene cluster is significantly associated with decreased overall survival. For instance, the overall survival of patients with brain glioma reduced from 93 months (diploidy) to 24 months. In conclusion, the copy number alteration of the IFN gene cluster is associated with increased mortality and decreased overall survival in cancer.
Use of Interferon type I in therapeutics
In cancer
From the 1980s onward, members of type-I IFN family have been the standard care as immunotherapeutic agents in cancer therapy. In particular, IFNα has been approved by the US Food and Drug Administration (FDA) for cancer. To date, pharmaceutical companies produce several types of recombinant and pegylated IFNα for clinical use; e.g., IFNα2a (Roferon-A, Roche), IFNα2b (Intron-A, Schering-Plough) and pegylated IFNα2b (Sylatron, Schering Corporation) for treatment of hairy cell leukemia, melanoma, renal cell carcinoma, Kaposi's sarcoma, multiple myeloma, follicular and non-Hodgkin lymphoma, and chronic myelogenous leukemia. Human IFNβ (Feron, Toray ltd.) has also been approved in Japan to treat glioblastoma, medulloblastoma, astrocytoma, and melanoma.
Combinational therapy with PD-1/PD-L1 inhibitors
By combining PD-1/PD-L1 inhibitors with type I interferons, researchers aim to tackle multiple resistance mechanisms and enhance the overall anti-tumor immune response. The approach is supported by preclinical and clinical studies that show promising synergistic effects, particularly in melanoma and renal carcinoma. These studies reveal increased infiltration and activation of T cells within the tumor microenvironment, the development of memory T cells, and prolonged patient survival.
In viral infection
Due to their strong antiviral properties, recombinant type I IFNs can be used for the treatment of persistent viral infections. Pegylated IFN-α is the current standard of care for chronic hepatitis B and C infection.
In multiple sclerosis
Currently, there are four FDA approved variants of IFN-β1 used as a treatment for relapsing multiple sclerosis. IFN-β1 is not an appropriate treatment for patients with progressive, non-relapsing forms of multiple sclerosis. Whilst the mechanism of action is not completely understood, the use of IFN-β1 has been found to reduce brain lesions, increase the expression of anti-inflammatory cytokines and reduce T cell infiltration into the brain.
Side effects of type I interferon therapy
One of the major limiting factors in the efficacy of type I interferon therapy is the high rate of side effects. Between 15% and 40% of people undergoing type I IFN treatment develop major depressive disorders. Less commonly, interferon treatment has also been associated with anxiety, lethargy, psychosis and parkinsonism. Mood disorders associated with IFN therapy can be reversed by discontinuation of treatment, and IFN-therapy-related depression is effectively treated with the selective serotonin reuptake inhibitor class of antidepressants.
Interferonopathies
Interferonopathies are a class of hereditary auto-inflammatory and autoimmune diseases characterised by upregulated type 1 interferon and downstream interferon stimulated genes. The symptoms of these diseases fall in a wide clinical spectrum, and often resemble those of viral infections acquired while the child is in utero, although lacking any infectious origin. The aetiology is largely still unknown, but the most common genetic mutations are associated with nucleic acid regulation, leading most researchers to suggest these arise from the failure of antiviral systems to differentiate between host and viral DNA and RNA.
Non-mammalian types
Avian type I IFNs have been characterized and preliminarily assigned to subtypes (IFN I, IFN II, and IFN III), but their classification into subtypes should await a more extensive characterization of avian genomes.
Functional lizard type I IFNs can be found in lizard genome databases.
Turtle type I IFNs have been purified. They resemble mammalian homologs.
The existence of amphibian type I IFNs has been inferred from the discovery of the genes encoding their receptor chains. They have not yet been purified, nor their genes cloned.
Piscine (bony fish) type I IFN was first cloned in zebrafish, and then in many other teleost species including salmon and mandarin fish. With few exceptions, and in stark contrast to avian and especially mammalian IFNs, they are present as single genes (multiple genes are however seen in polyploid fish genomes, possibly arising from whole-genome duplication). Unlike amniote IFN genes, piscine type I IFN genes contain introns, in similar positions as do their orthologs, certain interleukins. Despite this important difference, based on their 3-D structure these piscine IFNs have been assigned as type I IFNs. While in mammalian species all type I IFNs bind to a single receptor complex, the different groups of piscine type I IFNs bind to different receptor complexes. To date, several type I IFNs (IFNa, b, c, d, e, f and h) have been identified in teleost fish, with as few as one subtype in green pufferfish and as many as six subtypes in salmon, in addition to the recently identified novel subtype IFNh in mandarin fish.
References
External links
Cytokines
Antiviral drugs
Immunostimulants | Interferon type I | [
"Chemistry",
"Biology"
] | 2,919 | [
"Cytokines",
"Antiviral drugs",
"Biocides",
"Signal transduction"
] |
9,659,931 | https://en.wikipedia.org/wiki/Interferon%20type%20III | The type III interferon group is a group of anti-viral cytokines, that consists of four IFN-λ (lambda) molecules called IFN-λ1, IFN-λ2, IFN-λ3 (also known as IL29, IL28A and IL28B respectively), and IFN-λ4. They were discovered in 2003. Their function is similar to that of type I interferons, but is less intense and serves mostly as a first-line defense against viruses in the epithelium.
Genomic location
Genes encoding this group of interferons are all located on the long arm of chromosome 19 in humans, specifically in the region between 19q13.12 and 19q13.13. The IFNL1 gene, encoding IL-29, is located downstream of IFNL2, encoding IL-28A. IFNL3, encoding IL-28B, is located downstream of IFNL4.
In mice, the genes encoding for type III interferons are located on chromosome 7 and the family consists only of IFN-λ2 and IFN-λ3.
Structure
Interferons
All interferon groups belong to the class II cytokine family, whose members share a conserved structure comprising six α-helices. The proteins of the type III interferon group are highly homologous and show high amino acid sequence similarity to one another: the similarity between IFN-λ2 and IFN-λ3 is approximately 96%, and the similarity of IFN-λ1 to IFN-λ2/3 is around 81%. The lowest similarity is found between IFN-λ4 and IFN-λ3, only around 30%. Unlike the genes of the type I interferon group, which consist of only one exon, the type III interferon genes consist of multiple exons.
Receptor
The receptors for these cytokines are also structurally conserved. The receptors have two type III fibronectin domains in their extracellular domain. The interface of these two domains forms the cytokine binding site. The receptor complex for type III interferons consists of two subunits - IL10RB (also called IL10R2 or CRF2-4) and IFNLR1 (formerly called IL28RA, CRF2-12).
In contrast to the ubiquitous expression of receptors for type I interferons, IFNLR1 is largely restricted to tissues of epithelial origin. Despite high homology between type III interferons, the binding affinity to IFNLR1 differ, with IFN-λ1 showing the highest binding affinity, and IFN-λ3 showing the lowest binding affinity.
Signalling pathway
IFN-λ production is induced by pathogen sensing through pattern recognition receptors (PRRs), including TLRs, Ku70 and RIG-I-like receptors. The main producers of IFN-λ are type 2 myeloid dendritic cells.
IFN-λ binds to IFNLR1 with a high affinity, which then recruits the low-affinity subunit of the receptor, IL10RB. This interaction creates a signalling complex. Upon binding of the cytokine to the receptor, the JAK-STAT signalling pathway is activated: JAK1 and TYK2 phosphorylate and activate STAT1 and STAT2, which then induce downstream signalling that leads to the expression of hundreds of IFN-stimulated genes (ISGs), e.g. NF-κB, IRF, ISRE, Mx1, OAS1.
The signalling is modulated by suppressor of cytokine signalling 1 (SOCS1) and ubiquitin-specific peptidase 18 (USP18).
Function
Functions of type III interferons overlap largely with those of type I interferons. Both of these cytokine groups modulate the immune response after a pathogen has been sensed in the organism, and their functions are mostly anti-viral and anti-proliferative. However, type III interferons tend to be less inflammatory and show slower kinetics than type I. Also, because of the restricted expression of IFNLR1, the immunomodulatory effect of type III interferons is limited.
Because the receptors for type I and type II interferons are expressed on almost all nucleated cells, their function is rather systemic. Type III interferon receptors are expressed more specifically on epithelial cells and some immune cells such as neutrophils, and depending on the species, B cells and dendritic cells as well. Therefore, their antiviral effects are most prominent in barriers, in gastrointestinal, respiratory and reproductive tracts. Type III interferons usually act as the first line of defense against viruses at the barriers.
In the gastrointestinal tract, both type I and type III interferons are needed to effectively fight reovirus infection. Type III interferons restrict the initial replication of the virus and diminish its shedding through feces, while type I interferons prevent systemic infection. On the other hand, in the respiratory tract these two groups of interferons seem to be rather redundant, as documented by the susceptibility of double-deficient mice (lacking receptors for both type I and type III interferons) but the resistance to respiratory viruses of mice that are deficient in either type I or type III interferon receptors alone. Additional gastrointestinal viruses such as rotavirus and norovirus, as well as non-gastrointestinal viruses like influenza and West Nile virus, are also restricted by type III interferons.
References
Cytokines
Antiviral drugs | Interferon type III | [
"Chemistry",
"Biology"
] | 1,170 | [
"Cytokines",
"Antiviral drugs",
"Biocides",
"Signal transduction"
] |
9,660,855 | https://en.wikipedia.org/wiki/STRIDE%20%28algorithm%29 | In protein structure, STRIDE (Structural identification) is an algorithm for the assignment of protein secondary structure elements given the atomic coordinates of the protein, as defined by X-ray crystallography, protein NMR, or another protein structure determination method. In addition to the hydrogen bond criteria used by the more common DSSP algorithm, the STRIDE assignment criteria also include dihedral angle potentials. As such, its criteria for defining individual secondary structures are more complex than those of DSSP. The STRIDE energy function contains a hydrogen-bond term containing a Lennard-Jones-like 8-6 distance-dependent potential and two angular dependence factors reflecting the planarity of the optimized hydrogen bond geometry. The criteria for individual secondary structural elements, which are divided into the same groups as those reported by DSSP, also contain statistical probability factors derived from empirical examinations of solved structures with visually assigned secondary structure elements extracted from the Protein Data Bank.
Although DSSP is the older method and continues to be the most commonly used, the original STRIDE definition reported it to give a more satisfactory structural assignment in at least 70% of cases. In particular, STRIDE was observed to correct for the propensity of DSSP to assign shorter secondary structures than would be assigned by an expert crystallographer, usually due to the minor local variations in structure that are most common near the termini of secondary structure elements. Using a sliding-window method to smooth variations in assignment of single terminal residues, current implementations of STRIDE and DSSP are reported to agree in up to 95.4% of cases. Both STRIDE and DSSP, among other common secondary structure assignment methods, are believed to underpredict pi helices.
See also
DSSP
References
External links
STRIDE - includes web interface, a print of the original STRIDE paper, and software documentation
Paper on the original webserver implementation
Protein structure | STRIDE (algorithm) | [
"Chemistry"
] | 376 | [
"Protein structure",
"Structural biology"
] |
9,661,075 | https://en.wikipedia.org/wiki/Applied%20Physics%20A | Applied Physics A: Materials Science and Processing is a peer-reviewed scientific journal that is published monthly by Springer Science+Business Media. The editor-in-chief is Thomas Lippert (Paul Scherrer Institute). This publication is complemented by Applied Physics B (Lasers & Optics).
History
The journal Applied Physics was originally conceived and founded in 1972 by Helmut K. V. Lotsch at Springer-Verlag Berlin Heidelberg New York. Lotsch edited the journal up to volume 25 and thereafter split it into the two parts A26 (Solids and Surfaces) and B26 (Photophysics and Laser Chemistry). He continued his editorship up to volumes A61 and B61. Starting in 1995, the two journals were continued under separate editorships.
Aims and scope
Applied Physics A covers theoretical and experimental research in applied physics, including surfaces, thin films, the condensed phase of materials, nanostructured materials, applications of nanotechnology, and techniques pertaining to advanced processing and characterization. Coverage also includes characterizing and evaluating materials, optical and electronic materials, production engineering, process engineering, interfaces (surfaces and thin films), corrosion, and coatings.
Publishing formats include articles pertaining to original research, reviews, and rapid communications. Invited papers are also included on a regular basis and collected in special issues.
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.584.
References
External links
Materials science journals
Physics journals
Springer Science+Business Media academic journals
English-language journals
Academic journals established in 1973
Journals published between 13 and 25 times per year | Applied Physics A | [
"Materials_science",
"Engineering"
] | 337 | [
"Nanotechnology journals",
"Materials science journals",
"Materials science"
] |
9,661,780 | https://en.wikipedia.org/wiki/TRPV1 | The transient receptor potential cation channel subfamily V member 1 (TRPV1), also known as the capsaicin receptor and the vanilloid receptor 1, is a protein that, in humans, is encoded by the TRPV1 gene. It was the first isolated member of the transient receptor potential vanilloid receptor proteins that in turn are a sub-family of the transient receptor potential protein group. This protein is a member of the TRPV group of transient receptor potential family of ion channels. Fatty acid metabolites with affinity for this receptor are produced by cyanobacteria, which diverged from eukaryotes at least 2000 million years ago (MYA).
The function of TRPV1 is detection and regulation of body temperature. In addition, TRPV1 provides a sensation of scalding heat and pain (nociception). In primary afferent sensory neurons, it cooperates with TRPA1 (a chemical irritant receptor) to mediate the detection of noxious environmental stimuli.
Function
TRPV1 is part of the mammalian somatosensory system. It is a nonselective cation channel that may be activated by a wide variety of exogenous and endogenous physical and chemical stimuli. The best-known activators of TRPV1 are: temperature greater than 43 °C (109 °F); acidic conditions; capsaicin (the irritating compound in hot chili peppers); and allyl isothiocyanate, the pungent compound in mustard and wasabi. The activation of TRPV1 leads to a painful, burning sensation. Its endogenous activators include low pH (acidic conditions), the endocannabinoid anandamide, N-oleyl-dopamine, and N-arachidonoyl-dopamine. TRPV1 receptors are found mainly in the nociceptive neurons of the peripheral nervous system, but they have also been described in many other tissues, including the central nervous system. TRPV1 is involved in the transmission and modulation of pain (nociception), as well as the integration of diverse painful stimuli.
Sensitization
The sensitivity of TRPV1 to noxious stimuli, such as high temperatures, is not static. Upon tissue damage and the consequent inflammation, a number of inflammatory mediators, such as various prostaglandins and bradykinin, are released. These agents increase the sensitivity of nociceptors to noxious stimuli. This manifests as an increased sensitivity to painful stimuli (hyperalgesia) or pain sensation in response to non-painful stimuli (allodynia). Most sensitizing pro-inflammatory agents activate the phospholipase C pathway. Phosphorylation of TRPV1 by protein kinase C has been shown to play a role in sensitization of TRPV1. The cleavage of PIP2 by PLC-beta can result in disinhibition of TRPV1 and, as a consequence, contribute to the sensitivity of TRPV1 to noxious stimuli.
Desensitization
Upon prolonged exposure to capsaicin, TRPV1 activity decreases, a phenomenon called desensitization. Extracellular calcium ions are required for this phenomenon, thus influx of calcium and the consequential increase of intracellular calcium mediate this effect. Various signaling pathways such as phosphorylation by PKA and PKC, interaction with calmodulin, dephosphorylation by calcineurin, and the decrease of PIP2, have been implicated in the regulation of desensitization of TRPV1. Desensitization of TRPV1 is thought to underlie the paradoxical analgesic effect of capsaicin.
Clinical significance
Peripheral nervous system
As a result of its involvement in nociception, TRPV1 has been a target for the development of pain reducers (analgesics). Several major strategies have been pursued, including receptor antagonists and agonist-induced desensitization, both discussed below.
TRPV1 use
The TRPV1 receptor can be used to study how an organism senses temperature change. In the laboratory, the receptor may be knocked out in mice, leaving them unable to detect differences in ambient temperature. In the pharmaceutical field, blocking heat receptors may give patients with inflammatory disorders or severe burning pain a chance to heal without the pain. Because heat in large enough doses can kill most organisms, the removal of TRPV1 also shows researchers how the inability to sense heat may be detrimental to an organism's survival, and how this may translate to human heat disorders.
TRPV1 in immune cells
TRPV1 plays an important role not only in neurons but also in immune cells. Activation of TRPV1 modulates the immune response, including the release of inflammatory cytokines and chemokines and the ability to phagocytose. However, the role of TRPV1 in immune cells is not entirely understood and is currently being intensely studied. TRPV1 is not the only TRP channel expressed in immune cells: TRPA1, TRPM8 and TRPV4 are the most relevant TRP channels also studied in immune cells.
The expression of TRPV1 has been confirmed in the cells of innate immunity as well as the cells of adaptive immunity. TRPV1 can be found in monocytes, macrophages, dendritic cells, T lymphocytes, natural killer cells and neutrophils. TRPV1 is thought to be potentially very important in immune cell functioning, as it senses higher temperature and lower pH, which can affect immune cell performance.
TRPV1 and adaptive immunity
TRPV1 is an important membrane channel in T cells as it regulates the influx of calcium cations. TRPV1 is involved mainly in T cell receptor (TCR) signalling, T cell activation and TCR-mediated influx of calcium ions, but it is involved in T cell cytokine production as well. Indeed, T cells with TRPV1 knocked out show impaired calcium uptake after T cell activation via the TCR, and thus show dysregulation in signalling pathways such as NF-κB and NFAT.
TRPV1 and innate immunity
Regarding innate immunity, activation of TRPV1 by capsaicin has been shown to suppress the production of the nitrite radical, the superoxide anion and hydrogen peroxide by macrophages. Furthermore, administration of capsaicin, and the subsequent activation of TRPV1, suppresses phagocytosis in dendritic cells. In a mouse model, TRPV1 affects dendritic cell maturation and function; however, further studies are needed to clarify this effect in humans. In neutrophils, the increase in cytosolic calcium cations leads to the synthesis of prostaglandins. Activation of TRPV1 by capsaicin modulates the neutrophil immune response due to the higher influx of calcium ions into the cell.
TRPV1 is also considered a novel therapeutic target in many inflammatory diseases. Multiple studies have shown that TRPV1 influences the outcome of several inflammatory diseases such as chronic asthma, esophageal inflammation, rheumatoid arthritis and cancer. Studies using TRPV1 agonists and antagonists have shown that their administration indeed changes the course of inflammation. However, at this point there is much contradictory evidence about whether TRPV1 activation induces a pro-inflammatory or an anti-inflammatory response, and further research needs to be carried out. Meanwhile, it is important to highlight that TRPV1's influence on inflammatory diseases is probably not limited to immune cells alone, being rather an interplay between immune cells, neurons, and other cell types (epithelial cells etc.).
TRPV1 and cancer
TRPV1 has been found to be overexpressed in several types of cancers, e.g., pancreatic cancer and colon adenocarcinoma. This suggests that certain types of cancers might be more prone to capsaicin-induced (and other vanilloid-induced) cell death. Indeed, studies have shown an inverse correlation between the consumption of chili-based foods and all-cause mortality as well as cancers. This beneficial impact of the consumption of chili-based foods was attributed to their capsaicinoid content.
TRPV1 activation by its agonist capsaicin has been shown to induce G0–G1 cell-cycle arrest and apoptosis in leukemic cell lines, adult T-cell leukaemia and multiple myeloma. Capsaicin reduces the expression of the anti-apoptotic protein Bcl-2 and also promotes activation of p53, a tumour-suppressor protein known as a major regulator of cell death. In both cases, this effect of capsaicin subsequently leads to the above-mentioned apoptosis.
TRPV1 and neuroinflammation
The interplay between neurons and immune cells is a well-known phenomenon. TRPV1 plays a role in neuroinflammation, being expressed both in neurons and in immune cells. Of particular significance is the confirmed expression of TRPV1 in microglia and astrocytes, cells found close to neurons. The neuro-immune axis is the site of production of the neuroinflammatory molecules and receptors that mediate the interplay between the two systems and ensure a complex response to external stimuli (or to the body's own pathologies). Studying TRPV1's involvement in neuroinflammation has great therapeutic significance for the future.
Cutaneous neurons expressing TRPV1 and dendritic cells have been found located close to each other. Activation of TRPV1 channels in neurons is associated with subsequent production of interleukin 23 (IL-23) by dendritic cells and further production of IL-17 by T cells. These interleukins are important for host defence against pathogenic fungi (such as Candida albicans) and bacteria (such as Staphylococcus aureus); thus, thanks to the neuro-immune axis, TRPV1 activation can lead to better defence against these pathogens.
TRPV1 is thought to contribute to autophagy of microglia via its Ca2+ signalling, which leads to mitochondria-induced cell death. The TRPV1 channel also influences microglia-induced inflammation. Migration and chemotaxis of microglia and astrocytes seem to be affected by TRPV1's interaction with the cytoskeleton and by Ca2+ signalling. TRPV1 is therefore involved in the neuro-immune axis via its function in microglia as well.
TRPV1 was shown to have protective effect in neurologic disorders such as Huntington's disease, vascular dementia, and Parkinson's disease. However, its precise function needs to be further explored.
Ligands
Antagonists
Antagonists block TRPV1 activity, thus reducing pain. Identified antagonists include the competitive antagonist capsazepine and the non-competitive antagonist ruthenium red. These agents could be useful when applied systemically. Numerous TRPV1 antagonists have been developed by pharmaceutical companies. TRPV1 antagonists have shown efficacy in reducing nociception from inflammatory and neuropathic pain models in rats. This provides evidence that TRPV1 is capsaicin's sole receptor. In humans, drugs acting at TRPV1 receptors could be used to treat neuropathic pain associated with multiple sclerosis, chemotherapy, or amputation, as well as pain associated with the inflammatory response of damaged tissue, such as in osteoarthritis.
These drugs can affect body temperature (hyperthermia), which is a challenge to therapeutic application. For example, a transient temperature gain (~1 °C, lasting approximately 40 minutes before reverting to baseline) was measured in rats upon application of the TRPV1 antagonist AMG-9810. The role of TRPV1 in the regulation of body temperature has emerged in the last few years. Based on a number of TRPV1-selective antagonists causing a mild increase in body temperature (hyperthermia), it was proposed that TRPV1 is tonically active in vivo and regulates body temperature by telling the body to "cool itself down"; without these signals, the body overheats. Likewise, this explains the propensity of capsaicin (a TRPV1 agonist) to cause sweating (i.e., a signal to reduce body temperature). In a recent report, it was found that tonically active TRPV1 channels are present in the viscera and exert an ongoing suppressive effect on body temperature. It has also been proposed that the predominant function of TRPV1 is body temperature maintenance: experiments have shown that TRPV1 blockade increases body temperature in multiple species, including rodents and humans. In 2008, AMG-517, a highly selective TRPV1 antagonist, was dropped from clinical trials because it caused hyperthermia (a mean maximum body temperature of ~38.3 °C, most intense on day 1 but attenuated on days 2–7). Another molecule, SB-705498, was also evaluated in the clinic, but its effect on body temperature was not reported. As understanding of modality-specific agonism of TRPV1 increases, it seems that next-generation therapeutics targeting TRPV1 have the potential to side-step hyperthermia. Moreover, for at least two indications or approaches this may be a secondary issue. Where the therapeutic approach (e.g., in analgesia) is agonist-mediated desensitization, the hyperthermic effects of antagonists may not be relevant. Secondly, in applications such as TRPV1 antagonism for the treatment of severe conditions such as heart failure, there may be an acceptable trade-off with mild hyperthermia, although no hyperthermia was observed in rodent models of heart failure treated with BCTC, SB-366791 or AMG-9810. Post-translational modification of the TRPV1 protein by phosphorylation is critical for its functionality: reports published from the NIH suggest that Cdk5-mediated phosphorylation of TRPV1 is required for its ligand-induced channel opening.
Agonists
TRPV1 is activated by numerous agonists from natural sources. Agonists such as capsaicin and resiniferatoxin activate TRPV1 and, upon prolonged application, cause TRPV1 activity to decrease (desensitization), leading to alleviation of pain via the subsequent decrease in the TRPV1-mediated release of inflammatory molecules following exposure to noxious stimuli. Agonists can be applied locally to the painful area in various forms, generally as a patch or an ointment. Numerous capsaicin-containing creams are available over the counter, containing low concentrations of capsaicin (0.025–0.075%). It is debated whether these preparations actually lead to TRPV1 desensitization; it is possible that they act via counter-irritation. Novel preparations containing higher capsaicin concentrations (up to 10%) are in clinical trials. Eight percent capsaicin patches have recently become available for clinical use, with supporting evidence demonstrating that a 30-minute treatment can provide up to 3 months of analgesia by causing regression of TRPV1-containing neurons in the skin. Currently, these treatments must be re-administered on a regular (albeit infrequent) schedule in order to maintain their analgesic effects.
Cannabinoid ligands
Cannabinoid ligands include:
Cannabidiol (CBD) agonist
Cannabigerol (CBG) agonist
Tetrahydrocannabivarin (THCV) agonist
Cannabigerovarin (CBGV) agonist
N-Acyl amides
N-Acyl amides that activate cannabimimetic receptors include:
Anandamide (AEA)
N-Arachidonoyl dopamine
N-Oleoyl dopamine
N-Arachidonoyl taurine
N-Docosahexaenoyl ethanolamine
N-Docosahexaenoyl GABA
N-Docosahexaenoyl aspartic acid
N-Docosahexaenoyl glycine
N-Docosahexaenoyl serine
N-Arachidonoyl GABA
N-Linoleyl GABA
Fatty acid metabolites
Certain metabolites of polyunsaturated fatty acids have been shown to stimulate cells in a TRPV1-dependent fashion. The metabolites of linoleic acid, including 13(S)-hydroxy-9Z,11E-octadecadienoic acid (13(S)-HODE), 13(R)-hydroxy-9Z,11E-octadecadienoic acid (13(R)-HODE), 9(S)-hydroxy-10(E),12(Z)-octadecadienoic acid (9(S)-HODE), 9(R)-hydroxy-10(E),12(Z)-octadecadienoic acid (9(R)-HODE), and their respective keto analogs, 13-oxoODE and 9-oxoODE (see 13-HODE and 9-HODE sections on direct actions), activate peripheral and central mouse pain-sensing neurons. Reports disagree on the potencies of these metabolites: for example, the most potent one, 9(S)-HODE, has been reported to require at least 10 micromoles/liter in one study, but only a more physiological concentration of 10 nanomoles/liter in another, to activate TRPV1 in rodent neurons. The TRPV1 dependency of these metabolites' activities appears to reflect their direct interaction with TRPV1. Although relatively weak agonists of TRPV1 in comparison to anandamide, these linoleate metabolites have been proposed to act through TRPV1 in mediating pain perception in rodents and to cause injury to airway epithelial cells, thereby contributing to asthma in mice and therefore possibly in humans. Certain arachidonic acid metabolites, including 20-hydroxy-5Z,8Z,11Z,14Z-eicosatetraenoic acid (see 20-hydroxyeicosatetraenoic acid), 12(S)-hydroperoxy-5Z,8Z,10E,14Z-eicosatetraenoic acid (12(S)-HpETE), 12(S)-hydroxy-5Z,8Z,10E,14Z-eicosatetraenoic acid (12(S)-HETE; see 12-HETE), hepoxilin A3 (i.e., 8R/S-hydroxy-11,12-oxido-5Z,9E,14Z-eicosatrienoic acid) and hepoxilin B3 (i.e., 10R/S-hydroxy-11,12-oxido-5Z,8Z,14Z-eicosatrienoic acid) likewise activate TRPV1 and may thereby contribute to tactile hyperalgesia and allodynia.
Studies with mouse, guinea pig, and human tissues, and in guinea pigs in vivo, indicate that another arachidonic acid metabolite, prostaglandin E2, operates through its prostaglandin EP3 G protein-coupled receptor to trigger cough responses. Its mechanism of action involves activation and/or sensitization of TRPV1 (as well as TRPA1) receptors, presumably by an indirect mechanism. A genetic polymorphism in the EP3 receptor (rs11209716) has been associated with ACE inhibitor-induced cough in humans.
Resolvin E1 (RvE1), RvD2 (see resolvins), neuroprotectin D1 (NPD1), and maresin 1 (Mar1) are metabolites of the omega-3 fatty acids eicosapentaenoic acid (for RvE1) and docosahexaenoic acid (for RvD2, NPD1, and Mar1). These metabolites are members of the specialized proresolving mediators (SPMs) class of metabolites that function to resolve diverse inflammatory reactions and diseases in animal models and, it is proposed, humans. These SPMs also dampen pain perception arising from various inflammation-based causes in animal models. The mechanism behind their pain-dampening effects involves the inhibition of TRPV1, probably (in at least certain cases) by an indirect effect wherein they activate other receptors located on the neurons or on nearby microglia or astrocytes. CMKLR1, GPR32, FPR2, and NMDA receptors have been proposed to be the receptors through which these SPMs operate to down-regulate TRPV1 and thereby pain perception.
Fatty acid conjugates
N-Arachidonoyl dopamine, an endocannabinoid found in the human CNS and structurally similar to capsaicin, activates the TRPV1 channel with an EC50 of approximately 50 nM.
N-Oleoyl-dopamine, another endogenous agonist, binds to human VR1 with a Ki of 36 nM.
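To give a feel for what potency values like these mean in practice, the sketch below computes fractional channel activation as a function of agonist concentration under a simple single-site Hill model with Hill coefficient 1; the model choice and the concentrations scanned are illustrative assumptions, not data from the text.

```python
# Sketch: fractional TRPV1 activation under a one-site Hill model (assumed).
# EC50 = 50 nM is the value quoted above for N-arachidonoyl dopamine.

def fractional_activation(conc_nM: float, ec50_nM: float = 50.0, n: float = 1.0) -> float:
    """Fraction of the maximal response at agonist concentration conc_nM."""
    return conc_nM ** n / (ec50_nM ** n + conc_nM ** n)

for c in (5, 50, 500, 5000):  # nM, illustrative concentrations
    print(f"{c:>5} nM -> {fractional_activation(c):.2f} of maximal response")
# By construction, the EC50 (50 nM) gives half-maximal activation (0.50).
```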
Another endocannabinoid, anandamide, has also been shown to act on TRPV1 receptors.
AM404—an active metabolite of paracetamol (also known as acetaminophen)—that serves as an anandamide reuptake inhibitor and COX inhibitor also serves as a potent TRPV1 agonist.
The plant-biosynthesized cannabinoid cannabidiol also shows "either direct or indirect activation" of TRPV1 receptors. TRPV1 colocalizes with CB1 receptors and CB2 receptors in sensory and brain neurons, respectively, and other plant cannabinoids like CBN, CBG, CBC, THCV, and CBDV are also agonists of this ion channel. There is also evidence that non-cannabinoid components of the Cannabis secondary metabolome, such as myrcene, activate TRPV1.
Vitamin D metabolites
The vitamin D metabolites calcifediol (25-hydroxy vitamin D or 25OHD) and calcitriol (1,25-hydroxy vitamin D or 1,25OHD) act as endogenous ligands of TRPV1.
Central nervous system
TRPV1 is also expressed at high levels in the central nervous system and has been proposed as a target for treatment not only of pain but also for other conditions such as anxiety.
Furthermore, TRPV1 appears to mediate long-term synaptic depression (LTD) in the hippocampus. LTD has been linked to a decrease in the ability to make new memories, unlike its opposite, long-term potentiation (LTP), which aids in memory formation. A dynamic pattern of LTD and LTP occurring at many synapses provides a code for memory formation. Long-term depression and the subsequent pruning of synapses with reduced activity are an important aspect of memory formation. In rat brain slices, activation of TRPV1 with heat or capsaicin induced LTD, while capsazepine blocked capsaicin's ability to induce LTD. In the brainstem (solitary tract nucleus), TRPV1 controls the asynchronous and spontaneous release of glutamate from unmyelinated cranial visceral afferents, release processes that are active at normal temperatures and hence quite distinct from TRPV1 responses in painful heat. Hence, there may be therapeutic potential in modulating TRPV1 in the central nervous system, perhaps as a treatment for epilepsy (TRPV1 is already a target in the peripheral nervous system for pain relief).
Interactions
TRPV1 has been shown to interact with:
CALM1
SNAPAP
SYT9
CBD
AEA
NPR1
PKG
Discovery
The dorsal root ganglion (DRG) neurons of mammals were known to express a heat-sensitive ion channel that could be activated by capsaicin. The research group of David Julius therefore created a cDNA library of genes expressed in dorsal root ganglion neurons, expressed the clones in HEK 293 cells, and looked for cells that respond to capsaicin with calcium influx (which HEK 293 cells normally do not). After several rounds of screening and dividing the library, a single clone encoding the TRPV1 channel was finally identified in 1997. It was the first TRPV channel to be identified. Julius was awarded the 2021 Nobel Prize in Physiology or Medicine for his discovery.
See also
Capsaicin
Capsinoids
Vanilloids
Vanillotoxin
Cannabinoid receptor
Discovery and development of TRPV1 antagonists
Ruthenium red
Thermoreceptor
:Category:Somatosensory system
Endocannabinoid system
References
Further reading
External links
The Endocannabinoidome: The World of Endocannabinoids and Related Mediators (book, 2014)
Ion channels | TRPV1 | [
"Chemistry"
] | 5,386 | [
"Neurochemistry",
"Ion channels"
] |
9,662,404 | https://en.wikipedia.org/wiki/Space%20Vehicle%20Mockup%20Facility | The Space Vehicle Mockup Facility (SVMF) is a large open space area located inside Building 9 of Johnson Space Center in Houston. The SVMF houses mockups of most pressurized modules on the International Space Station (ISS). It is primarily used for astronaut training and systems familiarization.
The ISS mockups found in the SVMF are 1:1 scale and vary in level of fidelity compared to the ISS. An industrial door at the north end and overhead cranes allow new mockup spacecraft to be loaded into the facility. Space Center Houston offers a Level 9 VIP tour of the entire training facility during its afternoon tour; tickets cost $199.
Previous trainers
Space Shuttle Orbiter Trainers
Full Fuselage Trainer (FFT)
Crew Compartment Trainer (CCT)
Crew Compartment Trainer II (CCT II)
Current trainers
International Space Station Trainer
Space Station Mockup and Training Facility (SSMTF)
Other facilities
Precision Air Bearing Facility (PABF)
Partial Gravity Simulator (POGO)
External links
https://web.archive.org/web/20110720095029/http://dx14.jsc.nasa.gov/svmf.htm
References
Human spaceflight
Johnson Space Center | Space Vehicle Mockup Facility | [
"Astronomy"
] | 257 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
9,662,955 | https://en.wikipedia.org/wiki/Convection%20%28heat%20transfer%29 | Convection (or convective heat transfer) is the transfer of heat from one place to another due to the movement of fluid. Although often discussed as a distinct method of heat transfer, convective heat transfer involves the combined processes of conduction (heat diffusion) and advection (heat transfer by bulk fluid flow). Convection is usually the dominant form of heat transfer in liquids and gases.
Note that this definition of convection is only applicable in heat transfer and thermodynamic contexts. It should not be confused with the dynamic fluid phenomenon of convection, which is typically referred to as natural convection in thermodynamic contexts in order to distinguish the two.
Overview
Convection can be "forced" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). Thermal expansion of fluids may also force convection. In other cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called "natural convection". An example is the draft in a chimney or around any fire. In natural convection, an increase in temperature produces a reduction in density, which in turn causes fluid motion due to pressures and forces when the fluids of different densities are affected by gravity (or any g-force). For example, when water is heated on a stove, hot water from the bottom of the pan is displaced (or forced up) by the colder denser liquid, which falls. After heating has stopped, mixing and conduction from this natural convection eventually result in a nearly homogeneous density, and even temperature. Without the presence of gravity (or conditions that cause a g-force of any type), natural convection does not occur, and only forced-convection modes operate.
The convection heat transfer mode comprises two mechanisms. In addition to energy transfer due to specific molecular motion (diffusion), energy is transferred by the bulk, or macroscopic, motion of the fluid. This motion is associated with the fact that, at any instant, large numbers of molecules are moving collectively or as aggregates. Such motion, in the presence of a temperature gradient, contributes to heat transfer. Because the molecules in the aggregate retain their random motion, the total heat transfer is due to the superposition of energy transport by the random motion of the molecules and by the bulk motion of the fluid. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion.
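A compact way to state this superposition, as found in standard heat-transfer texts (the notation below is a sketch of ours, not taken from this article), is to write the total energy flux as a diffusive term plus an advective term:

```latex
% Total energy flux = conduction (diffusion) + advection by bulk flow.
% k: thermal conductivity, T: temperature, \rho: density,
% c_p: specific heat at constant pressure, \mathbf{u}: bulk fluid velocity.
\mathbf{q} \;=\; \underbrace{-\,k\,\nabla T}_{\text{diffusion}}
\;+\; \underbrace{\rho\, c_p\, T\, \mathbf{u}}_{\text{advection}}
```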
Types
Two types of convective heat transfer may be distinguished:
Free or natural convection: when fluid motion is caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. In the absence of an internal source, when the fluid is in contact with a hot surface, its molecules separate and scatter, causing the fluid to be less dense. As a consequence, the warmer, less dense fluid rises while the cooler fluid becomes denser and sinks. Thus, the hotter volume transfers heat towards the cooler volume of that fluid. Familiar examples are the upward flow of air due to a fire or hot object and the circulation of water in a pot that is heated from below.
Forced convection: when a fluid is forced to flow over the surface by an internal source such as fans, by stirring, and pumps, creating an artificially induced convection current.
In many real-life applications (e.g. heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection).
Internal and external flow can also classify convection. Internal flow occurs when a fluid is enclosed by a solid boundary such as when flowing through a pipe. An external flow occurs when a fluid extends indefinitely without encountering a solid surface. Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other. The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts.
Further classification can be made depending on the smoothness and undulations of the solid surfaces. Not all surfaces are smooth, though a bulk of the available information deals with smooth surfaces. Wavy irregular surfaces are commonly encountered in heat transfer devices which include solar collectors, regenerative heat exchangers, and underground energy storage systems. They have a significant role to play in the heat transfer processes in these applications. Since they bring in an added complexity due to the undulations in the surfaces, they need to be tackled with mathematical finesse through elegant simplification techniques. Also, they do affect the flow and heat transfer characteristics, thereby behaving differently from straight smooth surfaces.
For a visual experience of natural convection, a glass filled with hot water and some red food dye may be placed inside a fish tank with cold, clear water. The convection currents of the red liquid may be seen to rise and fall in different regions, then eventually settle, illustrating the process as heat gradients are dissipated.
Newton's law of cooling
Convective cooling is sometimes loosely assumed to be described by Newton's law of cooling.
Newton's law states that the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings while under the effects of a breeze. The constant of proportionality is the heat transfer coefficient. The law applies when the coefficient is independent, or relatively independent, of the temperature difference between object and environment.
In classical natural convective heat transfer, the heat transfer coefficient is dependent on the temperature. However, Newton's law does approximate reality when the temperature changes are relatively small, and for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference.
Convective heat transfer
The basic relationship for heat transfer by convection is:

$\dot{Q} = h A (T - T_f)$

where $\dot{Q}$ is the heat transferred per unit time, A is the area of the object, h is the heat transfer coefficient, T is the object's surface temperature, and T_f is the fluid temperature.
The convective heat transfer coefficient is dependent upon the physical properties of the fluid and the physical situation. Values of h have been measured and tabulated for commonly encountered fluids and flow situations.
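As a concrete illustration, here is a minimal sketch that evaluates the relation above and, under the additional lumped-capacitance assumption, the exponential cooling predicted by Newton's law; all numerical values (h, A, temperatures, heat capacity) are illustrative assumptions, not tabulated data.

```python
# Sketch: convective heat transfer rate, Q_dot = h * A * (T_s - T_f),
# plus lumped-capacitance cooling under Newton's law of cooling.
# All values below are assumed for illustration.
import math

h = 25.0      # W/(m^2*K), assumed convective heat transfer coefficient
A = 0.5       # m^2, surface area
T_s = 80.0    # degC, initial surface temperature
T_f = 20.0    # degC, bulk fluid temperature

Q_dot = h * A * (T_s - T_f)                     # W, instantaneous heat loss
print(f"Convective heat loss: {Q_dot:.0f} W")   # -> 750 W

# With a lumped heat capacity m*c (J/K) and no internal source, the
# temperature decays exponentially: T(t) = T_f + (T_s - T_f)*exp(-h*A*t/(m*c)).
m_c = 5000.0  # J/K, assumed lumped heat capacity
t = 600.0     # s
T_t = T_f + (T_s - T_f) * math.exp(-h * A * t / m_c)
print(f"Temperature after {t:.0f} s: {T_t:.1f} degC")  # ~33.4 degC
```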
See also
Conjugate convective heat transfer
Convection
Forced convection
Natural convection
Mixed convection
Heat transfer coefficient
Heat transfer enhancement
Heisler chart
Thermal conductivity
Convection–diffusion equation
References
Thermodynamics
Heat transfer | Convection (heat transfer) | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,326 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics",
"Dynamical systems"
] |
9,663,941 | https://en.wikipedia.org/wiki/Shq1 | Shq1p is a protein involved in the rRNA processing pathway. It was discovered by Pok Yang in the Chanfreau laboratory at UCLA. Depletion of Shq1p has led to decreased level of various H/ACA box snoRNAs (H/ACA box snoRNAs are responsible for pseuduridylation of pre-rRNA) and certain pre-rRNA intermediates.
Background
During the synthesis of eukaryotic ribosomes, four mature ribosomal RNAs (the 5S, 5.8S, 18S, and 25S) must be synthesized. Three of these rRNAs (5.8S, 18S, and 25S) come from a single pre-rRNA known as the 35S. Although many of the intermediates in this rRNA processing pathway have been identified in the last thirty years, there are still a number of proteins involved in this process whose specific function is unknown.
Function
Shq1, a protein thought to play a role in the stabilization and/or production of box H/ACA snoRNAs, remains largely uncharacterized. It has been proposed that Shq1, along with Naf1p, is involved in the initial steps of the biogenesis of H/ACA box snoRNPs (box H/ACA snoRNAs form complexes with proteins, thereby forming snoRNPs) because of its association with certain snoRNP proteins during the snoRNP's maturation, while showing very little association with the mature snoRNP complex. Despite the known involvement of Shq1 in H/ACA box snoRNP production, the exact function of this protein in the overall rRNA processing pathway is still unknown.
See also
rRNA
snoRNA
Ribosomes
Eukaryotic translation
Proteins
References
External links
Chanfreau laboratory
Shq1 gene in yeast genome
Molecular biology
Proteins
RNA | Shq1 | [
"Chemistry",
"Biology"
] | 398 | [
"Biochemistry",
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
9,664,088 | https://en.wikipedia.org/wiki/Taprogge | Taprogge GmbH is a medium-sized company based in Wetter, Germany. The company is named after founding brothers Ludwig and Josef Taprogge. Founded in 1953, the company is known for its tube cleaning systems for steam turbine condensers, heat exchangers and debris filters for water-cooled shell and tube heat exchangers and condensers.
Invention of the tube cleaning system
Josef Taprogge was a turbine engineer in a power plant near Essen and was responsible for the cleaning of the turbine condenser tubes. Cleaning had to be performed while the turbine was out of operation, with the power station unable to supply any electrical energy to the grid during the turbine outage. On the other hand, the careful elimination of fouling from the inside of the tubing is important for a high vacuum in the condenser and thereby for the optimal efficiency of energy generation from water vapour. He obtained patents for his self-cleaning systems in 1953 and 1957.
To avoid economic losses caused by shutdowns, Josef Taprogge invented a continuously working cleaning system which kept the condenser free from fouling during the operation of the steam turbine. A prototype was installed in the cooling water pipe leading to the condenser. During the German Wirtschaftswunder, the process, which was marketed and further developed by Taprogge GmbH, spread widely and was very well received in power stations due to its efficiency. The efficiency of power stations equipped with the systems increases by around 2–4%. The cleaning process became well known, and the name "Taprogge System" has been used in the technical literature.
Tube cleaning systems
The patented process uses sponge rubber balls which are injected into the cooling water flow (1) before it enters the condenser. The diameter of the cleaning balls is only slightly bigger than the nominal diameter of the condenser tubing. Due to their elasticity, they generate a contact pressure on their way through the condenser tubes by which fouling is removed from the inner tube walls. At the condenser outlet, a strainer (2) is installed in the connecting pipe which separates the balls from the water flow and feeds them into a DN 80 pipe. From there the balls are pumped back to their starting point by a 4 kW impeller pump via a DN 80 pipe. To inject the balls into the cycle, a pressure vessel with a detachable cover is installed downstream of the pump. This so-called collector (3b) is equipped with a screen and a flap. With the flap open, the balls can pass; with the flap closed, they remain in the collector and can be replenished or exchanged. The process works continuously and the tubes remain free of mud, algae, bacteria and scaling. The operation of the system is monitored via sight glasses and electronic measuring instruments. The screen surfaces are arranged on shafts with pivoted bearings and can be turned on demand to have fouling removed by the water flow. In this process the balls are caught in the collector. This formerly time-consuming procedure is automated (3c), meaning there is no need to shut down the plant for cleaning or to interrupt plant processes. Gear motors (M) operate the relevant actuators. The nominal diameters of the screens have been adjusted to respond to developments in power station technology and are produced in sizes from nominal diameter 150 mm to 3600 mm. The cleaning ball diameters range from 14 to 30 mm, and filling one collector normally requires several hundred of them. However, some cleaning systems can require well over a thousand cleaning balls. The lifetime of the cleaning balls, which are made of biodegradable natural rubber, is around 4 weeks.
A specialized technology is the production of tube cleaning systems for seawater desalination plants. As the heated seawater, called brine, has a particularly corrosive effect, highly corrosion-resistant yet heat-conducting materials (such as titanium) have to be used for such systems. Due to the large tube diameters in the evaporators, the cleaning balls have diameters of up to 45 mm.
Debris filtration systems
In the 1970s, the product range was extended by backwash filters to protect heat exchangers and condensers from macrofouling, such as stones, pieces of wood, fibres, plastic sheeting, and mussels. Foreign matter first settles on the filter surface. As fouling builds up, the differential pressure between filter inlet and outlet increases and the filter has to be cleaned by backwashing. For this purpose an electrically driven rotor covers the filter surface, which is connected with a pipe leading outside. Installed in this pipe is a valve that is opened during the backwash process. The accumulated fouling is drawn off and discharged via the pipe which, downstream of the condenser, leads to the main cooling water pipe or a debris container. This technology has spread to power stations and industrial plants the world over. Depending on the flow rates to be filtered, the filters are produced in nominal diameters from DN 150 to DN 3200. The filter surface consists of stainless steel with punched holes. For difficult types of debris, filter surfaces of plastic or grids can be used. A further product type is fine filters with filtration degrees from 50 to 1000 μm.
Water intake systems
Since the late 1990s, Taprogge has offered another filter system which retains fouling already at the intake into the cooling water system; in this way the entire system and the long cooling water pipes can be protected. The system, called TAPIS (Taprogge Air Powered Intake System), is installed in the water at the cooling water pipe inlet in the form of a polyhedral housing with plain filter surfaces. It is cleaned by pressurized air blasts. In contrast to submarine rakes for seaborne matter, the stainless steel filter has no moving parts and handles very large water flows. The filter surfaces are made of coated plastic provided with drilled holes.
Literature
Heat Exchanger Fouling, Fundamental Approaches and Technical Solutions; Editor: Prof. Dr.-Ing. Hans Mueller-Steinhagen.
References
External links
Taprogge GmbH (German and English)
Companies based in North Rhine-Westphalia
Fouling
Engineering companies of Germany | Taprogge | [
"Materials_science"
] | 1,272 | [
"Materials degradation",
"Fouling"
] |
9,664,491 | https://en.wikipedia.org/wiki/TOMNET | The TOMNET optimization Environment is a platform for solving applied optimization problems in Microsoft .NET. It makes it possible to use solvers like SNOPT, MINOS and CPLEX with one single model formulation. The solvers handle everything from linear programming and integer programming to global optimization.
External links
(home page)
Numerical software
Mathematical optimization software | TOMNET | [
"Mathematics"
] | 71 | [
"Numerical software",
"Mathematical software"
] |
9,664,551 | https://en.wikipedia.org/wiki/Amorphism | An amorphism, in chemistry, crystallography and, by extension, to other areas of the natural sciences is a substance or feature that lacks an ordered form. In the specific case of crystallography, an amorphic material is one that lacks long range (significant) crystalline order at the molecular level. In the history of chemistry, amorphism was recognised even before the discovery of the nature of the exact atomic crystalline lattice structure. The concept of amorphism can also be found in the fields of art, biology, archaeology and philosophy as a characterisation of objects without form, or with random or unstructured form.
Amorphous and Crystalline solid
In the context of solids, amorphous and crystalline are terms used to describe the structure of materials. Amorphous solids are the opposite of crystalline solids: the atoms or molecules in amorphous substances are arranged randomly, without any long-range order. As a result, they do not have a sharp melting point; the phase transition from solid to liquid occurs over a range of temperatures. Examples include glass, rubber and some plastics.
See also
Glass
Obsidian
References
Bibliography
Crystallography
Physical chemistry | Amorphism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 234 | [
"Materials science stubs",
"Applied and interdisciplinary physics",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics",
"nan",
"Physical chemistry",
"Physical chemistry stubs"
] |
9,665,280 | https://en.wikipedia.org/wiki/WIPI%20%28platform%29 | WIPI (; ), Wireless Internet Platform for Interoperability, was a middleware platform used in South Korea that allowed mobile phones, regardless of manufacturer or carrier, to run applications. Much of WIPI was based on Java, but it also included the ability to download and run compiled binary applications as well.
The specification was created by the Mobile Platform Special Subcommittee of the Korea Wireless Internet Standardization Forum (KWISF). The South Korean government required that all cellular phones sold in the country include the WIPI platform, to avoid inordinate competition between mobile companies, but the policy was withdrawn in April 2009.
References
External links
WIPI Forum
Telecommunications in South Korea
Mobile phone standards
Mobile software | WIPI (platform) | [
"Technology"
] | 141 | [
"Mobile technology stubs"
] |
9,665,492 | https://en.wikipedia.org/wiki/OpenKODE | OpenKODE is a set of native APIs for handheld games and media applications providing a cross-platform abstraction layer for other media technologies such as OpenGL ES, OpenVG, OpenMAX AL and OpenSL ES. Besides of being an umbrella specification of the other APIs, OpenKODE also contains an API of its own, OpenKODE Core. OpenKODE Core defines POSIX-like functions to access operating system resources such as file access.
OpenKODE is managed by the non-profit technology consortium Khronos Group.
See also
DirectX
References
External links
Public Registry
Public forums
Public bug tracker
OpenKODE Conformant companies
Freekode open source implementation of OpenKODE
Application programming interfaces | OpenKODE | [
"Engineering"
] | 147 | [
"Software engineering",
"Software engineering stubs"
] |
9,665,803 | https://en.wikipedia.org/wiki/Safety%20Network%20International%20e.V. | Safety Network International e.V. is an association that is based in Ostfildern and is registered at Esslingen district court.
Origin
Safety Network International e.V. was established by eight founding companies: ASK Systems GmbH, Dürr AG, Daimler AG, EMG Automation GmbH, Festo AG & Co.KG., SICK AG, Pilz GmbH & Co.KG. and Volkswagen AG. It was established in 1999 under the name SafetyBUS p Club International e.V. In 2006 the association changed its name to Safety Network International e.V. Almost 70 companies and institutions are now members of Safety Network International e.V. (as of 2013).
In addition to the headquarters in Germany, there are also the following regional organisations of the Safety Network International e.V.
Safety Network Japan – established in 2000
Safety Network International, USA – established in 2001
Objectives
The purpose of the association is to promote the use and dissemination of the safety-related bus system SafetyBUS p and the industrial communication system SafetyNET p. A further objective of the association is to promote the integration of SafetyBUS p and SafetyNET p into existing and future automation technologies. Members of the association work together in the Security Workgroup, Infrastructure Committee and Implementation Committee. This is where members discuss common viewpoints, define standards and formulate recommendations. Since 2007 Safety Network International e.V. has been a "Liaison D" member of the IEC and is committed to working on standards. The association publishes its own magazine on safety and automation under the name “Connected”.
References
Industrial automation
Safety organizations | Safety Network International e.V. | [
"Engineering"
] | 329 | [
"Industrial automation",
"Automation",
"Industrial engineering"
] |
9,666,599 | https://en.wikipedia.org/wiki/Tanaka%27s%20formula | In the stochastic calculus, Tanaka's formula for the Brownian motion states that
where Bt is the standard Brownian motion, sgn denotes the sign function
and Lt is its local time at 0 (the local time spent by B at 0 before time t) given by the L2-limit
One can also extend the formula to semimartingales.
Properties
Tanaka's formula is the explicit Doob–Meyer decomposition of the submartingale |Bt| into the martingale part (the integral on the right-hand side, which is a Brownian motion), and a continuous increasing process (the local time). It can also be seen as the analogue of Itō's lemma for the (nonsmooth) absolute value function $f(x) = |x|$, with $f'(x) = \operatorname{sgn}(x)$ and $f''(x) = 2\delta(x)$; see local time for a formal explanation of the Itō term.
Outline of proof
The function |x| is not C2 in x at x = 0, so we cannot apply Itō's formula directly. But if we approximate it near zero (i.e. in [−ε, ε]) by parabolas

$f_\varepsilon(x) = \begin{cases} |x|, & |x| \ge \varepsilon, \\[4pt] \dfrac{\varepsilon}{2} + \dfrac{x^2}{2\varepsilon}, & |x| < \varepsilon, \end{cases}$

and use Itō's formula, we can then take the limit as ε → 0, leading to Tanaka's formula.
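As a numerical sanity check (a sketch of ours, not part of the formal proof), one can simulate a Brownian path, form left-endpoint Riemann sums for the stochastic integral of sgn(B), estimate the local time through the occupation-time expression above with a small fixed ε, and compare their sum with |B_T|:

```python
# Monte Carlo sketch of Tanaka's formula: |B_T| = int_0^T sgn(B_s) dB_s + L_T.
# The local time L_T is approximated by (1/(2*eps)) * (time spent in (-eps, eps));
# agreement is up to discretization error in dt and eps.
import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 1_000_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))  # path with B_0 = 0

sgn = np.where(B[:-1] > 0, 1.0, -1.0)       # sgn convention: sgn(0) = -1
stoch_int = float(np.sum(sgn * dB))         # Ito integral via left-endpoint sums

eps = 0.01                                  # must satisfy sqrt(dt) << eps << 1
time_near_zero = dt * np.count_nonzero(np.abs(B[:-1]) < eps)
local_time = time_near_zero / (2 * eps)     # occupation-time estimate of L_T

print(f"|B_T|                 = {abs(B[-1]):.4f}")
print(f"stoch. integral + L_T = {stoch_int + local_time:.4f}")
```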
References
(Example 5.3.2)
Equations
Martingale theory
Probability theorems
Stochastic calculus | Tanaka's formula | [
"Mathematics"
] | 269 | [
"Mathematical objects",
"Equations",
"Theorems in probability theory",
"Mathematical problems",
"Mathematical theorems"
] |
9,667,001 | https://en.wikipedia.org/wiki/Social%20development%20theory | Social development theory attempts to explain qualitative changes in the structure and framework of society, that help the society to better realize aims and objectives. Development can be defined in a manner applicable to all societies at all historical periods as an upward ascending movement featuring greater levels of energy, efficiency, quality, productivity, complexity, comprehension, creativity, mastery, enjoyment and accomplishment. Development is a process of social change, not merely a set of policies and programs instituted for some specific results. During the last five centuries this process has picked up in speed and intensity, and during the last five decades has witnessed a marked surge in acceleration.
The basic mechanism driving social change is increasing awareness leading to better organization. When society senses new and better opportunities for progress it develops new forms of organization to exploit these new openings successfully. The new forms of organization are better able to harness the available social energies and skills and resources to use the opportunities to get the intended results.
Development is governed by many factors that influence the results of developmental efforts. There must be a motive that drives the social change and essential preconditions for that change to occur. The motive must be powerful enough to overcome obstructions that impede that change from occurring. Development also requires resources such as capital, technology, and supporting infrastructure.
Development is the result of society's capacity to organize resources to meet challenges and opportunities. Society passes through well-defined stages in the course of its development. They are nomadic hunting and gathering, rural agrarian, urban, commercial, industrial, and post-industrial societies. Pioneers introduce new ideas, practices, and habits that conservative elements initially resist. At a later stage, innovations are accepted, imitated, organized, and used by other members of the community. Organizational improvements introduced to support the innovations can take place simultaneously at four different levels—physical, social, mental, and psychological. Moreover, four different types of resources are involved in promoting development. Of these four, physical resources are most visible, but least capable of expansion. Productivity of resources increases enormously as the quality of organization and level of knowledge inputs rise.
Development pace and scope varies according to the stage society is in. The three main stages are physical, vital (vital refers to the dynamic and nervous social energies of humanity that propel individuals to accomplish), and mental.
Terminology
Though the term development usually refers to economic progress, it can apply to political, social, and technological progress as well. These various sectors of society are so intertwined that it is difficult to neatly separate them. Development in all these sectors is governed by the same principles and laws, and therefore the term applies uniformly.
Economic development and human development need not mean the same thing. Strategies and policies aimed at greater growth may produce greater income in a country without improving the average living standard. This happened in oil-producing Middle Eastern countries—a surge in oil prices boosted their national income without much benefit to poorer citizens. Conversely, people-oriented programs and policies can improve health, education, living standards, and other quality-of-life measures with no special emphasis on monetary growth. This occurred in the 30 years of socialist and communist rule in Kerala in India.
Four related but distinct terms and phenomena form successive steps in a graded series: survival, growth, development, and evolution. Survival refers to a subsistence lifestyle with no marked qualitative changes in living standards. Growth refers to horizontal expansion in the existing plane characterized by quantitative expansion—such as a farmer increasing the area under cultivation, or a retailer opening more stores. Development refers to a vertical shift in the level of operations that causes qualitative changes, such as a retailer turning into a manufacturer or an elementary school turning into a high school.
Human development
Development is a human process, in the sense that human beings, not material factors, drive development. The energy and aspiration of people who seek development form the motive force that drives development. People's awareness may decide the direction of development. Their efficiency, productivity, creativity, and organizational capacities determine the level of people's accomplishment and enjoyment. Development is the outer realization of latent inner potentials. The level of people's education, intensity of their aspiration and energies, quality of their attitudes and values, skills and information all affect the extent and pace of development. These factors come into play whether it is the development of the individual, family, community, nation, or the whole world.
Process of emergence of new activities in society
Unconscious vs. conscious development
Human development normally proceeds from experience to comprehension. As society develops over centuries, it accumulates the experience of countless pioneers. The essence of that experience becomes the formula for accomplishment and success. The fact that experience precedes knowledge can be taken to mean that development is an unconscious process that gets carried out first, while knowledge becomes conscious only later. Unconscious refers to activities that people carry out without knowing what the end results will be, or where their actions will lead. They carry out the acts without knowing the conditions required for success.
Role of pioneering individuals
The gathering knowledge of society matures and breaks out on the surface in the form of new ideas, espoused by pioneers who also take new initiatives to give expression to those ideas. Those initiatives may call for new strategies and new organizations, which conservative elements may resist. If the pioneer's initiatives succeed, they encourage imitation and slow propagation in the rest of the community. Later, growing success leads society to assimilate the new practice, and it becomes regularized and institutionalized. This can be viewed as three distinct phases: social preparedness, the initiative of pioneers, and assimilation by the society.
The pioneer as such plays an important role in the development process, since through that person unconscious knowledge becomes conscious. The awakening comes to the lone receptive individual first, and that person spreads the awakening to the rest of the society. Though pioneers appear as lone individuals, they act as conscious representatives of society as a whole, and their role should be viewed in that light.
Imitation of the pioneer
Though a pioneer comes up with innovative ideas, very often the initial response is one of indifference, ridicule, or even outright hostility. If the pioneer persists and succeeds in an initiative, that person's efforts may eventually get the endorsement of the public. That endorsement tempts others to imitate the pioneer. If they also succeed, news spreads and brings wider acceptance. Conscious efforts to lend organizational support to the new initiative help institutionalize the innovation.
Organization of new activities
Organization is the human capacity to harness all available information, knowledge, resources, technology, infrastructure, and human skills to exploit new opportunities, and to face the challenges and hurdles that block progress. Development comes through improvements in this human capacity for organization. In other words, development comes through the emergence of better organizations that enhance society's capacity to make use of opportunities and face challenges.
The development of organizations may come through the formulation of new laws and regulations or new systems. Each new step of progress brings a corresponding new organization. Increasing European international trade in the 16th and 17th centuries demanded corresponding development in the banking industry and new commercial laws and civil arbitration facilities. New types of business ventures were formed to attract the capital needed to finance expanding trade. As a result, a new business entity appeared—the joint-stock company, which limited the investors' liability to the extent of their personal investment without endangering other properties.
Each new developmental advance is accompanied by new or more suitable organizations that facilitate that advance. Often, existing inadequate organizations must change to accommodate new advances.
Many countries have introduced scores of new reforms and facilities—such as business directories, franchising, lease purchase, servicing, credit rating, collection agencies, industrial estates, free trade zones, and credit cards. Additionally, a diverse range of internet services has formed. Each new facility improves the effective use of available social energies for productive purposes. The importance of these facilities for speeding development is apparent when they are absent. When Eastern European countries wanted to transition to market-type economies, they were seriously hampered in their efforts by the absence of supportive systems and facilities.
Organization matures into institution
At a particular stage, organizations mature into institutions that become part of society. Beyond this point, an organization does not need laws or agencies to foster growth or ensure a continued presence. The transformation of an organization into an institution signifies society's total acceptance of that new organization.
The income tax office is an example of an organization that is actively maintained by the enactment of laws and the formation of an office for procuring taxes. Without active governmental support, this organization would disappear, as it does not enjoy universal public support. On the other hand, the institution of marriage is universally accepted, and would persist even if governments withdrew regulations that demand registration of marriage and impose age restrictions. The institution of marriage is sustained by the weight of tradition, not by government agencies and legal enactments.
Cultural transmission by the family
Families play a major role in the propagation of new activities once they win the support of the society. A family is a miniature version of the larger society—acceptance by the larger entity is reflected in the smaller entity. The family educates the younger generation and transmits social values like self-restraint, responsibility, skills, and occupational training. Though children do not follow their parents' footsteps as much as they once did, parents still mold their children's attitudes and thoughts regarding careers and future occupations. When families propagate a new activity, it signals that the new activity has become an integral part of the society.
Education
One of the most powerful means of propagating and sustaining new developments is the educational system in a society. Education transmits society's collective knowledge from one generation to the next. It equips each new generation to face future opportunities and challenges with knowledge gathered from the past. It shows the young generation the opportunities ahead for them, and thereby raises their aspiration to achieve more. Information imparted by education raises the level of expectations of youth, as well as aspirations for higher income. It also equips youth with the mental capacity to devise ways and means to improve productivity and enhance living standards.
Society can be conceived as a complex fabric that consists of interrelated activities, systems, and organizations. Development occurs when this complex fabric improves its own organization. That organizational improvement can take place simultaneously in several dimensions.
Quantitative expansion in the volume of social activities
Qualitative expansion in the content of all those elements that make up the social fabric
Geographic extension of the social fabric to bring more of the population under the cover of that fabric
Integration of existing and new organizations so the social fabric functions more efficiently
Such organizational innovations occur all the time, as a continuous process. New organizations emerge whenever a new developmental stage is reached, and old organizations are modified to suit new developmental requirements. The impact of these new organizations may be powerful enough to make people believe they are powerful in their own right—but it is society that creates the new organizations required to achieve its objectives.
The direction that the developmental process takes is influenced by the population's awareness of opportunities. Increasing awareness leads to greater aspiration, which releases greater energy that helps bring about greater accomplishment.
Resources
Since the time of the English economist Thomas Malthus, some have thought that capacity for development is limited by availability of natural resources. Resources can be divided into four major categories: physical, social, mental, and human. Land, water, minerals and oil, etc. constitute physical resources. Social resources consist of society's capacity to manage and direct complex systems and activities. Knowledge, information and technology are mental resources. The energy, skill and capacities of people constitute human resources.
The science of economics is much concerned with scarcity of resources. Though physical resources are limited, social, mental, and human resources are not subject to inherent limits. Even if these appear limited, there is no fixity about the limitation, and these resources continue to expand over time. That expansion can be accelerated by the use of appropriate strategies. In recent decades the rate of growth of these three resources has accelerated dramatically.
The role of physical resources tends to diminish as society moves to higher developmental levels. Correspondingly, the role of non-material resources increases as development advances. One of the most important non-material resources is information, which has become a key input. Information is a non-material resource that is not exhausted by distribution or sharing. Greater access to information helps increase the pace of its development. Ready access to information about economic factors helps investors transfer capital to sectors and areas where it fetches a higher return. Greater input of non-material resources helps explain the rising productivity of societies in spite of a limited physical resource base.
Application of higher non-material inputs also raises the productivity of physical inputs. Modern technology has helped increase the proven sources of oil by 50% in recent years—and at the same time, reduced the cost of search operations by 75%. Moreover, technology shows it is possible to reduce the amount of physical inputs in a wide range of activities. Scientific agricultural methods demonstrated that soil productivity could be raised through synthetic fertilizers. Dutch farm scientists have demonstrated that a minimal water consumption of 1.4 liters is enough to raise a kilogram of vegetables, compared to the thousand liters that traditional irrigation methods normally require.
Henry Ford's assembly line techniques reduced the man-hours of labor required to deliver a car from 783 minutes to 93 minutes. These examples show that the greater input of higher non-material resources can raise the productivity of physical resources and thereby extend their limits.
Technological development
When the mind engages in pure creative thinking, it comes up with new thoughts and ideas. When it applies itself to society it can come up with new organizations. When it turns to the study of nature, it discovers nature's laws and mechanisms. When it applies itself to technology, it makes new discoveries and practical inventions that boost productivity. Technical creativity has had an erratic course through history, with some intense periods of creative output followed by some dull and inactive periods. However, the period since 1700 has been marked by an intense burst of technological creativity that is multiplying human capacities exponentially.
Though many reasons can be cited for the accelerating pace of technological inventions, a major cause is the role played by mental creativity in an increasing atmosphere of freedom. Political freedom and liberation from religious dogma had a powerful impact on creative thinking during the Age of Enlightenment. Dogmas and superstitions greatly restricted mental creativity. For example, when the astronomer Copernicus proposed a heliocentric view of the world, the church rejected it because it did not conform to established religious doctrine. When Galileo used a telescope to view the planets, the church condemned the device as an instrument of the devil, as it seemed so unusual. The Enlightenment shattered such obscurantist fetters on freedom of thought. From then on, the spirit of experimentation thrived.
Though technological inventions have increased the pace of development, the tendency to view developmental accomplishments as mainly powered by technology misses the bigger picture. Technological innovation was spurred by general advances in the social organization of knowledge. In the Middle Ages, efforts at scientific progress were few, mainly because there was no effective system to preserve and disseminate knowledge. Since there was no organized protection for patent rights, scientists and inventors were secretive about observations and discoveries. Establishment of scientific associations and scientific journals spurred the exchange of knowledge and created a written record for posterity.
Technological development depends on social organizations. Nobel laureate economist Arthur Lewis observed that the mechanization of factory production in England—the Industrial Revolution—was a direct result of the reorganization of English agriculture. Enclosure of common lands in England generated surplus income for farmers. That extra income generated additional raw materials for industrial processing, and produced greater demand for industrial products that traditional manufacturing processes could not meet.
The opening of sea trade further boosted demand for industrial production for export. Factory production increased many times when production was reorganized to use steam energy, combined with moving assembly lines, specialization, and division of labor. Thus, technological development was both a result of and a contributing factor to the overall development of society.
Individual scientific inventions do not spring out of the blue. They build on past accomplishments in an incremental manner, and give a conscious form to the unconscious knowledge that society gathers over time. As pioneers are more conscious than the surrounding community, their inventions normally meet with initial resistance, which recedes over time as their inventions gain wider acceptance. If opposition is stronger than the pioneer, then the introduction of an invention gets delayed.
In medieval times, when guilds tightly controlled their members, medical progress was slow mainly because physicians were secretive about their remedies. When Denis Papin demonstrated his steam engine, German naval authorities refused to accept it, fearing it would lead to increased unemployment. John Kay, who developed a flying shuttle textile loom, was physically threatened by English weavers who feared the loss of their jobs. He fled to France where his invention was more favorably received.
The widespread use of computers and the application of biotechnology raise similar resistance among the public today. Whether the public receives an invention readily or resists it depends on its awareness of and willingness to entertain rapid change. Regardless of the response, technological inventions occur as part of overall social development, not as an isolated field of activity.
Limits to development
The concept of inherent limits to development arose mainly because past development was determined largely by availability of physical resources. Humanity relied more on muscle-power than thought-power to accomplish work. That is no longer the case. Today, mental resources are the primary determinant of development. Where people drove a simple bullock cart, they now design ships and aircraft that carry huge loads across immense distances. Humanity has tamed rivers, cleared jungles and even turned arid desert lands into cultivable lands through irrigation.
By using intelligence, society has turned sand into powerful silicon chips that carry huge amounts of information and form the basis of computers. Since there is no inherent limit to the expansion of society's mental resources, the notion of limits to growth cannot be ultimately binding.
Three stages of development
Society's developmental journey is marked by three stages: physical, vital, and mental. These are not clear-cut stages, but overlap: all three are present in any society at any time, with one of them predominant while the other two play subordinate roles. The term 'vital' denotes the emotional and nervous energies that empower society's drive towards accomplishment and express themselves most directly in the interactions between human beings. Before the full development of mind, it is these vital energies that predominate in the human personality, gradually yielding ground as the mental element becomes stronger. The speed and circumstances of social transition from one stage to another vary.
Physical stage
The physical stage is characterized by the domination of the physical element of the human personality. It is marked by minimal technological advancement and dependence on manual labour in agriculture, and during this phase society is preoccupied with bare survival and subsistence. People follow tradition strictly and there is little innovation or change; societal structures are rigid, with little room for social mobility or advancement beyond one's inherited status or position. Land is the main asset and productive resource, and wealth is measured by the size of land holdings. This is the agrarian and feudal phase of society, in which inherited wealth and position rule the roost. Economic transactions revolve primarily around barter and the exchange of goods rather than money, and commerce and money play a relatively minor role. Feudal lords and military chiefs function as the leaders of the society. As innovative thinking and experimental approaches are discouraged, people show little inclination to think outside established guidelines. Occupational skills are passed down from parent to child by a long process of apprenticeship. Despite its limitations, the physical stage lays the foundation for subsequent phases of development, serving as the starting point for societal evolution and progress.
Guilds restrict the dissemination of trade secrets and technical knowledge. The Church controls the spread of new knowledge and tries to smother new ideas that does not agree with established dogmas. The physical stage comes to an end when the reorganization of agriculture gives scope for commerce and industry to expand. This happened in Europe during the 18th century when political revolutions abolished feudalism and the Industrial Revolution gave a boost to factory production. The shift to the vital and mental stages helps to break the bonds of tradition and inject new dynamism in social life.
Vital stage
The vital stage of society is infused with dynamism and change. The vital activities of society expand markedly. Society becomes curious, innovative and adventurous. During the vital stage emphasis shifts from interactions with the physical environment to social interactions between people. Trade supplants agriculture as the principal source of wealth.
The dawning of this phase in Europe led to exploratory voyages across the seas leading to the discovery of new lands and an expansion of sea trade. Equally important, society at this time began to more effectively harness the power of money. Commerce took over from agriculture, and money replaced land as the most productive resource. The center of life shifted from the countryside to the towns where opportunities for trade and business were in greater abundance.
The center of power shifted from the aristocracy to the business class, which employed the growing power of money to gain political influence. During the vital stage, the rule of law becomes more formal and binding, providing a secure and safe environment for business to flourish. Banks, shipping companies and joint-stock companies increase in number to make use of the opportunities. Fresh innovative thinking leads to new ways of life that people accept as they prove beneficial. Science and experimental approaches begin to make headway as the hold of tradition and dogma weakens. Demand for education rises.
As the vital stage matures through the expansion of the commercial and industrial complex, surplus income arises, which prompts people to spend more on items so far considered out of reach. People begin to aspire for luxury and leisure that was not possible when life was at a subsistence level.
Mental stage
This stage has three essential characteristics: practical, social, and political application of mind. The practical application of mind generates many inventions. The social application of mind leads to new and more effective types of social organization. The political application leads to changes in the political systems that empower the populace to exercise political and human rights in a free and democratic manner. These changes began in the Renaissance and Enlightenment, and gained momentum in the Reformation, which proclaimed the right of individuals to relate directly to God without the mediation of priests. The political application of mind led to the American and French Revolutions, which produced writing that first recognized the rights of the common man and gradually led to the actual enjoyment of these rights.
Organization is a mental invention. Therefore, it is not surprising that the mental stage of development is responsible for the formulation of a great number of organizational innovations. Huge business corporations have emerged that make more money than even the total earnings of some small countries. Global networks for transportation and communication now connect the nations of the world within a common unified social fabric for sea and air travel, telecommunications, weather reporting and information exchange.
In addition to spurring technological and organizational innovation, the mental phase is also marked by the increasing power of ideas to change social life. Ethical ideals have been with humanity since the dawn of civilization. But their practical application in daily social life had to wait for the mental stage of development to emerge. The proclamation of human rights and the recognition of the value of the individual have become effective only after the development of mind and spread of education. The 20th century truly emerged as the century of the common man. Political, social, economic and many other rights were extended to more and more sections of humanity with each succeeding decade.
The relative duration of these three stages and the speed of transition from one to another vary from one society to another. However, broadly speaking, the essential features of the physical, vital and mental stages of development are strikingly similar and therefore quite recognizable even in societies separated by great distance and having little direct contact with one another.
Moreover, societies also learn from those who have gone through these transitions before and, therefore, may be able to make the transitions faster and better. When the Netherlands introduced primary education in 1618, it was a pioneering initiative. When Japan did the same thing late in the 19th century, it had the advantage of the experience of the US and other countries. When many Asian countries initiated primary education in the 1950s after winning independence, they could draw on the vast experience of more developed nations. This is a major reason for the quickening pace of progress.
Natural vs. planned development
Natural development is distinct from development by government initiatives and planning. It is the spontaneous, unplanned evolution of social norms and structures, shaped by historical legacies, economic circumstances, cultural standards and other inherent complexities. Planned development, by contrast, is the result of deliberate, conscious initiatives by the government to speed development through special programs and policies. Natural development is an unconscious process, since it results from the behavior of countless individuals acting on their own rather than from the conscious intention of the community. It is also unconscious in the sense that society achieves the results without being fully conscious of how it did so.
The natural development of democracy in Europe over the past few centuries can be contrasted with the conscious effort to introduce democratic forms of government in former colonial nations after World War II. Planned development is also largely unconscious: the goals may be conscious, but the most effective means for achieving them may remain poorly understood. Planned development can become fully conscious only when the process of development itself is fully understood; its achievement relies on a comprehensive understanding of fundamental social dynamics and a sophisticated implementation strategy. While in planned development the government is the initiator, in natural development it is private individuals, groups and community organisations that take the initiative. Whoever initiates, the principles and policies are the same, and success is assured only when the right conditions and principles are followed.
Summary
Social development theory offers a comprehensive framework for understanding the qualitative changes in society over time. It highlights the role of increasing awareness and better organization in driving progress. Through stages of physical, vital, and mental development, societies evolve, embracing innovation and adapting to change.
See also
Idea of Progress
Social change
World systems theory
References
Jacobs, Garry, et al. Kamadhenu: The Prosperity Movement. Southern Publications, India, 1988.
Asokan, N. History of USA. The Mother's Service Society, 2006.
Sociological theories
Economic development
Human development
International development
Technology development | Social development theory | [
"Biology"
] | 5,572 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
9,667,106 | https://en.wikipedia.org/wiki/Minimal%20polynomial%20%28field%20theory%29 | In field theory, a branch of mathematics, the minimal polynomial of an element of an extension field of a field is, roughly speaking, the polynomial of lowest degree having coefficients in the smaller field, such that is a root of the polynomial. If the minimal polynomial of exists, it is unique. The coefficient of the highest-degree term in the polynomial is required to be 1.
More formally, a minimal polynomial is defined relative to a field extension E/F and an element α of the extension field E. The minimal polynomial of an element, if it exists, is a member of F[x], the ring of polynomials in the variable x with coefficients in F. Given an element α of E, let Jα be the set of all polynomials f(x) in F[x] such that f(α) = 0. The element α is called a root or zero of each polynomial in Jα.
More specifically, Jα is the kernel of the ring homomorphism from F[x] to E which sends polynomials g to their value g(α) at the element α. Because it is the kernel of a ring homomorphism, Jα is an ideal of the polynomial ring F[x]: it is closed under polynomial addition and subtraction (hence containing the zero polynomial), as well as under multiplication by elements of F (which is scalar multiplication if F[x] is regarded as a vector space over F).
The zero polynomial, all of whose coefficients are 0, is in every Jα since 0(α) = 0 for all α and E. This makes the zero polynomial useless for classifying different values of α into types, so it is excepted. If there are any non-zero polynomials in Jα, i.e. if the latter is not the zero ideal, then α is called an algebraic element over F, and there exists a monic polynomial of least degree in Jα. This is the minimal polynomial of α with respect to E/F. It is unique and irreducible over F. If the zero polynomial is the only member of Jα, then α is called a transcendental element over F and has no minimal polynomial with respect to E/F.
Minimal polynomials are useful for constructing and analyzing field extensions. When α is algebraic with minimal polynomial f(x), the smallest field that contains both F and α is isomorphic to the quotient ring F[x]/⟨f(x)⟩, where ⟨f(x)⟩ is the ideal of F[x] generated by f(x). Minimal polynomials are also used to define conjugate elements.
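As a concrete illustration of this quotient construction, the following C++ sketch (the struct name and the use of double coefficients are choices made for this example, not part of any standard library) represents elements of Q[x]/(x² − 2) as pairs a + bx and reduces x² to 2 during multiplication, so that the class of x behaves exactly like √2:

#include <iostream>

// Elements of Q[x]/(x^2 - 2), i.e. Q(sqrt(2)), stored as a + b*x; the
// rational coefficients are approximated by double here for brevity.
struct QSqrt2 {
    double a, b; // represents a + b*sqrt(2)
};

// Multiplication reduces x^2 to 2, exactly as in the quotient ring:
// (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2 -> (ac + 2bd) + (ad + bc) x
QSqrt2 mul(QSqrt2 p, QSqrt2 q) {
    return { p.a * q.a + 2.0 * p.b * q.b, p.a * q.b + p.b * q.a };
}

int main() {
    QSqrt2 alpha = {0.0, 1.0};     // the class of x, playing the role of sqrt(2)
    QSqrt2 sq = mul(alpha, alpha); // should equal 2 + 0*x
    std::cout << sq.a << " + " << sq.b << "*x\n"; // prints "2 + 0*x"
}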
Definition
Let E/F be a field extension, α an element of E, and F[x] the ring of polynomials in x over F. The element α has a minimal polynomial when α is algebraic over F, that is, when f(α) = 0 for some non-zero polynomial f(x) in F[x]. Then the minimal polynomial of α is defined as the monic polynomial of least degree among all polynomials in F[x] having α as a root.
Properties
Throughout this section, let E/F be a field extension as above, let α ∈ E be an algebraic element over F and let Jα be the ideal of polynomials vanishing on α.
Uniqueness
The minimal polynomial f of α is unique.
To prove this, suppose that f and g are monic polynomials in Jα of minimal degree n > 0. We have that r := f−g ∈ Jα (because the latter is closed under addition/subtraction) and that m := deg(r) < n (because the polynomials are monic of the same degree). If r is not zero, then r / cm (writing cm ∈ F for the non-zero coefficient of highest degree in r) is a monic polynomial of degree m < n such that r / cm ∈ Jα (because the latter is closed under multiplication/division by non-zero elements of F), which contradicts our original assumption of minimality for n. We conclude that 0 = r = f − g, i.e. that f = g.
Irreducibility
The minimal polynomial f of α is irreducible, i.e. it cannot be factorized as f = gh for two polynomials g and h of strictly lower degree.
To prove this, first observe that any factorization f = gh implies that either g(α) = 0 or h(α) = 0, because f(α) = 0 and F is a field (hence also an integral domain). Choosing both g and h to be of degree strictly lower than f would then contradict the minimality requirement on f, so f must be irreducible.
Minimal polynomial generates Jα
The minimal polynomial f of α generates the ideal Jα, i.e. every g in Jα can be factorized as g = fh for some h in F[x].
To prove this, it suffices to observe that F[x] is a principal ideal domain, because F is a field: this means that every ideal I in F[x], Jα amongst them, is generated by a single element f. With the exception of the zero ideal I = {0}, the generator f must be non-zero and it must be the unique polynomial of minimal degree, up to a factor in F (because the degree of fg is strictly larger than that of f whenever g is of degree greater than zero). In particular, there is a unique monic generator f, and all generators must be irreducible. When I is chosen to be Jα, for α algebraic over F, then the monic generator f is the minimal polynomial of α.
Examples
Minimal polynomial of a Galois field extension
Given a Galois field extension L/K, the minimal polynomial of any α ∈ L not in K can be computed as

f(x) = ∏_{σ ∈ Gal(L/K)} (x − σ(α))

if α has no stabilizers in the Galois action. Since it is irreducible, which can be deduced by looking at the roots of f, it is the minimal polynomial. Note that the same kind of formula can be found by replacing Gal(L/K) with Gal(L/K)/N, where N is the stabilizer group of α. For example, if α ∈ K then its stabilizer is Gal(L/K), hence (x − α) is its minimal polynomial.
Quadratic field extensions
Q(√2)
If F = Q, E = R, α = √2, then the minimal polynomial for α is a(x) = x² − 2. The base field F is important as it determines the possibilities for the coefficients of a(x). For instance, if we take F = R, then the minimal polynomial for α = √2 is a(x) = x − √2.
Q(√d)
In general, for the quadratic extension given by a square-free d, the minimal polynomial of an element a + b√d can be found using Galois theory. Then

f(x) = (x − (a + b√d))(x − (a − b√d)) = x² − 2ax + (a² − b²d);

in particular, this implies 2a ∈ Z and a² − b²d ∈ Z. This can be used to determine the ring of integers of Q(√d) through a series of relations using modular arithmetic.
Biquadratic field extensions
If α = √2 + √3, then the minimal polynomial in Q[x] is a(x) = x⁴ − 10x² + 1 = (x − √2 − √3)(x + √2 − √3)(x − √2 + √3)(x + √2 + √3).
Notice that if α = √2 then the Galois action on √3 stabilizes α. Hence the minimal polynomial can be found using the quotient group Gal(Q(√2, √3)/Q)/N, where N is the stabilizer group of α.
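The biquadratic example above can be checked numerically; the short C++ sketch below (illustrative only, relying on floating-point arithmetic rather than exact rationals) evaluates a(x) = x⁴ − 10x² + 1 at √2 + √3 and prints a value that vanishes up to rounding error:

#include <cmath>
#include <cstdio>

int main() {
    double x = std::sqrt(2.0) + std::sqrt(3.0);
    // Evaluate the candidate minimal polynomial a(x) = x^4 - 10 x^2 + 1.
    double value = std::pow(x, 4) - 10.0 * x * x + 1.0;
    std::printf("a(sqrt(2)+sqrt(3)) = %.3e\n", value); // ~0 up to rounding error
}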
Roots of unity
The minimal polynomials in Q[x] of roots of unity are the cyclotomic polynomials. The roots of the minimal polynomial of 2cos(2π/n) are twice the real part of the primitive nth roots of unity.
Swinnerton-Dyer polynomials
The minimal polynomial in Q[x] of the sum of the square roots of the first n prime numbers is constructed analogously, and is called a Swinnerton-Dyer polynomial.
See also
Ring of integers
Algebraic number field
References
Pinter, Charles C. A Book of Abstract Algebra. Dover Books on Mathematics Series. Dover Publications, 2010, pp. 270–273.
Polynomials
Field (mathematics) | Minimal polynomial (field theory) | [
"Mathematics"
] | 1,558 | [
"Polynomials",
"Algebra"
] |
9,667,107 | https://en.wikipedia.org/wiki/Minimal%20polynomial%20%28linear%20algebra%29 | In linear algebra, the minimal polynomial of an matrix over a field is the monic polynomial over of least degree such that . Any other polynomial with is a (polynomial) multiple of .
The following three statements are equivalent:
λ is a root of μ_A,
λ is a root of the characteristic polynomial χ_A of A,
λ is an eigenvalue of matrix A.
The multiplicity of a root λ of μ_A is the largest power m such that ker((A − λI_n)^m) strictly contains ker((A − λI_n)^{m−1}). In other words, increasing the exponent up to m will give ever larger kernels, but further increasing the exponent beyond m will just give the same kernel.
If the field F is not algebraically closed, then the minimal and characteristic polynomials need not factor according to their roots (in F) alone, in other words they may have irreducible polynomial factors of degree greater than 1. For irreducible polynomials P one has similar equivalences:
P divides μ_A,
P divides χ_A,
the kernel of P(A) has dimension at least 1,
the kernel of P(A) has dimension at least deg(P).
Like the characteristic polynomial, the minimal polynomial does not depend on the base field. In other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason for this differs from the case with the characteristic polynomial (where it is immediate from the definition of determinants), namely by the fact that the minimal polynomial is determined by the relations of linear dependence between the powers of : extending the base field will not introduce any new such relations (nor of course will it remove existing ones).
The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if A is a multiple aI_n of the identity matrix, then its minimal polynomial is x − a since the kernel of aI_n − A = 0 is already the entire space; on the other hand its characteristic polynomial is (x − a)^n (the only eigenvalue is a, and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem (for the case of matrices over a field).
Formal definition
Given an endomorphism T on a finite-dimensional vector space V over a field F, let I_T be the set defined as

I_T = { p ∈ F[x] : p(T) = 0 },

where F[x] is the space of all polynomials over the field F. I_T is a proper ideal of F[x]. Since F is a field, F[x] is a principal ideal domain, thus any ideal is generated by a single polynomial, which is unique up to a unit in F. A particular choice among the generators can be made, since precisely one of the generators is monic. The minimal polynomial is thus defined to be the monic polynomial that generates I_T. It is the monic polynomial of least degree in I_T.
Applications
An endomorphism T of a finite-dimensional vector space over a field F is diagonalizable if and only if its minimal polynomial factors completely over F into distinct linear factors. The fact that there is only one factor x − λ for every eigenvalue λ means that the generalized eigenspace for λ is the same as the eigenspace for λ: every Jordan block has size 1. More generally, if T satisfies a polynomial equation P(T) = 0 where P factors into distinct linear factors over F, then it will be diagonalizable: its minimal polynomial is a divisor of P and therefore also factors into distinct linear factors. In particular one has:
T^k = 1: finite order endomorphisms of complex vector spaces are diagonalizable. For the special case k = 2 of involutions, this is even true for endomorphisms of vector spaces over any field of characteristic other than 2, since x² − 1 = (x − 1)(x + 1) is a factorization into distinct factors over such a field. This is a part of representation theory of cyclic groups.
T² = T: endomorphisms satisfying T² = T are called projections, and are always diagonalizable (moreover their only eigenvalues are 0 and 1).
By contrast if T^k = 0 with k ≥ 2 then T (a nilpotent endomorphism) is not necessarily diagonalizable, since x^k has a repeated root 0.
These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof.
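As one direct check of these cases, the following C++ sketch (the 2 × 2 matrices are chosen for this example) verifies P² = P for a projection, whose minimal polynomial divides x² − x = x(x − 1) and so has distinct roots, and N² = 0 for a nonzero nilpotent N, whose minimal polynomial x² has the repeated root 0:

#include <cstdio>

int main() {
    // P is a projection: P^2 = P, so its minimal polynomial divides
    // x^2 - x = x(x - 1), a product of distinct linear factors, and P
    // is diagonalizable.
    double P[2][2] = {{1, 1}, {0, 0}};
    // N is nilpotent but nonzero: N^2 = 0, so its minimal polynomial is
    // x^2 with the repeated root 0, and N is not diagonalizable.
    double N[2][2] = {{0, 1}, {0, 0}};

    double P2[2][2] = {}, N2[2][2] = {};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k) {
                P2[i][j] += P[i][k] * P[k][j];
                N2[i][j] += N[i][k] * N[k][j];
            }
    std::printf("P^2 == P entrywise: %d %d %d %d\n",
                P2[0][0] == P[0][0], P2[0][1] == P[0][1],
                P2[1][0] == P[1][0], P2[1][1] == P[1][1]); // prints 1 1 1 1
    std::printf("N^2 entries: %g %g %g %g\n",
                N2[0][0], N2[0][1], N2[1][0], N2[1][1]);   // prints 0 0 0 0
}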
Computation
For a nonzero vector v in V define:

I_{T,v} = { p ∈ F[x] : p(T)(v) = 0 }.

This definition satisfies the properties of a proper ideal. Let μ_{T,v} be the monic polynomial which generates it.
Properties
Example
Define T to be the endomorphism of R³ with matrix, on the canonical basis,

( 1 −1 −1 )
( 1 −2  1 )
( 0  1 −3 ).

Taking the first canonical basis vector e₁ and its repeated images by T one obtains

e₁ = (1, 0, 0), T·e₁ = (1, 1, 0), T²·e₁ = (0, −1, 1), T³·e₁ = (0, 3, −4),

of which the first three are easily seen to be linearly independent, and therefore span all of R³. The last one then necessarily is a linear combination of the first three, in fact

T³·e₁ = −4T²·e₁ − T·e₁ + e₁,

so that:

μ_{T,e₁}(x) = x³ + 4x² + x − 1.

This is in fact also the minimal polynomial μ_T and the characteristic polynomial χ_T: indeed μ_{T,e₁} divides μ_T which divides χ_T, and since the first and last are of degree 3 and all are monic, they must all be the same. Another reason is that in general if any polynomial in T annihilates a vector v, then it also annihilates T·v (just apply T to the equation that says that it annihilates v), and therefore by iteration it annihilates the entire space generated by the iterated images by T of v; in the current case we have seen that for v = e₁ that space is all of R³, so μ_{T,e₁} = μ_T. Indeed one verifies for the full matrix that T³ + 4T² + T − I₃ is the zero matrix.
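The Krylov computation above can be reproduced mechanically; this C++ sketch (plain arrays, no linear-algebra library assumed) builds e₁, T·e₁, T²·e₁, T³·e₁ and verifies componentwise that x³ + 4x² + x − 1 annihilates e₁:

#include <array>
#include <cstdio>

using Mat = std::array<std::array<double, 3>, 3>;
using Vec = std::array<double, 3>;

// Matrix-vector product r = m * v.
Vec apply(const Mat& m, const Vec& v) {
    Vec r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

int main() {
    // The matrix of T on the canonical basis, as in the example above.
    Mat T = {{{1, -1, -1}, {1, -2, 1}, {0, 1, -3}}};
    Vec e1 = {1, 0, 0};

    // Krylov sequence T e1, T^2 e1, T^3 e1.
    Vec v1 = apply(T, e1), v2 = apply(T, v1), v3 = apply(T, v2);

    // Check the relation (T^3 + 4 T^2 + T - I) e1 = 0 componentwise.
    for (int i = 0; i < 3; ++i) {
        double residual = v3[i] + 4.0 * v2[i] + v1[i] - e1[i];
        std::printf("component %d residual: %g\n", i, residual); // all zero
    }
}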
See also
Annihilating polynomial
References
Matrix theory
Polynomials | Minimal polynomial (linear algebra) | [
"Mathematics"
] | 1,052 | [
"Polynomials",
"Algebra"
] |
9,667,364 | https://en.wikipedia.org/wiki/Energy%20drift | In computer simulations of mechanical systems, energy drift is the gradual change in the total energy of a closed system over time. According to the laws of mechanics, the energy should be a constant of motion and should not change. However, in simulations the energy might fluctuate on a short time scale and increase or decrease on a very long time scale due to numerical integration artifacts that arise with the use of a finite time step Δt. This is somewhat similar to the flying ice cube problem, whereby numerical errors in handling equipartition of energy can change vibrational energy into translational energy.
More specifically, the energy tends to increase exponentially; its increase can be understood intuitively because each step introduces a small perturbation δv to the true velocity vtrue, which (if uncorrelated with v, which will be true for simple integration methods) results in a second-order increase in the energy
E = Σ ½m(v + δv)² = Σ ½mv² + Σ mv·δv + Σ ½m(δv)² ≈ E_true + Σ ½m(δv)².

(The cross term in v · δv is zero because of no correlation.)
Energy drift - usually damping - is substantial for numerical integration schemes that are not symplectic, such as the Runge-Kutta family. Symplectic integrators usually used in molecular dynamics, such as the Verlet integrator family, exhibit increases in energy over very long time scales, though the error remains roughly constant. These integrators do not in fact reproduce the actual Hamiltonian mechanics of the system; instead, they reproduce a closely related "shadow" Hamiltonian whose value they conserve many orders of magnitude more closely. The accuracy of the energy conservation for the true Hamiltonian is dependent on the time step. The energy computed from the modified Hamiltonian of a symplectic integrator is O(Δt^p) from the true Hamiltonian, where p is the order of the integrator.
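The contrast can be demonstrated with a minimal C++ experiment on a toy system (a harmonic oscillator with unit mass and stiffness, a choice made for this example): the non-symplectic explicit Euler step multiplies the energy by exactly (1 + Δt²) at every step, while the symplectic velocity Verlet step keeps the energy bounded near its initial value of 0.5.

#include <cmath>
#include <cstdio>

// Harmonic oscillator, m = k = 1, so the exact energy E = (v^2 + x^2)/2
// is a constant of motion.
double energy(double x, double v) { return 0.5 * (v * v + x * x); }

int main() {
    const double dt = 0.05;
    const int steps = 200000;

    double xe = 1.0, ve = 0.0; // explicit Euler state
    double xv = 1.0, vv = 0.0; // velocity Verlet state

    for (int i = 0; i < steps; ++i) {
        // Explicit Euler (non-symplectic): energy grows exponentially.
        double ax = -xe;
        xe += dt * ve;
        ve += dt * ax;

        // Velocity Verlet (symplectic): energy oscillates but does not drift.
        double a0 = -xv;
        xv += dt * vv + 0.5 * dt * dt * a0;
        double a1 = -xv;
        vv += 0.5 * dt * (a0 + a1);
    }
    std::printf("Euler energy:  %g\n", energy(xe, ve)); // far above 0.5
    std::printf("Verlet energy: %g\n", energy(xv, vv)); // close to 0.5
}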
Energy drift is similar to parametric resonance in that a finite, discrete timestepping scheme will result in nonphysical, limited sampling of motions with frequencies close to the frequency of velocity updates. Thus the restriction on the maximum step size that will be stable for a given system is proportional to the period of the fastest fundamental modes of the system's motion. For a motion with a natural frequency ω, artificial resonances are introduced when the frequency of velocity updates, 2π/Δt, is related to ω as

2π/Δt = (n/m) ω

where n and m are integers describing the resonance order. For Verlet integration, resonances up to the fourth order frequently lead to numerical instability, leading to a restriction on the timestep size of

Δt < √2/ω ≈ 0.225 p
where ω is the frequency of the fastest motion in the system and p is its period. The fastest motions in most biomolecular systems involve the motions of hydrogen atoms; it is thus common to use constraint algorithms to restrict hydrogen motion and thus increase the maximum stable time step that can be used in the simulation. However, because the time scales of heavy-atom motions are not widely divergent from those of hydrogen motions, in practice this allows only about a twofold increase in time step. Common practice in all-atom biomolecular simulation is to use a time step of 1 femtosecond (fs) for unconstrained simulations and 2 fs for constrained simulations, although larger time steps may be possible for certain systems or choices of parameters.
Energy drift can also result from imperfections in evaluating the energy function, usually due to simulation parameters that sacrifice accuracy for computational speed. For example, cutoff schemes for evaluating the electrostatic forces introduce systematic errors in the energy with each time step as particles move back and forth across the cutoff radius if sufficient smoothing is not used. Particle mesh Ewald summation is one solution for this effect, but introduces artifacts of its own. Errors in the system being simulated can also induce energy drifts characterized as "explosive" that are not artifacts, but are reflective of the instability of the initial conditions; this may occur when the system has not been subjected to sufficient structural minimization before beginning production dynamics. In practice, energy drift may be measured as a percent increase over time, or as a time needed to add a given amount of energy to the system.
The practical effects of energy drift depend on the simulation conditions, the thermodynamic ensemble being simulated, and the intended use of the simulation under study; for example, energy drift has much more severe consequences for simulations of the microcanonical ensemble than the canonical ensemble where the temperature is held constant. However, it has been shown that long microcanonical ensemble simulations can be performed with insignificant energy drift, including those of flexible molecules which incorporate constraints and Ewald summations. Energy drift is often used as a measure of the quality of the simulation, and has been proposed as one quality metric to be routinely reported in a mass repository of molecular dynamics trajectory data analogous to the Protein Data Bank.
References
Further reading
Sanz-Serna JM, Calvo MP. (1994). Numerical Hamiltonian Problems. Chapman & Hall, London, England.
Molecular dynamics
Numerical differential equations
Numerical artifacts | Energy drift | [
"Physics",
"Chemistry"
] | 1,003 | [
"Molecular dynamics",
"Computational chemistry",
"Molecular physics",
"Computational physics"
] |
9,667,552 | https://en.wikipedia.org/wiki/Periodic%20boundary%20conditions | Periodic boundary conditions (PBCs) are a set of boundary conditions which are often chosen for approximating a large (infinite) system by using a small part called a unit cell. PBCs are often used in computer simulations and mathematical models. The topology of two-dimensional PBC is equal to that of a world map of some video games; the geometry of the unit cell satisfies perfect two-dimensional tiling, and when an object passes through one side of the unit cell, it re-appears on the opposite side with the same velocity. In topological terms, the space made by two-dimensional PBCs can be thought of as being mapped onto a torus (compactification). The large systems approximated by PBCs consist of an infinite number of unit cells. In computer simulations, one of these is the original simulation box, and others are copies called images. During the simulation, only the properties of the original simulation box need to be recorded and propagated. The minimum-image convention is a common form of PBC particle bookkeeping in which each individual particle in the simulation interacts with the closest image of the remaining particles in the system.
One example of periodic boundary conditions can be defined according to smooth real functions φ: ℝⁿ → ℝ by requiring

∂^m φ/∂x_i^m evaluated at x_i = a_i to equal ∂^m φ/∂x_i^m evaluated at x_i = b_i,

for all i = 1, ..., n, for all m = 0, 1, 2, ..., and for constants a_i and b_i.
In molecular dynamics simulations and Monte Carlo molecular modeling, PBCs are usually applied to calculate properties of bulk gases, liquids, crystals or mixtures. A common application uses PBC to simulate solvated macromolecules in a bath of explicit solvent. Born–von Karman boundary conditions are periodic boundary conditions for a special system.
In electromagnetics, PBC can be applied for different mesh types to analyze the electromagnetic properties of periodical structures.
Requirements and artifacts
Three-dimensional PBCs are useful for approximating the behavior of macro-scale systems of gases, liquids, and solids. Three-dimensional PBCs can also be used to simulate planar surfaces, in which case two-dimensional PBCs are often more suitable. Two-dimensional PBCs for planar surfaces are also called slab boundary conditions; in this case, PBCs are used for two Cartesian coordinates (e.g., x and y), and the third coordinate (z) extends to infinity.
PBCs can be used in conjunction with Ewald summation methods (e.g., the particle mesh Ewald method) to calculate electrostatic forces in the system. However, PBCs also introduce correlational artifacts that do not respect the translational invariance of the system, and require constraints on the composition and size of the simulation box.
In simulations of solid systems, the strain field arising from any inhomogeneity in the system will be artificially truncated and modified by the periodic boundary. Similarly, the wavelength of sound or shock waves and phonons in the system is limited by the box size.
In simulations containing ionic (Coulomb) interactions, the net electrostatic charge of the system must be zero to avoid summing to an infinite charge when PBCs are applied. In some applications it is appropriate to obtain neutrality by adding ions such as sodium or chloride (as counterions) in appropriate numbers if the molecules of interest are charged. Sometimes ions are even added to a system in which the molecules of interest are neutral, to approximate the ionic strength of the solution in which the molecules naturally appear. Maintenance of the minimum-image convention also generally requires that a spherical cutoff radius for nonbonded forces be at most half the length of one side of a cubic box. Even in electrostatically neutral systems, a net dipole moment of the unit cell can introduce a spurious bulk-surface energy, equivalent to pyroelectricity in polar crystals. Another consequence of applying PBCs to a simulated system such as a liquid or a solid is that this hypothetical system has no contact with its “surroundings”, due to it being infinite in all directions. Therefore, long-range energy contributions such as the electrostatic potential, and by extension the energies of charged particles like electrons, are not automatically aligned to experimental energy scales. Mathematically, this energy level ambiguity corresponds to the sum of the electrostatic energy being dependent on a surface term that needs to be set by the user of the method.
The size of the simulation box must also be large enough to prevent periodic artifacts from occurring due to the unphysical topology of the simulation. In a box that is too small, a macromolecule may interact with its own image in a neighboring box, which is functionally equivalent to a molecule's "head" interacting with its own "tail". This produces highly unphysical dynamics in most macromolecules, although the magnitude of the consequences and thus the appropriate box size relative to the size of the macromolecules depends on the intended length of the simulation, the desired accuracy, and the anticipated dynamics. For example, simulations of protein folding that begin from the native state may undergo smaller fluctuations, and therefore may not require as large a box, as simulations that begin from a random coil conformation. However, the effects of solvation shells on the observed dynamics – in simulation or in experiment – are not well understood. A common recommendation based on simulations of DNA is to require at least 1 nm of solvent around the molecules of interest in every dimension.
Practical implementation: continuity and the minimum image convention
An object which has passed through one face of the simulation box should re-enter through the opposite face—or its image should do it. Evidently, a strategic decision must be made: Do we (A) “fold back” particles into the simulation box when they leave it, or do we (B) let them go on (but compute interactions with the nearest images)? The decision has no effect on the course of the simulation, but if the user is interested in mean displacements, diffusion lengths, etc., the second option is preferable.
(A) Restrict particle coordinates to the simulation box
To implement a PBC algorithm, at least two steps are needed.
Restricting the coordinates is a simple operation which can be described with the following code, where x_size is the length of the box in one direction (assuming an orthogonal unit cell centered on the origin) and x is the position of the particle in the same direction:
if (periodic_x) then
if (x < -x_size * 0.5) x = x + x_size
if (x >= x_size * 0.5) x = x - x_size
end if
Distance and vector between objects should obey the minimum image criterion.
This can be implemented according to the following code (in the case of a one-dimensional system where dx is the distance direction vector from object i to object j):
if (periodic_x) then
dx = x(j) - x(i)
if (dx > x_size * 0.5) dx = dx - x_size
if (dx <= -x_size * 0.5) dx = dx + x_size
end if
For three-dimensional PBCs, both operations should be repeated in all 3 dimensions.
These operations can be written in a much more compact form for orthorhombic cells if the origin is shifted to a corner of the box. Then we have, in one dimension, for positions and distances respectively:
! After x(i) update without regard to PBC:
x(i) = x(i) - floor(x(i) / x_size) * x_size ! For a box with the origin at the lower left vertex
! Works for x's lying in any image.
dx = x(j) - x(i)
dx = dx - nint(dx / x_size) * x_size
(B) Do not restrict the particle coordinates
Assuming an orthorhombic simulation box with the origin at the lower left forward corner, the minimum image convention for the calculation of effective particle distances can be calculated with the “nearest integer” function as shown above, here as C/C++ code:
x_rsize = 1.0 / x_size; // compute only when box size is set or changed
dx = x[j] - x[i];
dx -= x_size * nearbyint(dx * x_rsize);
The fastest way of carrying out this operation depends on the processor architecture. If the sign of dx is not relevant, the method
dx = fabs(dx);
dx -= static_cast<int>(dx * x_rsize + 0.5) * x_size;
was found to be fastest on x86-64 processors in 2013.
For non-orthorhombic cells the situation is more complicated.
In simulations of ionic systems more complicated operations
may be needed to handle the long-range Coulomb interactions spanning several box images, for instance Ewald summation.
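Both strategies generalize to three dimensions simply by applying the one-dimensional rules above to each Cartesian component; a minimal C++ sketch for an orthorhombic box with the origin at a corner (the function names are this example's own):

#include <cmath>

// Strategy (A): fold a position back into [0, box[k]) in each direction.
void wrap_position(double r[3], const double box[3]) {
    for (int k = 0; k < 3; ++k)
        r[k] -= box[k] * std::floor(r[k] / box[k]);
}

// Minimum-image displacement between two particles; valid for both
// strategies as long as the cutoff does not exceed half the box length.
void minimum_image(double dr[3], const double box[3]) {
    for (int k = 0; k < 3; ++k)
        dr[k] -= box[k] * std::nearbyint(dr[k] / box[k]);
}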
Unit cell geometries
PBC requires the unit cell to be a shape that will tile perfectly into a three-dimensional crystal. Thus, a spherical or elliptical droplet cannot be used. A cube or rectangular prism is the most intuitive and common choice, but can be computationally expensive due to unnecessary amounts of solvent molecules in the corners, distant from the central macromolecules. A common alternative that requires less volume is the truncated octahedron.
General dimension
For simulations in 2D and 3D space, the cubic periodic boundary condition is most commonly used since it is simplest to code. In computer simulations of high-dimensional systems, however, the hypercubic periodic boundary condition can be less efficient because corners occupy most of the space. In general dimension, the unit cell can be viewed as the Wigner-Seitz cell of a certain lattice packing. For example, the hypercubic periodic boundary condition corresponds to the hypercubic lattice packing. It is then preferred to choose a unit cell which corresponds to the dense packing of that dimension. In 4D this is the D4 lattice, and the E8 lattice in eight dimensions. The implementation of these high-dimensional periodic boundary conditions is equivalent to error-correction-code approaches in information theory.
Conserved properties
Under periodic boundary conditions, the linear momentum of the system is conserved, but angular momentum is not. The conventional explanation of this fact is based on Noether's theorem, which states that conservation of angular momentum follows from rotational invariance of the Lagrangian. However, this approach was shown not to be consistent: it fails to explain the absence of conservation of angular momentum of a single particle moving in a periodic cell. The Lagrangian of the particle is constant and therefore rotationally invariant, while the angular momentum of the particle is not conserved. This contradiction is caused by the fact that Noether's theorem is usually formulated for closed systems. The periodic cell exchanges mass, momentum, angular momentum, and energy with the neighboring cells.
When applied to the microcanonical ensemble (constant particle number, volume, and energy, abbreviated NVE), using PBC rather than reflecting walls slightly alters the sampling of the simulation due to the conservation of total linear momentum and the position of the center of mass; this ensemble has been termed the "molecular dynamics ensemble" or the NVEPG ensemble. These additional conserved quantities introduce minor artifacts related to the statistical mechanical definition of temperature, the departure of the velocity distributions from a Boltzmann distribution, and violations of equipartition for systems containing particles with heterogeneous masses. The simplest of these effects is that a system of N particles will behave, in the molecular dynamics ensemble, as a system of N-1 particles. These artifacts have quantifiable consequences for small toy systems containing only perfectly hard particles; they have not been studied in depth for standard biomolecular simulations, but given the size of such systems, the effects will be largely negligible.
See also
Helical boundary conditions
Molecular modeling
Software for molecular mechanics modeling
Notes
References
See esp. pp. 15–20.
See esp. pp. 272–276.
Molecular dynamics
Boundary conditions | Periodic boundary conditions | [
"Physics",
"Chemistry"
] | 2,503 | [
"Molecular dynamics",
"Computational chemistry",
"Molecular physics",
"Computational physics"
] |
9,668,061 | https://en.wikipedia.org/wiki/Legion%20%28taxonomy%29 | The legion, in biological classification, is a non-obligatory taxonomic rank within the Linnaean hierarchy sometimes used in zoology.
Taxonomic rank
In zoological taxonomy, the legion is:
subordinate to the class
superordinate to the cohort
consists of a group of related orders
Legions may be grouped into superlegions or subdivided into sublegions, and these again into infralegions.
Use in zoology
Legions and their super/sub/infra groups have been employed in some classifications of birds and mammals. Full use is made of all of these (along with cohorts and supercohorts) in, for example, McKenna and Bell's classification of mammals.
See also
Linnaean taxonomy
Mammal classification
References
Biology terminology
Taxa by rank
"Biology"
] | 156 | [
"Zoological nomenclature",
"Biological nomenclature",
"nan"
] |
9,668,094 | https://en.wikipedia.org/wiki/United%20Nations%20Scientific%20Committee%20on%20the%20Effects%20of%20Atomic%20Radiation | The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) was set up by resolution of the United Nations General Assembly in 1955. Twenty-one states are designated to provide scientists to serve as members of the committee which holds formal meetings (sessions) annually and submits a report to the General Assembly. The organisation has no power to set radiation standards nor to make recommendations in regard to nuclear testing. It was established solely to "define precisely the present exposure of the population of the world to ionizing radiation". A small secretariat, located in Vienna and functionally linked to the United Nations Environment Programme (UNEP), organizes the annual sessions and manages the preparation of documents for the committee's scrutiny.
Function
UNSCEAR issues major public reports on Sources and Effects of Ionizing Radiation from time to time. As of 2017, there had been 28 major publications from 1958 to 2017. The reports are all available from the UNSCEAR website. These works are very highly regarded as sources of authoritative information and are used throughout the world as a scientific basis for the evaluation of radiation risk. The publications review studies from a range of sources, including reports from UN member states and other international organisations on data from survivors of the atomic bombings of Hiroshima and Nagasaki, the Chernobyl disaster, and accidental, occupational, and medical exposure to ionizing radiation.
Administration
Originally, in 1955, India and the Soviet Union wanted to add several neutral and communist states, such as mainland China. Eventually, a compromise with the US was made and Argentina, Belgium, Egypt and Mexico were permitted to join. The organisation was charged with collecting all available data on the effects of "ionising radiation upon man and his environment". (James J. Wadsworth - American representative to the General Assembly).
The committee was originally based in the Secretariat Building in New York City but moved to the United Nations Office at Vienna in 1974.
The Secretaries of the Committee have been:
Dr. Ray K. Appleyard (UK) (1956–1961)
Dr. Francesco Sella (Italy) (1961–1974)
Dr. Dan Jacobo Beninson (Argentina) (1974–1979)
Dr. Giovanni Silini (Italy) (1980–1988)
Dr. Burton Bennett (1988 acting; 1991–2000)
Dr. Norman Gentner (2001–2004; 2005 acting)
Dr. Malcolm Crick (2005–2018)
Dr. Ferid Shannoun (2018–2019 acting)
Ms. Borislava Batandjieva-Metcalf (Bulgaria) (2019–)
Contents of UNSCEAR 2008 report
UNSCEAR has published 20 major reports. The latest is the 2010 Summary Report (14 pages), while the last full report was the 2008 Report Vol. I and Vol. II with scientific annexes (A to E).
"UNSCEAR 2008 REPORT Vol.I" main report and 2 scientific annexes
Report to the General Assembly (without scientific annexes; 24 pages)
Includes short overviews of the materials and conclusions contained in the scientific annexes
Scientific Annex
Annex A: "Medical radiation exposures" (202 pages)
Annex B: "Exposures of the public and workers from various sources of radiation" (245 pages)
Tables (downloadable) "Public.xls" (A1 to A14), "Worker.xls" (A15 to A31)
"UNSCEAR 2008 REPORT Vol.II" 3 scientific annexes
Annex C: "Radiation exposures in accidents" (49 pages)
Annex D:"Health effects due to radiation from the Chernobyl accident" (179 pages)
Annex E: "Effects of ionizing radiation on non-human biota" (97 pages)
Contents of UNSCEAR 2020/2021 report
In 2022, UNSCEAR published its latest full report, the UNSCEAR 2020/2021 Report, in Vol. I, Vol. II, Vol. III and Vol. IV with scientific annexes (A to D).
See also
European Committee on Radiation Risk
International Commission on Radiological Protection
Radiation protection
References
External links
UNSCEAR Website
UNSCEAR Publications
Radiation health effects
Nuclear organizations
United Nations General Assembly subsidiary organs
Radiation protection organizations
United Nations organizations based in Vienna
1955 establishments in New York City | United Nations Scientific Committee on the Effects of Atomic Radiation | [
"Chemistry",
"Materials_science",
"Engineering"
] | 860 | [
"Radiation health effects",
"Nuclear organizations",
"Radiation protection organizations",
"Radiation effects",
"Energy organizations",
"Radioactivity"
] |
9,669,620 | https://en.wikipedia.org/wiki/Metropolis%20%28architecture%20magazine%29 | Metropolis is an internationally recognized design and architecture–concentrated magazine with a strong focus on ethics, innovation and sustainability in the creative sector. The magazine was established in 1981 by Horace Havemeyer III of Bellerophon Publications, Inc alongside his wife Eugenie Cowan Havemeyer and is based in New York City. Metropolis's work towards future focused is based in their motto "design at all scales".
The magazine is published ten times a year with over 50,000 subscribers. Metropolis publishes both print and digital editorial coverage, encouraging design-focused conversation through a range of diverse mediums. Alongside the magazine itself, Metropolis produces four additional print supplements and a series of live events across the United States. Metropolis also produces digital media for its website and social accounts. Its website receives approximately 85,000 unique visitors every month, while its socials amass an audience of over 100,000 followers across Instagram and Facebook combined. In 2019 Metropolis was acquired by Sandow Media for an undisclosed amount. Metropolis annually hosts a range of virtual and in-person events alongside design competition schemes encouraging innovation and sustainability.
History
Metropolis was launched in 1981 by Horace Havemeyer III (1942–2014) and Eugenie Cowan Havemeyer. Havemeyer III was born in Dix Hills and moved to New York City in 1969, where he worked at Doubleday publishers as a production planning supervisor for a decade. He went on to complete courses at the Institute for Architecture and Urban Studies, prompting his work at the IAUS journal, Skyline, until it closed. In 1981, he founded Bellerophon Publications alongside his wife; the company served as the publishing body of Metropolis magazine. As evidenced in early editions, the magazine began with a particular focus on architecture in New York City and quickly expanded to embrace a range of design disciplines internationally.
In 1985 Susan S. Szenasy took over as the editor-in-chief of Metropolis. As a contemporary figurehead in design and innovation and a notable "voice of the design world", Szenasy led Metropolis to international acclaim. Szenasy addressed sustainability and inclusive design on the stage of a mainstream publication before such practices attracted widespread familiarity.
Metropolis's values largely reflected Szenasy's writings, which emphasize the importance of ethics and sustainable intervention in the education and practice of designers.
In 2017, Avinash Rajagopal took over from Szenasy as the editor-in-chief of Metropolis as Szenasy moved into a new role as Director of Design Innovation. Rajagopal's role as editor-in-chief followed his success as Metropolis' senior editor from 2011 to 2016. In 2019 Rajagopal worked alongside Eugenie Havemeyer in Sandow Media's acquisition of the magazine to their portfolio.
Editors-In-Chief
Horace Havemeyer III, 1981–1986
Susan S. Szenasy, 1986–2017
Avinash Rajagopal, 2017–present; assumed this role after working as senior editor at the magazine from 2011 to 2016.
Influence

Metropolis is centered around futurism in its focus on the innovative needs of consumers and the planet. The magazine investigates the role designers can play in reversing the climate crisis by rejecting developments founded on self-interest and monetary gain. With a strong focus on futurist ideals and sustainability, Metropolis encourages affirmative action in developing a progressive and environmentally enduring world.
Futurism and Sustainability

Metropolis is heavily inspired aesthetically and culturally by futurist values. It focuses significantly on the technological progress of the modern machine age, eagerly anticipating the vitality and progression of the design and architectural realms. The magazine embraces an artistic futurist aesthetic, frequently applying neo-impressionism and cubism in its covers as a direct reflection of the dynamism of modernity the magazine embodies.
The magazine remains heavily influenced by the work and writings of former editor Susan S. Szenasy, owing to the accuracy with which she anticipated changing realities within design from the 1980s to the present.
As part of Metropolis's 40th anniversary, the magazine republished old pieces that anticipated trends in design that remain pertinent today, with a particular focus on "work, sustainability, the wellness movement, the concept of reuse, gender issues, accessibility and the new digital technologies." The articles republished were included online and within print supplements entitled "40 Years of Looking Forward", and editor-in-chief Avinash Rajagopal reflected on the initiative in the July/August 2021 issue of Metropolis.
The republished work included:
March 1989 – This issue featured an article by Susan S. Szenasy entitled "Making Home Work", with exact parallels to the Covid-19 work-from-home movement.
May 1989 – Featured an article entitled "Building with the Sun" by Don Prowler, highlighting the emergence of the sustainability movement.
October 1996 – Included the article "Well-Being" by Barry M. Katz, which introduced this concept to designers.
May 1999 – "The Mall Doctor" by Ellen Barr campaigned for reuse over reconstruction in architecture.
Metropolis's embodiment of futurism and sustainability thereby encourages a "coexistence with future generations for the premonition, and anticipatory belief in future formulas".
Notable Volumes
The following table highlights volumes of Metropolis considered notable within the design community. The listed issues are significant due to contentious and innovative design thinking.
Competition

Metropolis hosts a range of creative competitions annually to encourage innovation and forward thinking among contemporary design practitioners. The magazine's most notable award schemes are the annual 'Next Generation Design Competition', the 'Planet Positive Awards' and the 'Future 100'.
Next Generation Design

Metropolis annually hosts the 'Next Generation Design Competition' alongside Staples Business Advantage. The competition encourages experienced designers to create around five major themes: collaboration, wellness, effectiveness and productivity, office culture, and sustainability. The competition awards victors $10,000 in venture capital to encourage production; the only eligibility criterion is that entrants must have been practicing for a decade or less.
Planet Positive Awards
The 'Planet Positive Awards' recognize sustainable creative projects and products internationally that benefit both people and the planet. The competition was first held in 2021. Eligible entries included sustainable architecture and interior design projects developed over the previous three years (June 2018 – June 2021) and sustainable products released between June 2019 and June 2021. As a prerequisite to entry, all competition entries had to demonstrate that they target sustainability through official sustainability certifications. The 'Planet Positive Award' winners were published in the November/December 2021 issue of the magazine, Volume 41, No. 6, and recognized formally in a virtual awards ceremony facilitated by Sandow through their Design TV platform.
Award Recipients
The awards were grouped into seven categories for judging: civic/cultural, workplace, healthcare, multifamily, hospitality, education, and products.
Future 100

Metropolis's 'Future 100' awards connect top-tier architectural talent with leading design firms. The award recognizes the top 100 graduating students from interior design and architecture programs in the United States and Canada. Award recipients are featured in Metropolis in both print and digital formats, alongside recognition of their programs, nominators, and schools.
The inaugural 'Future 100' interior design and architecture graduating students were named in 2021 and can be found on the Metropolis website.
Awards
In 2007 and 2008 Metropolis was a finalist in the National Magazine Awards in the 'Under 100,000 Circulation' category for General Excellence. This award honours effectiveness and overall excellence in which "writing, reporting, editing, and design all come together to command readers' attention and fulfill the magazine's unique editorial mission".
In 2007 Havemeyer III, Szenasy and Metropolis won the CIVITAS August Heckscher Award "for their twenty-five years pursuing enlightened and intelligent documentation of life in urban America, especially New York City".
In 2009 Havemeyer III was awarded the Institute Honor for Collaborative Achievement by the American Institute of Architects on behalf of Metropolis magazine.
In 2017, Susan S. Szenasy (Metropolis's editor-in-chief from 1986 to 2017) received the Cooper–Hewitt, Smithsonian Design Museum's Director's Award in recognition of her work at Metropolis'' and beyond.
References
External links
Metropolis website
Visual arts magazines published in the United States
Architecture magazines
Design magazines
Magazines established in 1981
Magazines published in New York City
Ten times annually magazines | Metropolis (architecture magazine) | [
"Engineering"
] | 1,697 | [
"Design magazines",
"Design"
] |
9,669,896 | https://en.wikipedia.org/wiki/Virtual%20Global%20University | The Virtual Global University (VGU) is a virtual university offering online distance education or virtual education on the Internet.
Organization
The Virtual Global University (VGU) is a private organization founded in 2001 by 17 professors of Business Informatics from 14 different universities in Germany, Austria, and Switzerland. The VGU brings together the knowledge and experience of people from different universities in one virtual organization. At the same time, it is a real organization, according to German civil law under the name "VGU Private Virtual Global University GmbH."
Within the Virtual Global University, the School of Business Informatics (SBI) is the organizational unit that offers online courses and an online study program.
Studies
The focus of VGU's study offerings is information technology (IT) and management—or Business Informatics as it is called in Central Europe. Students of Business Informatics (BI) are taught how to use IT effectively to develop business solutions for global challenges.
All courses offered by the VGU are based on the Internet as well as on commonly available information and communication technology and are given entirely, or are substantially supported, by means of electronic media. The MBI can be conducted either in English or in German.
MBI program
The VGU offers a master's program leading to the degree of an "International Master of Business Informatics" (MBI). The creation of the MBI was supported by the German "Bundesministerium für Bildung und Forschung" (federal ministry of education and research) within the program "New Media in Education." The program is accredited by the government as well as by ACQUIN. The master's degree is awarded by the European University Viadrina (EUV) in Frankfurt (Oder), Germany, in cooperation with the VGU. While the latter provides expertise and teaching for the program, EUV is responsible for ensuring that the academic and educational standards of the program are maintained at an appropriate level.
Certificate courses
Independent certificate courses on a number of IT and management topics are offered in addition to the master program MBI.
Faculty and management
Head
The head of the Virtual Global University is Prof. Dr. Karl Kurbel. He is also CEO of the VGU GmbH and head of the Business Informatics Chair at the European University Viadrina in Frankfurt (Oder), Germany.
Faculty
The faculty of the School of Business Informatics consists of 18 professors plus external lecturers, assisted by teaching assistants. The current faculty members are:
Prof. Dr. Freimut Bodendorf, Chair of Information Systems II, Friedrich-Alexander-University, Erlangen-Nuremberg, Germany
Prof. Dr. Stefan Eicker, Research Group for Business Informatics and Software Engineering, University of Duisburg-Essen, Essen, Germany
Prof. Dr. Dimitris Karagiannis, Institute of Applied Computer Science and Information Systems - Knowledge Engineering, University of Vienna, Vienna, Austria
Prof. Dr. Gerhard Knolmayer, Institute of Information Systems - Research Group "Information Engineering," University of Berne, Berne, Switzerland
Prof. Dr. Hermann Krallmann, Department of Computer Science - Systems Analysis, Technische Universität Berlin, Berlin, Germany
Prof. Dr. Karl Kurbel, Chair of Business Informatics, European University Viadrina, Frankfurt (Oder), Germany
Prof. Dr. Susanne Leist, Faculty of Business, Economics and Information Systems, University of Regensburg, Regensburg, Germany
Prof. Dr. Gustaf Neumann, Chair of Information Systems and New Media, Vienna University of Economics and Business Administration, Vienna, Austria
Prof. Dr. Andreas Oberweis, Institute of Applied Informatics and Formal Description Methods, University of Karlsruhe, Karlsruhe, Germany
Prof. Dr. Guenther Pernul, Faculty of Business, Economics and Information Systems, University of Regensburg, Regensburg, Germany
Prof. Dr. Claus Rautenstrauch, Department of Business Information Systems, Otto-von-Guericke University, Magdeburg, Germany
Prof. Dr. Susanne Robra-Bissantz, Department of Business Informatics, Braunschweig University of Technology, Braunschweig, Germany
Prof. Dr.-Ing. Hans Roeck, Chair of Business Informatics, University of Rostock, Rostock, Germany
Prof. Dr. August-Wilhelm Scheer, Institute of Business Informatics, Saarland University, Saarbruecken, Germany
Prof. Dr. Bernd Scholz-Reiter, Bremen Institute of Industrial Technology and Applied Work Science, University of Bremen, Bremen, Germany
Prof. Dr. Wolffried Stucky, Institute of Applied Informatics and Formal Description Methods, University of Karlsruhe, Karlsruhe, Germany
Prof. Dr. Alfred Taudes, Department of Production Management, Vienna University of Economics and Business Administration, Vienna, Austria
Prof. Dr. Robert Winter, Institute of Information Management, University of St. Gallen, St. Gallen, Switzerland
References
External links
Virtual Global University
European University Viadrina
2001 establishments in Europe
Information technology organizations
Educational organizations based in Europe
Information technology education | Virtual Global University | [
"Technology"
] | 1,039 | [
"Information technology",
"Information technology education",
"Information technology organizations"
] |
9,670,200 | https://en.wikipedia.org/wiki/Kinetic%20logic | Kinetic logic, developed by René Thomas, is a Qualitative Modeling approach feasible to model impact, feedback, and the temporal evolution of the variables. It uses symbolic descriptions and avoids continuous descriptions e.g. differential equations.The derivation of the dynamics from the interaction graphs of systems is not easy. A lot of parameters have to be inferred, for differential description, even if the type of each interaction is known in the graph. Even small modifications in parameters can lead to a strong change in the dynamics. Kinetic Logic is used to build discrete models, in which such details of the systems are not required. The information required can be derived directly from the graph of interactions or from a sufficiently explicit verbal description. It only considers the thresholds of the elements and uses logical equations to construct state tables. Through this procedure, it is a straightforward matter to determine the behavior of the system.
Formalism
Following is René Thomas's formalism for Kinetic Logic:
In a directed graph G = (V, A), we denote by G−(v) and G+(v) the sets of predecessors and successors of a node v ∈ V, respectively.
Definition 1: A biological regulatory network (BRN) is a tuple G = (V, A, l, s, t, K) where
(V, A) is a directed graph denoted by G,
l is a function from V to N,
s is a function from A to {+, −},
t is a function from A to N such that, for all u ∈ V, if G+(u) is not empty then {t(u, v) | v ∈ G+(u)} = {1, . . . , l(u)}.
K = {Kv | v ∈ V} is a set of maps: for each v ∈ V, Kv is a function from 2G−(v) to {0, . . . , l(v)} such that Kv(ω) ≤ Kv(ω′) for all ω ⊆ ω′ ⊆ G−(v).
The map l describes the domain of each variable v: if l(v) = k, the abstract concentration of v takes its value in {0, 1, . . . , k}. Similarly, the map s represents the sign of the regulation (+ for an activation, − for an inhibition). t(u, v) is the threshold of the regulation from u to v: this regulation takes place iff the abstract concentration of u is above t(u, v), in which case the regulation is said to be active. The condition on these thresholds states that each variation of the level of u induces a modification of the set of active regulations starting from u: for all x ∈ [0, . . ., l(u) − 1], the set of active regulations of u when the discrete expression level of u is x differs from the set when the discrete expression level is x + 1. Finally, the map Kv defines the effect of a set of regulators on the specific target v. If this set is ω ⊆ G−(v), then the target v is subject to a set of regulations which makes it evolve towards a particular level Kv(ω).
Definition 2 (States):
A state μ of a BRN G = (V, A, l, s, t, K) is a function from V to N such that μ(v) ∈ {0, . . ., l(v)} for all variables v ∈ V. We denote by EG the set of states of G.
When μ(u) ≥ t(u, v) and s(u, v) = +, we say that u is a resource of v, since the activation takes place. Similarly, when μ(u) < t(u, v) and s(u, v) = −, u is also a resource of v, since the inhibition does not take place (the absence of the inhibition is treated as an activation).
Definition 3 (Resource function):
Let G = (V, A, l, s, t, K) be a BRN. For each v ∈ V we define the resource function ωv: EG → 2G−(v) by:
ωv(μ) = {u ∈ G−(v) | (μ(u) ≥ t(u, v) and s(u, v) = +) or (μ(u) < t(u, v) and s(u, v) = −)}.
As said before, at state μ, Kv(ωv(μ)) gives the level towards which the variable v tends to evolve. Three cases arise:
if μ(v) < Kv(ωv(μ)), then v can increase by one unit
if μ(v) > Kv(ωv(μ)), then v can decrease by one unit
if μ(v) = Kv(ωv(μ)), then v cannot evolve.
Definition 4 (Signs of derivatives):
Let G = (V, A, l, s, t, K) be a BRN and v ∈ V.
We define αv: EG → {+1, 0, −1} by αv(μ) =
+1 if Kv(ωv(μ)) > μ(v)
0 if Kv(ωv(μ)) = μ(v)
−1 if Kv(ωv(μ)) < μ(v)
The signs of derivatives show the tendency of the solution trajectories.
The state graph of a BRN represents the set of states that the BRN can adopt, with transitions among them deduced from the previous rules:
Definition 5 (State graph):
Let G = (V, A, l, s, t, K) be a BRN. The state graph of G is a directed graph G = (EG, T) with (μ, μ′) ∈ T if there exists v ∈ V such that:
αv(μ) ≠ 0 and μ′(v) = μ(v) + αv(μ) and μ(u) = μ′(u), ∀u ∈ V \ {v}.
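To make Definitions 1–5 concrete, the following minimal Python sketch (an illustration, not part of Thomas's formalism) builds the asynchronous state graph of a small Boolean BRN. The two-element network, its thresholds, and the logical parameters K are illustrative choices corresponding to the naive example discussed below (x activates Y, y represses X).

from itertools import product

# Illustrative two-element Boolean BRN: x activates y, y represses x.
# Each entry maps a regulator to its (sign, threshold) pair.
predecessors = {
    "x": {"y": ("-", 1)},   # y represses x at threshold 1
    "y": {"x": ("+", 1)},   # x activates y at threshold 1
}
bounds = {"x": 1, "y": 1}   # the map l: Boolean variables here

# Logical parameters K_v(omega): level v evolves towards, given its resources.
K = {
    "x": {frozenset(): 0, frozenset({"y"}): 1},
    "y": {frozenset(): 0, frozenset({"x"}): 1},
}

def resources(v, state):
    # Definition 3: active activators plus inactive inhibitors of v.
    return frozenset(
        u for u, (sign, thr) in predecessors[v].items()
        if (sign == "+" and state[u] >= thr) or (sign == "-" and state[u] < thr)
    )

def alpha(v, state):
    # Definition 4: sign of the tendency of v in this state.
    target = K[v][resources(v, state)]
    return (target > state[v]) - (target < state[v])

def state_graph():
    # Definition 5: asynchronous transitions, one variable changes per step.
    variables = sorted(bounds)
    states = [dict(zip(variables, levels))
              for levels in product(*(range(bounds[v] + 1) for v in variables))]
    edges = []
    for mu in states:
        for v in variables:
            a = alpha(v, mu)
            if a != 0:
                nu = dict(mu)
                nu[v] = mu[v] + a
                edges.append((mu, nu))
    return edges

for mu, nu in state_graph():
    print("".join(map(str, mu.values())), "->", "".join(map(str, nu.values())))

For this network the sketch prints the four transitions 00 → 10, 10 → 11, 11 → 01, and 01 → 00, i.e. the cycle discussed in the examples below.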
Critical Assumptions
The critical assumptions of Kinetic Logic are:
The elements of the system have little effect on each other until they reach a threshold.
At high levels, the effect on each other tends to reach a plateau. An element is therefore treated as present when its level is above the threshold and absent when it is below the threshold.
Steps of Application
The steps in applying Kinetic Logic are as follows (also shown in figure A).
Biological Regulatory Network (BRN)
Keeping the research problem in mind, the behavior of elements in the system and their interactions are studied. Elements of a system can interact positively or negatively; that is, the level of an element may increase or reduce the rate of production of other elements or of itself. These interactions are represented as positive (activation) or negative (inhibition).
When elements are connected in a topologically circular way, they exert an influence on their own rate of synthesis and form a feedback loop. A feedback loop is positive or negative according to whether it contains an even or odd number of negative interactions. In a positive loop, each element of the system exerts a positive effect on its own rate of synthesis, whereas in a simple negative loop, each element has a negative effect on its own rate of synthesis. A simple positive feedback loop results in epigenetic regulation with multiple steady states, while a simple negative feedback loop results in homeostatic regulation.
Abstraction: A chain of positive interactions is equivalent to a direct positive interaction between the two extreme elements, and any two negative interactions cancel out each other’s effect. In this way, any simple feedback loop can be abridged to a one-element loop, positive or negative according to the number of negative interactions (even or odd) in the original loop. Accordingly, through extensive literature survey and the application of the above-mentioned rules, a BRN is abstracted.
Logical Variable and Functions
Logical variables are associated with the elements of the system to describe its state. They take logical values. For example, in a system whose state is appropriately described by the levels of substances a, b, and c, each of which can be absent, present at low level, or present at high level, these levels are represented by the logical values 0, 1, and 2 respectively.
If a product a acts to stimulate the production of b, it is a positive regulator. In this case, the rate of synthesis of b increases with increasing concentration of a, following a curve similar to that shown in figure B.
There is little effect of a until it reaches a threshold concentration θ, and at higher concentrations a plateau is reached which corresponds to the maximal rate of synthesis of b. Such a nonlinear, bounded curve is called a sigmoid. One can thus say that a is "absent" for a < θ and "present" for a > θ. The sigmoid curve can be approximated by a step function, as in figure C.
Not only are logical variables (x, y, z ...) associated with the elements, representing their levels (e.g., concentration), but also logical functions (X, Y, Z ...), whose values reflect the rates of synthesis of the elements. Thus,
x = 0 means "gene product absent"
x = 1 means "gene product present"
&
X = 0 means "gene off"
X = 1 means "gene on"
Graph of Interactions and Logical Equations
Kinetic Logic has two forms depending on the following two types of descriptions:
Naïve Logical Description
Consider a simple two-element system in which product x activates gene Y and product y represses gene X, as shown in figure D. Each variable takes only two values: 0 and 1. In other words,
X = 1 if y = 0 (X "on" if y absent)
Y = 1 if x = 1 (Y "on" if x present)
The logical relations of the system can be written:
X = ȳ
Y = x
Generalized Kinetic Logic
The naive logical description can be generalized and made to accommodate situations in which some variables take more than two values, without complicating the analysis. Any variable has a number of biologically relevant levels, determined by the number of elements regulated by the product x. There is a specific threshold for each regulatory interaction, so if x regulates n elements, it will have up to n different thresholds.
For the logical sum, a procedure assigns a specific weight to each term in the logical relation. The weighted algebraic sum is then discretized according to the scale of thresholds of the corresponding variable, so that an n-valued variable is associated with an n-valued function. After discretization, the integer values of the weights, or of sums of weights, are called logical parameters.
Generalized Kinetic Logic, although maintaining the analytic simplicity of the naive description, has certain features in common with the differential description. The generalized logical relations are completely independent of the differential description and can be directly derived from the graph of interactions or from an explicit verbal description.
Consider the example of two elements in figure E. Using software, this graph of interactions is drawn as shown in figure F. There are two thresholds assigned to element y: θ12, concerning its interaction with x, and θ22, concerning its interaction with itself. The variable y and function Y have three possible values: 0, 1, and 2. Element x has a single threshold, θ21, because of its single (positive) interaction with y, so the variable x and function X are two-valued.
State Table and State Graph
The state table for the graph of interactions in figure D is shown in figure G. This table states, for each state of the variables (x, y), i.e. present or absent, which products are synthesized at a significant rate and which are not. Consider the state 00/10, in which both gene products are absent but gene X is on. As product x is absent but being synthesized, it can be expected that in the near future it will be present, and the logical value of x will change from 0 to 1. This is described by the notation Ō, in which the dash above the digit indicates that variable x is committed to change its value from 0 to 1. In general, a dash is placed over the figure representing the logical value of a variable whenever this value differs from that of the corresponding function. The state just considered can thus be represented as Ō0.
Time Delays
The movement of a system from one state to another depends on the time delays, which are short time shifts between a change in a function and the corresponding change in its variable. In view of the relation between a function (gene on or off) and its associated variable (gene product present or absent), the time delays become real entities whose values, far from being arbitrary, reflect specific physical processes (synthesis, degradation, dilution, etc.). The values of the different time delays play an important role in determining the pathway along which the system evolves.
The temporal relation between a logical variable x which is associated with the level of an element and a logical function X which is associated with its evolution can be explained as follows.
Consider a gene that is off (X = 0) for a considerable time and is then switched on (X = 1) by a signal. The product does not appear immediately, but only after a delay tx has elapsed. If, after some time, another signal switches the gene off again (X = 0), the product does not disappear immediately either, but persists for a delay tx′. This can be represented graphically as shown in figure H. Using the state table, the temporal sequence of states of the system can be represented as shown in figure I.
Identifying Cycles and Stable Steady States
Cycles
The state table in figure G can be used to identify the cyclic behavior of the system. State 01 changes to 00, 00 changes to 10, 10 changes to 11, and 11 changes back to 01. This represents a cycle: the system starts from state 01 and returns to the same state, and it keeps oscillating between these states.
Deadlocks
Consider another example in which:
X = ȳ
Y = x̄
The state table for the system is shown in figure J. The encircled states are stable states, as they do not evolve towards any other state. The logical stable states are defined as those for which the vectors xy... and XY... are equal. When time delays are considered, from ŌŌ the system proceeds to state 10 or to state 01 according to whether tx < ty or ty < tx, and from ĪĪ the system proceeds to state 10 or to state 01 according to whether ty < tx or tx < ty. The state graph representing the delays is shown in figure K.
The sequence of states a system passes through depends on the relative values of the time delays. It is assumed that two delays (or sums of delays) are never exactly equal, and therefore that two variables never change their values at the same instant. This possibility should not be excluded entirely, however, because applying the rule rigidly could occasionally lead to the loss of interesting pathways.
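As a hedged illustration of how such state tables can be analyzed mechanically, the short Python sketch below classifies stable states (states with no outgoing transition) and searches for cycles in a plain transition mapping; the two transition tables are transcribed from the examples of figures D and J above.

# Transitions transcribed from the two examples: the cyclic system of
# figure D and the mutual-inhibition (deadlock) system of figure J.
cycle_system = {"00": ["10"], "10": ["11"], "11": ["01"], "01": ["00"]}
deadlock_system = {"00": ["10", "01"], "11": ["10", "01"], "10": [], "01": []}

def stable_states(trans):
    # A logical stable state has no outgoing transition.
    return sorted(s for s, succ in trans.items() if not succ)

def find_cycles(trans):
    # Depth-first search collecting one representative per cycle.
    found, seen = [], set()
    def walk(path, node):
        for nxt in trans.get(node, []):
            if nxt == path[0] and frozenset(path) not in seen:
                seen.add(frozenset(path))
                found.append(path + [nxt])
            elif nxt not in path:
                walk(path + [nxt], nxt)
    for start in trans:
        walk([start], start)
    return found

for name, system in (("figure D", cycle_system), ("figure J", deadlock_system)):
    print(name, "stable:", stable_states(system), "cycles:", find_cycles(system))

Run on these inputs, the sketch reports the cycle 00 → 10 → 11 → 01 → 00 for the first system and the two stable states 01 and 10 for the second.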
Analyzing the Results
The cycles and the deadlock states identified by this process are then analyzed by comparing them with in vitro and in vivo findings. These results can be used to make important predictions about the system. Cyclic behaviors correspond to homeostatic regulation, which keeps the level of a variable at or near a fixed or optimal value. Deadlocks correspond to epigenetic regulation, in which the system settles at one of several alternative stable levels.
History
The first approach to qualitative modeling was based on extreme discretization, in which all genes could be either on (present) or off (absent). This Boolean approach was generalized into a multi-valued approach, Kinetic Logic, in which logical identification of all steady states became possible.
Application
Kinetic logic has been employed to study the factors that influence the selection of a specific pathway from the many different pathways a system can follow, and the factors that lead the system towards stable states and cyclic behaviors. It has been used to reveal the logic that lies behind the functional organization and kinetic behavior of molecules. Model checking techniques have also been applied to models built with kinetic logic, in order to infer their continuous behaviors.
Kinetic logic has been applied to many different types of systems in biology, psychology, and psychiatry. It has mostly been used in modeling biological networks, especially gene regulatory networks (GRNs).
Following are the examples in which Kinetic Logic was employed as the modeling formalism:
To model the dynamics of chronic psychosis and schizophrenia.
To show how the residence time of the hormone on the receptor can decide the specificity of signaling between the alternative metabolic and mitogenic pathways,
To explain the positive (cell proliferation and cytokine production) and negative (anergy induction) signaling of T lymphocytes; to determine how the timing of the binding and intracellular signal-transduction events can influence the properties of receptor signaling and decide the type of cellular response
To demonstrate how prion infection proceeds
To model other biological regulatory networks (BRNs) like toy gene, lambda phage (Ahmad and Roux, 2008), Pseudomonas aeruginosa and circadian rhythm.
To reveal the logic that lies behind the functional organization and kinetic behavior of the thioredoxin system
Tool
As theoretical analysis through Kinetic Logic is a time-consuming process, a tool known as GenoTech, for the modeling and analysis of BRNs, was developed on the basis of Kinetic Logic and has been used for a number of Kinetic Logic-based studies. It analyzes behaviors like stable cycles, stable steady states, and paths in the state graph (discrete model) of biological systems, accelerating the modeling process. GenoTech automates the whole procedure, allowing repeated experimentation. The tool is available on request.
Books
References
Mathematical and theoretical biology | Kinetic logic | [
"Mathematics"
] | 3,723 | [
"Applied mathematics",
"Mathematical and theoretical biology"
] |
9,671,027 | https://en.wikipedia.org/wiki/Lysophosphatidic%20acid | A lysophosphatidic acid (LPA) is a phospholipid derivative that can act as a signaling molecule.
Function
LPA acts as a potent mitogen due to its activation of three high-affinity G-protein-coupled receptors called LPAR1, LPAR2, and LPAR3 (also known as EDG2, EDG4, and EDG7). Additional, newly identified LPA receptors include LPAR4 (P2RY9, GPR23), LPAR5 (GPR92) and LPAR6 (P2RY5, GPR87).
Clinical significance
Because of its ability to stimulate cell proliferation, aberrant LPA-signaling has been linked to cancer in numerous ways. Dysregulation of autotaxin or the LPA receptors can lead to hyperproliferation, which may contribute to oncogenesis and metastasis.
LPA may be the cause of pruritus (itching) in individuals with cholestatic (impaired bile flow) diseases.
GTPase activation
Downstream of LPA receptor activation, the small GTPase Rho can be activated, subsequently activating Rho kinase. This can lead to the formation of stress fibers and cell migration through the inhibition of myosin light-chain phosphatase.
Metabolism
There are a number of potential routes to its biosynthesis, but the most well-characterized is by the action of a lysophospholipase D called autotaxin, which removes the choline group from lysophosphatidylcholine.
Lysophosphatidic acids are also intermediates in the synthesis of phosphatidic acids.
See also
Autotaxin
GPR35
Phosphatidic acid
Sphingosine-1-phosphate
Gintonin
References
Further reading
Phospholipids | Lysophosphatidic acid | [
"Chemistry"
] | 394 | [
"Phospholipids",
"Signal transduction"
] |
9,671,330 | https://en.wikipedia.org/wiki/Birth%20Control%20%28film%29 | Birth Control (also known as The New World) is a lost 1917 American documentary film produced by and starring Margaret Sanger and describing her family planning work. It was the first film banned under the 1915 ruling of the United States Supreme Court in Mutual Film Corporation v. Industrial Commission of Ohio, which held that the exhibition of films did not constitute free speech.
The banning of Birth Control was upheld by the New York Court of Appeals on the grounds that a film on family planning may be censored "in the interest of morality, decency, and public safety and welfare."
See also
Film censorship in the United States
List of lost films
References
External links
1917 films
1917 lost films
American black-and-white films
Birth control
Works subject to a lawsuit
Documentary films about health care
1917 documentary films
Black-and-white documentary films
Films about activists
Family planning
American silent feature films
Lost American films
American documentary films
Obscenity controversies in film
1910s English-language films
1910s American films
English-language documentary films | Birth Control (film) | [
"Biology"
] | 201 | [
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
9,671,515 | https://en.wikipedia.org/wiki/Pentoxyverine | Pentoxyverine (rINN) or carbetapentane is an antitussive (cough suppressant) commonly used for cough associated with illnesses like the common cold. It is sold over-the-counter as Solotuss, or in combination with other medications, especially decongestants. One such product is Certuss, a combination of guaifenesin and pentoxyverine. The drug has been available in the form of drops, suspensions and suppositories.
It was formerly available over-the-counter in the United States. However, the U.S. Food and Drug Administration ruled in 1987 that pentoxyverine was not generally recognized as safe and effective and ordered it to be removed from the over-the-counter market.
Uses
The drug is used for the treatment of dry cough associated with conditions such as the common cold, bronchitis, or sinusitis. Like codeine and other antitussives, it relieves the symptom but does not heal the illness. No controlled clinical trials regarding the efficacy of pentoxyverine are available.
Pharmacologists use the substance as a selective agonist at the sigma-1 receptor in animal and in vitro experiments.
Contraindications
Pentoxyverine is contraindicated in persons with bronchial asthma or other kinds of respiratory insufficiency (breathing difficulties), as well as angle-closure glaucoma. No data are available on the use of pentoxyverine during pregnancy or lactation, or in children under two years of age, so the drug must not be used under these circumstances.
Antitussive drugs are not useful in patients with extensive phlegm production because they prevent coughing up the phlegm.
Adverse effects
The most common side effects (seen in more than 1% of patients) are upper abdominal (belly) pain, diarrhoea, dry mouth, and nausea or vomiting. Allergic reactions of the skin such as itching, rashes, hives, and angio-oedema are rare, as are anaphylactic shock and convulsions.
Overdose
Overdosage leads to drowsiness, agitation, nausea, and anticholinergic effects such as tachycardia (high heart rate), dry mouth, blurred vision, glaucoma, or urinary retention. Especially in children, pentoxyverine can cause hypoventilation, though much less often than codeine and other opioid antitussives.
The treatment of overdosage aims at the symptoms; there are no specific antidotes available.
Interactions
No interactions have been described at usual doses. It is possible that pentoxyverine increases the potency of sedative drugs such as benzodiazepines, some anticonvulsants and antidepressants, and alcohol. Likewise, some consumer information leaflets warn patients against taking the drug in combination with, or up to two weeks after, monoamine oxidase inhibitors, which are known to cause potentially fatal reactions in combination with the (chemically only distantly related) antitussive dextromethorphan.
Mechanism of action
Pentoxyverine is believed to suppress the cough reflex in the central nervous system, but the exact mechanism of action is not known with certainty. The drug acts as an antagonist at muscarinic receptors (subtype M1) and as an agonist at sigma receptors (subtype σ1) with an IC50 of 9 nM. Its anticholinergic properties can theoretically relax the pulmonary alveoli and reduce phlegm production. Spasmolytic and local anaesthetic properties have also been described. The clinical relevance of these mechanisms is uncertain.
Pharmacokinetics
The substance is absorbed quickly from the gut and reaches its maximum plasma concentration (Cmax) after about two hours. If applied rectally, Cmax is reached after four hours. The bioavailability of the suppositories, measured as area under the curve (AUC), is about twice that of oral formulations, due to a first-pass effect of over 50%. By far the most important metabolic reaction is ester hydrolysis, which accounts for 26.3% of the total clearance through the kidneys; only 0.37% is cleared in the form of the original substance. The plasma half-life is 2.3 hours for oral formulations and three to 3.5 hours for suppositories. Pentoxyverine is also excreted into breast milk.
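To illustrate what a plasma half-life of 2.3 hours implies, the short sketch below applies a generic first-order elimination model (an assumption for illustration, not a validated pharmacokinetic model of pentoxyverine), in which the elimination rate constant is k = ln 2 / t½.

import math

def fraction_remaining(t_hours, half_life_hours):
    # First-order elimination: C(t)/C0 = exp(-k*t) with k = ln(2)/t_half.
    k = math.log(2) / half_life_hours
    return math.exp(-k * t_hours)

# Oral formulation: plasma half-life of about 2.3 hours (see text).
for t in (2.3, 4.6, 9.2):
    print(f"after {t:4.1f} h: {fraction_remaining(t, 2.3):.0%} remaining")

After one, two, and four half-lives, this prints 50%, 25%, and about 6% of the original concentration remaining.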
Chemical properties
Pentoxyverine dihydrogen citrate, the salt that is commonly used for oral preparations, is a white to off-white, crystalline powder. It dissolves easily in water or chloroform, but not in benzene, diethyl ether, or petroleum ether. It melts at . Other orally available salts are the hydrochloride and the tannate; suppositories contain the free base.
See also
Cough syrup
Noscapine
Codeine; Pholcodine
Dextromethorphan; Dimemorfan
Racemorphan; Dextrorphan; Levorphanol
Butamirate
Tipepidine
Cloperastine; Levocloperastine
References
Antitussives
Carboxylate esters
Ethers
Diethylamino compounds
Sigma agonists
Cyclopentanes | Pentoxyverine | [
"Chemistry"
] | 1,124 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
9,672,025 | https://en.wikipedia.org/wiki/Beat%20detection | In signal analysis, beat detection is the use of computer software or hardware to detect the beat of a piece of music. Many methods are available, and beat detection always involves a tradeoff between accuracy and speed. Beat detectors are common in music visualization software such as some media player plugins. The algorithms used may rely on simple statistical models based on sound energy, or may involve sophisticated comb filter networks or other means. They may be fast enough to run in real time, or slow enough that they can only analyze short sections of songs.
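As an illustration of the simple sound-energy approach mentioned above, the following minimal Python sketch flags a block of samples as a beat when its energy exceeds the recent average energy by a fixed factor; the block size, history length, and threshold factor are illustrative choices rather than standard values.

import numpy as np

def detect_beats(samples, rate, block=1024, history=43, factor=1.3):
    # Energy-based detection: a block counts as a beat when its energy
    # exceeds `factor` times the average energy of the last `history`
    # blocks (~1 second of audio at 44.1 kHz with 1024-sample blocks).
    n = len(samples) // block
    blocks = np.asarray(samples[:n * block], dtype=np.float64).reshape(n, block)
    energies = (blocks ** 2).sum(axis=1)
    beats = []
    for i in range(history, n):
        if energies[i] > factor * energies[i - history:i].mean():
            beats.append(i * block / rate)  # beat time in seconds
    return beats

# Demo: quiet noise with a loud click every half second.
rate = 44100
signal = 0.05 * np.random.randn(rate * 4)
signal[::rate // 2] += 5.0
print(detect_beats(signal, rate))

Real detectors refine this idea, for example by adapting the threshold to the variance of the energy history or by analyzing frequency sub-bands separately.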
See also
Pitch detection
External links
Beat This > Beat Detection Algorithm
Audio Analysis using the Discrete Wavelet Transform
Signal processing | Beat detection | [
"Technology",
"Engineering"
] | 131 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
9,672,221 | https://en.wikipedia.org/wiki/Sisu%20RA-140%20DS | The Sisu RA-140 DS "Raisu" is a flail-type demining vehicle developed and produced by the Finnish company Sisu-Auto and later produced by Patria Vehicles in the years 1994–2001. The production totalled 41 units.
Development
Design work on a new demining vehicle began in 1986 and the first prototype was ready in 1990. The basic structure was borrowed from the SA-110 prototype but with extensive modification to suit the new vehicle's purpose. Technology and design assets of Pasi were utilised in the development of the armoured cabin.
Sisu military vehicles typically have a nickname in addition to the official model name. The vehicle was nearly given the nickname Misu which means "kitty" or "pussycat". It was derived from Miina-Sisu ("Mine Sisu"), and it was already printed on some brochures. However, the engineering department quickly changed the name to Raisu, from Raivaus-Sisu ("Clearance Sisu"), as 'Misu' was considered infantile. Raisu also means "boisterous".
Production
Production of the RA-140 DS took place in Hämeenlinna and it was the last Sisu design to be developed there. Four vehicles were produced for Danish Defence in the years 1996–1997; the rest were assigned to different brigades of Finnish Defence Forces.
Operation and characteristics
The RA-140 DS is intended for clearing of minefields composed of surface-placed or conventionally buried non-directional mines set against infantry, and sparsely placed anti-tank mines up to 10 kg. It can clear a suitable passage for a vehicle convoy. Raisu is not meant for combat missions.
The mine-clearing tool consists of 82 hammers fastened to a rotating flail by chains. In operation, the vehicle is driven in reverse to ensure the best protection for the crew. The driver and passenger are protected against the pressure and mine fragments by a protection shield next to the flail. The vehicle clears a 3.4-metre-wide path, and the clearing depth can be controlled manually or automatically. The maximum speed in operation is 6 km/h, and the Raisu can eliminate mines containing up to 10 kg of explosive. For transportation, the clearing flail is turned longitudinally and mounted on top of the vehicle.
The maximum gradient that the vehicle can navigate is 60% and the steepest side slope 30%. The highest vertical step the vehicle can climb is 0.5 metres and the trenching capability is 0.6 metres. The maximum fording depth is 0.8 metres. The Raisu's cross-country mobility is good and it can be quickly moved to a new site. This is a key benefit compared to conventional, heavier tank-based applications.
Technical data
The vehicle is powered by an air-cooled six-cylinder Deutz BF 6L 913 C turbodiesel engine. The gearbox is a four-speed automatic Renk Doromat 874 AM/PTO. The front axle is of portal type and sprung with coil springs. Both axles are driven and equipped with lockable differentials. The front axle has disc brakes and the rear axle drum brakes.
When clearing mines, the vehicle is driven by a hydraulic transmission, which also powers the flail and raises or lowers an armoured-steel deflector shield between the flail and the vehicle. The flail's rotating speed can be adjusted continuously between 0 and 500 rpm. The armoured cabin contains seats for the driver and commander, provides NBC protection, and is designed to withstand a 10 kg explosion under the vehicle. The armour protects against 7.62-calibre bullets. A 2061 VHF radio is used for communication. The wheels are mine- and bullet-resistant. There is an option to mount a 12.7-calibre machine gun. A self-recovery winch is fitted as standard.
Operational history
The four vehicles of the Danish Defence were used in the former Yugoslavia. The vehicles faced severe problems in Bosnia, and the Danish army therefore placed its subsequent mine-clearing vehicle orders with the domestically made Hydrema 910 MCV.
Finland has used Raisus in UN missions.
References
External links
A UN-painted Sisu RA-140 DS in action.
A UN-painted Sisu RA-140 DS in transportation.
Ra140ds
Mine warfare countermeasures
Military engineering vehicles
Vehicles introduced in 1994
Military vehicles of Finland
Patria (company) | Sisu RA-140 DS | [
"Engineering"
] | 899 | [
"Engineering vehicles",
"Military engineering",
"Military engineering vehicles"
] |
9,672,320 | https://en.wikipedia.org/wiki/Contextual%20Query%20Language | Contextual Query Language (CQL), previously known as Common Query Language, is a formal language for representing queries to information retrieval systems such as search engines, bibliographic catalogs and museum collection information. Based on the semantics of Z39.50, its design objective is that queries be human readable and writable, and that the language be intuitive while maintaining the expressiveness of more complex query languages. It is being developed and maintained by the Z39.50 Maintenance Agency, part of the Library of Congress.
Examples of query syntax
Simple queries:
dinosaur
"complete dinosaur"
title = "complete dinosaur"
title exact "the complete dinosaur"
dinosaur or bird
Palomar assignment and "ice age"
dinosaur not reptile
dinosaur and bird or dinobird
(bird or dinosaur) and (feathers or scales)
"feathered dinosaur" and (yixian or jehol)
Queries accessing publication indexes:
publicationYear < 1980
lengthOfFemur > 2.4
bioMass >= 100
Queries based on the proximity of words to each other in a document:
ribs prox/distance<=5 chevrons
ribs prox/unit=sentence chevrons
ribs prox/distance>0/unit=paragraph chevrons
Queries across multiple dimensions:
date within "2002 2005"
dateRange encloses 2003
Queries based on relevance:
subject any/relevant "fish frog"
subject any/rel.lr "fish frog"
The latter example specifies using a specific algorithm for logistic regression.
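CQL is the query language used by the SRU (Search/Retrieve via URL) protocol, in which the query is passed as the query parameter of an HTTP request. The following Python sketch shows how such a request URL might be assembled; the endpoint address is a placeholder, not a real service.

from urllib.parse import urlencode

# Placeholder SRU endpoint; substitute a real server's base URL.
BASE_URL = "https://sru.example.org/catalog"

def sru_search_url(cql_query, start=1, maximum=10):
    # Build an SRU searchRetrieve URL carrying a CQL query.
    params = {
        "version": "1.2",
        "operation": "searchRetrieve",
        "query": cql_query,
        "startRecord": start,
        "maximumRecords": maximum,
    }
    return BASE_URL + "?" + urlencode(params)

print(sru_search_url('title exact "the complete dinosaur" and publicationYear < 1980'))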
References
External links
Z39.50 Maintenance Agency at the Library of Congress
A Gentle Introduction to CQL
Information retrieval systems
Library science
Library of Congress
Query languages
Knowledge representation languages | Contextual Query Language | [
"Technology"
] | 349 | [
"Information technology",
"Information retrieval systems"
] |
9,672,531 | https://en.wikipedia.org/wiki/Jordan%20Transverse%20Mercator | Jordan Transverse Mercator (JTM) is a grid system created by the Royal Jordan Geographic Center (RJGC). This system is based on 6° belts with a Central Meridian of 37° East and a Scale Factor at Origin (mo) = 0.9998. The JTM is based on the Hayford ellipsoid adopted by the IUGG in 1924. No transformation parameters are presently offered by the government. However, Prof. Stephen H. Savage of Arizona State University provides the following parameters for the projection:
Jordan Transverse Mercator
Geographic Coordinate System: GCS_International_1924
Datum: D:International_1924
Spheroid: International_1924
Axis: 6378388
Flattening: 297
Prime Meridian: Greenwich
Prime Meridian Longitude: 0
Units: Degree
Unit Scale Factor: 0.017453292519943295
Projection: Transverse Mercator
False Easting: 500,000
False Northing: -3,000,000
Central Meridian: 37
Scale Factor: 0.9998
Central Parallel: 0
Units: Meter
Scale Factor 1
Three-parameter transformation to WGS84 is:
ΔX = –86 meters
ΔY = –98 meters
ΔZ = –119 meters
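As a hedged illustration, the parameters above can be assembled into a PROJ string and used with the pyproj library (assuming it is installed) to convert JTM coordinates to WGS84; the sample easting and northing are made-up values chosen to fall roughly near Amman.

from pyproj import Transformer

# JTM as a PROJ string restating the parameters listed above:
# Transverse Mercator, central meridian 37 E, scale factor 0.9998,
# false easting 500,000 m, false northing -3,000,000 m,
# International 1924 ellipsoid, and the shift dX=-86, dY=-98, dZ=-119.
JTM = ("+proj=tmerc +lat_0=0 +lon_0=37 +k=0.9998 "
       "+x_0=500000 +y_0=-3000000 +ellps=intl "
       "+towgs84=-86,-98,-119 +units=m +no_defs")

transformer = Transformer.from_crs(JTM, "EPSG:4326", always_xy=True)

# Made-up JTM easting/northing (metres), roughly near Amman.
easting, northing = 396000.0, 540000.0
lon, lat = transformer.transform(easting, northing)
print(f"lon={lon:.5f}, lat={lat:.5f}")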
Prof. Savage also offers software, ReprojectME!, which will convert coordinates between JTM and other systems. (See http://daahl.ucsd.edu/gaialab/# for more information.)
The central meridian of 37° East is roughly midway between the extremes of Jordan: the Karameh Border Crossing with Iraq is close to 39° East, while the city of Aqaba on the Red Sea is close to 35° East.
See also
Jordan
Mercator projection
References
External links
Jordanian Department Of Lands & Survey
Geographic coordinate systems
Geodesy | Jordan Transverse Mercator | [
"Mathematics"
] | 377 | [
"Geographic coordinate systems",
"Applied mathematics",
"Geodesy",
"Coordinate systems"
] |
9,672,835 | https://en.wikipedia.org/wiki/Split%20tunneling | Split tunneling is a computer networking concept which allows a user to access dissimilar security domains like a public network (e.g., the Internet) and a local area network or wide area network at the same time, using the same or different network connections. This connection state is usually facilitated through the simultaneous use of a LAN network interface controller (NIC), radio NIC, Wireless LAN (WLAN) NIC, and VPN client software application without the benefit of an access control.
For example, suppose a user utilizes a remote-access VPN client to connect to a corporate campus network from a hotel wireless network. With split tunneling enabled, the user can reach file servers, database servers, mail servers, and other servers on the corporate network through the VPN connection. When the user connects to Internet resources (websites, FTP sites, etc.), the connection request goes directly out through the gateway provided by the hotel network. However, not every VPN allows split tunneling. Some VPNs with split tunneling include Private Internet Access (PIA), ExpressVPN, Surfshark, and NordVPN.
Split tunneling is sometimes categorized based on how it is configured. A split tunnel configured to only tunnel traffic destined to a specific set of destinations is called a split-include tunnel. When configured to accept all traffic except traffic destined to a specific set of destinations, it is called a split-exclude tunnel.
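The routing decision behind these two configurations can be pictured with a few lines of Python using the standard ipaddress module; the CIDR ranges below are illustrative placeholders for a corporate network.

import ipaddress

# Illustrative split-include policy: only these destinations use the VPN.
TUNNELED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # corporate LAN (placeholder)
    ipaddress.ip_network("192.168.50.0/24"),  # branch office (placeholder)
]

def route_for(destination, split_exclude=False):
    # split_exclude=False models a split-include tunnel (only listed
    # networks are tunneled); True models a split-exclude tunnel (all
    # traffic is tunneled except the listed networks).
    ip = ipaddress.ip_address(destination)
    matched = any(ip in net for net in TUNNELED_NETWORKS)
    if split_exclude:
        return "direct" if matched else "vpn"
    return "vpn" if matched else "direct"

print(route_for("10.1.2.3"))       # vpn: corporate file server
print(route_for("93.184.216.34"))  # direct: public website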
Advantages
One advantage of using split tunneling is that it alleviates bottlenecks and conserves bandwidth as Internet traffic does not have to pass through the VPN server.
Another advantage is in the case where a user works at a supplier or partner site and needs access to network resources on both networks. Split tunneling prevents the user from having to continually connect and disconnect.
Disadvantages
A disadvantage is that when split tunneling is enabled, users bypass gateway level security that might be in place within the company infrastructure. For example, if web or content filtering is in place, this is something usually controlled at a gateway level, not the client PC.
ISPs that implement DNS hijacking break name resolution of private addresses with a split tunnel.
Variants and related technology
Inverse split tunneling
A variant of split tunneling is called "inverse" split tunneling. By default, all datagrams enter the tunnel except those whose destination IPs are explicitly allowed by the VPN gateway. The criteria for allowing datagrams to exit the local network interface (outside the tunnel) may vary from vendor to vendor (e.g., by port or service). This keeps control of network gateways with a centralized policy device such as the VPN terminator. It can be augmented by endpoint policy-enforcement technologies such as an interface firewall on the endpoint device's network interface driver, a group policy object, or an anti-malware agent. This is related in many ways to network access control (NAC).
Dynamic split tunneling
A form of split tunneling that derives the IP addresses to include or exclude at runtime, based on a list of hostname rules or policies; sometimes abbreviated DST.
IPv6 dual-stack networking
Internal IPv6 content can be hosted and presented to sites via a unique local address range at the VPN level, while external IPv4 & IPv6 content can be accessed via site routers.
References
Further reading
Juniper(r) Networks Secure Access SSL VPN Configuration Guide, By Rob Cameron, Neil R. Wyler, 2011, , P. 241
Citrix Access Suite 4 Advanced Concepts: The Official Guide, 2/E, By Steve Kaplan, Andy Jones, 2006, , McGraw-Hill Education
Microsoft Forefront Uag 2010 Administrator's Handbook, By Erez Ben-Ari, Ran Dolev, 2011, , Packt Publishing
Cisco ASA Configuration By Richard Deal, 2009, page 413, , McGraw-Hill Education
External links
Split Tunneling in Linux
Network architecture
Computer network security
Internet privacy
Virtual private networks | Split tunneling | [
"Engineering"
] | 816 | [
"Network architecture",
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
9,672,896 | https://en.wikipedia.org/wiki/Comparison%20of%20open-source%20configuration%20management%20software | This is a comparison of notable free and open-source configuration management software, suitable for tasks like server configuration, orchestration and infrastructure as code typically performed by a system administrator.
Basic properties
"Verify mode" (also called dry run) refers to having an ability to determine whether a node is conformant with a guarantee of not modifying it, and typically involves the exclusive use of an internal language supporting read-only mode for all potentially system-modifying operations. Mutual authentication (mutual auth) refers to the client verifying the server and vice versa.
Agent describes whether additional software daemons are required. Depending on the management software, these agents are usually deployed on the target system or on one or more central controller servers. Although requiring an agent might seem to be a negative, having an agent can be quite advantageous: consider the impact if an agent-less tool loses connectivity to a node while making critical changes, leaving the node in an indeterminate state that compromises its function.
Platform support
Note: This means platforms on which a recent version of the tool has actually been used successfully, not platforms where it should theoretically work since it is written in good portable C/C++ or an interpreted language. It should also be listed as a supported platform on the project's web site.
Short descriptions
Not all tools have the same goal and the same feature set. To help distinguish between all of these software packages, here is a short description of each one.
Ansible
Combines multi-node deployment, ad hoc task execution, and configuration management in one package. Manages nodes over SSH and requires Python (2.6+ or 3.5+) to be installed on them. Modules work over JSON and standard output and can be written in any language. Uses YAML to express reusable descriptions of systems.
Bcfg2
Software to manage the configuration of a large number of computers using a central configuration model and the client–server paradigm. The system enables reconciliation between clients' state and the central configuration specification. Detailed reports provide a way to identify unmanaged configuration on hosts. Generators enable code or template-based generation of configuration files from a central data repository.
CFEngine
Lightweight agent system. Manages configuration of a large number of computers using the client–server paradigm or stand-alone. Any client state which is different from the policy description is reverted to the desired state. Configuration state is specified via a declarative language. CFEngine's paradigm is convergent "computer immunology".
cdist
cdist is a zero dependency configuration management system: It requires only ssh on the target host, which is usually enabled on all Unix-like machines. Only the administration host needs to have Python 3.2 installed.
Chef
Chef is a configuration management tool that uses a pure Ruby DSL for writing configuration "recipes"; its client is written in Ruby and its server in Erlang. These recipes contain resources that should be put into the declared state. Chef can be used as a client–server tool or in "solo" mode.
Consfigurator
While Debian and its derivatives are the best-supported distributions, Consfigurator also works on other distributions and various Unixes, though with less support for properties that configure specific aspects of the system. Consfigurator can apply properties written in Lisp, which requires Consfigurator to be installed on the target computer. A more restricted language is also available which works without Consfigurator being installed on the target. Remote configuration is also supported: the configuration of hosts can be defined in Lisp code.
Guix
Guix integrates many things in the same tool (a distribution, package manager, configuration management tool, container environment, etc.). To manage systems remotely, it needs the target machines to already run Guix; alternatively, it can deploy configurations to a DigitalOcean droplet. The machines are configured with Scheme.
ISconf
Tool to execute commands and replicate files on all nodes. The nodes do not need to be up; the commands will be executed when they boot. The system has no central server so commands can be launched from any node and they will replicate to all nodes.
Juju
Juju concentrates on the notion of service, abstracting the notion of machine or server, and defines relations between those services that are automatically updated when two linked services observe a notable modification.
Local Configuration system (LCFG)
LCFG manages the configuration with a central description language in XML, specifying resources, aspects and profiles. Configuration is deployed using the client–server paradigm. Appropriate scripts on clients (called components) transcribe the resources into configuration files and restart services as needed.
Open PC server integration (Opsi)
Opsi is desktop management software for Windows clients based on Linux servers. It provides automatic software deployment (distribution), unattended installation of OS, patch management, hard- and software inventory, license management and software asset management, and administrative tasks for the configuration management.
PIKT
PIKT is foremost a monitoring system that also does configuration management. "PIKT consists of a sophisticated, feature-rich file preprocessor; an innovative scripting language with unique labor-saving features; a flexible, centrally directed process scheduler; a customizing file installer; a collection of powerful command-line extensions; and other useful tools."
Puppet
Puppet consists of a custom declarative language to describe system configuration, distributed using the client–server paradigm (using XML-RPC protocol in older versions, with a recent switch to REST), and a library to realize the configuration. The resource abstraction layer enables administrators to describe the configuration in high-level terms, such as users, services and packages. Puppet will then ensure the server's state matches the description. There was brief support in Puppet for using a pure Ruby DSL as an alternative configuration language starting at version 2.6.0. However this feature was deprecated beginning with version 3.1.
Quattor
The quattor information model is based on the distinction between the desired state and the actual state. The desired state is registered in a fabric-wide configuration database, using a specially designed configuration language called Pan for expressing and validating configurations, composed out of reusable hierarchical building blocks called templates. Configurations are propagated to and cached on the managed nodes.
Radmind
Radmind manages hosts configuration at the file system level. In a similar way to Tripwire (and other configuration management tools), it can detect external changes to managed configuration, and can optionally reverse the changes. Radmind does not have higher-level configuration element (services, packages) abstraction. A graphical interface is available (only) for OS X.
Rex
Rex is a remote execution system with integrated configuration management and software deployment capabilities. The admin provides configuration instructions via so-called Rexfiles. They are written in a small DSL but can also contain arbitrary Perl. It integrates well with an automated build system used in CI environments.
Salt
Salt started out as a tool for remote server management. As its usage has grown, it has gained a number of extended features, including a more comprehensive mechanism for host configuration, a relatively new feature facilitated through the Salt States component. As Salt's adoption grows, support for more features and platforms is likely to continue to expand.
SmartFrog
Java-based tool to deploy and configure applications distributed across multiple machines. There is no central server; you can deploy a .SF configuration file to any node and have it distributed to peer nodes according to the distribution information contained inside the deployment descriptor itself.
Spacewalk
Spacewalk is an open-source Linux and Solaris systems-management solution and is the upstream project for Red Hat Network Satellite. Spacewalk works with RHEL, Fedora, and other RHEL-derivative distributions such as CentOS and Scientific Linux; there were ongoing efforts to package it for inclusion in Fedora. Spacewalk provides systems inventory (hardware and software information), installation and updates of software, collection and distribution of custom software packages into manageable groups, system provisioning, management and deployment of configuration files, system monitoring, virtual guest provisioning, starting/stopping/configuring virtual guests, and delegation of all of these actions to local or LDAP users and system entitlements. As of May 2020, Spacewalk is EOL, with users having moved to either Uyuni or Foreman/Katello.
STAF
The Software Testing Automation Framework (STAF) enables users to create cross-platform, distributed software test environments. STAF removes the tedium of building an automation infrastructure, thus enabling users to focus on building their automation solution. The STAF framework provides the foundation upon which to build higher-level solutions, and provides a pluggable approach supported across a large variety of platforms and languages.
Synctool
Synctool aims to be easy to understand, learn, and use. It is written in Python and makes use of SSH (passwordless, with host-based or key-based authentication) and rsync. No specific language is needed to configure Synctool. Synctool has dry-run capabilities that enable surgical precision. Synctool depends on Python 2, which is now EOL, and there are no current plans to migrate it to Python 3.
See also
List of systems management systems
Notes
References
Software comparisons
Configuration management software | Comparison of open-source configuration management software | [
"Technology",
"Engineering"
] | 1,941 | [
"Systems engineering",
"Configuration management",
"Software comparisons",
"Computing comparisons"
] |
9,673,027 | https://en.wikipedia.org/wiki/Sociologists%20for%20Women%20in%20Society | Sociologists for Women in Society (SWS) is an international organization of social scientists—students, faculty, practitioners, and researchers—working together to improve the position of women within sociology and society in general.
History
In 1969, several hundred women gathered at a "counter-convention" at Glide Memorial Church rather than attend the ASA meetings at the Hilton Hotel. Sharing feelings of insecurity and stories of initially mystifying experiences as graduate students and faculty, and encouraging each other with applause, they came to see that some of the stresses in being sociologists were not idiosyncratic, but part of the experience of being women. Later that year, some 20 founding women met to build an organization and network. Although SWS was created to redress the plight of women sociologists, SWS has become an organization that also focuses on improving the social position of women in society through feminist sociological research and writing.
SWS holds annual meetings and publishes the academic journal Gender & Society.
Journal
Gender & Society
References
External links
Sociologists for Women in Society
Gender & Society (home page for journal)
Sociological organizations
Academic organizations based in the United States
Feminist organizations in the United States
Organizations for women in science and technology
1969 establishments in California
1969 in San Francisco | Sociologists for Women in Society | [
"Technology"
] | 252 | [
"Organizations for women in science and technology",
"Women in science and technology"
] |
9,673,217 | https://en.wikipedia.org/wiki/Sakurai%27s%20Object | Sakurai's Object (V4334 Sagittarii) is a star in the constellation of Sagittarius. It is thought to have previously been a white dwarf that, as a result of a very late thermal pulse, swelled and became a red giant. It is located at the center of a planetary nebula and is believed to currently be in thermal instability and within its final shell helium flash phase.
At the time of its discovery, astronomers believed Sakurai's Object to be a slow nova. Later spectroscopic analysis suggested that the star was not a nova, but had instead undergone a very late thermal pulse similar to that of V605 Aquilae, causing it to vastly expand. V605 Aquilae, which was discovered in 1919, is the only other star known to have been observed during the high luminosity phase of a very late thermal pulse, and models predict that Sakurai's Object, over the next few decades, will follow a similar life cycle.
Sakurai's Object and other similar stars are expected to end up as helium-rich white dwarfs after retracing their evolution track from the "born-again" giant phase back to the white dwarf cooling track. There are few other suspected "born-again" objects, one example being FG Sagittae. Having erupted in 1995, it is expected that Sakurai's Object's final helium flash will be the first well-observed one.
Observation history
An International Astronomical Union Circular sent on 23 February 1996 announced the discovery of a "possible 'slow' nova" of magnitude 11.4 by Yukio Sakurai, an amateur astronomer. Japanese astronomer Syuichi Nakano reported the discovery, drawing attention to the fact that the object had not been visible in images from 1993 nor in Center for Astrophysics Harvard & Smithsonian records for the years 1930–1951, despite it appearing to slowly brighten over the previous years. Nakano wrote that "While the outburst [suggests] a slow or symbiotic nova, the lack of obvious emission lines one year after brightening is very unusual."
Following the initial announcement, Hilmar Duerbeck published a study investigating the "possible final helium flash" seen by Sakurai. In it, the authors noted that the location of Sakurai's Object corresponded to a faint object of magnitude 21 detected in 1976, and discussed other observations from 1994–1996, by which time the magnitude had increased to around 11–15. By investigating the measured fluxes, angular diameter, and mass of the nebula, a distance of 5.5 kpc was determined, together with an estimate of the luminosity. The researchers noted that this was in agreement with the object's appearance and with model predictions, and that the outburst luminosity was around 3,100 solar luminosities, lower than predicted by a factor of 3.
The first infrared observations were published in 1998, in which both near and far infrared spectroscopy data was presented. The collected data showed Sakurai's Object's steep brightening in 1996, followed by a sharp decline in 1999 as expected. It was later found that the star's steep decline in light was due to the circumstellar dust located around the star, which was present at a temperature of ~680 K. Further infrared data recorded by the United Kingdom Infrared Telescope was published in 2000, in which findings of the changing absorption lines were discussed.
Observations from the United Kingdom Infrared Telescope (UKIRT) in 1999 revealed that the star is in an RCB-like phase, with the release of dust and a huge loss of mass.
Since 2005, photoionization of carbon has been observed in the material ejected by Sakurai's Object.
Properties
Sakurai's Object is a highly evolved post-asymptotic giant branch star which has, following a brief period on the white dwarf cooling track, undergone a helium shell flash (also known as a very late thermal pulse). The star is thought to have a mass of around . Observations of Sakurai's Object show increasing reddening and pulsing activity, suggesting that the star is exhibiting thermal instability during its final helium-shell flash.
Prior to its reignition V4334 Sgr is thought to have been cooling towards a white dwarf with a temperature around 100,000 K and a luminosity around . The luminosity rapidly increased about a hundred-fold and then the temperature decreased to around 10,000 K. The star developed the appearance of an F class supergiant (F2 Ia). The apparent temperature continued to cool to below 6,000 K and the star was gradually obscured at optical wavelengths by the formation of carbon dust, similar to an R CrB star. Since then the temperature has increased to around 20,000 K.
The properties of Sakurai's Object are quite similar to that of V605 Aquilae. V605, discovered in 1919, is the only other known star observed during the high luminosity phase of a very late thermal pulse, and Sakurai's Object is modeled to increase in temperature in the next few decades to match the current state of V605.
Dust cloud
During the second half of 1998 an optically thick dust shell obscured Sakurai's Object, causing a rapid decrease in visibility of the star, until in 1999 it disappeared from optical wavelength observations altogether. Infrared observations showed that the dust cloud around the star is primarily carbon in an amorphous form. In 2009 it was discovered that the dust shell is strongly asymmetrical, as a disc with a major axis oriented at an angle of 134°, and inclination of around 75°. The disc is thought to be growing more opaque due to the fast spectral evolution of the source towards lower temperatures.
Planetary nebula
Sakurai's Object is surrounded by a planetary nebula created following the star's red giant phase around 8300 years ago. It has been determined that the nebula has a diameter of 44 arcseconds and expansion velocity of roughly 32 km/s.
Similarities to other stars
Research in 1996 revealed that Sakurai's Object possessed the characteristics of an R Coronae Borealis variable star, with the anomaly of a carbon-13 (13C) deficit. Also, the metallicity of Sakurai's Object in 1996 was similar to that of V605 Aquilae in 1921, and it is expected that the metallicity of Sakurai's Object will grow to match that of V605 Aquilae.
Significance in astronomical research
A significant amount of new data on star formation and destruction is expected to be obtained from continued observation of Sakurai's Object, and to serve as reference data for future research on similar stars. For example, Sakurai's Object is a prime target for studying the recombination that occurs after planetary nebulae are ionized, because the conditions would be very difficult to replicate in a laboratory. Why stars such as Sakurai's Object and V605 Aquilae exist, and why they experience a shorter lifespan than most stars, is largely unknown. Sakurai's Object and V605 Aquilae have been observed exhibiting born-again behavior for only about 10 years, while FG Sagittae has undergone such behavior for 120 years. It is hypothesized that this is because Sakurai's Object and V605 Aquilae are evolving to the asymptotic giant branch for the first time, while FG Sagittae is undergoing the process a second time.
References
External links
AAVSO Variable Star of the Month. V4334 Sgr (Sakurai's Object): June 2002
Stellar evolution
Sagittarius (constellation)
Sagittarii, V4334
Novae
J17523269-1741080
F-type supergiants
Astronomical objects discovered in 1996 | Sakurai's Object | [
"Physics",
"Astronomy"
] | 1,609 | [
"Novae",
"Astronomical events",
"Constellations",
"Astrophysics",
"Stellar evolution",
"Sagittarius (constellation)"
] |
9,673,324 | https://en.wikipedia.org/wiki/Electron%20beam-induced%20deposition | Electron-beam-induced deposition (EBID) is a process of decomposing gaseous molecules by an electron beam leading to deposition of non-volatile fragments onto a nearby substrate. The electron beam is usually provided by a scanning electron microscope, which results in high spatial accuracy (potentially below one nanometer) and the possibility to produce free-standing, three-dimensional structures.
Process
The focused electron beam of a scanning electron microscope (SEM) or scanning transmission electron microscope (STEM) is commonly used. Another method is ion-beam-induced deposition (IBID), where a focused ion beam is applied instead. Precursor materials are typically liquid or solid and gasified prior to deposition, usually through vaporization or sublimation, and introduced, at accurately controlled rate, into the high-vacuum chamber of the electron microscope. Alternatively, solid precursors can be sublimated by the electron beam itself.
When deposition occurs at a high temperature or involves corrosive gases, a specially designed deposition chamber is used; it is isolated from the microscope, and the beam is introduced into it through a micrometre-sized orifice. The small orifice size maintains the pressure differential between the microscope (under vacuum) and the deposition chamber (not under vacuum). Such a deposition mode has been used for EBID of diamond.
In the presence of the precursor gas, the electron beam is scanned over the substrate, resulting in deposition of material. The scanning is usually computer-controlled. The deposition rate depends on a variety of processing parameters, such as the partial precursor pressure, substrate temperature, electron beam parameters, applied current density, etc. It is usually on the order of 10 nm/s.
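The dependence of the growth rate on these parameters is often described with a continuum model that balances precursor adsorption, desorption and electron-induced dissociation. The Python sketch below implements the steady-state form of such a model; every numerical value is an illustrative assumption chosen only to land in the quoted ~10 nm/s regime, not data for any specific precursor or instrument.

```python
import math

# Steady-state continuum model of EBID vertical growth.
# Every numerical value below is an illustrative assumption.
s     = 1.0      # sticking coefficient (dimensionless)
F     = 5e19     # precursor flux at the surface, molecules m^-2 s^-1
n0    = 3e18     # monolayer density of adsorption sites, m^-2
tau   = 1e-3     # mean precursor residence time, s
sigma = 2e-20    # effective dissociation cross-section, m^2
V     = 5e-29    # deposited volume per dissociated molecule, m^3

# Electron flux for an assumed 1 nA beam focused into a 50 nm diameter spot
I_beam = 1e-9                                   # beam current, A
r_spot = 25e-9                                  # spot radius, m
f = I_beam / (1.602e-19 * math.pi * r_spot**2)  # electrons m^-2 s^-1

# Steady state of: dn/dt = s*F*(1 - n/n0) - n/tau - sigma*f*n = 0
n_ss = s * F / (s * F / n0 + 1.0 / tau + sigma * f)

rate_nm_s = V * sigma * f * n_ss * 1e9
print(f"predicted vertical growth rate ~ {rate_nm_s:.1f} nm/s")  # a few nm/s
```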
Deposition mechanism
Primary electron energies in SEMs or STEMs are usually between 10 and 300 keV, where reactions induced by electron impact, i.e. precursor dissociation, have a relatively low cross-section. The majority of decomposition occurs via low-energy electron impact: either by low-energy secondary electrons, which cross the substrate-vacuum interface and contribute to the total current density, or by inelastically scattered (backscattered) electrons.
Spatial resolution
Primary S(T)EM electrons can be focused into spots as small as ~0.045 nm. While the smallest structures deposited so far by EBID are point deposits of ~0.7 nm diameter, deposits usually have a larger lateral size than the beam spot. The reason is the so-called proximity effect: secondary, backscattered and forward-scattered electrons (the latter arising when the beam dwells on already deposited material) also contribute to the deposition. As these electrons can leave the substrate up to several microns away from the point of impact of the electron beam (depending on its energy), material deposition is not necessarily confined to the irradiated spot. To overcome this problem, compensation algorithms can be applied, as is typical for electron beam lithography.
Materials and precursors
As of 2008 the range of materials deposited by EBID included Al, Au, amorphous carbon, diamond, Co, Cr, Cu, Fe, GaAs, GaN, Ge, Mo, Nb, Ni, Os, Pd, Pt, Rh, Ru, Re, Si, Si3N4, SiOx, TiOx, W, and was being expanded. The limiting factor is the availability of appropriate precursors, gaseous or having a low sublimation temperature.
The most popular precursors for deposition of elemental solids are metal carbonyls of the Me(CO)x type or metallocenes. They are easily available; however, due to incorporation of carbon atoms from the CO ligands, deposits often exhibit a low metal content. Metal halides (WF6, etc.) result in cleaner deposits, but are more difficult to handle as they are toxic and corrosive. Compound materials are deposited from specially crafted, exotic gases, e.g. D2GaN3 for GaN.
Advantages
Very flexible regarding deposit shape and composition; the electron beam is lithographically controlled and a multitude of potential precursors is available
Lateral size of the produced structures and accuracy of deposition are unprecedented
The deposited material can be characterized using the electron microscopy techniques (TEM, EELS, EDS, electron diffraction) during or right after deposition. In situ electrical and optical characterization is also possible.
Disadvantages
Serial material deposition and low deposition rates in general limit throughput and thus mass production
Controlling the elemental or chemical deposit composition is still a major challenge, as the precursor decomposition pathways are mostly unknown
Proximity effects can lead to unintended structure broadening
Ion-beam-induced deposition
Ion-beam-induced deposition (IBID) is very similar to EBID with the major difference that focused ion beam, usually 30 keV Ga+, is used instead of the electron beam. In both techniques, it is not the primary beam, but secondary electrons which cause the deposition. IBID has the following disadvantages as compared to EBID:
Angular spread of secondary electrons is larger in IBID thus resulting in lower spatial resolution.
Ga+ ions introduce additional contamination and radiation damage to the deposited structure, which is important for electronic applications.
Deposition occurs in a focused ion beam (FIB) setup, which strongly limits characterization of the deposit during or right after the deposition. Only SEM-like imaging using secondary electrons is possible, and even that imaging is restricted to short observations due to sample damage by the Ga+ beam. The use of a dual-beam instrument, which combines a FIB and an SEM in one, circumvents this limitation.
The advantages of IBID are:
Much higher deposition rate
Higher purity
Shapes
Nanostructures of virtually any three-dimensional shape can be deposited using computer-controlled scanning of the electron beam. Only the starting point has to be attached to the substrate; the rest of the structure can be free-standing. The achieved shapes and devices are remarkable:
World's smallest magnet
Fractal nanotrees
Nanoloops (potential nanoSQUID device)
Superconducting nanowires
See also
Electron microscopy
Focused ion beam
Metal carbonyl
Metallocene
Organometallic chemistry
Scanning electron microscope
Scanning transmission electron microscopy
Transmission electron microscopy
Lisa McElwee-White (researcher)
References
External links
"Nanofabrication: Fundamentals and Applications" Ed.: Ampere A. Tseng, World Scientific Publishing Company (March 4, 2008), ,
K. Molhave: "Tools for in-situ manipulations and characterization of nanostructures", PhD thesis, Technical University of Denmark, 2004
Electron beam
Electron microscopy
Nanotechnology | Electron beam-induced deposition | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,338 | [
"Electron",
"Electron microscopy",
"Electron beam",
"Materials science",
"Microscopy",
"Nanotechnology"
] |
13,692,927 | https://en.wikipedia.org/wiki/35%20Piscium | 35 Piscium is a triple star system in the northern constellation Pisces, located about 250 light years away from the Sun. Because it is a variable star, it has been given the variable star designation UU Piscium; 35 Piscium is the Flamsteed designation. This system is faintly visible to the naked eye with a combined apparent visual magnitude of 5.88. It is catalogued as a member of the IC 2391 supercluster by Olin J. Eggen.
In the past the inner pair, designated component A, has been described as an eclipsing binary system, showing a primary minimum of magnitude 6.05 and a secondary minimum of 6.04. The stars have an orbital period of 0.841658 days, zero eccentricity, and an inclination of 19 degrees. However, Bruno Cester argued that the apparent eclipses are not real, and were instead caused by seeing different portions of tidally distorted stars in a near-contact binary system. As of 2017, it is classified as a rotating ellipsoidal variable and possibly a W Ursae Majoris-type system, although not in physical contact. The components of this pair appear to be equal, with stellar classifications of F0 V or F0 IV.
The magnitude 7.72 tertiary member, designated component B, lies at an angular separation of from the main pair.
References
F-type main-sequence stars
F-type subgiants
Spectroscopic binaries
Triple star systems
Pisces (constellation)
Durchmusterung objects
Piscium, 035
001061
001196
0050
Piscium, UU
TIC objects | 35 Piscium | [
"Astronomy"
] | 340 | [
"Pisces (constellation)",
"Constellations"
] |
13,693,008 | https://en.wikipedia.org/wiki/Physiomics | Physiomics is a systematic study of physiome in biology. Physiomics employs bioinformatics to construct networks of physiological features that are associated with genes, proteins and their networks. A few of the methods for determining individual relationships between the DNA sequence and physiological function include metabolic pathway engineering and RNAi analysis. The relationships derived from methods such as these are organized and processed computationally to form distinct networks. Computer models use these experimentally determined networks to develop further predictions of gene function.
History
Physiomics arose from the imbalance between the amount of data being generated by genome projects and the technological ability to analyze the data on a large scale. As technologies such as high-throughput sequencing were being used to generate large amounts of genomic data, effective methods needed to be designed to experimentally interpret and computationally organize this data. Science can be illustrated as a cycle linking knowledge to observations. In the post-genomic era, the ability of computational methods to aid in this observation became evident. This cycle, aided by computer models, is the basis for bioinformatics and, thus, physiomics.
Physiome projects
In 1993, the International Union of Physiological Sciences (IUPS) in Australia presented a physiome project with the purpose of providing a quantitative description of physiological dynamics and functional behavior of the intact organism. The Physiome Project became a major focus of the IUPS in 2001. The National Simulation Resource Physiome Project is a North American project at The University of Washington. The key elements of the NSR Project are the databasing of physiological, pharmacological, and pathological information on humans and other organisms and integration through computational modeling. Other North American projects include the Biological Network Modeling Center at the California Institute of Technology, the National Center for Cell Analysis and Modeling at The University of Connecticut, and the NIH Center for Integrative Biomedical Computing at The University of Utah.
Research applications
There are many different possible applications of physiomics, each requiring different computational models or the combined use of several different models. Examples of such applications include a three dimensional model for tumor growth, the modelling of biological pattern formation, a mathematical model for the formation of stretch marks in humans, and predictive algorithms for the growth of viral infections within insect hosts.
Modelling and simulation software
Collaborative physiomics research is promoted in part by the open availability of bioinformatics software such as simulation programs and modelling environments. There are many institutions and research groups that make their software available to the public. Examples of openly available software include:
JSim and Systems Biology Workbench – bioinformatics tools offered by The University of Washington.
BISEN – a simulation environment made available by The Medical College of Wisconsin.
SimTK – a collection of biological modelling resources made available by The National NIH Center for Biomedical Computing.
E-Cell System – a simulation and modelling environment for biological systems offered by Keio University in Tokyo, Japan.
Tools such as these are developed using markup languages specific to bioinformatics research. Many of these markup languages are freely available for use in software development, such as CellML, NeuroML, and SBML.
See also
Genomics
Omics
Phenomics
Proteomics
References
External links
List of omics – Lists far more than this page, with references/origins. Maintained by the (CHI) Cambridge Health Institute. One of the earliest lists.
National Centers for Systems Biology – News and information about systems biology research centers.
Omics | Physiomics | [
"Biology"
] | 722 | [
"Bioinformatics",
"Omics"
] |
13,693,330 | https://en.wikipedia.org/wiki/Thiazyl%20trifluoride | Thiazyl trifluoride is a chemical compound of nitrogen, sulfur, and fluorine, having the formula . It exists as a stable, colourless gas, and is an important precursor to other sulfur-nitrogen-fluorine compounds. It has tetrahedral molecular geometry around the sulfur atom, and is regarded to be a prime example of a compound that has a sulfur-nitrogen triple bond.
Preparation
NSF3 can be synthesised by the fluorination of thiazyl fluoride, NSF, with silver(II) fluoride, AgF2:

NSF + 2 AgF2 → NSF3 + 2 AgF
or by the oxidative decomposition of by silver(II) fluoride:
It is also a product of the oxidation of ammonia by .
Direct fluorination of mercury difluorosulfinimide (Hg(NSF2)2) does not give thiazyl trifluoride, but instead the isomeric fluoriminosulfur difluoride (F2SNF).
Reactions
Thiazyl trifluoride is much more stable than thiazyl fluoride: it does not react with ammonia or hydrogen chloride, and only reacts with sodium at 400 °C. However, the fluoride ligands are labile, and can be displaced by secondary amines. Thiazyl trifluoride reacts with carbonyl fluoride (COF2) in the presence of hydrogen fluoride to form pentafluorosulfanyl isocyanate (SF5NCO).
References
Fluorides
Nonmetal halides
Nitrides
Sulfur–nitrogen compounds | Thiazyl trifluoride | [
"Chemistry"
] | 316 | [
"Fluorides",
"Salts"
] |
13,693,464 | https://en.wikipedia.org/wiki/Thiazyl%20fluoride | Thiazyl fluoride, NSF, is a colourless, pungent gas at room temperature and condenses to a pale yellow liquid at 0.4 °C. Along with thiazyl trifluoride, NSF3, it is an important precursor to sulfur-nitrogen-fluorine compounds. It is notable for its extreme hygroscopicity.
Synthesis
Thiazyl fluoride can be synthesized by various methods, such as fluorination of tetrasulfur tetranitride with silver(II) fluoride or mercuric fluoride. It can be purified by vacuum distillation. However, because this synthetic pathway yields numerous side-products, an alternative approach is the reaction of imino(triphenyl)phosphines with sulfur tetrafluoride, in which cleavage of the P=N bond forms sulfur difluoride imides and triphenyldifluorophosphorane. These products readily decompose, yielding thiazyl fluoride.
For synthesis on a preparative scale, the decomposition of compounds already containing the N-S-F moiety is commonly used.
Reactivity
Reactions with electrophiles and Lewis acids
Lewis acids remove fluoride to afford thiazyl ([NS]+) salts; for example, with arsenic pentafluoride:

NSF + AsF5 → [NS]+[AsF6]−
Thiazyl fluoride functions as a ligand in several transition metal complexes, including those of cobalt and nickel. In all of its complexes, NSF is bound to the metal center through nitrogen.
Reactions with nucleophiles
Thiazyl fluoride reacts violently with water.
Nucleophilic attack on thiazyl fluoride occurs at the sulfur atom.
Fluoride itself adds to give the [NSF2]− adduct.
The halogen derivatives XNSF2 (X = F, Cl, Br, I) can be synthesized by reacting Hg(NSF2)2 with X2; of these, ClNSF2 is the most stable compound in the series.
Oligomerization and cycloaddition
At room temperature, thiazyl fluoride undergoes cyclic trimerization via its N-S multiple bonding:

3 NSF → (NSF)3
The resulting cyclic trimer is 1,3,5-trifluoro-1,3,5,2,4,6-trithiatriazine, in which each sulfur atom remains tetravalent.
Thiazyl fluoride also undergoes exothermic cycloaddition reactions with dienes.
Structure and bonding
The N−S bond length is 1.448 Å, which is short, indicating multiple bonding; the molecule is commonly represented by resonance structures with both N=S and N≡S character.
The NSF molecule has 18 valence electrons in total and is isoelectronic with sulfur dioxide. Thiazyl fluoride adopts Cs symmetry and has been shown by isotopic substitution to be bent in the ground state. A combination of rotational analysis with Franck-Condon calculations has been applied to study the A′′ ← A′ electronic excitation, which results in elongation of the N-S bond by 0.11 Å and a decrease in the N-S-F bond angle by 15.3°.
References
Fluorides
Nonmetal halides
Nitrides
Thiohalides
Sulfur–nitrogen compounds | Thiazyl fluoride | [
"Chemistry"
] | 671 | [
"Fluorides",
"Salts"
] |
13,693,851 | https://en.wikipedia.org/wiki/Specified%20minimum%20yield%20strength | Specified Minimum Yield Strength (SMYS) means the specified minimum yield strength for steel pipe manufactured in accordance with a listed specification1. This is a common term used in the oil and gas industry for steel pipe used under the jurisdiction of the United States Department of Transportation. It is an indication of the minimum stress a pipe may experience that will cause plastic (permanent) deformation.
The SMYS is required to determine the maximum allowable operating pressure (MAOP) of a pipeline, as determined by Barlow's formula, P = (2 × S × T)/(OD × SF), where P is pressure, OD is the pipe's outside diameter, S is the SMYS, T is the wall thickness, and SF is a safety factor.
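As a worked illustration of Barlow's formula, the short Python function below computes the MAOP for an assumed pipe; the pipe grade, dimensions, and safety factor are examples, not values taken from any particular regulation or pipeline.

```python
def barlow_maop(smys_psi: float, wall_in: float, od_in: float, sf: float) -> float:
    """Barlow's formula as given above: P = (2 * S * T) / (OD * SF)."""
    return (2.0 * smys_psi * wall_in) / (od_in * sf)

# Example: API 5L X52 pipe (SMYS = 52,000 psi), 12.75 in OD, 0.375 in wall,
# with an assumed safety factor of 1.39 (the reciprocal of a 0.72 design factor).
p = barlow_maop(smys_psi=52_000.0, wall_in=0.375, od_in=12.75, sf=1.39)
print(f"MAOP ~ {p:.0f} psi")  # ~2201 psi
```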
See also
History of the petroleum industry in the United States
References
ASME B31G-2012, "Manual for Determining the Remaining Strength of Corroded Pipelines", p. 2
Mechanical standards
Petroleum in the United States
Plasticity (physics) | Specified minimum yield strength | [
"Materials_science",
"Engineering"
] | 206 | [
"Deformation (mechanics)",
"Mechanical standards",
"Mechanical engineering",
"Plasticity (physics)"
] |
13,694,127 | https://en.wikipedia.org/wiki/Sump%20%28cave%29 | A sump, or siphon, is a passage in a cave that is submerged under water. A sump may be static, with no inward or outward flow, or active, with continuous through-flow. Static sumps may also be connected underwater to an active stream passage. When short in length, a sump may be called a duck, however this can also refer to a section or passage with some (minimal) airspace above the water.
Depending on hydrological factors specific to a cave – such as the sea tide, changes in river flow, or the relationship with the local water table – sumps and ducks may fluctuate in water level and depth (and sometimes in length, due to the shape of adjacent passage).
Exploration past a sump
Diving
Short sumps may be passed simply by holding one's breath while ducking through the submerged section (for example, Sump 1 in Swildon's Hole). This is known as "free diving" and can only be attempted if the sump is known to be short and not technically difficult (e.g. constricted or requiring navigation). Longer and more technically difficult sumps can only be passed by cave diving (as happened repeatedly in the exploration of Krubera Cave).
Draining
When practical, a sump can also be drained using buckets, pumps or siphons. Pumping the water away requires the inward flow of water into the sump to be less than the rate at which the pump empties it, as well as a suitable place to collect the emptied water. Upstream sumps have been successfully emptied using hoses to siphon water out of them, such as at the Sinkhole Dersios during exploration in 2005. The water was sent deeper into the sinkhole, and the emptied sumps revealed virgin passage behind them. During a rescue from beyond a downstream sump at Sarkhos Cave in 2002, water was pumped upstream into a dam constructed a few metres above the flooded passage.
Some manuals also mention the use of explosives or other forms of force to empty sumps, but the ecological damage done to the fragile cave environment usually rules out the use of such methods.
See also
References
External links
Sump Rescue
Caving
Hydrology | Sump (cave) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 459 | [
"Hydrology",
"Environmental engineering"
] |
13,695,352 | https://en.wikipedia.org/wiki/Dynamic%20electrophoretic%20mobility | Dynamic electrophoretic mobility is a parameter that determines intensity of electroacoustic phenomena, such as Colloid Vibration Current and Electric Sonic Amplitude in colloids. It is similar to electrophoretic mobility, but at high frequency, on a scale of megahertz. Usual electrophoretic mobility is the low frequency limit of the dynamic electrophoretic mobility.
Colloidal chemistry
Condensed matter physics
Soft matter | Dynamic electrophoretic mobility | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 88 | [
"Materials science stubs",
"Colloidal chemistry",
"Soft matter",
"Phases of matter",
"Materials science",
"Colloids",
"Surface science",
"Condensed matter physics",
"Condensed matter stubs",
"Matter"
] |
13,696,274 | https://en.wikipedia.org/wiki/Goldeneye%20Gas%20Platform | Goldeneye Gas Platform was an unmanned and now demolished offshore gas production platform in North Sea block 14/29, in the South Halibut basin area of the outer Moray Firth, 105 km northeast of St Fergus Gas Plant in Scotland.
Field
The field was discovered in October 1996 with well 14/29a-3. The field extends into blocks 14/28b, 20/30b and 20/4b. The reservoir is a Lower Cretaceous Captain sandstone with high rates of production, up to per day at standard conditions per well, and lies at a depth of .
Infrastructure
The jacket was a four-legged piled structure weighing 3,500 tonnes, anchored by eight piles weighing 2,500 tonnes. The jacket supported the 1,000-tonne topsides, which were designed by SLP Engineering Ltd. Production was from five wells, with well fluids separated by a single vertical separator vessel. Separated liquids were re-injected into the export gas stream without further treatment. Facilities were provided for the installation of a future produced-water coalescer and flash drum. Gas and liquid were piped ashore to St Fergus under well pressure, via a 20-inch pipeline without using compressors, where the gas was processed. It operated in of water.
Production
Production started in 2004 and ceased in 2011. The depleted field had potential use for carbon dioxide storage. The topsides and jacket were removed in September 2021 and taken to Vats, Norway, for dismantling and recycling.
Like most North Sea fields operated by Shell, it is named after a bird - in this case Bucephala clangula, a small duck found in Scotland and elsewhere.
See also
Operation Goldeneye - World War II operation involving Ian Fleming
References
North Sea energy
Natural gas platforms
Buildings and structures in Aberdeenshire
Oil and gas industry in Scotland
2004 establishments in Scotland | Goldeneye Gas Platform | [
"Engineering"
] | 367 | [
"Structural engineering",
"Natural gas platforms"
] |
13,697,217 | https://en.wikipedia.org/wiki/Cable%20management | Cable management refers to management of electrical or optical cable in a cabinet or an installation. The term is used for products, workmanship or planning.
Cables can easily become tangled, making them difficult to work with, sometimes resulting in devices accidentally becoming unplugged as one attempts to move a cable. Such tangles are known as "cable spaghetti", and they can make problem diagnosis and future updates to the installation very difficult.
Cable management both supports and contains cables during installation, and makes subsequent maintenance or changes to the cable system easier. Products such as cable trays, cable ladders, and cable baskets are used to support a cable through cabling routes.
Cable selection
The choice of cables is also important; for instance, ribbon cables used to connect Parallel ATA drives to the motherboard can disrupt the airflow inside of computers, making case fans less effective; most SATA cables are more compact and therefore do not have this problem.
Cable labeling
Color-coding of cables is sometimes used to keep track of which is which. For instance, the wires coming out of ATX power supplies are color-coded by voltage. Documenting and labeling cable runs, tying related cables together with cable ties, cable lacing, rubber bands or other means, running them through cable guides, and clipping or stapling them to walls are other common methods of keeping them organized. Above drop ceilings, hooks or trays are used to organize cables and protect them from electrical interference.
Planning is especially crucial for cables such as thicknet, which does not bend around corners easily, and fiber-optic cable, which is very difficult to splice once cut.
Cable strain relief
Cable strain relief is a mechanical protection for flexible electrical cables, wires, conduits and pneumatic hoses. It is regulated by the European standard EN 62444 (formerly EN 50262).
With a strain relief component, the connection between a flexible electrical line and its connection port is protected against mechanical stress. Usually, the lines are fixed by clamping them into single cable clamps made of plastic or metal. Another possibility is to use so-called cord grips, which consist of woven wire strands that grip the cables.
A more cable-friendly alternative is attaching the lines to special strain relief plates using common cable ties. In industrial applications, these strain relief plates are also cost-effective because the packing density (meaning the possible number of lines to be fixed on one plate) is much higher than with common cable clamps, which are normally designed to hold one single line.
Furthermore, most of the available cable clamps are not very flexible when it comes to routing lines with varying diameters. That causes higher acquisition and storage costs. The installation of the single cable clamps can take a lot of mounting time, depending on the laying length of the lines. Strain relief plates are therefore a more flexible solution which allows a parallel routing of several lines with varying diameters.
Strain relief is often required for terminated electrical lines that are plugged into sockets or ports, to prevent unplugging or accidental ripping out of the connector. The point at which the lines have to be strain-relieved depends on the application. For PROFINET, for example, which is used in automation, it is recommended to place the strain relief component approximately 1 m (3.5 ft) from the connection point.
Strain relief components are also used in applications where cables, conduits and hoses are exposed to constant dynamic stress (cable carriers / drag chains).
Computer data cabling, structured cabling, LAN cabling
Generally, one end of a cable is terminated in the data cabinet, while the other end terminates at the desk. The cable management needs at either end are different.
Buildings and office furniture are often designed with cable management in mind; for instance, desks sometimes have holes to pass cables, and dropped ceilings, raised floors and in-floor cellular raceway systems provide easy access. Some cables have requirements for minimum bending radius or proximity to other cables, particularly power cables, to avoid crosstalk or interference. Power cables often need to be grouped separately and suitably apart from data cables, and only cross at right angles which minimizes electromagnetic interference.
The organized routing of cables inside the computer case allows for optimal airflow and cooling. Good cable management also makes working inside a computer much easier by providing safer hardware installation, repair, or removal. Some PC mod enthusiasts showcase the internal components of their systems with a window mod, which displays the aesthetics of internal cabling as well as the skills and wealth of the modder.
The IT industry needs data cables to be added, moved, or removed many times during the life of the installation. It is usual practice to install "fixed cables" between cabling closets or cabinets. These cables are contained in cable trays etc., and are terminated at each end onto patch panels in the communications cabinet or outlets at the desktop. The circuits are then interconnected to the final destination using patch cables.
Software, such as data center infrastructure management (DCIM), is sometimes used to manage cabling for large IT infrastructures.
Cable planning
Because large IT infrastructures often encompass vast networks of cables — all of which need to be serviced, removed, added, and so on throughout an installation lifecycle — cable planning is a necessity.
Different methods of cable planning may be employed, depending upon the level of detail required for proper management. Spreadsheet software can be used for this purpose, but there is often a need to visually organize information that goes beyond the capabilities of such general-purpose software. For proper visualization of cabling, companies may opt to use a cable management software package.
In hospitals
In hospital situations, cable management can be critical to preventing medical mistakes. In these settings, cable management includes tubes and hoses used for liquids and gases used in healthcare, along with electrical and other cables. Emergency room nurse manager Pat Gabriel said, "My wish is that we could somehow not have spaghetti on the bed. When you look at all those wires and those IVs, it's just spaghetti".
Cabling in healthcare facilities must be grounded, shielded and routed in accordance with life safety codes to minimize interference with medical equipment.
In offices
Cables are managed by five methods in commercial buildings:
Concrete trenching - a trench is dug into the building's concrete. Cabling and floor boxes are installed and the trench is sealed with concrete.
Floor decking - cables are installed on the ceiling of the floor below. Holes are drilled through the floor and outlets are installed on top of the floor.
Overhead cabling - cables are installed on the ceiling. Cable drop downs give users access to outlets.
Access flooring - cables are installed below a raised floor.
In-floor cellular raceway systems - utilize enclosed steel raceways located within a concrete floor slab to distribute power, data, and telecom cabling throughout a space to any location where these services are required today, and where they may be required in the future.
Cord concealers (also called cord protectors) are commonly used in offices to prevent accidents while protecting the cord and the appliance it is attached to. Office furniture sometimes has built-in cable management solutions.
In moving equipment
Cable management is particularly important in powered equipment which must move large distances while tethered to a power source and control cabling. There are several common methods of cable management.
With a suspended sliding coil, the cables are coiled like a spring, with each loop of the coil attached to a sliding shoe on a track. As the cabling is paid out, the shoes slide individually along the track and the coils expand. When sliding the other direction, the coils fold back together into a compact spiral. This is also referred to as a festoon.
Folded linear cable uses either a flexible backbone shell, or a flat cable folded into an arc along its long axis. This style of cabling is very common in computer printers to connect the printhead to the circuitry, but is also used in very large linear moving gantries. The cables are flexed only in a small region in a tight radius and so need to be very flexible.
See also
Cable harness
Cable entry system
Cable grommet
Cable gland
Cable tray
Cable dressing
Underwriter's knot
References
Information technology management
Management
Metaphors referring to spaghetti | Cable management | [
"Technology"
] | 1,679 | [
"Information technology",
"Information technology management"
] |
13,697,553 | https://en.wikipedia.org/wiki/Grundmann%20aldehyde%20synthesis | The Grundmann aldehyde synthesis is a chemical reaction that produces an aldehyde from an acyl halide.
Because the Rosenmund reduction and DIBAL-H accomplish similar transformations, this reaction sequence is rarely practiced today.
References
Name reactions
Organic redox reactions | Grundmann aldehyde synthesis | [
"Chemistry"
] | 58 | [
"Name reactions",
"Chemical reaction stubs",
"Organic redox reactions",
"Organic reactions"
] |
13,697,814 | https://en.wikipedia.org/wiki/ITK-SNAP | ITK-SNAP is an interactive software application that allows users to navigate three-dimensional medical images, manually delineate anatomical regions of interest, and perform automatic image segmentation. The software was designed with the audience of clinical and basic science researchers in mind, and emphasis has been placed on having a user-friendly interface and maintaining a limited feature set to prevent feature creep. ITK-SNAP is most frequently used to work with magnetic resonance imaging (MRI), cone-beam computed tomography (CBCT) and computed tomography (CT) data sets.
Features
The purpose of the tool is to make it easy for researchers to delineate anatomical structures and regions of interest in imaging data. The set of features is kept to a minimum. The main features of the program are:
Image navigation: three orthogonal cut planes through the image volume are shown at all times. The cut planes are linked by a common cursor, so that moving the cursor in one cut plane updates the other cut planes. The cursor is moved by dragging the mouse over the cut planes, making for smooth navigation. The linked cursor also works across ITK-SNAP sessions, making it possible to navigate multimodality imaging data (e.g., two MRI scans of a subject from a single session).
Manual segmentation: ITK-SNAP provides tools for manual delineation of anatomical structures in images. Labeling can take place in all three orthogonal cut planes, and results can be visualized as a three-dimensional rendering. This makes it easier to ensure that the segmentation maintains a reasonable shape in 3D.
Automatic segmentation: ITK-SNAP provides automatic segmentation functionality using the level-set method. This makes it possible to segment structures that appear somewhat homogeneous in medical images using very little human interaction. For example, the lateral ventricles in MRI can be segmented reliably, as can some types of tumors in CT and MRI.
ITK-SNAP is open-source software distributed under the GNU General Public License. It is written in C++ and it leverages the Insight Segmentation and Registration Toolkit (ITK) library. ITK-SNAP can read and write a variety of medical image formats, including DICOM, NIfTI, and Mayo Analyze. It also offers limited support for multi-component (e.g., diffusion tensor imaging) and multi-variate imaging data.
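Since ITK-SNAP writes segmentations as ordinary label images in formats such as NIfTI, they can be post-processed outside the application. The sketch below uses the separate SimpleITK Python package (not part of ITK-SNAP itself) to load a segmentation and report per-label volumes; the file name is a placeholder.

```python
import numpy as np
import SimpleITK as sitk

# Load an ITK-SNAP label image (hypothetical file name).
labels = sitk.ReadImage("itksnap_segmentation.nii.gz")
arr = sitk.GetArrayFromImage(labels)             # integer label array, indexed z, y, x
voxel_mm3 = float(np.prod(labels.GetSpacing()))  # volume of one voxel in mm^3

# Report the volume of each labelled structure (label 0 is background).
for label in np.unique(arr):
    if label == 0:
        continue
    volume_mm3 = np.count_nonzero(arr == label) * voxel_mm3
    print(f"label {label}: {volume_mm3:.1f} mm^3")
```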
Applications
ITK-SNAP has been applied in the following areas:
Craniofacial pathologies and anatomical studies
KCOT
Ameloblastoma
Cysts
Condyle Volumes
Carotid artery segmentation
Diffusion MRI Analysis
Target definition for cancer radiotherapy
lung cancer radiotherapy
Prenatal Image Analysis
Diagnosis of spina bifida
Virtual reality in Medicine
Orthodontics
Brain morphometry
Corpus callosum and ventricle analysis in 22q11.2 deletion syndrome
Hippocampus size and shape measurement in neurodegenerative disorders.
Human brain tumors (e.g., Meningioma)
References
External links
Main ITK-SNAP website (downloads, bugs, mailing lists)
ITK-SNAP on SourceForge
Medical software
Free health care software
Free DICOM software | ITK-SNAP | [
"Biology"
] | 654 | [
"Medical software",
"Medical technology"
] |
13,698,005 | https://en.wikipedia.org/wiki/Creighton%20process | The Creighton process involves the hydrogenation of a 6 carbon chain aldehyde. The reactant is 2,3,4,5,6-pentahydroxyhexanal (an aldehyde) and the product is 1,2,3,4,5,6-hexanehexol (an alcohol). The product thus has two more hydrogen atoms than the reactant: -CHO is replaced by -CH2OH.
The Creighton process was patented in the 1920s.
References
Name reactions | Creighton process | [
"Chemistry"
] | 112 | [
"Name reactions",
"Organic chemistry stubs"
] |
13,698,198 | https://en.wikipedia.org/wiki/In-band%20control | In-band control is a characteristic of network protocols with which data control is regulated. In-band control passes control data on the same connection as main data. Protocols that use in-band control include HTTP and SMTP. This is as opposed to Out-of-band control used by protocols such as FTP.
Example
Here is an example of an SMTP client-server interaction:
Server: 220 example.com
Client: HELO example.net
Server: 250 Hello example.net, pleased to meet you
Client: MAIL FROM: <jane.doe@example.net>
Server: 250 jane.doe@example.net... Sender ok
Client: RCPT TO: <john.doe@example.com>
Server: 250 john.doe@example.com ... Recipient ok
Client: DATA
Server: 354 Enter mail, end with "." on a line by itself
Client: Do you like ketchup?
Client: How about pickles?
Client: .
Server: 250 Message accepted for delivery
Client: QUIT
Server: 221 example.com closing connection
SMTP is in-band because the control messages, such as "HELO" and "MAIL FROM", are sent in the same stream as the actual message content.
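A minimal Python sketch of the client side of such a session makes the in-band property concrete: the control verbs and the message body travel over one and the same TCP socket. The host and port refer to a hypothetical local test server, and a real client would check each reply code rather than just printing it.

```python
import socket

def command(sock: socket.socket, line: str) -> str:
    """Send one line and return the server's reply; control and data share the socket."""
    sock.sendall(line.encode("ascii") + b"\r\n")
    return sock.recv(1024).decode("ascii").strip()

with socket.create_connection(("localhost", 2525)) as s:   # hypothetical test server
    print(s.recv(1024).decode("ascii").strip())            # 220 greeting
    print(command(s, "HELO example.net"))
    print(command(s, "MAIL FROM:<jane.doe@example.net>"))
    print(command(s, "RCPT TO:<john.doe@example.com>"))
    print(command(s, "DATA"))                               # server answers 354
    print(command(s, "Do you like ketchup?\r\n."))          # message body, then "." terminator
    print(command(s, "QUIT"))
```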
See also
Out-of-band control
Computer networks engineering | In-band control | [
"Technology",
"Engineering"
] | 267 | [
"Computing stubs",
"Computer networks engineering",
"Computer engineering",
"Computer network stubs"
] |
13,698,301 | https://en.wikipedia.org/wiki/Out-of-band%20control | Out-of-band control is a characteristic of network protocols with which data control is regulated. Out-of-band control passes control data on a separate connection from main data. Protocols such as FTP use out-of-band control.
FTP sends its control information, which includes user identification, password, and put/get commands, on one connection, and sends data files on a separate parallel connection. Because it uses a separate connection for the control information, FTP uses out-of-band control.
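Python's standard ftplib module makes the separation visible: one FTP object holds the persistent control connection, while each directory listing or file transfer negotiates its own short-lived data connection behind the scenes. The host, credentials, and file name below are placeholders.

```python
from ftplib import FTP

# The FTP() object keeps one long-lived *control* connection; every
# retrlines/retrbinary call negotiates a separate *data* connection.
with FTP("ftp.example.com") as ftp:            # placeholder host
    ftp.login("anonymous", "guest@example.net")
    ftp.retrlines("LIST")                      # directory listing over a data socket
    with open("readme.txt", "wb") as fh:       # placeholder file name
        ftp.retrbinary("RETR readme.txt", fh.write)
```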
See also
Out-of-band management
In-band control
Computer networks | Out-of-band control | [
"Technology"
] | 120 | [
"Computing stubs",
"Computer network stubs"
] |
13,699,176 | https://en.wikipedia.org/wiki/Blok%202BL | The Blok 2BL was a rocket stage, a member of Blok L family, used as an upper stage on some versions of the Molniya-M carrier rocket. It was used as a fourth stage to launch the Oko missile early warning defence spacecraft.
References
Rocket stages | Blok 2BL | [
"Astronomy"
] | 59 | [
"Rocketry stubs",
"Astronomy stubs"
] |
13,699,192 | https://en.wikipedia.org/wiki/In%20the%20Shadow%20of%20the%20Moon%20%28book%29 | In the Shadow of the Moon: A Challenging Journey to Tranquility is a 2007 non-fiction book by space historians Francis French and Colin Burgess. Drawing on a number of original personal interviews with astronauts, cosmonauts and those who worked closely with them, the book chronicles the American and Soviet programs from 1965 onwards, through the Gemini, Soyuz and early Apollo flights, up to the first landing on the Moon by Apollo 11.
The book is the second volume in the Outward Odyssey spaceflight history series by the University of Nebraska Press.
Although the book shares its name with a documentary, and both include many original interviews with Apollo lunar astronauts, it is neither a source of, nor a tie-in to, the documentary.
The book was named as a finalist for the 2007 Eugene M. Emme Award for Astronautical Literature given by the American Astronautical Society, and named as "2009 Outstanding Academic Title" by Choice magazine.
External links
In the Shadow of the Moon Official Publisher Site
In the Shadow of the Moon book review by Hugo-nominated reviewer Steven H Silver
2007 AAS Emme Award finalist announcements
2007 non-fiction books
Spaceflight books | In the Shadow of the Moon (book) | [
"Astronomy"
] | 232 | [
"Outer space stubs",
"Astronomy book stubs",
"Outer space",
"Astronomy stubs"
] |
13,699,607 | https://en.wikipedia.org/wiki/Moment%20%28unit%29 | A moment () is a medieval unit of time. The movement of a shadow on a sundial covered 40 moments in a solar hour, a twelfth of the period between sunrise and sunset. The length of a solar hour depended on the length of the day, which, in turn, varied with the season. Although the length of a moment in modern seconds was therefore not fixed, on average, a medieval moment corresponded to 90 seconds. A solar day can be divided into 24 hours of either equal or unequal lengths, the former being called natural or equinoctial, and the latter artificial. The hour was divided into four (quarter-hours), 10 , or 40 .
The unit was used by medieval computists before the introduction of the mechanical clock and the base 60 system in the late 13th century. The unit would not have been used in everyday life. For medieval commoners the main marker of the passage of time was the call to prayer at intervals throughout the day.
The earliest reference found to the moment is from the 8th-century writings of the Venerable Bede, who describes the system as 1 solar hour = 4 puncta = 5 lunar puncta = 10 minuta = 15 partes = 40 momenta. Bede was referenced five centuries later both by Bartholomeus Anglicus in his early encyclopedia De proprietatibus rerum (On the Properties of Things) and by Roger Bacon, by which time the moment was further subdivided into 12 ounces of 47 atoms each, although no such divisions could ever have been used in observation with equipment in use at the time.
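Because the moment is defined as a fixed fraction of a variable solar hour, its modern length is easy to compute for any day. The Python sketch below does this for three illustrative daylight durations and also evaluates the later subdivisions into ounces and atoms; the daylight figures are round examples, not historical measurements.

```python
def moment_seconds(daylight_hours: float) -> float:
    """One moment = 1/40 of a solar hour = 1/480 of the sunrise-to-sunset period."""
    solar_hour_s = daylight_hours * 3600.0 / 12.0
    return solar_hour_s / 40.0

for daylight in (8.0, 12.0, 16.0):   # short winter day, equinox, long summer day
    m = moment_seconds(daylight)
    print(f"{daylight:4.1f} h of daylight -> moment = {m:5.1f} s, "
          f"ounce = {m / 12:5.2f} s, atom = {m / 12 / 47:6.3f} s")

# At the equinox (12 h of daylight) a moment is exactly 90 s,
# an ounce 7.5 s, and an atom roughly 0.16 s.
```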
References
Units of time
| Moment (unit) | [
"Physics",
"Mathematics"
] | 318 | [
"Physical quantities",
"Time",
"Time stubs",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
13,700,439 | https://en.wikipedia.org/wiki/TA-CD | TA-CD is a vaccine developed by the Xenova Group and designed to negate the effects of cocaine, making it suitable for use in treatment of addiction. It is created by combining norcocaine with inactivated cholera toxin.
It works in much the same way as a regular vaccine. A large protein molecule attaches to cocaine and stimulates a response from antibodies, which destroy the molecule. This also prevents the cocaine from crossing the blood–brain barrier, negating the euphoric high and rewarding effect of cocaine caused by stimulation of dopamine release in the mesolimbic reward pathway. The vaccine does not affect the user's "desire" for cocaine, only the physical effects of the drug.
Results
Phase III clinical trials completed in 2014 showed no significant difference between users given placebo and users given TA-CD. Patients in the high-antibody group had a lower drop-out rate and fewer positive cocaine urine results in the last two weeks of the trial, but the difference was not significant versus the low-antibody or placebo groups. At other points of the study, however, high-antibody users had more positive urine results, most likely because users tried to overcome the antibodies by taking larger amounts of cocaine.
This vaccine does not have any effect on the underlying neurobiological cause of addiction which is a possible explanation for the clinical trial's failure.
See also
Cocaine haptens – structures which elicit anti-bodies against cocaine
TA-NIC – Similar nicotine vaccine
Notes
References
Scientific American Mind: Cocaine Vaccine
External links
Would you vaccinate your child against cocaine?
A thermostable bacterial cocaine esterase rapidly eliminates cocaine from brain in nonhuman primates. Translational Psychiatry (2014) 4, e407; doi:10.1038/tp.2014.48
Vaccines against drugs
Cocaine
Vaccines | TA-CD | [
"Biology"
] | 376 | [
"Vaccination",
"Vaccines"
] |
13,700,503 | https://en.wikipedia.org/wiki/Semantic%20interpretation | Semantic interpretation is an important component in dialog systems. It is related to natural language understanding, but mostly it refers to the last stage of understanding. The goal of interpretation is binding the user utterance to concept, or something the system can understand.
Typically this means creating a database query based on the user's utterance.
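A minimal illustration of the idea: mapping a recognized utterance onto a parameterized database query via a hand-written pattern. Real dialog systems use far richer grammars or statistical semantic parsers, and the pattern, schema, and column names below are invented for the example.

```python
import re

# Toy semantic interpreter: bind a user utterance to a concept the system
# understands, here a parameterized SQL query.  Pattern and schema are invented.
PATTERN = re.compile(r"show me flights from (\w+) to (\w+)", re.IGNORECASE)

def interpret(utterance: str):
    m = PATTERN.search(utterance)
    if m is None:
        return None                  # no binding found -> ask for clarification
    origin, dest = m.group(1), m.group(2)
    return ("SELECT * FROM flights WHERE origin = ? AND destination = ?",
            (origin.upper(), dest.upper()))

print(interpret("Show me flights from Boston to Denver"))
# ('SELECT * FROM flights WHERE origin = ? AND destination = ?', ('BOSTON', 'DENVER'))
```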
References
Multimodal interaction
Natural language processing | Semantic interpretation | [
"Technology",
"Engineering"
] | 73 | [
"Computer engineering",
"Computer engineering stubs",
"Natural language processing",
"Computing stubs",
"Natural language and computing"
] |
13,700,509 | https://en.wikipedia.org/wiki/Norcocaine | Norcocaine is a minor metabolite of cocaine. It is the only confirmed pharmacologically active metabolite of cocaine, although salicylmethylecgonine is also speculated to be an active metabolite. The local anesthetic potential of norcocaine has been shown to be higher than that of cocaine, however cocaine continues to be more widely used. Norcocaine used for research purposes is typically synthesized from cocaine. Several methods for the synthesis have been described.
Legal status
The legal status of norcocaine is somewhat ambiguous. The US DEA does not list norcocaine as a controlled substance. However, some suppliers of norcocaine, like Sigma-Aldrich, consider the drug to be a Schedule II drug (same as cocaine) for the purpose of their own sales.
Toxicity
The LD50 of norcocaine has been studied in mice. When administered by the intraperitoneal route, the LD50 in mice was 40 mg/kg.
Controversy
Some researchers have suggested that hair drug testing for cocaine use should include testing for metabolites like norcocaine. The basis for this suggestion is the potential for external contamination of hair during testing. There is considerable debate about whether current means of washing hair samples are sufficient for removing external contamination. Some researchers state that the methods are sufficient, while others state that residual contamination may result in a false positive test. Metabolites of cocaine, like norcocaine, should be present in samples from drug users in addition to cocaine itself, and authors have stated that the metabolites should be present in any samples declared positive. Issues arise because the metabolites are present in only low concentrations, and even when they are present, they may derive from other contamination.
References
Cocaine
Tropanes
Stimulants
Benzoate esters
Recreational drug metabolites
Human drug metabolites | Norcocaine | [
"Chemistry"
] | 393 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
13,700,748 | https://en.wikipedia.org/wiki/Microaggression | Microaggression is a term used for commonplace verbal, behavioral or environmental slights, whether intentional or unintentional, that communicate hostile, derogatory, or negative attitudes toward those of different races, cultures, beliefs, or genders. The term was coined by Harvard University psychiatrist Chester M. Pierce in 1970 to describe insults and dismissals which he regularly witnessed non-black Americans inflicting on African Americans. By the early 21st century, use of the term was applied to the casual disparagement of any socially marginalized group, including LGBT people, poor people, and disabled people. Psychologist Derald Wing Sue defines microaggressions as "brief, everyday exchanges that send denigrating messages to certain individuals because of their group membership". The persons making the comments may be otherwise well-intentioned and unaware of the potential impact of their words.
A number of scholars and social commentators have criticized the concept of microaggression for its lack of a scientific basis, over-reliance on subjective evidence, and promotion of psychological fragility. Critics argue that avoiding behaviors that one interprets as microaggressions restricts one's own freedom and causes emotional self-harm, and that employing authority figures to address microaggressions (i.e. call-out culture) can lead to an atrophy of those skills needed to mediate one's own disputes. Some argue that, because the term "microaggression" uses language connoting violence to describe verbal conduct, it can be abused to exaggerate harm, resulting in retribution and the elevation of victimhood.
D. W. Sue, who popularized the term microaggressions, has expressed doubts on how the concept is being used: "I was concerned that people who use these examples would take them out of context and use them as a punitive rather than an exemplary way." In the 2020 edition of his book with Lisa Spanierman and in a 2021 book with his doctoral students, Dr. Sue introduces the idea of "microinterventions" as potential solutions to acts of microaggression.
Description
Microaggressions are common, everyday slights and comments that relate to various aspects of one's appearance or identity such as class, gender, sex, sexual orientation, race, ethnicity, mother tongue, age, body shape, disability, or religion.
They are thought to spring from unconsciously held prejudices and beliefs which may be demonstrated consciously or unconsciously through daily verbal interactions. Although these communications typically appear harmless to observers, they are considered a form of covert racism or everyday discrimination. Microaggressions differ from what Pierce referred to as "macroaggressions", which are more extreme forms of racism (such as lynchings or beatings), due to their ambiguity, size and commonality. Microaggressions are experienced by most stigmatized individuals and occur on a regular basis. These can be particularly stressful for people on the receiving end, as they are easily denied by those committing them. They are also harder for members of the dominant culture to detect, as those members are often unaware they are causing harm. Sue describes microaggressions as including statements that repeat or affirm stereotypes about the minority group or subtly demean its members.
Race or ethnicity
Social scientists Sue, Bucceri, Lin, Nadal, and Torino (2007) described microaggressions as "the new face of racism", saying that the nature of racism has shifted over time from overt expressions of racial hatred and hate crimes, toward expressions of aversive racism, such as microaggressions, that are more subtle, ambiguous, and often unintentional. Sue says this has led some Americans to believe wrongly that non-white Americans no longer suffer from racism. One example of such subtle expressions of racism is Asian students being either pathologized or penalized as too passive or quiet. An incident that caused controversy at UCLA occurred when a teacher corrected a student's use of "indigenous" in a paper by changing it from upper- to lowercase.
According to Sue et al., microaggressions seem to appear in four forms:
Microassault: an explicit racial derogation; verbal/nonverbal; e.g. name-calling, avoidant behavior, purposeful discriminatory actions.
Microinsult: communications that convey rudeness and insensitivity and demean a person's racial heritage or identity; subtle snubs; unknown to the perpetrator; hidden insulting message to the recipient.
Microinvalidation: communications that exclude, negate, or nullify the psychological thoughts, feelings, or experiential reality of a person belonging to a particular group.
Environmental Microaggressions (Macro-Level): racial assaults, insults and invalidations which are manifested on a systemic and environmental level.
Some psychologists have criticized microaggression theory for assuming that all verbal, behavioral, or environmental indignities are due to bias. Thomas Schacht says that it is uncertain whether a behavior is due to racial bias or is a larger phenomenon that occurs regardless of identity conflict. However, Kanter and colleagues found that microaggressions were robustly correlated to five separate measures of bias. In reviewing the microaggression literature, Scott Lilienfeld suggested that microassaults should probably be struck from the taxonomy because the examples provided in the literature tend not to be "micro", but are outright assaults, intimidation, harassment and bigotry; in some cases, examples have included criminal acts. Others have pointed out that what could be perceived as subtle snubs could be due to people having conditions such as autism or social anxiety disorders, and assuming ill will could be harmful to these people.
Examples
In conducting two focus groups with Asian-Americans, for instance, Sue proposed different themes under the headings of microinsult and microinvalidation.
Microinvalidation:
Alien in own land: When people assume people of color are foreigners.
E.g.: "So where are you really from?" or "Why don't you have an accent?"
Denial of racial reality: When people emphasize that a person of color does not suffer from racial discrimination or inequality (this correlates to the idea of model minority).
Invisibility: Asian-Americans are considered invisible or outside discussions of race and racism.
E.g.: Discussions on race in the United States excluding Asian-Americans by focusing only on 'white and black' issues.
Refusal to acknowledge intra-ethnic differences: When a speaker ignores intra-ethnic differences and assumes a broad homogeneity over multiple ethnic groups.
E.g.: Descriptions such as "all Asian-Americans look alike", or assumptions that all members of an ethnic minority speak the same language or have the same cultural values.
Microinsult:
Pathologizing cultural values/communication styles: When Asian American culture and values are viewed as less desirable.
E.g.: Viewing the valuation of silence (a cultural norm present in some Asian communities) as a fault, leading to disadvantages caused by the expectation of verbal participation common in many Western academic settings.
Second-class citizenship: When minorities are treated as lesser human beings, or are not treated with equal rights or priority.
E.g.: A Korean man asking for a drink in a bar being ignored by the bartender, or the bartender choosing to serve a white man before serving the Korean man.
Ascription of intelligence: When people of color are stereotyped to have a certain level of intelligence based on their race.
E.g.: "You people always do well in school", or "If I see a lot of Asian students in my class, I know it's going to be a hard class".
Exoticization of non-white women: When non-white women are stereotyped as being in the "exotic" category based on gender, appearance, and media expectations.
E.g.: Descriptions of an Asian-American woman as a 'Dragon Lady', 'Tiger mother', or 'Lotus Blossom', or using symbols associated with Eastern cultures.
In a 2017 peer-reviewed review of the literature, Scott Lilienfeld critiqued microaggression research for hardly having advanced beyond taxonomies such as the above, which was proposed by Sue nearly ten years earlier. While acknowledging the reality of "subtle slights and insults directed toward minorities", Lilienfeld concluded that the concept and programs for its scientific assessment are "far too underdeveloped on the conceptual and methodological fronts to warrant real-world application". He recommended abandonment of the term microaggression since "the use of the root word 'aggression' in 'microaggression' is conceptually confusing and misleading". In addition, he called for a moratorium on microaggression training programs until further research can develop the field.
In 2017 Althea Nagai, who works as a research fellow at the conservative Center for Equal Opportunity, published an article in the National Association of Scholars journal Academic Questions, criticizing microaggression research as pseudoscience. Nagai said that the prominent critical race researchers behind microaggression theory "reject the methodology and standards of modern science." She lists various technical shortcomings of microaggression research, including "biased interview questions, reliance on narrative and small numbers of respondents, problems of reliability, issues of replicability, and ignoring alternative explanations."
Gender
Explicit sexism in US society is on the decline, but still exists in a variety of subtle and non-subtle expressions. Women encounter microaggressions in which they are made to feel inferior, sexually objectified, and bound to restrictive gender roles, both in the workplace and in academia, as well as in athletics. Microaggressions based on gender are applied to female athletes when their abilities are compared only to men, when they are judged on "attractiveness", and when they are restricted to "feminine" or sexually attractive attire during competition.
Other examples of sexist microaggressions are "[addressing someone by using] a sexist name, a man refusing to wash dishes because it is 'women's work,' displaying nude pin-ups of women at places of employment, someone making unwanted sexual advances toward another person".
Makin and Morczek also use the term gendered microaggression to refer to male interest in violent rape pornography.
Sociologists Sonny Nordmarken and Reese Kelly (2014) identified trans-specific microaggressions that transgender people face in healthcare settings, which include pathologization, sexualization, rejection, invalidation, exposure, isolation, intrusion, and coercion.
Sexuality and sexual orientation
In focus groups, individuals identifying as bisexual report such microaggressions as others denying or dismissing their self-narratives or identity claims, being unable to understand or accept bisexuality as a possibility, pressuring them to change their bisexual identity, expecting them to be sexually promiscuous, and questioning their ability to maintain monogamous relationships.
Some LGBTQ individuals report receiving expressions of microaggression from people even within the LGBTQ community. They say that being excluded, or not being made welcome or understood, within the gay and lesbian community is a microaggression. Roffee and Waling suggest that the issue arises, as it does among many groups of people, because a person often makes assumptions based on individual experience, and when they communicate such assumptions, the recipient may feel that the communication fails to take their own experience into account, making it a form of microaggression.
Intersectionality
People who are members of overlapping marginal groups (e.g., a gay Asian American man or a trans woman) experience microaggressions based in correspondingly varied forms of marginalization.
For example, in one study Asian American women reported feeling they were classified as sexually exotic by majority-culture men or were viewed by them as potential trophy wives simply because of their group membership. African American women report microaggressions related to characteristics of their hair, which may include invasion of personal space as an individual tries to touch it, or comments that a style that is different from that of a European American woman looks "unprofessional".
People with mental illnesses
People with mental illness report receiving more overt forms of microaggression than subtle ones, coming from family and friends as well as from authority figures. In a study involving college students and adults being treated in community care, five themes were identified: invalidation, assumption of inferiority, fear of mental illness, shaming of mental illness, and being treated as a second-class citizen. Invalidation occurred, for example, when friends and family members minimized mental health symptoms; one participant described others claiming "You can't be depressed, you're smiling." People would sometimes falsely assume that mental illness implies lower intelligence; one participant reported that staff in a psychiatric ward spoke to patients as if they could not understand instructions.
Disability
Individuals whose identity includes an aspect that lacks systemic power are subject to microaggressions; thus, persons with disabilities are subject to ableist microaggressions. Like microaggressions toward others with marginalized identities, microaggressions toward individuals with disabilities may manifest as a microassault, a microinsult, or a microinvalidation, any of which may also be executed as an environmental microaggression.
A growing body of literature examines microaggressions in the context of ability. One qualitative study examined a sample of individuals diagnosed with multiple sclerosis (MS), a chronic disease that may affect mental, cognitive, and physical abilities. The researchers documented real-life examples of ableist microassaults, microinsults, and microinvalidations faced by their sample, specifically in the workplace.
People with physical disabilities also face microaggressions, such as
the misconception that those with disabilities want or require correction
asking inappropriate questions
Media
Members of marginalized groups have also described microaggressions committed by performers or artists associated with various forms of media, such as television, film, photography, music, and books. Some researchers believe that such cultural content reflects but also molds society, allowing for unintentional bias to be absorbed by individuals based on their media consumption, as if it were expressed by someone with whom they had an encounter.
A study of racism in TV commercials describes microaggressions as gaining a cumulative weight, leading to inevitable clashes between races due to subtleties in the content. As an example of a racial microaggression, or microassault, this research found that black people were more likely than their white counterparts to be shown eating or participating in physical activity, and more likely to be shown working for or serving others. The research concludes by suggesting that microaggressive representations can be omitted from a body of work without sacrificing creativity or profit.
Pérez Huber and Solorzano start their analysis of microaggressions with an anecdote about Mexican "bandits" as portrayed in a children's book read at bedtime. The article gives examples of negative stereotypes of Mexicans and Latinos in books, print, and photos, associating them with the state of racial discourse within majority culture and its dominance over minority groups in the US. The personification of these attitudes through media can also be applied to microaggressive behaviors towards other marginalized groups.
A 2015 review of the portrayal of LGBT characters in film says that gay or lesbian characters are presented in "offensive" ways. In contrast, LGBT characters portrayed as complex characters who are more than a cipher for their sexual orientation or identity are a step in the right direction. Ideally, "queer film audiences finally have a narrative pleasure that has been afforded to straight viewers since the dawn of film noir: a central character who is highly problematical, but fascinating."
Ageism and intolerance
Microaggressions can target and marginalize any definable group, including those who share an age group or belief system. Microaggression is a manifestation of bullying that employs microlinguistic power plays to mark the target as "other" through subtle displays of intolerance.
Perpetrators
Because microaggressions are subtle and perpetrators may be unaware of the harm they cause, the recipients often experience attributional ambiguity, which may lead them to dismiss the event and blame themselves as overly sensitive to the encounter.
If challenged by the minority person or an observer, perpetrators will often defend their microaggression as a misunderstanding, a joke, or something small that should not be blown out of proportion.
A 2020 study involving American college students found a correlation between the likelihood of committing microaggressions and racial bias.
Effects
A 2013 scholarly review of the literature on microaggressions concluded that "the negative impact of racial microaggressions on psychological and physical health is beginning to be documented; however, these studies have been largely correlational and based on recall and self-report, making it difficult to determine whether racial microaggressions actually cause negative health outcomes and, if so, through what mechanisms". A 2017 review of microaggression research argued that as scholars try to understand the possible harm caused by microaggressions, they have not conducted much cognitive or behavioral research, nor much experimental testing, and they have overly relied on small collections of anecdotal testimonies from samples who are not representative of any particular population. These assertions were later argued against in that same journal in 2020, but the response was criticized for failing to address the findings of the systematic reviews and continuing to draw causal inferences from correlational data.
Recipients of microaggressions may feel anger, frustration, or exhaustion. African Americans have reported feeling under pressure to "represent" their group or to suppress their own cultural expression and "act white". Over time, the cumulative effect of microaggressions is thought by some to lead to diminished self-confidence and a poor self-image, and potentially also to mental-health problems such as depression, anxiety, and trauma. Many researchers have argued that microaggressions are more damaging than overt expressions of bigotry precisely because they are small and therefore often ignored or downplayed, leading the victim to feel self-doubt for noticing or reacting to the encounter, rather than justifiable anger, and isolation rather than support from others about such incidents. Studies in the U.S. have found that when people of color perceive microaggressions from mental health professionals, their satisfaction with therapy is lower.
Some studies suggest that microaggressions represent enough of a burden that some people of color may fear, distrust, and/or avoid relationships with white people in order to evade such interactions. On the other hand, some people report that dealing with microaggressions has made them more resilient. Scholars have suggested that, although microaggressions "might seem minor", they are "so numerous that trying to function in such a setting is 'like lifting a ton of feathers.'"
An ethnographic study of transgender people in healthcare settings observed that participants sometimes responded to microaggressions by leaving a hospital in the middle of treatment, and never returning to a formal healthcare setting again.
Criticism
Public discourse and harm to speakers
Kenneth R. Thomas wrote in American Psychologist that recommendations inspired by microaggression theory, if "implemented, could have a chilling effect on free speech and on the willingness of White people, including some psychologists, to interact with people of color." Sociologists Bradley Campbell and Jason Manning have written in the academic journal Comparative Sociology that the microaggression concept "fits into a larger class of conflict tactics in which the aggrieved seek to attract and mobilize the support of third parties" and that it sometimes involves "building a case for action by documenting, exaggerating, or even falsifying offenses". The concept of microaggressions has also been described as a symptom of the breakdown in civil discourse, with microaggressions characterized as "yesterday's well-meaning faux pas".
One type of microaggression suggested in an Oxford University newsletter was avoiding eye contact or not speaking directly to people. This spurred controversy in 2017 when it was pointed out that such assumptions are insensitive to autistic people, who may have trouble making eye contact.
In a 2019 journal article, Scott Lilienfeld, who is a critic of microaggression theory, titled a section: "The Search for Common Ground." Lilienfeld agrees that "a discussion of microaggressions, however we choose to conceptualize them, may indeed have a place on college campuses and businesses." In such conversations, Lilienfeld states it is important to assume "most or all individuals…were genuinely offended," "to listen nondefensively to their concerns and reactions," and to "be open to the possibility that we have been inadvertently insensitive." In his latest book, D.W. Sue, who popularized the term microaggression, also recommends a "collaborative rather than an attacking tone."
Culture of victimhood
In their article "Microaggression and Moral Cultures", sociologists Bradley Campbell and Jason Manning say that the discourse of microaggression leads to a culture of victimhood. Social psychologist Jonathan Haidt states that this culture of victimhood lessens an individual's "ability to handle small interpersonal matters on one's own" and "creates a society of constant and intense moral conflict as people compete for status as victims or as defenders of victims". Similarly, the linguist and social commentator John McWhorter says that "it infantilizes black people to be taught that microaggressions, and even ones a tad more macro, hold us back, permanently damage our psychology, or render us exempt from genuine competition." McWhorter does not disagree that microaggressions exist. However, he worries that too much societal focus on microaggressions will cause other problems and has stated that the term should be confined to "when people belittle us on the basis of stereotypes."
Emotional distress
In The Atlantic, Greg Lukianoff and Jonathan Haidt expressed concern that the focus on microaggressions can cause more emotional trauma than the experience of the microaggressions at the time of occurrence. They believe that self-policing by an individual of thoughts or actions in order to avoid committing microaggressions may cause emotional harm as a person seeks to avoid becoming a microaggressor, as such extreme self-policing may share some characteristics of pathological thinking. Referring especially to prevention programs at schools or universities, they say that the element of protectiveness, of which identifying microaggression allegations are a part, prepares students "poorly for professional life, which often demands intellectual engagement with people and ideas one might find uncongenial or wrong". They also said that it has become "unacceptable to question the reasonableness (let alone the sincerity) of someone's emotional state", resulting in adjudication of alleged microaggressions having characteristics of witch trials.
Amitai Etzioni, writing in The Atlantic, suggested that attention to microaggressions distracts individuals and groups from dealing with much more serious acts.
Political correctness
According to Derald Wing Sue, whose works popularized the term, many critiques are based on the term being misunderstood or misused. He said that his purpose in identifying such comments or actions was to educate people and not to silence or shame them. He further notes that, for instance, identifying that someone has used racial microaggressions is not intended to imply that they are racist.
Mind reading
According to Lilienfeld, a possible harmful effect of microaggression programs is to increase an individual's tendency to over-interpret the words of others in a negative way. Lilienfeld refers to this as mind reading, "in which individuals assume—without attempts at verification—that others are reacting negatively to them.... For example, Sue et al...regarded the question 'Where were you born?' directed at Asian Americans as a microaggression."
In popular culture
Microaggression has been mentioned in popular culture since the term was coined. In 2016, American academic Fobazi Ettarh created Killing Me Softly: A Game About Microaggressions, an open-access video game that allows players to navigate the life of a character who experiences microaggressions.
References
Sue, D.W., Alsaidi, S., Awad, M. N., Glaeser, E., Calle, C.Z., & Mendez, N. (2019). Disarming racial microaggressions: Microintervention strategies for targets, White allies, and bystanders. American Psychologist, 74(1), 128.
Sue, D.W., Capodilupo, C. M., Torino, G. C., Bucceri, J. M., Holder, A., Nadal, K. L., & Esquilin, M. (2007). Racial microaggressions in everyday life: Implications for clinical practice. American Psychologist, 62(4), 271.
Aggression
Discrimination
Racism
Race-related controversies
African-American-related controversies
Linguistic controversies
Social psychology concepts
Social sciences terminology
Interpersonal communication
1970s neologisms | Microaggression | [
"Biology"
] | 5,325 | [
"Behavior",
"Aggression",
"Human behavior",
"Discrimination"
] |
13,700,969 | https://en.wikipedia.org/wiki/Sundiver%20%28space%20mission%29 | Sundiver was a proposed space mission to crash a probe into the Sun, while sending back data to Earth before burning up. It was proposed as a design study by the Australian Academy of Science's National Committee for Space Science as a Flagship mission to kick-start an Australian space program. The design study was proposed as a five-year effort from 2011 to 2015 with a complement of 10 PhDs, budgeted at A$10 million, leading to a go/no-go decision in 2015.
The mission would have been comparable, in its close approach to the Sun, to the NASA Parker Solar Probe mission, although it would have only made a single pass into the solar corona.
External links
Article in The Australian announcing plans
Decadal Plan for Australian Space Science (Sundiver proposal begins on page 90)
Spaceflight | Sundiver (space mission) | [
"Astronomy"
] | 173 | [
"Outer space stubs",
"Spaceflight",
"Astronomy stubs",
"Outer space"
] |
13,700,979 | https://en.wikipedia.org/wiki/Grillo%20telephone | The Grillo telephone is a 1960s flip-phone telephone from Italy. It was designed by Richard Sapper and Marco Zanuso, and manufactured by Siemens for Italtel. Introduced in 1967, the "Grillo" remained in production until 1979, and was a popular and iconic symbol of 1960s Italian design.
Design
The modern styling, compact form factor, and automatically opening clamshell design set "Grillo" apart from other telephones available at the time. Innovative features that contributed to the phone's compact size include a dial that replaced the conventional rotary finger-stop mechanism with a button in each of the number holes which, when pressed, pushed a pin through the back of the dial to stop the mechanism in its correct position. The incorporation of the ringer mechanism into the wall plug rather than the phone itself, and the use of a thin ABS plastic shell, also helped reduce both its size and weight. The name "Grillo", which means cricket in Italian, "derives from its shape and its chirping ringtone: an insect-like metallic chirp has replaced the harassing ring."
"Grillo" was designed in 1965 by Richard Sapper and Marco Zanuso, who, as a team, also collaborated with Italian companies such as Brionvega, Gavina, Kartell, and Alfa Romeo throughout the 1960s and 1970s. The design was awarded the 1967 Compasso d'oro in Milan and the Gold Medal at the 1968 Ljubljana Biennale of Design (BIO3). Examples are held in many museum collections, including the Museum of Modern Art (MoMA) and the Cooper Hewitt, Smithsonian Design Museum in New York, the Philadelphia Museum of Art, the Israel Museum in Jerusalem, the Pompidou Centre in Paris, and the ADI Design Museum and Museo Nazionale Scienza e Tecnologia in Milan.
The "Grillo" would subsequently influence the design of flip-style mobile telephones developed during the 1990s and 2000s, such as the Motorola StarTAC and RAZR, as well as other electronic devices such as portable computers and games.
In popular culture
The "Grillo" telephone appears in multiple episodes of the original 1960s Mission Impossible television series.
The car phone depicted in the early 1970s American television series The Magician is a "Grillo" telephone.
Patrizia Reggiani (played by Lady Gaga) uses a "Grillo" telephone in the 2021 film House of Gucci.
Gallery
See also
Communicator (Star Trek)
Ericofon
Trimphone
Motorola StarTAC
Motorola RAZR
Notes
References
External links
Grillo Telephone at the Museum of Modern Art
Grillo Telephone at the Centre Pompidou
Grillo Telephone at the ADI Design Museum
Grillo Telephone at the Israel Museum
Grillo Telephone at the Cooper Hewitt Smithsonian Design Museum
Telecommunications collection, Museum of Science and Technology, Milan
Italian Design Files #15: the Grillo phone by Marco Zanuso and Richard Sapper Design Street (history, archival images)
Telefono Grillo: pubblicità vintage (vintage Grillo publicity posters)
Telephony equipment
Industrial design
Product design
Italian design
Compasso d'Oro Award recipients
Collection of the Museum of Modern Art (New York City)
Collection of the Musée National d'Art Moderne
Consumer electronics | Grillo telephone | [
"Engineering"
] | 668 | [
"Industrial design",
"Design engineering",
"Design",
"Product design"
] |
13,703,166 | https://en.wikipedia.org/wiki/Net3 | Net3 was a Wi-Fi-like system developed, manufactured and commercialised by Olivetti in the early 1990s. It could wirelessly connect PCs to a fixed Ethernet LAN at speeds of up to 512 kbit/s over a very wide area. It was a micro-cellular system in which each base station had an effective range of about 100 m indoors and 300 m outdoors, and it supported seamless handover between base stations.
Design and history
The system was based on the DECT standard, published in 1992. A prototype system was first demonstrated at the Telecom '91 show in Geneva in October 1991, and is believed to be the first public demonstration of the DECT transmission system. The product was launched in June 1993, and was the first product based on the DECT standard to reach the market, narrowly beating Siemens' highly successful Gigaset cordless telephone. It is also believed to be the first wireless LAN to be sold on the European market.
In its first version, the adapter consisted of a half-size PC card connected to a modestly sized external radio unit that sat on the desk. The second version, launched at Telecom '95, consisted of a PCMCIA card and a small external radio unit suitable for portable use.
The system was developed in the laboratories of Olivetti Sixtel, the telecommunications technology division of Olivetti in Ivrea, Italy. At a time when knowledge of commercial digital radio technology was scarce in Italy, the group began research in 1988 and developed in-house a high level of capability in DECT technology, including patented technology that became fundamental to the standard. The development was funded partly from corporate venture resources, partly from ESPRIT funding, and partly through an unusual but highly effective tool of industrial policy, invented by Ing. Augusto Vighi of the Istituto Superiore delle Poste e Telecomunicazioni. Vighi placed a contract for proof-of-concept DECT demonstration systems with a consortium of Italian technology companies, covering the full range of DECT applications. This accelerated the development not only of the Net3 wireless LAN by Olivetti, but also of the FIDO public system by Italtel and of a complete wireless PABX by SELTA.
Net3 was originally conceived as a means to substitute LAN cabling in problematic buildings, which are especially numerous in the historic centres of Italian cities. In practice this was not a fast-growing or eager market, and the product eventually instead found success when integrated with rugged portable computers on forklift trucks in large warehouses and stockyards. A system was also installed inside a steel works and worked reliably despite the very high levels of electrical interference.
The team developing Net3 was also deeply involved in the development of the DECT standards, and contributed the chairmen of the DECT standards groups that designed the DECT network protocols and the data transport and interworking protocols. As a result, the DECT standards contained a high level of standardised, embedded support for wireless LAN functionality. The product benefited greatly from the availability of dedicated spectrum (1880-1900 MHz) throughout Europe thanks to a European directive on DECT, and from a single-stop type approval process arising from DECT's status as a pan-European standard. Although a very leading-edge product, Net3 was nevertheless able to exploit the availability of early semiconductor devices designed and priced to meet the mass consumer market for DECT-based cordless telephones.
In 1995 Olivetti cancelled all its commercial telecommunications products as part of its strategy of transformation into a telecoms operator, and the Net3 product was progressively withdrawn from the market. The technology was repurposed as a high-performance, low-cost wireless local-loop infrastructure supporting both toll-quality voice and broadband internet access, and a pilot system was built and operated in Ivrea. The approach eventually foundered on the difficulty of redistributing signals within the apartment blocks so prevalent in the Italian urban fabric, and the Net3 team was disbanded in 1997.
References
External links
Electronic Times 1994
DECTWeb 1998
Byte December 1995
Local Access Network Technologies: book edited by Paul France, IEE 2004
International Patent covering the Net3 system
International Patent regarding Net3 radio technology
Wireless networking
DECT | Net3 | [
"Technology",
"Engineering"
] | 864 | [
"Mobile telecommunications",
"Wireless networking",
"Computer networks engineering",
"DECT"
] |
13,703,641 | https://en.wikipedia.org/wiki/HD%20166 | HD 166 or V439 Andromedae (ADS 69 A) is a 6th-magnitude star in the constellation Andromeda, approximately 45 light-years away from Earth. It is a variable star of the BY Draconis type, varying between magnitudes 6.13 and 6.18 with a period of 6.23 days. It appears within one degree of the star Alpha Andromedae and is a member of the Hercules-Lyra moving group. It also happens to lie less than 2 degrees from right ascension 00h 00m.
Star characteristics
HD 166 is a K-type main-sequence star, cooler and dimmer than the Sun, with a stellar classification of K0Ve, where the e suffix indicates the presence of emission lines in the spectrum. The star has a proper motion of 0.422 arcseconds per year in a direction 114.1° from north. It has an estimated visual luminosity of 61% of the Sun's and emits like a blackbody with an effective temperature of 5,327 K. It has a diameter about 90% that of the Sun and a radial velocity of −6.9 km/s. Age estimates range from as low as 78 million years, based on its chromospheric activity, up to 9.6 billion years, based on a comparison with theoretical evolutionary tracks. X-ray emission has also been detected from this star.
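As a rough consistency check, the quoted radius and temperature can be combined through the Stefan-Boltzmann scaling relation L/Lsun = (R/Rsun)^2 (T/Tsun)^4, and the proper motion and distance give the tangential velocity via v_t = 4.74 μ d. The Python sketch below is illustrative only: the solar effective temperature (5,772 K) and the light-year-to-parsec conversion are assumed standard reference values, and the quoted 61% figure is a visual rather than bolometric luminosity, so only rough agreement is expected.

# Back-of-the-envelope check of the HD 166 figures quoted above.
R_RATIO = 0.90        # radius in solar radii (text: about 90% of the Sun)
T_EFF   = 5327.0      # effective temperature in kelvin (from the text)
T_SUN   = 5772.0      # assumed IAU nominal solar effective temperature, K

# Stefan-Boltzmann scaling: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
lum_ratio = R_RATIO**2 * (T_EFF / T_SUN)**4
print(f"L/Lsun ~ {lum_ratio:.2f}")      # ~0.59, close to the quoted 0.61

# Tangential velocity from proper motion and distance:
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
MU = 0.422                              # proper motion, arcsec/yr (from the text)
D_PC = 45.0 / 3.2616                    # 45 light-years converted to parsecs
v_tan = 4.74 * MU * D_PC
print(f"v_tan ~ {v_tan:.1f} km/s")      # ~27.6 km/s

# Combined with the -6.9 km/s radial velocity, the total space velocity:
v_space = (v_tan**2 + 6.9**2) ** 0.5
print(f"v_space ~ {v_space:.1f} km/s")  # ~28.4 km/s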
An infrared excess has been detected around HD 166, most likely indicating the presence of a circumstellar disk at a radius of 7.5 AU. The temperature of this dust is 90 K.
Variability
Eric J. Gaidos et al. first detected variability in HD 166 in 2000. It was given its variable star designation, V439 Andromedae, in 2006.
It has been found that the periodicity in the photometric variability of HD 166 is coincident with the rotation period. This leads to its classification as a BY Draconis variable, where brightness variations are caused by the presence of large starspots on the surface and by chromospheric activity.
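To illustrate the scale and timescale of this variability, the sketch below models the light curve as a pure sinusoid between the quoted extremes (V = 6.13 to 6.18) with the quoted 6.23-day rotation period. This is a toy model under stated assumptions, not a fit to photometric data: real BY Draconis light curves are only quasi-sinusoidal, since spot groups grow, decay, and migrate over time.

import math

MAG_BRIGHT, MAG_FAINT = 6.13, 6.18   # quoted extremes of the V magnitude
PERIOD_DAYS = 6.23                   # quoted photometric (rotation) period

def v_mag(t_days: float) -> float:
    """Toy V magnitude at time t in days, with t = 0 at maximum brightness."""
    mid = 0.5 * (MAG_BRIGHT + MAG_FAINT)   # mean magnitude, 6.155
    amp = 0.5 * (MAG_FAINT - MAG_BRIGHT)   # half-amplitude, 0.025 mag
    return mid - amp * math.cos(2.0 * math.pi * t_days / PERIOD_DAYS)

# Sample one full rotation: brightness dips as the spotted hemisphere
# rotates into view and recovers as it rotates away.
for day in range(7):
    print(f"day {day}: V ~ {v_mag(day):.3f}")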
References
External links
Image HD 166
nstars.nau.edu
Andromeda (constellation)
BY Draconis variables
000166
Spectroscopic binaries
K-type main-sequence stars
HD, 000166
Andromedae, V439
0008
000544
Durchmusterung objects
0005
Emission-line stars | HD 166 | [
"Astronomy"
] | 514 | [
"Andromeda (constellation)",
"Constellations"
] |
13,703,825 | https://en.wikipedia.org/wiki/Urodilatin | Urodilatin (URO) is a hormone that causes natriuresis by increasing renal blood flow. It is secreted from the cells of the distal tubule and collecting duct in response to increased mean arterial pressure and increased blood volume. It is important in oliguric patients (such as those with acute kidney injury and chronic kidney failure) as it lowers serum creatinine and increases urine output.
Interactions
Atrial natriuretic peptide (CDD/ANP-99-126) is a hormone system of clinical importance. Urodilatin (CDD/ANP-95-126) is a homologous natriuretic peptide; unlike CDD/ANP-99-126, which is secreted into the circulation via exocytosis, urodilatin is secreted into the urine. The prototype of the natriuretic hormones is cardiodilatin/atrial natriuretic peptide (CDD/ANP). The endocrine heart is composed of specific myoendocrine cells that synthesize and secrete the natriuretic peptide hormones, which exhibit diuretic and vasorelaxant properties; their secretion forms the basis of a paracrine system regulating water and sodium reabsorption.
Research efforts since the early 1980s have studied the effects of these peptides on electrolyte homeostasis. When administered intravenously, urodilatin induces strong diuresis and natriuresis with tolerable hemodynamic side effects. Urodilatin is localized in the kidney, where it is differentially processed, involved in regulating body fluid volume and water-electrolyte excretion, and secreted into the urine. As a consequence, urodilatin has attracted drug-development interest, along with the prohormone CDD/ANP-1-126 and cardiodilatin CDD/ANP-99-126. The message for the preprohormone is transcribed from the type-A natriuretic peptide gene in both the heart and the kidney, and its signal transduction is cGMP-dependent, inducing diuresis and natriuresis. Urodilatin is differentially processed to a peptide of 32 amino acids from the same precursor as renal ANP and may not be identical to the circulating cardiac hormone ANP; the kidneys thus produce their own natriuretic 32-residue peptide. The natriuretic potency of urodilatin equals or exceeds that of atriopeptin [ANP-(99-126)], the prototype cardiodilatin, although atriopeptin itself is of only trivial importance in regulating sodium excretion under normal living conditions.
Urodilatin is little affected by the renal enzymes that inactivate atriopeptin, and it is urodilatin, rather than ANP, that is recovered from urine. The degradation rates of [125I]-urodilatin and [125I]-ANP by pure recombinant NEP (rNEP) have been compared; phosphoramidon, a potent inhibitor of NEP, completely protected both peptides from metabolism by rNEP. Urodilatin has a four-residue extension at the N-terminus. Neutral endopeptidase-24.11 (NEP), which plays a physiological role in metabolizing atrial natriuretic peptide and C-type natriuretic peptide, degraded urodilatin at about half the rate of ANP, even though the two peptides share the C-terminal region that competes with other natriuretic peptides for hydrolysis by neutral endopeptidase.
References
Hormones of the kidneys
Peptides | Urodilatin | [
"Chemistry"
] | 759 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |