id | url | text | source | categories | token_count
|---|---|---|---|---|---|
20,303,498 | https://en.wikipedia.org/wiki/Spinning%20%28polymers%29 | Spinning is a manufacturing process for creating polymer fibers. It is a specialized form of extrusion that uses a spinneret to form multiple continuous filaments.
Melt spinning
If the polymer is a thermoplastic then it can undergo melt spinning. The molten polymer is extruded through a spinneret composed of capillaries, and the resulting filaments are solidified by cooling. Nylon, olefin, polyester, saran, and sulfar are produced via this process.
Extrusion spinning
Pellets or granules of the solid polymer are fed into an extruder. The pellets are compressed, heated and melted by an extrusion screw, then fed to a spinning pump and into the spinneret.
Direct spinning
The direct spinning process avoids the stage of solid polymer pellets. The polymer melt is produced from the raw materials and then pumped directly from the polymer finisher to the spinning mill. Direct spinning is mainly applied in the production of polyester fibers and filaments and is dedicated to high production capacities (>100 tons/day).
Solution spinning
If the melting point of the polymer is higher than its degradation temperature, the polymer must undergo solution spinning techniques for fiber formation. The polymer is first dissolved in a solvent, forming a spinning solution (sometimes called a "dope"). The spinning solution then undergoes dry, wet, dry-jet wet, gel, or electrospinning techniques.
Dry spinning
A spinning solution consisting of polymer and a volatile solvent is extruded through a spinneret into an evaporating chamber. A stream of hot air impinges on the jets of spinning solution emerging from the spinneret, evaporating the solvent, and solidifying the filaments. Solution blow spinning is a similar technique where polymer solution is sprayed directly onto a target to produce a nonwoven fiber mat.
Wet spinning
Wet spinning is the oldest of the five processes. The polymer is dissolved in a spinning solvent and extruded through a spinneret submerged in a coagulation bath composed of nonsolvents. The coagulation bath causes the polymer to precipitate in fiber form. Acrylic, rayon, aramid, modacrylic, and spandex are produced via this process.
A variant of wet spinning is dry-jet wet spinning, where the spinning solution passes through an air-gap prior to being submerged into the coagulation bath. This method is used in Lyocell spinning of dissolved cellulose, and can lead to higher polymer orientation due to the higher stretchability of the spinning solution versus the precipitated fiber.
Gel spinning
Gel spinning, also known as semi-melt spinning, is used to obtain high strength or other special properties in the fibers. Instead of wet spinning, which relies on precipitation as the main mechanism for solidification, gel spinning relies on temperature-induced physical gelation as the primary method for solidification. The resulting gelled fiber is then swollen with the spinning solvent (similar to gelatin desserts) which keeps the polymer chains somewhat bound together, resisting relaxation which is prevalent in wet spinning. The high solvent retention allows for ultra-high drawing as with ultra high molecular weight polyethylene (UHMWPE) (e.g., Spectra®) to produce fibers with a high degree of orientation, which increases fiber strength. The fibers are first cooled either with air or in a liquid bath to induce gelation, then the solvent is removed through ageing in a nonsolvent, or during the drawing stage. Some high strength polyethylene and polyacrylonitrile fibers are produced via this process.
Electrospinning
Electrospinning uses an electrical charge to draw very fine (typically micro- or nano-scale) fibers from a liquid: either a polymer solution or a polymer melt. Electrospinning shares characteristics of both electrospraying and conventional solution dry spinning of fibers. The process does not require the use of coagulation chemistry or high temperatures to produce solid threads from solution. This makes the process particularly suited to the production of fibers using large and complex molecules. Melt electrospinning is also practiced; this method ensures that no solvent can be carried over into the final product.
Post-spin processes
Drawing
Finally, the fibers are drawn to increase strength and orientation. This may be done while the polymer is still solidifying or after it has completely cooled.
See also
Spinneret (polymers)
References
Plastics industry
Synthetic fibers
Textile engineering | Spinning (polymers) | Physics,Chemistry,Engineering | 929 |
36,589,562 | https://en.wikipedia.org/wiki/Lud%C4%9Bk%20Pe%C5%A1ek | Luděk Pešek (26 April 1919 – 4 December 1999) was a Czech artist and novelist. He was noted for his representations of astronomical subjects. The asteroid 6584 Ludekpesek is named after him. He was influenced by Lucien Rudaux.
Biography
Luděk Pešek was born in 1919 in Kladno, Czechoslovakia, and grew up in the city of Ostrava. His boyhood was marked by a longing for mountains and distant lands, laying the groundwork for his later interest in geology and astronomy. His artistic and literary talents were recognized early and encouraged by his art teacher at grammar school, where he also first had the opportunity to use an astronomical telescope. At the age of fifteen, Pešek acquired a painter's easel and began to practice his hobby earnestly. Later, he attended the Academy of Fine Arts in Prague.
He produced his first art works around the age of 19. His first publications were The Moon and Planets (1963) and Our Planet Earth (1967). His work first reached US readers through the National Geographic Magazine, which commissioned him to do a series of works about Mars. Prior to the Mars article, he had painted 15 scenes for an article called Journey to the Planets in August 1970. In 1967, Pešek wrote his first science-fiction novel, Log of a Moon Expedition, which he illustrated in black and white. Another, The Earth Is Near, won a Prize of Honour in Germany in 1971. It was published in the UK and United States in 1974. He illustrated Space Shuttles in 1976. He worked with writer Peter Ryan on several slim books for children: Journey to the Planets (1972), Planet Earth (1972), The Ocean World (1973), and UFOs and Other Worlds (1975); he later worked with the same author on the large-format Solar System (1978). He also illustrated Bildatlas des Sonnensystems (1974), with German text by Bruno Stanek.
From 1981 to 1985, he produced a series of 35 paintings on The Planet Mars, and a series of 50 paintings, Virgin Forests in the USA.
He produced several 360-degree panoramas for projection in the domes of the planetariums at Stuttgart, Winnipeg and Lucerne, and exhibited in Washington, D.C., Boston, Nashville, Stuttgart, Bern, Lucerne, Zürich, and other venues. His work is in the collection of the Smithsonian Institution.
He died in Stäfa, Switzerland.
Books in English
The Moon and the Planets – by Josef Sadil and Luděk Pešek – 1963
Log of a Moon Expedition – 1969
Journey to the Planets – by Peter Ryan and Luděk Pešek – 1972
The Ocean World – by Peter Ryan and Luděk Pešek – 1973
The Earth is Near – 1974
An Island for Two – 1975
UFOs and Other Worlds – by Peter Ryan and Luděk Pešek – 1975
A Beautiful, Peaceful World – by Hans-Joachim Gelberg and Willi Glasauer – 1976
Trap For Perseus – 1980
See also
List of space artists
References
External links
1919 births
1999 deaths
Czech science fiction writers
Swiss science fiction writers
Czech speculative fiction artists
Swiss speculative fiction artists
Space artists
Czechoslovak artists
People from Kladno
Czechoslovak emigrants to Switzerland
Czechoslovak writers | Luděk Pešek | Astronomy | 692 |
46,273,230 | https://en.wikipedia.org/wiki/Hafnium%28IV%29%20iodide | Hafnium(IV) iodide is the inorganic compound with the formula HfI4. It is a red-orange, moisture sensitive, sublimable solid that is produced by heating a mixture of hafnium with excess iodine. It is an intermediate in the crystal bar process for producing hafnium metal.
In this compound, the hafnium centers adopt octahedral coordination geometry. Like most binary metal halides, the compound is polymeric. It is a one-dimensional polymer consisting of chains of edge-shared bioctahedral Hf2I8 subunits, similar to the motif adopted by HfCl4. The nonbridging iodide ligands have shorter bonds to Hf than the bridging iodide ligands.
References
Iodides
Hafnium compounds
Metal halides | Hafnium(IV) iodide | Chemistry | 172 |
10,369,986 | https://en.wikipedia.org/wiki/List%20of%20JavaScript%20libraries | This is a list of notable JavaScript libraries.
Constraint programming
Cassowary (software)
CHR.js
DOM (manipulation) oriented
Google Polymer
Dojo Toolkit
jQuery
MooTools
Prototype JavaScript Framework
Tay
Graphical/visualization (canvas, SVG, or WebGL related)
AnyChart
Babylon.js
Chart.js
Cytoscape
D3.js
Dojo Toolkit
FusionCharts
Google Charts
Highcharts
p5.js
Plotly
Processing.js
Raphaël
RGraph
SWFObject
Teechart
Three.js
Velocity.js
Verge3D
Webix
GUI (Graphical user interface) and widget related
Angular (application platform) by Google
AngularJS by Google
Bootstrap
Dojo Widgets
Ext JS by Sencha
Foundation by ZURB
jQuery UI
jQWidgets
OpenUI5 by SAP
Polymer (library) by Google
qooxdoo
React.js by Facebook
Vue.js
Webix
WinJS
Svelte
No longer actively developed
Glow
Lively Kernel
Script.aculo.us
YUI Library
Pure JavaScript/Ajax
Google Closure Library
Joose
JsPHP
Microsoft's Ajax library
MochiKit
PDF.js
Socket.IO
Spry framework
Underscore.js
Template systems
jQuery Mobile
Mustache
Jinja-JS
Twig.js
Unit testing
Jasmine
Mocha
QUnit
Web-application related (MVC, MVVM)
Angular (application platform) by Google
AngularJS by Google
Backbone.js
Echo
Ember.js
Enyo
Express.js
Ext JS
Google Web Toolkit
JsRender/JsViews
Knockout
Meteor
Mojito
MooTools
Next.js
Nuxt.js
OpenUI5 by SAP
Polymer (library) by Google
Prototype JavaScript Framework
qooxdoo
React.js
SproutCore
Vue.js
Other
Blockly
Cannon.js
MathJax
Modernizr
TensorFlow
Brain.js
See also
Ajax framework
Comparison of JavaScript frameworks
JavaScript library | List of JavaScript libraries | Technology | 423 |
2,424,936 | https://en.wikipedia.org/wiki/Spermatophylax | A spermatophylax is a gelatinous bolus which some male insects eject during copulation with females through their aedeagi together with spermatophores, and which functions as a nutritive supplement for the female.
See also
Nuptial gift
References
Insect anatomy
Sexual anatomy | Spermatophylax | Biology | 62 |
5,895,822 | https://en.wikipedia.org/wiki/Sensitivity%20index | The sensitivity index or discriminability index or detectability index is a dimensionless statistic used in signal detection theory. A higher index indicates that the signal can be more readily detected.
Definition
The discriminability index is the separation between the means of two distributions (typically the signal and the noise distributions), in units of the standard deviation.
Equal variances/covariances
For two univariate distributions $a$ and $b$ with the same standard deviation $\sigma$, it is denoted by $d'$ ('dee-prime'):

$d' = \frac{\left|\mu_a - \mu_b\right|}{\sigma}$.
In higher dimensions, i.e. with two multivariate distributions with the same variance-covariance matrix $\Sigma$ (whose symmetric square-root, the standard deviation matrix, is $S$), this generalizes to the Mahalanobis distance between the two distributions:

$d' = \sqrt{(\mu_a - \mu_b)^{\mathsf{T}} \Sigma^{-1} (\mu_a - \mu_b)} = \lVert S^{-1}(\mu_a - \mu_b) \rVert = \lVert \mu_a - \mu_b \rVert / \sigma_\mu$,

where $\sigma_\mu$ is the 1d slice of the sd along the unit vector through the means, i.e. the $d'$ equals the $d'$ along the 1d slice through the means.
For two bivariate distributions with equal variance-covariance, this is given by:

${d'}^2 = \frac{1}{1-\rho^2}\left({d'_x}^2 + {d'_y}^2 - 2\rho\, d'_x d'_y\right)$,

where $\rho$ is the correlation coefficient, and here $d'_x = (\mu_{bx} - \mu_{ax})/\sigma_x$ and $d'_y = (\mu_{by} - \mu_{ay})/\sigma_y$, i.e. including the signs of the mean differences instead of the absolute values.
$d'$ is also estimated as $Z(\text{hit rate}) - Z(\text{false-alarm rate})$, where $Z(p)$ is the inverse of the cumulative standard normal distribution.
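As a concrete illustration, the following Python sketch (assuming NumPy and SciPy are available; the rates and the covariance matrix are made-up examples) estimates $d'$ from hit and false-alarm rates, and checks that the bivariate closed form above agrees with the Mahalanobis distance:

```python
import numpy as np
from scipy.stats import norm

# d' estimated from a yes/no experiment's hit and false-alarm rates
# (illustrative numbers, not from any study):
d_prime = norm.ppf(0.90) - norm.ppf(0.20)      # ≈ 2.12

# Multivariate d' as the Mahalanobis distance between the means,
# for a shared (made-up) covariance matrix:
mu_a, mu_b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
diff = mu_b - mu_a
d_maha = np.sqrt(diff @ np.linalg.inv(Sigma) @ diff)

# The bivariate closed form with the correlation coefficient agrees:
sx, sy = np.sqrt(Sigma[0, 0]), np.sqrt(Sigma[1, 1])
rho = Sigma[0, 1] / (sx * sy)
dx, dy = diff[0] / sx, diff[1] / sy            # signed per-axis d' values
d_biv = np.sqrt((dx**2 + dy**2 - 2 * rho * dx * dy) / (1 - rho**2))
assert np.isclose(d_maha, d_biv)
print(round(d_prime, 2), round(d_maha, 2))     # 2.12 1.51
```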
Unequal variances/covariances
When the two distributions have different standard deviations (or in general dimensions, different covariance matrices), there exist several contending indices, all of which reduce to $d'$ for equal variance/covariance.
Bayes discriminability index
This is the maximum (Bayes-optimal) discriminability index for two distributions, based on the amount of their overlap, i.e. the optimal (Bayes) error of classification $e_b$ by an ideal observer, or its complement, the optimal accuracy $a_b$:

$d'_b = -2Z(e_b) = 2Z(a_b)$,

where $Z(p)$ is the inverse cumulative distribution function of the standard normal. The Bayes discriminability between univariate or multivariate normal distributions can be numerically computed (Matlab code), and may also be used as an approximation when the distributions are close to normal.
$d'_b$ is a positive-definite statistical distance measure that is free of assumptions about the distributions, like the Kullback–Leibler divergence $D_{KL}$. $D_{KL}$ is asymmetric, whereas $d'_b$ is symmetric for the two distributions. However, $d'_b$ does not satisfy the triangle inequality, so it is not a full metric.
In particular, for a yes/no task between two univariate normal distributions with unequal means and variances, the Bayes-optimal classification accuracies have closed-form expressions in terms of the non-central chi-squared distribution. The Bayes discriminability $d'_b$
can also be computed from the ROC curve of a yes/no task between two univariate normal distributions with a single shifting criterion. It can also be computed from the ROC curve of any two distributions (in any number of variables) with a shifting likelihood-ratio, by locating the point on the ROC curve that is farthest from the diagonal.
For a two-interval task between these distributions, the optimal accuracy has a closed-form expression in terms of the generalized chi-squared distribution, from which the corresponding Bayes discriminability can be obtained.
RMS sd discriminability index
A common approximate (i.e. sub-optimal) discriminability index that has a closed form is to take the average of the variances, i.e. the rms of the two standard deviations: $d'_a = \left|\mu_a - \mu_b\right| / \sigma_{rms}$ with $\sigma_{rms} = \sqrt{(\sigma_a^2 + \sigma_b^2)/2}$ (also denoted by $d_a$). It is $\sqrt{2}$ times the $z$-score of the area under the receiver operating characteristic curve (AUC) of a single-criterion observer. This index is extended to general dimensions as the Mahalanobis distance using the pooled covariance, i.e. with $S_{rms} = \left[(\Sigma_a + \Sigma_b)/2\right]^{1/2}$ as the common sd matrix.
Average sd discriminability index
Another index is $d'_e = 2\left|\mu_a - \mu_b\right| / (\sigma_a + \sigma_b)$, extended to general dimensions using $S_{avg} = (S_a + S_b)/2$, the average of the two sd matrices, as the common sd matrix.
Comparison of the indices
It has been shown that for two univariate normal distributions, $d'_a \leq d'_e \leq d'_b$, and for multivariate normal distributions, $d'_a \leq d'_e$ still.
Thus, $d'_a$ and $d'_e$ underestimate the maximum discriminability $d'_b$ of univariate normal distributions. $d'_a$ can underestimate $d'_b$ by a maximum of approximately 30%. At the limit of high discriminability for univariate normal distributions, $d'_e$ converges to $d'_b$. These results often hold true in higher dimensions, but not always. Simpson and Fitter promoted $d'_a$ as the best index, particularly for two-interval tasks, but Das and Geisler have shown that $d'_b$ is the optimal discriminability in all cases, and $d'_e$ is often a better closed-form approximation than $d'_a$, even for two-interval tasks.
The approximate index $d'_{gm} = \left|\mu_a - \mu_b\right| / \sqrt{\sigma_a \sigma_b}$, which uses the geometric mean of the sd's, is less than $d'_b$ at small discriminability, but greater at large discriminability.
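A minimal numerical sketch of this comparison (again assuming SciPy; the two distributions are arbitrary examples) computes $d'_a$, $d'_e$ and $d'_b$ for a pair of unequal-variance normals and confirms the ordering stated above:

```python
import numpy as np
from scipy.stats import norm

mu_a, s_a = 0.0, 1.0          # two illustrative unequal-variance normals
mu_b, s_b = 2.0, 3.0

# Bayes error with equal priors is half the overlap of the two densities;
# integrate it numerically on a wide grid.
x = np.linspace(-40.0, 40.0, 400_001)
overlap = np.minimum(norm.pdf(x, mu_a, s_a), norm.pdf(x, mu_b, s_b))
e_b = 0.5 * np.trapz(overlap, x)

d_b = -2.0 * norm.ppf(e_b)                               # Bayes index
d_a = abs(mu_b - mu_a) / np.sqrt((s_a**2 + s_b**2) / 2)  # RMS-sd index
d_e = 2.0 * abs(mu_b - mu_a) / (s_a + s_b)               # average-sd index

print(f"d'_a={d_a:.3f}  d'_e={d_e:.3f}  d'_b={d_b:.3f}")
assert d_a <= d_e <= d_b   # the ordering stated above
```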
Contribution to discriminability by each dimension
In general, the contribution to the total discriminability by each dimension or feature may be measured using the amount by which the discriminability drops when that dimension is removed. If the total Bayes discriminability is $d'$ and the Bayes discriminability with dimension $i$ removed is $d'_{-i}$, we can define the contribution of dimension $i$ as $\sqrt{d'^2 - d'^2_{-i}}$. This is the same as the individual discriminability of dimension $i$ when the covariance matrices are equal and diagonal, but in the other cases, this measure more accurately reflects the contribution of a dimension than its individual discriminability.
Scaling the discriminability of two distributions
We may sometimes want to scale the discriminability of two data distributions by moving them closer or farther apart. One such case is when we are modeling a detection or classification task, and the model performance exceeds that of the subject or observed data. In that case, we can move the model variable distributions closer together so that it matches the observed performance, while also predicting which specific data points should start overlapping and be misclassified.
There are several ways of doing this. One is to compute the mean vector and covariance matrix of the two distributions, then effect a linear transformation to interpolate the mean and sd matrix (square root of the covariance matrix) of one of the distributions towards the other.
Another way is to compute the decision variables of the data points (the log likelihood ratio that a point belongs to one distribution versus the other) under a multinormal model, and then move these decision variables closer together or farther apart.
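A minimal sketch of the first of these approaches, assuming roughly normal data (the function and names below are our own illustration, not a published implementation):

```python
import numpy as np
from scipy.linalg import sqrtm

def move_toward(X, mu_target, Sigma_target, t):
    """Affinely map sample X so that its mean and sd matrix move a
    fraction t (0 = no change, 1 = full match) toward the target's."""
    mu = X.mean(axis=0)
    S = np.real(sqrtm(np.cov(X, rowvar=False)))       # sd matrix of X
    S_target = np.real(sqrtm(Sigma_target))
    mu_t = (1 - t) * mu + t * mu_target               # interpolated mean
    S_t = (1 - t) * S + t * S_target                  # interpolated sd matrix
    A = S_t @ np.linalg.inv(S)                        # linear part of the map
    return (X - mu) @ A.T + mu_t

rng = np.random.default_rng(0)
X = rng.multivariate_normal([2.0, 0.0], np.diag([1.0, 4.0]), size=5000)
Y = move_toward(X, np.zeros(2), np.eye(2), t=0.5)     # halfway toward N(0, I)
print(Y.mean(axis=0))   # ≈ [1, 0]: the mean has moved halfway to the origin
```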
See also
Receiver operating characteristic (ROC)
Summary statistics
Effect size
References
External links
Interactive signal detection theory tutorial including calculation of d′.
Detection theory
Signal processing
Summary statistics | Sensitivity index | Technology,Engineering | 1,308 |
233,654 | https://en.wikipedia.org/wiki/World%20Geodetic%20System | The World Geodetic System (WGS) is a standard used in cartography, geodesy, and satellite navigation including GPS. The current version, WGS 84, defines an Earth-centered, Earth-fixed coordinate system and a geodetic datum, and also describes the associated Earth Gravitational Model (EGM) and World Magnetic Model (WMM). The standard is published and maintained by the United States National Geospatial-Intelligence Agency.
History
Efforts to supplement the various national surveying systems began in the 19th century with F.R. Helmert's book Mathematical and Physical Theories of Physical Geodesy. Austria and Germany founded the Central Bureau of International Geodesy, and a series of global ellipsoids of the Earth were derived (e.g., Helmert 1906, Hayford 1910 and 1924).
A unified geodetic system for the whole world became essential in the 1950s for several reasons:
International space science and the beginning of astronautics.
The lack of inter-continental geodetic information.
The inability of the large geodetic systems, such as European Datum (ED50), North American Datum (NAD), and Tokyo Datum (TD), to provide a worldwide geo-data basis
Need for global maps for navigation, aviation, and geography.
Western Cold War preparedness necessitated a standardised, NATO-wide geospatial reference system, in accordance with the NATO Standardisation Agreement
WGS 60
In the late 1950s, the United States Department of Defense, together with scientists of other institutions and countries, began to develop the needed world system to which geodetic data could be referred and compatibility established between the coordinates of widely separated sites of interest. Efforts of the U.S. Army, Navy and Air Force were combined leading to the DoD World Geodetic System 1960 (WGS 60). The term datum as used here refers to a smooth surface somewhat arbitrarily defined as zero elevation, consistent with a set of surveyor's measures of distances between various stations, and differences in elevation, all reduced to a grid of latitudes, longitudes, and elevations. Heritage surveying methods found elevation differences from a local horizontal determined by the spirit level, plumb line, or an equivalent device that depends on the local gravity field (see physical geodesy). As a result, the elevations in the data are referenced to the geoid, a surface that is not readily found using satellite geodesy. The latter observational method is more suitable for global mapping. Therefore, a motivation, and a substantial problem in the WGS and similar work is to patch together data that were not only made separately, for different regions, but to re-reference the elevations to an ellipsoid model rather than to the geoid.
In accomplishing WGS 60, a combination of available surface gravity data, astro-geodetic data and results from HIRAN and Canadian SHORAN surveys were used to define a best-fitting ellipsoid and an earth-centered orientation for each initially selected datum. (Every datum is relatively oriented with respect to different portions of the geoid by the astro-geodetic methods already described.) The sole contribution of satellite data to the development of WGS 60 was a value for the ellipsoid flattening which was obtained from the nodal motion of a satellite.
Prior to WGS 60, the U.S. Army and U.S. Air Force had each developed a world system by using different approaches to the gravimetric datum orientation method. To determine their gravimetric orientation parameters, the Air Force used the mean of the differences between the gravimetric and astro-geodetic deflections and geoid heights (undulations) at specifically selected stations in the areas of the major datums. The Army performed an adjustment to minimize the difference between astro-geodetic and gravimetric geoids. By matching the relative astro-geodetic geoids of the selected datums with an earth-centered gravimetric geoid, the selected datums were reduced to an earth-centered orientation. Since the Army and Air Force systems agreed remarkably well for the NAD, ED and TD areas, they were consolidated and became WGS 60.
WGS 66
Improvements to the global system included the Astrogeoid of Irene Fischer and the astronautic Mercury datum. In January 1966, a World Geodetic System Committee composed of representatives from the United States Army, Navy and Air Force was charged with developing an improved WGS, needed to satisfy mapping, charting and geodetic requirements. Additional surface gravity observations, results from the extension of triangulation and trilateration networks, and large amounts of Doppler and optical satellite data had become available since the development of WGS 60. Using the additional data and improved techniques, WGS 66 was produced, which served DoD needs for about five years after its implementation in 1967. The defining parameters of the WGS 66 Ellipsoid were the flattening (1/298.25, determined from satellite data) and the semimajor axis (6,378,145 meters, determined from a combination of Doppler satellite and astro-geodetic data). A worldwide 5° × 5° mean free air gravity anomaly field provided the basic data for producing the WGS 66 gravimetric geoid. Also, a geoid referenced to the WGS 66 Ellipsoid was derived from available astrogeodetic data to provide a detailed representation of limited land areas.
WGS 72
After an extensive effort over a period of approximately three years, the Department of Defense World Geodetic System 1972 was completed. Selected satellite, surface gravity and astrogeodetic data available through 1972 from both DoD and non-DoD sources were used in a Unified WGS Solution (a large scale least squares adjustment). The results of the adjustment consisted of corrections to initial station coordinates and coefficients of the gravitational field.
The largest collection of data ever used for WGS purposes was assembled, processed and applied in the development of WGS 72. Both optical and electronic satellite data were used. The electronic satellite data consisted, in part, of Doppler data provided by the U.S. Navy and cooperating non-DoD satellite tracking stations established in support of the Navy's Navigational Satellite System (NNSS). Doppler data was also available from the numerous sites established by GEOCEIVERS during 1971 and 1972. Doppler data was the primary data source for WGS 72. Additional electronic satellite data was provided by the SECOR (Sequential Collation of Range) Equatorial Network completed by the U.S. Army in 1970. Optical satellite data from the Worldwide Geometric Satellite Triangulation Program was provided by the BC-4 camera system. Data from the Smithsonian Astrophysical Observatory was also used, which included camera (Baker–Nunn) and some laser ranging.
The surface gravity field used in the Unified WGS Solution consisted of a set of 410 10° × 10° equal area mean free air gravity anomalies determined solely from terrestrial data. This gravity field includes mean anomaly values compiled directly from observed gravity data wherever the latter was available in sufficient quantity. The value for areas of sparse or no observational data were developed from geophysically compatible gravity approximations using gravity-geophysical correlation techniques. Approximately 45 percent of the 410 mean free air gravity anomaly values were determined directly from observed gravity data.
The astrogeodetic data in its basic form consists of deflection of the vertical components referred to the various national geodetic datums. These deflection values were integrated into astrogeodetic geoid charts referred to these national datums. The geoid heights contributed to the Unified WGS Solution by providing additional and more detailed data for land areas. Conventional ground survey data was included in the solution to enforce a consistent adjustment of the coordinates of neighboring observation sites of the BC-4, SECOR, Doppler and Baker–Nunn systems. Also, eight geodimeter long line precise traverses were included for the purpose of controlling the scale of the solution.
The Unified WGS Solution, as stated above, was a solution for geodetic positions and associated parameters of the gravitational field based on an optimum combination of available data. The WGS 72 ellipsoid parameters, datum shifts and other associated constants were derived separately. For the unified solution, a normal equation matrix was formed based on each of the mentioned data sets. Then, the individual normal equation matrices were combined and the resultant matrix solved to obtain the positions and the parameters.
The value for the semimajor axis (a) of the WGS 72 Ellipsoid is 6,378,135 meters. The adoption of an a-value 10 meters smaller than that for the WGS 66 Ellipsoid was based on several calculations and indicators including a combination of satellite and surface gravity data for position and gravitational field determinations. Sets of satellite derived station coordinates and gravimetric deflection of the vertical and geoid height data were used to determine local-to-geocentric datum shifts, datum rotation parameters, a datum scale parameter and a value for the semimajor axis of the WGS Ellipsoid. Eight solutions were made with the various sets of input data, both from an investigative point of view and also because of the limited number of unknowns which could be solved for in any individual solution due to computer limitations. Selected Doppler satellite tracking and astro-geodetic datum orientation stations were included in the various solutions. Based on these results and other related studies accomplished by the committee, an a-value of 6,378,135 meters and a flattening of 1/298.26 were adopted.
In the development of local-to WGS 72 datum shifts, results from different geodetic disciplines were investigated, analyzed and compared. Those shifts adopted were based primarily on a large number of Doppler TRANET and GEOCEIVER station coordinates which were available worldwide. These coordinates had been determined using the Doppler point positioning method.
WGS 84
In the early 1980s, the need for a new world geodetic system was generally recognized by the geodetic community as well as within the US Department of Defense. WGS 72 no longer provided sufficient data, information, geographic coverage, or product accuracy for all then-current and anticipated applications. The means for producing a new WGS were available in the form of improved data, increased data coverage, new data types and improved techniques. Observations from Doppler, satellite laser ranging and very-long-baseline interferometry (VLBI) constituted significant new information. An outstanding new source of data had become available from satellite radar altimetry. Also available was an advanced least squares method called collocation that allowed for a consistent combination solution from different types of measurements all relative to the Earth's gravity field, measurements such as the geoid, gravity anomalies, deflections, and dynamic Doppler.
The new world geodetic system was called WGS 84. It is the reference system used by the Global Positioning System. It is geocentric and globally consistent within ±1 m. Current geodetic realizations of the geocentric reference system family International Terrestrial Reference System (ITRS) maintained by the IERS are geocentric, and internally consistent, at the few-cm level, while still being metre-level consistent with WGS 84.
The WGS 84 reference ellipsoid was based on GRS 80, but it contains a very slight variation in the inverse flattening, as it was derived independently and the result was rounded to a different number of significant digits. This resulted in a tiny difference of 0.105 mm in the semi-minor axis.
The primary parameters of the successive WGS ellipsoids compare as follows: WGS 60 (semi-major axis a = 6,378,165 m, inverse flattening 1/f = 298.3), WGS 66 (a = 6,378,145 m, 1/f = 298.25), WGS 72 (a = 6,378,135 m, 1/f = 298.26), and WGS 84 (a = 6,378,137 m, 1/f = 298.257223563).
Definition
The coordinate origin of WGS 84 is meant to be located at the Earth's center of mass; the uncertainty is believed to be less than 2 cm.
The WGS 84 meridian of zero longitude is the IERS Reference Meridian, 5.3 arc seconds or 102 metres (335 ft) east of the Greenwich meridian at the latitude of the Royal Observatory. (This is related to the fact that the local gravity field at Greenwich does not point exactly through the Earth's center of mass, but rather "misses west" of the center of mass by about 102 meters.) The longitude positions on WGS 84 agree with those on the older North American Datum 1927 at roughly 85° longitude west, in the east-central United States.
The WGS 84 datum surface is an oblate spheroid with equatorial radius a = 6,378,137 m at the equator and flattening f = 1/298.257223563. The refined value of the WGS 84 gravitational constant (mass of Earth's atmosphere included) is GM = 3.986004418 × 10¹⁴ m³/s². The angular velocity of the Earth is defined to be ω = 7.292115 × 10⁻⁵ rad/s.
This leads to several computed parameters such as the polar semi-minor axis b, which equals a(1 − f) = 6,356,752.3142 m, and the first eccentricity squared, e² = 6.69437999014 × 10⁻³.
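These derived values follow directly from the two defining shape parameters, as the following short Python check illustrates (a sketch, not an official implementation):

```python
# Derived WGS 84 quantities from the two defining shape parameters.
a = 6378137.0               # semi-major (equatorial) axis, metres
inv_f = 298.257223563       # inverse flattening
f = 1.0 / inv_f
b = a * (1.0 - f)           # polar semi-minor axis
e2 = f * (2.0 - f)          # first eccentricity squared
print(f"b  = {b:.4f} m")    # 6356752.3142 m
print(f"e2 = {e2:.14f}")    # 0.00669437999014
```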
Updates and new standards
The original standardization document for WGS 84 was Technical Report 8350.2, published in September 1987 by the Defense Mapping Agency (which later became the National Imagery and Mapping Agency). New editions were published in September 1991 and July 1997; the latter edition was amended twice, in January 2000 and June 2004. The standardization document was revised again and published in July 2014 by the National Geospatial-Intelligence Agency as NGA.STND.0036. These updates provide refined descriptions of the Earth and realizations of the system for higher precision.
The original WGS84 model had an absolute accuracy of 1–2 meters. WGS84 (G730) first incorporated GPS observations, taking the accuracy down to 10 cm/component rms. All following revisions including WGS84 (G873) and WGS84 (G1150) also used GPS.
WGS 84 (G1762) is the sixth update to the WGS reference frame.
WGS 84 has most recently been updated to use the reference frame G2296, which was released on 7 January 2024 as an update to G2139, now aligned to both the ITRF2020, the most recent ITRF realization, and the IGS20, the frame used by the International GNSS Service (IGS). G2139 was aligned with the IGb14 realization of the International Terrestrial Reference Frame (ITRF) 2014 and uses the new IGS Antex standard.
Updates to the original geoid for WGS 84 are now published as a separate Earth Gravitational Model (EGM), with improved resolution and accuracy. Likewise, the World Magnetic Model (WMM) is updated separately. The current version of WGS 84 uses EGM2008 and WMM2020.
Solution for Earth orientation parameters consistent with ITRF2014 is also needed (IERS EOP 14C04).
Identifiers
Components of WGS 84 are identified by codes in the EPSG Geodetic Parameter Dataset:
EPSG:4326 – 2D coordinate reference system (CRS)
EPSG:4979 – 3D CRS
EPSG:4978 – geocentric 3D CRS
EPSG:7030 – reference ellipsoid
EPSG:6326 – horizontal datum
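As an illustration of these identifiers in use, the following sketch converts WGS 84 geodetic coordinates to geocentric coordinates with the pyproj library (assuming it is installed; the test point is illustrative):

```python
# Converting WGS 84 geodetic coordinates (EPSG:4979) to Earth-centred
# ECEF coordinates (EPSG:4978) with pyproj.
from pyproj import Transformer

to_ecef = Transformer.from_crs("EPSG:4979", "EPSG:4978", always_xy=True)
lon, lat, h = -0.0015, 51.4778, 46.0   # near the Royal Observatory, Greenwich
x, y, z = to_ecef.transform(lon, lat, h)
print(f"X = {x:,.1f} m, Y = {y:,.1f} m, Z = {z:,.1f} m")
```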
See also
Degree Confluence Project
Earth Gravitational Model
European Terrestrial Reference System 1989
Geo (microformat) – for marking up WGS 84 coordinates in (X)HTML
geo URI scheme
Geographic information system
Geotagging
GIS file formats
North American Datum
Point of interest
TRANSIT system
References
External links
NGA Standardization Document Department of Defense World Geodetic System 1984, Its Definition and Relationships With Local Geodetic Systems (2014-07-08)
DMA Technical Report 8350.2 Department of Defense World Geodetic System 1984, Its Definition and Relationships With Local Geodetic Systems (1991-09-01). This edition documents the original Earth Gravitational Model.
NGA webpage for WGS 84
Geodesy for the Layman, Chapter VIII, "The World Geodetic System"
Spatial reference for EPSG:4326
ANTEX (.atx) files that define IGS20
Coordinate systems
Geodesy
Global Positioning System
Military globalization
Navigation | World Geodetic System | Mathematics,Technology,Engineering | 3,311 |
55,347,654 | https://en.wikipedia.org/wiki/Thermal%20stress | In mechanics and thermodynamics, thermal stress is mechanical stress created by any change in temperature of a material. These stresses can lead to fracturing or plastic deformation depending on the other variables of heating, which include material types and constraints. Temperature gradients, thermal expansion or contraction and thermal shocks are things that can lead to thermal stress. This type of stress is highly dependent on the thermal expansion coefficient which varies from material to material. In general, the greater the temperature change, the higher the level of stress that can occur. Thermal shock can result from a rapid change in temperature, resulting in cracking or shattering.
Temperature gradients
When a material is rapidly heated or cooled, the surface and the interior differ in temperature. Quick heating or cooling causes thermal expansion or contraction respectively; this localized movement of material causes thermal stresses. Imagine heating a cylinder: first the surface rises in temperature while the center remains at the initial temperature. After some time, the center of the cylinder will reach the same temperature as the surface. During the heat-up, the surface is relatively hotter and will expand more than the center. An example of this is that dental fillings can cause thermal stress in a person's mouth. Dentists sometimes use dental fillings with a different thermal expansion coefficient than tooth enamel; such fillings can expand faster than the enamel and cause pain.
Thermal expansion and contraction
Material will expand or contract depending on the material's thermal expansion coefficient. As long as the material is free to move, the material can expand or contract freely without generating stresses. Once this material is attached to a rigid body at multiple locations, thermal stresses can be created in the geometrically constrained region. This stress is calculated by multiplying the change in temperature, the material's thermal expansion coefficient and the material's Young's modulus (see formula below):

$\sigma = E \alpha (T_f - T_i)$

where $E$ is Young's modulus, $\alpha$ is the thermal expansion coefficient, $T_i$ is the initial temperature and $T_f$ is the final temperature.
When $T_f$ is greater than $T_i$, the constraints exert a compressive force on the material. The opposite happens while cooling; when $T_f$ is less than $T_i$, the stress will be tensile. A welding example involves heating and cooling of metal, which is a combination of thermal expansion, contraction, and temperature gradients. After a full cycle of heating and cooling, the metal is left with residual stress around the weld.
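For a sense of scale, here is a minimal Python sketch of the formula above, using typical textbook values for mild steel (an assumption, not from this article):

```python
# Thermal stress in a fully constrained bar, sigma = E * alpha * (T_f - T_i).
# The constants are typical textbook values for mild steel (assumed here).
E = 200e9        # Young's modulus, Pa
alpha = 12e-6    # linear thermal expansion coefficient, 1/K
T_i, T_f = 20.0, 70.0                 # initial and final temperature, deg C
sigma = E * alpha * (T_f - T_i)       # Pa; T_f > T_i -> compressive stress
print(f"thermal stress = {sigma / 1e6:.0f} MPa")   # 120 MPa
```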
Thermal shock
This is a combination of a large temperature gradient due to low thermal conductivity and a rapid change in temperature in brittle materials. The change in temperature causes tensile stresses on the surface, which encourages crack formation and propagation. Ceramic materials are usually susceptible to thermal shock. An example is when glass is heated to a high temperature and then quickly quenched in cold water. As the temperature of the glass falls rapidly, stresses are induced, causing fractures in the body of the glass that can be seen as cracks or, in some cases, shattering.
References
Solid mechanics | Thermal stress | Physics | 595 |
37,216,583 | https://en.wikipedia.org/wiki/Ambaragudda | Ambaragudda is a hill, covering located in Western Ghats village named "Marati" near Kodachadri in Sagara taluk, in the Indian state of Karnataka. It is covered with rainforests. Mining operations have drawn protests. The Karnataka government declared it as a natural heritage site of Western Ghat region in 2009.
Ambaragudda is a part of the Sharavathi valley and is located near the Linganamakki hydroelectric dam; the hill, together with the Ammanaghatta hill range, gives rise to five tributaries of the Sharavathi river.
Mining
Mining is opposed by local people, including environmentalists such as Raghaveshwara Bharathi, in view of massive damage to surrounding hills. Local people stopped mining activity during 2005. It was alleged that the mining company furnished false information to the court, stating that the hill is barren, even though it is covered with forests. Certain mining companies undertook illegal mining in 2004. Local people formed a front named "Kodachadri Sanjeevini" to protest all mining activities in and around Ambaragudda and Kodachadri hill range.
See also
Kodachadri
Raghaveshwara Bharathi
References
Hills of Karnataka
Mountains of the Western Ghats
Geography of Shimoga district
Biodiversity Heritage Sites of India | Ambaragudda | Biology | 269 |
10,833,335 | https://en.wikipedia.org/wiki/History%20monoid | In mathematics and computer science, a history monoid is a way of representing the histories of concurrently running computer processes as a collection of strings, each string representing the individual history of a process. The history monoid provides a set of synchronization primitives (such as locks, mutexes or thread joins) for providing rendezvous points between a set of independently executing processes or threads.
History monoids occur in the theory of concurrent computation, and provide a low-level mathematical foundation for process calculi, such as CSP, the language of communicating sequential processes, or CCS, the calculus of communicating systems. History monoids were first presented by M.W. Shields.
History monoids are isomorphic to trace monoids (free partially commutative monoids) and to the monoid of dependency graphs. As such, they are free objects and are universal. The history monoid is a type of semi-abelian categorical product in the category of monoids.
Product monoids and projection
Let $(\Sigma_1, \Sigma_2, \ldots, \Sigma_n)$ denote an n-tuple of (not necessarily pairwise disjoint) alphabets $\Sigma_k$. Let $P$ denote all possible combinations of one finite-length string from each alphabet:

$P = \Sigma_1^* \times \Sigma_2^* \times \cdots \times \Sigma_n^*$

(In more formal language, $P$ is the Cartesian product of the free monoids of the $\Sigma_k$. The superscript star is the Kleene star.) Composition in the product monoid is component-wise, so that, for

$u = (u_1, u_2, \ldots, u_n)$

and

$v = (v_1, v_2, \ldots, v_n)$,

then

$uv = (u_1 v_1, u_2 v_2, \ldots, u_n v_n)$

for all $u, v$ in $P$. Define the union alphabet to be

$\Sigma = \Sigma_1 \cup \Sigma_2 \cup \cdots \cup \Sigma_n$

(The union here is the set union, not the disjoint union.) Given any string $w \in \Sigma^*$, we can pick out just the letters in some $\Sigma_k^*$ using the corresponding string projection $\pi_k : \Sigma^* \to \Sigma_k^*$. A distribution $\pi : \Sigma^* \to P$ is the mapping that operates on $w \in \Sigma^*$ with all of the $\pi_k$, separating it into components in each free monoid:

$\pi(w) = \left(\pi_1(w), \pi_2(w), \ldots, \pi_n(w)\right)$
Histories
For every $a \in \Sigma$, the tuple $\pi(a)$ is called the elementary history of a. It serves as an indicator function for the inclusion of a letter a in an alphabet $\Sigma_k$. That is,

$\pi(a) = (a^{(1)}, a^{(2)}, \ldots, a^{(n)})$

where

$a^{(k)} = \begin{cases} a & \text{if } a \in \Sigma_k \\ \varepsilon & \text{otherwise.} \end{cases}$

Here, $\varepsilon$ denotes the empty string. The history monoid $H(\Sigma_1, \ldots, \Sigma_n)$ is the submonoid of the product monoid $P$ generated by the elementary histories: $H = \{\pi(a) : a \in \Sigma\}^*$ (where the superscript star is the Kleene star applied with a component-wise definition of composition as given above). The elements of $H$ are called global histories, and the projections of a global history are called individual histories.
Connection to computer science
The use of the word history in this context, and the connection to concurrent computing, can be understood as follows. An individual history is a record of the sequence of states of a process (or thread or machine); the alphabet $\Sigma_k$ is the set of states of the k-th process.
A letter that occurs in two or more alphabets serves as a synchronization primitive between the various individual histories. That is, if such a letter occurs in one individual history, it must also occur in another history, and serves to "tie" or "rendezvous" them together.
Consider, for example, $\Sigma_1 = \{a, b, c\}$ and $\Sigma_2 = \{a, d, e\}$. The union alphabet is of course $\Sigma = \{a, b, c, d, e\}$. The elementary histories are $(a, a)$, $(b, \varepsilon)$, $(c, \varepsilon)$, $(\varepsilon, d)$, and $(\varepsilon, e)$. In this example, an individual history of the first process might be $bca$ while the individual history of the second machine might be $dae$. Both of these individual histories are represented by the global history $\pi(bdcae) = (bca, dae)$, since the projection of this string onto the individual alphabets yields the individual histories. In the global history, the letters $b$ and $c$ can be considered to commute with the letters $d$ and $e$, in that these can be rearranged without changing the individual histories. Such commutation is simply a statement that the first and second processes are running concurrently, and are unordered with respect to each other; they have not (yet) exchanged any messages or performed any synchronization.
The letter $a$ serves as a synchronization primitive, as its occurrence marks a spot in both the global and individual histories that cannot be commuted across. Thus, while the letters $b$ and $c$ can be re-ordered past $d$ and $e$, they cannot be reordered past $a$. Thus, the global history $\pi(bdcae)$ and the global history $\pi(dbcae)$ both have $bca$ and $dae$ as individual histories, indicating that the execution of $b$ may happen before or after $d$. However, the letter $a$ is synchronizing, so that $e$ is guaranteed to happen after $b$, even though $e$ is in a different process than $b$.
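This projection-and-synchronization behaviour is easy to check mechanically; the following minimal Python sketch (our own illustration, not a standard library) implements the distribution map for the two alphabets of this example:

```python
S1, S2 = set("abc"), set("ade")   # the two alphabets; 'a' is shared

def project(w, alphabet):
    """String projection: keep only the letters of `alphabet`."""
    return "".join(ch for ch in w if ch in alphabet)

def distribution(w):
    """Map a global string to its tuple of individual histories."""
    return (project(w, S1), project(w, S2))

# 'b' and 'd' belong to different processes, so they commute ...
assert distribution("bdcae") == distribution("dbcae") == ("bca", "dae")
# ... but nothing commutes past the shared synchronizing letter 'a'.
assert distribution("bdcea") != distribution("bdcae")
```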
Properties
A history monoid is isomorphic to a trace monoid, and as such, is a type of semi-abelian categorical product in the category of monoids. In particular, the history monoid $H(\Sigma_1, \ldots, \Sigma_n)$ is isomorphic to the trace monoid with the dependency relation given by

$D = \bigcup_{1 \leq k \leq n} \left(\Sigma_k \times \Sigma_k\right)$

In simple terms, this is just the formal statement of the informal discussion given above: the letters in an alphabet $\Sigma_j$ can be commutatively re-ordered past the letters in an alphabet $\Sigma_k$, unless they are letters that occur in both alphabets. Thus, traces are exactly global histories, and vice versa.
Conversely, given any trace monoid with dependency relation $D$, one can construct an isomorphic history monoid by taking a sequence of alphabets $\Sigma_{(a,b)} = \{a, b\}$, where $(a, b)$ ranges over all pairs in $D$.
Notes
References
Antoni Mazurkiewicz, "Introduction to Trace Theory", pp. 3–41, in The Book of Traces, V. Diekert, G. Rozenberg, eds. (1995) World Scientific, Singapore
Volker Diekert, Yves Métivier, "Partial Commutation and Traces", In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, Vol. 3, Beyond Words, pages 457–534. Springer-Verlag, Berlin, 1997.
Concurrency (computer science)
Semigroup theory
Formal languages
Free algebraic structures | History monoid | Mathematics | 1,119 |
61,289,418 | https://en.wikipedia.org/wiki/Rhodium%20trifluoride | Rhodium(III) fluoride or rhodium trifluoride is the inorganic compound with the formula RhF3. It is a red-brown, diamagnetic solid.
Synthesis and structure
The compound is prepared by fluorination of rhodium trichloride:

2 RhCl3 + 3 F2 → 2 RhF3 + 3 Cl2

It can also be obtained by direct combination of the elements:

2 Rh + 3 F2 → 2 RhF3
Anhydrous RhF3 is insoluble in water and does not react with it, but its hydrates can be prepared by adding hydrofluoric acid to aqueous rhodium(III) solutions.
According to X-ray crystallography, the compound adopts the same structure as vanadium trifluoride, wherein the metal achieves octahedral coordination geometry.
References
Fluorides
Platinum group halides
Rhodium(III) compounds | Rhodium trifluoride | Chemistry | 173 |
55,533,305 | https://en.wikipedia.org/wiki/NGC%201934 | NGC 1934 (also known as ESO 56-SC109) is an emission nebula located in the Dorado constellation and part of the Large Magellanic Cloud. It was discovered by John Herschel on November 23, 1834. Its apparent magnitude is 10.50.
References
Emission nebulae
ESO objects
1934
Dorado
Large Magellanic Cloud
Astronomical objects discovered in 1834 | NGC 1934 | Astronomy | 77 |
12,686,484 | https://en.wikipedia.org/wiki/Drug%20policy | A drug policy is the policy regarding the control and regulation of psychoactive substances (commonly referred to as drugs), particularly those that are addictive or cause physical and mental dependence. While drug policies are generally implemented by governments, entities at all levels (from international organisations, national or local government, administrations, or public places) may have specific policies related to drugs.
Drug policies are usually aimed at combatting drug addiction or dependence addressing both demand and supply of drugs, as well as mitigating the harm of drug use, and providing medical assistance and treatment. Demand reduction measures include voluntary treatment, rehabilitation, substitution therapy, overdose management, alternatives to incarceration for drug related minor offenses, medical prescription of drugs, awareness campaigns, community social services, and support for families. Supply side reduction involves measures such as enacting foreign policy aimed at eradicating the international cultivation of plants used to make drugs and interception of drug trafficking, fines for drug offenses, incarceration for persons convicted for drug offenses. Policies that help mitigate the dangers of drug use include needle syringe programs, drug substitution programs, and free facilities for testing a drug's purity.
The concept of a "drug", a substance subject to control, varies from jurisdiction to jurisdiction. For example, heroin is regulated almost everywhere; substances such as khat, codeine, or alcohol are regulated in some places but not others. Most jurisdictions also regulate prescription drugs (medicinal drugs not considered dangerous but that can only be supplied to holders of a medical prescription) and sometimes drugs available without prescription but only from an approved supplier such as a pharmacy, though this is not usually described as a "drug policy". There are, however, some international standards as to which substances are under certain controls, in particular via the three international drug control conventions.
International drug control treaties
History
The first international treaty to control a psychoactive substance was adopted at the Brussels Conference in 1890 in the context of the regulations against the slave trade, and concerned alcoholic beverages. It was followed by the final act of the Shanghai Opium Commission of 1909, which sought to settle and regulate the opium trade after the Opium Wars of the 19th century.
In 1912, at the First International Opium Conference held in The Hague, the multilateral International Opium Convention was adopted; it was ultimately incorporated into the Treaty of Versailles in 1919. A number of international treaties related to drugs followed in subsequent decades: the 1925 Agreement concerning the Manufacture of, Internal Trade in and Use of Prepared Opium (which introduced some restrictions—but no total prohibition—on the export of "Indian hemp" pure extracts), the 1931 Convention for Limiting the Manufacture and Regulating the Distribution of Narcotic Drugs and Agreement for the Control of Opium Smoking in the Far East, and the 1936 Convention for the Suppression of the Illicit Traffic in Dangerous Drugs, among others. After World War II, a series of Protocols signed at Lake Success brought the pre-war treaties, which had been handled by the League of Nations and the Office international d'hygiène publique, into the mandate of the newly created United Nations.
In 1961 the nine previous drug-control treaties in force were superseded by the 1961 Single Convention, which rationalized global control on drug trading and use. Countries commit to "protecting the health and welfare of [hu]mankind" and to combat substance abuse and addiction. The treaty is not a self-enforcing agreement: countries have to pass their own legislation aligned with the framework of the Convention. The 1961 Convention was supplemented by the 1971 Convention and the 1988 Convention, forming the three international drug control treaties upon which other legal instruments rely. Their implementation has been led by the United States, in particular after the Nixon administration's declaration of "War on drugs" in 1971, and the creation of the Drug Enforcement Administration (DEA) as a U.S. federal law enforcement agency in 1973.
Since the early 2000s, the European Union (EU) has developed several comprehensive, multidisciplinary strategies as part of its drug policy to prevent the diffusion of recreational drug use and abuse among the European population and to raise public awareness of the adverse effects of drugs across all member states. It has also made conjoined efforts with European agencies, such as Europol and the EMCDDA, to counter organized crime and the illegal drug trade in Europe.
Current treaties
The core drug control treaties currently in force internationally are:
the Single Convention on Narcotic Drugs, 1961 (1961 Convention or Single Convention) composed of:
the original Single Convention concluded at New York City (United States), 30 March 1961, and
its amendment, the Protocol amending the Single Convention on Narcotic Drugs, which was adopted in Geneva (Switzerland), 25 March 1972,
the Convention on Psychotropic Substances (1971 Convention), concluded at Vienna, 21 February 1971, and
the UN Convention against Illicit Traffic in Narcotic Drugs and Psychotropic Substances (1988 Convention) concluded at Vienna (Austria), 20 December 1988.
There are other treaties that address drugs under international control, such as:
the UN Convention on the Law of the Sea (UNCLOS), concluded on 10 December 1982 in Montego Bay (Jamaica),
the Convention on the Rights of the Child (CRC), concluded on 20 November 1989 in New York City,
the International Convention Against Doping in Sport concluded in Paris (France) on 19 October 2005.
Additionally, other pieces of international law enter into play, like the international human rights treaties protecting the right to health or the rights of indigenous peoples, and, in the case of plants considered as drug crops (coca plant, cannabis, opium poppy), treaties protecting the right to land, farmers' of peasants' rights, and treaties on plant genetic resources or traditional knowledge.
Treaty-mandated organizations
There are four bodies mandated under the international drug control conventions (1961, 1971 and 1988):
The Commission on Narcotic Drugs (CND), a subsidiary body of the United Nations ECOSOC, the CND is acting as a Conference of the parties to the three core Conventions,
the UN Secretary-General, whose mandate is de facto carried on by the United Nations Office on Drugs and Crime (UNODC),
the World Health Organization (WHO), in charge of the scientific review of substances for inclusion under, changes in, or withdrawal from control (scheduling assessment),
the International Narcotics Control Board (INCB), the treaty-body monitoring implementation and collecting statistical data.
Drug policy by country
Australia
Australian drug laws are criminal laws that mostly exist at the state and territory level rather than the federal level; laws therefore differ between jurisdictions, which complicates any nationwide analysis of trends. The federal jurisdiction has enforcement powers over national borders.
In October 2016, Australia legislated for some medicinal use cannabis.
Bolivia
Like Colombia, the Bolivian government signed onto the ATPA in 1991 and called for the forced eradication of the coca plant in the 1990s and early 2000s. Until 2004, the government allowed each residential family to grow 1,600 m² of coca crop, enough to provide the family with a monthly minimum wage. In 2005, Bolivia saw another reformist movement when Evo Morales, the leader of a coca growers' union, was elected President. Morales ended any U.S.-backed War on Drugs. President Morales opposed the decriminalization of drugs but saw the coca crop as an important piece of indigenous history and a pillar of the community because of the traditional use of chewing coca leaves. In 2009, the Bolivian Constitution backed the legalization and industrialization of coca products.
Bolivia first proposed an amendment to the Single Convention on Narcotic Drugs in 2009. After its failure, Bolivia withdrew from the convention and re-acceded with a reservation for coca leaf in its natural form.
Canada
China
Colombia
Under President Ronald Reagan, the United States escalated its War on Drugs in the late 1980s; the Colombian drug lords were widely viewed as the root of the cocaine issue in America. In the 1990s, Colombia was home to the world's two largest drug cartels: the Cali cartel and the Medellín cartel. It became Colombia's priority, as well as the priority of the other countries in the Andean Region, to extinguish the cartels and drug trafficking from the region. In 1999, under President Andrés Pastrana, Colombia passed Plan Colombia. Plan Colombia funded the Andean Region's fight against the drug cartels and drug trafficking. With the implementation of Plan Colombia, the Colombian government aimed to destroy the coca crop. This prohibitionist regime has had controversial results, especially on human rights. Colombia has seen a significant decrease in coca cultivation. In 2001, there were 362,000 acres of coca crop in Colombia; by 2011, fewer than 130,000 acres remained. However, farmers who cultivated the coca crop for uses other than for the creation of cocaine, such as the traditional use of chewing coca leaves, became impoverished.
Since 1994, consumption of drugs has been decriminalized. However, possession and trafficking of drugs are still illegal. In 2014, Colombia further eased its prohibitionist stance on the coca crop by ceasing aerial fumigation of the coca crop and creating programs for addicts. President Juan Manuel Santos (2010–2018) called for the revision of Latin American drug policy and was open to talks about legalization.
Ecuador
In the mid-1980s, under President León Febres-Cordero, Ecuador adopted the prohibitionist drug policy recommended by the United States. By cooperating with the United States, Ecuador received tariff exemptions from the United States. In February 1990, the United States held the Cartagena Drug Summit, in the hopes of continuing progress on the War on Drugs. Three of the four countries in the Andean Region were invited to the Summit: Peru, Colombia and Bolivia, with the notable absence of Ecuador. Two of those three countries—Colombia and Bolivia—joined the Andean Trade Preference Act, later called the Andean Trade Promotion and Drug Eradication Act, in 1992. Ecuador, along with Peru, would eventually join the ATPA in 1993. The Act united the region in the War on Drugs as well as stimulated their economies with tariff exemptions.
In 1991, President Rodrigo Borja Cevallos passed Law 108, a law that decriminalized drug use, while continuing to prosecute drug possession. In reality, Law 108 set a trap that snared many citizens. Citizens confused the legality of use with the illegality of carrying drugs on their person. This led to a large increase in prison populations, as 100% of drug crimes were processed. In 2007, 18,000 prisoners were kept in a prison built to hold up to 7,000. In urban regions of Ecuador, as many as 45% of male inmates were serving time for drug charges; this prison demographic rises to 80% of female inmates. In 2008, under Ecuador's new Constitution, current prisoners serving time were allowed the "smuggler pardon" if they were prosecuted for purchasing or carrying up to 2 kg of any drug and had already served 10% of their sentence. Later, in 2009, Law 108 was replaced by the Organic Penal Code (COIP). The COIP contains many of the same rules and regulations as Law 108, but it established clear distinctions among large, medium and small drug traffickers, as well as between the mafia and rural growers, and prosecutes accordingly. In 2013, the Ecuadorian government left the Andean Trade Promotion and Drug Eradication Act.
Germany
Compared with other EU countries, Germany's drug policy is considered progressive, but still stricter than, for example, the Netherlands. In 1994 the Federal Constitutional Court ruled that drug addiction was not a crime, nor was the possession of small amounts of drugs for personal use. In 2000, Germany changed the narcotic law ("BtmG") to allow supervised drug injection rooms. In 2002, they started a pilot study in seven German cities to evaluate the effects of heroin-assisted treatment on addicts, compared to methadone-assisted treatment. The positive results of the study led to the inclusion of heroin-assisted treatment into the services of the mandatory health insurance in 2009.
In 2017, Germany re-allowed medical cannabis; after the 2021 German federal election, the new government announced in its coalition agreement the intention to legalise cannabis for all other purposes, including recreational use. This was implemented on 1 April 2024. Cannabis can be legally acquired from cannabis social clubs, which charge periodic membership fees and are limited to a maximum of 500 members each (as of 2024), or grown by consumers themselves, who may keep up to three plants.
India
Indonesia
Like many other governments in Southeast Asia, the Indonesian government applies severe laws to discourage drug use.
Liberia
Liberia prohibits drugs such as cocaine and marijuana. Its drug laws are enforced by the Liberia Drug Enforcement Agency.
Netherlands
Drug policy in the Netherlands is based on two principles: that drug use is a health issue rather than a criminal issue, and that a distinction is to be made between hard and soft drugs. The Netherlands was also one of the first countries to introduce heroin-assisted treatment and safe injection sites. Since 2008, a number of town councils have closed many of the so-called coffee shops that sold cannabis, or have imposed new restrictions on cannabis sales, for example to foreigners.
Importing and exporting any classified drug is a serious offence, with penalties of up to 12 to 16 years for hard drugs and a maximum of 4 years for importing or exporting large quantities of cannabis. Investment in treatment and prevention of drug addiction is high compared with the rest of the world, yet the Netherlands also spends significantly more per capita on drug law enforcement than all other countries in the EU: 75% of drug-related public spending goes to law enforcement. Drug use remains at average Western European levels and slightly lower than in English-speaking countries.
Peru
According to article 8 of the Constitution of Peru, the state is responsible for battling and punishing drug trafficking. Likewise, it regulates the use of intoxicants. Consumption of drugs is not penalized and possession is allowed for small quantities only. Production and distribution of drugs are illegal.
In 1993, Peru, along with Ecuador, signed the Andean Trade Preference Act (ATPA) with the United States, later replaced by the Andean Trade Promotion and Drug Eradication Act. Bolivia and Colombia had already signed the ATPA in 1991 and began enjoying its benefits in 1992. By agreeing to the terms of the Act, these countries worked in concert with the United States to fight drug trafficking and production at the source. The Act aimed to substitute other agricultural products for the coca plant, and in return for their eradication efforts the countries were granted U.S. tariff exemptions on certain products, such as certain types of fruit. Peru ceased complying with the ATPA in 2012 and lost all tariff exemptions previously granted through it. By the end of 2012, Peru had overtaken Colombia as the world's largest cultivator of the coca plant.
Poland
Portugal
In July 2001, a law maintained the status of illegality for using or possessing any drug for personal use without authorization. However, the offense was changed from a criminal one, with prison a possible punishment, to an administrative one, provided the amount possessed was no more than a ten-day supply of the substance. This was in line with de facto Portuguese drug policy before the reform. Drug addicts were then aggressively targeted with therapy or community service rather than fines or waivers. Although there are no criminal penalties, these changes did not legalize drug use in Portugal: possession has remained prohibited by Portuguese law, and criminal penalties are still applied to drug growers, dealers and traffickers.
Russia
Drugs became popular in Russia among soldiers and the homeless, particularly due to the First World War. Morphine-based drugs and cocaine were readily available. The government under Tsar Nicholas II of Russia had outlawed alcohol (including vodka) in 1914 as a temporary measure until the conclusion of the war. Following the Russian Revolution, in particular the October Revolution, and the Russian Civil War, the Bolsheviks emerged victorious as the new political power in Russia. The Soviet Union inherited a population with widespread drug addiction and, in the 1920s, tried to tackle it by introducing a 10-year prison sentence for drug dealers. The Bolsheviks also decided in August 1924 to reintroduce the sale of vodka, which, being more readily available, led to a drop in drug use.
Sweden
Sweden's drug policy has gradually turned from lenient in the 1960s, with an emphasis on the drug supply, towards a policy of zero tolerance against all illicit drug use (including cannabis). The official aim is a drug-free society. Drug use became a punishable crime in 1988; personal use does not result in jail time unless combined with driving a car. Prevention includes widespread drug testing, and penalties range from fines for minor drug offenses up to a 10-year prison sentence for aggravated offenses. Suspended sentences may be conditioned on regular drug tests or submission to rehabilitation treatment. Drug treatment is free of charge and provided through the health care system and the municipal social services. Minors whose drug use threatens their health and development can be forced into mandatory treatment if they do not seek it voluntarily; the same can apply to adults whose use threatens the immediate health or security of others (such as the child of an addict).
Among 9th-year students, drug experimentation was highest in the early 1970s, falling towards a low in the late 1980s, redoubling in the 1990s, then stabilizing and slowly declining in the 2000s. Estimates of the number of heavy drug addicts rose from 6,000 in 1967 to 15,000 in 1979, 19,000 in 1992 and 26,000 in 1998. Inpatient data suggest there were 28,000 such addicts in 2001 and 26,000 in 2004, but these last two figures may reflect the recent trend in Sweden towards outpatient treatment of drug addicts rather than an actual decline in drug addiction.
The United Nations Office on Drugs and Crime (UNODC) reports that Sweden has one of the lowest drug use rates in the Western world, and attributes this to a drug policy that invests heavily in prevention and treatment as well as strict law enforcement. The general drug policy is supported by all political parties and, according to opinion polls made in the mid 2000s, the restrictive approach received broad support from the public at that time.
Switzerland
The national drug policy of Switzerland was developed in the early 1990s and comprises the four elements of prevention, therapy, harm reduction and prohibition. In 1994 Switzerland was one of the first countries to try heroin-assisted treatment and other harm reduction measures like supervised injection rooms. In 2008 a popular initiative by the right wing Swiss People's Party aimed at ending the heroin program was rejected by more than two-thirds of the voters. A simultaneous initiative aimed at legalizing marijuana was rejected at the same ballot.
Between 1987 and 1992, illegal drug use and sales were permitted in Platzspitz park, Zurich, in an attempt to counter the growing heroin problem. However, as the situation grew increasingly out of control, authorities were forced to close the park.
In 2022, Switzerland initiated pilot trials for the non-medical use of cannabis.
Thailand
Thailand has a strict drug policy. The use, storage, transportation and distribution of drugs are illegal. In 2021, Thailand unified all its laws on narcotics, psychotropic substances and inhalants into the Narcotics Code 2564 BE (2021 AD), with a more relaxed policy. Sentences for many narcotics offenses were reduced, as the new law focuses more on drug rehabilitation. According to the Narcotics Code, narcotic substances are divided into five categories.
Category I – highly addictive narcotics such as heroin, amphetamines, methamphetamines, etc.
Category II – highly addictive narcotics with medical uses, such as morphine, cocaine, ketamine, codeine, medicinal opium (opium extracts or products), etc.
Category III – drug formularies that legally contain category II narcotics, etc.
Category IV – chemicals used for synthesizing category I and II narcotics, such as acetic anhydride, acetyl chloride, etc.
Category V – narcotic plants such as opium poppy and magic mushrooms, cannabis extracts with THC higher than 0.2% by weight, and cannabis seed extracts.
Under the current law, kratom and the cannabis plant no longer belong to category V and are no longer considered narcotic plants. However, cultivation, possession, distribution, and use of these plants are still controlled by various levels of permits and regulations.
It is also illegal to import more than 200 cigarettes per person into Thailand. Checks take place at airport customs; if the limit is exceeded, the owner can be fined up to ten times the cost of the cigarettes.
In January 2018, Thai authorities imposed a ban on smoking on beaches in some tourist areas. Those who smoke in public places can be punished with a fine of 100,000 baht or imprisonment for up to one year. It is forbidden to import electronic cigarettes into Thailand; these items are likely to be confiscated, and offenders can be fined or imprisoned for up to 10 years. The sale or supply of electronic cigarettes and similar devices is also prohibited and is punishable by a fine or imprisonment of up to 5 years.
Most people arrested for possessing a small amount of category V substances are fined rather than imprisoned. At present, Thai anti-drug police regard methamphetamines as the more serious and dangerous problem.
On 9 February 2024, the Public Health Ministry published possession limits for many illicit drugs. Under these limits, a person possessing a small amount of an illegal drug is sent to a rehabilitation program instead of prison, marking another progressive step in Thailand's drug policy.
Ukraine
Crimes in the sphere of trafficking in narcotic, psychotropic substances and crimes against health are classified using the 13th section of the Criminal Code of Ukraine; articles from 305 to 327.
According to official statistics for 2016, 53% of drug crimes fall under Article 309 of the Criminal Code of Ukraine: "illegal production, manufacture, acquisition, storage, transportation or shipment of narcotic drugs, psychotropic substances or their analogues without the purpose of sale".
The sentences for this crime are:
fine of fifty to one hundred non-taxable minimum incomes of citizens;
or correctional labor for up to two years;
or arrest for up to six months, or restriction of liberty for up to three years;
or imprisonment for the same term.
On 28 August 2013, the Cabinet of Ministers of Ukraine adopted a strategy for state drug policy through 2020, the first document of its kind in Ukraine. The strategy, developed by the State Drug Control Service, involves strengthening criminal liability for distributing large amounts of drugs while easing the penalty for possession of small doses. Under this strategy, the number of injecting drug users is planned to fall by 20%, and the number of drug overdose deaths by 30%, by 2020.
In October 2018, the State Service of Ukraine on Drugs and Drug Control issued the first license for the import and re-export of raw materials and products derived from cannabis. The corresponding licenses were obtained by the US company C21, which is also in the process of applying for additional licenses, including for the cultivation of hemp.
United Kingdom
Drugs considered addictive or dangerous in the United Kingdom (with the exception of tobacco and alcohol) are called "controlled substances" and regulated by law. Until 1964 the medical treatment of dependent drug users was separated from the punishment of unregulated use and supply, an arrangement confirmed by the Rolleston Committee in 1926. This policy on drugs, known as the "British system", was maintained in Britain, and nowhere else, until the 1960s. Under this policy, drug use remained low; there was relatively little recreational use, and the few dependent users were prescribed drugs by their doctors as part of their treatment. From 1964 drug use was increasingly criminalised, with the framework still in place today largely determined by the Misuse of Drugs Act 1971.
United States
Modern US drug policy still has its roots in the War on Drugs started by President Richard Nixon in 1971.
In the United States, illegal drugs fall into different categories and punishment for possession and dealing varies on amount and type. Punishment for marijuana possession is light in most states, but punishment for dealing and possession of hard drugs can be severe, and has contributed to the growth of the prison population.
US drug policy is also deeply entwined with foreign policy, supporting military and paramilitary actions in South America, Central Asia, and elsewhere to eradicate the growth of coca and opium. In Colombia, U.S. president Bill Clinton dispatched military and paramilitary personnel to interdict the planting of coca as part of Plan Colombia. The project is often criticized for its ineffectiveness and its negative impact on local farmers, but it has been effective in destroying the once-powerful drug cartels and guerrilla groups of Colombia. President George W. Bush intensified anti-drug efforts in Mexico by initiating the Mérida Initiative, which has faced criticism for similar reasons.
On 21 May 2012, the U.S. Government published an updated version of its drug policy.
The director of the ONDCP stated at the same time that this policy is something different from the "War on Drugs":
The U.S. Government sees the policy as a "third way" approach to drug control, one based on the results of a huge investment in research from some of the world's preeminent scholars on the disease of substance abuse.
The policy does not see drug legalization as the "silver bullet" solution to drug control.
It is not a policy where success is measured by the number of arrests made or prisons built.
The U.S. government provides grants to develop and disseminate evidence-based addiction treatments. These grants have supported several practices that NIDA endorses, such as the community reinforcement approach and community reinforcement and family training, which are behavior-therapy interventions.
See also
Cannabis rights
Chasing the Scream
Drug checking
Drug liberalization
Drug policy reform
Harm reduction
Legality of cannabis
Maintenance dose
Psilocybin decriminalization in the United States
Supervised injection site
References
Drug control law
Drug control treaties
Criminal justice reform
Public health
Health policy
Global health
World Health Organization | Drug policy | Chemistry | 5,379 |
4,537,039 | https://en.wikipedia.org/wiki/Medial%20axis | The medial axis of an object is the set of all points having more than one closest point on the object's boundary. Originally referred to as the topological skeleton, it was introduced in 1967 by Harry Blum as a tool for biological shape recognition. In mathematics the closure of the medial axis is known as the cut locus.
In 2D, the medial axis of a subset S which is bounded by planar curve C is the locus of the centers of circles that are tangent to curve C in two or more points, where all such circles are contained in S. (It follows that the medial axis itself is contained in S.)
The medial axis of a simple polygon is a tree whose leaves are the vertices of the polygon, and whose edges are either straight segments or arcs of parabolas.
The medial axis together with the associated radius function of the maximally inscribed discs is called the medial axis transform (MAT). The medial axis transform is a complete shape descriptor (see also shape analysis), meaning that it can be used to reconstruct the shape of the original domain.
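The reconstruction works by taking the union of the maximal inscribed discs that the transform records. A minimal sketch of this idea in Python follows; the centre points and radii are invented toy data standing in for a real MAT, not a computed one:

```python
# Reconstructing a shape from a medial axis transform (MAT): the domain is
# recovered as the union of the maximal inscribed discs.
import numpy as np

centres = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # toy MAT centres
radii = np.array([1.0, 0.8, 1.0])                          # toy MAT radii

# Rasterise: a grid point lies in the shape iff at least one disc covers it.
xs, ys = np.meshgrid(np.linspace(-1.5, 3.5, 250), np.linspace(-1.5, 1.5, 150))
pts = np.stack([xs, ys], axis=-1)                             # (H, W, 2)
dists = np.linalg.norm(pts[..., None, :] - centres, axis=-1)  # (H, W, 3)
inside = (dists <= radii).any(axis=-1)                        # boolean mask

print(f"{inside.mean():.0%} of the sampling window lies in the shape")
```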
The medial axis is a subset of the symmetry set, which is defined similarly, except that it also includes circles not contained in S. (Hence, the symmetry set of S generally extends to infinity, similar to the Voronoi diagram of a point set.)
The medial axis generalizes to k-dimensional hypersurfaces by replacing 2D circles with k-dimensional hyperspheres. The 2D medial axis is useful for character and object recognition, while the 3D medial axis has applications in surface reconstruction for physical models and in dimensional reduction of complex models. In any dimension, the medial axis of a bounded open set is homotopy equivalent to the given set.
If S is given by a unit-speed parametrisation γ(t), and T(t) = dγ(t)/dt is the unit tangent vector at each point, then there will be a bitangent circle with center c and radius r if (c − γ(t)) · (c − γ(t)) = r² and T(t) · (c − γ(t)) = 0 hold for two or more values of the parameter t.
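As a quick numeric illustration, the sketch below verifies both conditions for the unit circle γ(t) = (cos t, sin t), a unit-speed curve for which the disc's medial axis is the single centre point c = (0, 0) with r = 1:

```python
# Check the tangency conditions (c - g).(c - g) = r^2 and T.(c - g) = 0
# for the unit circle, whose maximal inscribed disc is centred at the origin.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 9)
gamma = np.stack([np.cos(t), np.sin(t)], axis=1)  # points gamma(t) on the curve
T = np.stack([-np.sin(t), np.cos(t)], axis=1)     # unit tangents d(gamma)/dt
c, r = np.zeros(2), 1.0

assert np.allclose(np.sum((c - gamma) ** 2, axis=1), r ** 2)
assert np.allclose(np.sum(T * (c - gamma), axis=1), 0.0)
print("both tangency conditions hold at all sampled t")
```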
For most curves, the symmetry set will form a one-dimensional curve and can contain cusps. The symmetry set has end points corresponding to the vertices of S.
See also
Grassfire transform
Local feature size
Straight skeleton
Voronoi diagram – which can be regarded as a discrete form of the medial axis.
References
Further reading
External links
The Scale Axis Transform – a generalization of the medial axis
Straight Skeleton for polygon with holes – Straight Skeleton builder implemented in java.
Geometric shapes | Medial axis | Mathematics | 492 |
19,517,173 | https://en.wikipedia.org/wiki/CQ%20Camelopardalis | CQ Camelopardalis, abbreviated as CQ Cam, is a solitary variable star in the northern circumpolar constellation Camelopardalis. It has an apparent magnitude of 5.19, making it visible to the naked eye under ideal conditions. The object is relatively far at a distance of about 2,000 light years but is drifting closer with a heliocentric radial velocity of . It has a peculiar velocity of , making it a runaway star.
CQ Cam has a stellar classification of M0 II, indicating that it is a red bright giant. CQ Cam is currently on the asymptotic giant branch, fusing hydrogen and helium shells around an inert carbon core. At present it has 12.7 times the mass of the Sun but, at the age of 16 million years, it has expanded to 333 times the radius of the Sun. The object is a luminous star, with a bolometric luminosity over 10,000 times that of the Sun. Despite this brightness, CQ Cam's large diameter yields an effective temperature of from its photosphere, giving a red hue.
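As a consistency check on the quoted figures, the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(T/T☉)⁴ can be inverted for the implied effective temperature. The sketch below uses the radius and the 10,000 L☉ luminosity lower bound given above, together with the IAU nominal solar effective temperature of 5772 K:

```python
# Effective temperature implied by L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
T_SUN = 5772.0  # K, IAU nominal solar effective temperature
R = 333.0       # stellar radius in solar radii (from the text)
L = 10_000.0    # luminosity in solar luminosities (lower bound from the text)

T_eff = T_SUN * (L / R**2) ** 0.25
print(f"implied T_eff ~ {T_eff:.0f} K")  # ~3200 K, consistent with a red hue
```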
CQ Cam has been classified as a low amplitude slow irregular variable based on Hipparcos photometry. However, there have not been enough observations to confirm this.
References
Camelopardalis
020797
015890
M-type bright giants
Camelopardalis, CQ
1009
Slow irregular variables
BD+64 00391
Runaway stars | CQ Camelopardalis | Astronomy | 303 |
30,916,131 | https://en.wikipedia.org/wiki/Landfill%20gas%20emission%20reduction%20in%20Brazil | Brazil has established a strong public policy using Clean Development Mechanism Projects to reduce methane emissions from landfills. An important component of these projects is the sale of avoided emissions by the private market to generate revenue.
Introduction
Faced with serious pollution challenges, Brazil established public policy that would create incentives for the foreign and national private market to invest financial, technological, and human resources in the country. The premise is that experienced companies would bring their technology to Brazil in an effort to reduce methane gas emissions. The specific technology and projects discussed in this article refer to landfill gas projects. Although this technology was new to Brazil in the early 2000s, when companies first began implementing it, these methods were not new to Europe or North America. Additionally, Brazil is just one of many countries participating in similar projects around the world.
Background
Brazil signed the Kyoto Protocol on April 29, 1998, and ratified it on August 23, 2002. To date, Brazil has 347 clean development mechanism (CDM) projects, which account for 7.3% of the total worldwide. Projections by the United Nations Environment Programme (UNEP) estimate that by 2012 Brazil will have 102 million certified emission reductions (CERs), a $1,225 million value. Unlike in its fellow BRIC countries, the largest component of potential CER projects in Brazil is landfill gas projects, with a 31.3% share. According to the national survey on basic sanitation (PNSB) conducted in 2008, all of the 5,564 municipalities have access to basic sanitation. According to a study by the Environmental Sanitation Technology Company (CETESB), the 6,000 waste sites in Brazil receive 60,000 tonnes of waste per day. Seventy-six percent (76%) of this waste goes to dumps with no management, gas collection, or water treatment. The same study showed that 83.18% of Brazil's methane gas emissions come from uncontrolled waste sites.
Landfill gas projects
Private companies have submitted CDM projects to the United Nations Framework Convention on Climate Change (UNFCCC) to use landfill gas (LFG) discharges from waste management sites to earn carbon credits or CER. There are over 100 LFG CDM projects in Brazil. The diagram below illustrates the process.
First, once the waste management company has developed the landfill with the new technology, (1a) it calculates how much methane (CH4) would have been emitted into the air without its intervention. (1b) Then it converts the CH4 into carbon equivalents (CO2e). (2a) Next, the company projects how much methane it expects to emit into the air with the new technology. Again, (2b) it converts the CH4 into CO2e. (3) Next, the company determines the avoided emissions, or CERs, by subtracting the emission projections with the technology from the baseline emissions without the technology. (4) Once credited, the company sells the CERs through a broker to companies that will produce emissions greater than their allotted capacity.
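A minimal sketch of this avoided-emissions arithmetic, assuming the 100-year global warming potential of 21 tonnes CO2e per tonne of CH4 used in Kyoto-era CDM accounting; the tonnage inputs are invented placeholders:

```python
# Steps (1)-(3) of the CER calculation described above.
GWP_CH4 = 21  # tonnes CO2e per tonne CH4 (Kyoto first-commitment-period value)

def certified_emission_reductions(baseline_ch4_t, project_ch4_t):
    baseline_co2e = baseline_ch4_t * GWP_CH4  # steps 1a-1b: without intervention
    project_co2e = project_ch4_t * GWP_CH4    # steps 2a-2b: with the technology
    return baseline_co2e - project_co2e       # step 3: avoided emissions (CERs)

print(certified_emission_reductions(5_000, 1_200), "t CO2e available for sale")
```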
SASA
The SASA landfill is located in Tremembé in São Paulo State of Brazil. Onyx SASA is a subsidiary of Veolia Environnement and is an officially registered project with UNFCCC, as of November 24, 2005. SenterNovem, an agency of the Dutch Ministry of Economic Affairs in the Netherlands, is a partner in the project. The following flow chart depicts the process used by the landfill:
(1) Methane (CH4) or carbon equivalents (CO2e) are captured by the vertical wells. (2) Next, a horizontal drain that is connected to the vertical wells extracts the CO2e. (3) Then, a high density collection pipe captures the CO2e and transfers it to the evaporator. (4) Any CO2e that did not evaporate, is transferred to an enclosed flare. (5) The remaining emissions are then vented into the air.
At the filing of the report, Onyx SASA anticipated the landfill would accrue 700,625 tons CO2e in CERs from 2003 through 2012. As of 2011, Onyx SASA has filed monitoring reports for the periods from 2003 through 2007. The following chart outlines the actual CERs realized to date:
Additionally, the project design report states Onyx SASA expects to revegetate and reforest the land; upon fulfillment, 150,000 trees will be planted around the landfill.
Paulínia
Empresa de Saneamento e Tratamento de Resíduos (ESTRE) is a private waste management Brazilian-based company, founded in 1999. ESTRE operates seven sites in Brazil, Uruguay, and Argentina. It offers waste management services, including recycling and landfills, to private companies and the government. The Paulínia Landfill Gas Project (EPLGP) is located in Campinas in São Paulo State of Brazil. The project was registered on March 3, 2006, with UNFCCC.
The goal of the EPLGP is to reduce greenhouse emissions. The following schematic illustrates the process of capturing and recycling the gas emissions:
As illustrated above, (1) wells installed in the landfill collect the methane (CH4). (2) Next, high density pipes connected to the wells transfer the CH4 to the blower. (3) Any remaining CH4 is then sent to the flare. (4) Last, the CH4 is flared into the air.
The following table outlines the forecasted and actual yearly outputs of CER according to the monitoring reports filed with UNFCCC:
^The notable increase in actual CER versus the projected CER is due to the increase in waste received by the landfill, from 2.5 tons per day as reported in the CDM application to 5 tons per day.
Legislation: National Policy on Climate Change
After Brazil's Congress passed the climate change legislation, on December 29, 2009, President Luiz Inácio Lula da Silva signed the National Policy on Climate Change (PNMC). The law requires Brazil to reduce greenhouse gas emissions by 38.9% by 2020. On December 9, 2010, President Lula signed a decree which details the provisions of PNMC. At its foundation, PNMC focuses on prevention, citizen participation, and sustainable development.
Law N° 12.187 of 2009
There are 13 articles in the legislation:
Article 1 establishes the laws and principals governing PNMC.
Article 2 establishes definitions and key terms related to climate change including, adverse effects of climate change, emissions, greenhouse gases, and mitigation.
Article 3 states:
Everyone has a duty to reduce the human impact on climate change
Steps will be taken to anticipate, prevent, and minimize the causes of climate change as determined by the scientific community
Measures will be taken to consider and distribute the burden among various socio-economic populations and communities
Sustainable development is a prerequisite for mitigating climate change and the needs of each population and territory should be dually considered
Actions taken at the national level to mitigate climate change must consider actions taken at the municipal, state, and private sectors
Vetoed
Article 4 outlines the goals:
Reconcile socio-economic development and climate protection
Reduce greenhouse emissions
Vetoed
Strengthen methods for removal of anthropogenic green house emissions
Use measures to promote adaption by all three spheres of the Brazilian Federation with participation and collaboration from economic and social sectors, in particular those most affected by climate change
Preserve, conserve, and restore environmental resources, in particular natural biomes considered a National Heritage
Consolidate and expand legally protected areas and encourage reforestation and revegetation of degraded areas
Stimulate development of the Brazilian Market for Emissions Reduction (MBRE)
Article 5 establishes the guidelines of PNMC:
Follow-through on commitments made through the Kyoto Protocol and other climate change measures
Assess measurable benefits, quantifiable and verifiable, of mitigation actions
Adapt measures to reduce adverse effects
Integrate strategies at the local, regional, and national levels
Encourage participation of the federal, state, county, and municipal levels, as well as the productive sector, academia and civil society organizations, and implement policies, plans, programs, and actions
Promote and develop scientific research
Utilize financial and economic instruments to promote mitigation actions
Identify and articulate the instruments used to protect climate change
Promote activities that effectively reduce gas emissions
Promote international cooperation
Improve systematic observation
Promote dissemination of information, education, and training
Stimulate and support practices and activities with low emissions and sustainable consumption
Article 6 outlines the instruments, committees, plans, funding, policy, research, monitoring, indicators, and assessments PNMC will utilize toward climate change.
Article 7 outlines institutional instruments PNMC will utilize:
Interministerial Committee on Climate Change
Interministerial Commission on Global Climate Change
Brazilian Forum on Climate Change
The Brazilian Network for Research on Global Climate Change - Climate Network
Commission for Coordination of Activities of Meteorology, Climatology and Hydrology
Article 8 addresses the official financial institutions line of credit to support climate change efforts.
Article 9 notes that MBRE will be monitored by the Securities Commission.
Article 10 was vetoed.
Article 11 states that public policy and government programs should be compatible with PNMC.
Article 12 states the country's adoption of greenhouse emission gas reductions of 36.1% to 38.9%.
Article 13 states that the law will become official on its publication date of December 31, 2009.
Presidential decree N° 7.390 of 2010
The decree specifies how Brazil quantifies greenhouse emissions, how it will achieve the reduction, and a legal requirement for estimating annual emissions. The policy will use Brazil's 2005 emission rate as the business-as-usual baseline for comparison of future emissions. The Policy:
Provides authority for adopting mitigation actions to achieve the reduction goal
Requires reduction efforts to be compatible with sustainable development and economic and social interests
Designates instruments for implementation, include the National Climate Change Plan and the National Climate Change Fund
Establishes sectoral plans for mitigation and adaptation in forests, agriculture, energy, and transportation
Creates the Brazilian Market for Emissions Reductions (MBRE) for trading in avoided emissions certificates
Specifically, the decree lists the following Action Plans:
Prevention and Control of Deforestation in the Amazon
Prevention and Control of Deforestation and Forest Fires in the Cerrado
Ten Year Plan for Expansion of Energy
Consolidation of an Economy of Low-Carbon in Agriculture
Reducing Emissions from Steel
Per the decree, the Sectoral Plans will include:
Emission reduction target in 2020 and incremental goals with a maximum interval of three years
Actions to be implemented
Definition of indicators for monitoring and evaluating effectiveness
Proposed regulatory instruments and incentives for implementation
Competitive alignment with industry studies' estimated costs and impacts
Per the decree, the following sectors are included in the estimations:
Change of Land Use: 1,404 million tons of CO2e (e=equivalent)
Energy: 868 million tons of CO2e
Agriculture: 730 million tons of CO2e
Industrial Processes and Waste: 234 million tons of CO2e
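These sector figures sum to the total against which the 36.1%–38.9% reduction of Article 12 of Law 12.187 applies; the arithmetic is mechanical:

```python
# Sector estimates from the decree (million tonnes CO2e), summed and combined
# with the 36.1%-38.9% reduction range set by Article 12 of Law 12.187.
sectors = {
    "change of land use": 1_404,
    "energy": 868,
    "agriculture": 730,
    "industrial processes and waste": 234,
}
baseline = sum(sectors.values())  # 3,236 Mt CO2e
low, high = 0.361 * baseline, 0.389 * baseline
print(f"total: {baseline} Mt CO2e; required cut: {low:.0f}-{high:.0f} Mt CO2e")
```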
The National Climate Change Fund "supports mitigation and adaptation projects and will rely principally on a to-be-determined portion of future oil and gas revenues."
See also
Environment of Brazil
References
Emissions reduction
Climate change in Brazil
Landfill
Brazil | Landfill gas emission reduction in Brazil | Chemistry | 2,281 |
39,664,301 | https://en.wikipedia.org/wiki/Ferrate | Ferrate loosely refers to a material that can be viewed as containing anionic iron complexes. Examples include tetrachloroferrate ([FeCl4]2−), oxyanions (), tetracarbonylferrate ([Fe(CO)4]2−), the organoferrates. The term ferrate derives . Some ferrates are called super-iron by some and have uses in battery applications and as an oxidizer. It can be used to clean water safely from a wide range of pollutants, including viruses, microbes, arsenic, sulfur-containing compounds, cyanides and other nitrogen-containing contaminants, many organic compounds, and algae.
References
Iron compounds
Anions
Ferrates | Ferrate | Physics,Chemistry | 160 |
5,197,604 | https://en.wikipedia.org/wiki/Bamberger%20triazine%20synthesis | The Bamberger triazine synthesis in organic chemistry is a classic organic synthesis of a triazine first reported by Eugen Bamberger in 1892.
The reactants are an aryl diazonium salt, obtained from reaction of the corresponding aniline with sodium nitrite and hydrochloric acid, and the hydrazone of pyruvic acid. The azo intermediate converts to the benzotriazine in the third step with sulfuric acid in acetic acid.
See also
From the same chemist: the Bamberger rearrangement
References
Nitrogen heterocycle forming reactions
Heterocycle forming reactions
Name reactions
Benzotriazines | Bamberger triazine synthesis | Chemistry | 129 |
36,956,765 | https://en.wikipedia.org/wiki/37%20Comae%20Berenices | 37 Comae Berenices is a variable star system located around 690 light years away from the Sun in the northern constellation of Coma Berenices. It has the variable star designation LU Comae Berenices. 37 Comae Berenices was a later Flamsteed designation of 13 Canum Venaticorum. This object is visible to the naked eye as a faint, yellow-hued star with a baseline apparent visual magnitude of 4.88. It is drifting closer to the Earth with a heliocentric radial velocity of −14 km/s.
Tokovinin (2008) catalogued this as a wide triple star system. The primary component is an aging giant star, currently in the Hertzsprung gap, with a stellar classification of . It is a weak G-band star, a luminous giant with a carbon abundance about a factor of five lower than is typical for such stars. It is a variable star, most likely of the RS CVn type, with an amplitude of 0.15 in magnitude, and it displays magnetic activity. It has 5.25 times the mass of the Sun and, having exhausted the supply of hydrogen at its core, has expanded to 38 times the Sun's radius.
References
G-type giants
RS Canum Venaticorum variables
Triple star systems
Coma Berenices
BD+31 2434
Comae Berenices, 37
112989
063462
4924
Comae Berenices, LU | 37 Comae Berenices | Astronomy | 302 |
77,018,809 | https://en.wikipedia.org/wiki/Transition%20metal%20complexes%20of%20phosphine%20oxides | Transition metal complexes of phosphine oxides are coordination complex containing one or more phosphine oxide ligands. Many phosphine oxides exist and most behave as hard Lewis bases. Almost invariably, phosphine oxides bind metals by formation of M-O bonds.
Structure
The structure of the phosphine oxide is not strongly perturbed by coordination. The geometry at phosphorus remains tetrahedral. The P-O distance elongates by ca. 2%. In triphenylphosphine oxide, the P-O distance is 1.48 Å. In NiCl2[OP(C6H5)3]2, the distance is 1.51 Å (see figure). A similar elongation of the P-O bond is seen in cis-WCl4(OPPh3)2. The trend is consistent with the stabilization of the ionic resonance structure upon complexation.
Examples
Typically, complexes are derived from hard metal centers. Examples include cis-WCl4(OPPh3)2 and NbOCl3(OPPh3)2. Trialkylphosphine oxides are more basic (better ligands) than triarylphosphine oxides. One such complex is FeCl2(OPMe3)2 (Me = CH3).
Synthesis and reactions
Most complexes of phosphine oxides are prepared by treatment of a labile metal complex with preformed phosphine oxide. In some cases, the phosphine oxide is unintentionally generated by air-oxidation of the parent phosphine ligand.
Since phosphine oxides are weak Lewis bases, they are readily displaced from their metal complexes. This behavior has led to investigation of mixed phosphine-phosphine oxide ligands, which exhibit hemilability. Typical phosphine-phosphine oxide ligands are Ph2P(CH2)nP(O)Ph2 (Ph = C6H5) derived from bis(diphenylphosphino)ethane (n = 2) and bis(diphenylphosphino)methane (n = 1).
In one case, coordination of the oxide of dppe to W(0) results in deoxygenation, giving an oxotungsten complex of dppe.
Secondary phosphine oxides as ligands
Secondary phosphine oxides have the formula R2P(O)H. They tautomerize to small amounts of the hydroxy tautomer R2P–OH. Regardless, the hydroxy tautomer forms a wide variety of complexes with transition metals. In contrast to O-bonded phosphine oxide ligands, the P-bonded phosphine oxides are strong-field ligands. These ligands tend to engage in intramolecular hydrogen bonding. Illustrative is the complex derived from dimethylphosphine oxide (Me = CH3).
The pattern also applies to several phosphorus compounds including phosphorous acid, which forms complexes as P(OH)3. The complex platinum pop is one example.
The Kläui ligand is the anion {(C5H5)Co[(CH3O)2PO]3}−. It is derived from the trimethylphosphite ligand by dealkylation. In this case the "ligand" is a complex of cobalt that also binds to other metals in a tridentate manner.
References
Coordination chemistry
Coordination complexes
Ligands | Transition metal complexes of phosphine oxides | Chemistry | 717 |
13,279,719 | https://en.wikipedia.org/wiki/List%20of%20widget%20toolkits | This article provides a list of widget toolkits (also known as GUI frameworks), used to construct the graphical user interface (GUI) of programs, organized by their relationships with various operating systems.
Low-level widget toolkits
Integrated in the operating system
Mac OS X uses Cocoa. Mac OS 9 and Mac OS X used to use Carbon for 32-bit applications.
The Windows API used in Microsoft Windows. Microsoft had the graphics functions integrated in the kernel until 2006
The Haiku operating system uses an extended and modernised version of the Be API that was used by its predecessor BeOS. Haiku is expected to drop binary and source compatibility with BeOS at some future time, which will result in a Haiku API.
As a separate layer on top of the operating system
The X Window System contains primitive building blocks, called Xt or "Intrinsics", but they are mostly only used by older toolkits such as: OLIT, Motif and Xaw. Most contemporary toolkits, such as GTK or Qt, bypass them and use Xlib or XCB directly.
The Amiga OS Intuition was formerly present in the Amiga Kickstart ROM and was integrated with a medium-high-level widget library that invoked Workbench, the Amiga's native GUI. Since Amiga OS 2.0, Intuition.library became disk-based and object-oriented. Workbench.library and Icon.library also became disk-based, and could be replaced with similar third-party solutions.
Since 2005, Microsoft has taken the graphics system out of Windows' kernel.
High-level widget toolkits
OS dependent
On Amiga
BOOPSI (Basic Object Oriented Programming System for Intuition) was introduced with OS 2.0 and enhanced Intuition with a system of classes in which every class represents a single widget or describes an interface event. This led to an evolution in which third-party developers each realised their own personal systems of classes.
MUI: object-oriented GUI toolkit and the official toolkit for MorphOS.
ReAction: object-oriented GUI toolkit and the official toolkit for AmigaOS.
Zune (GUI toolkit) is an open source clone of MUI and the official toolkit for AROS.
On macOS
Cocoa - used in macOS (see also Aqua). As a result of macOS' OPENSTEP lineage, Cocoa also supports Windows, although it is not publicly advertised as such. It is generally unavailable for use by third-party developers. An outdated and feature-limited open-source subset of Cocoa exists within the WebKit project, however; it is used to render Aqua natively in Safari (web browser) for Windows. Apple's iTunes, which supports both GDI and WPF, includes a mostly complete binary version of the framework as "Apple Application Support".
Carbon - the deprecated framework used in Mac OS X to port “classic” Mac applications and software to the Mac OS X.
MacApp, the framework for the Classic Mac OS by Apple.
PowerPlant, the framework for the Classic Mac OS by Metrowerks.
On Microsoft Windows
The Microsoft Foundation Classes (MFC), a C++ wrapper around the Windows API.
The Windows Template Library (WTL), a template-based extension to ATL and a replacement of MFC
The Object Windows Library (OWL), Borland's alternative to MFC.
The Visual Component Library (VCL) is Embarcadero's toolkit used in C++Builder and Delphi. It wraps the native Windows controls, providing object-oriented classes and visual design, although also allowing access to the underlying handles and other WinAPI details if required. It was originally implemented as a successor to OWL, skipping the OWL/MFC style of UI creation, which by the mid-nineties was a dated design model.
Windows Forms (WinForms) is Microsoft's .NET set of classes that handle GUI controls. In the cross-platform Mono implementation, it is an independent toolkit, implemented entirely in managed code (not wrapping the Windows API, which doesn't exist on other platforms). WinForms' design closely mimics that of the VCL.
The Windows Presentation Foundation (WPF) is the graphical subsystem of the .NET Framework 3.0. User interfaces can be created in WPF using any of the CLR languages (e.g. C#) or with the XML-based language XAML. Microsoft Expression Blend is a visual GUI builder for WPF.
The Windows UI Library (WinUI) is the graphical subsystem of universal apps. User interfaces can be created in WinUI using C++ or any of the .NET languages (e.g., C#) or with the XML-based language XAML. Microsoft Expression Blend is a visual GUI builder that supports WinUI.
On Unix, under the X Window System
Note that the X Window System was originally primarily for Unix-like operating systems, but it now runs on Microsoft Windows as well using, for example, Cygwin, so some or all of these toolkits can also be used under Windows.
Motif used in the Common Desktop Environment.
LessTif, an open source (LGPL) implementation of Motif.
MoOLIT, a bridge between the look-and-feel of OPEN LOOK and Motif
OLIT, an Xt-based OPEN LOOK intrinsics toolkit
Xaw, the Project Athena widget set for the X Window System.
XView, a SunView compatible OPEN LOOK toolkit
Cross-platform
Based on C (including bindings to other languages)
Elementary, open source (LGPL), a part of the Enlightenment Foundation Libraries.
GTK, open source (LGPL), primarily for the X Window System, ported to and emulated under other platforms; used in the GNOME, Rox, LXDE and Xfce desktop environments. The Windows port has support for native widgets.
IUP, open source (MIT), a minimalist GUI toolkit in ANSI C for Windows, UNIX and Linux.
Tk, open source (BSD-style), a widget set accessed from Tcl and other high-level script languages (interfaced in Python as Tkinter).
XForms, the Forms Library for X
XVT, Extensible Virtual Toolkit
Based on C++ (including bindings to other languages)
CEGUI, open source (MIT License), cross-platform widget toolkit designed for game development, but also usable for applications and tool development. Supports multiple renderers and optional libraries.
FLTK, open source (LGPL), cross-platform toolkit designed to be small and fast.
FOX toolkit, open source (LGPL), cross-platform toolkit.
GLUI, a very small toolkit written with the GLUT library.
gtkmm, C++ interface for GTK
Juce provides GUI and widget set with the same look and feel in Microsoft Windows, X Windows Systems, macOS and Android. Rendering can be based on OpenGL.
Qt, proprietary and open source (GPL, LGPL), available under Unix and Linux (with X11 or Wayland), Windows (Desktop, CE and Phone 8), macOS, iOS, Android, BlackBerry 10 and embedded Linux; used in the KDE, Trinity, LXQt, and Lumina desktop environments, as well as in Ubuntu's Unity shell.
Rogue Wave Views (formerly ILOG Views) provides GUI and graphic library for Windows and the main X11 platforms.
TnFOX, open source (LGPL), a portability toolkit.
U++ is an Open-source application framework bundled with an IDE (BSD license), mainly created for Win32 and Unix-like operating system (X11) but now works with almost any operating systems.
wxWidgets (formerly wxWindows), open source (relaxed LGPL), abstract toolkits across several platforms for C++, Python, Perl, Ruby and Haskell.
Zinc Application Framework, cross-platform widget toolkit.
Based on Python
Tkinter, open source (BSD) is a Python binding to the Tk GUI toolkit. Tkinter is included with standard GNU/Linux, Microsoft Windows and macOS installs of Python; a minimal example follows this list.
Kivy, open source (MIT) is a modern library for rapid development of applications that make use of innovative user interfaces, such as multi-touch apps. Fully written in Python with additional speed ups in Cython.
PySide, open source (LGPL) is a Python binding of the cross-platform GUI toolkit Qt developed by The Qt Company, as part of the Qt for Python project.
PyQt, open source (GPL and commercial) is another Python binding of the cross-platform GUI toolkit Qt developed by Riverbank Computing.
PyGTK, open source (LGPL) is a set of Python wrappers for the GTK graphical user interface library.
wxPython, open source (wxWindows License) is a wrapper for the cross-platform GUI API wxWidgets for the Python programming language.
Pyjs, open source (Apache License 2.0) is a rich web application framework for developing client-side web and desktop applications, it is a port of Google Web Toolkit (GWT) from Java.
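As a minimal illustration of the Tkinter binding mentioned above, and of the create-widgets, arrange, run-event-loop pattern common to most toolkits in this list, the following runs with a standard Python install:

```python
# A minimal Tkinter window: create widgets, lay them out, enter the event loop.
import tkinter as tk

root = tk.Tk()                        # top-level window provided by the toolkit
root.title("Hello, widgets")
label = tk.Label(root, text="Hello from Tkinter")
button = tk.Button(root, text="Quit", command=root.destroy)
label.pack(padx=20, pady=10)          # the pack geometry manager places widgets
button.pack(pady=(0, 10))
root.mainloop()                       # hand control to the toolkit's event loop
```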
Based on Flash
Adobe Flash allows creating widgets running in most web browsers and in several mobile phones.
Adobe Flex provides high-level widgets for building web user interfaces. Flash widgets can be used in Flex.
Flash and Flex widgets will run without a web browser in the Adobe AIR runtime environment.
Based on Go
Fyne, open source (BSD) is inspired by the principles of Material Design to create applications that look and behave consistently across Windows, macOS, Linux, BSD, Android and iOS.
Based on XML
GladeXML with GTK
XAML with Silverlight or Moonlight
XUL
Based on JavaScript
General
jQuery UI
MooTools
Qooxdoo Could be understood as Qt for the Web
Script.aculo.us
RIAs
Adobe AIR
Dojo Toolkit
Sencha (formerly Ext JS)
Telerik Kendo UI
Webix
WinJS
React
Full-stack framework
Echo3
SproutCore
Telerik UI for ASP/PHP/JSP/Silverlight
Vaadin - Java
ZK - A Java Web framework for building rich Ajax and mobile applications
Resource-based
Google Web Toolkit (GWT)
Pyjs
FBML Facebook Markup Language
No longer developed
YUI (Yahoo! User Interface Library)
Based on SVG
Raphaël is a JavaScript toolkit for SVG interfaces and animations
Based on C#
Gtk#, C# wrappers around the underlying GTK and GNOME libraries, written in C and available on Linux, MacOS and Windows.
QtSharp, C# wrappers around the Qt widget toolkit, which is itself based-on the C++ language.
Windows Forms. There is an original Microsoft's implementation that is a wrapper around the Windows API and runs on windows, and Mono's alternative implementation that is cross platform.
Based on Java
The Abstract Window Toolkit (AWT) is Sun Microsystems' original widget toolkit for Java applications. It typically uses another toolkit on each platform on which it runs.
Swing is a richer widget toolkit supported since J2SE 1.2 as a replacement for AWT widgets. Swing is a lightweight toolkit, meaning it does not rely on native widgets.
Apache Pivot is an open-source platform for building rich web applications in Java or any JVM-compatible language, and relies on the WTK widget toolkit.
JavaFX and FXML.
The Standard Widget Toolkit (SWT) is a native widget toolkit for Java that was developed as part of the Eclipse project. SWT uses a standard toolkit for the running platform (such as the Windows API, macOS Cocoa, or GTK) underneath.
Codename One originally designed as a cross platform mobile toolkit it later expanded to support desktop applications both through JavaSE and via a JavaScript pipeline through browsers
java-gnome provides bindings to the GTK toolkit and other libraries of the GNOME desktop environment
Qt Jambi, the official Java binding to Qt from Trolltech. Commercial support and development have stopped.
Based on Object Pascal
FireMonkey or FMX is a cross-platform widget and graphics library distributed with Delphi and C++Builder since version XE2 in 2011. It has bindings for C++ through C++Builder, and supports Windows, macOS, iOS, Android, and most recently Linux. FireMonkey supports platform-native widgets, such as a native edit control, and custom widgets that are styled to look native on a target operating system. Its graphics are GPU-accelerated and it supports styling, and mixing its own implementation controls with native system controls, which lets apps use native behaviour where it's important (for example, for IME text input.)
IP Pascal uses a graphics library built on top of standard language constructs. Also unusual for being a procedural toolkit that is cross-platform (no callbacks or other tricks), and is completely upward compatible with standard serial input and output paradigms. Completely standard programs with serial output can be run and extended with graphical constructs.
Lazarus LCL (for Pascal, Object Pascal and Delphi via Free Pascal compiler), a class library wrapping GTK+ 1.2–2.x, and the Windows API (Carbon, Windows CE and Qt4 support are all in development).
fpGUI is created with the Free Pascal compiler. It doesn't rely on any large 3rdParty libraries and currently runs on Linux, Windows, Windows CE, and Mac (via X11). A Carbon (macOS) port is underway.
CLX (Component Library for Cross-platform) was used with Borland's (now Embarcadero's) Delphi, C++ Builder, and Kylix, for producing cross-platform applications between Windows and Linux. It was based on Qt, wrapped in such a way that its programming interface was similar to that of the VCL toolkit. It is no longer maintained and distributed, and has been replaced with FireMonkey, a newer toolkit also supporting more platforms, since 2011.
Based on Objective-C
GNUstep and OpenStep
Cocoa and Cocoa Touch
Based on Dart
Flutter (software) is an open-source and cross platform framework created by Google.
Based on Swift
Cocoa Touch is a framework created by Apple to build applications for iOS, iPadOS and tvOS.
Based on Ruby
Shoes (GUI toolkit) is a cross platform framework for graphical user interface development.
Not yet categorised
WINGs
LiveCode
Wt
Immediate Mode GUI
Comparison of widget toolkits
See also
List of platform-independent GUI libraries
References
Widget toolkits | List of widget toolkits | Technology | 3,137 |
14,596,158 | https://en.wikipedia.org/wiki/HD%20118203 | HD 118203 is a star with an orbiting exoplanet located in the northern circumpolar constellation of Ursa Major. It has the proper name Liesma, which means flame, and it is the name of a character from the Latvian poem Staburags un Liesma (Staburags and Liesma). The name was selected in the NameExoWorlds campaign by Latvia, during the 100th anniversary of the IAU.
The apparent visual magnitude of HD 118203 is 8.06, which means it is invisible to the naked eye but it can be seen using binoculars or a telescope. Based on parallax measurements, it is located at a distance of 300 light years from the Sun. The star is drifting closer with a radial velocity of −29 km/s. Based on its position and space velocity this is most likely (97% chance) an older thin disk star. An exoplanet has been detected in a close orbit around the star.
The spectrum of HD 118203 matches a G-type main-sequence star with a class of G0V. It has a low level of chromospheric activity, which means a low level of radial velocity jitter for planet detection purposes. The star has 1.23 times the mass of the Sun and double the Sun's radius. It is around 5.4 billion years old and is spinning with a projected rotational velocity of 7.0 km/s. HD 118203 is radiating 3.8 times the luminosity of the Sun from its photosphere at an effective temperature of 5,741 K.
Planetary system
In 2006, a hot Jupiter, HD 118203 b, was reported in an eccentric orbit around this star. It was discovered using the radial velocity method based on observation of high-metallicity stars begun in 2004. In 2020, it was found that this is a transiting planet, which allowed the mass and radius of the body to be determined. This exoplanet has more than double the mass of Jupiter and is 13% greater in radius. The fact that the parent star is among the brighter known planet hosts (as of 2020) makes it an interesting object for further study. This planet received the proper name Staburags in the 2019 NameExoWorlds campaign.
In 2024, the star HD 118203 was found to display variability with a period matching that of planet b's orbit, suggesting magnetic interaction between the star and planet.
Also in 2024, a second massive planet was discovered using radial velocity observations as well as Hipparcos and Gaia astrometry. HD 118203 c is about 11 times the mass of Jupiter and takes 14 years to complete an orbit around the star. Like planet b, the orbit of planet c is close to edge-on, suggesting an aligned planetary system. The presence of any additional transiting planets at least twice the size of Earth and with periods less than 100 days was ruled out by the observations.
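As a rough cross-check of the reported orbit, Kepler's third law in solar units (a³ ≈ M★P², neglecting the planet's own mass) gives the scale of planet c's orbit from the figures above:

```python
# Semi-major axis from Kepler's third law, a^3 = (M/Msun) * (P/yr)^2.
M_star = 1.23  # stellar mass in solar masses (from the text)
P = 14.0       # orbital period of planet c in years (from the text)

a = (M_star * P**2) ** (1.0 / 3.0)
print(f"semi-major axis of planet c ~ {a:.1f} AU")  # ~6.2 AU
```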
See also
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with two confirmed planets
Ursa Major
BD+54 1609
118203
066192
1271
Liesma | HD 118203 | Astronomy | 653 |
3,005,753 | https://en.wikipedia.org/wiki/Scottish%20units | Scottish or Scots units of measurement are the weights and measures peculiar to Scotland which were nominally replaced by English units in 1685 but continued to be used in unofficial contexts until at least the late 18th century. The system was based on the ell (length), stone (mass), and boll and firlot (volume). This official system coexisted with local variants, especially for the measurement of land area.
The system is said to have been introduced by David I of Scotland (1124–53), although there are no surviving records until the 15th century when the system was already in normal use. Standard measures and weights were kept in each burgh, and these were periodically compared against one another at "assizes of measures", often during the early years of the reign of a new monarch. Nevertheless, there was considerable local variation in many of the units, and the units of dry measure steadily increased in size from 1400 to 1700.
The Scots units of length were technically replaced by the English system by an Act of the Parliament of Scotland in 1685, and the other units by the Treaty of Union with England in 1706. However, many continued to be used locally during the 18th and 19th centuries. The introduction of the Imperial system by the Weights and Measures Act 1824 (5 Geo. 4. c. 74) saw the end of any formal use in trade and commerce, although some informal use as customary units continued into the 20th century. "Scotch measure" or "Cunningham measure" was brought to parts of Ulster in Ireland by Ulster Scots settlers, and used into the mid-19th century.
Length
Scottish inch The Scottish inch was 25.44 mm, almost the same as the English (and modern international) inch (25.40 mm). A fraudulent smaller inch of 1⁄42 ell (22.4 mm) is also recorded.
foot (Scots: ) 12 inches (305.3 mm; compare with the English foot of 304.8 mm).
yard () 36 inches (915.9 mm; compare with the English yard of 914.4 mm). Rarely used except with English units, although it appears in an Act of Parliament from 1432: "The king's officer, as is foresaid, shall have a horn, and each one a red wand of three-quarters of a yard at least."
Scots ell The ell () was the basic unit of length, equal to 37 inches (941.3 mm). The "Barony ell" of 42 inches (1069 mm) was used as the basis for land measurement in the Four Towns area near Lochmaben, Dumfriesshire.
fall () 6 ells, or 222 inches (5.648 m). Identical to the Scots rod and ("rope").
Scots mile 320 falls or 5920 feet (1807 metres, compare with the English mile of 5280 English feet or approximately 1609 metres), but varied from place to place. Obsolete by the 19th century.
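Since each unit above is a fixed multiple of the Scots inch, the metric equivalents follow mechanically; a small sketch using the 25.44 mm inch quoted above:

```python
# Metric values of the Scots length units, each a multiple of the Scots inch.
SCOTS_INCH_MM = 25.44

units_in_scots_inches = {
    "inch": 1,
    "foot": 12,
    "yard": 36,
    "ell": 37,
    "fall": 6 * 37,        # 6 ells = 222 inches
    "mile": 320 * 6 * 37,  # 320 falls
}
for name, inches in units_in_scots_inches.items():
    print(f"1 Scots {name} = {inches * SCOTS_INCH_MM / 1000:.4g} m")
# ell -> 0.9413 m, fall -> 5.648 m, mile -> 1807 m, matching the text
```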
Area
A number of conflicting systems were used for area, sometimes bearing the same names in different regions but working at different conversion rates. Because fertility varied widely, in many areas what land would produce was considered a more practical measure than its physical extent, and such production-based units are listed in their own sections below. Please see the individual articles for more specific information.
Area by size
For information on the squared units, please see the appropriate articles in the length section
square inch
square ell
square fall ()
Scots rood ()
Scots acre
Area by production
Eastern Scotland:
oxgang () The area an ox could plough in a year (around 20 acres).
ploughgate () 8 oxgangs
davoch (, or ) 4 ploughgates
Area by taxation/rent
In western Scotland, including Galloway:
markland (, ) 8 ouncelands (varied)
ounceland (, ) 20 pennylands
pennyland () basic unit; sub-divided into halfpenny-land and farthing-land.
Also:
quarterland () Of variable value: one-quarter of a markland.
groatland (Scottish Gaelic, ) Land valued at a groat i.e. four pence
Volume
Dry volume
Dry volume measures were slightly different for various types of grain, but often bore the same name.
chalder () Normally understood as 16 bolls (being just under 12 Winchester quarters)
boll (, or ) Equal to 4 firlots.
firlot
peck
lippie, or forpet
These volume measurements were fixed at slightly different sizes at different times. A unified weights and measures system is attributed to David I – though the first written records of this are from the 14th century. The Assize of 1426 made changes to these measures. Then the Assize of 1457 was followed by four major revisions. These involved increases in the size of the firlot, the basic unit of grain measure, and occurred in: c.1500, 1555 (modified in 1563), 1587 and 1618. This last date gave a fixed Scottish system which only changed with the introduction of English measures. An increase in the size of the firlot allowed greater taxation to be raised (as each unit collected was bigger).
Superimposed on this chronological complexity was the difference between the "legal" measures established by the assizes, and the actual measures used in the markets and everyday trade. The "trading" measure could be one sixteenth larger than the "legal" boll, and the "customary" boll a further one sixteenth larger.
Weight equivalents of one boll are given in a trade dictionary of 1863 as follows:
Flour 140 pounds;
Peas or beans 280 pounds;
Oats 264 pounds;
Barley 320 pounds;
Oatmeal 140 pounds.
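Because the boll was a volume measure whose weight depended on the commodity, tables such as this translate naturally into a simple lookup. A minimal Python sketch (the names and structure are illustrative; the figures are the 1863 values above):
# Weight equivalents of one boll, per the 1863 trade dictionary cited above.
BOLL_WEIGHT_POUNDS = {
    "flour": 140,
    "peas or beans": 280,
    "oats": 264,
    "barley": 320,
    "oatmeal": 140,
}
def bolls_to_pounds(bolls, commodity):
    """Convert a quantity in bolls to pounds for the given commodity."""
    return bolls * BOLL_WEIGHT_POUNDS[commodity]
print(bolls_to_pounds(2, "oats"))  # -> 528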
Fluid volume
The principal fluid measures were the gill, mutchkin, chopin, pint and gallon. The nipperkin was also used, but was perhaps not part of this more formal set.
Weight
Weight was measured according to "troy measure" (Lanark) and "tron measure" (Edinburgh), which were standardised in 1661. In the Troy system these often bore the same name as imperial measures.
drop
ounce
pound
stone
Various local measures also existed, often using local weighing stones.
See also the weight equivalents of the boll under the dry volume section, above.
See also
Units of measurement
Systems of measurement
History of measurement
Scottish coinage
Scottish pronunciation
Tron
Bibliography
Collins Encyclopedia of Scotland
Weights and Measures, by D. Richard Torrance, SAFHS, Edinburgh, 1996 (the book focuses exclusively on Scottish weights and measures)
Scottish National Dictionary and Dictionary of the Older Scottish Tongue
Weights and Measures in Scotland: A European Perspective, by R. D. Connor et al., National Museum of Scotland and Tuckwell Press, NMSE Publishing, 2004
References
External links
Scottish Weights and Measures on Scottish Archive network (SCAN)
Scottish
Medieval history of Scotland
Early modern history of Scotland
Systems of units
Units of measurement by country | Scottish units | Mathematics | 1,414 |
498,206 | https://en.wikipedia.org/wiki/Dental%20material | Dental products are specially fabricated materials, designed for use in dentistry. There are many different types of dental products, and their characteristics vary according to their intended purpose.
Temporary dressings
A temporary dressing is a dental filling which is not intended to last in the long term. They are interim materials which may have therapeutic properties. A common use of temporary dressing occurs if root canal therapy is carried out over more than one appointment. In between each visit, the pulp canal system must be protected from contamination from the oral cavity, and a temporary filling is placed in the access cavity. Examples include:
Zinc oxide eugenol—bactericidal, cheap and easy to remove. Eugenol is derived from oil of cloves, and has an obtundant effect on the tooth, decreasing toothache. It is a suitable temporary material provided there are no biting forces on it. It is contraindicated if the final restorative material is composite, because eugenol adversely affects the bond/polymerization process; also, when applied directly to the pulp tissue, it can produce chronic inflammation and result in pulp necrosis. Brands include Kalzinol and Sedanol.
Cements
Dental cements are used most often to bond indirect restorations such as crowns to the natural tooth surface. Examples include:
Zinc oxide cement—self-setting, hardening on contact with saliva. Example brands: Cavit, Coltosol.
Zinc phosphate cement
Zinc polycarboxylate cement—adheres to enamel and dentin. Example brand: PolyF.
Glass ionomer cement
Resin-based cement
Copper-based cement
Impression materials
Dental impressions are negative imprints of teeth and oral soft tissues from which a positive representation can be cast. They are used in prosthodontics (to make dentures), orthodontics, restorative dentistry, dental implantology and oral and maxillofacial surgery.
Because patients' soft-tissue undercuts may be shallow or deep, impression materials vary in their rigidity in order to obtain an accurate impression. Rigid materials are used with patients with shallow undercuts, while elastic materials are used with patients with deep undercuts, as the material must be flexible enough to reach the end-point of the undercut.
Impression materials are designed to be liquid or semi-solid when first mixed, then set hard in a few minutes, leaving imprints of oral structures.
Common dental impression materials include sodium alginate, polyether and silicones. Historically, plaster of Paris, zinc oxide eugenol and agar were used.
Lining materials
Dental lining materials are used during restorations of large cavities, and are placed between the remaining tooth structure and the restoration material. The purpose of this is to protect the dentinal tubules and the sensitive pulp, forming a barrier-like structure. After drilling the caries out of the tooth, the dentist applies a thin layer (approximately 0.5 mm) to the base of the cavity, followed by light curing. Another layer may be applied if the cavity is very large and deep.
There are many functions to dental lining materials, some of which are listed below:
Lining materials protect the weak tooth from post-operative hypersensitivity, reducing patient discomfort and allowing the tooth to heal at a faster rate after the procedure.
Some dental restorative materials, such as acrylic monomers in resin-based materials and phosphoric acid in silicate materials, may pose toxic and irritable effects to the pulp. Lining materials protect the tooth from such irritants.
Lining materials serve as an insulating layer to the tooth pulp from sudden changes in temperature when the patient takes hot or cold food, protecting them from potential pain resulting from thermal conductivity.
Lining materials are electrically insulating, preventing galvanic corrosion where two dissimilar metals (e.g. gold and amalgam) are placed next to each other.
Types
Calcium hydroxide
Calcium hydroxide has a relatively low compressive strength and a viscous consistency, making it difficult to apply to cavities in thick sections. A common technique to overcome this issue is to apply a thin sub-lining of calcium hydroxide, then build up with zinc phosphate prior to amalgam condensation. Calcium hydroxide leaching from the cement generates a relatively high-pH environment in the surrounding area, making it bactericidal.
It also has a unique effect of initiating calcification and stimulating the formation of secondary dentine, due to an irritation effect of the pulp tissues by the cement.
Calcium hydroxide is radio-opaque and acts as a good thermal and electrical insulation. However, due to its low compressive strength it is unable to withstand amalgam packing; a strong cement base material should be placed above it to counter this.
Calcium silicate-based liners have become alternatives to calcium hydroxide and are preferred by practitioners for their bioactive and sealing properties; the material triggers a biological response and results in formation of bonding with the tissue. They are commonly used as pulp capping agents and lining materials for silicate and resin-based filling materials.
It is usually supplied as two pastes, one a glycol salicylate and the other containing zinc oxide with calcium hydroxide; on mixing, a chelate compound is formed. Light-activated versions are also available; these contain polymerization activators, hydroxyethyl methacrylate and dimethacrylate, which on light activation undergo a polymerization reaction of a modified methacrylate monomer.
Polycarboxylate cement
Polycarboxylate cement has the compressive strength to resist amalgam condensation. It is acidic, but less acidic than phosphate cements due to it having a higher molecular weight and polyacrylic acid being weaker than phosphoric acid. It forms a strong bond with dentine and enamel, allowing it to form a coronal seal. In addition, it is an electrical and thermal insulator while also releasing fluoride, rendering it bacteriostatic. It is also radio-opaque, making it an excellent lining material.
Care has to be taken in handling such material, as it has a strong bond with stainless steel instruments once it sets.
Polycarboxylate cement is commonly used as a luting agent or as a cavity base material. However, it tends to be rubbery during its setting reaction and adheres to stainless steel instruments, so most operators prefer not to use it in deep cavities.
It is usually supplied as a powder containing zinc oxide and a liquid containing aqueous polyacrylic acid. The setting reaction is an acid-base reaction in which zinc oxide reacts with the acid groups of the polyacid. This forms a reaction product of unreacted zinc oxide cores bound by a salt matrix, with polyacrylic acid chains cross-linked by zinc ions.
Glass ionomer
Glass ionomer (GI) has the strongest compressive and tensile strength of all linings, so it can withstand amalgam condensation in high stress bearing areas such as class II cavities. GI is used as a lining material as it is very compatible with most restorative materials, insulates thermally and electrically, and adheres to enamel and dentine. GI lining contains glass of smaller particle sizes compared to its adhesive restorative mix, to allow formation of a thinner film. Some variations are also radiopaque, making them good for X-ray cavity detection. In addition, GI is bacteriostatic due to its fluoride release from un-reacted glass cores.
GIs are usually used as a lining material for composite resins or as luting agents for orthodontic bands.
The reaction is an acid-base reaction between calcium-aluminum-silicate glass powder and polyacrylic acid. They come as a powder and liquid which are mixed on a pad, or in single-use capsules. Resin-modified GIs contain a photoinitiator (usually camphorquinone) and an amine, and are light-cured with an LED light-curing unit. Setting takes place by a combination of acid-base reaction and chemically activated polymerization.
Zinc oxide eugenol
Zinc oxide eugenol has the lowest compressive and tensile strength of the liners, so its use is limited to small or non-stress-bearing areas such as Class V cavities. This cavity lining is often used with a high-strength base to provide strength, rigidity and thermal insulation. Zinc oxide eugenol can be used as a lining in deep cavities without causing harm to the pulp, due to its obtundant effect on the pulp as well as its bactericidal properties due to zinc. However, eugenol may have an effect on resin-based filling materials, as it interferes with polymerization and occasionally causes discoloration; caution should therefore be exercised when using the two in tandem. It is also radio-opaque, allowing fillings to be visible on X-rays.
Zinc oxide eugenol is usually used as a temporary filling/luting agent, since its low compressive strength makes it easy to remove, or as a lining for amalgam, as it is incompatible with composite resins.
It is supplied as a two-paste system: equal lengths of the two pastes are dispensed onto a paper pad and mixed.
Restorative materials
Dental restorative materials are used to replace tooth structure loss, usually due to dental caries (cavities), but also tooth wear and dental trauma. On other occasions, such materials may be used for cosmetic purposes to alter the appearance of an individual's teeth.
There are many challenges for the physical properties of the ideal dental restorative material. The ideal material would be identical to natural tooth structure in strength, adherence, and appearance. The properties of such material can be divided into four categories: physical properties, biocompatibility, aesthetics and application.
Physical properties of good restorative materials include low thermal conductivity and expansion, resistance to different categories of forces and wear such as attrition and abrasion, and resistance to chemical erosion. There must also be good bonding strength to the tooth. Everyday masticatory forces and conditions must be withstood without material fatigue.
Biocompatibility refers to how well the material coexists with the biological equilibrium of the tooth and body systems. Since fillings are in close contact with mucosa, tooth, and pulp, biocompatibility is very important. Common problems with some of the current dental materials include chemical leakage from the material, pulpal irritation and, less commonly, allergic reactions. Some of the byproducts of the chemical reactions during different stages of material hardening need to be considered.
Radiopacity in dental materials is an important property that allows for distinguishing restorations from teeth and surrounding structures, assessing the absorption of materials into bone structure, and detecting cement dissolution or other failures that could cause harm to the patient. Cements, composites, endodontic sealers, bone grafts, and acrylic resins all benefit from the addition of radiopaque materials. Examples of these materials include zinc oxide, zirconium dioxide, titanium dioxide, barium sulfate, and ytterbium(III) fluoride.
Ideally, filling materials should match the surrounding tooth structure in shade, translucency, and texture.
Dental operators require materials that are easy to manipulate and shape, where the chemistry of any reactions that need to occur are predictable or controllable.
Direct restorative materials
Direct restorations are ones which are placed directly into a cavity on a tooth, and shaped to fit. The chemistry of the setting reaction for direct restorative materials is designed to be more biologically compatible: heat and byproducts generated cannot damage the tooth or patient, since the reaction needs to take place while in contact with the tooth during restoration. This ultimately limits the strength of the materials, since harder materials need more energy to manipulate. The type of filling material used has a minor effect on how long fillings last. The majority of clinical studies indicate annual failure rates (AFRs) between 1% and 3% for tooth-coloured fillings on back teeth. Root canal (endodontically) treated teeth have AFRs between 2% and 12%. The main reasons for failure are cavities that occur around the filling and fracture of the natural tooth; these are related to personal caries risk and factors such as tooth grinding (bruxism).
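Annual failure rates compound over time. As an illustrative calculation (assuming, for simplicity, a constant rate), the expected fraction of fillings surviving after $n$ years is $S(n) = (1 - \mathrm{AFR})^n$; a 2% AFR therefore implies $0.98^{10} \approx 0.82$, i.e. roughly 82% of such fillings still in service at ten years.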
Amalgam
Amalgam is a metallic filling material composed of a mixture of mercury (from 43% to 54%) and a powdered alloy made mostly of silver, tin, zinc and copper, commonly called the amalgam alloy. Amalgam does not adhere to tooth structure without the aid of cements or the use of techniques which lock in the filling, using the same principles as a dovetail joint.
Amalgam is still used extensively in many parts of the world because of its cost effectiveness, superior strength and longevity. However, the metallic colour is not aesthetically pleasing and tooth coloured alternatives are continually emerging with increasingly comparable properties. Due to the known toxicity of mercury, there is some controversy about the use of amalgams. The Swedish government banned the use of mercury amalgam in June 2009. Research has shown that, while amalgam use is controversial and may increase mercury levels in the human body, these levels are below safety threshold levels established by the World Health Organization and the U.S. Environmental Protection Agency. However, there are certain subpopulations who, due to inherited genetic variabilities, are more sensitive to mercury than these threshold levels. They may experience adverse effects caused by amalgam restoration, including neural defects caused by impaired neurotransmitter processing.
Composite resin
Composite resin fillings (also called white fillings) are a mixture of nanoparticles or powdered glass and plastic resin, and can be made to resemble the appearance of the natural tooth. Although cosmetically superior to amalgam fillings, composite resin fillings are usually more expensive. Bis-GMA based resins contain Bisphenol A, a known endocrine disrupter chemical, and may contribute to the development of breast cancer. However, there is no added risk of kidney or endocrine injury in choosing composite restorations over amalgams. PEX-based materials do not contain Bisphenol A and are the least cytotoxic material available.
Most modern composite resins are light-cured photopolymers, meaning that they harden with light exposure. They can then be polished to achieve maximum aesthetic results. Composite resins experience a very small amount of shrinkage upon curing, causing the material to pull away from the walls of the cavity preparation. This makes the tooth slightly more vulnerable to microleakage and recurrent decay. Microleakage can be minimized or eliminated with proper handling techniques and appropriate material selection.
In some circumstances, using composite resin allows less of the tooth structure to be removed compared to other dental materials such as amalgam and indirect methods of restoration. This is because composite resins bind to enamel (and to dentin too, although not as well) via a micromechanical bond. As conservation of tooth structure is a key principle of restorative dentistry, many dentists prefer placing materials like composite instead of amalgam fillings whenever possible.
Generally, composite fillings are used to fill a carious lesion involving highly visible areas (such as the central incisors or any other teeth that can be seen when smiling) or when conservation of tooth structure is a top priority.
The bond of composite resin to tooth is especially affected by moisture contamination and the cleanliness of the prepared surface. Other materials can be selected when restoring teeth where moisture control techniques are not effective.
Glass ionomer cement
The concept of using "smart" materials in dentistry has attracted a lot of attention in recent years. Conventional glass ionomer cements (GICs) have many applications in dentistry. They are biocompatible with the dental pulp to some extent. Clinically, the material was initially used as a biomaterial to replace lost osseous tissue in the human body.
GIC fillings are a mixture of glass and an organic acid.
The cavity preparation of a GIC filling is the same as a composite resin. GICs are chemically set via an acid-base reaction. Upon mixing of the material components, no light cure is needed to harden the material once placed in the cavity preparation. After the initial set, GICs still need time to fully set and harden.
An advantage of GICs compared to other restorative materials is that they can be placed in cavities without any need for bonding agents. Another advantage is that they are not subject to shrinkage and microleakage, as the bonding mechanism is an acid-base reaction and not a polymerization reaction. Additionally, GICs contain and release fluoride, which is important to prevent carious lesions. As GICs release their fluoride, they can be "recharged" by the use of fluoride-containing toothpaste; this means they can be used to treat patients at high risk of caries.
Although they are tooth-colored, GICs vary in translucency, and their aesthetic potential is not as great as that of composite resins. Newer formulations that contain light-cured resins can achieve a greater aesthetic result, but do not release fluoride as well as conventional GICs.
The most important disadvantage of GICs is lack of adequate strength and toughness. To improve the mechanical properties of the conventional GIC, resin-modified ionomers have been marketed. GICs are usually weak after setting and are not stable in water; however, they become stronger with the progression of reactions and become more resistant to moisture.
New generations of GICs aim to regenerate tissues; they use bioactive materials in the form of a powder or solution to induce local tissue repair. These materials release chemical agents in the form of dissolved ions or growth factors such as bone morphogenetic protein, which stimulate cells.
GICs are about as expensive as composite resin. The fillings do not wear as well as composite resin fillings, but they are generally considered good materials to use for root caries and for sealants.
Resin modified glass-ionomer cement (RMGIC)
A combination of glass-ionomer and composite resin, these fillings are a mixture of glass, an organic acid, and resin monomers that harden when light-cured (light-activated polymerization in addition to the acid-base reaction of conventional GICs). The cost is similar to composite resin. It holds up better than GIC, but not as well as composite resin, and is not recommended for biting surfaces of adult teeth, or where control of moisture cannot be achieved.
Generally, RMGICs can achieve a better aesthetic result than conventional GICs, but not as good as pure composites.
Compomers
Another combination of composite resin and GIC technology, compomers are essentially made up of filler, dimethacrylate monomer, difunctional resin, photo-activator and initiator, and hydrophilic monomers. The filler decreases the proportion of resin and increases the mechanical strength, as well as improving the material's appearance.
Although compomers have better mechanical and aesthetic properties than RMGIC, they have some disadvantages which limit their applications:
Compomers have weaker wear properties.
They cannot adhere directly to tooth tissue due to the presence of resin, which shrinks on polymerisation; they therefore require bonding materials.
They release low levels of fluoride, so cannot act as a fluoride reservoir.
They have high staining susceptibility; uptake of oral fluid causes them to show staining soon after placement.
Due to their relatively weak mechanical properties, compomers are unsuited to stress-bearing restorations, but can be used in the deciduous dentition, where lower loads are anticipated.
Cermets
Dental cermets, also known as silver cermets, were created to improve the wear resistance and hardness of glass ionomer cements by adding silver. Their other advantages are that they adhere directly to tooth tissue, and are radio-opaque, which helps with identification of secondary caries when future radiographs are taken.
However, cermets have poorer aesthetics, appearing metallic rather than white. They also have a similar compressive strength, flexural strength, and solubility as GICs, some of the main limiting factors for both materials. In addition, their fluoride release is poorer than that of GICs. Clinical studies have shown cermets perform poorly. All these disadvantages led to the decline in the use of this restorative material.
Indirect restorative materials
An indirect restoration is one where the teeth are first prepared, then an impression is taken and sent to a dental technician who fabricates the restoration according to the dentist's prescription.
Porcelain
Porcelain fillings are hard, but can cause wear on opposing teeth. Their hardness and rigidity enable them to resist abrasion forces, and they are aesthetically good as they mimic the appearance of natural teeth. However, they are also brittle and not always recommended for molar fillings. Porcelain materials can be strengthened in several ways: by soaking the fired material in molten salt to allow exchange of sodium and potassium ions at the surface, which creates compressive stresses in the outer layer; by controlling cooling after firing; and by the use of pure alumina inserts, a core of alumina, or alumina powder, which act as crack stoppers and are highly compatible with porcelain.
Dental composite materials
Tooth colored dental composite materials are either used as a direct filling or as the construction material for an indirect inlay. They are usually cured by light.
Nano-ceramic particles
Nano-ceramic particles embedded in a resin matrix are less brittle and therefore less likely to crack, or chip, than all-ceramic indirect fillings. They absorb the shock of chewing more like natural teeth, and more like resin or gold fillings, than do ceramic fillings; at the same time they are more resistant to wear than all-resin indirect fillings. They are available in blocks for use with CAD/CAM systems.
Gold fillings
Gold fillings have excellent durability, wear well, and do not cause excessive wear to the opposing teeth, but they do conduct heat and cold, which can be irritating. There are two categories: cast gold fillings (gold inlays and onlays) made with 14 or 18 kt gold, and gold foil made with pure 24 kt gold that is burnished layer by layer. For years, they have been considered the benchmark of restorative dental materials. However, recent advances in dental porcelains and a consumer focus on aesthetic results have caused the demand for gold fillings to drop. Gold fillings are sometimes quite expensive, but they last a very long time, meaning that gold restorations are less costly and painful in the long run. It is not uncommon for a gold crown to last 30 years.
Other historical fillings
Lead fillings were used in the 18th century, but became unpopular in the 19th century because of their softness. This was before lead poisoning was understood.
According to American Civil War-era dental handbooks, since the early 19th century metallic fillings had been made of lead, gold, tin, platinum, silver, aluminum, or amalgam. A pellet was rolled slightly larger than the cavity, condensed into place with instruments, then shaped and polished in the patient's mouth. The filling was usually left "high", with final condensation—"tamping down"—occurring while the patient chewed food. Gold foil was the most popular filling material during the Civil War. Tin and amalgam were also popular due to lower cost, but were held in lower regard.
One survey of dental practices in the mid-19th century catalogued dental fillings found in the remains of seven Confederate soldiers from the Civil War. They were made of:
Gold foil: preferred because of its durability and safety.
Platinum: rarely used because it was too hard, inflexible and difficult to form into foil.
Aluminum: failed because of its lack of malleability but has been added to some amalgams.
Tin and iron: believed to have been a very popular filling material during the Civil War. Tin foil was recommended when a cheaper material than gold was requested by the patient, but it wore down rapidly; even if it could be replaced cheaply and quickly, there was a concern, specifically from Chapin A. Harris, that it would oxidise in the mouth and cause a recurrence of caries. Due to blackening, tin was only recommended for posterior teeth.
Thorium: the element's radioactivity was unknown at that time, and the dentist probably thought he was working with tin.
Lead and tungsten mixture: probably from shotgun pellets. Lead was rarely used in the 19th century, as it is soft and quickly worn down by mastication, and had known harmful health effects.
Acrylic polymers
Acrylics are used in the fabrication of dentures, artificial teeth, impression trays, maxillofacial / orthodontic appliances and temporary (provisional) restorations. They cannot be used as tooth filling materials because they can lead to pulpitis and periodontitis, as they may generate heat and acids during setting, and in addition they shrink.
Failure of dental restorations
Fillings have a finite lifespan; composites appear to have a higher failure rate than amalgam over five to seven years. How well people keep their teeth clean and avoid cavities is probably a more important factor than the material chosen for the restoration.
Evaluation and regulation of dental materials
The Nordic Institute of Dental Materials (NIOM) performs several tests to evaluate dental products in the Nordic countries. In the European Union, dental materials are classified as medical devices according to the Medical Devices Directive. In the USA, the Food and Drug Administration is the regulatory body for dental products.
References
User Guide of Dental Impression Material: https://www.youtube.com/watch?v=-keGMbCHC2A
Dental Materials Fact Sheet, Dental Board of California, May 2004
Restorative dentistry | Dental material | Physics | 5,432 |
4,202,769 | https://en.wikipedia.org/wiki/Benefield%20Anechoic%20Facility | Benefield Anechoic Facility (BAF) is an anechoic chamber located at the southwest side of the Edwards Air Force Base main base. It is currently the world's largest anechoic chamber. The BAF supports installed systems testing for avionics test programs requiring a large, shielded chamber with radio frequency (RF) absorption capability that simulates free space.
The facility is named after Rockwell test pilot and flight commander Tommie Douglas "Doug" Benefield, who was killed in a crash northeast of Edwards Air Force Base in the desert east of Boron on August 29, 1984 during a USAF B-1 Lancer flight test.
Purpose
The BAF is a ground test facility for investigating and evaluating anomalies associated with electronic warfare systems, avionics, tactical missiles and their host platforms. Single or multiple tactical-sized vehicles, or large vehicles, can be operated in a controlled electromagnetic (EM) environment with emitters on and sensors stimulated while RF signals are recorded and analyzed. The largest platforms tested at the BAF have been the B-52 and C-17 aircraft. The BAF supports testing of other types of systems such as spacecraft, tanks, satellites, air defense systems, drones and armored vehicles.
The BAF equipment generates RF signals with a wide variety of characteristics, simulating red/blue/gray (unfriendly/friendly/unknown) surface-based, sea-based, and airborne systems. With the combination of signals and control functions available, a wide variety of test conditions can be emulated. Many conditions that are not available on outdoor ranges can be easily generated from the aspect of signal density, pulse density and number of simultaneous types.
Through the use of environmental monitoring systems, an independent agency captures, records, and verifies RF generated signals. These systems have the capabilities for real-time and post-test RF signal parameter measurement, instrument display recording, data analysis and test coordination, as well as providing the data for signal verification.
Some aircraft tested at the BAF include:
F-22 Raptor
C-130 Hercules
NC-130H
F-16 Fighting Falcon
B-1 Lancer
X-43A
MH-47 Chinook
V-22 Osprey
KC-46A Tanker
F-15SG Eagle
F-15SA Saudi Advanced Eagle
Special use
In 2003, BMW tested levels of electromagnetic interference on the then-upcoming 2004 models of the 530i and 545i, and the debut model 645i.
References
External links
Edwards AFB homepage
Satellite image on Google Maps
Avionics
Edwards Air Force Base | Benefield Anechoic Facility | Technology | 522 |
65,568,387 | https://en.wikipedia.org/wiki/United%20Airlines%20Flight%20976 | United Airlines Flight 976 was a regularly scheduled flight from Ministro Pistarini International Airport in Buenos Aires to John F. Kennedy International Airport in New York City on October 19-20, 1995. Upon landing, one passenger, Gerard Finneran, was arrested by the FBI and charged with interfering with a flight crew and threatening a flight attendant.
During the flight, Finneran, a Wall Street investment banker, had been refused further alcoholic beverages when cabin crew determined he was intoxicated. After they thwarted his attempt to pour himself more, Finneran threatened one flight attendant with violence and attacked another one. He then went into the first-class compartment, which was also carrying Portuguese president Mário Soares and Argentinian foreign minister Guido di Tella and their security details. There, he climbed on a service trolley and defecated, using linen napkins to wipe himself, and later tracked and smeared his feces around the cabin.
Food service was canceled due to the unsanitary conditions and the crew sprayed perfume all over the cabin instead to suppress the smell of the feces. The pilots tried to divert to Luis Muñoz Marín International Airport in San Juan, Puerto Rico, but were refused since the presence of foreign dignitaries on board created a security risk. Finneran had by then calmed down and returned to his seat.
Finneran's attorneys claimed he had been suffering from a severe case of traveler's diarrhea and had been prevented by Soares's security from using the first-class toilet nearest his seat, just outside that section. He pleaded guilty and was fined $5,000, with two years' probation; he had also agreed to perform community service and pay $48,000 to reimburse United's cleanup costs and the other passengers for their airfare. The incident has been recalled as the worst case of air rage ever.
Flight
Finneran was a Wall Street investment banker who was then managing director of the Trust Company of the West (TCW). A member of the first graduating class of the U.S. Air Force Academy where he was an athlete, he later got an M.B.A. from University of Michigan's Ross School of Business. He had worked at Citibank and Drexel Burnham Lambert before TCW, becoming an expert on Third World debt, particularly in Latin America. He was returning to his home in Greenwich, Connecticut, near his native Larchmont, New York, where he had graduated from Iona Preparatory School.
Sharon Manskar, one of the flight attendants, recalled that Finneran, seated in first class, had become disruptive even before takeoff on October 19 from Ministro Pistarini International Airport. After having two glasses of champagne, he demanded to be moved into the row reserved for crewmembers' rest on long international flights, complaining that otherwise he was in the smoking section. Finneran began walking around the cabin, threatening crewmembers (at one point shoving Manskar) and pouring himself more champagne from bottles in the galley, against regulations.
As takeoff approached, Manskar was able to take the champagne bottle away from Finneran and persuade him to return to his seat. After the plane was airborne, Finneran was served another two glasses of red wine. Another crewmember again found him in the galley pouring himself more wine, whereupon he took the bottle back to his seat and secured it between his legs. A male flight attendant approached him and told him "we're going to take a little break from drinking now", to which Finneran later responded by getting out of his seat and threatening to assault the man (in the process delaying the attendant from bringing a first aid kit to another passenger who had complained of illness). After the crew supervisor intervened, Finneran returned to his seat and appeared to calm down.
A meal was served, after which the crew began drawing the curtains across the entrance to first class. Manskar was taking a break when she felt her seat shake. When she got up, Finneran pushed her through the curtains, then entered the first class galley. He found a drinks cart, climbed atop it, crouched down, lowered his pants and underwear, and defecated on the floor behind it, in full view of other passengers, using the linen napkins to wipe himself.
Tracking his excrement on his shoes, and further wiping the soiled napkins on the walls, Finneran then locked himself in the lavatory. With Manskar's help, his business partner and traveling companion, Susan Bergan, was able to open the lock and returned Finneran to his seat, where they both fell asleep after the crew put a blanket on Finneran to cover the odor of his soiled clothing. The crew sprayed Karl Lagerfeld perfume throughout the aisles of the plane to further mask the smell of feces; food service was also canceled due to the unhygienic conditions. Crew rest periods were suspended to attend to Finneran.
By this point, the plane had reached the Caribbean. The pilot sought to divert to Luis Muñoz Marín International Airport in the hopes of having Finneran taken off the flight. Controllers there refused permission since among the other passengers in first class were Portuguese president Mário Soares and Argentinian foreign minister Guido di Tella and their security details, traveling to New York for the United Nations' 50th anniversary celebrations. Emergency landings with foreign dignitaries aboard, unless the aircraft is malfunctioning, are discouraged due to security risks.
Arrest and trial
After the flight landed at Kennedy in the early morning hours of October 20, Finneran was taken into custody by Port Authority police and arrested by the FBI. He was charged with interfering with a flight crew and assaulting and intimidating a flight attendant and released after pleading not guilty and posting $100,000 bond. Ten days later federal magistrate Joan Azrack granted the prosecution's request to amend the terms of Finneran's bail to require that he attend an alcohol counseling program and not fly anywhere without the court's permission.
Finneran's attorney, Charles Stillman, denied both in court and out that his client's alcohol consumption had anything to do with the incident. "It's all totally false, a horrible lie", he told the New York Daily News. Finneran had been experiencing a severe and acute case of traveler's diarrhea, he said, but Soares' security would not let him use the first-class lavatory even though Finneran himself was in that section. "My client was suffering from a dire medical emergency" which the airline should have attended to.
In February 1996 Finneran pleaded guilty to threatening a flight attendant. He told magistrate judge Steven M. Gold of the Eastern District of New York that he had been angry when the crew stopped serving him alcohol. "I became annoyed and said words that implied a physical threat," which he confirmed were, specifically, "I will bust your ass!" Finneran told Gold that he hoped he was sober in court and had not had any alcohol in the previous 24 hours save a glass of wine with his dinner the night before.
At that hearing Stillman told the court that his client had agreed to reimburse the airline $1,000 for its cleaning costs, as well as every other passenger's airfare, which came to $48,000, and do 300 hours of community service. In May he was sentenced to two years' probation and fined $5,000. Gold further ordered him to get counseling and not drink when flying.
Aftermath and legacy
Finneran's arrest and the details of the incident made national news and were fodder for popular comedy. A week afterwards, David Letterman read off the "Top Ten Gerard Finneran Excuses", such as "You try drinking for 14 hours and see if you can tell the difference between a food cart and a bathroom" and "Thought he heard somebody yell, 'We're going to crash!' and that was just something he always wanted to do before he died." Spy ranked the incident as No. 22 for 1995 in its annual "Spy 100" list of the least likable things about each year; as a mitigating factor it noted that Finneran's behavior was more entertaining than any in-flight movie would have been.
At the end of 2004, Finneran died of Alzheimer's disease. In the later years of his life, he volunteered with the South Forty Corporation, a not-for-profit organization that helps convicts find employment and housing after their sentences end. United still uses 976 as a flight number, currently from Newark to Shannon, Ireland.
In the years since, the incident has been recalled as the worst case of air rage ever, particularly when compared to other incidents that have made the news. "It'll be hard to ever top that nasty bit of air rage, at least short of an actual act of terrorism", Forbes wrote in 2015 after an incident involving hotel heir Conrad Hilton III on a flight from London to Los Angeles. Writing in The Wall Street Journal in 2006, Eric Felten described the incident as the "nadir" of drunken misbehavior on a flight.
See also
List of air rage incidents
References
Aviation accidents and incidents in the United States in 1995
Aviation accidents and incidents involving state leaders
976
Defecation
October 1995 crimes in the United States | United Airlines Flight 976 | Biology | 1,906 |
51,153,846 | https://en.wikipedia.org/wiki/Women%20in%20Tech | Women In Tech: Take Your Career to the Next Level with Practical Advice and Inspiring Stories is a 2016 professional career guide written by Tarah Wheeler and published by Sasquatch Books. The book began as a Kickstarter project, with 772 backers and $32,226 in funding.
The book includes advice for women in developing career skills such as salary negotiation, networking, and finding work–life balance, as well as personal stories from female tech professionals.
Reception and impact
Library Journal called Women in Tech "The essential handbook for women in technology—engaging, practical, and inspirational."
In the fall of 2016, the University of California, Berkeley taught a class based on Wheeler's book, covering how to overcome barriers to entry in the technology industry and what is required for success as a woman entering the field.
Women in Tech has been translated into Korean.
References
2016 non-fiction books
Books about women
Women in technology
Sasquatch Books books
Women and employment | Women in Tech | Technology | 198 |
66,558,678 | https://en.wikipedia.org/wiki/TRPV3-74a | TRPV3-74a is a drug which acts as a selective antagonist for the TRPV3 calcium channel. It has analgesic effects in animal studies against both neuropathic pain and normal pain responses.
References
Trifluoromethyl compounds
2-Pyridyl compounds
Tertiary alcohols
Cyclobutanes | TRPV3-74a | Chemistry | 70 |
4,820,663 | https://en.wikipedia.org/wiki/Normative%20mineralogy | Normative mineralogy is a calculation of the composition of a rock sample that estimates the idealised mineralogy of a rock based on a quantitative chemical analysis according to the principles of geochemistry.
Normative mineral calculations can be achieved via either the CIPW Norm or the Barth-Niggli Norm (also known as the Cation Norm).
Normative calculations are used to produce an idealised mineralogy of a crystallized melt. First, a rock is chemically analysed to determine the elemental constituents. Results of the chemical analysis traditionally are expressed as oxides (e.g., weight percent Mg is expressed as weight percent MgO). The normative mineralogy of the rock then is calculated, based upon assumptions about the order of mineral formation and known phase relationships of rocks and minerals, and using simplified mineral formulas. The calculated mineralogy can be used to assess concepts such as silica saturation of melts.
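As an illustrative example of this convention (using standard atomic masses, which are not given in the source): a measured 6.0 wt% Mg would be reported as wt% MgO by scaling with the molar-mass ratio, $6.0 \times M_{\mathrm{MgO}}/M_{\mathrm{Mg}} = 6.0 \times 40.30/24.31 \approx 9.9$ wt% MgO.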
Because the normative calculation is essentially a computation, it can be achieved via computer programs.
CIPW Norm
The CIPW Norm was developed in the early 1900s and named after its creators, the petrologists Charles Cross, Joseph Iddings, Louis Pirsson, and the geochemist Henry Washington. The CIPW normative mineralogy calculation is based on the typical minerals that may be precipitated from an anhydrous melt at low pressure, and simplifies the typical igneous geochemistry seen in nature with the following four constraints:
The magma crystallizes under anhydrous conditions so that no hydrous minerals (hornblende, biotite) are formed.
The ferromagnesian minerals are assumed to be free of Al2O3.
The Fe/Mg ratio for all ferromagnesian minerals is assumed to be the same.
Several minerals are assumed to be incompatible, thus nepheline and/or olivine never appear with quartz in the norm.
This is an artificial set of constraints, and therefore the results of the CIPW norm do not reflect the true course of igneous differentiation in nature.
The primary benefit of calculating a CIPW norm is to determine the ideal mineralogy of an aphanitic or porphyritic igneous rock. Secondly, the degree of silica saturation of the melt that formed the rock can be assessed in the absence of diagnostic feldspathoid species.
The silica saturation of a rock varies not only with silica content but the proportion of the various alkalis and metal species within the melt. The silica saturation eutectic plane is thus different for various families of rocks and cannot be easily estimated, hence the requirement to calculate whether the rock is silica saturated or not.
This is achieved by assigning cations of the major elements within the rock to silica anions in modal proportion, to form solid-solution minerals in the idealised mineral assemblage, starting with the non-silicates: phosphorus for apatite; chlorine and sodium for halite; sulfur and FeO for pyrite; FeO and Cr2O3 for chromite; FeO and an equal molar amount of TiO2 for ilmenite; and CaO and CO2 for calcite, completing the most common non-silicate minerals.
From the remaining chemical constituents, Al2O3 and K2O are allocated with silica for orthoclase, and sodium and aluminium with silica for albite, and so on until either there is no silica left (in which case feldspathoids are calculated) or there is an excess, in which case the rock contains normative quartz.
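As noted above, the norm is essentially a bookkeeping computation. A deliberately minimal Python sketch of the feldspar/quartz step just described follows; the function name, the mole-based input, and the restriction to orthoclase, albite and quartz are illustrative simplifications, not the full CIPW procedure:
# Simplified sketch of the final CIPW-style allocation step: build the
# alkali feldspars from the remaining oxides, then report leftover
# silica as normative quartz. Input is moles of each oxide remaining
# after the non-silicate minerals have been allocated.
def allocate_felsics(moles):
    norm = {}
    # Orthoclase (KAlSi3O8): each mole of K2O takes 1 Al2O3 and 6 SiO2.
    orthoclase = min(moles["K2O"], moles["Al2O3"])
    moles["Al2O3"] -= orthoclase
    moles["SiO2"] -= 6 * orthoclase
    norm["orthoclase"] = orthoclase
    # Albite (NaAlSi3O8): each mole of Na2O takes 1 Al2O3 and 6 SiO2.
    albite = min(moles["Na2O"], moles["Al2O3"])
    moles["SiO2"] -= 6 * albite
    norm["albite"] = albite
    # Excess silica appears as normative quartz; a deficit would instead
    # trigger recalculation with feldspathoids (not shown here).
    norm["quartz"] = max(moles["SiO2"], 0.0)
    norm["silica_deficient"] = moles["SiO2"] < 0
    return norm
print(allocate_felsics({"SiO2": 10.0, "Al2O3": 2.0, "K2O": 0.5, "Na2O": 1.0}))
# -> {'orthoclase': 0.5, 'albite': 1.0, 'quartz': 1.0, 'silica_deficient': False}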
Normative and modal mineralogy
Normative mineralogy is an estimate of the mineralogy of the rock. It usually differs from the visually observable mineralogy, at least in the mineral species identified, especially amongst the ferromagnesian minerals and feldspars, where many solid-solution series are possible, or where minerals with similar Fe and Mg ratios substitute for one another, especially in the presence of water (e.g. amphibole and biotite replacing pyroxene).
However, in aphanites, or in rocks with phenocrysts clearly out of equilibrium with the groundmass, a normative mineral calculation is often the best way to understand the evolution of the rock and its relationship to other igneous rocks in the region.
Cautions
The CIPW Norm or Cation Norm is a useful tool for assessing silica saturation or oversaturation, but these estimates of mineralogy come from a mathematical model based on many assumptions, and the results must be balanced against the observable mineralogy. The following areas create the most errors in calculations:
Cumulate rocks. While a normative mineral calculation isn't necessarily improper for use on cumulate rocks, the information gleaned is of doubtful value because a cumulate rock does not represent the melt from which it was extracted. However, if the groundmass of a cumulate can be analysed, it is valid to use a normative calculation to gain information about the parental melt.
Oxidation state. Because the normative calculation divides Fe between oxide phases and availability for silicate phases, based on estimates of the ratio of Fe2+/Fe3+, miscalculating the ratio for the rock in question may produce erroneous amounts of magnetite or hematite, or alter the silicate mineralogy. If the Fe2+/Fe3+ ratio is known for the sample, the resulting calculation should match the observed mineralogy more closely.
Pressure and temperature. Because the CIPW Norm is based on anhydrous melts and crystallisation at fairly low pressures, the resultant normative mineralogy does not reflect observed mineralogy for all rock types, especially those formed within the mantle. The normative mineralogy is not entirely accurate at reflecting the mineralogy of rocks formed at high pressures where, for instance, phlogopite may substitute for amphibole, amphibole for olivine and so forth. Altered normative calculations have been developed that more correctly reflect the particular pressure regimes of the deep crust and mantle.
Carbon dioxide. The influence of CO2 on most silicate melts is fairly minor, but in some cases, especially carbonatite, and also certain lamprophyre-type rocks, kimberlite and lamproite, the presence of carbon dioxide and calcite in the melt or accessory phases produces erroneous normative mineralogy. This is because if carbon is not analyzed there is excess calcium, causing normative silica undersaturation and increasing the calcium silicate mineral budget. Similarly, if graphite is present (as is the case with some kimberlites) this can produce excess C, and hence skew the calculation toward excess carbonate. Excess elemental C also, in nature, results in reduced oxygen fugacity and alters Fe2+/Fe3+ ratios.
Halides. The presence of some halides and non-metallic elements in the melt alter the resulting chemistry. For instance, boron forms tourmaline; excess chlorine may form scapolite instead of feldspar. This is generally rare, except in some A-type granites and related rocks.
Mineral disequilibrium. Similar to cumulate rocks, a rock may contain extraneous minerals inherited from earlier melts, or may even contain xenoliths or restite. It is improper to calculate normative mineralogy on an igneous breccia, for instance.
For this reason it is not advised to utilise a CIPW norm on kimberlites, lamproites, lamprophyres and some silica-undersaturated igneous rocks. In the case of carbonatite, it is improper to use a CIPW norm upon a melt rich in carbonate.
It is possible to apply the CIPW norm to metamorphosed igneous rocks. The validity of the method holds as true for metamorphosed igneous rocks as any igneous rock, and in this case it is useful in deriving an assumed mineralogy from a rock that may have no remnant protolith mineralogy remaining.
See also
Geochemistry
Mineralogy
Wüstite
References
Hess, P. C. (1989), Origins of Igneous Rocks, President and Fellows of Harvard College, pp. 276–285.
Blatt, Harvey and Robert Tracy (1996), Petrology, 2nd ed., Freeman, pp. 196–197.
External links
Basic CIPW calculation, excluding exotic minerals steps, with excel spreadsheet calculator
CIPW Norm alternate resource, with explanations.
Geochemistry
Mineralogy
Igneous petrology | Normative mineralogy | Chemistry | 1,802 |
44,788,496 | https://en.wikipedia.org/wiki/Matthias%20Aschenbrenner | Matthias Aschenbrenner (born 1972 in Bad Kötzting) is a German-American mathematician. He is a professor of mathematics and director of the logic group at the University of Vienna. His research interests include differential algebra and model theory.
Career
Aschenbrenner earned his "Vordiplom" at the University of Passau in 1996. In 2001, he received his Ph.D. from the University of Illinois at Urbana–Champaign, where he was a student of Lou van den Dries. For his dissertation, he was awarded the 2001 Sacks Prize by the Association for Symbolic Logic. After a visiting position at the University of California, Berkeley, Aschenbrenner joined the faculty at the University of Illinois at Chicago in 2003, moving to the University of California, Los Angeles in 2007. In 2012, Aschenbrenner became a Fellow of the American Mathematical Society. He was jointly awarded the 2018 Karp Prize with Lou van den Dries and Joris van der Hoeven "for their work in model theory, especially on asymptotic differential algebra and the model theory of transseries". In 2018, Aschenbrenner was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro. Aschenbrenner moved to the University of Vienna in 2020, where he is also director of the logic group.
References
1972 births
Living people
German emigrants to the United States
Fellows of the American Mathematical Society
Academic staff of the University of Vienna
21st-century American mathematicians
Model theorists
University of Passau alumni
People from Bad Kötzting
University of Illinois College of Liberal Arts and Sciences alumni | Matthias Aschenbrenner | Mathematics | 333 |
75,297,354 | https://en.wikipedia.org/wiki/NGC%205273 | NGC 5273 is a lenticular galaxy in the northern constellation of Canes Venatici. This galaxy was discovered by William Herschel on May 1, 1785. It is positioned to the southeast of the star 25 Canum Venaticorum.
The morphological classification of this galaxy is SA0(s), indicating a lenticular form: it displays a faint, unbarred spiral structure within a generally elliptical profile. NGC 5273 is classified as a type 1.5 Seyfert galaxy, with the X-ray emission from its active galactic nucleus undergoing significant absorption. However, data collected between 2000 and 2022 suggest this is a changing-look Seyfert, with the type ranging from 1 to 1.8/1.9. The activity level shows strong variability, allowing reverberation mapping of the supermassive black hole at the core and an estimate of its mass.
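Reverberation-mapping mass estimates of this kind rest on the standard virial relation; as a general sketch (the formula and symbols are the conventional ones, not values specific to NGC 5273):
$M_{\mathrm{BH}} = f \, c\tau\,(\Delta v)^2 / G$
where $\tau$ is the measured time delay between variations in the continuum and in a broad emission line (so $c\tau$ approximates the radius of the broad-line region), $\Delta v$ is the velocity width of that line, $G$ is the gravitational constant, and $f$ is a dimensionless geometric factor of order unity.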
References
Further reading
Lenticular galaxies
Seyfert galaxies
Canes Venatici
5273
48521
08675
Astronomical objects discovered in 1785
Discoveries by William Herschel | NGC 5273 | Astronomy | 231 |
5,499,409 | https://en.wikipedia.org/wiki/Desmoglein | The desmogleins are a family of desmosomal cadherins consisting of proteins DSG1, DSG2, DSG3, and DSG4. They play a role in the formation of desmosomes that join cells to one another.
Pathology
Desmogleins are targeted in the autoimmune disease pemphigus.
Desmoglein proteins are a type of cadherin, which is a transmembrane protein that binds with other cadherins to form junctions known as desmosomes between cells. These desmoglein proteins thus hold cells together, but, when the body starts producing antibodies against desmoglein, these junctions break down, and this results in subsequent blister or vesicle formation.
References
External links
Cadherins
Single-pass transmembrane proteins
Protein families | Desmoglein | Biology | 176 |
22,764,213 | https://en.wikipedia.org/wiki/Narec | Narec, since 2014 known as the National Renewable Energy Centre, is a part of the Offshore Renewable Energy (ORE) Catapult, a British technology innovation and research centre for offshore wind power, wave energy, tidal energy and low carbon technologies. ORE Catapult's head office is in Glasgow, Scotland. The centre operates multi-purpose offshore renewable energy test and demonstration facilities. It is similar to other centres, such as NREL in the US and CENER in Spain. The National Renewable Energy Centre is based in Blyth, Northumberland.
History
Originally known as NaREC (New and Renewable Energy Centre), the centre was created in 2002 by One NorthEast, the North East regional development agency, as part of the Strategy for Success programme. In 2010 the organisation changed its name to Narec (National Renewable Energy Centre). In April 2014, the organisation merged with the Offshore Renewable Energy (ORE) Catapult to focus on the development and cost reduction of offshore wind, wave and tidal energy across the UK.
The organisation was originally involved in a wide range of technologies, including:
Wind (onshore and offshore)
Transmission and distribution
Photovoltaics
Oil and gas
Marine renewables
Fuel cells
Microrenewables
Biomass
In 2010, due to UK government cutbacks, Narec closed, sold off or separated parts of the business. Spin-off companies include:
Decerna – Working on energy efficiency, solar farm design, preparation of MW-scale battery sites, grid connection, and life-cycle assessment. The company was renamed from Narec Distributed Energy Limited in 2022.
Solar Capture Technologies – Specialises on bespoke and novel solar photovoltaic systems, including off-grid systems. Renamed from Narec Solar in 2013.
NCL Technology Ventures – A specialist healthcare investor, originally created by Narec and Ashberg Limited. Renamed from Narec Capital in 2013.
Renewable Risk Advisers Limited – renamed from Narec Capital Risk Solutions Limited in 2012.
Following its merger with ORE Catapult, the National Renewable Energy Centre now focuses on helping to de-risk and accelerate the development and commercialisation of the offshore renewable energy industry in the UK.
Operations
The National Renewable Energy Centre is involved in:
Wind turbine rotor blades
Product certification, verification and investigations for the next generation offshore wind turbines.
Power trains and components
3MW and 15MW facilities that can perform independent performance and reliability assessments of full systems and components.
Electrical networks
UKAS accredited laboratories with specialist test and measurement facilities to help develop technologies needed for developing power systems and exploring life extension opportunities for ageing assets.
Subsea trials and demonstrations
Controlled onshore salt water location for all stages of technology development.
Resource measurement and assessment
Open access facility for testing, calibrating and verifying remote sensor technologies
Closed facilities
Clothier High Voltage Laboratory
The Clothier Electrical Testing Laboratory was opened in 1970 by A. Reyrolle & Company. Narec took over the facility in 2004, using it to test the robustness of electrical infrastructure for locations ranging from offshore to onshore sites.
Although it was one of the few high voltage testing facilities in the world, the facility was closed by Narec in 2011 due to a lack of government funding. Many parts of the lab were relocated to Narec's main campus in Blyth. The ruins of the original lab are now the property of Siemens.
Current facilities
Charles Parsons Technology Centre
Built in 2004, this £5m facility contains a low voltage electrical laboratory for the testing of connecting renewable energy systems to the transmission and distribution grid. Some of the equipment and staff from the closed Narec Clothier Electrical Testing Laboratory were moved to this facility.
Training tower
This is a 27m high tower used for the training of offshore wind technicians.
Dry docks
Three modified dry docks used for testing marine devices.
Power train test facilities – 3MW and 15MW
Facilities that can perform independent performance and reliability assessments of full systems and components.
Blade test 1 & 2
The blade testing facilities at National Renewable Energy Centre are designed to test wind turbine blades up to 100m in length. Blades are tested using a Compact Resonant Mass (CRM) system. ORE Catapult is working on a technique of blade testing known as "Dual Axis".
European funded research
ORE Catapult is involved in a number of European funded research projects including Tidal EC, Optimus and LIFES50+.
Conferences and papers
Narec staff have written papers which have appeared in journals and international energy conferences. These are mainly in the subjects of photovoltaics, wind, marine, and electrical infrastructure. A short list of some of these is given below:
Snapper, An efficient and compact direct electric power take-off device for wave energy converters.
Availability and Estimation of Marine Renewable Energy Resources
Marine Renewables: A Development Route Map for the UK
Bivariate empirical mode decomposition and its contribution to wind turbine condition monitoring
Experimental tests of an air-cored PM tubular generator for direct drive wave energy converters
Fatigue testing of wind turbine blades with computational verification.
Ensuring Reliability for Offshore Wind – Large Testing Facilities.
Accelerating Technology Development for Round 3 Offshore Deployment.
Electrical Network Testing & Simulation: An effective method of testing the fault ride through capabilities of Small Scale Distributed Generation
Ensuring Reliability for Marine Renewable Drive Train Systems – Nautilus Testing Facilities
LGBC Silicon Solar Cell with modified bus bar suitable for high volume wire bonding
Process and device modelling for enhancement of silicon solar cell efficiency
An intelligent approach to the condition monitoring of large scale wind turbines
Lightning Arresters and Substation Protection
Study on laser parameters for silicon solar cells with LCP selective emitters
Low Cost, 100X point focus silicon concentrator cells made by the LGBC process
Laser Grooved Buried Contact Concentrator Solar Cells
Studying the Groove Profiles Produced for Fine Line Screen Printed Front Contacts in Laser Grooved Buried Contact Solar Cells.
Investigation of cross wafer uniformity of production line produced LGBC concentrator solar cells
Process Development of Coloured LGBC Solar Cells for BIPV Applications
Process optimisation for coloured laser grooved buried contact solar cells
Colour and Shape in Laser Grooved Buried Contact Solar Cells for Applications in the Built Environment
Fine-Line Screen Printing in Large Area Laser Grooved, Buried Contact Silicon Solar Cells
Progress of the LAB2LINE Laser Grooved Buried Contact Screen Printed Solar Cells Hybrid p-type Monocrystalline Process
Development of Laser Fired Contact (LFC) Rear Passivated Laser Groove Buried Contact (LGBC) Solar Cells Using Thin Wafers
The LAB2LINE laser grooved buried contact screen printed solar cells hybrid p-type monocrystaline process
Integrated process and device 'TCAD' for enhancement of C-Si solar cell efficiency
Screen printing in laser grooved buried contact solar cells: The LAB2LINE hybrid processes
Surface passivation by silicon nitride in Laser Grooved Buried Contact (LGBC) silicon solar cells
Optimisation of the front contact for low to medium concentrations in LGBC silicon solar cells
Laser Grooved Buried Contact Solar Cells for Concentration Factors up to 100X
Device Design and Process Optimisation for LGBC Solar Cells for Use Between 50X and 100X Concentration
Design and Optimisation of Laser Grooved Buried Contact Solar Cells for Use At Concentration Factors Up To 100X
Development of Laser Grooved Buried Contact Solar Cells for Use at Concentration Factors up to 100X
Front contact modelling of monocrystaline silicon laser grooved buried contact solar cells
Laser Grooved Buried Contact Concentrator Cells
PC1D modelling of the efficiency of laser grooved buried contact solar cells designed for use at concentration factors up to 100X
Front Dicing Technique for Pre-isolation of Concentrator Silicon Solar Cells
Environmental sustainability of concentrator PV systems: Preliminary LCA results of the APOLLON project
Process development of shape and colour in LGBC solar cells for BIPV applications
A summary of the Havemor project – Process development of shaped and coloured solar cells for BIPV applications
Process and device modelling for enhancement of silicon solar cell efficiency
Technological and Financial Aspects of Laser Grooved Buried Contact Silicon Solar Cell Based Concentrator Systems
First results on the APOLLON project multi-approach for high efficiency integrated and intelligent concentrating PV modules (systems)
References
External links
ORE Catapult
Blyth, Northumberland
Renewable energy organizations
Energy research institutes
Engineering companies of England
Renewable energy in England
Companies based in Northumberland
2002 establishments in England
Companies established in 2002
Non-profit organisations based in England
Catapult centres
Science and technology in Northumberland | Narec | Engineering | 1,726 |
78,347,925 | https://en.wikipedia.org/wiki/Cylindera%20kaleea | Cylindera kaleea is a species of tiger beetle found across East Asia.
Known subspecies
Cylindera kaleea angulimaculata
Cylindera kaleea kaleea
Cylindera kaleea yedoensis
References
kaleea
Beetles described in 1866
Biota of Asia | Cylindera kaleea | Biology | 57 |
37,073,976 | https://en.wikipedia.org/wiki/Saccharum%20edule | Saccharum edule is a species of sugarcane, a grass in the genus Saccharum with a fibrous stalk that is rich in sugar. It is cultivated in tropical climates in southeastern Asia. It has many common names, which include duruka, tebu telor, PNG/Fiji asparagus, dule (Fiji), pitpit (Melanesia/New Guinea) and naviso.
The young, unopened flower heads of Saccharum edule are eaten raw, steamed, or toasted, and prepared in various ways in Southeastern Asia, including New Guinea, Fiji and certain island communities of Indonesia.
Description
Saccharum edule is a perennial plant that grows in vigorous clumps that grow to a height of . Although the plant resembles sugarcane from a distance, the stem is much narrower and the leaves thinner and more flexible. The large flower panicles do not open but remain inside their leaf sheaths forming a dense mass. Saccharum edule is part of the Saccharum officinarum species complex and its genome has been investigated.
Distribution
Saccharum edule originated in Southeastern Asia and is also grown on various Pacific Islands at heights ranging from sea level to high altitudes. It needs a growing temperature of to and an annual rainfall of .
Uses
The unopened flower heads of Saccharum edule are gathered and used as a vegetable; they are eaten either raw or cooked. In Fiji, a number of different varieties occur and some grow wild along the riverbank. Children enjoy gathering, roasting and eating the flower heads of the early season red duruka, and later the different varieties of white duruka as they mature in rotation. The flower heads are widely sold in local markets for use as a vegetable. A purple duruka which flowers twice a year has been introduced and become popular, and it is proposed that a canning operation be set up to sell this as "Fijian asparagus". The plant is also used for erosion control.
In Papua New Guinea pitpit is eaten cooked in coconut milk.
References
edule
Flora of Indo-China
Flora of Malesia
Flora of the Pacific
Inflorescence vegetables
Fijian cuisine
Unplaced names | Saccharum edule | Biology | 453 |
5,747,330 | https://en.wikipedia.org/wiki/Integrated%20Terrorism%20Assessment%20Centre | The Integrated Terrorism Assessment Centre (ITAC) is an independent federal organization tasked with assessing threats of terrorism to Canada and Canadian interests abroad. It is the only federal organization with the specific responsibility of analyzing terrorism threats related to Canada.
ITAC is responsible for assessing and recommending the National Terrorism Threat Level, used by the Canadian government and law enforcement agencies to "mitigate the potential effects of terrorism incidents in Canada and abroad." Administratively, ITAC functions as a component of Canadian Security Intelligence Service (CSIS) and is subject to the CSIS Act. However, operationally, it is independent of CSIS and is accountable to the National Security Advisor, rather than the Director of CSIS.
Co-located with CSIS in Ottawa, the ITAC is a cooperative initiative that facilitates intelligence information sharing and analysis within the Canadian intelligence community and to first responders, such as law enforcement.
History
In 2003, the government of Paul Martin established the Integrated Threat Assessment Centre in the environment that followed the September 11 attacks in the US and the 2002–2004 SARS outbreak, among other events. The ITAC officially became operational the following year, on 15 October 2004.
In 2008, the government of Stephen Harper changed the group's mandate to prioritize terrorist threats to Canadians and Canadian interests. The ITAC was subsequently renamed the Integrated Terrorism Assessment Centre in June 2011, emphasizing its focus on terrorism.
Partners
Domestic partners include:
Canadian Armed Forces
Canada Border Services Agency
Canada Revenue Agency
Canadian Security Intelligence Service
Communications Security Establishment
Correctional Service of Canada
Department of National Defence
Financial Transactions and Reports Analysis Centre of Canada
Global Affairs Canada
Ontario Provincial Police
Privy Council Office
Public Safety Canada
Royal Canadian Mounted Police
Sûreté du Québec
Transport Canada
International partners include:
Australia National Threat Assessment Centre
New Zealand Combined Threat Assessment Group
U.S. National Counterterrorism Center
U.K. Joint Terrorism Analysis Centre
National Terrorism Threat Level
The National Terrorism Threat Level (NTTL) is a tool used by Canadian government officials, including law enforcement agencies, to identify risks and vulnerabilities from threats of terrorism in Canada. It represents the probability of a violent act of terrorism occurring in Canada, based on information and intelligence. Assessment and recommendation of the NTTL is the responsibility of the Integrated Terrorism Assessment Centre.
The NTTL was formalized in early October 2014 when the threat level was first raised in Canada, anticipating incidents like the two that occurred later that month—the Saint-Jean-sur-Richelieu ramming attack and the shootings at Parliament Hill. In addition to mitigating the potential effects of terrorism incidents in Canada and abroad, its key benefit is that it helps to ensure a common understanding of the general terrorist threat to Canada.
Canada's current level is "Medium", which means that a "violent act of terrorism could occur"; it has been at this level since October 2014. More specifically, this means that "extremist groups and individuals located in Canada and abroad, have both the intent AND capability to carry out an act of terrorism in Canada."
Threat Level assessments are conducted every four months at a minimum, or more frequently if needed.
See also
Terrorism in Canada
Terror alert system
References
External links
Integrated Terrorism Assessment Centre home page
Backgrounder
Federal departments and agencies of Canada
Canadian intelligence agencies
Government of Canada
Canadian Security Intelligence Service
2003 establishments in Canada
Public Safety Canada
Canada's National Terrorism Threat Level
National security in Canada | Integrated Terrorism Assessment Centre | Technology | 726 |
4,971,620 | https://en.wikipedia.org/wiki/Gamma1%20Caeli | Gamma1 Caeli, Latinized from γ1 Caeli, is a double star in the constellation Caelum. It consists of a K-type giant and a G-type subgiant.
Properties
Component A
Gamma1 Caeli A has an apparent magnitude of 4.57, which makes it barely visible to the naked eye. According to parallax measurements, the star is located 185 light years away. Gamma1 Caeli A has a similar mass to the Sun, but has expanded to 14.3 times the Sun's radius. It radiates 69.9 times the Sun's luminosity from its swollen photosphere at an effective temperature of 4,411 K.
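These quoted values are mutually consistent under the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(T/T☉)⁴. A minimal Python sketch (not part of the source) checks this, assuming the IAU nominal solar effective temperature of 5,772 K:

# Stefan-Boltzmann consistency check for Gamma1 Caeli A,
# using the radius and temperature quoted above.
R_RATIO = 14.3    # stellar radius, in solar radii
T_STAR = 4411.0   # effective temperature, in kelvin
T_SUN = 5772.0    # IAU nominal solar effective temperature (assumed)

luminosity = R_RATIO**2 * (T_STAR / T_SUN)**4
print(f"L = {luminosity:.1f} L_sun")
# -> ~69.7 L_sun, close to the quoted 69.9; rounding in the quoted
#    radius and temperature accounts for the small difference.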
Component B
Gamma1 Caeli B has an apparent magnitude of 8.07, which makes it visible only in binoculars, and is located at a similar distance to Component A. It has 91% of the Sun's mass and is metal-poor, with 79% of the Sun's abundance of heavy elements.
References
Caeli, Gamma1
Caelum
Binary stars
K-type giants
032831
023595
1652
Durchmusterung objects
G-type subgiants | Gamma1 Caeli | Astronomy | 256 |
3,587,470 | https://en.wikipedia.org/wiki/Effective%20stress | The effective stress can be defined as the stress, depending on the applied tension σ and the pore pressure u, which controls the strain or strength behaviour of soil and rock (or a generic porous body) for whatever pore pressure value; in other terms, it is the stress which, applied over a dry porous body (i.e. at u = 0), provides the same strain or strength behaviour which is observed at u ≠ 0. In the case of granular media it can be viewed as a force that keeps a collection of particles rigid. Usually this applies to sand, soil, or gravel, as well as every kind of rock and several other porous materials such as concrete, metal powders, biological tissues etc. The usefulness of an appropriate effective stress principle (ESP) formulation lies in allowing the behaviour of a porous body to be assessed for whatever pore pressure value on the basis of experiments involving dry samples (i.e. carried out at zero pore pressure).
History
Karl von Terzaghi first proposed the relationship for effective stress in 1925. For him, the term "effective" meant the calculated stress that was effective in moving soil, or causing displacements. It has been often interpreted as the average stress carried by the soil skeleton. Afterwards, different formulations have been proposed for the effective stress. Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of Poroelasticity. Alec Skempton in his work in 1960, has carried out an extensive review of available formulations and experimental data in literature about effective stress valid in soil, concrete and rock, in order to reject some of these expressions, as well as clarify what expression was appropriate according to several work hypotheses, such as stress–strain or strength behaviour, saturated or nonsaturated media, rock/concrete or soil behaviour, etc.
In 1962, work by Jeremiah Jennings and John Burland examined the applicability of Terzaghi’s effective stress principle to partly saturated soils. Through a series of experiments undertaken at the University of the Witwatersrand, including oedometer and compression tests on various soil types, they showed that behaviours such as volume changes and shear strength in partly saturated soils do not align with predictions based on effective stress changes alone. Their findings showed that the structural changes due to pressure deficiencies behave differently from changes due to applied stress.
Description
Effective stress (σ′) acting on a soil is calculated from two parameters, total stress (σ) and pore water pressure (u), according to:

σ′ = σ − u

Typically, for simple examples, the total stress is taken as the weight of soil and water above the point considered, and the pore water pressure as the hydrostatic pressure at that point.
Much like the concept of stress itself, the formula is a construct for the easier visualization of forces acting on a soil mass, especially in simple analysis models for slope stability involving a slip plane. With these models, it is important to know the total weight of the soil above (including water), and the pore water pressure within the slip plane, assuming it is acting as a confined layer.
However, the formula becomes confusing when considering the true behaviour of the soil particles under different measurable conditions, since none of the parameters are actually independent actors on the particles.
Consider a grouping of round quartz sand grains, piled loosely, in a classic "cannonball" arrangement. As can be seen, there is a contact stress where the spheres actually touch. Pile on more spheres and the contact stresses increase, to the point of causing frictional instability (dynamic friction), and perhaps failure. The independent parameter affecting the contacts (both normal and shear) is the force of the spheres above. This can be calculated by using the overall average density of the spheres and the height of spheres above.
If we then have these spheres in a beaker and add some water, they will begin to float a little depending on their density (buoyancy). With natural soil materials, the effect can be significant, as anyone who has lifted a large rock out of a lake can attest. The contact stress on the spheres decreases as the beaker is filled to the top of the spheres, but then nothing changes if more water is added. Although the water pressure between the spheres (pore water pressure) is increasing, the effective stress remains the same, because the concept of "total stress" includes the weight of all the water above. This is where the equation can become confusing, and the effective stress can be calculated using the buoyant density of the spheres (soil), and the height of the soil above.
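As a minimal numeric illustration of the hydrostatic case just described, the following Python sketch (not from the source; the unit weights are typical assumed values, site-specific in practice) computes total stress, pore water pressure and effective stress at depth:

# Effective stress at depth z below ground for a uniform soil column
# with a horizontal water table (hydrostatic pore pressure assumed).
GAMMA_W = 9.81  # unit weight of water, kN/m^3

def effective_stress(z, z_water, gamma_soil=18.0):
    """Return (total stress, pore pressure, effective stress) in kPa at
    depth z (m), with the water table at depth z_water (m) and an assumed
    bulk unit weight gamma_soil (kN/m^3)."""
    sigma = gamma_soil * z                 # total vertical stress
    u = max(0.0, GAMMA_W * (z - z_water))  # hydrostatic pore pressure
    return sigma, u, sigma - u             # Terzaghi: sigma' = sigma - u

# Example: 5 m depth, water table at 2 m
print(effective_stress(5.0, 2.0))  # approximately (90.0, 29.43, 60.57)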
The concept of effective stress truly becomes interesting when dealing with non-hydrostatic pore water pressure. Under the conditions of a pore pressure gradient, the ground water flows, according to the permeability equation (Darcy's law). Using our spheres as a model, this is the same as injecting (or withdrawing) water between the spheres. If water is being injected, the seepage force acts to separate the spheres and reduces the effective stress. Thus, the soil mass becomes weaker. If water is being withdrawn, the spheres are forced together and the effective stress increases.
Two extremes of this effect are quicksand, where the groundwater gradient and seepage force act against gravity; and the "sandcastle effect", where the water drainage and capillary action act to strengthen the sand. As well, effective stress plays an important role in slope stability, and other geotechnical engineering and engineering geology problems, such as groundwater-related subsidence.
References
Terzaghi, K. (1925). Principles of Soil Mechanics. Engineering News-Record, 95(19-27).
Soil mechanics | Effective stress | Physics | 1,149 |
7,160,712 | https://en.wikipedia.org/wiki/Proclus%20of%20Laodicea | Proclus, or Proculeius, son of the physician Themison, was a hierophant at Laodiceia in Syria. He wrote, according to the Suda, the following works:
On the gods (θεολογία)
On the myth of Pandora in Hesiod (εἰς τὴν παρ' Ἡσιόδῳ τῆς Πανδώρας μῦθον)
On golden words (εἰς τὰ χρυσᾶ ἔπη)
On Nicomachus' introduction to number theory (εἰς τὴν Νικομάχου εἰσαγωγὴν τῆς ἀριθμητικῆς)
some geometrical treatises
He is also mentioned by Damascius in a commentary on Plato.
Although a commentary on the Pythagorean Golden Verses, known through a translation into Arabic (in the El Escorial library as manuscript 888) has sometimes been attributed to this Proclus (following a theory promoted by ), this is disputed, and a more widely accepted theory is that the commentary is instead by Proclus Diadochus.
See also
Proculeia gens
References
Year of birth missing
Year of death missing
Ancient Greek mathematicians
Ancient Greek writers
People from Latakia
Philosophers of mathematics | Proclus of Laodicea | Mathematics | 285 |
12,155,118 | https://en.wikipedia.org/wiki/Johnson%27s%20rule | In operations research, Johnson's rule is a method of scheduling jobs in two work centers. Its primary objective is to find an optimal sequence of jobs to reduce makespan (the total amount of time it takes to complete all jobs). It also reduces the amount of idle time between the two work centers.
The method minimizes the makespan in the case of two work centers. Furthermore, the method finds the shortest makespan in the case of three work centers if additional constraints are met.
Algorithm
The technique requires several preconditions:
The time for each job must be invariant with respect to when it is done.
Job times must be independent of the job sequence.
All jobs must be processed in the first work center before going through the second work center.
All jobs are equally prioritised.
Johnson's rule is as follows:
List the jobs and their times at each work center.
Select the job with the shortest activity time. If that activity time is for the first work center, then schedule the job first. If that activity time is for the second work center then schedule the job last. Break ties arbitrarily.
Eliminate the shortest job from further consideration.
Repeat steps 2 and 3, working towards the center of the job schedule until all jobs have been scheduled.
Given significant idle time at the second work center (from waiting for the job to be finished at the first work center), job splitting may be used.
If there are three work centers, Johnson's rule can still be applied if the minimum processing time in the first (and/or the third) work center is not less than the maximum processing time in the second work center. If so, one can combine the three centers into two virtual work centers and then apply Johnson's rule as in the two-work-center case.
Example
Each of five jobs needs to go through work centers A and B. Find the optimum sequence of jobs using Johnson's rule.
So, the jobs must be processed in the order C → A → D → E → B, and must be processed in the same order on both work centers.
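The original table of processing times is not reproduced above, so the following Python sketch uses hypothetical times (hours), chosen only so that the rule reproduces the stated sequence:

def johnsons_rule(jobs):
    """Sequence jobs across two work centers using Johnson's rule.
    jobs maps job name -> (time on work center 1, time on work center 2)."""
    remaining = dict(jobs)
    front, back = [], []
    while remaining:
        # Step 2: select the job with the shortest remaining activity time.
        job = min(remaining, key=lambda j: min(remaining[j]))
        t1, t2 = remaining.pop(job)  # Step 3: eliminate it from consideration.
        if t1 <= t2:
            front.append(job)        # shortest time on center 1: schedule early
        else:
            back.insert(0, job)      # shortest time on center 2: schedule late
    return front + back

# Hypothetical processing times, for illustration only.
times = {"A": (3, 6), "B": (5, 1), "C": (2, 5), "D": (6, 5), "E": (7, 2)}
print(johnsons_rule(times))  # ['C', 'A', 'D', 'E', 'B']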
Notes
References
j
Further reading
William J Stevenson, Operations Management 9th Edition, McGraw-Hill/Irwin, 2007
Production planning
Operations research | Johnson's rule | Mathematics | 447 |
61,751,594 | https://en.wikipedia.org/wiki/Curvularia%20inaequalis | Curvularia inaequalis is a plant saprobe that resides in temperate and subtropical environments. It is commonly found in the soils of forage grasses and grains. The species has been observed in a broad distribution of countries including Turkey, France, Canada, the United States, Japan and India. It is a dematiaceous (dark-pigmented) hyphomycete.
History and taxonomy
The genus Curvularia can be identified by its spirally borne phaeophragmospores, which contain both hyaline end cells and disproportionately large cells. Its species possess conidia with differing curvature and numbers of septa. C. inaequalis was first described in 1907 by ecologist Cornelius Lott Shear. The fungus was isolated from diseased New Jersey cranberry pulp and named Helminthosporium inaequale. Later, during Karl Boedijn's taxonomic organization and grouping of this genus, he recognized a morphological similarity between the conidia of H. inaequale and those of the lunata group within Curvularia, and so renamed it C. inaequalis. Recognition of the three-septate curved conidia motivated the introduction of the now popularized name.
Growth and morphology
The species' spore-producing cells follow a model of sympodial growth: conidia grow through successive apices which end in a terminal prospore. Growth can be affected by static magnetic fields, depending on field flux density; under these conditions, the number of conidia can increase by a minimum of 68 percent.
Curvularia inaequalis is a filamentous fungus, with 3 to 12 densely packed filaments. The species is mostly brown in appearance, with pale brown end cells. The conidia themselves consist of 3–5 cells with thick cell walls and a larger central cell. The conidia are 10 to 30 micrometers in diameter and show a slight leading curvature. Overall, the species is described as looking "cottony", with clear branching cells.
The species can be difficult to identify due to its similar appearance to congeners such as C. geniculata. Instead, sequencing of the nuclear rRNA internal transcribed spacer (ITS) regions can be performed to achieve accurate identification.
Physiology
The optimal growth temperature for the species is 30°C. It is able to produce a multitude of chemical products with enzymatic properties. One enzyme produced is chloroperoxidase, which can catalyze halogenation reactions. The chloroperoxidase secreted by C. inaequalis contains a vanadium active site, and the presence of the vanadium substrate vanadate is essential for the enzyme's function. Glucose, however, acts as an inhibitor of both enzyme function and production. In its active form, the enzyme is able to produce hypochlorous acid, a strong oxidizing agent. It has been theorized that C. inaequalis utilizes chloroperoxidase and hypochlorous acid in combination to penetrate the host's cell wall.
Other significant compounds produced include β-galactosidase, which the species can produce in large amounts and which can hydrolyze lactose in acid whey. C. inaequalis also contains the compounds 4-hydroxyradianthin and Curvularone A, which have been identified as potential anti-tumor agents.
Pathology and toxicology
Plant pathology and toxicology
Curvularia inaequalis is known to cause leaf spot, also known as leaf blight. Symptoms of infection by C. inaequalis include a combination of oval-shaped dark brown patches and leaf tip dieback. The infection slowly spreads, causing necrosis, until it has covered the entirety of the leaf. It results in the thinning of grass vegetation such as Zoysia, bent, Bermuda and buffalo grasses. Blighting is believed to be caused by two C. inaequalis mycotoxins, pyrenocine A and pyrenocine B. Pyrenocine A is the more potent of the two, stunting growth and causing necrosis in vegetation. Both cause leaf tip dieback in turf grass and leakage of electrolytes from the leaves of Bermuda grass.
Human pathology
Curvularia inaequalis is typically a rare human pathogen. There are, however, recorded medical cases that mention infection by the species. One such case is of eosinophilic fungal rhinosinusitis in an immunocompromised male. Endoscopic sinus surgery was required to remove a large polyposis, and C. inaequalis was found to have grown favorably in the eosinophilic mucus. Oral itraconazole and corticosteroids were successfully administered to prevent reinfection. Another case of C. inaequalis causing disease involved peritonitis in an elderly patient.
It is suggested that contraction of the fungus occurs through contact with soils. Furthermore, a recorded case of aerosolized C. inaequalis in one Canadian home supports airborne movement of spores as an important mode of transfer. While many cases of infection have been attributed to soil contact with members of the genus Curvularia, a connection with this specific species has not yet been confirmed. Further studies are required to determine its potential as a human pathogen.
References
Pleosporaceae
Fungi described in 1907
Fungus species | Curvularia inaequalis | Biology | 1,145 |
24,973,826 | https://en.wikipedia.org/wiki/Human%20genetic%20resistance%20to%20malaria | Human genetic resistance to malaria refers to inherited changes in the DNA of humans which increase resistance to malaria and result in increased survival of individuals with those genetic changes. The existence of these genotypes is likely due to evolutionary pressure exerted by parasites of the genus Plasmodium which cause malaria. Since malaria infects red blood cells, these genetic changes are most common alterations to molecules essential for red blood cell function (and therefore parasite survival), such as hemoglobin or other cellular proteins or enzymes of red blood cells. These alterations generally protect red blood cells from invasion by Plasmodium parasites or replication of parasites within the red blood cell.
These inherited changes to hemoglobin or other characteristic proteins, which are critical and rather invariant features of mammalian biochemistry, usually cause some kind of inherited disease. Therefore, they are commonly referred to by the names of the blood disorders associated with them, including sickle-cell disease, thalassemia, glucose-6-phosphate dehydrogenase deficiency, and others. These blood disorders cause increased morbidity and mortality in areas of the world where malaria is less prevalent.
Development of genetic resistance to malaria
Microscopic parasites, like viruses, protozoans that cause malaria, and others, cannot replicate on their own and rely on a host to continue their life cycles. They replicate by invading the hosts' cells and usurping the cellular machinery to replicate themselves. Eventually, unchecked replication causes the cells to burst, killing the cells and releasing the infectious organisms into the bloodstream where they can infect other cells. As cells die and toxic products of invasive organism replication accumulate, disease symptoms appear. Because this process involves specific proteins produced by the infectious organism as well as the host cell, even a very small change in a critical protein may render infection difficult or impossible. Such changes might arise by a process of mutation in the gene that codes for the protein. If the change is in the gamete, that is, the sperm or egg that join to form a zygote that grows into a human being, the protective mutation will be inherited. Since lethal diseases kill many persons who lack protective mutations, in time, many persons in regions where lethal diseases are endemic come to inherit protective mutations.
When the P. falciparum parasite infects a host cell, it alters the characteristics of the red blood cell membrane, making it "stickier" to other cells. Clusters of parasitized red blood cells can exceed the size of the capillary circulation, adhere to the endothelium, and block circulation. When these blockages form in the blood vessels surrounding the brain, they cause cerebral hypoxia, resulting in neurological symptoms known as cerebral malaria. This condition is characterized by confusion, disorientation, and often terminal coma. It accounts for 80% of malaria deaths. Therefore, mutations that protect against malaria infection and lethality pose a significant advantage.
Malaria has placed the strongest known selective pressure on the human genome since the origin of agriculture within the past 10,000 years. Plasmodium falciparum was probably not able to gain a foothold among African populations until larger sedentary communities emerged in association with the evolution of domestic agriculture in Africa (the agricultural revolution). Several inherited variants in red blood cells have become common in parts of the world where malaria is frequent as a result of selection exerted by this parasite. This selection was historically important as the first documented example of disease as an agent of natural selection in humans. It was also the first example of genetically controlled innate immunity that operates early in the course of infections, preceding adaptive immunity which exerts effects after several days. In malaria, as in other diseases, innate immunity leads into, and stimulates, adaptive immunity.
Mutations may have detrimental as well as beneficial effects, and any single mutation may have both. Infectiousness of malaria depends on specific proteins present in the cell membranes and elsewhere in red blood cells. Protective mutations alter these proteins in ways that make them inaccessible to malaria organisms. However, these changes also alter the functioning and form of red blood cells, changes that may be apparent either overtly or by microscopic examination. These changes may impair the function of red blood cells in various ways that have a detrimental effect on the health or longevity of the individual. However, if the net effect of protection against malaria outweighs the other detrimental effects, the protective mutation will tend to be retained and propagated from generation to generation.
These alterations which protect against malarial infections but impair red blood cells are generally considered blood disorders since they tend to have overt and detrimental effects. Their protective function has only in recent times, been discovered and acknowledged. Some of these disorders are known by fanciful and cryptic names like sickle-cell anemia, thalassaemia, glucose-6-phosphate dehydrogenase deficiency, ovalocytosis, elliptocytosis and loss of the Gerbich antigen and the Duffy antigen. These names refer to various proteins, enzymes, and the shape or function of red blood cells.
Innate resistance
The potent effect of genetically controlled innate resistance is reflected in the probability of survival of young children in areas where malaria is endemic. It is necessary to study innate immunity in the susceptible age group (younger than four years) because, in older children and adults, the effects of innate immunity are overshadowed by those of adaptive immunity. It is also necessary to study populations in which random use of antimalarial drugs does not occur. Some early contributions on innate resistance to infections of vertebrates, including humans, are summarized in Table 1.
It is remarkable that two of the pioneering studies were on malaria. The classical studies on the Toll receptor in Drosophila fruit fly were rapidly extended to Toll-like receptors in mammals and then to other pattern recognition receptors, which play important roles in innate immunity. However, the early contributions on malaria remain as classical examples of innate resistance, which have stood the test of time.
Mechanisms of protection
The mechanisms by which erythrocytes containing abnormal hemoglobins, or which are G6PD deficient, are partially protected against P. falciparum infections are not fully understood, although there has been no shortage of suggestions. During the peripheral blood stage of replication malaria parasites have a high rate of oxygen consumption and ingest large amounts of hemoglobin. It is likely that HbS in endocytic vesicles is deoxygenated, polymerizes and is poorly digested. In red cells containing abnormal hemoglobins, or which are G6PD deficient, oxygen radicals are produced, and malaria parasites induce additional oxidative stress. This can result in changes in red cell membranes, including translocation of phosphatidylserine to their surface, followed by macrophage recognition and ingestion. The authors suggest that this mechanism is likely to occur earlier in abnormal than in normal red cells, thereby restricting multiplication in the former. In addition, binding of parasitized sickle cells to endothelial cells is significantly decreased because of an altered display of P. falciparum erythrocyte membrane protein-1 (PfEMP-1). This protein is the parasite's main cytoadherence ligand and virulence factor on the cell surface. During the late stages of parasite replication red cells are adherent to venous endothelium, and inhibiting this attachment could suppress replication.
Sickle hemoglobin induces the expression of heme oxygenase-1 in hematopoietic cells. Carbon monoxide, a byproduct of heme catabolism by heme oxygenase-1(HO-1), prevents an accumulation of circulating free heme after Plasmodium infection, suppressing the pathogenesis of experimental cerebral malaria. Other mechanisms, such as enhanced tolerance to disease mediated by HO-1 and reduced parasitic growth due to translocation of host micro-RNA into the parasite, have been described.
Types of innate resistance
The first line of defense against malaria is mainly exerted by abnormal hemoglobins and glucose-6-phosphate dehydrogenase deficiency. The three major types of inherited genetic resistance – sickle cell disease, thalassemias, and G6PD deficiency – were present in the Mediterranean world by the time of the Roman Empire.
Hemoglobin abnormalities
Distribution of abnormal hemoglobins
Malaria does not occur in the cooler, drier climates of the highlands in the tropical and subtropical regions of the world.
Tens of thousands of individuals have been studied, and high frequencies of abnormal hemoglobins have not been found in any population that was malaria-free. The frequencies of abnormal hemoglobins in different populations vary greatly, but some are undoubtedly polymorphic, having frequencies higher than expected by recurrent mutation. There is no longer doubt that malarial selection played a major role in the distribution of all these polymorphisms, all of which occur in malarious areas.
Sickle cell – The gene for HbS associated with sickle-cell is today distributed widely throughout sub-Saharan Africa, the Middle East, and parts of the Indian subcontinent, where carrier frequencies range from 5–40% or more of the population. Frequencies of sickle-cell heterozygotes were 20–40% in malarious areas of Kenya, Uganda, and Tanzania. Later studies by many investigators filled in the picture. High frequencies of the HbS gene are confined to a broad belt across Central Africa, but excluding most of Ethiopia and the East African highlands; this corresponds closely to areas of malaria transmission. Sickle-cell heterozygote frequencies up to 20% also occur in pockets of India and Greece that were formerly highly malarious.
The thalassemias have a high incidence in a broad band extending from the Mediterranean basin and parts of Africa, throughout the Middle East, the Indian subcontinent, Southeast Asia, Melanesia, and into the Pacific Islands.
α-thalassemia, which attains frequencies of 30% in parts of West Africa;
β-thalassemia, with frequencies up to 10% in parts of Italy;
HbE, which attains frequencies up to 55% in Thailand and other Southeast Asian countries; HbE is found in the eastern half of the Indian subcontinent and throughout Southeast Asia, where, in some areas, carrier rates may exceed 60% of the population.
HbC, which attains frequencies approaching 20% in northern Ghana and Burkina-Faso. HbC is restricted to parts of West and North Africa.
concurrent polymorphisms – double heterozygotes for HbS and β-thalassemia, and for HbS and HbC, suffer from variant forms of sickle-cell disease, milder than SS but likely to reduce fitness before modern treatment was available. As predicted, these variant alleles tend to be mutually exclusive in populations. There is a negative correlation between frequencies of HbS and β-thalassemia in different parts of Greece and of HbS and HbC in West Africa. Where there is no adverse interaction of mutations, as in the case of abnormal hemoglobins and G6PD deficiency, a positive correlation of these variant alleles in populations would be expected and is found.
Sickle-cell
Sickle-cell disease was the first genetic disorder to be linked to a mutation of a specific protein. Pauling introduced his fundamentally important concept of sickle cell anemia as a genetically transmitted molecular disease.
The molecular basis of sickle cell anemia was finally elucidated in 1959 when Ingram perfected the techniques of tryptic peptide fingerprinting. In the mid-1950s, one of the newest and most reliable ways of separating peptides and amino acids was by means of the enzyme trypsin, which split polypeptide chains by specifically degrading the chemical bonds formed by the carboxyl groups of two amino acids, lysine and arginine. Small differences between hemoglobin A and S result in small changes in one or more of these peptides. To try to detect these small differences, Ingram combined paper electrophoresis and paper chromatography. By this combination he created a two-dimensional method that enabled him to comparatively "fingerprint" the hemoglobin S and A fragments he obtained from the trypsin digest. The fingerprints revealed approximately 30 peptide spots; there was one peptide spot clearly visible in the digest of hemoglobin S which was not obvious in the hemoglobin A fingerprint. The HbS gene defect is a mutation of a single nucleotide (A to T) of the β-globin gene, replacing the amino acid glutamic acid with the less polar amino acid valine at the sixth position of the β chain.
HbS has a lower negative charge at physiological pH than does normal adult hemoglobin. The consequences of this simple replacement of a charged amino acid with a hydrophobic, neutral amino acid are far-ranging. Recent studies in West Africa suggest that the greatest impact of HbS seems to be to protect against either death or severe disease (that is, profound anemia or cerebral malaria), while having less effect on infection per se. Children who are heterozygous for the sickle cell gene have only one-tenth the risk of death from falciparum malaria as do those who are homozygous for the normal hemoglobin gene. Binding of parasitized sickle erythrocytes to endothelial cells and blood monocytes is significantly reduced due to an altered display of Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP-1), the parasite's major cytoadherence ligand and virulence factor on the erythrocyte surface.
Protection also derives from the instability of sickle hemoglobin, which clusters the predominant integral red cell membrane protein (called band 3) and triggers accelerated removal by phagocytic cells. Natural antibodies recognize these clusters on senescent erythrocytes. Protection by HbAS involves the enhancement of not only innate but also of acquired immunity to the parasite. Prematurely denatured sickle hemoglobin results in an upregulation of natural antibodies which control erythrocyte adhesion in both malaria and sickle cell disease. Targeting the stimuli that lead to endothelial activation will constitute a promising therapeutic strategy to inhibit sickle red cell adhesion and vaso-occlusion.
This has led to the hypothesis that while homozygotes for the sickle cell gene suffer from disease, heterozygotes might be protected against malaria. Malaria remains a selective factor for the sickle cell trait.
Thalassemias
It has long been known that a kind of anemia, termed thalassemia, has a high frequency in some Mediterranean populations, including Greeks and southern Italians. The name is derived from the Greek words for sea (thalassa), meaning the Mediterranean Sea, and blood (haima). Vernon Ingram deserves the credit for explaining the genetic basis of different forms of thalassemia as an imbalance in the synthesis of the two polypeptide chains of hemoglobin.
In the common Mediterranean variant, mutations decrease production of the β-chain (β-thalassemia). In α-thalassemia, which is relatively frequent in Africa and several other countries, production of the α-chain of hemoglobin is impaired, and there is relative over-production of the β-chain. Individuals homozygous for β-thalassemia have severe anemia and are unlikely to survive and reproduce, so selection against the gene is strong. Those homozygous for α-thalassemia also suffer from anemia and there is some degree of selection against the gene.
The lower Himalayan foothills and Inner Terai or Doon Valleys of Nepal and India are highly malarial due to a warm climate and marshes sustained during the dry season by groundwater percolating down from the higher hills. Malarial forests were intentionally maintained by the rulers of Nepal as a defensive measure. Humans attempting to live in this zone suffered much higher mortality than at higher elevations or below on the drier Gangetic Plain. However, the Tharu people had lived in this zone long enough to evolve resistance via multiple genes. Medical studies among the Tharu and non-Tharu populations of the Terai yielded evidence that the prevalence of residual malaria is nearly seven times lower among Tharus. The basis for this resistance has been established to be homozygosity for the α-thalassemia gene within the local population. Endogamy along caste and ethnic lines appears to have prevented these genes from becoming more widespread in neighboring populations.
HbC and HbE erythroids
There is evidence that persons with α-thalassemia, HbC and HbE have some degree of protection against the parasite.
Hemoglobin C (HbC) is an abnormal hemoglobin with substitution of a lysine residue for the glutamic acid residue of the β-globin chain, at exactly the same β-6 position as the HbS mutation. The "C" designation for HbC is from the name of the city where it was discovered, Christchurch, New Zealand. People who have this disease, particularly children, may have episodes of abdominal and joint pain, an enlarged spleen, and mild jaundice, but they do not have severe crises, as occur in sickle cell disease. Hemoglobin C is common in malarious areas of West Africa, especially in Burkina Faso. In a large case–control study performed in Burkina Faso on 4,348 Mossi subjects, HbC was associated with a 29% reduction in the risk of clinical malaria in HbAC heterozygotes and a 93% reduction in HbCC homozygotes. HbC represents a 'slow but gratis' genetic adaptation to malaria through a transient polymorphism, compared to the polycentric 'quick but costly' adaptation through balanced polymorphism of HbS.
HbC modifies the quantity and distribution of the variant antigen P. falciparum erythrocyte membrane protein 1 (PfEMP1) on the infected red blood cell surface and the modified display of malaria surface proteins reduces parasite adhesiveness (thereby avoiding clearance by the spleen) and can reduce the risk of severe disease.
Hemoglobin E is due to a single point mutation in the gene for the beta chain, with a glutamate-to-lysine substitution at position 26. It is one of the most prevalent hemoglobinopathies, with 30 million people affected. Hemoglobin E is very common in parts of Southeast Asia. HbE erythrocytes have an unidentified membrane abnormality that renders the majority of the RBC population relatively resistant to invasion by P. falciparum.
Other erythrocyte mutations
Other genetic mutations besides hemoglobin abnormalities that confer resistance to Plasmodia infection involve alterations of the cellular surface antigenic proteins, cell membrane structural proteins, or enzymes involved in glycolysis.
Glucose-6-phosphate dehydrogenase deficiency
Glucose-6-phosphate dehydrogenase (G6PD) is an important enzyme in red cells, metabolizing glucose through the pentose phosphate pathway, an anabolic alternative to catabolic oxidation (glycolysis), while maintaining a reducing environment. G6PD is present in all human cells but is particularly important to red blood cells. Since mature red blood cells lack nuclei and cytoplasmic RNA, they cannot synthesize new enzyme molecules to replace genetically abnormal or ageing ones. All proteins, including enzymes, have to last for the entire lifetime of the red blood cell, which is normally 120 days.
In 1956 Alving and colleagues showed that in some African Americans the antimalarial drug primaquine induces hemolytic anemia, and that those individuals have an inherited deficiency of G6PD in erythrocytes. G6PD deficiency is sex-linked, and common in Mediterranean, African and other populations. In Mediterranean countries such individuals can develop a hemolytic diathesis (favism) after consuming fava beans. G6PD deficient persons are also sensitive to several drugs in addition to primaquine.
G6PD deficiency is the second most common enzyme deficiency in humans (after ALDH2 deficiency), estimated to affect some 400 million people. There are many mutations at this locus, two of which attain frequencies of 20% or greater in African and Mediterranean populations; these are termed the A- and Med mutations. Mutant varieties of G6PD can be more unstable than the naturally occurring enzyme, so that their activity declines more rapidly as red cells age.
The question of whether G6PD deficiency protects against malaria has been studied in isolated populations where antimalarial drugs were not used, in Tanzania, East Africa, and in the Republic of the Gambia, West Africa, following children during the period when they are most susceptible to falciparum malaria. In both cases parasite counts were significantly lower in G6PD-deficient persons than in those with normal red cell enzymes. The association has also been studied in individuals, which is possible because the enzyme deficiency is sex-linked and female heterozygotes are mosaics due to lyonization, where random inactivation of an X-chromosome in certain cells creates a population of G6PD-deficient red blood cells coexisting with normal red blood cells. Malaria parasites were significantly more often observed in normal red cells than in enzyme-deficient cells. An evolutionary genetic analysis of malarial selection of G6PD deficiency genes has been published by Tishkoff and Verelli. The enzyme deficiency is common in many countries that are, or were formerly, malarious, but not elsewhere.
PK deficiency
Pyruvate kinase (PK) deficiency, also called erythrocyte pyruvate kinase deficiency, is an inherited metabolic disorder of the enzyme pyruvate kinase. In this condition, a lack of pyruvate kinase slows down the process of glycolysis. This effect is especially devastating in cells that lack mitochondria because these cells must use anaerobic glycolysis as their sole source of energy because the TCA cycle is not available. One example is red blood cells, which in a state of pyruvate kinase deficiency rapidly become deficient in ATP and can undergo hemolysis. Therefore, pyruvate kinase deficiency can cause hemolytic anemia.
There is a significant correlation between severity of PK deficiency and extent of protection against malaria.
Elliptocytosis
Elliptocytosis is a blood disorder in which an abnormally large number of the patient's erythrocytes are elliptical. There is much genetic variability amongst those affected. There are three major forms of hereditary elliptocytosis: common hereditary elliptocytosis, spherocytic elliptocytosis and southeast Asian ovalocytosis.
Southeast Asian ovalocytosis
Ovalocytosis is a subtype of elliptocytosis, and is an inherited condition in which erythrocytes have an oval instead of a round shape. In most populations ovalocytosis is rare, but South-East Asian ovalocytosis (SAO) occurs in as many as 15% of the indigenous people of Malaysia and of Papua New Guinea. Several abnormalities of SAO erythrocytes have been reported, including increased red cell rigidity and reduced expression of some red cell antigens.
SAO is caused by a mutation in the gene encoding the erythrocyte band 3 protein. There is a deletion of codons 400–408 in the gene, leading to a deletion of 9 amino-acids at the boundary between the cytoplasmic and transmembrane domains of band 3 protein. Band 3 serves as the principal binding site for the membrane skeleton, a submembrane protein network composed of ankyrin, spectrin, actin, and band 4.1. Ovalocyte band 3 binds more tightly than normal band 3 to ankyrin, which connects the membrane skeleton to the band 3 anion transporter. These qualitative defects create a red blood cell membrane that is less tolerant of shear stress and more susceptible to permanent deformation.
SAO is associated with protection against cerebral malaria in children because it reduces sequestration of erythrocytes parasitized by P. falciparum in the brain microvasculature. Adhesion of P. falciparum-infected red blood cells to CD36 is enhanced by the cerebral malaria-protective SAO trait. Higher efficiency of sequestration via CD36 in SAO individuals could determine a different organ distribution of sequestered infected red blood cells. This provides a possible explanation for the selective advantage conferred by SAO against cerebral malaria.
Duffy antigen receptor negativity
Plasmodium vivax has a wide distribution in tropical countries, but is absent or rare in a large region in West and Central Africa, as recently confirmed by PCR species typing. This gap in distribution has been attributed to the lack of expression of the Duffy antigen receptor for chemokines (DARC) on the red cells of many sub-Saharan Africans. Duffy negative individuals are homozygous for a DARC allele, carrying a single nucleotide mutation (DARC 46 T → C), which impairs promoter activity by disrupting a binding site for the hGATA1 erythroid lineage transcription factor. In widely cited in vitro and in vivo studies, Miller et al. reported that the Duffy blood group is the receptor for P. vivax and that the absence of the Duffy blood group on red cells is the resistance factor to P. vivax in persons of African descent. This has become a well-known example of innate resistance to an infectious agent because of the absence of a receptor for the agent on target cells.
However, observations have accumulated showing that the original Miller report needs qualification. In human studies of P. vivax transmission, there is evidence for the transmission of P. vivax among Duffy-negative populations in Western Kenya, the Brazilian Amazon region, and Madagascar. The Malagasy people on Madagascar have an admixture of Duffy-positive and Duffy-negative people of diverse ethnic backgrounds. 72% of the island population were found to be Duffy-negative. P. vivax positivity was found in 8.8% of 476 asymptomatic Duffy-negative people, and clinical P. vivax malaria was found in 17 such persons. Genotyping indicated that multiple P. vivax strains were invading the red cells of Duffy-negative people. The authors suggest that among Malagasy populations there are enough Duffy-positive people to maintain mosquito transmission and liver infection. More recently, Duffy negative individuals infected with two different strains of P. vivax were found in Angola and Equatorial Guinea; further, P. vivax infections were found both in humans and mosquitoes, which means that active transmission is occurring. The frequency of such transmission is still unknown. Because of these several reports from different parts of the world it is clear that some variants of P. vivax are being transmitted to humans who are not expressing DARC on their red cells. The same phenomenon has been observed in New World monkeys. However, DARC still appears to be a major receptor for human transmission of P. vivax.
The distribution of Duffy negativity in Africa does not correlate precisely with that of P. vivax transmission. Frequencies of Duffy negativity are as high in East Africa (above 80%), where the parasite is transmitted, as they are in West Africa, where it is not. The potency of P. vivax as an agent of natural selection is unknown and may vary from location to location. DARC negativity remains a good example of innate resistance to an infection, but it produces a relative and not an absolute resistance to P. vivax transmission.
Gerbich antigen receptor negativity
The Gerbich antigen system is an integral membrane protein of the erythrocyte and plays a functionally important role in maintaining erythrocyte shape. It also acts as the receptor for the P. falciparum erythrocyte binding protein. There are four alleles of the gene which encodes the antigen, Ge-1 to Ge-4. Three types of Ge antigen negativity are known: Ge-1,-2,-3, Ge-2,-3 and Ge-2,+3. Persons with the relatively rare phenotype Ge-1,-2,-3 are less susceptible (~60% of the control rate) to invasion by P. falciparum. Such individuals have a subtype of a condition called hereditary elliptocytosis, characterized by oval or elliptical erythrocytes.
Other rare erythrocyte mutations
Rare mutations of glycophorin A and B proteins are also known to mediate resistance to P. falciparum.
Human leucocyte antigen polymorphisms
Human leucocyte antigen (HLA) polymorphisms common in West Africans but rare in other racial groups are associated with protection from severe malaria. This group of genes encodes cell-surface antigen-presenting proteins and has many other functions. In West Africa, they account for as great a reduction in disease incidence as the sickle-cell hemoglobin variant. The studies suggest that the unusual polymorphism of major histocompatibility complex genes has evolved primarily through natural selection by infectious pathogens.
Polymorphisms at the HLA loci, which encode proteins that participate in antigen presentation, influence the course of malaria. In West Africa an HLA class I antigen (HLA Bw53) and an HLA class II haplotype (DRB1*1302-DQB1*0501) are independently associated with protection against severe malaria. However, HLA correlations vary, depending on the genetic constitution of the polymorphic malaria parasite, which differs in different geographic locations.
Hereditary persistence of fetal hemoglobin
Some studies suggest that high levels of fetal hemoglobin (HbF) confer some protection against falciparum malaria in adults with hereditary persistence of fetal hemoglobin.
Validating the malaria hypothesis
Evolutionary biologist J.B.S. Haldane was the first to hypothesise a relationship between malaria and genetic disease. He first delivered his hypothesis at the Eighth International Congress of Genetics, held in 1948 in Stockholm, in a talk on "The Rate of Mutation of Human Genes". He formalised it in a technical paper published in 1949, in which he made a prophetic statement: "The corpuscles of the anaemic heterozygotes are smaller than normal, and more resistant to hypotonic solutions. It is at least conceivable that they are also more resistant to attacks by the sporozoa which cause malaria." This became known as 'Haldane's malaria hypothesis', or concisely, the 'malaria hypothesis'.
Detailed study of a cohort of 1022 Kenyan children living near Lake Victoria, published in 2002, confirmed this prediction. Many SS children still died before they attained one year of age. Between 2 and 16 months the mortality in AS children was found to be significantly lower than that in AA children. This well-controlled investigation shows the ongoing action of natural selection through disease in a human population.
Genome-wide association (GWA) analysis with fine-resolution association mapping is a powerful method for establishing the inheritance of resistance to infections and other diseases. Two independent preliminary GWA analyses of severe falciparum malaria in Africans have been carried out, one by the Malariagen Consortium in a Gambian population and the other by Rolf Horstmann (Bernhard Nocht Institute for Tropical Medicine, Hamburg) and his colleagues on a Ghanaian population. In both cases the only signal of association reaching genome-wide significance was with the HBB locus encoding the β-chain of hemoglobin, which is abnormal in HbS. This does not imply that HbS is the only gene conferring innate resistance to falciparum malaria; there could be many such genes exerting more modest effects that are challenging to detect by GWA because of the low levels of linkage disequilibrium in African populations. However, the same GWA association in two populations is powerful evidence that the single gene conferring strongest innate resistance to falciparum malaria is that encoding HbS.
Fitnesses of different genotypes
The fitnesses of different genotypes in an African region where there is intense malarial selection were estimated by Anthony Allison in 1954. In the Baamba population living in the Semliki Forest region in Western Uganda the sickle-cell heterozygote (AS) frequency is 40%, which means that the frequency of the sickle-cell gene is 0.255 and 6.5% of children born are SS homozygotes.
It is a reasonable assumption that until modern treatment was available three-quarters of the SS homozygotes failed to reproduce. To balance this loss of sickle-cell genes, a mutation rate of 1:10.2 per gene per generation would be necessary. This is about 1000 times greater than mutation rates measured in Drosophila and other organisms and much higher than recorded for the sickle-cell locus in Africans. To balance the polymorphism, Anthony Allison estimated that the fitness of the AS heterozygote would have to be 1.26 times that of the normal homozygote. Later analyses of survival figures have given similar results, with some differences from site to site. In Gambians, it was estimated that AS heterozygotes have 90% protection against P. falciparum-associated severe anemia and cerebral malaria, whereas in the Luo population of Kenya it was estimated that AS heterozygotes have 60% protection against severe malarial anemia. These differences reflect the intensity of transmission of P. falciparum malaria from locality to locality and season to season, so fitness calculations will also vary. In many African populations the AS frequency is about 20%, and a fitness superiority over those with normal hemoglobin of the order of 10% is sufficient to produce a stable polymorphism.
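The balance arithmetic in this paragraph can be checked with a few lines of Python. This is a back-of-envelope illustration of the standard Hardy-Weinberg heterozygote-advantage model, not Allison's exact calculation: the gene frequency (0.255) and the assumption that three-quarters of SS homozygotes fail to reproduce come from the text, and the computed figures land in the same range as, but not exactly on, the quoted 1:10.2 and 1.26.

```python
# Back-of-envelope check of the sickle-cell balance argument.
q = 0.255                    # frequency of the sickle-cell (S) allele (from text)
p = 1 - q                    # frequency of the normal (A) allele

print(f"AS heterozygotes per birth: {2*p*q:.1%}")   # ~38%, "about 40%"
print(f"SS homozygotes per birth:   {q*q:.1%}")     # ~6.5%

# Mutation rate needed to replace S alleles lost each generation if a
# fraction f of SS homozygotes fail to reproduce (no other selection).
for f in (0.75, 1.0):
    mu = f * q * q / p       # required A->S mutations per A allele
    print(f"f = {f}: required mutation rate ~ 1:{1/mu:.1f}")

# At equilibrium under heterozygote advantage, q = s_AA / (s_AA + s_SS).
s_SS = 0.75                              # selection against SS (assumed)
s_AA = q * s_SS / (1 - q)                # implied selection against AA
print(f"AS fitness relative to AA ~ {1/(1-s_AA):.2f}")  # ~1.35, same order as 1.26
```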
Glossary
actin, ankyrin, spectrin – proteins that are the major components of the cytoskeleton scaffolding within a cell's cytoplasm
aerobic – uses oxygen for the production of energy (contrast anaerobic)
allele – one of two or more alternative forms of a gene that arise by mutation
α-chain / β-chain (hemoglobin) – subcomponents of the hemoglobin molecule; two α-chains and two β-chains make up normal hemoglobin (HbA)
alveolar – pertaining to the alveoli, the tiny air sacs in the lungs
amino acid – any of twenty organic compounds that are subunits of protein in the human body
anabolic – of or relating to the synthesis of complex molecules in living organisms from simpler ones, together with the storage of energy; constructive metabolism (contrast catabolic)
anaerobic – refers to a process or reaction which does not require oxygen, but produces energy by other means (contrast aerobic)
anion transporter (organic) – molecules that play an essential role in the distribution and excretion of numerous endogenous metabolic products and exogenous organic anions
antigen – any substance (as an immunogen or a hapten) foreign to the body that evokes an immune response either alone or after forming a complex with a larger molecule (as a protein) and that is capable of binding with a component (as an antibody or T cell) of the immune system
ATP – (Adenosine triphosphate) – an organic molecule containing high energy phosphate bonds used to transport energy within a cell
catabolic – of or relating to the breakdown of complex molecules in living organisms to form simpler ones, together with the release of energy; destructive metabolism (contrast anabolic)
chemokine – a family of small cytokines, or signaling proteins, secreted by cells
codon – a sequence of three nucleotides which specify which amino acid will be added next during protein synthesis
corpuscle – obsolete name for red blood cell
cytoadherence – the adherence of infected red blood cells to blood vessel walls and to uninfected red blood cells
cytoplasm – clear jelly-like substance, mostly water, inside a cell
diathesis – a tendency to suffer from a particular medical condition
DNA – deoxyribonucleic acid, the hereditary material of the genome
Drosophila – a kind of fruit fly used for genetic experimentation because of ease of reproduction and manipulation of its genome
endocytic – relating to endocytosis, the transport of solid matter or liquid into a cell by means of a coated vacuole or vesicle
endogamy – the custom of marrying only within the limits of a local community, clan, or tribe
endothelial – of or referring to the thin inner surface of blood vessels
enzyme – a protein that promotes a cellular process, much like a catalyst in an ordinary chemical reaction
epidemiology – the study of the spread of disease within a population
erythrocyte – red blood cell, which with the leucocytes make up the cellular content of the blood (contrast leucocyte)
erythroid – of or referring to erythrocytes, red blood cells
fitness (genetic) – loosely, reproductive success that tends to propagate a trait or traits (see natural selection)
genome – (abstractly) all the inheritable traits of an organism; represented by its chromosomes
genotype – the genetic makeup of a cell, an organism, or an individual usually with reference to a specific trait
glycolysis – the breakdown of glucose by enzymes, releasing energy
glycophorin – transmembrane proteins of red blood cells
haplotype – a set of DNA variations, or polymorphisms, that tend to be inherited together.
Hb (HbC, HbE, HbS, etc.) – hemoglobin (hemoglobin polymorphisms: hemoglobin type C, hemoglobin type E, hemoglobin type S)
hematopoietic (stem cell) – the blood stem cells that give rise to all other blood cells
heme oxygenase-1 (HO-1) – an enzyme that breaks down heme, the iron-containing non-protein part of hemoglobin
hemoglobin – iron based organic molecule in red blood cells that transports oxygen and gives blood its red color
hemolysis – the rupturing of red blood cells and the release of their contents (cytoplasm) into surrounding fluid (e.g., blood plasma)
heterozygous – possessing two different alleles of a gene for a particular trait (in this context, one normal and one variant copy)
homozygous – possessing two identical copies of a gene for a particular trait, one from each parent
hypotonic – denotes a solution of lower osmotic pressure than another solution with which it is in contact, so that water migrates across a separating membrane from the hypotonic toward the hypertonic solution until the pressures are equalized
in vitro – in a test tube or other laboratory vessel; usually used in regard to a testing protocol
in vivo – in a live human (or animal); usually used in regard to a testing protocol
leucocyte – white blood cell, part of the immune system, which together with red blood cells, comprise the cellular component of the blood (contrast erythrocyte)
ligand – an extracellular signal molecule, which when it binds to a cellular receptor, causes a response by the cell
locus (gene or chromosome) – the specific location of a gene or DNA sequence or position on a chromosome
macrophage – a large white blood cell, part of the immune system that ingests foreign particles and infectious microorganisms
major histocompatibility complex (MHC) – proteins found on the surfaces of cells that help the immune system recognize foreign substances; also called the human leucocyte antigen (HLA) system
micro-RNA – a cellular RNA fragment that prevents the production of a particular protein by binding to and destroying the messenger RNA that would have produced the protein.
microvasculature – very small blood vessels
mitochondria – energy producing organelles of a cell
mutation – a spontaneous change to a gene, arising from an error in replication of DNA; usually mutations are referred to in the context of inherited mutations, i.e. changes to the gametes
natural selection – the gradual process by which biological traits become either more or less common in a population as a function of the effect of inherited traits on the differential reproductive success of organisms interacting with their environment (closely related to fitness)
nucleotide – organic molecules that are subunits of nucleic acids like DNA and RNA
nucleic acid – a complex organic molecule present in living cells, esp. DNA or RNA, which consist of many nucleotides linked in a long chain.
oxygen radical – a highly reactive ion containing oxygen, capable of damaging microorganisms and normal tissues.
pathogenesis – the manner of development of a disease
PCR – Polymerase Chain Reaction, an enzymatic reaction by which DNA is replicated in a test tube for subsequent testing or analysis
phenotype – the composite of an organism's observable characteristics or traits, such as its morphology
Plasmodium – the genus of protozoan microorganisms that includes the species that cause malaria, though only a few species of the genus do
polymerize – to combine replicated subunits into a longer molecule (usually referring to synthetic materials, but also organic molecules)
polymorphism – the occurrence of something in several different forms, as for example hemoglobin (HbA, HbC, etc.)
polypeptide – a chain of amino acids forming part of a protein molecule
receptor (cellular surface) – specialized integral membrane proteins that take part in communication between the cell and the outside world; receptors are responsive to specific ligands that attach to them.
reducing environment (cellular) – an environment in which oxidation is prevented by removal of oxygen and other oxidising gases or vapours, and which may contain actively reducing gases such as hydrogen, carbon monoxide, and gases that would oxidize in the presence of oxygen, such as hydrogen sulfide
RNA – ribonucleic acid, a nucleic acid present in all living cells. Its principal role is to act as a messenger carrying instructions from DNA for controlling the synthesis of proteins
sequestration (biology) – process by which an organism accumulates a compound or tissue (as red blood cells) from the environment
sex-linked – a trait associated with a gene carried on a sex chromosome (contrast with autosomal)
Sporozoa – a large class of strictly parasitic nonmotile protozoans, including Plasmodia which cause malaria
TCA cycle – TriCarboxylic Acid cycle is a series of enzyme-catalyzed chemical reactions that form a key part of aerobic respiration in cells
translocation (cellular biology) – movement of molecules from outside to inside (or vice versa) of a cell
transmembrane – existing or occurring across a cell membrane
venous – of or referring to the veins
vesicle – a small organelle within a cell, consisting of fluid enclosed by a fatty membrane
virulence factors – enable an infectious agent to replicate and disseminate within a host in part by subverting or eluding host defenses.
See also
Adaptive immunity
Malaria vaccine
Notes
References
Further reading
Dronamraju KR, Arese P (2006) Malaria: Genetic and Evolutionary Aspects, Springer, Berlin.
Faye FBK (2009) Malaria Resistance or Susceptibility in Red Cells Disorders, Nova Science Publishers Inc, New York.
External links
Favism
Hemoglobinopathies
Malaria and the Red Cell
Malaria
Human population genetics
Evolutionary biology
Cell biology | Human genetic resistance to malaria | Biology | 9,150 |
8,426,398 | https://en.wikipedia.org/wiki/List%20of%20software%20for%20the%20TRS-80 | The TRS-80 series of computers were sold via Radio Shack & Tandy dealers in North America and Europe in the early 1980s. Much software was developed for these computers, particularly the relatively successful Color Computer I, II & III models, which were designed for both home office and entertainment (gaming) uses.
A list of software for the TRS-80 computer series appears below. This list includes software that was sold labelled as a Radio Shack or Tandy product.
Note: this list is by no means complete, especially with regard to the earlier non-Color Computer models.
Model I
Model II
VideoTex
Color Computer
TRS-80 Color Computer
Color Computer 1 & 2
Color Computer 3
Model III
Many of these titles also ran on the Model I, as the Model III was designed to be backward-compatible with the Model I.
Model 16 & 16B
Model 4, 4D & 4P
Model 12
MC-10
Model 100 & 102
TRS-80 software | List of software for the TRS-80 | Technology | 198 |
74,740,823 | https://en.wikipedia.org/wiki/Nevomo | Nevomo (known as Hyper Poland until 2020) is a Polish transportation start-up founded in 2017. The company proposes a Maglev-based transportation system which can be retrofitted to existing railway tracks, and future work on a Hyperloop system.
History
Nevomo was founded in April 2017 under its original name Hyper Poland as a spin-off of a team of university students of Warsaw University of Technology. The student team had successfully participated in the Hyperloop Pod Competition II organized by SpaceX in California. By the end of 2018, the company had filed eight patent applications. In October 2019, the company unveiled its first 1:5 scale prototype of the track and MagRail vehicle. In 2020, the company began test runs on a scaled-down track. In the same year, the company rebranded from Hyper Poland to Nevomo.
In the first quarter of 2022, Nevomo completed the construction of Europe's longest test track for passive magnetic levitation. The 700 meter-long railway track in Subcarpathian Voivodeship in Poland allows vehicles utilizing the company's MagRail technology to travel at speeds of up to 160 km/h. The installation of all necessary wayside equipment was completed in December 2022 and tests began in spring 2023. The first levitation tests were planned for 2023.
Technology
Nevomo is developing a proprietary transport system similar to Maglev, which can be retrofitted onto existing rail infrastructure. The company's core technological focus areas are the development of a new type of linear motor, the levitation and guidance systems, the power electronics and position control systems, and monitoring systems. The company anticipates that a railway track will first be upgraded with the company's MagRail technology, then in a later stage enclosed to reduce drag, before finally becoming a full-fledged Hyperloop with a vacuum tube. As the later stages are expected to demand many more years of development before becoming technically and commercially viable, Nevomo is currently focusing on a "MagRail Booster" system intended to magnetically propel existing retrofitted rolling stock, and a full "Levitating MagRail" system which implements Maglev by retrofitting existing train tracks.
Funding
The company has secured a grant of PLN 16.5 million from the National Center for Research and Development (NCBiR) and completed two rounds of equity crowdfunding campaigns on Seedrs with PLN 3.7 million. In 2020, the Hütter Private Equity fund from Gdynia, Poland joined the company's investor group.
In mid-2022 Nevomo received funding of €2.5 million from the European Innovation Council (EIC) accelerator program, to be expanded with a campaign component of up to €15 million from the EIC fund. In the same year, EIT InnoEnergy - one of the world's largest investors in sustainable energy innovation - also invested in the company.
Notes
External links
Official Website of Nevomo
Hyperloop
Maglev
Experimental and prototype high-speed trains | Nevomo | Technology,Engineering | 628 |
13,563,938 | https://en.wikipedia.org/wiki/Universal%20parabolic%20constant | The universal parabolic constant is a mathematical constant.
It is defined as the ratio, for any parabola, of the arc length of the parabolic segment formed by the latus rectum to the focal parameter. The focal parameter is twice the focal length. The ratio is denoted P.
In the diagram, the latus rectum is pictured in blue, the parabolic segment that it forms in red and the focal parameter in green. (The focus of the parabola is the point F and the directrix is the line L.)
The value of P is

$P = \sqrt{2} + \ln\left(1+\sqrt{2}\right) \approx 2.29558712\ldots$

The circle and parabola are unique among conic sections in that they have a universal constant. The analogous ratios for ellipses and hyperbolas depend on their eccentricities. This means that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not.
Derivation
Take $y = \frac{x^2}{4f}$ as the equation of the parabola, where $f$ is the focal length. The focal parameter is $p = 2f$ and the semilatus rectum is $\ell = 2f$. The latus rectum meets the parabola at $x = \pm 2f$, so the arc length of the parabolic segment it cuts off is $s = \int_{-2f}^{2f} \sqrt{1 + \left(\tfrac{x}{2f}\right)^2}\,dx = 2f\left(\sqrt{2} + \ln(1+\sqrt{2})\right)$. Dividing by the focal parameter $2f$ gives $P = \sqrt{2} + \ln(1+\sqrt{2})$.
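The closed form can be sanity-checked numerically. The sketch below integrates the arc-length integrand with a composite Simpson rule and compares the result, divided by the focal parameter, with the closed form; the focal length is set arbitrarily to 1 since it cancels.

```python
# Numerical check of the universal parabolic constant P.
from math import sqrt, log

f = 1.0                                    # arbitrary focal length (cancels out)
n = 10_000                                 # Simpson-rule subintervals (even)

def integrand(x):
    # Arc-length integrand for y = x^2 / (4f): sqrt(1 + (dy/dx)^2).
    return sqrt(1.0 + (x / (2.0 * f)) ** 2)

a, b = -2.0 * f, 2.0 * f                   # latus rectum spans x = -2f .. 2f
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
arc_length = s * h / 3.0

P_numeric = arc_length / (2.0 * f)         # divide by the focal parameter
P_closed  = sqrt(2) + log(1 + sqrt(2))     # closed form
print(P_numeric, P_closed)                 # both ~2.2955871...
```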
Properties
P is a transcendental number.
Proof. Suppose that P is algebraic. Then $P - \sqrt{2} = \ln(1+\sqrt{2})$ must also be algebraic. However, by the Lindemann–Weierstrass theorem, $e^{\ln(1+\sqrt{2})} = 1+\sqrt{2}$ would then be transcendental, which is not the case. Hence P is transcendental.
Since P is transcendental, it is also irrational.
Applications
The average distance from a point randomly selected in the unit square to its center is $\frac{P}{6}$.
Proof. By symmetry the average is eight times the integral over one octant of the square: $8\int_0^{\pi/4}\int_0^{1/(2\cos\theta)} r^2\,dr\,d\theta = \frac{1}{3}\int_0^{\pi/4}\sec^3\theta\,d\theta = \frac{1}{6}\left(\sqrt{2} + \ln(1+\sqrt{2})\right) = \frac{P}{6}$.
There is also an interesting geometrical reason why this constant appears in unit squares. The average distance between the center of a unit square and a point on the square's boundary is $\frac{P}{4}$.
If we uniformly sample every point on the perimeter of the square, take the line segments drawn from the center to each sampled point, and place them side by side after scaling them down, the curve traced out by their endpoints is a parabola.
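The $P/6$ value is easy to verify by simulation. The sketch below draws uniform random points in a unit square centered at the origin and compares the sample mean distance with $P/6$.

```python
# Monte Carlo check that the mean distance to the center is P/6.
import random
from math import hypot, sqrt, log

P = sqrt(2) + log(1 + sqrt(2))
n = 1_000_000
total = sum(hypot(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
            for _ in range(n))
print(total / n, P / 6)   # both ~0.38260
```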
References and footnotes
Mathematical constants
Conic sections
Parabolas
Real transcendental numbers | Universal parabolic constant | Mathematics | 396 |
38,146,835 | https://en.wikipedia.org/wiki/Contagium%20vivum%20fluidum | Contagium vivum fluidum (Latin: "contagious living fluid") was a phrase first used to describe a virus, and underlined its ability to slip through the finest ceramic filters then available, giving it almost liquid properties. Martinus Beijerinck (1851–1931), a Dutch microbiologist and botanist, first used the term when studying the tobacco mosaic virus, becoming convinced that the virus had a liquid nature.
The word "virus", from the Latin for "poison", was originally used to refer to any infectious agent, and gradually became used to refer to infectious particles. Bacteria could be seen under microscope, and cultured on agar plates. In 1890, Louis Pasteur declared "tout virus est un microbe": "all infectious diseases are caused by microbes".
In 1892, Dmitri Ivanovsky discovered that the cause of tobacco mosaic disease could pass through Chamberland's porcelain filter. Infected sap, passed through the filter, retained its infectious properties. Ivanovsky thought the disease was caused by an extremely small bacterium, too small to see under microscope, which secreted a toxin. It was this toxin, he thought, which passed through the filter. However, he was unable to culture the purported bacteria.
In 1898, Beijerinck independently found the cause of the disease could pass through porcelain filters. He disproved Ivanovsky's toxin theory by demonstrating infection in series. He found that although he could not culture the infectious agent, it would diffuse through an agar gel. This diffusion inspired him to put forward the idea of a non-cellular "contagious living fluid", which he called a "virus". This was somewhere between a molecule and a cell.
Ivanovsky, irked that Beijerinck had not cited him, demonstrated that particles of ink could also diffuse through agar gel, thus leaving the particulate or fluid nature of the pathogen unresolved. Beijerinck's critics including Ivanovsky argued that the idea of a "contagious living fluid" was a contradiction in terms. However, Beijerinck only used the phrase "contagium vivum fluidum" in the title of his paper, using the word "virus" throughout.
Other scientists began to identify other diseases caused by infectious agents which could pass through a porcelain filter. These became known as "filterable viruses", and later just "viruses". In 1923 Edmund Beecher Wilson wrote "We have now arrived at a borderland, where the cytologist and the colloidal chemist are almost within hailing distance of each other". In 1935 American biochemist and virologist Wendell Meredith Stanley was able to crystallize and isolate the tobacco mosaic virus. Stanley found the crystals were effectively living chemicals: they could be dissolved and would regain their infectious properties.
The tobacco mosaic virus was the first virus to be photographed with an electron microscope, in 1939. Over the second half of the twentieth century, more than 2,000 virus species infecting animals, plants and bacteria were discovered.
References
External links
A Contagium vivum fluidum as the Cause of the Mosaic Diseases of Tobacco Leaves – Martinus W. Beijerinck (1899)
Viruses
Martinus Beijerinck
Latin words and phrases
Biology in the Netherlands | Contagium vivum fluidum | Biology | 677 |
1,773,934 | https://en.wikipedia.org/wiki/Chandra%20Wickramasinghe | Nalin Chandra Wickramasinghe (born 20 January 1939) is a Sri Lankan-born British mathematician, astronomer and astrobiologist of Sinhalese ethnicity. His research interests include the interstellar medium, infrared astronomy, light scattering theory, applications of solid-state physics to astronomy, the early Solar System, comets, astrochemistry, the origin of life and astrobiology. A student and collaborator of Fred Hoyle, the pair worked jointly for over 40 years as influential proponents of panspermia. In 1974 they proposed the hypothesis that some dust in interstellar space was largely organic, later proven to be correct.
Wickramasinghe has advanced numerous fringe claims, including the argument that various outbreaks of illnesses on Earth are of extraterrestrial origins, including the 1918 flu pandemic and certain outbreaks of polio and mad cow disease. For the 1918 flu pandemic they hypothesised that cometary dust brought the virus to Earth simultaneously at multiple locations—a view almost universally dismissed by experts on this pandemic. Claims connecting terrestrial disease and extraterrestrial pathogens have been rejected by the scientific community.
Wickramasinghe has written more than 40 books about astrophysics and related topics; he has made appearances on radio, television and film, and he writes online blogs and articles. He has appeared on BBC Horizon, UK Channel 5 and the History Channel. He appeared on the 2013 Discovery Channel program "Red Rain". He has an association with Daisaku Ikeda, president of the Buddhist sect Soka Gakkai International, that led to the publication of a dialogue with him, first in Japanese and later in English, on the topic of Space and Eternal Life.
Education and career
Wickramasinghe studied at Royal College, Colombo, the University of Ceylon (where he graduated in 1960 with a BSc First Class Honours in mathematics), and at Trinity College and Jesus College, Cambridge, where he obtained his PhD and ScD degrees. Following his education, Wickramasinghe was a Fellow of Jesus College, Cambridge from 1963 to 1973, until he became professor of applied mathematics and astronomy at University College Cardiff. Wickramasinghe was a consultant and advisor to the President of Sri Lanka from 1982 to 1984, and played a key role in founding the Institute of Fundamental Studies in Sri Lanka.
After fifteen years at University College Cardiff, Wickramasinghe took an equivalent position in the University of Cardiff, a post he held from 1990 until 2006. After retirement in 2006, he incubated the Cardiff Center for Astrobiology as a special project reporting to the president of the university. In 2011 the project closed down, losing its funding in a series of UK educational cut backs. After this event, Wickramasinghe was offered the opportunity to move to the University of Buckingham as Director of the Buckingham Centre for Astrobiology, University of Buckingham where he has been since 2011. He maintains his part-time position as a UK Professor at Cardiff University. In 2015 he was elected Visiting scholar, Churchill College, Cambridge, England 2015/16.
He is a co-founder and board member of the Institute for the Study of Panspermia and Astroeconomics, set up in Japan in 2014, and the Editor-in-Chief of the Journal of Astrobiology & Outreach. He was a Visiting By-Fellow, Churchill College, Cambridge, England 2015/16; Professor and Director of the Buckingham Centre for Astrobiology at the University of Buckingham, a post he has held since 2011; Affiliated Visiting Professor, University of Peradeniya, Sri Lanka; and a board member and research director at the Institute for the Study of Panspermia and Astroeconomics, Ogaki-City, Gifu, Japan.
In 2017, Professor Chandra Wickramasinghe was appointed adjunct professor in the Department of Physics, at the University of Ruhuna, Matara, Sri Lanka.
Research
In 1960 he commenced work in Cambridge on his PhD degree under the supervision of Fred Hoyle, and published his first scientific paper "On Graphite Particles as Interstellar Grains" in Monthly Notices of the Royal Astronomical Society in 1962. He was awarded a PhD degree in mathematics in 1963 and was elected a Fellow of Jesus College Cambridge in the same year. In the following year he was appointed a Staff Member of the Institute of Astronomy, Cambridge. Here he continued to work on the nature of interstellar dust, publishing many papers in this field, that led to a consideration of carbon-containing grains as well as the older silicate models.
Wickramasinghe published the first definitive book on Interstellar Grains in 1967. He has made many contributions to this field, publishing over 350 papers in peer-reviewed journals, over 75 of which are in Nature. Hoyle and Wickramasinghe further proposed a radical kind of panspermia that included the claim that extraterrestrial life forms enter the Earth's atmosphere and were possibly responsible for epidemic outbreaks, new diseases, and genetic novelty that Hoyle and Wickramasinghe contended was necessary for macroevolution.
Chandra Wickramasinghe had the longest-running collaboration with Fred Hoyle. Their books and papers arguing for panspermia and a cosmic hypothesis of life are controversial and, in many particulars, contrary to the scientific consensus in both astrophysics and biology. Several claims made by Hoyle and Wickramasinghe between 1977 and 1981, such as a report of having detected interstellar cellulose, were criticised by one author as pseudoscience. Phil Plait has described Wickramasinghe as a "fringe scientist" who "jumps on everything, with little or no evidence, and says it's from outer space".
Organic molecules in space
In 1974 Wickramasinghe first proposed the hypothesis that some dust in interstellar space was largely organic, and followed this up with other research confirming the hypothesis. Wickramasinghe also proposed and confirmed the existence of polymeric compounds based on the molecule formaldehyde (H2CO). Fred Hoyle and Wickramasinghe later proposed the identification of bicyclic aromatic compounds from an analysis of the ultraviolet extinction absorption at 2175 Å, thus demonstrating the existence of polycyclic aromatic hydrocarbon molecules in space.
Hoyle–Wickramasinghe model of panspermia
Throughout his career, Wickramasinghe, along with his collaborator Fred Hoyle, has advanced the panspermia hypothesis, that proposes that life on Earth is, at least in part, of extraterrestrial origin. The Hoyle–Wickramasinghe model of panspermia include the assumptions that dormant viruses and desiccated DNA and RNA can survive unprotected in space; that small bodies such as asteroids and comets can protect the "seeds of life", including DNA and RNA, living, fossilized, or dormant life, cellular or non-cellular; and that the collisions of asteroids, comets, and moons have the potential to spread these "seeds of life" throughout an individual star system and then onward to others. The most contentious issue around the Hoyle–Wickramasinghe model of the panspermia hypothesis is the corollary of their first two propositions that viruses and bacteria continue to enter the Earth's atmosphere from space, and are hence responsible for many major epidemics throughout history.
Towards the end of their collaboration, Wickramasinghe and Hoyle hypothesised that abiogenesis occurred close to the Galactic Center before panspermia carried life throughout the Milky Way, and stated a belief that such a process could occur in many galaxies throughout the Universe.
Detection of living cells in the stratosphere
On 20 January 2001 the Indian Space Research Organisation (ISRO) conducted a balloon flight from Hyderabad, India to collect stratospheric dust from a height of 41 km (135,000 ft) with a view to testing for the presence of living cells. The collaborators on this project included a team of UK scientists led by Wickramasinghe. In a paper presented at a SPIE conference in San Diego in 2002 the detection of evidence for viable microorganisms at 41 km above the Earth's surface was presented. However, the experiment did not establish whether the organisms found were incoming microbes from space or microbes carried up to 41 km from the surface of the Earth.
In 2005 the ISRO group carried out a second stratospheric sampling experiment from 41 km altitude and reported the isolation of three new species of bacteria including one that they named Janibacter hoylei sp.nov. in honour of Fred Hoyle. However, these facts do not prove that bacteria on Earth originated in the cosmic environment. Samplings of the stratosphere have also been carried out by Yang et al. (2005, 2009). During the experiment strains of highly radiation-resistant Deinococcus bacterium were detected at heights up to 35 km. Nevertheless, these authors have abstained from linking these discoveries to panspermia.
Wickramasinghe was also involved in coordinating analyses of the red rain in Kerala in collaborations with Godfrey Louis.
Extraterrestrial pathogens
Hoyle and Wickramasinghe have advanced the argument that various outbreaks of illnesses on Earth are of extraterrestrial origins, including the 1918 flu pandemic and certain outbreaks of polio and mad cow disease. For the 1918 flu pandemic they hypothesised that cometary dust brought the virus to Earth simultaneously at multiple locations—a view almost universally dismissed by external experts on this pandemic.
On 24 May 2003 The Lancet published a letter from Wickramasinghe, jointly signed by Milton Wainwright and Jayant Narlikar, in which they hypothesised that the virus that causes severe acute respiratory syndrome (SARS) could be extraterrestrial in origin instead of originating from chickens. The Lancet subsequently published three responses to this letter, showing that the hypothesis was not evidence-based, and casting doubts on the quality of the experiments referenced by Wickramasinghe in his letter. Claims connecting terrestrial disease and extraterrestrial pathogens have been rejected by the scientific community.
In 2020, Wickramasinghe and colleagues published a paper claiming that Severe acute respiratory syndrome coronavirus 2, the virus responsible for the COVID-19 pandemic was also of extraterrestrial origin, the claim was criticised for lacking evidence.
Polonnaruwa
On 29 December 2012 a green fireball was observed in Polonnaruwa, Sri Lanka. It disintegrated into fragments that fell to the Earth near the villages of Aralaganwila and Dimbulagala and in a rice field near Dalukkane. Rock samples were submitted to the Medical Research Institute of the Ministry of Health in Colombo.
The rocks were sent to the University of Cardiff in Wales for analysis, where Chandra Wickramasinghe's team analyzed them and claimed that they contained extraterrestrial diatoms. From January to March 2013, five papers were published in the fringe Journal of Cosmology outlining various results from teams in the United Kingdom, United States and Germany. However, independent experts in meteoritics stated that the object analyzed by Wickramasinghe's team was of terrestrial origin, a fulgurite created by lightning strikes on Earth. Experts in diatoms complemented the statement, saying that the organisms found in the rock represented a wide range of extant terrestrial taxa, confirming their earthly origin.
Wickramasinghe and collaborators responded, using X-ray diffraction, oxygen isotope analysis, and scanning electron microscope observations, in a March 2013 paper asserting that the rocks they found were indeed meteorites, instead of being created by lightning strikes on Earth as stated by scientists from the University of Peradeniya. However, these claims were also criticised for not providing evidence that the rocks were actually meteorites.
Cephalopod alien origin
In 2018, Wickramasinghe and over 30 other authors published a paper in Progress in Biophysics and Molecular Biology entitled "Cause of Cambrian Explosion - Terrestrial or Cosmic?" which argued in favour of panspermia as the origin of the Cambrian explosion, and posited that cephalopods are alien lifeforms that originated from frozen eggs that were transported to earth via meteor. The claims gained widespread press coverage. Virologist Karin Mölling, in a companion commentary published in the same journal, stated that the claims "cannot be taken seriously".
Participation in the creation-evolution debate
Wickramasinghe and his mentor Fred Hoyle have also used their data to argue in favor of cosmic ancestry, and against the idea of life emerging from inanimate objects by abiogenesis.
Wickramasinghe attempts to present scientific evidence to support the notion of cosmic ancestry and "the possibility of high intelligence in the Universe and of many increasing levels of intelligence converging toward a God as an ideal limit."
During the 1981 scientific creationist trial in Arkansas, Wickramasinghe was the only scientist testifying for the defense, which in turn was supporting creationism. In addition, he wrote that the Archaeopteryx fossil finding is a forgery, a charge that the scientific community considers an "absurd" and "ignorant" statement.
Honours and awards
Commonwealth Scholar at Trinity College, Cambridge, 1960-1963
Powell Prize for English Verse, Trinity College, 1961
Vidya Jyothi from the President of Sri Lanka, 1992
Honorary DLitt, Sōka University (Japan), 1996
Doctor of Science (honoris causa), University of Ruhuna, Sri Lanka, 2004
Visiting By-Fellowship, visiting scholar, Churchill College, Cambridge, England 2015/16
Ada Derana Sri Lankan of the Year 2017 - Global Scientist
Wickramasinghe was appointed Member of the Order of the British Empire (MBE) in the 2022 New Year Honours for services to science, astronomy and astrobiology.
Books
Interstellar Grains (Chapman & Hall, London, 1967)
Light Scattering Functions for Small Particles with Applications in Astronomy (Wiley, New York, 1973)
Solid-State Astrophysics (ed. with D.J. Morgan) (D. Reidel, Boston, 1975)
Interstellar Matter (with F.D. Khan & P.G. Mezger) (Swiss Society of Astronomy and Astrophysics, 1974)
The Cosmic Laboratory (University College of Cardiff, 1975)
Lifecloud: The Origin of Life in the Universe (with Fred Hoyle) (J.M. Dent, London, 1978)
Diseases from Space (with Fred Hoyle) (J.M. Dent, London, 1979)
Origin of Life (with Fred Hoyle) (University College Cardiff Press, 1979)
Space Travellers: The Bringers of Life (with Fred Hoyle) (University College Cardiff Press, 1981)
Evolution from Space (with Fred Hoyle) (J.M. Dent, London, 1981)
Is Life an Astronomical Phenomenon? (University College Cardiff Press, 1982)
Why Neo-Darwinism Does Not Work (with Fred Hoyle) (University College Cardiff Press, 1982)
Proofs that Life is Cosmic (with Fred Hoyle) (Institute of Fundamental Studies, Sri Lanka, Memoirs no.1, 1982)
From Grains to Bacteria (with Fred Hoyle) (University College Cardiff Press, 1984)
Fundamental Studies and the Future of Science (ed.) (University College Cardiff Press, 1984)
Living Comets (with Fred Hoyle) (University College Cardiff Press, 1985)
Archaeopteryx, the Primordial Bird: A Case of Fossil Forgery (with Fred Hoyle) (Christopher Davies, Swansea, 1986)
The Theory of Cosmic Grains (with Fred Hoyle) (Kluwer, Dordrecht, 1991)
Life on Mars? The Case for a Cosmic Heritage (with Fred Hoyle) (Clinical Press, Bristol, 1997)
Astronomical Origins of Life: Steps towards Panspermia (with Fred Hoyle) (Kluwer, Dordrecht, 2000)
Cosmic Dragons: Life and Death on Our Planet (Souvenir Press, London, 2001)
Fred Hoyle's Universe (ed. with G. Burbidge and J. Narlikar) (Kluwer, Dordrecht, 2003)
A Journey with Fred Hoyle (World Scientific, Singapore, 2005)
Comets and the Origin of Life (with J. Wickramasinghe and W. Napier) (World Scientific, Hackensack NJ, 2010)
A Journey with Fred Hoyle, Second Edition (World Scientific, Singapore, April 2013)
The search for our cosmic ancestry, World Scientific, New Jersey, 2015.
Articles
Hoyle, F. and Wickramasinghe, N.C., 1962. On graphite particles as interstellar grains, Mon.Not.Roy.Astr.Soc. 124, 417-433
Wickramasinghe, N.C., 1974. Formaldehyde polymers in interstellar space, Nature 252, 462-463
Hoyle, F. and Wickramasinghe, N.C., 1977. Identification of the λ2,200A interstellar absorption feature, Nature 270, 323-324
Hoyle, F. and Wickramasinghe, N.C., 1977. Polysaccharides and infrared spectra of galactic sources, Nature 268, 610-612
Hoyle, F. and Wickramasinghe, N.C., 1986. The case for life as a cosmic phenomenon, Nature 322, 509-511
Hoyle, F. and Wickramasinghe, N.C., 1990. Influenza – evidence against contagion, Journal of the Royal Society of Medicine 83. 258-261
Chandra Wickramasinghe, A Journey with Fred Hoyle: The Search for Cosmic Life, World Scientific Publishing, 2005.
Janaki Wickramasinghe, Chandra Wickramasinghe and William Napier, Comets and the Origin of Life, World Scientific Publishing, 2009.
Chandra Wickramasinghe and Daisaku Ikeda, Space and Eternal Life, Journeyman Press, 1998.
See also
Panspermia
Red rain in Kerala
Milton Wainwright
References
External links
Professor Wickramasinghe's profile at the University of Buckingham
Interviews
Publication List Chandra Wickramasinghe@ Astrophysics Data System
Prof Chandra Wickramasinghe in conversation with artist and poet, Himali Singh Soin, podcast, 2022
1939 births
Academics of Cardiff University
Academics of the University of Cambridge
Alumni of Royal College, Colombo
Alumni of the University of Ceylon
Alumni of Trinity College, Cambridge
Astrobiologists
20th-century British astronomers
Fellows of Jesus College, Cambridge
Living people
Panspermia
People from Colombo
People from British Ceylon
Sri Lankan mathematicians
Vidya Jyothi
Sri Lankan emigrants to the United Kingdom
Members of the Order of the British Empire
Naturalised citizens of the United Kingdom
British mathematicians
21st-century British astronomers
Indian Space Research Organisation people | Chandra Wickramasinghe | Biology | 3,888 |
75,889,688 | https://en.wikipedia.org/wiki/NBI%20Incorporated | NBI Incorporated was an American computer company based in Boulder, Colorado that offered word processing services. NBI was known for their office automation systems; dedicated hardware platforms for word processing, document production and records management.
Products included:
NBI System 3000
NBI OASys 4000S
NBI OASys 4100S and 4100X
The OASys 4100S and 4100X were introduced in May 1984. The 4100S came with single or dual 5¼" floppy disk drives, and the 4100X with a single disk drive and a 10MB hard drive. Both systems were partially IBM-compatible and came with 128KB RAM.
NBI Incorporated entered Chapter 11 bankruptcy protection in 1991 after several loss-making years.
References
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct computer systems companies
Defunct companies based in Colorado
Companies based in Boulder, Colorado | NBI Incorporated | Technology | 176 |
25,584,664 | https://en.wikipedia.org/wiki/Discharge%20coefficient | In a nozzle or other constriction, the discharge coefficient (also known as coefficient of discharge or efflux coefficient) is the ratio of the actual discharge to the ideal discharge, i.e., the ratio of the mass flow rate at the discharge end of the nozzle to that of an ideal nozzle which expands an identical working fluid from the same initial conditions to the same exit pressures.
Mathematically the discharge coefficient may be related to the mass flow rate of a fluid through a straight tube of constant cross-sectional area through the following:

$C_d = \dfrac{\dot m}{A\sqrt{2\rho\,\Delta P}}$

Where:
$C_d$, discharge coefficient through the constriction (dimensionless).
$\dot m$, mass flow rate of fluid through constriction (mass per time), with $\dot m = \rho Q = \rho u A$.
$\rho$, density of fluid (mass per volume).
$Q$, volumetric flow rate of fluid through constriction (volume per time).
$A$, cross-sectional area of flow constriction (area).
$u$, velocity of fluid through constriction (length per time).
$\Delta P$, pressure drop across constriction (force per area).
This parameter is useful for determining the irrecoverable losses associated with a certain piece of equipment (constriction) in a fluid system, or the "resistance" that piece of equipment imposes upon the flow.
This flow resistance, often expressed as a dimensionless parameter, $K$, is related to the discharge coefficient through the equation:

$C_d = \dfrac{1}{\sqrt{K}}$

which may be obtained by substituting $\Delta P$ in the aforementioned equation with the resistance, $K$, multiplied by the dynamic pressure of the fluid, $\tfrac{1}{2}\rho u^2$.
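As a rough illustration of these relations, the sketch below computes $C_d$ and the equivalent resistance $K$ from measured quantities; all of the numbers (fluid, geometry, pressure drop, mass flow) are hypothetical values chosen for the example, not data for any real device.

```python
# Illustrative discharge-coefficient calculation (hypothetical values).
from math import pi, sqrt

rho   = 998.0            # water density, kg/m^3
d     = 0.02             # constriction diameter, m (assumed)
A     = pi * d**2 / 4    # cross-sectional area, m^2
dP    = 50_000.0         # measured pressure drop, Pa (assumed)
m_dot = 1.95             # measured mass flow rate, kg/s (assumed)

Cd = m_dot / (A * sqrt(2.0 * rho * dP))   # discharge coefficient
K  = 1.0 / Cd**2                          # equivalent flow resistance
print(f"Cd = {Cd:.3f}, K = {K:.2f}")      # Cd ~0.62, K ~2.6
```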
An example in open channel flow
Due to the complex behavior of fluids around some structures such as orifices, gates, and weirs, some assumptions are made for the theoretical analysis of the stage-discharge relationship. For example, in the case of gates, the pressure at the gate opening is non-hydrostatic, which is difficult to model; however, it is known that the pressure at the gate is very small. Therefore, engineers assume that the pressure is zero at the gate opening, and the following equation is obtained for discharge:

$Q = A\sqrt{2gh_1}$

where:
$Q$, discharge
$A$, area of flow
$g$, acceleration due to gravity
$h_1$, head just upstream of the gate
However, the pressure is not actually zero at the gate; therefore, a discharge coefficient, $C$, is used as follows:

$Q = CA\sqrt{2gh_1}$
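A minimal numerical sketch of the gate formula above; the gate area, upstream head, and the coefficient value $C = 0.61$ (a typical textbook figure for a sluice gate) are assumed illustrative values.

```python
# Illustrative gate-discharge calculation (hypothetical values).
from math import sqrt

g  = 9.81        # acceleration due to gravity, m/s^2
A  = 0.5 * 2.0   # gate opening area: 0.5 m opening x 2.0 m width (assumed)
h1 = 1.8         # head just upstream of the gate, m (assumed)
C  = 0.61        # typical sluice-gate discharge coefficient (assumed)

Q_ideal  = A * sqrt(2 * g * h1)   # zero-pressure idealization
Q_actual = C * Q_ideal            # corrected with the discharge coefficient
print(f"ideal: {Q_ideal:.2f} m^3/s, actual: {Q_actual:.2f} m^3/s")
```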
See also
Flow coefficient
Orifice plate
References
External links
Mass Flow Choking, Nancy Hall, 6 April 2018
Fluid dynamics | Discharge coefficient | Chemistry,Engineering | 489 |
10,111,663 | https://en.wikipedia.org/wiki/Spirit%20Mountain%20%28Nevada%29 | Spirit Mountain, also known as Avi Kwa Ame ( ; Mojave: ʔaviː kʷaʔame, "highest mountain", from ʔaviː, "mountain, rock", and ʔamay, "up, above") is a mountain within the Lake Mead National Recreation Area near Laughlin, Nevada. It is listed on the United States National Register of Historic Places as a sacred place to Native American tribes in Southern Nevada. Spirit Mountain is the highest point in the Spirit Mountain Wilderness and is the highest point in the Newberry Mountains with the summit peak at .
History
Environmentalists have sought designation of a significant area to the west of the mountain as a national monument. The Avi Kwa Ame National Monument was established on March 21, 2023, by President Biden and named after the peak as the mountain is visible from almost the entire area.
The mountain was listed on the National Register of Historic Places as a Traditional Cultural Property on September 8, 1999.
Description
Spirit Mountain is the center of creation for all Yuman-speaking tribes and is considered a sacred area. The area is managed by the Bureau of Land Management, National Park Service, and Bureau of Reclamation.
References
External links
Mountains of Nevada
Religious places of the Indigenous peoples of North America
Sacred mountains of the Americas
Mojave Desert
Mountains of Clark County, Nevada
National Register of Historic Places in Clark County, Nevada
Properties of religious function on the National Register of Historic Places in Nevada
Natural features on the National Register of Historic Places
Creation myths | Spirit Mountain (Nevada) | Astronomy | 308 |
34,951,690 | https://en.wikipedia.org/wiki/Latarcin | Latarcins are short antimicrobial peptides from the venom of the spider Lachesana tarabaevi. Latarcins adopt an amphipathic alpha-helical structure in the plasma membrane. Possible pharmacological applications for latarcins include antimicrobial and anticancer treatments.
References
Protein families
Antimicrobial peptides
Spider toxins | Latarcin | Biology | 82 |
194,467 | https://en.wikipedia.org/wiki/Parity%20bit | A parity bit, or check bit, is a bit added to a string of binary code. Parity bits are a simple form of error detecting code. Parity bits are generally applied to the smallest units of a communication protocol, typically 8-bit octets (bytes), although they can also be applied separately to an entire message string of bits.
The parity bit ensures that the total number of 1-bits in the string is even or odd. Accordingly, there are two variants of parity bits: even parity bit and odd parity bit. In the case of even parity, for a given set of bits, the bits whose value is 1 are counted. If that count is odd, the parity bit value is set to 1, making the total count of occurrences of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a given set of bits is already even, the parity bit's value is 0. In the case of odd parity, the coding is reversed. For a given set of bits, if the count of bits with a value of 1 is even, the parity bit value is set to 1 making the total count of 1s in the whole set (including the parity bit) an odd number. If the count of bits with a value of 1 is odd, the count is already odd so the parity bit's value is 0. Even parity is a special case of a cyclic redundancy check (CRC), where the 1-bit CRC is generated by the polynomial x+1.
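The rule above is mechanical enough to express in a few lines. The sketch below computes the parity bit for an arbitrary example bit string in both variants.

```python
# Compute an even or odd parity bit for a string of bits.
def parity_bit(bits: str, even: bool = True) -> int:
    ones = bits.count("1")
    bit = ones % 2              # 1 if the count of 1s is odd
    return bit if even else 1 - bit

data = "1011011"                       # example 7 data bits (five 1s)
print(parity_bit(data, even=True))     # 1 -> total count of 1s becomes even
print(parity_bit(data, even=False))    # 0 -> total count of 1s stays odd
```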
Parity
In mathematics parity can refer to the evenness or oddness of an integer, which, when written in its binary form, can be determined by examining only its least significant bit.
In information technology parity refers to the evenness or oddness, given any set of binary digits, of the number of those bits with value one. Because parity is determined by the state of every one of the bits, this property of parity—being dependent upon all the bits and changing its value from even to odd parity if any one bit changes—allows for its use in error detection and correction schemes.
In telecommunications the parity referred to by some protocols is for error-detection. The transmission medium is preset, at both end points, to agree on either odd parity or even parity. For each string of bits ready to transmit (data packet) the sender calculates its parity bit, zero or one, to make it conform to the agreed parity, even or odd. The receiver of that packet first checks that the parity of the packet as a whole is in accordance with the preset agreement, then, if there was a parity error in that packet, requests a retransmission of that packet.
In computer science the parity stripe or parity disk in a RAID provides error-correction. Parity bits are written at the rate of one parity bit per n bits, where n is the number of disks in the array. When a read error occurs, each bit in the error region is recalculated from its set of n bits. In this way, using one parity bit creates "redundancy" for a region from the size of one bit to the size of one disk. See below.
In electronics, transcoding data with parity can be very efficient, as XOR gates output what is equivalent to a check bit that creates an even parity, and XOR logic design easily scales to any number of inputs. XOR and AND structures comprise the bulk of most integrated circuitry.
Error detection
If an odd number of bits (including the parity bit) are transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission. The parity bit is suitable only for detecting errors; it cannot correct any errors, as there is no way to determine the particular bit that is corrupted. The data must be discarded entirely, and retransmitted from scratch. On a noisy transmission medium, successful transmission can therefore take a long time or even never occur. However, parity has the advantage that it uses only a single bit and requires only a number of XOR gates to generate. See Hamming code for an example of an error-correcting code.
Parity bit checking is used occasionally for transmitting ASCII characters, which have 7 bits, leaving the 8th bit as a parity bit.
For example, the parity bit can be computed as follows. Assume Alice and Bob are communicating and Alice wants to send Bob the simple 4-bit message 1001. Under even parity, the message contains an even number of 1s (two), so the parity bit is 0 and 10010 is transmitted; under odd parity, the parity bit is 1 and 10011 is transmitted.
This mechanism enables the detection of single bit errors, because if one bit gets flipped due to line noise, there will be an incorrect number of ones in the received data. In the two examples above, Bob's calculated parity value matches the parity bit in its received value, indicating there are no single bit errors. Consider the following example with a transmission error in the second bit using XOR: Alice transmits 10010 (even parity), but Bob receives 11010, which contains three 1s; the odd count reveals that an error has occurred.
There is a limitation to parity schemes. A parity bit is guaranteed to detect only an odd number of bit errors. If an even number of bits have errors, the parity bit records the correct number of ones even though the data is corrupt. (See also error detection and correction.) Consider the same example as before but with an even number of corrupted bits: Alice transmits 10010, but Bob receives 11110, in which both the second and third bits were flipped; the count of 1s (four) is still even.
Bob observes even parity, as expected, thereby failing to catch the two bit errors.
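The whole exchange can be condensed into a short sketch, assuming even parity and the 4-bit message 1001 used above; the corrupted frames reproduce the single- and double-bit errors just discussed.

```python
# Demonstrate single-bit detection and double-bit failure with even parity.
def with_even_parity(data: str) -> str:
    return data + str(data.count("1") % 2)   # append the even parity bit

def parity_ok(frame: str) -> bool:
    return frame.count("1") % 2 == 0         # even parity: total 1s must be even

sent = with_even_parity("1001")              # "10010"
print(parity_ok(sent))                       # True  - no error

one_bit_error = "11010"                      # second bit flipped
print(parity_ok(one_bit_error))              # False - error detected

two_bit_errors = "11110"                     # second and third bits flipped
print(parity_ok(two_bit_errors))             # True  - errors go unnoticed
```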
Usage
Because of its simplicity, parity is used in many hardware applications in which an operation can be repeated in case of difficulty, or simply detecting the error is helpful. For example, the SCSI and PCI buses use parity to detect transmission errors, and many microprocessor instruction caches include parity protection. Because the Instruction cache data is just a copy of the main memory, it can be disregarded and refetched if it is found to be corrupted.
In serial data transmission, a common format is 7 data bits, an even parity bit, and one or two stop bits. That format accommodates all the 7-bit ASCII characters in an 8-bit byte. Other formats are possible; 8 bits of data plus a parity bit can convey all 8-bit byte values.
In serial communication contexts, parity is usually generated and checked by interface hardware (such as a UART) and, on reception, the result made available to a processor such as the CPU (and so too, for instance, the operating system) via a status bit in a hardware register in the interface hardware. Recovery from the error is usually done by retransmitting the data, the details of which are usually handled by software (such as the operating system I/O routines).
When the total number of transmitted bits, including the parity bit, is even, odd parity has the advantage that both all-zeros and all-ones patterns are detected as errors. If the total number of bits is odd, only one of the patterns is detected as an error, and the choice can be made based on what the more common error is expected to be.
RAID array
Parity data is used by RAID arrays (redundant array of independent/inexpensive disks) to achieve redundancy. If a drive in the array fails, remaining data on the other drives can be combined with the parity data (using the Boolean XOR function) to reconstruct the missing data.
For example, suppose two drives in a three-drive RAID 4 array contained the following data:
Drive 1: 01101101
Drive 2: 11010100
To calculate parity data for the two drives, an XOR is performed on their data:
01101101 XOR 11010100 = 10111001
The resulting parity data, 10111001, is then stored on Drive 3.
Should any of the three drives fail, the contents of the failed drive can be reconstructed on a replacement drive by subjecting the data from the remaining drives to the same XOR operation. If Drive 2 were to fail, its data could be rebuilt using the XOR results of the contents of the two remaining drives, Drive 1 and Drive 3:
Drive 1: 01101101
Drive 3: 10111001
as follows:
01101101 XOR 10111001 = 11010100
The result of that XOR calculation yields Drive 2's contents. 11010100 is then stored on Drive 2, fully repairing the array.
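The parity computation and the rebuild can be demonstrated directly, using the one-byte drive contents from the example above.

```python
# RAID-style XOR parity and reconstruction for the example bytes.
drive1 = 0b01101101
drive2 = 0b11010100

parity = drive1 ^ drive2                 # stored on Drive 3
print(f"{parity:08b}")                   # 10111001

# Drive 2 fails: rebuild it from Drive 1 and the parity drive.
rebuilt = drive1 ^ parity
print(f"{rebuilt:08b}")                  # 11010100 == Drive 2's contents
assert rebuilt == drive2
```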
XOR logic is also equivalent to even parity (because a XOR b XOR c XOR ... may be treated as XOR(a,b,c,...), which is an n-ary operator that is true if and only if an odd number of arguments is true). So the same XOR concept above applies similarly to larger RAID arrays with parity, using any number of disks. In the case of a RAID 3 array of 12 drives, 11 drives participate in the XOR calculation shown above and yield a value that is then stored on the dedicated parity drive.
Extensions and variations on the parity bit mechanism, such as "double", "dual", or "diagonal" parity, are used in RAID-DP.
History
A parity track was present on the first magnetic-tape data storage in 1951. Parity in this form, applied across multiple parallel signals, is known as a transverse redundancy check. This can be combined with parity computed over multiple bits sent on a single signal, a longitudinal redundancy check. In a parallel bus, there is one longitudinal redundancy check bit per parallel signal.
Parity was also used on at least some paper-tape (punched tape) data entry systems (which preceded magnetic-tape systems). On the systems sold by British company ICL (formerly ICT) the paper tape had 8 hole positions running across it, with the 8th being for parity. 7 positions were used for the data, e.g., 7-bit ASCII. The 8th position had a hole punched in it depending on the number of data holes punched.
See also
BIP-8
Parity function
Single-event upset
Check digit
Thue–Morse sequence
References
External links
Different methods of generating the parity bit, among other bit operations
Binary arithmetic
Data transmission
Error detection and correction
Parity (mathematics)
RAID
56,113,766 | https://en.wikipedia.org/wiki/Caldimicrobium | Caldimicrobium is a genus of bacteria from the family of Thermodesulfobacteriaceae.
Caldimicrobium is an anaerobic thermophile which is roughly 1.0–1.2 micrometers long and 0.5 micrometers wide.
See also
List of bacterial orders
List of bacteria genera
References
Further reading
Thermodesulfobacteriota
Bacteria genera | Caldimicrobium | Biology | 88 |
14,880,431 | https://en.wikipedia.org/wiki/KCNIP1 | Kv channel-interacting protein 1 also known as KChIP1 is a protein that in humans is encoded by the KCNIP1 gene.
Function
This gene encodes a member of the family of voltage-gated potassium (Kv) channel-interacting proteins (KCNIPs, also frequently called "KChIP"), which belong to the recoverin branch of the EF-hand superfamily. Members of the KCNIP family are small calcium binding proteins. They all have EF-hand-like domains, and differ from each other in the N-terminus. They are integral subunit components of native Kv4 channel complexes. They may regulate A-type currents, and hence neuronal excitability, in response to changes in intracellular calcium. Alternative splicing results in multiple transcript variants encoding different isoforms.
See also
Voltage-gated potassium channel
References
Further reading
External links
Ion channels
EF-hand-containing proteins | KCNIP1 | Chemistry | 191 |
1,623,847 | https://en.wikipedia.org/wiki/Stickland%20fermentation | Stickland fermentation, or the Stickland reaction, is a chemical reaction that involves the coupled oxidation and reduction of amino acids to organic acids. The electron donor amino acid is oxidised to a volatile carboxylic acid one carbon atom shorter than the original amino acid. For example, alanine, with a three-carbon chain, is converted to acetate with two carbons. The electron acceptor amino acid is reduced to a volatile carboxylic acid the same length as the original amino acid. For example, glycine, with two carbons, is converted to acetate.
In this way, amino acid fermenting microbes can avoid using hydrogen ions as electron acceptors to produce hydrogen gas. Amino acids can be Stickland acceptors, Stickland donors, or act as both donor and acceptor. Only histidine cannot be fermented by Stickland reactions, and is oxidised. With a typical amino acid mix, there is a 10% shortfall in Stickland acceptors, which results in hydrogen production. Under very low hydrogen partial pressures, increased uncoupled anaerobic oxidation has also been observed.
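A worked example of the coupled reaction, written here as the classic textbook pairing (a hedged reconstruction, not quoted from a specific source): alanine serves as the electron donor and glycine as the acceptor, and both yield acetate.

```latex
% Alanine (3 carbons) is oxidised to acetate (2 carbons);
% each glycine (2 carbons) is reduced to acetate (2 carbons).
\mathrm{CH_3CH(NH_2)COOH} + 2\,\mathrm{CH_2(NH_2)COOH} + 2\,\mathrm{H_2O}
\;\longrightarrow\; 3\,\mathrm{CH_3COOH} + 3\,\mathrm{NH_3} + \mathrm{CO_2}
```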
It occurs in proteolytic clostridia such as:
C. perfringens,
Clostridioides difficile,
C. sporogenes,
and C. botulinum.
Additionally, sarcosine and betaine can act as electron acceptors.
References
Biochemical reactions
Name reactions | Stickland fermentation | Chemistry,Biology | 301 |
5,560,456 | https://en.wikipedia.org/wiki/234%20%28number%29 | 234 (two hundred [and] thirty-four) is the integer following 233 and preceding 235.
Additionally:
234 is a practical number.
There are 234 ways of grouping six children into rings of at least two children with one child at the center of each ring.
References
Integers | 234 (number) | Mathematics | 57 |
60,671,814 | https://en.wikipedia.org/wiki/C16orf78 | Uncharacterized protein C16orf78 (NP_653203.1) is a protein that in humans is encoded by the chromosome 16 open reading frame 78 gene.
Gene
The C16orf78 gene (123970) is located at 16q12.1 on the plus strand, spanning 25,609 bp from 49,407,734 to 49,433,342.
mRNA
There is one mRNA transcript (NM_144602.3) and no other known splice isoforms. There are 5 exons, totaling a length of 1068 base pairs.
Protein
Sequence
C16orf78 is 265 amino acids long with a predicted molecular weight of 30.8 kDa and a pI of 9.8. It is rich in both methionine and lysine, composed of 6.4% methionine and 13.6% lysine. This methionine richness has been hypothesized to serve as a mitochondrial antioxidant.
Post-Translational Modifications
There are four verified ubiquitination sites and three verified phosphorylation sites.
Structure
Predictions of C16orf78's secondary structure consist primarily of alpha helices and coiled coils. Phyre2 also predicted C16orf78 is primarily helical, but 253 of 265 amino acids were modeled ab initio so the confidence of the model is low.
Subcellular Localization
C16orf78 is predicted to be localized to the cell nucleus. There is also a predicted bipartite nuclear localization signal.
Expression
C16orf78 has restricted expression toward the testis, with much lower expression in other tissues.
Interaction
C16orf78 has a physical association with the DNA/RNA-binding protein KIN17 (NP_036443.1), suggesting C16orf78 may also play a role in DNA repair. C16orf78 was found to be phosphorylated by SRPK1 (NP_003128.3) and SRPK2 (AAH68547.1).
Clinical Significance
Deletion of the C16orf78 gene has been identified as a determinant of prostate cancer. A SNP in C16orf78 interacts with a SNP in LMTK2 and is associated with risk of prostate cancer.
Amplification of the C16orf78 gene has been linked to metabolically adaptive cancer cells. A duplication of the C16orf78 gene was associated with at least one case of Rolandic Epilepsy.
Homology
Paralogs
C16orf78 has no known paralogs in humans.
Orthologs
C16orf78 has over 80 orthologs, including animals as distant as Ciona intestinalis (XP_002132057.1), which is estimated to have diverged from humans 676 million years ago. C16orf78 has orthologs in many types of mammals, reptiles, bony fish, and even some invertebrates, but has no known orthologs in amphibians or birds. Divergence dates for sample orthologs come from TimeTree, with similarity calculated by pairwise sequence alignment.
References
External links
Proteins
Genes on human chromosome 16 | C16orf78 | Chemistry | 690 |
152,262 | https://en.wikipedia.org/wiki/Will-o%27-the-wisp | In folklore, a will-o'-the-wisp, will-o'-wisp, or ignis fatuus (Latin for 'foolish fire') is an atmospheric ghost light seen by travellers at night, especially over bogs, swamps or marshes.
The phenomenon is known in the United Kingdom by a variety of names, including jack-o'-lantern, friar's lantern, and hinkypunk, and is said to mislead and/or guide travellers by resembling a flickering lamp or lantern. Equivalents of the will-o'-the-wisp appear in European folklore by various names, e.g., ignis fatuus in Latin, feu follet in French, Irrlicht in German, or the Hessdalen light in Norway. Equivalents occur in traditions of cultures worldwide, e.g., the Naga fireballs on the Mekong in Thailand. In North America the phenomenon is known as the Paulding Light in the Upper Peninsula of Michigan, the Spooklight in Southwestern Missouri and Northeastern Oklahoma, and the St. Louis Light in Saskatchewan.
In folklore, will-o'-the-wisps are typically attributed to ghosts, fairies or elemental spirits meant to reveal a path or direction. These wisps are portrayed as dancing or flowing in a static form, until noticed or followed, in which case they visually fade or disappear. Modern science explains the light aspect as natural phenomena such as bioluminescence or chemiluminescence, caused by the oxidation of phosphine (PH3), diphosphane (P2H4) and methane (CH4), produced by organic decay.
Nomenclature
Etymology
The term will-o'-the-wisp comes from wisp, a bundle of sticks or paper sometimes used as a torch and the name 'Will', thus meaning 'Will of the torch'. The term jack-o'-lantern ('Jack of the lantern') originally referred to a will-o'-the-wisp. In the United States, they are often called spook-lights, ghost-lights, or orbs by folklorists.
The Latin name ignis fatuus is composed of ignis, meaning 'fire', and fatuus, an adjective meaning 'foolish', 'silly' or 'simple'; it can thus be literally translated into English as 'foolish fire' or more idiomatically as 'giddy flame'. Despite its Latin origins, the term is not attested in antiquity, and the name for the will-o'-the-wisp used by the ancient Romans is uncertain. The term is not attested in the Middle Ages either. Instead, the Latin is documented no earlier than the 16th century in Germany, where it was coined by a German humanist, and appears to be a free translation of the long-existing German name Irrlicht ('wandering light' or 'deceiving light'), conceived of in German folklore as a mischievous spirit of nature; the Latin translation was made to lend the German name intellectual credibility.
Besides Irrlicht, the will-o'-the-wisp has also been called Irrwisch in German (where Wisch translates to 'wisp'), as found in e.g. Martin Luther's writings of the same 16th century.
Synonyms
The names will-o'-the-wisp and jack-o'-lantern are used in etiological folk-tales, recorded in many variant forms in Ireland, Scotland, England, Wales, Appalachia, and Newfoundland.
Folk belief is expressed explicitly in names such as hob lantern or hobby lantern (var. 'Hob and his Lantern', 'hob-and-lanthorns'). In her book A Dictionary of Fairies, K. M. Briggs provides an extensive list of other names for the same phenomenon, though the place where they are observed (graveyard, bogs, etc.) influences the naming considerably. When observed in graveyards, it is known as a ghost candle or corpse candle.
Folklore
In the etiological (origin) tales, protagonists named either Will or Jack are doomed to haunt the marshes with a light for some misdeed. One version from Shropshire is recounted by Briggs in A Dictionary of Fairies and refers to Will Smith. Will is a wicked blacksmith who is given a second chance by Saint Peter at the gates of heaven, but leads such a bad life that he ends up being doomed to wander the earth. The Devil provides him with a single burning coal with which to warm himself, which he then uses to lure foolish travellers into the marshes.
An Irish version of the tale has a ne'er-do-well named Drunk Jack or Stingy Jack who, when the Devil comes to collect his soul, tricks him into turning into a coin, so he can pay for his one last drink. When the Devil obliges, Jack places him in his pocket next to a crucifix, preventing him from returning to his original form. In exchange for his freedom, the Devil grants Jack ten more years of life. When the term expires, the Devil comes to collect his due. But Jack tricks him again by making him climb a tree and then carving a cross underneath, preventing him from climbing down. In exchange for removing the cross, the Devil forgives Jack's debt. However, no one as bad as Jack would ever be allowed into heaven, so Jack is forced upon his death to travel to hell and ask for a place there. The Devil denies him entrance in revenge but grants him an ember from the fires of hell to light his way through the twilight world to which lost souls are forever condemned. Jack places it in a carved turnip to serve as a lantern. Another version of the tale is "Willy the Whisp", related in Irish Folktales by Henry Glassie. Séadna by Peadar Ua Laoghaire is yet another version—and also the first modern novel in the Irish language.
Global folklore
Americas
Mexico has equivalents. Folklore explains the phenomenon to be witches who transformed into these lights. Another explanation refers to the lights as indicators to places where gold or hidden treasures are buried which can be found only with the help of children. In this one, they are called luces del dinero (money lights) or luces del tesoro (treasure lights).
The swampy area of Massachusetts known as the Bridgewater Triangle has folklore of ghostly orbs of light, and there have been modern observations of these ghost-lights in this area as well.
The fifollet (or feu-follet) of Louisiana derives from the French. The legend says that the fifollet is a soul sent back from the dead to do God's penance, but instead attacks people for vengeance. While it mostly takes part in harmless mischievous acts, the fifollet sometimes sucked the blood of children. Some legends say that it was the soul of a child who died before baptism.
Boi-tatá () is the Brazilian equivalent of the will-o'-the-wisp. Regionally it is called Boitatá, Baitatá, Batatá, Bitatá, Batatão, Biatatá, M'boiguaçu, Mboitatá and Mbaê-Tata. The name comes from the Old Tupi language and means "fiery serpent" (mboî tatá). Its great fiery eyes leave it almost blind by day, but by night, it can see everything. According to legend, Boi-tatá was a big serpent which survived a great deluge. A "boiguaçu" (cave anaconda) left its cave after the deluge and, in the dark, went through the fields preying on the animals and corpses, eating exclusively its favourite morsel, the eyes. The collected light from the eaten eyes gave "Boitatá" its fiery gaze. Not really a dragon but a giant snake (in the native language, boa or mboi or mboa).
In Argentina and Uruguay, the will-o'-the-wisp phenomenon is known as luz mala (evil light) and is one of the most important myths in both countries' folklore. This phenomenon is quite feared and is mostly seen in rural areas. It consists of an extremely shiny ball of light floating a few inches from the ground.
In Colombia, la Bolefuego or Candileja is the will-o'-the-wisp ghost of a vicious grandmother who raised her grandchildren without morals, and as such they became thieves and murderers. In the afterlife, the grandmother's spirit was condemned to wander the world surrounded in flames. In Trinidad and Tobago, a soucouyant is a "fireball witch" — an evil spirit that takes on the form of a flame at night. It enters homes through any gap it can find and drinks the blood of its victims.
Asia
Aleya (or marsh ghost-light) is the name given to a strange light phenomenon occurring over the marshes as observed by Bengalis, especially the fishermen of Bangladesh and West Bengal. This marsh light is attributed to some kind of marsh gas apparitions that confuse fishermen, make them lose their bearings, and may even lead to drowning if one decided to follow them moving over the marshes. Local communities in the region believe that these strange hovering marsh-lights are in fact ghost-lights representing the ghosts of fishermen who died fishing. Sometimes they confuse the fishermen, and sometimes they help them avoid future dangers. Chir batti (ghost-light), also spelled "chhir batti" or "cheer batti", is a dancing light phenomenon occurring on dark nights, reported from the Banni grasslands, its seasonal marshy wetlands and the adjoining desert of the marshy salt flats of the Rann of Kutch. Other varieties (and sources) of ghost-lights appear in folklore across India, including the Kollivay Pey of Tamil Nadu and Karnataka, the Kuliyande Choote of Kerala, and many variants from different tribes in Northeast India. In Kashmir, the Bramrachokh carries a pot of fire on its head.
Similar phenomena are described in Japanese folklore, including hitodama (literally 'human soul', a floating ball of light), hi no tama ('ball of flame'), aburagae, koemonbi, ushionibi, etc. All these phenomena are described as associated with graveyards. Kitsune, mythical yokai demons, are also associated with the will-o'-the-wisp, with the marriage of two kitsune producing kitsune-bi (狐火), literally meaning 'fox-fire'. These phenomena are described in Shigeru Mizuki's 1985 book Graphic World of Japanese Phantoms (妖怪伝 in Japanese).
In Korea the lights are associated with rice paddies, old trees, mountains or even in some houses and were called 'dokkebi bul’ (Hangul: 도깨비 불), meaning goblin fire (or goblin light). They were deemed malevolent and impish, as they confused and lured passersby to lose their way or fall into pits at night.
The earliest Chinese reference to a will-o'-the-wisp appears to be the Chinese character 粦 lín, attested as far back as the Shang dynasty oracle bones, depicting a human-like figure surrounded by dots presumably representing the glowing lights of the will-o'-the-wisp, to which feet such as those under 舞 wǔ, 'to dance' were added in bronze script. Before the Han dynasty the top had evolved or been corrupted to represent fire (later further corrupted to resemble 米 mǐ, rice), as the small seal script graph in Shuowen Jiezi, compiled in the Han dynasty, shows. Although no longer in use alone, 粦 lín is in the character 磷 lín phosphorus, an element involved in scientific explanations of the will-o'-the-wisp phenomenon, and is also a phonetic component in other common characters with the same pronunciation.
Chinese polymath Shen Gua may have recorded such a phenomenon in the Book of Dreams, stating, "In the middle of the reign of emperor Jia You, at Yanzhou, in the Jiangsu province, an enormous pearl was seen especially in gloomy weather. At first it appeared in the marsh… and disappeared finally in the Xinkai Lake." It was described as very bright, illuminating the surrounding countryside and was a reliable phenomenon over ten years, an elaborate Pearl Pavilion being built by local inhabitants for those who wished to observe it.
Europe
In European folklore the lights are often believed to be the spirits of un-baptised or stillborn children, flitting between heaven and hell (purgatory).
In Germany there was a belief that an Irrlicht was the soul of an unbaptised child, but that it could be redeemed if the remains were first buried near the eaves of the church, so that at the moment rainwater splashed onto this grave, the churchman could pronounce the baptismal formula to sanctify the child.
In Sweden also, the will-o'-the-wisp represents the soul of an unbaptised person "trying to lead travellers to water in the hope of being baptized".
Danes, Finns, Swedes, Estonians, Latvians, Lithuanians, and Irish people and amongst some other groups believed that a will-o'-the-wisp also marked the location of a treasure deep in ground or water, which could be taken only when the fire was there. Sometimes magical procedures, and even a dead man's hand, were required as well, to uncover the treasure. In Finland and several other northern countries, it was believed that early autumn was the best time to search for will-o'-the-wisps and treasures below them. It was believed that when someone hid treasure in the ground, he made the treasure available only at the summer solstice (Midsummer, or Saint John's Day), and set a will-o'-the-wisp to mark the exact place and time so that he could reclaim the treasure.
The Aarnivalkea (also known as virvatuli, aarretuli and aarreliekki), in Finnish mythology, are spots where an eternal flame associated with will-o'-the-wisps burns. They are claimed to mark the places where faerie gold is buried. They are protected by a glamour that would prevent anyone finding them by pure chance. However, if one finds a fern seed from a mythical flowering fern, the magical properties of that seed will lead the fortunate person to these treasures, in addition to providing one with a glamour of invisibility. Since in reality the fern produces no flower and reproduces via spores under the leaves, the myth specifies that it blooms only extremely rarely.
Britain
In Welsh folklore, it is said that the light is "fairy fire" held in the hand of a púca, or pwca, a small goblin-like fairy that mischievously leads lone travellers off the beaten path at night. As the traveller follows the púca through the marsh or bog, the fire is extinguished, leaving them lost. The púca is said to be one of the Tylwyth Teg, or fairy family. In Wales the light predicts a funeral that will take place soon in the locality. Wirt Sikes in his book British Goblins mentions the following Welsh tale about púca.
A peasant travelling home at dusk sees a bright light travelling along ahead of him. Looking closer, he sees that the light is a lantern held by a "dusky little figure", which he follows for several miles. All of a sudden he finds himself standing on the edge of a vast chasm with a roaring torrent of water rushing below him. At that precise moment the lantern-carrier leaps across the gap, lifts the light high over its head, lets out a malicious laugh and blows out the light, leaving the poor peasant a long way from home, standing in pitch darkness at the edge of a precipice. This is a fairly common cautionary tale concerning the phenomenon; however, the ignis fatuus was not always considered dangerous. Some tales present the will-o'-the-wisp as a treasure-guardian, leading those brave enough to follow it to certain riches - a form of behaviour sometimes ascribed also to the Irish leprechaun. Other stories tell of travellers surprising a will-o'-the-wisp while lost in the woods and being either guided out or led further astray, depending on whether they treated the spirit kindly or harshly.
Also related, the pixy-light from Devon and Cornwall which leads travellers away from the safe and reliable route and into the bogs with glowing lights. "Like Poltergeist they can generate uncanny sounds. They were less serious than their German Weiße Frauen kin, frequently blowing out candles on unsuspecting courting couples or producing obscene kissing sounds, which were always misinterpreted by parents." Pixy-Light was also associated with "lambent light" which the Old Norse might have seen guarding their tombs. In Cornish folklore, Pixy-Light also has associations with the Colt pixie. "A colt pixie is a pixie that has taken the shape of a horse and enjoys playing tricks such as neighing at the other horses to lead them astray". In Guernsey, the light is known as the faeu boulanger (rolling fire), and is believed to be a lost soul. On being confronted with the spectre, tradition prescribes two remedies. The first is to turn one's cap or coat inside out. This has the effect of stopping the faeu boulanger in its tracks. The other solution is to stick a knife into the ground, blade up. The faeu, in an attempt to kill itself, will attack the blade.
The will-o'-the-wisp was also known as the Spunkie in the Scottish Highlands where it would take the form of a linkboy (a boy who carried a flaming torch to light the way for pedestrians in exchange for a fee), or else simply a light that always seemed to recede, in order to lead unwary travellers to their doom. The spunkie has also been blamed for shipwrecks at night after being spotted on land and mistaken for a harbour light. Other tales of Scottish folklore regard these mysterious lights as omens of death or the ghosts of once living human beings. They often appeared over lochs or on roads along which funeral processions were known to travel. A strange light sometimes seen in the Hebrides is referred to as the teine sith, or "fairy light", though there was no formal connection between it and the fairy race.
Oceania
The Australian equivalent, known as the Min Min light is reportedly seen in parts of the outback after dark. The majority of sightings are reported to have occurred in the Channel Country region.
Stories about the lights can be found in Aboriginal myth pre-dating Western settlement of the region and have since become part of wider Australian folklore. Indigenous Australians hold that the number of sightings has increased alongside the increasing ingression of Europeans into the region. According to folklore, the lights sometimes followed or approached people and disappeared when fired upon, only to reappear later on.
Scientific explanations
Science proposes that will-o'-the-wisp phenomena (ignis fatuus) are caused by the oxidation of phosphine (PH3), diphosphane (P2H4), and methane (CH4). These compounds, produced by organic decay, can cause photon emissions. Since phosphine and diphosphane mixtures spontaneously ignite on contact with the oxygen in air, only small quantities of it would be needed to ignite the much more abundant methane to create ephemeral fires. Furthermore, phosphine produces phosphorus pentoxide as a by-product, which forms phosphoric acid upon contact with water vapor, which can explain "viscous moisture" sometimes described as accompanying ignis fatuus.
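These steps can be summarised with balanced equations, reconstructed here from the compounds named above rather than quoted from a specific source:

```latex
% Complete oxidation of phosphine to phosphorus pentoxide and water:
4\,\mathrm{PH_3} + 8\,\mathrm{O_2} \longrightarrow \mathrm{P_4O_{10}} + 6\,\mathrm{H_2O}

% The pentoxide by-product hydrates to phosphoric acid on contact with
% water vapour (the "viscous moisture" mentioned above):
\mathrm{P_4O_{10}} + 6\,\mathrm{H_2O} \longrightarrow 4\,\mathrm{H_3PO_4}

% Ignition of the much more abundant methane:
\mathrm{CH_4} + 2\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}
```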
Historical explanations
The idea of the will-o'-the-wisp phenomena being caused by natural gases can be found as early as 1596, as mentioned in the works of Ludwig Lavater. In 1776 Alessandro Volta first proposed that natural electrical phenomena (like lightning) interacting with methane marsh gas may be the cause of ignis fatuus. This was supported by the British polymath Joseph Priestley in his series of works Experiments and Observations on Different Kinds of Air (1772–1790); and by the French physicist Pierre Bertholon de Saint-Lazare in De l'électricité des météores (1787).
Early critics of the marsh gas hypothesis often dismissed it on various grounds including the unlikeliness of spontaneous combustion, the absence of warmth in some observed ignis fatuus, the odd behavior of ignis fatuus receding upon being approached, and the differing accounts of ball lightning (which was also classified as a kind of ignis fatuus). An example of such criticism is found in Folk-Lore from Buffalo Valley (1891) by the American anthropologist John G. Owens.
The apparent retreat of ignis fatuus upon being approached might be explained simply by the agitation of the air by nearby moving objects, causing the gases to disperse. This was observed in the very detailed accounts of several close interactions with ignis fatuus published earlier in 1832 by Major Louis Blesson after a series of experiments in various localities where they were known to occur. Of note is his first encounter with ignis fatuus in a marshland between a deep valley in the forest of Gorbitz, Newmark, Germany. Blesson observed that the water was covered by an iridescent film, and during day-time, bubbles could be observed rising abundantly from certain areas. At night, Blesson observed bluish-purple flames in the same areas and concluded that it was connected to the rising gas. He spent several days investigating the phenomenon, finding to his dismay that the flames retreated every time he tried to approach them. He eventually succeeded and was able to confirm that the lights were indeed caused by ignited gas. The British scientist Charles Tomlinson in On Certain Low-Lying Meteors (1893) described Blesson's experiments.
Blesson also observed differences in the colour and heat of the flames in different marshes. The ignis fatuus in Malapane, Upper Silesia (now Ozimek, Poland) could be ignited and extinguished, but were unable to burn pieces of paper or wood shavings. Similarly, the ignis fatuus in another forest in Poland coated pieces of paper and wood shavings with an oily viscous fluid instead of burning them. Blesson also accidentally created ignis fatuus in the marshes of Porta Westfalica, Germany, while launching fireworks.
20th century
A description of the will-o'-the-wisp appeared in a 1936 UK publication, The Scout's Book of Gadgets and Dodges, where the author, Sam F. Braham, describes it as follows:
'This is an uncertain light which may sometimes be seen dancing over churchyards and marshy places. No one really knows how it is produced, and chemists are continually experimenting to discover its nature. It is thought that it is formed by the mixing of marsh gas, which is given off by decaying vegetable matter, with phosphoretted hydrogen, a gas which ignites instantly. But this theory has not been definitely proved.'
One attempt to replicate ignis fatuus under laboratory conditions was in 1980 by British geologist Alan A. Mills of Leicester University. Though he did succeed in creating a cool glowing cloud by mixing crude phosphine and natural gas, the color of the light was green and it produced copious amounts of acrid smoke. This was contrary to most eyewitness accounts of ignis fatuus. As an alternative, Mills proposed in 2000 that ignis fatuus may instead be cold flames. These are luminescent pre-combustion halos that occur when various compounds are heated to just below ignition point. Cold flames are indeed typically bluish in color and, as their name suggests, they generate very little heat. Cold flames occur in a wide variety of compounds, including hydrocarbons (including methane), alcohols, aldehydes, oils, acids, and even waxes. However, it is unknown if cold flames occur naturally, though many compounds that exhibit cold flames are natural byproducts of organic decay.
A related hypothesis involves the natural chemiluminescence of phosphine. In 2008 the Italian chemists Luigi Garlaschelli and Paolo Boschetti attempted to recreate Mills' experiments. They successfully created a faint cool light by mixing phosphine with air and nitrogen. Though the glow was still greenish in colour, Garlaschelli and Boschetti noted that under low-light conditions, the human eye cannot easily distinguish between colours. Furthermore, by adjusting the concentrations of the gases and the environmental conditions (temperature, humidity, etc.), it was possible to eliminate the smoke and smell, or at least render it to undetectable levels. Garlaschelli and Boschetti also agreed with Mills that cold flames may also be a plausible explanation for other instances of ignis fatuus.
In 1993 professors Derr and Persinger proposed that some ignis fatuus may be geologic in origin, piezoelectrically generated under tectonic strain. The strains that move faults would also heat up the rocks, vaporizing the water in them. Rock or soil containing something piezoelectric, like quartz, silicon, or arsenic, may also produce electricity, channelled up to the surface through the soil via a column of vaporized water, there somehow appearing as earth lights. This would explain why the lights appear electrical, erratic, or even intelligent in their behaviour.
The will-o'-the-wisp phenomena may occur due to the bioluminescence of various forest dwelling micro-organisms and insects. The eerie glow emitted from certain fungal species, such as the honey fungus, during chemical reactions to form white rot could be mistaken for the mysterious will-o'-the-wisp or foxfire lights. There are many other bioluminescent organisms that could create the illusions of fairy lights, such as fireflies. Light reflecting off larger forest dwelling creatures could explain the phenomenon of will-o'-the-wisp moving and reacting to other lights. The white plumage of barn owls may reflect enough light from the Moon to appear as a will-o'-the-wisp; hence the possibility of the lights moving, reacting to other lights, etc.
Ignis fatuus sightings are rarely reported today. The decline is believed to be the result of the draining and reclamation of swamplands in recent centuries, such as the formerly vast Fenlands of eastern England which have now been converted to farmlands.
Global terms
Americas
Canada
Fireship of Baie des Chaleurs in New Brunswick
United States
Arbyrd/Senath Light of Missouri
Bragg Road ghost light (Light of Saratoga) of Texas
Brown Mountain Lights of North Carolina
Devil’s Torchlight or Devil’s Lantern in the Southern United States and Deep South
Gurdon light of Arkansas
Hornet ghost light (The Spooklight) of Missouri-Oklahoma state line
Maco light of North Carolina
Marfa lights of Texas
Paulding Light of Michigan's Upper Peninsula
Cohoke Light of eastern Virginia's Cohoke Swamp wetlands
Argentina and Uruguay
Luz Mala
Asia
Chir batti in Gujarat
Naga fireballs on the Mekong in Thailand
Aleya in Bengal
Dhon guloi in Assam
Europe
Hessdalen light, Norway
Martebo lights, Sweden
Paasselkä devil, Finland
Lidércfény, Hungary
Ballybar, near Carlow, Ireland
Ferbane, County Offaly, Ireland
Dwaallichtjes in the Netherlands and Belgium
Sheeries, Ireland
Liam na lasóige, Ireland
Fuego fatuo, Spain
Fuoco fatuo, Italy
Irrlicht, Germany
Oceania
Min Min light of the Outback Australia
See also
Chir Batti
Corpse road
Feuermann (ghost)
Foo fighter
Hessdalen Lights
Kitsunebi
Lantern man
Lidérc
Mãe-do-Ouro
Omphalotus olearius
Santelmo
Shiranui
Simonside Dwarfs
St. Elmo's fire
Yan-gant-y-tan
Explanatory notes
References
Bibliography
Corliss, William (2001) Remarkable Luminous Phenomena in Nature
Elsschot, Willem Het dwaallicht
Tremayne, Peter The Haunted Abbot
External links
The Ignis Erraticus – A Bibliographic Survey of the names of the Will-'o-the-wisp
Atmospheric ghost lights
European folklore
European ghosts
Wetlands in folklore
Methane
Pixies
Supernatural legends
Swamp monsters
Swamps in fiction
Wetlands | Will-o'-the-wisp | Chemistry,Environmental_science | 5,932 |
41,355,731 | https://en.wikipedia.org/wiki/The%20Game%20of%20Logic | The Game of Logic is a book, published in 1886, written by the English mathematician Charles Lutwidge Dodgson (1832–1898), better known under his literary pseudonym Lewis Carroll.
In addition to his well-known children's literature, Dodgson/Carroll was an academic mathematician who worked in mathematical logic. The book describes, in an informal and playful style, the use of a board game to represent logical propositions and inferences. Dodgson/Carroll incorporated the game into a longer and more formal introductory logic textbook titled Symbolic Logic, published in 1897. The books are sometimes reprinted in a single volume.
The book aims to teach players the fundamentals of logic by asking players to use coins on a board. The proposition used in this context is: "Some fresh cakes are sweet." The game world is divided into four quadrants. It is to be played with five gray coins and four red coins. A red coin symbolizes one or more cakes being present in an area while a gray coin symbolizes the absence of the cake(s). Each quadrant represents a variation of the original proposition. The cakes are fresh and sweet within the northwest quadrant. They are fresh but not sweet in the northeast. They are neither fresh nor sweet in the southeast. They are not fresh but are sweet in the southwest.
The four quadrants are further divided into two subclasses: cakes that are eatable and those that are non-eatable. This subdivision allows players to understand more complex propositions and syllogisms.
The second half of the book introduces players to a 2x2x2 diagram. This allows for players to solve problems involving three propositions at the same time.
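A minimal sketch of the two-proposition board as a data structure (the names and layout are illustrative, not Carroll's own notation): a red counter marks a compartment known to contain cakes, a gray counter one known to be empty.

```python
# Carroll's bilateral diagram for the universe of cakes: each quadrant
# pairs a truth value of "fresh" with a truth value of "sweet".
RED, GRAY, UNKNOWN = "red", "gray", None   # counter states

board = {(fresh, sweet): UNKNOWN
         for fresh in (True, False) for sweet in (True, False)}

# "Some fresh cakes are sweet": the fresh-and-sweet compartment
# is occupied, so it receives a red counter.
board[(True, True)] = RED

# "No fresh cakes are sweet" would instead place a gray counter there:
# board[(True, True)] = GRAY

for (fresh, sweet), counter in board.items():
    print(f"fresh={fresh!s:5} sweet={sweet!s:5} -> {counter}")
```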
The book is divided into several chapters. The first portion, "To My Childhood-Friend", serves as an introduction from the author to his readers. This is followed by a preface chapter. Chapter 1 is divided into three parts. In the first part, the author describes the three different types of propositions that will be used. The second part is an outlook on the "Universe of Things" and syllogisms. The third part of the chapter explains the logic to be used and the associated fallacies. This marks the end of the first chapter. The second chapter presents various questions for readers to answer. These questions are then answered and explained by the author in the third chapter. The last and fourth chapter contains various logic games.
See also
Logic puzzle
Carroll diagram
References
External links
Scanned copy at archive.org
Entry at gutenberg.org
Online resource including demonstration tool for The Game of Logic
1886 non-fiction books
Board games introduced in the 1880s
Books about board games
British board games
Educational board games
Logic
Recreational mathematics
Works by Lewis Carroll | The Game of Logic | Mathematics | 555 |
23,820,487 | https://en.wikipedia.org/wiki/Gymnopilus%20chrysimyces | Gymnopilus chrysimyces is a species of mushroom in the family Hymenogastraceae.
Medicinal
In a 1982 study, this species was shown to contain hemagglutinins. Proteins from G. chrysimyces showed activity towards rat erythrocytes, while proteins from Lentinus squarrosulus showed activity towards guinea pig and mouse erythrocytes. The agglutination of proteins from the two species showed that both have more than one hemagglutinin.
See also
List of Gymnopilus species
References
External links
Index Fungorum
chrysimyces
Fungus species | Gymnopilus chrysimyces | Biology | 132 |
19,207,092 | https://en.wikipedia.org/wiki/Yfr1 | Yfr1 is a cyanobacterial functional RNA that was identified by a comparative genome-based screen for RNAs in cyanobacteria. Further analysis has shown that the RNA is well conserved and highly expressed in cyanobacteria, and is required for growth under several stress conditions. Bioinformatics research combined with follow-up experiments has shown that Yfr1 inhibits the translation of the proteins PMM1119 and PMM1121 by an antisense interaction, base pairing at the ribosomal binding site.
See also
Yfr2 RNA
Cyano-S1 RNA motif
Cyano-2 RNA motif
References
External links
Non-coding RNA
Antisense RNA | Yfr1 | Chemistry | 141 |
31,766,888 | https://en.wikipedia.org/wiki/Punching%20machine | A punching machine is a machine tool for punching and embossing flat sheet materials to produce form features needed as mechanical elements and/or to extend the static stability of a sheet section. According to the patent file, Richard Walsh, of the county of Grayson in the State of Texas, invented such a machine and applied for a US patent in 1894.
CNC punching
Punch presses are developed for high flexibility and efficient processing of metal stampings. The main areas of application are small and medium runs. These machines are typically equipped with a linear die carrier (tool carrier) and quick-change tools. Today the method is used where the application of lasers is inefficient or technically impractical. CNC is the abbreviation of Computer Numerically Controlled.
Principle of operation
After programming the work pieces and entering length of bars the control automatically calculates the maximum number of pieces to be punched (for example, 18 pieces of a bar of 6000 mm). Once the desired number of work pieces is entered, the bar is pushed toward the stop. The machine is fully automated once the production process is launched.
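A toy version of that calculation (illustrative only; the kerf value and function name are assumptions, and real controls also account for clamping zones and tool sequencing):

```python
def max_pieces(bar_length_mm, piece_length_mm, kerf_mm=3):
    """Number of work pieces obtainable from one bar.

    Each piece consumes its own length plus one cut-off kerf;
    the remainder of the bar becomes scrap.
    """
    return int(bar_length_mm // (piece_length_mm + kerf_mm))

# e.g. 18 pieces from a 6000 mm bar of ~330 mm parts:
print(max_pieces(6000, 330))  # 18
```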
The third CNC axis always moves the cylinder exactly over the tool, which keeps the wear on the bearings and tools to a minimum. All pieces are sent down a slat conveyor and are pushed sideways on a table. Any scrap is carried to the end of the conveyor and dropped into a bin. Different workpieces can be produced within one work cycle to optimize production.
Programming
Programming is done on a PC equipped with appropriate software that can be part of the machine or a connected external workstation. To generate a new program, engineering data can be imported or entered by mouse and keyboard. Thanks to a graphical, menu-driven user interface, no previous CNC programming skills are required. All the punches in a work piece are shown on the screen, making programming mistakes easy to detect. Ideally each program is stored in one database, so programs are easy to recover using search and sort functions. When selecting a new piece, all the necessary tooling changes are displayed. Before transferring a program to the control unit, the software scans it for possible collisions. This eliminates most handling errors.
Tool change system
The linear tool carrier (y-axis) has several stations that hold the punching tools and one cutting tool. Set-up times are a crucial cost factor, especially for flexible and efficient processing, so downtimes should be reduced to a minimum. Therefore, recent tool systems are designed for fast and convenient change of punches and dies. They are equipped with a special plug-in system for a quick and easy change of tools.
There is no need to screw anything together. The punch and die plate are adjusted to each other automatically, and punches and dies can be changed rapidly, meaning less machine downtime.
Networking with the whole production line
Considerable organizational effort and interface management is saved if the CNC punch press is connected to the preceding and subsequent processes. For a connection to other machines and external workstations, common interfaces have to be established.
One software package for programming all subsequent production steps at once
Using a standard industrial PC, a variety of machines can easily be networked with one another
Shared database for easy integration into an existing workflow system and backup on an external server
Import of production data from other systems or from construction programs (e.g. DXF files)
Integration of further production steps
Besides punching, machines of the high-end class can be equipped with special functions. For example:
Automatic labelling and marking of work pieces
Chipless thread forming by electric driven quick change tools
Automatic infeeding system - loading of the machine
Embossing digits and characters
Coining nubs
See also
Turret punch
Punch press
References
W. Hellwig, M. Kolbe: Spanlose Fertigung Stanzen: Integrierte Fertigung komplexer Präzisions-Stanzteile. Vieweg+Teubner Verlag, 10. edition, June, 2012,
Machine tools
Metalworking tools
Press tools | Punching machine | Engineering | 805 |
51,693,129 | https://en.wikipedia.org/wiki/Castrol%20Technology%20Centre | The Castrol Technology Centre is a research institute owned by BP in South Oxfordshire, north of Whitchurch-on-Thames.
History
Castrol
Castrol was founded by C. C. Wakefield in 1899, making lubricants (the Wakefield lubricator) for railways.
The research site is based at Bozedown House, a former private residence originally built by William Fanning c.1870 and then rebuilt by Charles Palmer in 1907 after the original house was destroyed by fire. It became a chemical research site in the 1950s and was purchased by Castrol in 1976.
In 1993 it won the Queen's Award for Technological Achievement for its Castrol Marine Cyltech 80. Castrol employs around 7,000 staff worldwide. Castrol was bought by BP in 2000.
Structure
The site is around three-quarters of a mile north of the River Thames, east of the B471, accessed from the A4074 at Woodcote. The site has around 500 staff.
Function
Castrol has twelve research sites around the world. The site at Pangbourne is the largest of the twelve sites. Research is done on rheology and the viscosity of engine oil.
See also
Former Esso Research Centre in Oxfordshire
Former Shell Technology Centre in Cheshire
References
External links
Where we operate on BP website
CASTROL R&D: research (archived, 13 Mar 2015)
BP buildings and structures
Chemical industry in the United Kingdom
Chemical research institutes
Motor oils
Research institutes established in 1907
Research institutes in Oxfordshire
South Oxfordshire District
1907 establishments in England | Castrol Technology Centre | Chemistry | 310 |
72,411,605 | https://en.wikipedia.org/wiki/Silvilization | Silvilization is a conceptual framework or a vision of the world whereby the forest, a metaphor for primordial living, is the best place for human development and fulfilment. It is a portmanteau of the Latin word silva, meaning forest, and civilization.
History
The term was first coined by Pierre-Doris Maltais, leader of the Iriadamant eco-cult. Erkki Pulliainen, an MP of the Green League, in collaboration with Maltais and the University of Helsinki, implemented the interdisciplinary ESSOC project (“Ecological Sylvilisation and Survival with the Aid of Original Cultures”) in 1991. The project was considered a failure.
In 1997, a publication in the journal Interculture by the Intercultural Institute of Montreal was devoted entirely to the theme of silvilization and ecosophy. The articles were written by authors such as Edward Goldsmith, Gary Snyder, and Gita Mehta.
References
Sociology
Ecology | Silvilization | Biology | 198 |
63,406,512 | https://en.wikipedia.org/wiki/Blue%20space | In urban planning and design, blue space (or blue infrastructure) comprises areas dominated by surface waterbodies or watercourses. In conjunction with greenspace (parks, gardens, etc. specifically: urban open space), it may help in reducing the risks of heat-related illness from high urban temperatures (urban heat island).
Substantial urban waterbodies naturally exist as integral features of the geography of many cities because of their historical development, for example the River Thames in London.
Accessible blue spaces can help revitalize neighborhoods and promote increased social connectedness, as seen in waterfront renovation projects like the Chattanooga Waterfront (Chattanooga, Tennessee), the CityDeck in Green Bay, Wisconsin, or Brooklyn Bridge Park in New York City, further enhanced by waterfront festivals such as the Christmas lights in Medellín, Colombia. Design guidelines promoting healthy buildings, such as WELL (managed by the International WELL Building Institute, IWBI) and Fitwel (developed and managed by the Center for Active Design, CfAD), recommend incorporating water features as a strategy to improve the health and wellness of building occupants, and "the 9 foundations of a Healthy Building", developed at the Harvard T.H. Chan School of Public Health, also recommends indoor access to nature views or nature-inspired elements.
Because neighborhoods with access to attractive natural features are susceptible to gentrification, the social benefits associated with waterbodies can be unequally distributed, with less affluent areas lacking access to good quality blue spaces.
Health benefits
Proximity to water bodies may bring some risks to humans, like water-borne diseases in drinking water, flooding risks, or drowning. But scientific evidence shows that exposure to blue spaces is also associated with a variety of health benefits to those near water bodies.
This is described by marine biologist Wallace J. Nichols in his book Blue Mind. Another of the mechanisms by which this phenomenon can be explained is by the Biophilia hypothesis developed by Edward O. Wilson. This theory states that humans have developed a strong connection with nature throughout their evolution that leads to subconscious seeking for natural environments, including green and blue spaces.
Recent research has identified three main pathways that can further explain why proximity to green and blue spaces can be beneficial to health.
Mitigation addresses these health benefits in relationship to the physical improvements that natural environments bring to the built environment, such as reduction of urban heat island, traffic air pollution or traffic noise.
Instoration focuses on the promotion of physical activity and other positive outcomes associated with increased physical activity and social connectivity promoted by natural spaces.
Restoration explains how the non-threatening characteristics of the natural environments may reduce negative feelings and increase cognitive restoration.
Assessing the environmental benefits of a blue space intervention can be done by conducting a Health impact assessment (HIA).
Effects on physical health
Increased physical activity
A variety of studies have found that people living near coastal areas are less sedentary and more likely to engage in the moderate and vigorous physical activity adequate for health, which could be explained by the encouraging presence of walking paths along the coast. Another possible explanation is found in the aesthetic attributes of blue spaces that may motivate individuals to engage in physical activities there. A study in England found that although more intense activities were conducted on visits to countryside and urban green spaces compared to visits to coastal environments, coastal visits were associated with the highest overall energy expenditure because activity in coastal environments lasted longer. Results differed by the urbanity or rurality of the respondents' residence and also by how far respondents travelled to their destination.
Proximity to water bodies alone is not enough to promote increased levels of physical activity, as those bodies need to be accessible to people. A study focusing on teenagers found that those living near beaches that had a major road between their homes and the water body had lower levels of physical activity than those with a direct access to the beach.
Reduced obesity
Visiting blue spaces may reduce obesity as it promotes increased physical activity. One study has suggested that living far from usable green space or waterfront in urban areas may increase the risk of obesity.
Improved respiratory health
Living near blue spaces can improve the quality of life of people with respiratory diseases, such as asthma, which could be explained by the mists and sprays generated by the water movement, as shown in a study measuring the impact of green and blue spaces on the health of those with chronic obstructive pulmonary disease (COPD).
Mental health
Improved overall health
Researchers found that individuals across 15 countries in Europe and Australia report better general health when they live closer to the coast or visit it more often. Researchers also found a reduction of psychiatric cases among people living near green or coastal areas. Some of the studies found that ocean exposure or running along a river helped war veterans suffering from PTSD. Others found that engaging in water-related activities such as surfing can help people cope with mental health issues and develop self-confidence and self-reliance skills. A large study looking at links between childhood exposure to blue spaces and adult well-being found that exposure to blue spaces in childhood was associated with better adult well-being.
Improved mood and happiness
Exposure to blue spaces is also linked to increased happiness. A group of researchers studying the effect of green and blue spaces on happiness used a mobile app to track the moods of people when they were near water landscapes. The researchers found increased levels of happiness in people near water bodies. Consistent with the findings on physical health, the positive effects on mood associated with blue spaces seem to diminish as the distance between the residence and the water increases.
Improved recovery from drug and alcohol addiction
Educational interventions in blue spaces - such as sailing - have been shown to have positive perceived effects on people undergoing drug and alcohol rehabilitation.
Quality assessment tools
In order to understand how blue spaces may influence health-promoting behaviours, a group of researchers focusing on blue spaces has developed a set of novel tools specifically designed to quantify the quality and potential health benefits of these spaces, the risks associated with their use, and environmental quality. The BlueHealth Environmental Assessment Tool (BEAT) enables comparable assessment of environmental aspects and attributes that influence access to, use of, and health-promoting activities in blue spaces. The tool has been developed to be used by communities and urban/landscape designers.
See also
Urban green space
Urban ecology
Urban water management
Green belt
Healthy city
Healthy buildings
Public health
Green infrastructure
References
Urban design
Public health
Rivers
Bodies of water
Coastal and oceanic landforms
Lakes | Blue space | Environmental_science | 1,302 |
555,119 | https://en.wikipedia.org/wiki/Displacement%20current | In electromagnetism, displacement current density is the quantity appearing in Maxwell's equations that is defined in terms of the rate of change of $\mathbf{D}$, the electric displacement field. Displacement current density has the same units as electric current density, and it is a source of the magnetic field just as actual current is. However it is not an electric current of moving charges, but a time-varying electric field. In physical materials (as opposed to vacuum), there is also a contribution from the slight motion of charges bound in atoms, called dielectric polarization.
The idea was conceived by James Clerk Maxwell in his 1861 paper On Physical Lines of Force, Part III in connection with the displacement of electric particles in a dielectric medium. Maxwell added displacement current to the electric current term in Ampère's circuital law. In his 1865 paper A Dynamical Theory of the Electromagnetic Field Maxwell used this amended version of Ampère's circuital law to derive the electromagnetic wave equation. This derivation is now generally accepted as a historical landmark in physics by virtue of uniting electricity, magnetism and optics into one single unified theory. The displacement current term is now seen as a crucial addition that completed Maxwell's equations and is necessary to explain many phenomena, most particularly the existence of electromagnetic waves.
Explanation
The electric displacement field is defined as:

$$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P},$$

where:
$\varepsilon_0$ is the permittivity of free space;
$\mathbf{E}$ is the electric field intensity; and
$\mathbf{P}$ is the polarization of the medium.
Differentiating this equation with respect to time defines the displacement current density, which therefore has two components in a dielectric (see also the "displacement current" section of the article "current density"):

$$\mathbf{J}_\mathrm{D} = \frac{\partial \mathbf{D}}{\partial t} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{\partial \mathbf{P}}{\partial t}.$$
The first term on the right hand side is present in material media and in free space. It doesn't necessarily come from any actual movement of charge, but it does have an associated magnetic field, just as a current does due to charge motion. Some authors apply the name displacement current to the first term by itself.
The second term on the right hand side, called polarization current density, comes from the change in polarization of the individual molecules of the dielectric material. Polarization results when, under the influence of an applied electric field, the charges in molecules have moved from a position of exact cancellation. The positive and negative charges in molecules separate, causing an increase in the state of polarization $\mathbf{P}$. A changing state of polarization corresponds to charge movement and so is equivalent to a current, hence the term "polarization current". Thus,

$$\mathbf{J}_\mathrm{P} = \frac{\partial \mathbf{P}}{\partial t}.$$

This polarization is the displacement current as it was originally conceived by Maxwell. Maxwell made no special treatment of the vacuum, treating it as a material medium. For Maxwell, the effect of $\mathbf{P}$ was simply to change the relative permittivity $\varepsilon_r$ in the relation $\mathbf{D} = \varepsilon_r \varepsilon_0 \mathbf{E}$.
The modern justification of displacement current is explained below.
Isotropic dielectric case
In the case of a very simple dielectric material the constitutive relation holds:

$$\mathbf{D} = \varepsilon \mathbf{E},$$

where the permittivity $\varepsilon = \varepsilon_r \varepsilon_0$ is the product of:
$\varepsilon_0$, the permittivity of free space, or the electric constant; and
$\varepsilon_r$, the relative permittivity of the dielectric.

In the equation above, the use of $\varepsilon$ accounts for the polarization (if any) of the dielectric material.
The scalar value of displacement current may also be expressed in terms of electric flux:

$$I_\mathrm{D} = \varepsilon \frac{\partial \Phi_E}{\partial t}.$$

The forms in terms of scalar $\varepsilon$ are correct only for linear isotropic materials. For linear non-isotropic materials, $\varepsilon$ becomes a matrix; even more generally, $\varepsilon$ may be replaced by a tensor, which may depend upon the electric field itself, or may exhibit frequency dependence (hence dispersion).
For a linear isotropic dielectric, the polarization is given by:

$$\mathbf{P} = \chi_e \varepsilon_0 \mathbf{E},$$

where $\chi_e$ is known as the susceptibility of the dielectric to electric fields. Note that

$$\varepsilon = \varepsilon_r \varepsilon_0 = \left(1 + \chi_e\right) \varepsilon_0.$$
Necessity
Some implications of the displacement current follow, which agree with experimental observation, and with the requirements of logical consistency for the theory of electromagnetism.
Generalizing Ampère's circuital law
Current in capacitors
An example illustrating the need for the displacement current arises in connection with capacitors with no medium between the plates. Consider the charging capacitor in the figure. The capacitor is in a circuit that causes equal and opposite charges to appear on the left plate and the right plate, charging the capacitor and increasing the electric field between its plates. No actual charge is transported through the vacuum between its plates. Nonetheless, a magnetic field exists between the plates as though a current were present there as well. One explanation is that a displacement current "flows" in the vacuum, and this current produces the magnetic field in the region between the plates according to Ampère's law:

$$\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 I_\mathrm{D},$$

where
$\oint_C$ is the closed line integral around some closed curve $C$;
$\mathbf{B}$ is the magnetic field measured in teslas;
$\cdot$ is the vector dot product;
$\mathrm{d}\boldsymbol{l}$ is an infinitesimal vector line element along the curve $C$, that is, a vector with magnitude equal to the length element of $C$, and direction given by the tangent to the curve $C$;
$\mu_0$ is the magnetic constant, also called the permeability of free space; and
$I_\mathrm{D}$ is the net displacement current that passes through a small surface bounded by the curve $C$.
The magnetic field between the plates is the same as that outside the plates, so the displacement current must be the same as the conduction current in the wires, that is,

$$I_\mathrm{D} = I,$$

which extends the notion of current beyond a mere transport of charge.
Next, this displacement current is related to the charging of the capacitor. Consider the imaginary closed cylindrical surface $S$ surrounding the left plate. The charging current, say $I$, enters the cylinder through its left face $L$, where the wire crosses it, but no conduction current (no transport of real charges) crosses the right face $R$, which lies in the gap between the plates. Notice that the electric field $E$ between the plates increases as the capacitor charges. That is, in a manner described by Gauss's law, assuming no dielectric between the plates:

$$Q(t) = \varepsilon_0 \oint_S \mathbf{E}(t) \cdot \mathrm{d}\mathbf{S},$$

where $Q(t)$ is the charge on the left plate and $S$ refers to the imaginary cylindrical surface. Assuming a parallel plate capacitor with uniform electric field, and neglecting fringing effects around the edges of the plates, charge conservation for the enclosed volume gives

$$I = \frac{\mathrm{d}Q}{\mathrm{d}t} = \varepsilon_0 \frac{\mathrm{d}}{\mathrm{d}t} \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{S} = \varepsilon_0 \frac{\mathrm{d}E}{\mathrm{d}t}\, S_R,$$

where $S_R$ is the area of the right face $R$. Only face $R$ contributes to the flux integral: on $R$ the outward unit normal points along the field between the plates, while the electric field at face $L$ is zero because $L$ lies outside the capacitor. Under the assumption of a uniform electric field distribution inside the capacitor, the displacement current density $J_\mathrm{D}$ is found by dividing by the area of the surface:

$$J_\mathrm{D} = \frac{I_\mathrm{D}}{S_R} = \varepsilon_0 \frac{\partial E}{\partial t},$$

where $I_\mathrm{D}$ is the displacement current crossing the cylindrical surface (which must equal $I$) and $J_\mathrm{D}$ is the flow of charge per unit area crossing the cylindrical surface through face $R$.
Combining these results, the magnetic field is found using the integral form of Ampère's law with an arbitrary choice of contour, provided the displacement current density term is added to the conduction current density (the Ampère–Maxwell equation):
$$\oint_{\partial S} \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 \int_S \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S}.$$
This equation says that the integral of the magnetic field around the edge of a surface is equal to the integrated current through any surface with the same edge, plus the displacement current term through whichever surface.
As depicted in the figure to the right, the current crossing surface $S_1$ is entirely conduction current. Applying the Ampère–Maxwell equation to surface $S_1$ yields:
$$B = \frac{\mu_0 I}{2 \pi r}.$$
However, the current crossing surface $S_2$ is entirely displacement current. Applying this law to surface $S_2$, which is bounded by exactly the same curve $\partial S$ but lies between the plates, produces:
$$B = \frac{\mu_0 I_\mathrm{D}}{2 \pi r}.$$
Any surface that intersects the wire has current passing through it, so Ampère's law gives the correct magnetic field. However, a second surface bounded by the same edge could be drawn passing between the capacitor plates, therefore having no conduction current passing through it. Without the displacement current term, Ampère's law would give zero magnetic field for this surface. Therefore, without the displacement current term Ampère's law gives inconsistent results: the magnetic field would depend on the surface chosen for integration. Thus the displacement current term is necessary as a second source term which gives the correct magnetic field when the surface of integration passes between the capacitor plates. Because the current is increasing the charge on the capacitor's plates, the electric field between the plates is increasing, and the rate of change of the electric field gives the correct value for the field found above.
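As a sanity check on the argument above, a short numeric sketch (with an assumed plate area and charging current, chosen only for illustration) confirms that the displacement current through the gap reproduces the conduction current in the wire:

```python
# For a charging parallel-plate capacitor in vacuum, E = Q/(eps0*A),
# so the displacement current eps0*A*dE/dt must equal I = dQ/dt.
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
A = 0.01                  # assumed plate area, m^2
I = 2.0e-6                # assumed charging current dQ/dt, A

dE_dt = I / (eps0 * A)    # rate of change of the uniform field between plates
I_D = eps0 * A * dE_dt    # displacement current through the gap

print(I_D)                # 2e-06 A, identical to the conduction current
```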
Mathematical formulation
In a more mathematical vein, the same results can be obtained from the underlying differential equations. Consider for simplicity a non-magnetic medium where the relative magnetic permeability is unity, and the complication of magnetization current (bound current) is absent, so that $\mu = \mu_0$ and $\mathbf{B} = \mu_0 \mathbf{H}$.
The current leaving a volume must equal the rate of decrease of charge in that volume. In differential form this continuity equation becomes:
$$\nabla \cdot \mathbf{J}_\mathrm{f} = -\frac{\partial \rho_\mathrm{f}}{\partial t},$$
where the left side is the divergence of the free current density and the right side is the rate of decrease of the free charge density. However, Ampère's law in its original form states:
$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J}_\mathrm{f},$$
which implies that the divergence of the current term vanishes, contradicting the continuity equation. (Vanishing of the divergence is a result of the mathematical identity that states the divergence of a curl is always zero.) This conflict is removed by addition of the displacement current, as then:
$$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J}_\mathrm{f} + \frac{\partial \mathbf{D}}{\partial t} \right)$$
and
$$\nabla \cdot (\nabla \times \mathbf{B}) = 0 = \mu_0 \left( \nabla \cdot \mathbf{J}_\mathrm{f} + \frac{\partial}{\partial t} \nabla \cdot \mathbf{D} \right),$$
which is in agreement with the continuity equation because of Gauss's law:
$$\nabla \cdot \mathbf{D} = \rho_\mathrm{f}.$$
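The identity doing the work here, that the divergence of a curl vanishes, can be verified symbolically. The sketch below uses SymPy with arbitrary placeholder component functions for $\mathbf{B}$:

```python
# Symbolic verification that div(curl(B)) = 0 for any smooth field B,
# the identity that makes the original Ampere law inconsistent with
# a time-varying charge density.
import sympy as sp

x, y, z = sp.symbols('x y z')
Bx, By, Bz = (sp.Function(n)(x, y, z) for n in ('Bx', 'By', 'Bz'))

curl = (sp.diff(Bz, y) - sp.diff(By, z),
        sp.diff(Bx, z) - sp.diff(Bz, x),
        sp.diff(By, x) - sp.diff(Bx, y))

div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
print(sp.simplify(div_curl))  # prints 0
```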
Wave propagation
The added displacement current also leads to wave propagation by taking the curl of the equation for the magnetic field. In vacuum, the displacement current density takes the form:
$$\mathbf{J}_\mathrm{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$$
Substituting this form for $\mathbf{J}$ into Ampère's law, and assuming there is no bound or free current density contributing to $\mathbf{J}$:
$$\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},$$
with the result:
$$\nabla \times (\nabla \times \mathbf{B}) = \mu_0 \varepsilon_0 \frac{\partial}{\partial t} (\nabla \times \mathbf{E}).$$
However,
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$$
leading to the wave equation:
$$\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2},$$
where use is made of the vector identity that holds for any vector field $\mathbf{V}(\mathbf{r}, t)$:
$$\nabla \times (\nabla \times \mathbf{V}) = \nabla (\nabla \cdot \mathbf{V}) - \nabla^2 \mathbf{V},$$
and the fact that the divergence of the magnetic field is zero. An identical wave equation can be found for the electric field by taking the curl:
$$\nabla \times (\nabla \times \mathbf{E}) = -\frac{\partial}{\partial t} \nabla \times \mathbf{B} = -\mu_0 \frac{\partial}{\partial t} \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right).$$
If $\mathbf{J}$, $\mathbf{P}$, and $\rho$ are zero, the result is:
$$\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}.$$
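The coefficient in the wave equation fixes the propagation speed, $c = 1/\sqrt{\mu_0 \varepsilon_0}$; a quick numerical evaluation with the standard constants recovers the speed of light:

```python
# Propagation speed implied by the vacuum wave equation above.
import math

mu0 = 4e-7 * math.pi        # permeability of free space, H/m
eps0 = 8.8541878128e-12     # permittivity of free space, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(c)                    # ~2.998e8 m/s, the speed of light
```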
The electric field can be expressed in the general form:
$$\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t},$$
where $\varphi$ is the electric potential (which can be chosen to satisfy Poisson's equation) and $\mathbf{A}$ is a vector potential (i.e. the magnetic vector potential, not to be confused with surface area, as $A$ is denoted elsewhere). The $-\nabla \varphi$ component on the right-hand side is the Gauss's law component, and this is the component that is relevant to the conservation of charge argument above. The second term on the right-hand side, $-\partial \mathbf{A} / \partial t$, is the one relevant to the electromagnetic wave equation, because it is the term that contributes to the curl of $\mathbf{E}$. Because of the vector identity that says the curl of a gradient is zero, $-\nabla \varphi$ does not contribute to $\nabla \times \mathbf{E}$.
History and interpretation
Maxwell's displacement current was postulated in part III of his 1861 paper 'On Physical Lines of Force'. Few topics in modern physics have caused as much confusion and misunderstanding as that of displacement current. This is in part due to the fact that Maxwell used a sea of molecular vortices in his derivation, while modern textbooks operate on the basis that displacement current can exist in free space. Maxwell's derivation is unrelated to the modern-day derivation for displacement current in the vacuum, which is based on consistency between Ampère's circuital law for the magnetic field and the continuity equation for electric charge.
Maxwell's purpose is stated by him in the paper (Part I, p. 161):
He is careful to point out the treatment is one of analogy:
In part III, in relation to displacement current, he says
Maxwell was evidently driving at magnetization, even though the same introduction clearly talks about dielectric polarization.
Maxwell compared the speed of electricity measured by Wilhelm Eduard Weber and Rudolf Kohlrausch (193,088 miles per second) and the speed of light determined by the Fizeau experiment (195,647 miles per second). From the near agreement of these two values, he concluded that "light consists of transverse undulations in the same medium that is the cause of electric and magnetic phenomena."
But although the above quotations point towards a magnetic explanation for displacement current, for example, based upon the divergence of the above curl equation, Maxwell's explanation ultimately stressed linear polarization of dielectrics:
With some change of symbols (and units), combined with the results deduced in this section, these equations take the familiar form for a parallel plate capacitor with uniform electric field, neglecting fringing effects around the edges of the plates:
When it came to deriving the electromagnetic wave equation from displacement current in his 1865 paper 'A Dynamical Theory of the Electromagnetic Field', he got around the problem of the non-zero divergence associated with Gauss's law and dielectric displacement by eliminating the Gauss term and deriving the wave equation exclusively for the solenoidal magnetic field vector.
Maxwell's emphasis on polarization diverted attention towards the electric capacitor circuit, and led to the common belief that Maxwell conceived of displacement current so as to maintain conservation of charge in an electric capacitor circuit. There are a variety of debatable notions about Maxwell's thinking, ranging from his supposed desire to perfect the symmetry of the field equations to the desire to achieve compatibility with the continuity equation.
See also
Electromagnetic wave equation
Ampère's circuital law
Capacitance
References
Maxwell's papers
On Faraday's Lines of Force, Maxwell's paper of 1855
On Physical Lines of Force, Maxwell's paper of 1861
A Dynamical Theory of the Electromagnetic Field, Maxwell's paper of 1864
Further reading
AM Bork Maxwell, Displacement Current, and Symmetry (1963)
AM Bork Maxwell and the Electromagnetic Wave Equation (1967)
External links
Electric current
Electricity concepts
Electrodynamics
Electromagnetism | Displacement current | Physics,Mathematics | 2,761 |
14,814,984 | https://en.wikipedia.org/wiki/60S%20ribosomal%20protein%20L31 | 60S ribosomal protein L31 is a protein that in humans is encoded by the RPL31 gene.
Function
Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L31E family of ribosomal proteins. It is located in the cytoplasm. Higher levels of expression of this gene in familial adenomatous polyps compared to matched normal tissues have been observed. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
Interactions
RPL31 has been shown to interact with BRCA1.
References
External links
Further reading
Ribosomal proteins | 60S ribosomal protein L31 | Chemistry | 183 |
875,035 | https://en.wikipedia.org/wiki/Corrugated%20galvanised%20iron | Corrugated galvanised iron (CGI) or steel, colloquially corrugated iron (near universal), wriggly tin (taken from UK military slang), pailing (in Caribbean English), corrugated sheet metal (in North America), zinc (in Cyprus and Nigeria) or custom orb / corro sheet (Australia), is a building material composed of sheets of hot-dip galvanised mild steel, cold-rolled to produce a linear ridged pattern in them. Although it is still popularly called "iron" in the UK, the material used is actually steel (which is iron alloyed with carbon for strength, commonly 0.3% carbon), and only the surviving vintage sheets may actually be made up of 100% iron. The corrugations increase the bending strength of the sheet in the direction perpendicular to the corrugations, but not parallel to them, because the steel must be stretched to bend perpendicular to the corrugations. Normally each sheet is manufactured longer in its strong direction.
CGI is lightweight and easily transported. It was and still is widely used especially in rural and military buildings such as sheds and water tanks. Its unique properties were used in the development of countries such as Australia from the 1840s, and it is still helping developing countries today.
History
Henry Robinson Palmer, architect and engineer to the London Dock Company, was granted a patent in 1829 for "indented or corrugated metallic sheets". It was originally made from wrought iron produced by puddling. It proved to be light, strong, corrosion-resistant, and easily transported, and particularly lent itself to prefabricated structures and improvisation by semi-skilled workers. It soon became a common construction material in rural areas in the United States, Argentina, Spain, New Zealand and Australia and later India, and in Australia and Argentina also became (and remains) a common roofing material even in urban areas. In Australia and New Zealand particularly it has become part of the cultural identity, and fashionable architectural use has become common. CGI is also widely used as building material in African slums and informal settlements.
For roofing purposes, the sheets are laid somewhat like tiles, with a lateral overlap of one and a half corrugations and a vertical overlap between successive sheets, to provide for waterproofing. CGI is also a common construction material for industrial buildings throughout the world.
Wrought iron CGI was gradually replaced by mild steel from around the 1890s, and iron CGI is no longer obtainable, but the common name has not been changed. Galvanised sheets with simple corrugations are also being gradually displaced by 55% Al-Zn coated steel or coil-painted sheets with complex profiles. CGI remains common.
Corrugation today
Today the corrugation process is carried out using the process of roll forming. This modern process is highly automated to achieve high productivity and low costs associated with labour. In the corrugation process sheet metal is pulled off huge rolls and through rolling dies that form the corrugation. After the sheet metal passes through the rollers it is automatically sheared off at a desired length. The traditional shape of corrugated material is the round wavy style, but different dies form a variety of shapes and sizes. Industrial buildings are often built with and covered by trapezoidal sheet metal.
Many materials today undergo the corrugation process. The most common materials for corrugated iron are ferrous alloys (e.g. stainless steels), aluminium and copper. Regular ferrous alloys are the most common due to price and availability. Common sizes of corrugated material can range from a very thin 30 gauge to a relatively thick 6 gauge. Thicker or thinner gauges may also be produced.
Other materials such as thermoplastic and fiberglass-reinforced plastic sheets are also produced with corrugations. Clear or translucent products can allow light to penetrate below.
Pitch and depth
The corrugations are described in terms of pitch (the distance between two crests) and depth (the height from the top of a crest to the bottom of a trough). It is important for the pitch and depth to be quite uniform, in order for the sheets to be easily stackable for transport, and to overlap neatly when joining two sheets. Pitches have ranged from 25 mm (1 inch) to 125 mm (5 inches).
It was once common for CGI used for vertical walls to have a shorter pitch and depth than roofing CGI. This shorter pitched material was sometimes called "rippled" instead of "corrugated".
However nowadays, nearly all CGI produced has the same pitch of 3 inches (76 mm).
A design of corrugated galvanised steel sheets "Proster 21", used as formwork, has 21 millimetre deep V-shaped pits.
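As a rough illustration of how pitch and side lap determine usable coverage, the sketch below computes the effective cover width of a lapped sheet. The sheet size is a hypothetical example; the 76 mm pitch and one-and-a-half-corrugation lap are the figures given above:

```python
# Effective cover width of a corrugated sheet after the side lap.
# Sheet size is an assumed example; pitch and lap are the figures above.
pitch_mm = 76.0                 # standard 3 in corrugation pitch
corrugations_per_sheet = 10     # assumed sheet width of 10 corrugations
lap_corrugations = 1.5          # typical lateral overlap

sheet_width = corrugations_per_sheet * pitch_mm   # 760 mm overall
cover_width = sheet_width - lap_corrugations * pitch_mm

print(cover_width)              # 646.0 mm of net coverage per sheet
```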
Corrosion
Although galvanising inhibits the corrosion of steel, rusting is inevitable, especially in marine areas, where the salt water encourages rust, and in areas where the local rainfall is acidic. Corroded corrugated steel roofs can nevertheless last for many years, particularly if the sheets are protected by a layer of paint.
See also
Chattel house
Metal roof
Nissen hut
Quonset hut
Theorema Egregium, for more information on why corrugation increases strength
Tin tabernacle
References
External links
An archive of building constructed from corrugated iron /
Steels
Building materials
Corrugation
Roofing materials | Corrugated galvanised iron | Physics,Engineering | 1,106 |
61,181,985 | https://en.wikipedia.org/wiki/Isolated%20organ%20perfusion%20technique | The isolated organ perfusion technique is employed to establish perfusion and circulation of an organ independently of the body's systemic circulation, for purposes such as organ-localized chemotherapy; organ-targeted delivery of drugs, genes, or other agents; organ transplantation; and organ injury recovery. The technique has been widely studied in animals and humans for decades. Before implementation, a perfusion system is selected, and the process can be similar to an organ bath. The isolated organ perfusion technique is, nevertheless, usually conducted in vivo, without removing the whole organ from the body.
See also
ECMO
References
Oncology
Organ donation
Organ systems
Organ transplantation | Isolated organ perfusion technique | Biology | 140 |
173,449 | https://en.wikipedia.org/wiki/Trinitron | Trinitron was Sony's brand name for its line of aperture-grille-based CRTs used in television sets and computer monitors. It was one of the first fundamentally new color television display systems to enter the market since the 1950s. Constant improvement in the basic technology and attention to overall quality allowed Sony to charge a premium for Trinitron devices into the 1990s.
Patent protection on the basic Trinitron design ran out in 1996, and it quickly faced a number of competitors at much lower prices.
The name Trinitron was derived from trinity, meaning the union of three, and tron from electron tube, after the way that the Trinitron combined the three separate electron guns of other CRT designs into one.
History
Color television
Color television had been demonstrated since the 1920s, starting with John Logie Baird's system. However, it was only in the late 1940s that practical systems were developed, by both CBS and RCA. At the time, a number of systems were being proposed that used separate red, green and blue signals (RGB), broadcast in succession. Most systems broadcast entire frames in sequence, with a colored filter (or "gel") that rotated in front of an otherwise conventional black and white television tube. Because they broadcast separate signals for the different colors, all of these systems were incompatible with existing black and white sets. Another problem was that the mechanical filter made them flicker unless very high refresh rates were used. In spite of these problems, the United States Federal Communications Commission selected a sequential-frame 144 frame/s standard from CBS as their color broadcast standard in 1950.
RCA worked along different lines entirely, using the luminance-chrominance system. This system did not directly encode or transmit the RGB signals; instead it combined these colors into one overall brightness figure, the "luminance". Luminance closely matched the black and white signal of existing broadcasts, allowing it to be displayed on existing televisions. This was a major advantage over the mechanical systems being proposed by other groups. Color information was then separately encoded and folded into the signal as a high-frequency modification to produce a composite video signal – on a black and white television this extra information would be seen as a slight randomization of the image intensity, but the limited resolution of existing sets made this invisible in practice. On color sets the signal would be extracted, decoded back into RGB, and displayed.
Although RCA's system had enormous benefits, it had not been successfully developed because it was difficult to produce the display tubes. Black and white TVs used a continuous signal and the tube could be coated with an even deposit of phosphor. With the compatible color encoding scheme originally developed by Georges Valensi in 1938, the color information changed continually along each scan line, far too fast for any sort of mechanical filter to follow. Instead, the phosphor had to be broken down into a discrete pattern of colored spots. Focusing the proper signal on each of these tiny spots was beyond the capability of electron guns of the era, and RCA's early experiments used three-tube projectors, or mirror-based systems known as "Triniscope".
Shadow masks
RCA eventually solved the problem of displaying the color images with their introduction of the shadow mask. The shadow mask consists of a thin sheet of steel with tiny holes photo etched into it, placed just behind the front surface of the picture tube. Three guns, arranged in a triangle, were all aimed at the holes. Stray electrons at the edge of the beam were cut off by the mask, creating a sharply focused spot that was small enough to hit a single colored phosphor on the screen. Since each of the guns was aimed at the hole from a slightly different angle, the spots of phosphor on the tube could be separated slightly to prevent overlap.
The disadvantage of this approach was that for any given amount of gun power, the shadow mask filtered out the majority of the energy. To ensure there was no overlap of the beam on the screen, the dots had to be separated, and they covered perhaps 25% of the screen's surface. This led to very dim images, requiring much greater electron beam power in order to provide a useful picture. Moreover, the system was highly dependent on the relative angles of the beams between the three guns, which required constant adjustment by the user to ensure the guns hit the correct colors. In spite of this, the technical superiority of the RCA system was overwhelming compared to the CBS system, and it was selected as the new NTSC standard in 1953. The first broadcast using the new standard occurred on New Year's Day in 1954, when NBC broadcast the Tournament of Roses Parade.
In spite of this early start, only a few years after regularly scheduled television broadcasting had begun, consumer uptake of color televisions was very slow to start. The dim images, constant adjustments and high costs had kept them in a niche of their own. Low consumer acceptance led to a lack of color programming, further reducing the demand for the sets in a supply and demand problem. In the United States in 1960, only 1 color set was sold for every 50 sets sold in total.
Chromatron
Sony had entered the television market in 1960 with the black and white TV8-301, the first non-projection type all-transistor television. A combination of factors, including its small screen size, limited its sales to niche markets. Sony engineers had been studying the color market, but the situation in Japan was even worse than the U.S.; they accounted for only 300 of the 9 million sets sold that year. But by 1961, dealers were asking the Sony sales department when a color set would be available, and the sales department put pressure on engineering in turn. Masaru Ibuka, Sony's president and co-founder, steadfastly refused to develop a system based on RCA's shadow mask design, which he considered technically deficient. He insisted on developing a unique solution.
In 1961, a Sony delegation was visiting the IEEE trade show in New York City, including Ibuka, Akio Morita (Sony's other co-founder) and Nobutoshi Kihara, who was promoting his new CV-2000 home video tape recorder. This was Kihara's first trip abroad and he spent much of his time wandering the trade floor, where he came across a small booth by the small company Autometric. They were demonstrating a new type of color television based on the Chromatron tube, which used a single electron gun and a vertical grille of electrically charged thin wires instead of a shadow mask. The resulting image was far brighter than anything the RCA design could produce, and lacked the convergence problems that required constant adjustments. He quickly brought Morita and Ibuka to see the design, and Morita was "sold" on the spot.
Morita arranged a deal with Paramount Pictures, who was paying for Chromatic Labs' development of the Chromatron, taking over the entire project. In early 1963, Senri Miyaoka was sent to Manhattan to arrange the transfer of the technology to Sony, which would lead to the closing of Chromatic Labs. He was unimpressed with the labs, describing the windowless basement as "squalor". The American team was only too happy to point out the serious flaws in the Chromatron system, telling Miyaoka that the design was hopeless. By September 1964, a 17-inch prototype had been built in Japan, but mass-production test runs were demonstrating serious problems. Sony engineers were unable to make a version of Chromatron that could be reliably mass-produced.
When sets were finally made available in late 1964, they were put on the market at a competitive 198,000 yen (US$550), but cost the company over 400,000 yen (US$1111.11) to produce. Ibuka had bet the company on Chromatron and had already set up a new factory to produce them with the hopes that the production problems would be ironed out and the line would become profitable. After several thousand sets had shipped, the situation was no better, while Panasonic and Toshiba were in the process of introducing sets based on RCA licenses. By 1966, the Chromatron was breaking the company financially.
Trinitron
In the autumn of 1966, Ibuka finally gave in, and announced he would personally lead a search for a replacement for Chromatron. Susumu Yoshida was sent to the U.S. to look for potential licenses, and was impressed with the improvements that RCA had made in overall brightness by introducing new rare-earth phosphors on the screen. He also saw General Electric's "Porta-color" design, using three guns in a row instead of a triangle, which allowed a greater portion of the screen to be lit. His report was cause for concern in Japan, where it seemed Sony was falling ever-farther behind the U.S. designs. They might be forced to license the shadow mask system if they wanted to remain competitive.
Ibuka was not willing to give up entirely, and had his 30 engineers explore a wide variety of approaches to see if they could come up with their own design. At one point, Yoshida asked Senri Miyaoka if the in-line gun arrangement used by GE could be replaced by a single gun with three cathodes; this would be more difficult to build, but be lower cost in the long run. Miyaoka built a prototype and was astonished by how well it worked, although it had focusing problems. Later that week, on Saturday, Miyaoka was summoned to Ibuka's office while he was attempting to leave work to attend his weekly cello practice. Yoshida had just informed Ibuka about his success, and the two asked Miyaoka if they could really develop the gun into a workable product. Miyaoka, anxious to leave, answered yes, excused himself, and left. The following Monday, Ibuka announced that Sony would be developing a new color television tube, based on Miyaoka's prototype. By February 1967, the focusing problems had been solved, and because there was a single gun, the focusing was achieved with permanent magnets instead of a coil, and required no manual adjustments after manufacturing.
During development, Sony engineer Akio Ohgoshi introduced another modification. GE's system improved on the RCA shadow mask by replacing the small round holes with slightly larger rectangles. Since the guns were in-line, their electrons would land onto three rectangular patches instead of three smaller spots, about doubling the lit area. Ohgoshi proposed removing the mask entirely and replacing it with a series of vertical slots instead, lighting the entire screen. Although this would require the guns to be very carefully aligned with the phosphors on the tube in order to ensure they hit the right colors, with Miyaoka's new tube, this appeared possible. In practice, this proved easy to build but difficult to place in the tube – the fine wires were mechanically weak and tended to move when the tubes were bumped, resulting in shifting colors on the screen. This problem was solved by running several fine tungsten wires across the grille horizontally to keep the vertical wires of the grille in place.
The combination of three-in-one electron gun and the replacement of the shadow mask with the aperture grille resulted in a unique and easily patentable product. In spite of Trinitron and Chromatron having no technology in common, the shared single electron gun has led to many erroneous claims that the two are very similar, or the same.
Introduction, early models
Officially introduced by Ibuka in April 1968, the original 12 inch Trinitron (KV-1210) had a display quality that easily surpassed any commercial set in terms of brightness, color fidelity, and simplicity of operation. The vertical wires in the aperture grille meant that the tube had to be nearly flat vertically; this gave it a unique cylindrical look. It was also all solid state, with the exception of the picture tube itself, which allowed it to be much more compact and cool running than designs like GE's Porta-color. Some larger models such as the KV-1320UB for the United Kingdom market were initially fitted with 3AT2 valves for the extra high tension (high voltage) circuitry, before being redesigned as solid state in the early 70s.
Ibuka ended the press conference by claiming that 10,000 sets would be available by October, well beyond what engineering had told him was possible. Ibuka cajoled Yoshida to take over the effort of bringing the sets into production, and although Yoshida was furious at being put in charge of a task he felt was impossible, he finally accepted the assignment and successfully met the production goal. The KV-1210 was introduced in limited numbers in Japan in October as promised, and in the U.S. as the KV-1210U the following year.
Early color sets intended for the UK market had a PAL decoder that was different from those invented and licensed by Telefunken of Germany, who invented the PAL color system. The decoder inside the UK-sold Sony color Trinitron sets, from the KV-1300UB to the KV-1330UB, had an NTSC decoder adapted for PAL. The decoder used a 64 microsecond delay line to store every other line, but instead of using the delay line to average out the phase of the current line and the previous line, it simply repeated the same line twice. Any phase errors could then be compensated for by using a tint control knob on the front of the set, normally unneeded on a PAL set.
Reception
Reviews of the Trinitron were universally positive, although they all mentioned its high cost. Sony won an Emmy Award for the Trinitron in 1973. On his 84th birthday in 1992, Ibuka claimed the Trinitron was his proudest product.
New models quickly followed. Larger sizes at 19" and then 27" were introduced, as well as smaller, including a 7" portable. In the mid-1980s, a new phosphor coating was introduced that was much darker than earlier sets, giving the screens a black color when turned off, as opposed to the earlier light grey. This improved the contrast range of the picture. Early models were generally packaged in silver cases, but with the introduction of the darker screens, Sony also introduced new cases with a dark charcoal color, following a similar change in color taking place in the hi-fi world. This line expanded with 32", 35" and finally 40" units in the 1990s. In 1990, Sony released the first HD Trinitron TV set, for use with the Multiple sub-Nyquist sampling encoding standard.
In 1980, Sony introduced the "ProFeel" line of prosumer component televisions, consisting of a range of Trinitron monitors that could be connected to standardized tuners. The original lineup consisted of the KX-20xx1 20" and KX-27xx1 27" monitors (the "xx" is an identifier, PS for Europe, HF for Japan, etc.) the VTX-100ES tuner and TXT-100G TeleText decoder. They were often used with a set of SS-X1A stereo speakers, which matched the grey boxy styling of the suite. The concept was to build a market similar to contemporary stereo equipment, where components from different vendors could be mixed to produce a complete system. However, a lack of any major third party components, along with custom connectors between the tuner and monitors, meant that systems mixing fully compatible elements were never effectively realized. They were popular high-end units, however, and found a strong following in production companies where the excellent quality picture made them effective low-cost monitors. A second series of all-black units followed in 1986, the ProFeel Pro, sporting a space-frame around the back of the trapezoidal enclosure that doubled as a carrying handle and holder for the pop-out speakers. These units were paired with the VT-X5R tuner and optionally the APM-X5A speakers.
Sony also produced lines of Trinitron professional studio monitors, the PVM (Professional Video Monitor) and BVM (Broadcast Video Monitor) lines. These models were packaged in grey metal cubes with a variety of inputs that accepted practically any analog format. They originally used tubes similar to the ProFeel line, but over time, they gradually increased in resolution until the late 1990s when they offered over 900 lines. When these were cancelled as part of the wider Trinitron shutdown in 2007, professionals forced Sony to re-open two of the lines to produce the 20 and 14 inch models.
Among similar products, Sony produced the KV-1311 monitor/TV combination. It accepted NTSC-compatible video from various devices as well as analog broadcast TV. Along with its other functions, it had video and audio inputs and outputs as well as a wideband sound-IF decoded output. Its exterior looks much like the monitor illustrated here, with added TV controls.
By this time, Sony was well established as a supplier of reliable equipment; it was preferable to have minimal field failures instead of supporting an extensive service network for the entire United States.
Sony started developing the Trinitron for computer monitor use in the late 1970s. Demand was high, so high that there were examples of third party companies removing Trinitron tubes from televisions to use as monitors. In response, Sony started development of the GDM (Graphic Display Monitor) in 1983, which offered high resolution and faster refresh rates. Sony aggressively promoted the GDM and it became a standard on high-end monitors by the late 1980s. Particularly common models include the Apple Inc. 13" model that was originally sold with the Macintosh II starting in 1987. Well known users also included Digital Equipment Corporation, IBM, Silicon Graphics, Sun Microsystems and others. Demand for a lower cost solution led to the CDP series. In May 1988, the high-end 20 inch DDM model (Data Display Monitor) was introduced with a maximum resolution of 2,048 by 2,048, which went on to be used in the FAA's Advanced Automation System air traffic control system.
These developments meant that Sony was well placed to introduce high-definition televisions (HDTV). In April 1981, they announced the High Definition Video System (HDVS), a suite of MUSE equipment including cameras, recorders, Trinitron monitors and projection TVs.
Sony shipped its 100 millionth Trinitron screen in July 1994, 25 years after it had been introduced. New uses in the computer field and the demand for higher resolution televisions to match the quality of DVD when it was introduced in 1996 led to increased sales, with another 180 million units delivered in the next decade ("Sony to stop making old-style cathode ray tube TVs", Wall Street Journal MarketWatch, 3 March 2008).
End of Trinitron
Sony's patent on the Trinitron display ran out in 1996, after 20 years. After the expiration of Sony's Trinitron patent, manufacturers like Mitsubishi (whose monitor production is now part of NEC Display Solutions) were free to use the Trinitron design for their own product line without license from Sony, although they could not use the Trinitron name. For example, Mitsubishi's are called Diamondtron. To some degree, the name Trinitron became a generic term referring to any similar set.
Sony responded with the FD Trinitron, which used computer-controlled feedback systems to ensure sharp focus across a flat screen. Initially introduced on their 27, 32 and 36 inch models in 1998, the new tubes were offered in a variety of resolutions for different uses. The basic WEGA models supported normal 480i signals, but a larger version offered 16:9 aspect ratios. The technology was quickly applied to the entire Trinitron range, from 13 to 36 inch. High resolution versions, Hi-Scan and Super Fine Pitch, were also produced. With the introduction of the FD Trinitron, Sony also introduced a new industrial style, leaving the charcoal colored sets introduced in the 1980s for a new silver styling.
Sony was not the only company producing flat screen CRTs. Other companies had already introduced high-end brands with flat-screen tubes, like Panasonic's Tau. Many other companies entered the market quickly, widely copying the new silver styling as well. The FD Trinitron was unable to regain the cachet that the Trinitron brand had previously possessed; in the 2004 Christmas season, they increased sales by 5%, but only at the cost of a 75% plunge in profits after being forced to lower costs to compete in the market.
At the same time, the introduction of plasma televisions, and then LCD-based ones, led to the high-end market being increasingly focused on the "thin" sets. Both of these technologies have well known problems, and for some time Sony explored a wide array of technologies that would improve upon them in the same way the Trinitron did on the shadow mask. Among these experiments were organic light-emitting diodes (OLED) and the field-emission display, but in spite of considerable effort, neither of these technologies matured into competitors at the time. Sony also introduced their Plasmatron displays, and later LCD as well, but these had no inherent technical advantages over similar sets from other companies. From 2006, all of Sony's BRAVIA television products are LCD displays, initially based on screens from Samsung, and later Sharp.
Sony eventually ended production of the Trinitron in Japan in 2004. In 2006, Sony announced that it would no longer market or sell Trinitrons in the United States or Canada, but it would continue to sell the Trinitron in China, India, and regions of South America using tubes delivered from their Singapore plant. Production in Singapore finally ended by the end of March 2008, only months after ending production of their rear-projection systems. Two lines of the factory were later brought back online to supply the professional market.
280 million Trinitron tubes were built. At its peak, 20 million were made annually.
Description
Basic concept
The Trinitron design incorporates two unique features: the single-gun three-cathode picture tube, and the vertically aligned aperture grille.
The single gun consists of a long-necked tube with a single electrode at its base, flaring out into a horizontally-aligned rectangular shape with three rectangular cathodes inside. Each cathode is fed the amplified signal from one of the decoded RGB signals.
The electrons from the cathodes are all aimed toward a single point at the back of the screen where they hit the aperture grille, a steel sheet with vertical slots cut in it. Due to the slight separation of the cathodes at the back of the tube, the three beams approach the grille at slightly different angles. When they pass through the grille they retain this angle, hitting their individual colored phosphors that are deposited in vertical stripes on the inside of the faceplate. The main purpose of the grille is to ensure that each beam strikes only the phosphor stripes for its color, much as does a shadow mask. However, unlike a shadow mask, there are essentially no obstructions along each entire phosphor stripe. Larger CRTs have a few horizontal stabilizing wires part way between top and bottom.
Advantages
In comparison to early shadow mask designs, the Trinitron grille cuts off much less of the signal coming from the electron guns. RCA tubes built in the 1950s cut off about 85% of the electron beam, while the grille cuts off about 25%. Improvements to the shadow mask designs continually narrowed this difference between the two designs, and by the late 1980s the difference in performance, at least theoretically, was eliminated.
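Taking the two cutoff figures quoted above at face value, a short calculation shows why the grille design was so much brighter than 1950s shadow masks for the same gun power:

```python
# Fraction of beam energy reaching the phosphors, from the figures above.
mask_blocked = 0.85     # 1950s RCA shadow mask blocks ~85%
grille_blocked = 0.25   # Trinitron aperture grille blocks ~25%

mask_through = 1 - mask_blocked      # 0.15
grille_through = 1 - grille_blocked  # 0.75

print(grille_through / mask_through)  # 5.0 -- roughly five times the energy
```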
Another advantage of the aperture grille was that the distance between the wires remained constant vertically across the screen. In the shadow mask design, the size of the holes in the mask is defined by the required resolution of the phosphor dots on the screen, which was constant. However, the distance from the guns to the holes changed; for dots near the center of the screen, the distance was its shortest, at points in the corners it was at its maximum. To ensure that the guns were focused on the holes, a system known as dynamic convergence had to constantly adjust the focus point as the beam moved across the screen. In the Trinitron design, the problem was greatly simplified, requiring changes only for large screen sizes, and only on a line-by-line basis.
For this reason, Trinitron systems are easier to focus than shadow masks, and generally had a sharper image. This was a major selling point of the Trinitron design for much of its history. In the 1990s, new computer-controlled real-time feedback focusing systems eliminated this advantage, as well as leading to the introduction of "true flat" designs.
Disadvantages
Visible support or damping wires
Even small changes in the alignment of the grille over the phosphors can cause the color purity to shift. Since the wires are thin, small bumps can cause the wires to shift alignment if they are not held in place. Monitors using Trinitron technology have one or more thin tungsten wires running horizontally across the grille to prevent this. Screens 15" and below have one wire located about two thirds of the way down the screen, while monitors greater than 15" have 2 wires at the one-third and two-thirds positions. These wires are less apparent or completely obscured on standard definition sets due to wider scan lines to match the lower resolution of the video being displayed. On computer monitors, where the scan lines are much closer together, the wires are often visible. This is a minor drawback of the Trinitron standard which is not shared by shadow mask CRTs. Aperture grilles are not as mechanically stable as shadow or slot masks; a tap can cause the image to briefly become distorted, even with damping/support wires. Some people may find the wires to be distracting.
Anti-glare coating
A polyurethane sheet coated to scatter reflections is affixed to the front of the screen, where it can be damaged.
Partial list of other aperture grille brands
Sharp NEC Display Solutions (NEC/Mitsubishi) "Diamondtron"
Gateway, Inc. "Vivitron" (Trinitron and Diamondtron rebrand)
MAG InnoVision "Technitron" (Trinitron rebrand)
ViewSonic "SonicTron" (Trinitron rebrand)
See also
History of television
References
Notes
Bibliography
External links
Trinitron: Sony's Once Unbeatable Product
Sony Trinitron Explained
Sony products
Television technology
Vacuum tube displays
Cathode ray tube
Japanese inventions
Audiovisual introductions in 1968
1968 establishments in Japan
Products and services discontinued in 2008
2008 disestablishments in Japan | Trinitron | Technology | 5,542 |
4,475,055 | https://en.wikipedia.org/wiki/Hennin | The hennin (possibly from Flemish meaning cock or rooster) was a headdress in the shape of a cone, steeple, or truncated cone worn in the Late Middle Ages by European women of the nobility. They were most common in Burgundy and France, but also elsewhere, especially at the English courts, and in Northern Europe, Hungary and Poland. They were little seen in Italy. It is unclear what styles the word hennin described at the time, though it is recorded as being used in French areas in 1428, probably before the conical style appeared. The word does not appear in English until the 19th century. The term is therefore used by some writers on costume for other female head-dresses of the period.
In pop culture, the hennin is often used to identify princesses and other important women in a royal court.
Conical hennins
These appear from about 1430 onwards, especially after the mid-century, initially only among aristocratic women, though later spreading more widely, especially in the truncated form. Typically the hennin was between 11 and 18 in (30 to 45 cm) high, but might be considerably higher, as much as over 2 ft 5 in (80 cm). The tops of some of these conical hats were pointed while others were truncated, ending in a flat top. It was generally accompanied by a veil (cointoise) that usually emerged from the top of the cone and was allowed to fall onto the woman's shoulders or even to the ground, or was pulled forward over the hennin, often reaching over the woman's face. The cointoise is the model for the scroll work around a coat of arms in heraldry.
The hennin was worn tilted backward at an angle. It was made of light material, often card or a wire mesh over which a light fabric was fixed, although little is known of the details of their construction. There was often a cloth lappet (cornet) in front of the hennin, covering part of the brow, and sometimes falling onto the shoulders to either side. There is very often a "frontlet" or short loop seen on the forehead (example) to adjust the hennin forward, and perhaps even to hold it on in wind.
It was fashionable to pluck or shave the forehead to raise the hairlines. The hair was tied tightly on the scalp and usually hidden inside the cone (possibly one end of the veil was tied to the hair and wrapped round, with the free end being pulled through the hole at the tip of the cone). However, some images show long hair worn loose behind the hennin.
Nowadays, the hennin forms part of the depiction of the stereotypical fairy-tale princess. There are some manuscript illuminations that show princesses or queens wearing small crowns either round the brim or at the top of the hennin; it is likely that the very small crown of Margaret of York, Duchess of Burgundy (now in the treasury of Aachen Cathedral) was worn like this for her famously lavish wedding celebrations in 1468.
Definition
Various writers on costume history use hennin to cover a variety of different styles. Almost all agree that the steeple-cone style was a hennin, and the truncated ("flowerpot") versions. Many also include the heart-shaped open-centred escoffion. Some also use the term to cover beehive-shaped fabric head-coverings of the mid-century (example). Others also use it for the head-dresses divided to right and left of the early part of the century, such as those in which Christine de Pisan is usually depicted (example). In some of these only white cloth is visible, but in later examples worn by aristocrats rich fabric can be seen through translucent veils. Some use it for the horned hairstyle with a wimple on top.
The Chronique of Enguerrand de Monstrelet records that in 1428, in what seems to be the first record of the term "hennin", the radical Carmelite friar Thomas Conecte railed against extravagant headdresses of...
...the noble ladies, and all others, who dressed their heads in so ridiculous a manner, and who spent such large sums on such luxuries of fashion.
Thomas urged street boys to chase after such ladies and pluck off their headdresses, crying "Au hennin!", even granting indulgences to those who did so, although as so often in medieval documentary records, no clue as to the form of the "hennins" is given. Based on the evidence from visual records, they were probably not conical head-dresses, which are first seen slightly later. The Catalan poet Gabriel Mòger mocked the "tall deformed hat" that was popular with Majorcan women of the time.
Gallery
See also
1400–1500 in European fashion
Tantur
Capuchon
Pointed hat
Coif
Notes
References
Boucher, François: 20,000 Years of Fashion, Harry Abrams, 1966.
Kohler, Carl: A History of Costume, Dover Publications reprint, 1963,
Laver, James: The Concise History of Costume and Fashion, Abrams, 1979
Payne, Blanche: History of Costume from the Ancient Egyptians to the Twentieth Century, Harper & Row, 1965. No ISBN for this edition; ASIN B0006BMNFS
Françoise Piponnier and Perrine Mane; Dress in the Middle Ages; Yale UP, 1997;
Vibbert, Marie, Headdresses of the 14th and 15th Centuries, The Compleat Anachronist, No. 133, SCA monograph series (August 2006)
External links
Images of Burgundian conical hennins
Constructing the Headdresses of the Fourteenth and Fifteenth Centuries, paper by Marie Vibbert (Lyonnete Vibert), Known World Costume Symposium Proceedings (2005).
Coiffures féminines — Le Hennin
Burgundian wedding c.1470, from the Getty, with a great variety of head-dresses.
15th-century fashion
16th-century fashion
Costume design
Pointed hats
History of clothing (Western fashion)
Medieval European costume
Women's clothing | Hennin | Engineering | 1,268 |
19,766,909 | https://en.wikipedia.org/wiki/Design%20and%20Industries%20Association | The Design and Industries Association is a United Kingdom charity whose object is "to engage with all those who share a common interest in the contribution that design can make to the delivery of goods and services that are sustainable and enhance the quality of life for communities and the individual."
20th century
Shortly before the Great War there was a growing awareness, among British designers, of the extent to which German industrial design had taken the ideals of the Arts and Crafts movement (that had originated with William Morris and others in Britain in the late 19th century) and had successfully moved these into the age of mass, mechanised, production. The German Deutscher Werkbund organisation's Cologne exhibition, held before the outbreak of war in 1914, had been visited by many of those designers, architects, retailers and industrialists who were later to found the Design and Industries Association.
In March 1915 an exhibition of German manufactures was held at Goldsmiths' Hall in London. Shortly afterwards a meeting under the chairmanship of Lord Aberconway led to the foundation of the Design and Industries Association (DIA), with the express intention of raising the standard of British industrial design, under the slogan of "Fitness for Purpose".
DIA promoted its ideals through lectures, journals and exhibitions. Exhibitions included:
1920: Household Things - Whitechapel Gallery, London
1942 - 1945: Design Round The Clock - travelling
1953: Register your Choice - Charing Cross Underground Station
The journals published varied through the period and included:
1932: Design In Industry
1933 - 1935: Design for Today
1936: Trends in Everyday Life
In its early years there was considerable tension between the attachment of some members to the principles of the Arts and Crafts movement and the desire to promote the clearly 20th-century outlook of the Modern Movement.
Having been heavily involved with the British government's Utility Scheme in the Second World War, DIA had campaigned for the greater involvement of government in the promotion of good design. Ironically, DIA itself was to be somewhat eclipsed by the foundation of the government funded Council for Industrial Design, now the Design Council, in 1944.
DIA Today
Despite the predominance of the Design Council in the latter half of the 20th century, DIA continues its work today as an independent body, organising competitions, events and offering bursaries. In 1978 DIA, together with The Royal College of Art, The Faculty of Royal Designers for Industry and The Royal Academy of Engineering established the Sir Misha Black Awards to recognise excellence and innovation in design education.
Membership
DIA office bearers and members have included some of the most notable 20th-century British designers and manufacturers:
Lord Aberconway
Wenman Joseph Bassett-Lowke
Sir Misha Black
Cecil Brewer
Noel Carrington
Serge Ivan Chermayeff
Harold Curwen
Nanna Ditzel
Ambrose Heal
Charles Holden
Minnie McLeish
Harry Peach
Nikolaus Pevsner
Frank Pick
Jack Pritchard
Sir (Sydney) Gordon Russell
George Wilson-Crowe
Sir Lawrence Weaver
Hamilton T Smith [first director of Heals, designer]
References
"Design and Industries Association." A Dictionary of Modern Design. Oxford University Press, 2004, 2005. Answers.com 13 Oct. 2008. http://www.answers.com/topic/design-and-industries-association
"Nothing Need Be Ugly", The first 70 years of the Design & Industries Association. Plumber, Raymond. DIA London 1985
External links
The Design and Industries Association
Architecture groups
British art
Design institutions
Organizations established in 1915 | Design and Industries Association | Engineering | 793 |
1,895,454 | https://en.wikipedia.org/wiki/Humite | Humite is a mineral found in the volcanically ejected masses of Vesuvius. It was first described in 1813 and named for Abraham Hume (1749–1838).
See also
Alleghanyite
Chondrodite
Clinohumite
Jerrygibbsite
References
External links
Magnesium minerals
Iron(II) minerals
Gemstones
Humite group
Orthorhombic minerals
Minerals in space group 62 | Humite | Physics | 80 |
52,404,285 | https://en.wikipedia.org/wiki/Lost%20Library%20of%20Ivan%20the%20Terrible | The Lost Library of the Moscow Tsars, also known as the "Golden Library", is a library speculated to have been assembled by Grand Duke Ivan III (the Great) of Russia in the 16th century. It is also known as the Library of Ivan IV (Ivan the Terrible), who is credited with the disappearance of the library. The lost library is thought to contain rare Greek, Latin, and Egyptian works from the libraries of Constantinople and Alexandria, as well as 2nd-century CE Chinese texts and manuscripts from Ivan IV's own era. The library has traditionally been placed underneath the Kremlin, and it has become a source of interest for researchers, archaeologists, treasure-hunters, and historical figures such as Emperor Peter the Great and Napoleon Bonaparte. Under Ivan IV's rule (1533-1584), tales of the library grew.
Legends associated with the library include:
The collection formed part of the dowry of Sophia Palaiologina, the second wife of Ivan III (married in 1472) and a member of the last Byzantine imperial dynasty.
Ivan IV cursed the library before his death, causing blindness to those that came close to locating the books.
Ivan attempted to have scholars translate the ancient texts in order to gain knowledge of black magic.
History
The earliest reference to the lost library was in 1518, when Michail Tripolis, known widely as Maximus the Greek, was sent to Russia and came into contact with Moscow Grand Prince Vasili III, the son of Ivan III. Tripolis' reputation as a scholar and translator of works like the Psalter into Russian brought him to the attention of Vasili III. At a meeting between the two, Vasili III is described as having shown Michail "countless multitudes of Greek books". A Russian contemporary of Michail wrote a biography of him called "The Tale of Maxim the Philosopher". This biographer, Prince Kurbskii, a member of the Moscow nobility, detailed the meeting between Michail and Vasili III: "Maxim was astounded and impressed, and assured the prince that even in Greece he had never seen so many Greek books."
Close to 80 years after Kurbskii wrote Maximus the Greek's biography, the next mention of the lost library, as well as a location, appeared. Livonian writer Franz Nyenstadt wrote about Johannes Wetterman, a German Protestant minister who established a church in Russia and met with Ivan IV. Ivan IV had purportedly been hiding multitudes of weapons underneath the Kremlin. Wetterman was summoned by Ivan IV not to look at a weapons arsenal but to look at ancient books that had been secured in a locked storeroom somewhere inside the Kremlin for well over a hundred years. Wetterman, three other Germans, and three Russian officials were told to conduct a survey of the works. Wetterman noted that many of the works present were only referenced in passing by other scholars because they had previously been either destroyed in fires or lost during wars.
A 1724 report by Moscow Petty Official Konon Osipov mentions a discovery made by V. Makariev in 1682, who was ordered to go into a Kremlin secret passage and found a room full of trunks. When Makariev reported the find to Princess Sophia Alekseyevna she made it forbidden for anyone to access those rooms.
Search for the Lost Library
In the early 19th century, Professor Dabelov of the University of Dorpat (University of Tartu) claimed to have found in the archives of the city of Pernau (Pärnu) a document called "Manuscripts Held by the Tsar". Dabelov left Pernau to inform a university associate, Professor Clossius, of the find, yet when he returned to the Pernau archives, the document had seemingly vanished. The only information left on the document was what Dabelov had copied down on his first visit. This detailed that the tsars had around 800 manuscripts, some of which had been gifted to Russia by an unknown Byzantine emperor.
In the 1890s, Professor Thraemer of the University of Strasbourg located a manuscript of Homer's hymns that he believed was once a part of the collection of manuscripts brought to Moscow by Byzantine Princess Sophia Palaiologina when she married Ivan III. Ivan III and Sophia married in 1472 and her dowry included a rare collection of books from the Library of Constantinople and Library of Alexandria. For several months in 1891 Professor Thraemer lived in Moscow searching through all of the city's libraries and archives in the hopes of locating the lost library. Thraemer eventually decided that it must be located inside hidden subterranean rooms underneath the Kremlin. In 1893 Professor I.E. Zabelin wrote an article called "The Underground Chambers of the Moscow Kremlin" where he concluded that the library did exist there but that it was destroyed in the 17th century. Around this time some attempts were made at excavating underneath the Kremlin. The excavations found several underground chambers and tunnels but all were found empty.
Several Russian scholars of the era also refuted the existence of the library. S.A. Belokurov in 1898 wrote that the "Tale of Maxim the Philosopher" was not written by Prince Kurbskii, but 75 years after the fact by another monk. Belokurov states that he found enough contradictions and inconsistencies in the Maximus the Greek biography that he believed Maximus never even saw the library. Belokurov also believed that Professor Dabelov's document was a forgery and he refuted other sources as well.
In the early 20th century, archaeologist Ignatius Stelletskii became a seeker of the lost library. A 1929 article in The New York Times details Stelletskii's search. The article reports that Stelletskii found archives showing "two large rooms filled with treasure chests and known to exist under the Kremlin" half a century after the death of Ivan IV. It also reports that the Protestant minister Wetterman never returned home after his time in Moscow. According to myth, Ivan IV had the architect of Saint Basil's Cathedral blinded so that he could never recreate it, hiding its secrets; Ivan IV's involvement in Wetterman's disappearance after seeing the library would therefore seem plausible. Peter the Great also attempted to locate the library, hoping to find treasures that would replenish the treasury after his years-long involvement in wars. However, Stelletskii's search ended without the library ever being found.
In 1978, S.O. Shmidt described an unpublished work by N.N. Zarubin from the 1930s called "The Library of Ivan the Terrible and His Books". Zarubin argued that the work of S. Belokurov was not impartial when claiming that the library did not exist.
References
External links
The Lost Library of Moscow
Libraries in Russia
Moscow Kremlin
Mythological places
Ivan the Terrible
Tsardom of Russia
Lost objects
Legendary treasures | Lost Library of Ivan the Terrible | Physics | 1,433 |
31,765,630 | https://en.wikipedia.org/wiki/SPIKE%20%28database%29 | SPIKE (Signaling Pathways Integrated Knowledge Engine) is a database of highly curated interactions for particular human pathways.
Development
SPIKE was developed by Ron Shamir's computational biology group in cooperation with the group of Yosef Shiloh, an Israel Prize recipient for his research in systems biology, and the group of Karen Avraham, a leading researcher of human deafness, all from Tel Aviv University.
See also
Signaling pathways
References
External links
Official website
Biochemistry databases
Metabolomic databases
Cell signaling
Signal transduction
Systems biology | SPIKE (database) | Chemistry,Biology | 102 |
2,376,387 | https://en.wikipedia.org/wiki/V1500%20Cygni |
V1500 Cygni or Nova Cygni 1975 was a bright nova occurring in 1975 in the constellation Cygnus. It had the second highest intrinsic brightness of any nova of the 20th century, exceeded only by CP Puppis in 1942.
V1500 Cygni was discovered shining at an apparent brightness of magnitude 3.0 by Minoru Honda of Kurashiki, Japan on 29 August 1975. It had brightened to magnitude 1.7 on the next day, and then rapidly faded. It remained visible to the naked eye for about a week, and 680 days after reaching maximum the star had dimmed by 12.5 magnitudes.
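The fade described above can be translated into a brightness ratio using the standard Pogson relation, under which every 5 magnitudes correspond to a factor of 100 in flux. A minimal Python sketch (the function name is illustrative, not from any astronomy library):

# Pogson relation: a magnitude difference dm corresponds to a
# flux (brightness) ratio of 10**(0.4 * dm).
def flux_ratio(delta_mag: float) -> float:
    """How many times brighter the source was before fading by delta_mag."""
    return 10 ** (0.4 * delta_mag)

# V1500 Cygni dimmed by 12.5 magnitudes in the 680 days after maximum:
print(f"{flux_ratio(12.5):,.0f}x fainter")  # prints: 100,000x fainter

A 12.5-magnitude decline therefore means the nova faded to roughly one hundred-thousandth of its peak brightness.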
It is an AM Herculis type star, consisting of a red dwarf secondary depositing a stream of material onto a highly magnetized white dwarf primary. The distance to V1500 Cygni was calculated in 1977 by the McDonald Observatory as 1.95 kiloparsecs (6,360 light years); more recently, the Gaia space observatory determined a distance of approximately 5,100 light years. V1500 Cyg was also the first asynchronous polar to be discovered, a distinction referring to the fact that the white dwarf's spin period is slightly different from the binary orbital period. By 2016, however, X-ray observations strongly suggested that the white dwarf's rotation had returned to synchronization with the orbit.
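The two distance estimates quoted above can be cross-checked by unit conversion, since one parsec is approximately 3.2616 light years. A small sketch (constants and rounding are ours):

LY_PER_PC = 3.2616  # light years per parsec (approximate)

def kpc_to_ly(kpc: float) -> float:
    """Convert a distance in kiloparsecs to light years."""
    return kpc * 1000 * LY_PER_PC

print(round(kpc_to_ly(1.95)))   # 6360 -- the 1977 McDonald Observatory value
print(round(5100 / LY_PER_PC))  # 1564 -- Gaia's ~5,100 ly expressed in parsecs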
V1500 Cygni has a remnant typical of very fast novae, consisting of some clumps and some spherically symmetric diffuse material.
See also
Nova Cygni 1920
Nova Cygni 1992
References
Further reading
External links
Novae
Cygnus (constellation)
1975 in science
Cygni, V1500
Polars (cataclysmic variable stars)
Nova remnants | V1500 Cygni | Astronomy | 359 |
1,233,728 | https://en.wikipedia.org/wiki/Paul%20Karrer | Paul Karrer (21 April 1889 – 18 June 1971) was a Swiss organic chemist best known for his research on vitamins. He and British chemist Norman Haworth won the Nobel Prize for Chemistry in 1937.
Biography
Early years
Karrer was born in Moscow, Russia to Paul Karrer and Julie Lerch, both Swiss nationals. In 1892 Karrer's family returned to Switzerland where he was educated at Wildegg and at the Old Cantonal School Aarau where he matriculated in 1908. He studied chemistry at the University of Zurich under Alfred Werner and after gaining his Ph.D. in 1911, he spent a further year as an assistant in the Chemical Institute. He then took a post as a chemist with Paul Ehrlich at the Georg Speyer Haus, Frankfurt-am-Main. In 1919 he became Professor of Chemistry and Director of the Chemical Institute.
Research
Karrer's early research concerned complex metal compounds, but his most important work concerned plant pigments, particularly the yellow carotenoids. He elucidated their chemical structure and showed that some of these substances are transformed in the body into vitamin A. His work led to the establishment of the correct constitutional formula for beta-carotene, the chief precursor of vitamin A; it was the first time that the structure of a vitamin or provitamin had been established. George Wald worked briefly in Karrer's lab while studying the role of vitamin A in the retina. Later, Karrer confirmed the structure of ascorbic acid (vitamin C) and extended his research to vitamins B2 and E. His important contributions to the chemistry of the flavins led to the identification of lactoflavin as part of the complex originally thought to be vitamin B2.
Karrer published many papers and received many honours and awards, including the Nobel Prize in 1937. His textbook Lehrbuch der Organischen Chemie (Textbook of Organic Chemistry) was published in 1927, went through thirteen editions, and was published in seven languages.
Personal life
Karrer married Helena Froelich in 1914 and had three sons, one of whom died in infancy. He died on 18 June 1971, at the age of 82 in Zürich. His wife died in 1972.
Legacy
The prestigious Paul Karrer Gold Medal and lecture were established in his honour in 1959 by a group of leading companies such as CIBA AG, J.R. Geigy, F. Hoffmann-La Roche & Co. AG, Sandoz AG, Société des Produits Nestlé AG and Dr. A. Wander AG. It is awarded annually or biannually to an outstanding chemist who delivers a lecture at the University of Zurich.
The Paul Karrer Lecture Foundation is based at the Chemistry Institute of the University of Zurich at Rämistrasse 71, in Zürich.
References
Sources
Notes
External links
including the Nobel Lecture, 11 December 1937 Carotenoids, Flavins and Vitamin A and B2
1889 births
1971 deaths
Organic chemists
People associated with the University of Zurich
Swiss chemists
Nobel laureates in Chemistry
Swiss Nobel laureates
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Honorary Fellows of the Royal Society of Edinburgh
Swiss biochemists
Paul Ehrlich Institute people
Vitamin researchers | Paul Karrer | Chemistry | 671 |
38,691,231 | https://en.wikipedia.org/wiki/Bentazon | Bentazon (Bentazone, Basagran, Herbatox, Leader, Laddock) is a chemical manufactured by BASF Chemicals for use in herbicides. It is categorized under the thiadiazine group of chemicals. Sodium bentazon is available commercially and appears slightly brown in colour.
Usage
Bentazon is a selective herbicide, as it only damages plants unable to metabolize the chemical. It is considered safe for use on alfalfa, beans (with the exception of garbanzo beans), maize, peanuts, peas (with the exception of blackeyed peas), pepper, peppermint, rice, sorghum, soybeans and spearmint, as well as lawns and turf. Bentazon is usually applied aerially or through contact spraying on food crops to control the spread of weeds occurring amongst food crops. Herbicides containing bentazon should be kept away from high heat, as heating releases toxic sulfur and nitrogen fumes.
Bentazon is registered for use in the United States in accordance with requirements set forth by the United States Environmental Protection Agency. However, as of September 2010, the herbicides Basagran M60, Basagran DF, Basagran AG, Prompt 5L and Laddock 5L were under review following pending requests for voluntary registration cancellation.
Water and ground contamination
In general, bentazon is quickly metabolized and degraded by both plants and animals. However, soil leaching and runoff is a major concern in terms of water contamination. In 1995 the Environmental Protection Agency (EPA) stated that levels of bentazon in both ground water and surface water "exceed levels of concern". Despite the establishment of a 20 parts per billion Health Advisory Level there is no requirement to measure for bentazon in water supplies as the Safe Drinking Water Act does not regulate bentazon. The United States EPA found bentazon in 64 out of 200 wells in California - the highest number of detections in their 1995 study. This prompted the State of California to review existing toxicology studies and establish a "Public Health Goal" that limits bentazon in drinking water to 200 parts per billion.
The EPA requires ground water and environmental hazard advisory labels on all commercially available herbicides containing bentazon. Both statements warn against application and/or disposal of bentazon directly into water, or in areas where soil leaching is common.
Food contamination
A number of limits have been placed on bentazon to reduce the possibility of toxic effects on humans. Tolerance levels vary depending on the use of the food/animal product. The following tolerance levels for bentazon have been established in the United States:
0.02 ppm (parts per million) for milk.
0.05 ppm for meat and animal byproducts (poultry, eggs, cattle, hogs, sheep and goats).
0.05 ppm for dried beans (excluding soybeans), corn (fresh and grain), bohemian chili peppers, peanuts, rice, soybeans, and sorghum used for fodder and grain.
0.5 ppm for succulent beans and peas.
0.3 ppm for peanut hulls.
1 ppm for mint and dried peas.
3 ppm for rice (straw), corn for fodder and forage, and peanuts used in hay and forage.
8 ppm for pea vine hays (dried), and soybeans used for foraging or hay.
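As an illustration only, the list above can be treated as a lookup table from commodity to its US tolerance. The sketch below transcribes a few entries (the dictionary keys and function are hypothetical, not part of any regulatory API) and flags a residue measurement that exceeds its limit:

# Illustrative transcription of selected US bentazon tolerances (ppm) from the list above.
TOLERANCE_PPM = {
    "milk": 0.02,
    "meat and animal byproducts": 0.05,
    "succulent beans and peas": 0.5,
    "peanut hulls": 0.3,
    "mint": 1.0,
    "rice straw": 3.0,
}

def exceeds_tolerance(commodity: str, measured_ppm: float) -> bool:
    """Return True if a measured residue is above the listed tolerance."""
    return measured_ppm > TOLERANCE_PPM[commodity]

print(exceeds_tolerance("milk", 0.03))  # True: 0.03 ppm exceeds the 0.02 ppm limit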
It is recommended that food and feed supplies be stored away from herbicides containing bentazon. Aerial spraying should be conducted in a manner that prevents spray drift towards water sources and food crops susceptible to bentazon.
Toxicity to nonhuman species
A 1994 study concluded that bentazon is non-toxic to honeybees, and is not harmful to beetles. Studies have found that bentazon is toxic to rainbow trout and bluegill sunfish at 190 ppm and 616 ppm, respectively. Bentazon is considered toxic to birds as it affects their reproductive capacities.
Among mammals, bentazon is found to be moderately toxic when ingested or absorbed through the skin. Lethal doses (LD50, the dose required to kill half the population being studied) for bentazon have been established for:
Cats: 500 mg/kg
Rats: 1100 mg/kg to 2063 mg/kg
Mice: 400 mg/kg
Rabbits: 750 mg/kg
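Because these LD50 values are expressed per kilogram of body mass, the absolute dose for an animal of a given size is obtained by multiplying by its mass. A minimal sketch using the figures above (the species keys are illustrative):

# LD50 values (mg per kg of body mass) transcribed from the list above;
# for rats the lower bound of the 1100-2063 mg/kg range is used.
LD50_MG_PER_KG = {"cat": 500, "rat": 1100, "mouse": 400, "rabbit": 750}

def absolute_ld50_mg(species: str, body_mass_kg: float) -> float:
    """Absolute LD50 dose in mg for an animal of the given body mass."""
    return LD50_MG_PER_KG[species] * body_mass_kg

print(absolute_ld50_mg("rat", 0.3))  # 330.0 mg for a 300 g rat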
In one study, dogs fed 13.1 mg of bentazon a day developed diarrhoea, anemia and dehydration. In another study using dogs, prostate inflammation was also observed, along with the previously noted health effects.
In experiments conducted on hamsters, mice and rats, bentazon was not found to cause gene mutations or damage to DNA and chromosomes.
Toxicity to humans
Bentazon has been classified by the EPA as a "Group E" chemical, because it is believed to be non-carcinogenic to humans (based on testing conducted on animals). However, no studies or experiments have determined the toxic and/or carcinogenic effects of bentazon on humans. Workers applying the herbicide are the most exposed to bentazon, and so have been advised to wear protective clothing (goggles, gloves and aprons) at all times when handling the chemical. Bentazon causes allergy-like symptoms, as it irritates the eyes, skin and respiratory tract. Ingesting bentazon causes nausea, diarrhoea, trembling, vomiting and difficulty breathing. Workers handling bentazon must wash their hands before eating, drinking, smoking, or using the bathroom to minimize exposure.
The effects of bentazon ingestion have been observed in humans who ingested the herbicide in suicide attempts. Ingestion of bentazon was observed to cause fever, renal failure (kidney failure), accelerated heart rate (tachycardia), shortness of breath (dyspnea) and hyperthermia. Ingestion of 88 grams of bentazon caused death in an adult.
References
Herbicides
Benzothiadiazines
Isopropyl compounds
Sultams | Bentazon | Biology | 1,263 |
12,777,346 | https://en.wikipedia.org/wiki/Albert%20F.%20A.%20L.%20Jones | Albert Francis Arthur Lofley Jones (9 August 1920 – 11 September 2013) was a New Zealand amateur astronomer, and a prolific variable star and comet observer, a member of the Variable Star Section and the Comet Section of the Royal Astronomical Society of New Zealand.
Life
Albert Jones was born in Christchurch, New Zealand, in 1920 and was educated at Timaru Boys' High School. At the beginning of the Second World War he joined the army, but in 1942 he was classified unfit for overseas service. He worked as a miller in a rolled oats mill, as a grocery shop owner and in a car assembly factory. He died in Nelson, New Zealand, in 2013.
Astronomy
Achievements
In 1963 he became the sixth astronomer in history to make 100,000 observations of variable stars, and by 2004 he became the first to make more than 500,000 observations. His visual brightness estimates were very precise: most observers can distinguish variations of one tenth of a magnitude, but Jones' measurements were reported to show a standard deviation of about one twentieth of a magnitude. In 1946 he discovered the comet C/1946 P1 (Jones), and in 2000 he co-discovered, together with Japanese astronomer Syogo Utsunomiya, the comet C/2000 W1 (Utsunomiya-Jones), becoming the oldest comet discoverer. In 1987 he co-discovered the supernova SN 1987A in the Large Magellanic Cloud, which was the brightest naked-eye supernova explosion since 1604.
Honours and awards
Jones' work was widely acknowledged. In 1968, he received the Merlin Silver Medal and Prize of the British Astronomical Association for his work in establishing accurate magnitudes of comets. In the 1987 Queen's Birthday Honours, he was appointed an Officer of the Order of the British Empire, for services to astronomy. The following year, asteroid 3152 Jones was named after him. He won the Amateur Achievement Award of the Astronomical Society of the Pacific for his variable star and comet observations in 1998. The comet C/2000 W1 discovery brought him the Edgar Wilson Award, administered by the Smithsonian Astrophysical Observatory, in 2001. In 2004 he received an honorary Doctorate of Science from the Victoria University of Wellington.
References
External links
Photograph of Albert Jones at a monthly meeting of the Nelson branch of the Royal Astronomical Society. Nelson Photo News, 1 June 1968.
Biography
1920 births
2013 deaths
20th-century New Zealand astronomers
Amateur astronomers
21st-century New Zealand astronomers
Discoverers of comets
Discoverers of supernovae
New Zealand Officers of the Order of the British Empire
People from Christchurch
People educated at Timaru Boys' High School
People from Nelson, New Zealand
Military personnel from Christchurch
New Zealand military personnel of World War II
New Zealand Army soldiers | Albert F. A. L. Jones | Astronomy | 545 |
1,203,602 | https://en.wikipedia.org/wiki/United%20Airlines%20Flight%20585 | United Airlines Flight 585 was a scheduled passenger flight on March 3, 1991, from Denver to Colorado Springs, Colorado, carrying 20 passengers and 5 crew members on board. The plane experienced a rudder hardover while on final approach to runway 35 at Colorado Springs Municipal Airport, causing the plane to roll over and enter an uncontrolled dive. All 25 people on board the Boeing 737 were killed on impact.
The National Transportation Safety Board (NTSB) was initially unable to determine the cause of the crash, but after similar accidents and incidents involving Boeing 737 aircraft, it concluded that the crash was caused by a defect in the design of the 737's rudder power control unit.
Background
Aircraft
Flight 585 was operated by a Boeing 737-291, registered as N999UA with serial number 22742. It was manufactured in May 1982 for the original incarnation of Frontier Airlines, and was acquired by United Airlines on June 6, 1986, when the former went out of business (a new airline company with the same name formed eight years later). On the date of the accident, the aircraft had logged 26,050 flight hours in 19,734 takeoff and landing cycles.
Crew
In command was Captain Harold Green, aged 52. Green was hired by United Airlines on May 15, 1969, and had logged 9,902 hours as a United Airlines pilot (including 1,732 hours on the Boeing 737), and was regarded by colleagues as a conservative pilot who always followed standard operating procedures. The first officer was Patricia Eidson, aged 42. Eidson was hired by UAL on November 21, 1988 and had logged 3,903 flight hours (including 1,077 hours on the Boeing 737). She was considered by the captain to be a very competent pilot.
On February 25, 1991, the aircraft was flying at when the rudder abruptly deflected 10 degrees to the right. The crew on board reduced power and the aircraft returned to normal flight. A similar event occurred two days later. Four days later, the aircraft crashed.
Accident
Flight 585 was a regularly scheduled flight from General Wayne A. Downing Peoria International Airport in Peoria, Illinois, to Colorado Springs, Colorado, with intermediate stops at Quad City International Airport in Moline, Illinois, and Stapleton International Airport in Denver, Colorado; it was scheduled to arrive at 09:46 Mountain Standard Time (16:46 UTC). On March 3, 1991, the flight operated from Peoria to Denver without incident.
At 09:23 (16:23 UTC), Flight 585 departed Denver with 20 passengers and 5 crew members on board. At 09:30:37 (16:30:37 UTC), the aircraft received Automatic terminal information service "Lima", that was about 40 minutes old, stating "Wind three one zero at one three gust three five; low level wind shear advisories are in effect, local aviation wind warning in effect calling for winds out of the northwest gust to and above." The flight crew added to their target landing airspeed based on this information.
At 09:32:35, First Officer Eidson reported to Colorado Springs Approach Control that their altitude was . Colorado Springs Approach then cleared the flight for a visual approach to runway 35 and instructed the flight to contact Colorado Springs Tower.
At 09:37:59 (16:37:59 UTC) Colorado Springs Tower cleared Flight 585 to land on runway 35, notifying the flight that wind was 320 degrees at with gusts to . At this moment, the aircraft was at . First Officer Eidson inquired about reports from other aircraft about airspeed changes, and at 09:38:29 (16:38:29 UTC) the tower replied that another 737 had reported a loss at , a gain at , and a gain at , at approximately 09:20 (16:20 UTC), 17 minutes prior. Eidson replied, "Sounds adventurous... United five eighty-five, thank you."
At 09:40:07 (16:40:07 UTC), Flight 585 was informed of traffic, a Cessna at their eleven o'clock, northwest bound, landing at runway 30. The crew was unable to locate the traffic, but 37 seconds after it was reported, the tower informed the flight that the traffic was now behind them. This Cessna was located about northeast of the accident site when the crash occurred, and its pilot had also reported occasional slight to moderate chop at . The Cessna pilot had also noted indicated airspeed fluctuations between and with vertical speed indications of approximately per minute.
At 09:41:23 (16:41:23 UTC), air traffic control directed Flight 585 to hold short of runway 30 after landing to allow for departing traffic. Eidson replied "We'll hold short of three-zero, United five eighty five." This was the last transmission received from the flight.
In the final minute of the flight, normal acceleration varied between 0.6 and 1.3 g, with an airspeed of with 2-to-10-knot excursions.
At 09:42 (16:42 UTC), about 20 seconds prior to the crash, the aircraft entered into a controlled 20-degree bank and turn for alignment with the runway. Four seconds later, First Officer Eidson informed Captain Green that they were at .
Within the next four seconds, at 09:43:33 (16:43:33 UTC), the aircraft suddenly rolled to the right, its heading rate increasing to about 5 degrees per second as a result, nearly twice that of a standard rate turn, and pitched nose down. First Officer Eidson stated "Oh God, [flip]!", and in the same moment Captain Green called for 15 degrees of flaps while increasing thrust, in an attempt to initiate a go-around. The altitude decreased rapidly and acceleration increased to over 4 G until, at 09:43:41 (16:43:41 UTC), the aircraft crashed at an 80-degree nose-down angle, yawed 4 degrees to the right, into Widefield Park, less than from the runway threshold, at a speed of . The aircraft was destroyed on impact and by the post-crash fire. According to the accident report, the crash carved a crater and deep. Segments of the 737 were buried deep within this crater, requiring excavation. Everyone on board was killed instantly. The aircraft narrowly missed a row of apartments; a girl standing in the doorway of one of the apartments was knocked backwards by the force of the explosion, hitting her head, but was released from a local hospital with no further issues after treatment.
Victims
Investigation
Initial investigation
The National Transportation Safety Board (NTSB) investigated the crash and ultimately concluded that it was brought about by a mechanical failure in the aircraft's rudder power control unit.
The NTSB's initial report on the crash was released in December 1992, but it ruled the probable cause as undetermined. The agency reopened the investigation in September 1994 after the crash of USAir Flight 427, which occurred under similar conditions. The renewed investigation considered data from the crash of Flight 585 as well as other incidents, including the non-fatal 1996 incident involving Eastwind Airlines Flight 517. The NTSB finalized its revised report on United 585 (Report AAR-01-01) in March 2001.
Although the flight data recorder (FDR) outer protective case was damaged, the data tape inside was intact and all of the data were recoverable. Five parameters were recorded by the FDR: heading, altitude, airspeed, normal acceleration (G loads), and microphone keying. The FDR did not record rudder, aileron or spoiler deflection data, which could have aided the NTSB in reconstructing the plane's final moments. The data available proved insufficient to establish why the plane suddenly went into the fatal dive. The NTSB considered the possibilities of a malfunction of the rudder power control unit servo (which might have caused the rudder to reverse) and the effect that powerful rotor winds from the nearby Rocky Mountains may have had, but there was not enough evidence to prove either hypothesis.
The cockpit voice recorder (CVR) was also damaged, though the data tape inside was intact. However, the data tape featured creases that resulted in poor playback quality. The CVR recorded the pilots making a verbal (and possible physical) response to the loss of control.
The first NTSB report (issued on December 8, 1992) did not conclude with the usual "probable cause". Instead, it stated:
Intervening events
Following the failure to identify the cause of Flight 585's crash, another Boeing 737 crash occurred under very similar circumstances when USAir Flight 427 crashed while attempting to land in Pennsylvania in 1994.
Renewed investigation and probable cause
The NTSB reopened its investigation into Flight 585 in parallel with the USAir Flight 427 investigation, due to the similar nature of the circumstances.
During the NTSB's renewed investigation, it was determined that the crash of Flight 585 (and the later Flight 427 crash) was the result of a sudden malfunction of the aircraft's rudder power control unit. Another incident (non-fatal) that contributed to the conclusion was that of Eastwind Airlines Flight 517, which had a similar problem upon approach to Richmond on June 9, 1996. On March 27, 2001, the NTSB issued a revised final report for Flight 585, which found that the pilots lost control of the airplane because of a mechanical malfunction. The renewed investigation concluded with a "probable cause" that stated:
Memorial
A memorial garden honoring the victims is located at Widefield Park. The garden consists of a gazebo and 25 trees planted in honor of the victims.
In popular culture
The Discovery Channel Canada / National Geographic TV series Mayday dramatized the crash of Flight 585 and the subsequent 737 rudder investigation in a 2007 episode titled Hidden Danger.
The crash is dramatized in the episode "Fatal Flaws" of Why Planes Crash.
See also
Boeing 737 rudder issues
Eastwind Airlines Flight 517
USAir Flight 427
Alaska Airlines Flight 261
American Airlines Flight 1
Northwest Airlines Flight 85
TWA Flight 841 (1979)
References
External links
Memorial plaque at the crash site in Colorado Springs
(Archive)
Boeing 737 Rudder Design Defect
Airliners.net Pre-crash photos
Transportation in Colorado Springs, Colorado
History of El Paso County, Colorado
1991 in Colorado
Airliner accidents and incidents caused by design or manufacturing errors
Airliner accidents and incidents caused by mechanical failure
Aviation accidents and incidents in the United States in 1991
Airliner accidents and incidents in Colorado
585
Accidents and incidents involving the Boeing 737 Original
March 1991 events in the United States
Aviation accidents and incidents caused by loss of control | United Airlines Flight 585 | Materials_science | 2,225 |
66,391,593 | https://en.wikipedia.org/wiki/Dutch%20childcare%20benefits%20scandal | The Dutch childcare benefits scandal refers to a political scandal in the Netherlands involving false allegations of welfare fraud by the Tax and Customs Administration against thousands of families claiming childcare benefits.
Between 2005 and 2019, approximately 26,000 parents were wrongly accused of making fraudulent benefit claims, resulting in demands to repay their received allowances in full. In many cases, this sum amounted to tens of thousands of euros, driving families into severe financial hardship.
The scandal gained public attention in September 2018, prompting investigations that criticized the Tax and Customs Administration's procedures as discriminatory, particularly affecting parents with foreign backgrounds and characterized by institutional biases. The severity of the issue culminated in the resignation of the third Rutte cabinet on 15 January 2021, just two months before the scheduled 2021 general election. A parliamentary inquiry into the affair concluded that it violated fundamental principles of the rule of law.
Background
Childcare benefits in the Netherlands
Childcare in the Netherlands is not free and parents are generally required to pay for the costs by themselves. However, part of the costs may be covered by childcare benefit, which is available to families in which all parents are either employed or enrolled in secondary or tertiary education or a civic integration course. The amount of childcare benefit is calculated as a percentage of the hourly rate of the childcare centre or childminding agency, ranging from 33.3 to 96.0% depending on the parents' collective income and the number of children.
Each year, the government sets a maximum hourly rate for which families may receive childcare benefit. Any amount exceeding the maximum hourly rate must be fully paid by the parents. The number of childcare hours for which a family is entitled to childcare benefit depends on the number of hours that each parent works. The maximum is 230 hours per month per child. Parents may opt to receive their childcare benefit on their own bank account or to have it transferred directly to the childcare centre or childminding agency.
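As a rough illustration of the arithmetic described in the two paragraphs above, the sketch below computes a monthly benefit as an income-dependent percentage of the hourly rate, capped at the government's maximum hourly rate and at 230 hours per month per child. The reimbursement percentage and the example figures are placeholder assumptions (the real percentages come from annually published income tables), not official values:

MAX_HOURS_PER_MONTH = 230  # statutory cap per child

def monthly_benefit(hourly_rate: float, hours: float,
                    reimbursement_pct: float, max_hourly_rate: float) -> float:
    """Monthly childcare benefit for one child, in euros (illustrative)."""
    assert 33.3 <= reimbursement_pct <= 96.0          # range stated above
    covered_rate = min(hourly_rate, max_hourly_rate)  # any excess is paid by parents
    covered_hours = min(hours, MAX_HOURS_PER_MONTH)
    return covered_rate * covered_hours * reimbursement_pct / 100

# Hypothetical example: EUR 9.00/hour care for 120 hours, 80% reimbursement,
# EUR 8.50 government cap on the hourly rate:
print(round(monthly_benefit(9.00, 120, 80.0, 8.50), 2))  # 816.0

The complement of the reimbursement percentage (4.0 to 66.7%) is the parents' own mandatory contribution, which figures in the fraud cases described below.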
Childcare benefits were introduced to the Dutch social welfare system in 2004, when the States General of the Netherlands adopted the Childcare Act (). Formally, the programme is run by the Ministry of Social Affairs and Employment, but the Tax and Customs Administration (part of the Ministry of Finance) is responsible for its implementation, including payment and fraud prevention.
In 2005, the General Act on Means-tested Benefits was introduced, which reorganised the existing welfare system. This law did not include a hardship clause, which would have allowed exceptions to be made should the prescribed procedures be deemed unreasonable.
Emergence of fraudulent childminding agencies
In the years following the introduction of the Childcare Act, childminding agencies emerged that committed fraud by applying for childcare benefit on behalf of their clients without asking for the mandatory contribution of 4.0 to 66.7% depending on their income. A notable example is the case of the childminding agency De Appelbloesem in Beilen, which provided informal babysitters (e.g. grandparents babysitting their grandchildren) with a formal employment contract, so that they could apply for childcare benefit and split the money between them.
Since family members often babysit for free and parents would find it undesirable to switch to an arrangement that requires them to pay for their babysitter's services, the agency did not inform its clients of the fact that they were legally required to pay for the remainder of the "costs", i.e. the part of the agency's (imaginary) hourly rate not covered by the childcare benefit it had received.
In 2009, the Fiscal Information and Investigation Service (FIOD) raided the agency, and its director was sentenced to eighteen months' imprisonment for forgery and fraud. According to the Tax and Customs Administration, the parents involved had to pay back the childcare benefits the agency had received in their name, as well as payments they had received after leaving De Appelbloesem.
In November 2010, the House of Representatives passed a motion to recover the funds given to fraudulent childminding agencies rather than indebting parents who acted in good faith. Minister of Social Affairs Henk Kamp wrote to the House that this was not legally possible and that parents who find themselves in this situation should take legal action against their childminding agency. The clients of De Appelbloesem appealed the decision of the Tax and Customs Administration, but after several lawsuits, the Council of State confirmed that the law required them to pay back the childcare benefits they had received.
Bulgarian migrant fraud
In 2013, RTL Nieuws revealed that a number of Bulgarian migrants had been taking advantage of the Dutch social welfare system. They were encouraged by a gang to briefly register at an address in the Netherlands and retroactively apply for a €6,000–8,000 health care and housing allowance. At the time, the tax authorities paid allowances immediately and checked eligibility afterwards, at which point the Bulgarians had already left the country.
Between 2007 and 2013, over 800 Bulgarians unjustly received about four million euros in total using this method. According to State Secretary for Finance Frans Weekers, many of these cases were not deliberate fraud but rather negligence on the Bulgarians' part. Ultimately, Weekers survived a motion of no confidence, in which he only retained support from the coalition parties (VVD and PvdA) and two minor opposition parties (CU and SGP).
Response to migrant fraud
In response to the widespread Bulgarian migrant fraud, the House of Representatives insisted on stricter fraud prevention. The coalition agreement of the first Rutte cabinet also included a provision to intensify anti-fraud enforcement. As a consequence, the government established a Fraud Management Team on 28 May 2013, consisting of top officials from the Tax and Customs Administration and the Ministry of Finance.
Later that year, the Fraud Management Team established the Collaborative Anti-facilitation Force (CAF). Here, "facilitation" refers to individuals or institutions that enable or encourage people to commit fraud. In the context of childcare benefit fraud, this meant that the CAF actively looked for childcare centres and childminding agencies that submitted suspicious childcare benefit applications.
In June 2013, Prime Minister Mark Rutte also established the ministerial committee "Tackling Fraud", which would exist until 2015. This committee developed a broader strategy for the national government's anti-fraud campaign but did not specifically consider the Tax and Customs Administration's approach to welfare fraud. In late 2013, the committee prepared a letter to the House of Representatives, which initially set out a strict approach, but ultimately emphasised the need to act proportionately and to trust rather than to mistrust.
Cases
CAF 11 Hawaii
The first case that revealed to the public the severity of the anti-fraud policies was the CAF's eleventh case, nicknamed "CAF 11 Hawaii". In this case, the CAF investigated childminding agency Dadim in Eindhoven, after the local government had received signals in 2011 suggesting that Dadim was facilitating childcare benefit fraud. In 2012, a judge ruled that no fraud had been found, but in October 2013 the Tax and Customs Administration still designated the office as a site of suspected fraud. In November 2013, visits were made to sixteen affiliated childminders, but the authorities continued to find no evidence of fraud. Nevertheless, in April 2014, 317 clients of Dadim, almost all of whom had dual citizenship, were classified as fraudsters.
Other cases
It was later found that the CAF had investigated 630 other agencies, possibly with the same harshness as in the CAF 11 Hawaii case. It is estimated that about 2,200 families were victimised in this way. Only 200 of those labelled as fraudsters were subsequently recommended to the Public Prosecution Service for using forged documents. The subsequent prosecutions resulted in only 15 convictions and eight settlements.
In many of these cases, the CAF employed collective punishment based on the "80–20 principle" (80% fraud, 20% innocent; an inversion of the usual principle). Quantitative evidence for this presumption was lacking from the Tax and Customs Administration, and it turned out to be virtually impossible for innocent parents to reverse decisions.
Another group of approximately 8,000 parents fell afoul of strict administrative policies, in which a small mistake (e.g. a missing signature or an undeclared change in income) could lead to a full clawback of the childcare benefit. This was stated in the law and was initially confirmed by a decision of the Council of State. In 2019, the Council of State reversed this decision, and decided to return the recovered amount to the parents, along with compensation on a case-by-case basis.
Qualification "Deliberate intent/Gross negligence"
When the Tax and Customs Administration suspected seriously culpable acts, the Dutch bureaucracy would mark the involved parents with the label "Deliberate intent/Gross negligence". Individuals who had received this label were no longer eligible for standard debt collection arrangements. Under the standard arrangement, debtors repay their debt as much as possible over a two-year period (without falling below subsistence level), and any debt remaining after that period is then considered irrecoverable. Because the labelled parents were not eligible for such a payment plan, they became heavily indebted.
In November 2020, State Secretary for Finance Alexandra van Huffelen released an internal memorandum from 2016, in which it is recommended that anyone with a childcare benefit debt exceeding €3,000 should automatically receive the "Deliberate intent/Gross negligence" qualification. According to Van Huffelen, it was unclear whether this recommendation had been carried out.
Whereas the scandal mainly involved false allegations of childcare benefit fraud, it later turned out that the authorities also wrongly suspected residents of making fraudulent claims to healthcare benefit, rent benefit and supplementary child benefit. In the case of income tax, it was revealed that the Tax and Customs Administration – under the code name Project 1043 – claimed that citizens were fraudulent based on suspicions.
Investigations
As early as 9 August 2017, the National Ombudsman published a report entitled "No power play, but fair play" about the 232 parents of the CAF 11 Hawaii case. In the report, the Ombudsman strongly criticised the Tax and Customs Administration's harsh approach and recommended that these parents should be compensated. The childcare benefits scandal came to public attention when RTL Nieuws and Trouw reported about the case in September 2018. The Socialist Party opened a hotline for victims and developed a "black book" (a list of grievances) based on the 280 complaints received. This indictment was delivered to State Secretary for Finance Menno Snel on 28 August 2019.
The Central Government Audit Service (ADR) also investigated the childcare benefits scandal in 2019. Specifically, the investigation aimed to find out whether the mistakes in the CAF 11 Hawaii case had also been made in other CAF cases. The investigation was controversial, as senior civil servants of the Tax and Customs Administration had stated that the three key actors involved with the CAF would not be prosecuted.
Whereas the Council of State had previously agreed with the Tax and Customs Administration on their strict approach to fraud, the Council of State reversed its position in October 2019. In contrast to previous rulings, the Council of State expressed the opinion that the Tax and Customs Administration did in fact have the power to assess proportionality on a case-by-case basis.
Internal reports
In 2019, the State Advocate, Houtzagers, had issued a draft recommendation which hinted that the law allowed a less harsh approach for parents who had not paid a personal contribution on the advice of their childminding agency. Houtzagers called a tough approach (full repayment) "justifiable", but urged caution regarding individual circumstances. It is unclear why this advice was not followed. The exact contents of the advisory report remain classified, due to a general policy not to disclose the advice of the State Advocate. However, its contents were openly discussed in the House of Representatives in December 2020.
In November 2019, a former employee of the Tax and Customs Administration sent an urgent letter to the House of Representatives. The employee processed objections submitted to the Benefits department between 2014 and 2016. In the letter, he stated that parents were treated unfairly and that the activities did not have a sound legal basis. He also wrote that he reported this to his supervisor on several occasions, but that nothing was done about it.
In October 2020, it became public that in-house counsel Sandra Palmen had also reported unlawful acts at the Benefits department in 2017. Based on a decision by the Council of State, she said that the Tax and Customs Administration acted reprehensibly. This report did not lead to a change in policy either.
Advisory Committee for the Implementation of Benefits
On 1 July 2019, State Secretary for Finance Menno Snel established the Advisory Committee for the Implementation of Benefits (Dutch: Adviescommissie uitvoering toeslagen). The committee was chaired by former minister and former vice-chairman of the Council of State Piet Hein Donner, and was therefore nicknamed the "Donner Committee". The committee also included former State Secretary for Social Affairs and Employment Jetta Klijnsma and a jurist. The task of this committee was to advise on how to improve the benefits system. The committee also had the specific task of assessing the scope for handling the childcare benefits scandal cases.
On 12 March 2020, the committee presented its final report, "Looking Back in Astonishment". In the report, the committee recommended extending the compensation scheme for about 300 victims to other parents who were "treated with an institutional bias".
The findings of this report were, however, criticised. For example, the committee was accused of not being critical enough of the role of the Council of State, of which Donner himself was vice-chairman at the time. The committee was also accused of keeping politicians out of harm's way, especially former Minister of Social Affairs and Employment Lodewijk Asscher. A Dutch news site argued that the committee's main conclusions did not correspond to the findings of its inquiry.
Dutch Data Protection Authority
After reports from RTL Nieuws and Trouw on the use of racial profiling in the assessment of benefit applications, the Dutch Data Protection Authority (AP) decided in May 2019 to start an investigation into the Tax and Customs Administration. In July 2020, the chairman of the AP presented the report to State Secretary Van Huffelen. The AP described the Tax and Customs Administration's working method as "unlawful, discriminatory and improper" and stated that it had seriously violated the General Data Protection Regulation (GDPR).
Although the AP considered the practices discriminatory, it concluded there was no ethnic profiling. The AP also stated in the report that the Tax and Customs Administration had not cooperated in the investigation. Based on this report, the AP was considering a sanction for the Tax and Customs Administration.
Parliamentary Interrogation Committee on Childcare Benefits
On the initiative of member Bart Snels of GroenLinks, the House of Representatives established the Parliamentary Interrogation Committee on Childcare Benefits (POK) on 2 July 2020. The aim was to find out to what extent the cabinet was aware of the childcare benefits scandal and why it took until 2019 for it to become public. A minority of the House of Representatives also wanted former members of the House to be eligible to be heard during the interrogations. The decision not to allow this was criticised, because as co-legislator and controller of the government, the House of Representatives also had a role in the childcare benefits scandal.
Because statements during a parliamentary questioning are no longer legally usable for criminal investigation and a criminal investigation by the Public Prosecution Service was still ongoing, the questioning was coordinated with the Public Prosecution Service. Civil servants were not asked about racial profiling by the tax authorities, so that they may still be prosecuted for those acts in the future.
The parliamentary interrogation committee consisted of members of the House of Representatives from several parties.
Interrogations
The investigation took place in November 2020. In the first week, twelve experts and former top officials from the Tax and Customs Administration, the Ministry of Finance and the Ministry of Social Affairs and Employment testified. On 18 November, the former directors of the Tax and Customs Administration were questioned by the committee. They blamed the childcare benefits scandal on the Ministry of Social Affairs and Employment. A day later, the officials of the Ministry of Social Affairs and Employment, in turn, referred back to the Tax and Customs Administration. During the interrogation of Sandra Palmen, Renske Leijten asked her to read part of her memorandum, so that redacted passages became public. This revealed that she already advised in 2017 not to continue litigating against parents.
In the second week, seven (former) members of government were interrogated:
Frans Weekers (State Secretary for Finance, 2010–2014)
Eric Wiebes (State Secretary for Finance, 2014–2017)
Menno Snel (State Secretary for Finance, 2017–2019)
Wopke Hoekstra (Minister of Finance, 2017–2022)
Lodewijk Asscher (Minister of Social Affairs and Employment, 2012–2017)
Tamara van Ark (State Secretary for Social Affairs and Employment, 2017–2020)
Mark Rutte (Prime Minister, 2010–2024)
Report
On 17 December 2020, the committee presented a report entitled "Unprecedented Injustice" to Speaker of the House of Representatives Khadija Arib. The report criticised the Tax and Customs Administration, the Ministry of Social Affairs, the cabinet, the Council of State, and also the House of Representatives itself. The committee wrote that the affected parents did not receive the protection they deserved as a consequence of the group penalties implemented by the Ministry of Finance, thus violating the "fundamental principles of the rule of law".
In particular, the committee was critical of the information provided by the Tax and Customs Administration, both towards its own ministers and the House of Representatives, as well as towards affected parents, the judiciary and the media. There was also criticism of the so-called "Rutte doctrine", a term that originated from a text message from a civil servant to Prime Minister Mark Rutte that was discussed during the interrogations. This doctrine states that communication between officials and ministers did not have to be made public. Since recommendations fell outside the remit of this committee, they urged those involved to find out how this could have been prevented.
Consequences
Political consequences
On 4 December 2019, a motion of no confidence was filed against State Secretary for Finance Menno Snel in a debate on the childcare benefits scandal. The motion was not passed by the House of Representatives, as the coalition parties VVD, CDA, D66 and ChristenUnie, the opposition parties GroenLinks and SGP, and independent member Van Haga voted against the motion. On 18 December 2019, Snel announced his resignation during a second debate about the scandal. He was succeeded by two new state secretaries: Alexandra van Huffelen and Hans Vijlbrief (both D66). The portfolio of Van Huffelen includes the Benefits and Customs departments of the Tax and Customs Administration, and therefore she was given the responsibility for resolving the childcare benefits scandal.
In December 2020, after the report of the parliamentary interrogation committee had been published, former Minister of Social Affairs and Employment Lodewijk Asscher personally apologised for his role in the childcare benefits scandal. His role in the affair led to a discussion within his party about his position as party leader and lead candidate for the 2021 general election. Initially, he indicated that he wanted to continue as party leader. However, on 14 January 2021, Asscher announced that he would step down as party leader and candidate MP.
On 10 January 2021, GroenLinks leader Jesse Klaver announced a motion of no confidence against the third Rutte cabinet for an upcoming debate on 15 January about the report of the parliamentary interrogation committee. The entire opposition signalled that they either supported the motion or seriously considered supporting it. Shortly before the debate, the cabinet collectively decided to resign and to continue as a demissionary cabinet. In addition, Minister of Economic Affairs and Climate Policy Eric Wiebes decided to resign immediately.
Prosecution
On 28 November 2019, the Parliamentary Committee for Finance explored the possibilities of prosecuting the then State Secretary for Finance Menno Snel and his civil servants. Around the same time, there was also a call within the civil service of the Tax and Customs Administration for disciplinary and judicial measures against the executives involved. After Snel resigned, Minister of Finance Wopke Hoekstra indicated on 12 January 2020 that he saw no indications of criminal acts by the Tax and Customs Administration. At the insistence of the House of Representatives, Hoekstra nevertheless decided to ask an external agency to reassess the information for criminality and also called on everyone to report information about criminal acts.
On 19 May 2020, the Ministry of Finance filed a complaint against the Tax and Customs Administration as a result of the childcare benefits scandal, related to professional discrimination from 2013 to 2017. MP Pieter Omtzigt expressed his concerns about a criminal investigation by the Public Prosecution Service, given that the Public Prosecution Service itself also played a role in the childcare benefits scandal, by tackling matters under administrative rather than criminal law in consultation with the tax authorities.
In addition to the report filed by the Ministry of Finance, on 28 February 2020, five more reports had been filed with the Public Prosecution Service against the Tax Authorities for criminal acts regarding the childcare benefits scandal. One complaint is known to have been filed by affected parents in December 2019, but no person or reason is known for the others.
On 7 January 2021, the Public Prosecution Service announced that it would not start a criminal investigation based on the Ministry of Finance's report, because after a careful assessment there was no evidence of extortion by officials or professional discrimination. In addition, the Public Prosecution Service referred to the sovereign immunity of the Tax and Customs Administration, which also covers officials who implemented the policy, provided that they did not act for their own gain or interest. The Public Prosecution Service stated that the incorrect treatment of the parents was due to administrative and political choices, for which accountability belongs in the political domain. A group of affected parents and their lawyers indicated that they intended to sue the Public Prosecution Service to compel criminal prosecution.
On 12 January 2021, a group of twenty affected parents filed a complaint against several government officials involved: Tamara van Ark, Wopke Hoekstra, Eric Wiebes, Menno Snel, and Lodewijk Asscher. According to the lawyer representing the parents, these (former) ministers and state secretaries are guilty of a criminal offence and negligence. Because it concerns (former) members of government, this declaration was filed with the Attorney General of the Supreme Court.
By 3 February 2021, the group of affected parents had grown to 80 members. Their lawyer, Vasco Groeneveld, stated that many of the affected parents were not willing to file any complaint against these government officials because they were afraid that it would be used against them. Eighty parents also filed a complaint against Prime Minister Mark Rutte because of his responsibility in the affair. Several released documents show that the Prime Minister had been involved in the decision to take steps since May 2019, while unlawful collection by the Tax Authorities continued until November 2019 or longer. Groeneveld explained that the Prime Minister had already been aware of abuses by the Benefits department in the autumn of 2018.
Compensation
In March 2020, the Donner Committee recommended compensating wrongly accused parents. A day later, State Secretary Van Huffelen put forward a more extensive compensation scheme, totalling half a billion euros. In July 2020, a special department at the Ministry of Finance was created for this compensation. Because the payment of compensation was slow, the Socialist Party successfully pushed for a Christmas gift of 750 euros, which was paid in December 2020 to 8,500–9,500 of the affected parents. Later that month, 7,000 more received it as well.
In response to the report of the parliamentary interrogation committee, Van Huffelen announced on 22 December 2020 that all wrongly accused parents would receive €30,000 compensation, regardless of the financial loss, unless they qualify for higher compensation. This should take place within four months, for which the recovery operation will be expanded.
In July 2020, it became public that State Secretary Snel had wanted to compensate the victimised parents in the CAF 11 Hawaii case earlier, in June 2019. This was rejected at the time by the cabinet, partly because it wanted to wait for the Donner Committee's report and partly for fear of setting a precedent. There was also a dispute between the Ministry of Social Affairs and Employment and the Ministry of Finance about who would pay for the compensation.
See also
De Jacht op Meral Ö – 2024 Dutch drama film about the scandal
British Post Office scandal – similar multi-year government intransigence in the United Kingdom.
Robodebt scheme – controversial Australian automated data-matching program for welfare debt recovery, scrapped in 2020.
References
External links
2021 in the Netherlands
2021 scandals
21st-century scandals
Discrimination in the Netherlands
Financial scandals
Government by algorithm
Political scandals in the Netherlands
Welfare fraud | Dutch childcare benefits scandal | Engineering | 5,178 |
27,452,940 | https://en.wikipedia.org/wiki/Antibodies%20from%20lymphocyte%20secretions | The antibodies from lymphocyte secretions (ALS) assay is an immunological assay to detect active diseases such as tuberculosis, cholera, and typhoid. The ALS assay has recently attracted attention in the scientific community because it is increasingly used for the diagnosis of tuberculosis. The principle is based on the secretion of antibody by in vivo activated plasma B cells, which are found in the blood circulation for a short period of time in response to TB antigens during active TB infection, but not during latent TB infection.
Procedure
PBMCs were separated from blood on Ficoll-Paque by differential centrifugation and were suspended in culture medium in 24-well tissue culture plates. Different dilutions of PBMCs were incubated at 37 °C with 5% CO2. Culture supernatants were collected at 24, 48, 72, and 96 h after incubation, and the supernatants were tested against BCG or PPD by ELISA. The ELISA titer indicates a positive or negative result.
Advantages in TB diagnosis
The main advantages are high sensitivity (>93%) and early detection of active TB. Because this method does not require a specimen taken from the site of disease, it may also be useful in the diagnosis of paucibacillary childhood TB. The secreted antibody may be preserved for a long time for further analysis.
Pitfalls
This method cannot be applied if a Mantoux test (tuberculin skin test) has been performed within the last 40 days, because the skin test can distort the results of the ALS test. The ALS test is used as a complement to other tests, e.g. chest X-ray, ESR, CRP, history of contact with an active TB case, and failure of conventional antibiotic treatment; anti-TB therapy is not prescribed if only the ALS test is positive. The reason is that this method is potentially an early biomarker of active infection: if a subject does not show any physical symptoms, doctors cannot prescribe anti-TB treatment.
References
Biochemistry methods | Antibodies from lymphocyte secretions | Chemistry,Biology | 404 |
5,012,700 | https://en.wikipedia.org/wiki/The%20White%20Balloon | The White Balloon is a 1995 Iranian film directed by Jafar Panahi, with a screenplay by Abbas Kiarostami. It was Panahi's feature-film debut as director. The film received many strong critical reviews and won numerous awards at international film festivals, including the Caméra d'Or at the 1995 Cannes Film Festival. The Guardian has listed it as one of the 50 best family films of all time, and it is on the BFI list of the 50 films you should see by the age of 14.
The film was selected as the Iranian entry for the Best Foreign Language Film at the 68th Academy Awards, but was not accepted as a nominee. Iran unsuccessfully tried to withdraw the film from contention but the academy refused to accept the withdrawal.
Plot
It is the eve of the Iranian New Year. The film opens in a Tehran market where seven-year-old Razieh (Aida Mohammadkhani) and her mother are shopping. Razieh sees a goldfish in a shop and begins to nag her hurrying mother to buy it for the festivities instead of the skinny ones in her family's pond at home. Almost all of the film's major characters are briefly seen in this market scene, though they are not introduced to the viewer until later. On their way home, mother and daughter pass a courtyard where a crowd of men has gathered to watch two snake charmers. Razieh wants to see what is happening but her mother pulls her daughter away, telling her that it is not good for her to watch these things.
Back home, Razieh is upset about her mother's refusal to let her buy a new goldfish, and continues to nag her mother. Her older brother Ali (Mohsen Kalifi) returns from a shopping errand for their father. The father complains that he asked Ali to buy shampoo, not soap, and throws the soap at him. Ali sets off to buy the shampoo, and when he returns Razieh asks him to help in changing their mother's mind about the goldfish, bribing him with a balloon. Ali thinks that the goldfish's 100-toman price is a waste of money but helps Razieh in petitioning their mother nonetheless. Her mother gives her the family's last 500-toman banknote and asks her to bring back the change. Razieh sets off with an empty glass jar to the fish shop a few blocks away.
Between their home and the fish store, Razieh manages to lose the money twice, first in an encounter with the snake charmer, and then when she drops the money through the grate at the entrance to a store which has been closed for the New Year celebration.
Razieh and Ali make several attempts to retrieve the money and while doing so encounter many people, including a kind older woman at the fish shop, the owners of a nearby shop, and an Iranian soldier. The money, however, is always just out of reach. Finally, the siblings receive help from a young Afghan street vendor selling balloons. He carries all of his balloons on a wooden stick, which has three balloons left. Razieh, Ali, and the Afghan boy are unable to retrieve the note with only the stick, so Ali comes up with the idea of sticking gum to the bottom of the stick to retrieve the bill. Ali leaves to buy gum, but returns without any, and finds that the Afghan boy has left Razieh at the grate. However, the Afghan boy soon returns with his stick, now with only one white balloon, and a pack of chewing gum he bought for the group. The group attaches a piece of gum to one end of the balloon stick, and with it they reach down through the grate and pull the money out.
The film ends, not with Ali and Razieh, but on a still shot of the young Afghan boy, as he sits at the grate watching Ali and Razieh leave for the shop and soon afterwards return home from buying the goldfish. The Afghan boy sits alone with his stick and white balloon for a while, as the Iranian year 1374 begins, then he gets up to walk away.
Reception
The White Balloon has an approval rating of 83% on review aggregator website Rotten Tomatoes, based on 24 reviews, and an average rating of 6.9/10. The website's critical consensus states: "The White Balloon tells a simple yet powerfully effective story through a child's eyes, inviting audiences to see familiar surroundings from a different perspective."
John Simon of The National Review wrote, "Few films are pure delight, but White Balloon is one of these."
Accolades
Prix de la Camera d'Or, 1995 Cannes Film Festival
Gold Award, Tokyo International Film Festival, 1995
Best International Film, Sudbury Cinéfest, 1995
International Jury Award, São Paulo International Film Festival, 1995
See also
List of submissions to the 68th Academy Awards for Best Foreign Language Film
List of Iranian submissions for the Academy Award for Best Foreign Language Film
References
External links
1995 films
1995 drama films
Iranian drama films
Iranian children's films
1990s Persian-language films
Films about fish
Films set in Tehran
Films set in Iran
Films directed by Jafar Panahi
1995 directorial debut films
Iranian independent films
Balloons
Caméra d'Or winners | The White Balloon | Chemistry | 1,088 |
1,999,046 | https://en.wikipedia.org/wiki/Vinclozolin | Vinclozolin (trade names Ronilan, Curalan, Vorlan, Touche) is a common dicarboximide fungicide used to control diseases, such as blights, rots and molds in vineyards, and on fruits and vegetables such as raspberries, lettuce, kiwi, snap beans, and onions. It is also used on turf on golf courses. Two common fungi that vinclozolin is used to protect crops against are Botrytis cinerea and Sclerotinia sclerotiorum. First registered in 1981, vinclozolin is widely used but its overall application has declined. As a pesticide, vinclozolin is regulated by the United States Environmental Protection Agency (U.S. EPA). In addition to these restrictions within the United States, as of 2006 the use of this pesticide was banned in several countries, including Denmark, Finland, Norway, and Sweden.
It has gone through a series of tests and regulations in order to evaluate the risks and hazards to the environment and animals. Among the research, a main finding is that vinclozolin has been shown to be an endocrine disruptor with antiandrogenic effects.
Use in the United States
Vinclozolin is manufactured by the chemical company BASF and has been registered for use in the United States since 1981. The following is a compilation of data indicating the national use of vinclozolin per crop (lbs AI/yr) in 1987: apricots, 124; cherries, 3,301; green beans, 13,437; lettuce, 24,779; nectarines, 1,449; onions, 829; peaches, 15,203; plums, 163; raspberries, 3,247; and strawberries, 41,006. In 1997, two applications of 285 pounds each were applied to kiwifruit in California to prevent the gray mold and soft rot caused by Botrytis cinerea. In general, the United States has seen an overall decline in the national use of vinclozolin. In 1992, a total of approximately 135,000 pounds were used. However, in 1997 this number dropped to 122,000 and in 2002 it was down to 55,000 pounds.
Preparation and application
The following chemical reactions are used to make vinclozolin: One method combines methyl vinyl ketone, sodium cyanide, 3,5-dichloroaniline, and phosgene. This process involves formation of the cyanohydrin, followed by hydrolysis of the nitrile. Vinclozolin is also prepared by the reaction of 3,5-dichlorophenyl isocyanate with an alkyl ester of 2-hydroxy-2-methylbut-3-enoic acid. Ring closure is achieved at elevated temperature.
Vinclozolin is then formulated into a dry flowable or extruded granular formulation. It can be applied through the air (aerial), through irrigation systems (chemigation), or by ground equipment. Vinclozolin is also applied to some plants, such as decorative flowers, as a dip treatment, where the plant is dipped into the fungicide solution and then dried. It is also common to spray a vinclozolin solution using thermal foggers in greenhouses.
History of regulations in the US
All pesticides sold or distributed in the United States must be registered by the U.S. EPA. Pesticides that were first registered before November 1, 1984, were reregistered so that they could be retested using now more advanced methods. Because vinclozolin was released in 1981, it has gone through both a preliminary registration and a subsequent reregistration.
Below is a list of the history of regulations for vinclozolin:
A Data Call-In (DCI), requiring data on product and residue chemistry, toxicity, environmental fate, and ecological effects, was issued in 1991, 1995, and 1996.
An Agricultural Data Call-In (AGDCI), to estimate worker exposure after application of the fungicide, was issued in 1995.
In April 1997, BASF proposed a new use for vinclozolin and thus all risks were reevaluated under the Food Quality Protection Act (FQPA).
In June 1998 the EPA put a new safety factor on vinclozolin. BASF voluntarily cancelled vinclozolin use on some fruits and turf.
On July 18, 2000, the EPA established 3-year time-limited tolerances for vinclozolin and its metabolites in certain meat, bean, and dairy products. This led to a phase-out of most domestic food uses for vinclozolin. In September 2000, the EPA received objections to the issued tolerance for beans.
Other measures that BASF has taken to reduce risks include cancelling use on decorative plants and setting up new restrictions limiting turf application to golf courses and industrial sites.
BASF has proposed to immediately eliminate or phase out uses such that only use on canola and turf will remain after 2004.
Means of exposure
The U.S. EPA has examined dietary (food and water), non-dietary, and occupational exposure to vinclozolin or its metabolites. In general, fungicides have been shown to circulate through the water and air, and it is possible for them to end up on untreated foods after application. Consumers alone cannot easily reduce their exposure because fungicides are not removed from produce that is washed with tap water. A key example of exposure to vinclozolin is through wine grapes, which are considered to account for about 2% of total vinclozolin exposure. It has been determined that people may be exposed to residues of vinclozolin and its metabolites containing the 3,5-dichloroaniline moiety (3,5-DCA) through diet, and thus tolerance limits have been established for each crop. Although vinclozolin is not registered for use by homeowners, it is still possible for people to come into contact with the fungicide and its residues. For example, golfers playing on treated golf courses, and families playing on sod which has been previously treated, may be at risk for exposure. Occupationally, workers can be exposed to vinclozolin while doing activities such as loading and mixing.
Environmental and health impacts
Antiandrogen
As part of the reregistration process, the U.S. EPA reviewed all toxicity studies on vinclozolin. The main effect induced by vinclozolin is related to its antiandrogenic activity and its ability to act as a competitive antagonist of the androgen receptor. Vinclozolin can mimic male hormones, like testosterone, and bind to androgen receptors, while not necessarily activating those receptors properly. There is evidence that vinclozolin itself binds weakly to the androgen receptor but that at least two of its metabolites are responsible for much of the antiandrogenic activity. When male rats were given low dose levels (>3 mg/kg/day) of vinclozolin, effects such as decreased prostate weight, weight reduction in sex organs, nipple development, and decreased ano-genital distance were noted. At higher dose levels, male sex organ weight decreased further, and sex organ malformations were seen, such as reduced penis size, the appearance of vaginal pouches and hypospadias. In the rat model, it has been shown that the antiandrogenic effects of vinclozolin are most prominent during the developmental stages. In utero, this sensitive period of fetal development occurs between gestation days 16-17. Embryonic exposure to vinclozolin can influence sexual differentiation, gonadal formation, and reproductive functions. In bird models, vinclozolin and its metabolites were shown in vitro and in vivo to inhibit androgen receptor binding and gene expression. Vinclozolin caused reduced egg laying, reduced fertility rate, and a reduction in successful hatches. Androgens also play a role in puberty, and it has been shown an antiandrogen like vinclozolin can delay pubertal maturation. Antiandrogenic toxins are also known to alter sexual differentiation and reproduction in the rabbit model. Male rabbits exposed to vinclozolin in utero or during infancy did not show a sexual interest in females or did not ejaculate. Since the androgen receptor is widely conserved across species lines, antiandrogenic effects would be expected in humans.
In vertebrates, vinclozolin also acts as a neuroendocrine disruptor, affecting behaviors tied to locomotion, cognition, and anxiety.
Progesterone and estrogen effects in rats
In rats, vinclozolin has been shown to affect other steroid hormone receptors, such as those of progesterone and estrogen. Just as with androgens, the timing of the exposure to vinclozolin determines the magnitude of the effects related to these hormones. In a study with rats, in vitro research showed the ability of two vinclozolin metabolites to bind to the progesterone receptor. However, the same study in vivo using adult male rats showed no effects. When mice experienced vinclozolin exposure in utero, male offspring exhibited up-regulated estrogen receptor and up-regulated progesterone receptor. In females, vinclozolin down-regulated expression of estrogen receptors and up-regulated progesterone receptor expression. These results correspond to feminization of males and virilization (masculinization) of females.
Transgenerational effects
In rats, vinclozolin has been demonstrated to have transgenerational effects, meaning that not only is the initially exposed animal affected, but effects are also seen in subsequent generations. One study demonstrated that vinclozolin impaired male fertility not only in the first generation that was exposed in utero, but in males born for three generations and beyond. Furthermore, when affected males were mated with normal females, some of the offspring were sterile and some had reduced fertility. After three generations, male offspring continued to show low sperm count, prostate disease and high rates of testicular cell apoptosis. Other studies conducted experiments where rat embryos were exposed to vinclozolin during sex determination. F1 (first generation) vinclozolin-treated males were bred with F1 vinclozolin-treated females. This pattern continued for three generations. The initial F0 mother was the only subject that was directly exposed to doses of vinclozolin. F1-F4 generation males all showed an increase in the prevalence of tumors, prostate disease, kidney disease, testis abnormalities and immune failures when compared to the control group. F1-F4 females also showed an increased incidence of tumors and kidney disease. Furthermore, transgenerationally transmitted changes in mate preference and anxiety behavior have also been observed in rats following exposure to vinclozolin. It has been reported that these transgenerational effects correlate with epigenetic changes, specifically an alteration in DNA methylation in the male germ line. However, these transgenerational changes have not been successfully reproduced by scientists at BASF, the manufacturer of vinclozolin.
Links to cancer
The U.S. EPA has classified vinclozolin as a possible human carcinogen. Vinclozolin induces an increase in Leydig cell tumors in rats. The 3,5-DCA metabolite is thought to possess a mode of tumor induction based on its similarity to p-chloroaniline.
Environment
Laboratory tests indicate that vinclozolin easily breaks down and dissipates in the environment with the help of microbes. Of its several metabolites, 3,5-dichloroaniline resists further degradation. In terrestrial field dissipation studies conducted in various states, vinclozolin dissipated with a half-life between 34 and 94 days. Half-lives including residues can reach up to 1,000 days. Residues may accumulate and be available for future crop uptake.
Alternative fungicides
Since the phase-out of vinclozolin, farmers are faced with fewer options to control gray and white mold. The New York State Agricultural Experiment Station has carried out efficacy trials for gray and white mold. Research has shown potential alternatives to vinclozolin. Trifloxystrobin (Flint), iprodione (Rovral), and cyprodinil plus fludioxonil (Switch) control gray mold. Thiophanate-methyl (Topsin M) was as effective as vinclozolin in controlling white molds. Switch was the most promising alternative to vinclozolin for controlling both gray and white mold on pods and for increasing marketable yield.
References
External links
BBC article on Vinclozolin in rats
Chloroarenes
Endocrine disruptors
Fungicides
Nonsteroidal antiandrogens
Oxazolidinediones
Suspected testicular toxicants
Vinyl compounds | Vinclozolin | Chemistry,Biology | 2,705 |
1,275,473 | https://en.wikipedia.org/wiki/Contamination%20delay | In digital circuits, the contamination delay (denoted as tcd) is the minimum amount of time from when an input changes until any output starts to change its value. This change in value does not imply that the value has reached a stable condition. The contamination delay only specifies that the output rises (or falls) to 50% of the voltage level for a logic high. The circuit is guaranteed not to show any output change in response to an input change before tcd time units (calculated for the whole circuit) have passed. Determining the contamination delay of a combinational circuit requires identifying the shortest path of contamination delays from input to output and adding up each tcd along this path.
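For illustration, this shortest-path computation can be written as a simple recursion over the gate graph. The netlist and per-gate delay values in the following sketch are hypothetical, chosen only to show the idea:

```python
# Minimal sketch: contamination delay of a combinational circuit as the
# minimum-delay input-to-output path, summing each gate's t_cd on the way.
from functools import lru_cache

# gate -> (contamination delay in ps, list of fan-in nodes); hypothetical values
netlist = {
    "and1": (30, ["in_a", "in_b"]),
    "or1":  (25, ["in_b", "in_c"]),
    "xor1": (40, ["and1", "or1"]),   # circuit output
}

@lru_cache(maxsize=None)
def t_cd(node: str) -> int:
    """Earliest time (ps) at which `node` can begin to change."""
    if node not in netlist:          # primary input: may change at t = 0
        return 0
    delay, fanin = netlist[node]
    return delay + min(t_cd(src) for src in fanin)

print(t_cd("xor1"))  # 65: shortest path in_b -> or1 -> xor1 (25 + 40)
```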
For a sequential circuit such as two D flip-flops connected in series, the contamination delay of the first flip-flop must be factored in to avoid violating the hold-time constraint of the second flip-flop receiving the output from the first flip-flop. Here, the contamination delay is the amount of time needed for a change in the flip-flop clock input to result in the initial change at the flip-flop output (Q).
If there is insufficient delay from the output of the first flip-flop to the input of the second, the input may change before the hold time has passed. Because the second flip-flop is still unstable, its data would then be "contaminated." Every path from an input to an output can be characterized with a particular contamination delay.
Well-balanced circuits will have similar speeds for all paths through a combinational stage, so the minimum propagation time is close to the maximum. This corresponding maximum time is the propagation delay. The condition of data being contaminated is called a race.
References
Digital Design and Computer Architecture 2nd edition, David Money Harris and Sarah L. Harris, , Morgan Kaufmann, 2012
Timing in electronic circuits | Contamination delay | Technology | 374 |
55,907,709 | https://en.wikipedia.org/wiki/Cris%20Thomas | Cris Thomas (also known as Space Rogue) is an American cybersecurity researcher, white hat hacker, and award-winning, best-selling author. A founding member and researcher at the high-profile hacker security think tank L0pht Heavy Industries, Thomas was one of seven L0pht members who testified before the U.S. Senate Committee on Governmental Affairs (1998) on the topic of government and homeland computer security, specifically warning of internet vulnerabilities and claiming that the group could "take down the internet within 30 minutes".
Subsequently, Thomas pursued a career in Cyber Security Research while also embracing a public advocacy role as a cyber security subject-matter expert (SME) and pundit. Granting interviews and contributing articles, Space Rogue's advocacy has served to educate and advise corporations, government, and the Public about security concerns and relative risk in the areas of election integrity, cyber terrorism, technology, the anticipation of new risks associated with society's adoption of the Internet of things, and balancing perspective (risk vs. hype).
Career
Cyber Security
A founding member of the hacker think tank L0pht Heavy Industries, Thomas was the first of L0pht's members to leave following the merger of L0pht with @Stake in 2000, and the last to reveal his true name. Thomas was one of seven L0pht members who testified before the U.S. Senate Committee on Governmental Affairs (1998). Testifying under his internet handle, Space Rogue, Thomas and the other L0pht members informed the government of current and future internet vulnerabilities to which federal and public channels were susceptible. The testimony marked the first time that persons not under federal witness protection were permitted to testify under assumed names.
While at the L0pht, Thomas created The Whacked Mac Archives and The Hacker News Network. In addition, he released at least one security advisory detailing a flaw in FWB's Hard Disk Toolkit.
Thomas continued a career in Cyber Security Research at @Stake, Guardent, Trustwave (Spiderlabs), Tenable, and IBM (X-Force Red). Selected to serve as a panelist during a 2016 Atlantic Council cyber risk discussion series, and a webinar speaker for the National Science Foundation's WATCH series, Thomas has embraced a public advocacy role as a cyber security subject-matter expert (SME) and pundit, granting interviews and contributing articles to educate the public about security concerns and relative risk. Topics include election integrity, cyber terrorism, technology, password security, the anticipation of new risks associated with society's adoption of the Internet of things, and balancing perspective (risk vs. hype).
In response to a 2016 United States Government Accountability Office report revealing the nation's nuclear weapons were under the control of computers that relied on outdated 8" floppy disks, Thomas argued that the older computers, data storage systems, programming languages, and lack of internet connectivity would make it more difficult for hackers to access the systems, effectively reducing the vulnerability of the weapon control systems to hacking.
Following cyber security mega-breaches at Target, Home Depot, and the U.S. Office of Personnel Management, Thomas advocated for proactive implementation of basic security measures as the most effective means to thwart similar mega-threats. Bluntly stating that the gap between knowledge and implementation leaves companies and individuals at unnecessary risk, Thomas’ recommendation focused on simple measures that have been known for one to two decades, but which organizations have not implemented universally. Thomas had identified retail cyber security breaches, including that at FAO Schwarz, as early as 1999.
In 2017, at the Defcon hacker conference Thomas assisted with escorting Rep. Will Hurd (R) and Rep. Jim Langevin (D) around the conference area through the various villages.
At Defcon 27 in 2019 Thomas appeared on a panel with Rep. Langevin (D-RI), Rep. Lieu (D-CA), and former Rep. Jane Harman entitled "Hacking Congress: The Enemy of My Enemy Is My Friend."
During the panel Thomas was quoted as saying “It’s up to us as a community to engage with those people…to educate them”, "But Congress doesn't work that way; it doesn't work at the 'speed of hack'. If you're going to engage with it, you need to recognize this is an incremental journey” and “it takes 20 years to go from hackers in Congress to Congress at DEF CON”.
The Whacked Mac Archives
The Whacked Mac Archives was an FTP download site managed by Thomas with the world's largest collection of Apple Macintosh hacking tools. The total size of all the tools on the site was 20 MB.
A CD copy of the contents of the FTP site was advertised for sale in 2600: The Hacker Quarterly.
Hacker News Network
Serving as Editor-in-Chief, Thomas founded and managed L0pht's online newsletter and website, known as the Hacker News Network (or simply Hacker News or HNN). Originally created to rapidly share discoveries about computer security, Hacker News also became a forum for users to post security alerts as vulnerabilities were identified. The publication grew, eventually supporting paid advertising and an audience that included technology journalists and companies with an interest in cybersecurity. The website can be seen in several background shots of the video "Solar Sunrise: Dawn of a New Threat" produced by the National Counterintelligence Center in 1999.
After L0pht's merger with @Stake in 2000, the Responsible disclosure-focused Hacker News Network was replaced with Security News Network.
After a decade offline, Hacker News Network was relaunched on Jan. 11, 2010, with video reports about security; the last videos were published in 2011. As of 2018, Hacker News Network redirects to spacerogue.net.
CyberSquirrel1 (CS1)
In 2013, Thomas created the project CyberSquirrel1 as a satirical demonstration of the relative risk of Cyberwarfare attacks on critical infrastructure elements such as the North American electrical grid. Started as a Twitter feed, the CyberSquirrel1 project expanded to include a full website and CyberSquirrel Tracking Map; as the dataset grew, Attrition.org's Brian Martin (alias “Jared E. Richo” a/k/a Jericho) joined the project in 2014. CyberSquirrel1's results disrupted public perception regarding the prevalence of nation-based hacking cyberwarfare attacks, concluding that damage due to cyberwarfare (for example, Stuxnet) was "tiny compared to the cyber-threat caused by animals", referring to electrical disruptions caused by squirrels.
An archive containing the full data set and supporting material of the project was uploaded to the Internet Archive under the Creative Commons license on January 19, 2021.
Election Security
As the 2015-2016 alleged Russian interference in the 2016 United States elections unfolded, public and media interest in hacking and hackers increased. Leading up to the 2016 election, Thomas was interviewed for mainstream media productions, including CNBC's On the Money.
After the release of the Joint Analysis Report, Thomas called for expanded detail on Indicators of Compromise in Federal Joint Analysis Reports, indicating that increased transparency and IP address reporting were instrumental for enhancing security.
Prior to the 2018 election, Thomas continued his advocacy, speaking with CBS News and other outlets about election security and the vulnerability of voting machines.
Books
In February 2023 Thomas released his first book, Space Rogue: How the Hackers Known as L0pht Changed the World.
Written as a personal memoir, the book detailed his childhood growing up in Maine, how he discovered the online world of BBS’s and met the other members of the hacker collective L0pht Heavy Industries. The book covers how the L0pht released security vulnerability information, created L0phtcrack, gained media recognition, and testified in front of Congress in 1998. The book also covers the L0pht’s transition to the security consultancy @Stake, and how the L0pht’s impact still ripples throughout the information security industry today.
The book spent several weeks in the Amazon top 10 in the Computer & Technology Biographies category and briefly hit number 1. The book was a finalist in the 2023 International Book Awards and a winner of the 2023 National Indie Excellence Awards (NIEA).
References
External links
Cybersquirrel1 website & event tracking map
Space Rogue (Cris Thomas) personal website
L0pht
Living people
Hackers
Writers about computer security
American computer scientists
Year of birth missing (living people) | Cris Thomas | Technology | 1,781 |
23,981,506 | https://en.wikipedia.org/wiki/C30H48O3 | {{DISPLAYTITLE:C30H48O3}}
The molecular formula C30H48O3 (molar mass: 456.70 g/mol) can represent the following compounds:
Oleanolic acid, a pentacyclic triterpenoid commonly found in Olea europaea (olives) and their oils.
Ursolic acid, a pentacyclic triterpenoid
Lucidadiol, a sterol
Trametenolic acid, a triterpene
References | C30H48O3 | Chemistry | 109 |
4,544,077 | https://en.wikipedia.org/wiki/Heatwork | Heatwork is the combined effect of temperature and time. It is important to several industries:
Ceramics
Glass and metal annealing
Metal heat treating
While the concept of heatwork is taught in materials science courses, it is not a defined measurement or scientific concept.
Pyrometric devices can be used to gauge heatwork, as they deform or contract in response to it to produce temperature equivalents. Within tolerances, firing can be undertaken at a lower temperature for a longer period to achieve comparable results. When the amount of heatwork of two firings is the same, the pieces may look identical, but there may be differences that are not visible, such as in mechanical strength and microstructure.
External links
Temperature equivalents table & description of Bullers Rings.
Temperature equivalents table & description of Nimra Cerglass pyrometric cones.
Temperature equivalents table & description of Orton pyrometric cones.
Temperature equivalents table of Seger pyrometric cones.
Temperature Equivalents, °F & °C for Bullers Ring.
Glass physics
Pottery
Metallurgy
Ceramic engineering | Heatwork | Physics,Chemistry,Materials_science,Engineering | 213 |
24,201,506 | https://en.wikipedia.org/wiki/C9H12O3 | {{DISPLAYTITLE:C9H12O3}}
The molecular formula C9H12O3 (molar mass: 168.19 g/mol, exact mass: 168.078 u) may refer to:
Homovanillyl alcohol
4-Ipomeanol (4-IPO)
Veratrole alcohol
Molecular formulas | C9H12O3 | Physics,Chemistry | 74 |
191,923 | https://en.wikipedia.org/wiki/Denatured%20alcohol | Denatured alcohol, also known as methylated spirits, metho, or meths in Australia, Ireland, New Zealand, South Africa, and the United Kingdom, and as denatured rectified spirit, is ethanol that has additives to make it poisonous, bad-tasting, foul-smelling, or nauseating to discourage its recreational consumption. It is sometimes dyed so that it can be identified visually. Pyridine and methanol, each and together, make denatured alcohol poisonous; denatonium makes it bitter.
Denatured alcohol is used as a solvent and as fuel for alcohol burners and camping stoves. Because of the diversity of industrial uses for denatured alcohol, hundreds of additives and denaturing methods have been used. The main additive usually is 10% methanol (methyl alcohol), hence the name methylated spirits. Other common additives include isopropyl alcohol, acetone, methyl ethyl ketone, and methyl isobutyl ketone.
Denaturing alcohol does not alter the ethanol molecule (chemically or structurally), unlike denaturation in biochemistry. Rather, the ethanol is mixed with other chemicals to form a foul-tasting, often toxic, solution. For many of these solutions, it is intentionally difficult to separate the components.
In many countries, denatured alcohol is traditionally dyed for safety reasons with methyl violet or a dye of similar hue (crystal violet, methylene blue). In Central and Eastern Europe (Czechoslovakia, Poland and others) this was mandatory during the communist era.
Uses
In many countries, sales of alcoholic beverages are heavily taxed for revenue and public health policy purposes (see Pigovian tax). In order to avoid paying beverage taxes on alcohol that is not meant to be consumed, the alcohol is usually "denatured", or treated with added chemicals to make it unpalatable. Its composition is tightly defined by government regulations in countries that tax alcoholic beverages. Denatured alcohol is used identically to ethanol itself but only for applications that involve fuel, surgical and laboratory stock. Pure ethanol is required for food and beverage applications and certain chemical reactions where the denaturant would interfere. In molecular biology, denatured ethanol should not be used for the precipitation of nucleic acids, since the additives may interfere with downstream applications.
Denatured alcohol has no advantages for any purpose over normal ethanol; it is a public policy compromise. As denatured alcohol is sold without the often heavy taxes on alcohol suitable for consumption, it is a cheaper solution for most uses that do not involve drinking. If pure ethanol were made cheaply available for fuel, solvents, or medicinal purposes, it could be used as a beverage without payment of alcohol tax.
Toxicity
Despite its poisonous content, denatured alcohol is sometimes consumed as a surrogate alcohol. This can result in blindness or death if it contains methanol. For instance, during the thirteen-year prohibition of alcohol in the US, federal law required methanol be added to domestically manufactured industrial alcohols. From 25–27 December 1926, which was roughly at the midpoint of nationwide alcohol prohibition, 31 people in New York City alone died of methanol poisoning. To help prevent this, denatonium is often added to give the substance an extremely bitter flavour. Substances such as pyridine are added to give the mixture an unpleasant odour, and agents such as syrup of ipecac may also be included to induce vomiting.
New Zealand has removed methanol from its government-approved "methylated spirits" formulation.
In the USSR, denatured alcohol was used as a surrogate for drinking alcohol, along with many other technical ethanol-containing products. This was especially common during various anti-alcohol campaigns initiated by the Soviet government. There is much evidence of this in popular folklore as well as in literature and music. The word "denaturat" (Russian: денатурат) even gained a special symbolic meaning. Its consumption is mentioned in songs of Vladimir Vysotsky, as well as written works of Venedikt Yerofeev, Yuz Aleshkovsky, and Vyacheslav Shishkov.
Formulations
Diverse additives are used to make it difficult to use distillation or other simple processes to reverse the denaturation. Methanol is commonly used both because its boiling point is close to that of ethanol and because it is toxic. Another typical denaturant is pyridine. Often the denatured alcohol is dyed with methyl violet.
There are several grades of denatured alcohol, but in general the denaturants used are similar. As an example, the formulation for completely denatured alcohol, according to 2005 British regulations was as follows:
The European Union agreed in February 2013 to the following mutual procedures for the complete denaturing of alcohol:
Specially denatured alcohol
A specially denatured alcohol (SDA) is one of many types of denatured alcohol specified under the United States Title 27 of the Code of Federal Regulations Section 21.151. A specially denatured alcohol is a combination of ethanol and another chemical substance, e.g., ethyl acetate in SDA 29, 35, and 35A, added to render the mixture unsuitable for drinking. SDAs are often used in cosmetic products, and can also be used in chemical manufacturing, pharmaceuticals, and solvents. Another example is SDA 40-B, which contains tert-butyl alcohol and denatonium benzoate, N.F. In the United States and other countries, the use of denatured alcohol unsuitable for beverages avoids excise taxes on alcohol.
See also
Aversive agent
Bitterant
E85
Isopropyl alcohol
Rubbing alcohol
References
External links
27 CFR 20, regulations relating to denatured alcohol in the United States
Specifications and licensing of methylated spirits in the United Kingdom
European Community COMMISSION REGULATION (EC) No 162/2013 on the mutual recognition of procedures for the complete denaturing of alcohol for the purposes of exemption from excise duty
HM Revenue and Customs: Production, distribution and use of denatured alcohol
"List of SDAs with denaturing chemical"
Alcohols
Alcohol solvents
Excipients
Product safety
Methanol
Adulteration | Denatured alcohol | Chemistry | 1,300 |
3,094,946 | https://en.wikipedia.org/wiki/Tate%20twist | In number theory and algebraic geometry, the Tate twist, named after John Tate, is an operation on Galois modules.
For example, if K is a field, GK is its absolute Galois group, and ρ : GK → AutQp(V) is a representation of GK on a finite-dimensional vector space V over the field Qp of p-adic numbers, then the Tate twist of V, denoted V(1), is the representation on the tensor product V⊗Qp(1), where Qp(1) is the p-adic cyclotomic character (i.e. the Tate module of the group of roots of unity in the separable closure Ks of K). More generally, if m is a positive integer, the mth Tate twist of V, denoted V(m), is the tensor product of V with the m-fold tensor product of Qp(1). Denoting by Qp(−1) the dual representation of Qp(1), the (−m)th Tate twist of V can be defined as the tensor product of V with the m-fold tensor product of Qp(−1).
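Spelled out in symbols (a sketch of these standard definitions with the tensor powers written explicitly; the notation here is reconstructed, not quoted from the article):

```latex
% Tate twists of a p-adic representation V; Q_p(-1) is the dual of Q_p(1).
V(1) = V \otimes_{\mathbf{Q}_p} \mathbf{Q}_p(1), \qquad
V(m) = V \otimes_{\mathbf{Q}_p} \mathbf{Q}_p(1)^{\otimes m}, \qquad
V(-m) = V \otimes_{\mathbf{Q}_p} \mathbf{Q}_p(-1)^{\otimes m}.
```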
References
Number theory
Algebraic geometry | Tate twist | Mathematics | 233 |
6,444,283 | https://en.wikipedia.org/wiki/Comparison%20of%20integrated%20development%20environments | The following tables list notable software packages that are nominal IDEs; standalone tools such as source-code editors and GUI builders are not included. These IDEs are listed in alphabetic order of the supported language.
ActionScript
Ada
Assembly
BASIC
C/C++
C#
COBOL
Common Lisp
Component Pascal
D
Eiffel
Erlang
Go to this page: Source code editors for Erlang
Fortran
F#
Groovy
Haskell
Haxe
Go to this page: Comparison of IDE choices for Haxe programmers
Java
Java has strong IDE support, due not only to its historical and economic importance, but also due to a combination of reflection and static-typing making it well-suited for IDE support.
Some of the leading Java IDEs (such as IntelliJ and Eclipse) are also the basis for leading IDEs in other programming languages (e.g. for Python, IntelliJ is rebranded as PyCharm, and Eclipse has the PyDev plugin.)
Open
Closed
JavaScript
Julia
Lua
Pascal, Object Pascal
Perl
PHP
Python
R
Racket
Ruby
Rust
Scala
Smalltalk
Tcl
Unclassified
IBM Rational Business Developer
Mule (software)
Visual Basic .NET
See also
Comparison of assemblers
Graphical user interface builder
List of compilers
Source-code editor
Game integrated development environment
References
Integrated development environments | Comparison of integrated development environments | Technology | 270 |
68,194,734 | https://en.wikipedia.org/wiki/Corepresentations%20of%20unitary%20and%20antiunitary%20groups | In quantum mechanics, symmetry operations are of importance in giving information about solutions to a system. Typically these operations form a mathematical group, such as the rotation group SO(3) for spherically symmetric potentials. The representation theory of these groups leads to irreducible representations, which for SO(3) gives the angular momentum ket vectors of the system.
Standard representation theory uses linear operators. However, some operators of physical importance such as time reversal are antilinear, and including these in the symmetry group leads to groups including both unitary and antiunitary operators.
This article is about corepresentation theory, the equivalent of representation theory for these groups. It is mainly used in the theoretical study of magnetic structure but is also relevant to particle physics due to CPT symmetry. It gives basic results, the relation to ordinary representation theory and some references to applications.
Corepresentations of unitary/antiunitary groups
Eugene Wigner showed that a symmetry operation S of a Hamiltonian is represented in quantum mechanics either by a unitary operator, S = U, or an antiunitary one, S = UK, where U is unitary and K denotes complex conjugation. Antiunitary operators arise in quantum mechanics due to the time reversal operator.
If the set of symmetry operations (both unitary and antiunitary) forms a group, then it is commonly known as a magnetic group and many of these are described in magnetic space groups.
A group of unitary operators may be represented by a group representation. Due to the presence of antiunitary operators this must be replaced by Wigner's corepresentation theory.
Definition
Let G be a group with a subgroup H of index 2. A corepresentation is a homomorphism into a group of operators over a vector space over the complex numbers, where for all u in H the image of u is a linear operator and for all a in the coset G − H the image of a is antilinear (where '*' means complex conjugation).
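A sketch of the linearity conditions the lost display formula expressed, reconstructed from standard corepresentation theory (the symbol D for the corepresentation map is introduced here):

```latex
% D(u) is linear for u in H; D(a) is antilinear for a in the coset G \setminus H
% ('*' denotes complex conjugation of the scalar):
D(u)(\lambda x + \mu y) = \lambda\, D(u)x + \mu\, D(u)y, \qquad u \in H,
\qquad
D(a)(\lambda x + \mu y) = \lambda^{*} D(a)x + \mu^{*} D(a)y, \qquad a \in G \setminus H.
```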
Properties
As this is a homomorphism, the matrices satisfy D(u1)D(u2) = D(u1u2), D(u)D(a) = D(ua), D(a)D(u)* = D(au) and D(a1)D(a2)* = D(a1a2), a complex conjugate appearing on the second factor whenever the first factor is antilinear.
Reducibility
Two corepresentations D and D′ are equivalent if there is a matrix V such that D′(u) = V⁻¹D(u)V for all u in H and D′(a) = V⁻¹D(a)V* for all a in G − H.
Just like representations, a corepresentation is reducible if there is a proper subspace invariant under the operations of the corepresentation. If the corepresentation is given by matrices, it is reducible if it is equivalent to a corepresentation with each matrix in block diagonal form.
If the corepresentation is not reducible, then it is irreducible.
Schur's lemma
Schur's lemma for irreducible representations over the complex numbers states that if a matrix commutes with all matrices of the representation then it is a (complex) multiple of the identity matrix; that is, the set of commuting matrices is isomorphic to the complex numbers. The equivalent of Schur's lemma for irreducible corepresentations is that the set of commuting matrices is isomorphic to the real numbers, the complex numbers, or the quaternions. Using the intertwining number over the real numbers, this may be expressed as an intertwining number of 1, 2 or 4.
Relation to representations of the linear subgroup
Typically, irreducible corepresentations are related to the irreducible representations of the linear subgroup H. Let Δ be an irreducible (ordinary) representation of the linear subgroup H, with character χ. Form the sum S, taken over all the antilinear elements a in the coset G − H, of the character of the square of each of these elements.
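In symbols this is the standard Dimmock character test (the formula is reconstructed here rather than quoted from the article):

```latex
% Sum the character of the squares of the antiunitary elements:
S = \sum_{a \in G \setminus H} \chi(a^{2}) =
\begin{cases}
 +|H| & \text{type (a)}\\
 -|H| & \text{type (b)}\\
 \phantom{+}0 & \text{type (c)}
\end{cases}
```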
There are three cases, distinguished by the character test eq 7.3.51 of Cracknell and Bradley.
Type (a) If S = |H| (the intertwining number is one), then D is an irreducible corepresentation of the same dimension as Δ, obtained by extending Δ to the antilinear elements.
Type (b) If S = −|H| (the intertwining number is four), then D is an irreducible corepresentation formed from two 'copies' of Δ.
Type (c) If S = 0 (the intertwining number is two), then D is an irreducible corepresentation formed from two inequivalent representations of H, Δ and its partner defined by u ↦ Δ(a⁻¹ua)* for a fixed antilinear element a.
Cracknell and Bradley show how to use these to construct corepresentations for the magnetic point groups, while Cracknell and Wong give more explicit tables for the double magnetic groups.
Character theory of corepresentations
Standard representation theory for finite groups has a square character table with row and column orthogonality properties. With a slightly different definition of conjugacy classes and use of the intertwining number, a square character table with similar orthogonality properties also exists for the corepresentations of finite magnetic groups.
Based on this character table, a character theory mirroring that of representation theory has been developed.
See also
References
Representation theory of groups
Quantum mechanics | Corepresentations of unitary and antiunitary groups | Physics | 979 |
3,302,413 | https://en.wikipedia.org/wiki/Microsoft%20CryptoAPI | The Microsoft Windows platform-specific Cryptographic Application Programming Interface (also known variously as CryptoAPI, Microsoft Cryptography API, MS-CAPI or simply CAPI) is an application programming interface included with Microsoft Windows operating systems that provides services to enable developers to secure Windows-based applications using cryptography. It is a set of dynamically linked libraries that provides an abstraction layer which isolates programmers from the code used to encrypt the data. The Crypto API was first introduced in Windows NT 4.0 and enhanced in subsequent versions.
CryptoAPI supports both public-key and symmetric key cryptography, though persistent symmetric keys are not supported. It includes functionality for encrypting and decrypting data and for authentication using digital certificates. It also includes a cryptographically secure pseudorandom number generator function CryptGenRandom.
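For illustration, CryptGenRandom can be reached from Python through ctypes. This is a minimal Windows-only sketch; the 16-byte request and the ephemeral PROV_RSA_FULL context are illustrative choices, not requirements of the API:

```python
# Sketch: filling a buffer from CryptoAPI's CryptGenRandom (Windows only).
import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32")
PROV_RSA_FULL = 1
CRYPT_VERIFYCONTEXT = 0xF0000000  # ephemeral context, no key container on disk

hprov = wintypes.HANDLE()
if not advapi32.CryptAcquireContextW(ctypes.byref(hprov), None, None,
                                     PROV_RSA_FULL, CRYPT_VERIFYCONTEXT):
    raise ctypes.WinError()

buf = (ctypes.c_ubyte * 16)()
if not advapi32.CryptGenRandom(hprov, len(buf), buf):
    raise ctypes.WinError()
advapi32.CryptReleaseContext(hprov, 0)

print(bytes(buf).hex())  # 16 cryptographically random bytes
```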
CryptoAPI works with a number of CSPs (Cryptographic Service Providers) installed on the machine. CSPs are the modules that do the actual work of encoding and decoding data by performing the cryptographic functions. Vendors of HSMs may supply a CSP which works with their hardware.
Cryptography API: Next Generation
Windows Vista features an update to the Crypto API known as Cryptography API: Next Generation (CNG). It has better API factoring to allow the same functions to work using a wide range of cryptographic algorithms, and includes a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It is also flexible, featuring support for plugging custom cryptographic APIs into the CNG runtime. However, CNG Key Storage Providers still do not support symmetric keys. CNG works in both user and kernel mode, and also supports all of the algorithms from the CryptoAPI. The Microsoft provider that implements CNG is housed in Bcrypt.dll.
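The CNG equivalent lives in Bcrypt.dll. A comparable sketch using BCryptGenRandom with the documented system-preferred-RNG flag (the 32-byte request size is an arbitrary example):

```python
# Sketch: random bytes from CNG's BCryptGenRandom (Windows Vista and later).
import ctypes

bcrypt = ctypes.WinDLL("bcrypt")
BCRYPT_USE_SYSTEM_PREFERRED_RNG = 0x00000002  # no algorithm handle required

buf = (ctypes.c_ubyte * 32)()
status = bcrypt.BCryptGenRandom(None, buf, len(buf),
                                BCRYPT_USE_SYSTEM_PREFERRED_RNG)
if status != 0:  # NTSTATUS: zero means STATUS_SUCCESS
    raise OSError(f"BCryptGenRandom failed: 0x{status & 0xFFFFFFFF:08X}")

print(bytes(buf).hex())
```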
CNG also supports elliptic curve cryptography which, because it uses shorter keys for the same expected level of security, is more efficient than RSA. The CNG API integrates with the smart card subsystem by including a Base Smart Card Cryptographic Service Provider (Base CSP) module which encapsulates the smart card API. Smart card manufacturers just have to make their devices compatible with this, rather than provide a from-scratch solution.
CNG also adds support for Dual_EC_DRBG, a pseudorandom number generator defined in NIST SP 800-90A that could expose the user to eavesdropping by the National Security Agency since it contains a kleptographic backdoor, unless the developer remembers to generate new base points with a different cryptographically secure pseudorandom number generator or a true random number generator and then publish the generated seed in order to remove the NSA backdoor. It is also very slow, and is only used when called for explicitly.
CNG also replaces the default PRNG with CTR_DRBG using AES as the block cipher, because the earlier RNG, which is defined in the now-superseded FIPS 186-2, is based on either DES or SHA-1, both of which have been broken. CTR_DRBG is one of the two algorithms in NIST SP 800-90 endorsed by Schneier, the other being Hash_DRBG.
See also
CAPICOM
DPAPI
Encrypting File System
Public-key cryptography
Cryptographic Service Provider
PKCS#11
Crypto API (Linux)
References
External links
Cryptography Reference on MSDN
Microsoft CAPI at CryptoDox
Cryptographic software
Microsoft application programming interfaces
Microsoft Windows security technology | Microsoft CryptoAPI | Mathematics | 737 |
7,371,134 | https://en.wikipedia.org/wiki/Anthroposphere | The anthroposphere refers to that part of the Earth system that is made or modified by humans for use in human activities and human habitats. The term has been suggested for inclusion as one of the Earth's spheres, while others use the related term technosphere. The term "anthroposphere" was first coined by Austrian geologist Eduard Suess in 1862.
The anthroposphere can be viewed as a human-generated equivalent to the biosphere. While the biosphere is the total biomass of the Earth and its interaction with its systems, the anthroposphere is the total mass of human-generated systems and materials, including the human population, and its interaction with the Earth's systems. A recent study estimated the mass of anthropogenic creations as 1.1 trillion tons in 2020, equivalent to the mass of all living organisms that comprise the biosphere. However, while the biosphere is able to efficiently produce and recycle materials through processes like photosynthesis and decomposition, the anthroposphere is highly inefficient at sustaining itself. As human technology becomes more advanced, from launching objects into orbit to large-scale deforestation, the potential impact of human activities on the environment increases. The anthroposphere is the youngest of all the Earth's spheres, yet has made an enormous impact on the Earth and its systems in a very short time.
Some consider the term anthroposphere to be synonymous with the noosphere, though the noosphere is often used to refer specifically to the sphere of rational human thought, or ‘the terrestrial sphere of thinking substance’. The anthroposphere is also closely related to the concept of the "technosphere" developed by geologist Peter Haff, historian of science Jürgen Renn, and others. The technosphere refers to all of the technological objects and systems manufactured and created by humans, as contrasted for instance to the biosphere. The technosphere is also distinct from the anthroposphere in the sense that the anthroposphere encompasses not only technologies but cultural, social, economic, and political systems, as well as human behaviors and practices.
Aspects of the anthroposphere include: mines from which minerals are obtained; mechanized agriculture and transportation which support the global food system; oil and gas fields; computer-based systems including the Internet; educational systems; landfills; factories; atmospheric pollution; artificial satellites in space, both active satellites and space junk; forestry and deforestation; urban development; transportation systems including roads, highways, and subways; nuclear installations; warfare.
Technofossils are another interesting aspect of the anthroposphere. These can include objects like mobile phones that contain a diverse range of metals and man-made materials, raw materials like aluminum that do not exist in nature, and agglomerations of plastics created in areas like the Pacific Garbage Patch and on the beaches of the Pacific Islands.
See also
Anthropocene
Anthropogenic metabolism
Biomass
Space junk
References
External links
The Anthroposphere
Wikiversity:Technology as a threat or promise for life and its forms
Earth sciences
Artificial objects | Anthroposphere | Physics | 637 |
75,285,773 | https://en.wikipedia.org/wiki/DGSAT%20I | DGSAT I is a quenched, ultra diffuse galaxy (UDG) located on the outskirts of the Pisces-Perseus Supercluster, identified in 2016 during a visual inspection of a full color image of the Andromeda II dwarf galaxy. DGSAT I resides in a low-density environment compared to the densities where UDGs are typically found. Its chemical makeup has led astronomers to propose that it was formed during the dawn of the universe, when galaxies emerged in a different environment than today.
Discovery and identification
DGSAT I was first identified by the DGSAT project in 2016. The DGSAT (Dwarf Galaxy Survey with Amateur Telescopes) uses the potential of privately owned small-sized telescopes to probe low surface brightness (LSB) features around large galaxies and aims to increase the sample size of the dwarf satellite galaxies in the Local Volume.
At first astronomers thought DGSAT I to be an isolated dwarf galaxy beyond the Local Group due to its structural properties and absence of emission lines. A spectroscopic observation later revealed DGSAT I to be a background system and likely associated with an outer filament of the Pisces-Perseus super-cluster.
Chemical composition
The chemical composition of a galaxy provides a record of the ambient conditions during its formation. The mass ratios of alpha-elements such as magnesium to iron ([Mg/Fe]) trace time-scales of star formation, as these elements are produced by stars according to different lifetimes. Younger galaxies tend to have more heavy elements in their chemical makeup compared to ancient galaxies formed during an early age of the universe.
DGSAT I's integral field spectroscopy data shows a remarkably low iron content, suggesting an early galaxy formed from a nearly pristine gas cloud, unpolluted by the supernova deaths of previous stars; however, its magnesium levels are consistent with what astronomers expect to find in younger galaxies. This apparent discrepancy in its chemical makeup, together with DGSAT I's isolation from galaxy clusters, is helping astronomers to develop new theories concerning the birth and formation of UDGs.
UDGs are hard to observe due to their extremely low luminosity but there are studies being conducted using the Keck Cosmic Web Imager in an attempt to shed light on the precise relation between DGSAT I's metallicity and its possibly exotic origin. A paper published in 2022 proposed DGSAT I to be a “failed galaxy” that formed relatively few stars in proportion to its halo mass, and could be related to cluster UDGs whose size and quiescence pre-date their infall (i.e. when molecular transition shows evidence of gas flowing into the core of the star).
See also
Dragonfly 44
NGC 1052-DF2
Low surface brightness galaxy
Galaxy morphological classification
References
External links
Dwarf Galaxy Survey with Amateur Telescopes - DGSAT project's website
Astronomy
Galaxies | DGSAT I | Astronomy | 582 |
42,257,371 | https://en.wikipedia.org/wiki/Wahlquist%20fluid | In general relativity, the Wahlquist fluid is an exact rotating perfect fluid solution to Einstein's equation with equation of state corresponding to constant gravitational mass density.
Introduction
The Wahlquist fluid was first discovered by Hugo D. Wahlquist in 1968. It is one of the few known exact rotating perfect fluid solutions in general relativity. The solution reduces to the static Whittaker metric in the limit of zero rotation.
Metric
The metric of a Wahlquist fluid is given by
where
It is a solution with equation of state μ + 3p = μ0, where μ0 is a constant; this corresponds to constant gravitational mass density.
Properties
The pressure and density of the Wahlquist fluid are given by
The vanishing pressure surface of the fluid is prolate, in contrast to physical rotating stars, which are oblate. It has been shown that the Wahlquist fluid can not be matched to an asymptotically flat region of spacetime.
References
General relativity | Wahlquist fluid | Physics | 184 |
2,477,946 | https://en.wikipedia.org/wiki/Seccomp | seccomp (short for secure computing) is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a "secure" state where it cannot make any system calls except exit(), sigreturn(), read() and write() to already-open file descriptors. Should it attempt any other system calls, the kernel will either just log the event or terminate the process with SIGKILL or SIGSYS. In this sense, it does not virtualize the system's resources but isolates the process from them entirely.
seccomp mode is enabled via the prctl() system call using the PR_SET_SECCOMP argument, or (since Linux kernel 3.17) via the seccomp() system call. seccomp mode used to be enabled by writing to a file, /proc/self/seccomp, but this method was removed in favor of prctl(). In some kernel versions, seccomp disables the RDTSC x86 instruction, which returns the number of elapsed processor cycles since power-on, used for high-precision timing.
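A minimal sketch of that one-way transition from Python via ctypes (Linux only; the constant values below are from the kernel headers, and the raw exit syscall number 60 assumes x86-64):

```python
# Sketch: entering seccomp strict mode, after which only read(), write(),
# exit() and sigreturn() are permitted; anything else draws SIGKILL.
import ctypes, os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PR_SET_SECCOMP = 22       # <linux/prctl.h>
SECCOMP_MODE_STRICT = 1   # <linux/seccomp.h>

if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
    raise OSError(ctypes.get_errno(), "prctl(PR_SET_SECCOMP) failed")

os.write(1, b"write() is still permitted\n")
# Even exit_group(), which Python's normal exit paths use, is now fatal,
# so leave through the plain exit syscall (number 60 on x86-64).
libc.syscall(60, 0)
```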
seccomp-bpf is an extension to seccomp that allows filtering of system calls using a configurable policy implemented using Berkeley Packet Filter rules. It is used by OpenSSH and vsftpd as well as the Google Chrome/Chromium web browsers on ChromeOS and Linux. (In this regard seccomp-bpf achieves similar functionality, but with more flexibility and higher performance, to the older systrace—which seems to be no longer supported for Linux.)
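In practice, seccomp-bpf filters are usually built with libseccomp rather than hand-written BPF. A sketch assuming libseccomp's Python bindings (the `seccomp` module) are installed; the allow-list below is illustrative:

```python
# Sketch: a seccomp-bpf allow-list via libseccomp's Python bindings.
import sys
import seccomp  # bindings shipped with libseccomp (assumed installed)

f = seccomp.SyscallFilter(defaction=seccomp.KILL)  # default action: kill
for name in ("read", "write", "exit", "exit_group", "rt_sigreturn"):
    f.add_rule(seccomp.ALLOW, name)                # allow by syscall name
f.load()                                           # compile and install as BPF

sys.stdout.write("write() still allowed\n")
sys.stdout.flush()
# A disallowed call such as open() would now terminate the process.
```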
Some consider seccomp comparable to OpenBSD pledge(2) and FreeBSD capsicum(4).
History
seccomp was first devised by Andrea Arcangeli in January 2005 for use in public grid computing and was originally intended as a means of safely running untrusted compute-bound programs. It was merged into the Linux kernel mainline in kernel version 2.6.12, which was released on March 8, 2005.
Software using seccomp or seccomp-bpf
Android uses a seccomp-bpf filter in the zygote since Android 8.0 Oreo.
systemd's sandboxing options are based on seccomp.
QEMU, the Quick Emulator, a core component of modern virtualization together with KVM, uses seccomp via the --sandbox parameter
Docker – software that allows applications to run inside of isolated containers. Docker can associate a seccomp profile with the container using the --security-opt parameter.
Arcangeli's CPUShare was the only known user of seccomp for a while. Writing in February 2009, Linus Torvalds expresses doubt whether seccomp is actually used by anyone. However, a Google engineer replied that Google is exploring using seccomp for sandboxing its Chrome web browser.
Firejail is an open source Linux sandbox program that utilizes Linux namespaces, Seccomp, and other kernel-level security features to sandbox Linux and Wine applications.
As of Chrome version 20, seccomp-bpf is used to sandbox Adobe Flash Player.
As of Chrome version 23, seccomp-bpf is used to sandbox the renderers.
Snap packages specify the shape of their application sandbox using "interfaces" which snapd translates to seccomp, AppArmor and other security constructs
vsftpd uses seccomp-bpf sandboxing as of version 3.0.0.
OpenSSH has supported seccomp-bpf since version 6.0.
Mbox uses ptrace along with seccomp-bpf to create a secure sandbox with less overhead than ptrace alone.
LXD, a Ubuntu "hypervisor" for containers
Firefox and Firefox OS, which use seccomp-bpf
Tor supports seccomp since 0.2.5.1-alpha
Lepton, a JPEG compression tool developed by Dropbox uses seccomp
Kafel is a configuration language, which converts readable policies into seccomp-bpf bytecode
Subgraph OS uses seccomp-bpf
Flatpak uses seccomp for process isolation
Bubblewrap is a lightweight sandbox application developed from Flatpak
minijail uses seccomp for process isolation
SydBox uses seccomp-bpf to improve the runtime and security of the ptrace sandboxing used to sandbox package builds on the Exherbo Linux distribution.
File, a Unix program to determine filetypes, uses seccomp to restrict its runtime environment
Zathura, a minimalistic document viewer, uses seccomp filter to implement different sandbox modes
Tracker, an indexing and preview application for the GNOME desktop environment, uses seccomp to prevent automatic exploitation of parsing vulnerabilities in media files
References
External links
Official website (Archived)
Google's Chromium sandbox, LWN.net, August 2009, by Jake Edge
seccomp-nurse, a sandboxing framework based on seccomp
Documentation/prctl/seccomp_filter.txt, part of the Linux kernel documentation
Security In-Depth for Linux Software: Preventing and Mitigating Security Bugs
Linux kernel features
Computer security
Cybersecurity engineering | Seccomp | Technology,Engineering | 1,120 |
56,315,066 | https://en.wikipedia.org/wiki/Rectangular%20polyconic%20projection | The rectangular polyconic projection is a map projection that was first mentioned in 1853 by the United States Coast Survey, where it was developed and used for portions of the U.S. exceeding about one square degree. It belongs to the polyconic projection class, which consists of map projections whose parallels are non-concentric circular arcs except for the equator, which is straight. Sometimes the rectangular polyconic is called the War Office projection due to its use by the British War Office for topographic maps. It is rarely used today, with practically all military grid systems having moved to conformal projection systems, typically modeled on the transverse Mercator projection.
Description
The rectangular polyconic has one specifiable latitude (along with the latitude of opposite sign) along which scale is correct. The scale is also true on the central meridian of the projection. Meridians are spaced such that they meet the parallels at right angles in equatorial aspect; this trait accounts for the name rectangular.
The projection is defined by:

x = cot φ · sin E
y = (φ − φ₀) + cot φ · (1 − cos E)

with

E = 2 arctan(A sin φ)
A = tan(½(λ − λ₀) sin φ₁) / sin φ₁

where:
λ is the longitude of the point to be projected;
φ is the latitude of the point to be projected;
λ₀ is the longitude of the central meridian;
φ₀ is the latitude chosen to be the origin along λ₀;
φ₁ is the latitude whose parallel is chosen to have correct scale.

To avoid division by zero, the formulas above are extended so that if φ = 0 then x = 2A and y = −φ₀. If φ₁ = 0 then A = ½(λ − λ₀).
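For illustration, a direct C transcription of these formulas might look as follows; the function name, argument order, and radian convention are choices made for this sketch, not part of any standard interface:

#include <math.h>

/* Forward rectangular polyconic projection. All angles in radians.
 * lam0 = central meridian, phi0 = origin latitude, phi1 = latitude
 * of correct scale; results are written to *x and *y. */
void rect_polyconic_forward(double lam, double phi,
                            double lam0, double phi0, double phi1,
                            double *x, double *y)
{
    /* A = tan(((lam - lam0)/2) * sin(phi1)) / sin(phi1),
     * taking the limit A = (lam - lam0)/2 when phi1 = 0. */
    double A = (phi1 == 0.0)
             ? 0.5 * (lam - lam0)
             : tan(0.5 * (lam - lam0) * sin(phi1)) / sin(phi1);

    if (phi == 0.0) {
        /* Equator: use the limiting values to avoid cot(0). */
        *x = 2.0 * A;
        *y = -phi0;
    } else {
        double E      = 2.0 * atan(A * sin(phi));
        double cotphi = cos(phi) / sin(phi);
        *x = cotphi * sin(E);
        *y = (phi - phi0) + cotphi * (1.0 - cos(E));
    }
}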
See also
List of map projections
American polyconic projection
References
External links
Mapthematics page describing the rectangular polyconic projection.
Map projections | Rectangular polyconic projection | Mathematics | 332 |
4,403,720 | https://en.wikipedia.org/wiki/Chemicals%20Convention | Chemicals Convention, 1990 is an International Labour Organization Convention.
Content
The convention was adopted at the 77th session of the International Labour Conference in Geneva on 6 June 1990. The convention states the importance of protecting the environment, the general public and all workers from chemicals. It notes the relevance of the Employment Injury Benefits Convention, 1964, the Benzene Convention and Recommendation, 1971, the Occupational Cancer Convention and Recommendation, 1974, the Working Environment (Air Pollution, Noise and Vibration) Convention, 1977, the list of occupational diseases as amended in 1980, the Occupational Safety and Health Convention and Recommendation, 1981, the Occupational Health Services Convention and Recommendation, 1985 and the Asbestos Convention and Recommendation, 1986. Workers must be informed about the chemicals they use, and the possibility of illness and injury at work must be reduced.
Scope and definitions
The first two articles define the terms used in the convention and set out its scope of application.
Article 1
The convention applies to all branches of economic activity in which chemicals are used. After an assessment of the hazards involved and the protective measures to be applied, the competent authority of a member may exempt an organization if special problems are encountered, sufficient protection is provided, and precautions taken to protect confidential information do not compromise the safety of workers. The convention does not apply to articles which do not expose workers to hazardous chemicals. It does not apply to organisms, but does apply to chemicals derived from organisms.
Article 2
For the purposes of the convention, the term chemicals means natural or synthetic chemical elements and compounds.
The term hazardous chemical means any chemical classified as hazardous under Article 6, or for which information exists indicating that it is hazardous.
The term use of chemicals at work means any activity that may expose workers to a chemical, including the production, handling, storage and transport of chemicals. The term also covers the disposal and treatment of waste chemicals, the release of chemicals resulting from work activities, and the maintenance, repair and cleaning of equipment and containers for chemicals.
Branches of economic activity means all branches including public services.
The term article means an object that has a specific shape or pattern when manufactured, or that is in its natural form, and whose use depends in whole or in part on its shape or pattern.
Workers’ representatives are persons who are recognized by national law or practice in accordance with the Workers' Representatives Convention, 1971.
General provisions
Article 3
The most representative organizations of the employers and employees concerned must be consulted on the measures for implementation.
Article 4
Each member shall formulate, implement and periodically review a coherent policy for safety in the use of chemicals in the workplace.
Article 5
The competent authority is allowed to prohibit the use of certain hazardous chemicals on the grounds of safety or to require prior approval for the use.
Classification and related measures
Articles 6 to 9 deal with the classification of all chemicals, their supply, safety precautions and the recommendations of the United Nations. The measures are recorded on appropriate safety data sheets.
Article 6
The competent authority or a body approved or recognized by the competent authority shall establish systems for the classification of all chemicals.
The hazardous properties of mixtures may be determined on the basis of the hazardousness of the individual components.
The United Nations Recommendation may be taken into account in the transport of dangerous goods.
The classification systems and their application are gradually being expanded.
Article 7
All chemicals shall be labeled, and hazardous chemicals shall be specially marked. The requirements for labeling and marking shall be established by the competent authority or by a body approved or recognized by it. When transporting dangerous goods, the recommendations of the United Nations must be taken into account.
Article 8
Employers must be provided with data sheets containing information on hazards, suppliers, safety precautions and emergency procedures for hazardous chemicals.
The data sheets are subject to criteria established by the competent authority or by a body approved or recognized by it.
The name used on the data sheets must match the name on the label.
Article 9
All suppliers of chemicals shall ensure that the chemicals are classified in accordance with Article 6, labeled in accordance with Article 7, and safety data sheets are provided in accordance with Article 8.
As new health and safety information on chemicals becomes available, the supplier of hazardous chemicals shall ensure that revised labels and safety data sheets are provided, in accordance with national legislation.
Suppliers of chemicals not yet classified under Article 6 shall seek available information on the chemical to evaluate whether it is hazardous.
Responsibilities of employers
Articles 10 to 16 deal with the duty of employers to inform workers about possible risks associated with the use of chemicals in the workplace. Employers and employees must work together to ensure safety.
Article 10
Employers must ensure that all chemicals are labeled in accordance with Article 7 and that chemical data sheets are made available to workers and their representatives in accordance with Article 8.
If employers receive chemicals that are not labeled in accordance with Article 7 or for which safety data sheets are not provided in accordance with Article 8, they shall obtain information from the supplier or other reasonably available sources. Until then, the chemicals should not be used.
The employer shall ensure that chemicals used have been classified in accordance with Article 6, identified in accordance with Article 9, labeled in accordance with Article 7 and that all necessary precautions have been taken.
Employers must maintain a register of all hazardous chemicals used in the workplace, with cross-references to their chemical safety data sheets, and make it available to all workers.
Article 11
When chemicals are transferred, the employer must provide workers with sufficient information about their identity and the relevant safety precautions.
Article 12
Employers must ensure that workers are not exposed to hazardous chemicals for longer than permitted, must assess worker exposure to hazardous chemicals, must monitor and record work with hazardous chemicals to protect safety and health, and must ensure that records are properly maintained.
Article 13
The employer shall make an assessment of the risks resulting from the use of chemicals at work and shall protect workers by taking appropriate measures.
Employers must limit employee exposure to chemicals to protect health and safety, provide first aid and make provisions for emergencies.
Article 14
Hazardous chemicals and emptied containers containing residues of hazardous chemicals shall be disposed of in a manner that reduces the risk to safety, health and the environment.
Article 15
Employers must inform workers of the hazards they face in their workplace and of chemical labels and safety data sheets.
They must use the safety data sheets as the basis for work instructions and provide ongoing training to workers on chemical use.
Article 16
Employers and employees shall work together in relation to safety in the use of chemicals in the workplace.
Duties of workers
Article 17 is about the cooperation between employers and employees to reduce risks at work.
Article 17
Workers shall work closely with employers and follow all procedures in the use of chemicals at work to ensure safety.
Workers shall take all reasonable steps to minimize the risk associated with the handling of chemicals.
Rights of workers and their representatives
Article 18 gives employees the right to avoid imminent risk for health reasons without unreasonable consequences.
Article 18
Employees have the right to remove themselves from the hazards of working with chemicals if there is an immediate risk to their health or safety.
Employees who remove themselves from danger in accordance with the provisions of this Article shall be protected from unreasonable consequences.
Affected workers have the right to information about the properties and identity, labels, and safety data sheets of the chemicals used.
If disclosure of the identity of a chemical to a competitor could harm the employer's business, the employer may, when providing the information required under this Article, protect that identity in accordance with Article 1.
Responsibility of exporting States
Articles 19 to 27 deal with the responsibilities for States exporting hazardous chemicals and the responsibilities for control. They also address the validity of this Convention and the scope of application.
Article 19
When an exporting member State prohibits the use of certain or all hazardous chemicals for reasons of safety and health at work, the fact and the reasons for it shall be communicated to all importing countries.
Article 20
The ratifications of this convention must be communicated to the Director-General of the International Labour Office for registration.
Article 21
This convention shall be binding only upon members whose ratifications have been registered with the Director-General.
It enters into force twelve months after the date on which the ratifications of the first two members have been registered with the Director-General.
Thereafter, the convention shall enter into force for each additional member twelve months after the date of its ratification.
Article 22
A member which has ratified the convention may denounce it ten years after its entry into force by an act communicated to the Director-General for registration. The denunciation takes effect one year after the date of registration.
A member which does not exercise the right of denunciation within the said ten-year period may denounce the convention only after the expiration of a further period of ten years, under the same conditions.
Article 23
The Director-General shall notify all members of the registration of all ratifications and denunciations.
When notifying members of the registration of the second ratification, the Director-General shall draw their attention to the date on which the convention will enter into force.
Article 24
The Director-General of the ILO shall transmit to the Secretary-General of the UN for registration under Article 102 of the Charter of the United Nations the particulars of all ratifications and denunciations.
Article 25
The Governing Body of the ILO shall, at such times as it may determine, submit a report on the implementation of this convention and shall consider the need for its revision.
Article 26
If the conference adopts a new convention which revises this convention, ratification of the new convention shall, without prejudice to Article 22, result in the immediate denunciation of this convention.
On the date on which the new convention enters into force, this convention shall cease to be in force.
This convention shall in any case remain in force in its present form for those members which have ratified it but have not ratified the revision convention.
Article 27
The English and French versions of the text of this Convention are equally authoritative.
Ratifications
As of April 2024, the convention has been ratified by 24 states.
References
External links
Text.
Ratifications.
International Labour Organization conventions
Health treaties
Treaties entered into force in 1993
Treaties concluded in 1990
Chemical safety
Treaties of Brazil
Treaties of Burkina Faso
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of Cyprus
Treaties of Finland
Treaties of Germany
Treaties of Italy
Treaties of South Korea
Treaties of Lebanon
Treaties of Luxembourg
Treaties of Mexico
Treaties of Norway
Treaties of Poland
Treaties of Syria
Treaties of Sweden
Treaties of Tanzania
Treaties of Zimbabwe
Treaties of the Dominican Republic
1990 in labor relations | Chemicals Convention | Chemistry | 2,069 |