id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
31,645,842 | https://en.wikipedia.org/wiki/Actinium%28III%29%20oxide | Actinium(III) oxide is a chemical compound containing the rare radioactive element actinium. It has the formula Ac2O3. It is similar to its corresponding lanthanum compound, lanthanum(III) oxide, and contains actinium in the oxidation state +3. Actinium oxide is not to be confused with Ac2O (acetic anhydride), where Ac is an abbreviation for acetyl instead of the symbol of the element actinium.
Reactions
Ac2O3 + 6HF → 2AcF3 + 3H2O
Ac2O3 + 6HCl → 2AcCl3 + 3H2O
4Ac(NO3)3 → 2Ac2O3 + 12NO2 + 3O2
4Ac + 3O2 → 2Ac2O3
Ac2O3 + 2AlBr3 → 2AcBr3 + Al2O3
2Ac(OH)3 → Ac2O3 + 3H2O
Ac2(C2O4)3 → Ac2O3 + 3CO2 + 3CO
Ac2O3 + 3H2S → Ac2S3 + 3H2O
References
Actinium compounds
Sesquioxides | Actinium(III) oxide | Chemistry | 252 |
66,441,421 | https://en.wikipedia.org/wiki/Actinoptychus | Actinoptychus is a genus of diatoms belonging to the family Heliopeltaceae.
The genus was described in 1843 by Christian Gottfried Ehrenberg.
Species:
Actinoptychus octodenarius Ehrenberg
Actinoptychus senarius (Ehrenberg) Ehrenberg, 1843
References
Diatoms
Diatom genera | Actinoptychus | Biology | 76 |
3,376,783 | https://en.wikipedia.org/wiki/Residual%20oil | Residual oil is oil found in low concentrations naturally or in exhausted oil fields. Often mixed with water, it cannot be recovered by conventional techniques. However, part of it can be recovered using carbon dioxide-enhanced oil recovery (CO2-EOR), which involves injecting carbon dioxide into the well to reduce viscosity and enhance the flow of the oil. The technique is not new but has not been used extensively on residual oil zones, low-grade deposits of petroleum such as the 40 square miles in the Permian Basin of Texas leased by Tiny Kamalabo. The technique is limited by the availability of carbon dioxide, which is injected and recycled as many times as possible, then stored in the depleted reservoir at the end of the field's life. United States reserves of residual oil are estimated to be 100 billion barrels.
With this much residual oil, reclaiming and upgrading it not only helps meet demand but also improves profitability for refineries. Finding effective options for extracting valuable fuels from the unwanted material is economically attractive. Other methods used to upgrade residual oil include deasphalting, coking, hydrocracking, residue hydrotreating, resid fluid catalytic cracking (FCC), and visbreaking. Another method for upgrading and handling uses a devolatilization process to separate the quality oil from the asphaltene material.
See also
Carbon Dioxide Flooding
Notes
Unconventional oil | Residual oil | Chemistry | 274 |
4,845,690 | https://en.wikipedia.org/wiki/Projective%20orthogonal%20group | In projective geometry and linear algebra, the projective orthogonal group PO is the induced action of the orthogonal group of a quadratic space V = (V,Q) on the associated projective space P(V). Explicitly, the projective orthogonal group is the quotient group
PO(V) = O(V)/ZO(V) = O(V)/{±I}
where O(V) is the orthogonal group of (V, Q) and ZO(V) = {±I} is the subgroup of all orthogonal scalar transformations of V – these consist of the identity and reflection through the origin. These scalars are quotiented out because they act trivially on the projective space and they form the kernel of the action, and the notation "Z" is because the scalar transformations are the center of the orthogonal group.
The projective special orthogonal group, PSO, is defined analogously, as the induced action of the special orthogonal group on the associated projective space. Explicitly:
PSO(V) = SO(V)/ZSO(V)
where SO(V) is the special orthogonal group over V and ZSO(V) is the subgroup of orthogonal scalar transformations with unit determinant. Here ZSO is the center of SO, and is trivial in odd dimension, while it equals {±I} in even dimension – this odd/even distinction occurs throughout the structure of the orthogonal groups. By analogy with GL/SL and GO/SO, the projective orthogonal group is also sometimes called the projective general orthogonal group and denoted PGO.
Like the orthogonal group, the projective orthogonal group can be defined over any field and with varied quadratic forms, though, as with the ordinary orthogonal group, the main emphasis is on the real positive definite projective orthogonal group; other fields are elaborated in generalizations, below. Except when mentioned otherwise, in the sequel PO and PSO will refer to the real positive definite groups.
Like the spin groups and pin groups, which are covers rather than quotients of the (special) orthogonal groups, the projective (special) orthogonal groups are of interest for (projective) geometric analogs of Euclidean geometry, as related Lie groups, and in representation theory.
More intrinsically, the (real positive definite) projective orthogonal group PO can be defined as the isometries of elliptic space (in the sense of elliptic geometry), while PSO can be defined as the orientation-preserving isometries of elliptic space (when the space is orientable; otherwise PSO = PO).
Structure
Odd and even dimensions
The structure of PO differs significantly between odd and even dimension, fundamentally because in even dimension, reflection through the origin is orientation-preserving, while in odd dimension it is orientation-reversing (−I ∈ SO(2k) but −I ∉ SO(2k+1)). This is seen in the fact that each odd-dimensional real projective space is orientable, while each even-dimensional real projective space of positive dimension is non-orientable. At a more abstract level, the Lie algebras of odd- and even-dimensional projective orthogonal groups form two different families: Bk = so(2k+1) in odd dimension and Dk = so(2k) in even dimension.
Thus, O(2k+1) = SO(2k+1) × {±I}, while O(2k) ≠ SO(2k) × {±I} and is instead a non-trivial central extension of PO(2k).
Beware that PO(2k+1) is the isometry group of RP2k = P(R2k+1), while PO(2k) is the isometry group of RP2k−1 = P(R2k) – the odd-dimensional (vector) group gives isometries of even-dimensional projective space, while the even-dimensional (vector) group gives isometries of odd-dimensional projective space.
In odd dimension, SO(2k+1) ≅ PSO(2k+1) = PO(2k+1), so the group of projective isometries can be identified with the group of rotational isometries.
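For instance, in dimension 3 the odd-dimensional identification can be checked directly (a worked example):

```latex
% det(-I) = (-1)^3 = -1, so -I is not in SO(3), and
\[ O(3) = SO(3) \times \{\pm I\}. \]
% Quotienting by the scalars \{\pm I\} collapses the second factor:
\[ PO(3) = O(3)/\{\pm I\} \cong SO(3) = PSO(3), \]
% so every isometry of the elliptic plane RP^2 = P(R^3) is induced by a rotation.
```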
In even dimension, SO(2k) → PSO(2k) and O(2k) → PO(2k) are both 2-to-1 covers, and PSO(2k) < PO(2k) is an index 2 subgroup.
General properties
PSO and PO are centerless, as with PSL and PGL; this is because scalar matrices are not only the center of SO and O, but also the hypercenter (quotient by the center does not always yield a centerless group).
PSO is the maximal compact subgroup in the projective special linear group PSL, while PO is maximal compact in the projective general linear group PGL. This is analogous to SO being maximal compact in SL and O being maximal compact in GL.
Representation theory
PO is of basic interest in representation theory: a group homomorphism G → PGL is called a projective representation of G, just as a map G → GL is called a linear representation of G, and just as any linear representation can be reduced to a map G → O (by taking an invariant inner product), any projective representation can be reduced to a map G → PO.
See projective linear group: representation theory for further discussion.
Subgroups
Subgroups of the projective orthogonal group correspond to subgroups of the orthogonal group that contain −I (that have central symmetry). As always with a quotient map (by the lattice theorem), there is a Galois connection between subgroups of O and PO, where the adjunction on O (given by taking the image in PO and then the preimage in O) simply adds −I if absent.
Of particular interest are discrete subgroups, which can be realized as symmetries of projective polytopes – these correspond to the (discrete) point groups that include central symmetry. Compare with discrete subgroups of the Spin group, particularly the 3-dimensional case of binary polyhedral groups.
For example, in 3 dimensions, 4 of the 5 Platonic solids have central symmetry (cube/octahedron, dodecahedron/icosahedron), while the tetrahedron does not – however, the stellated octahedron has central symmetry, though the resulting symmetry group is the same as that of the cube/octahedron.
Topology
PO and PSO, as centerless topological groups, are at the bottom of a sequence of covering groups, whose top are the (simply connected) Pin groups or Spin group, respectively:
Pin±(n) → O(n) → PO(n).
Spin(n) → SO(n) → PSO(n).
These groups are all compact real forms of the same Lie algebra.
These are all 2-to-1 covers, except for SO(2k+1) → PSO(2k+1) which is 1-to-1 (an isomorphism).
Homotopy groups
Homotopy groups above π1 do not change under covers, so they agree with those of the orthogonal group. The lower homotopy groups are given as follows.
The fundamental group of (centerless) PSO(n) equals the center of (simply connected) Spin(n), as is always true for covering groups: π1(PSO(n)) ≅ Z(Spin(n)).
Using the table of centers of Spin groups yields (for n ≥ 3): π1(PSO(2k+1)) = Z/2, π1(PSO(4k+2)) = Z/4, and π1(PSO(4k)) = Z/2 ⊕ Z/2.
In low dimensions:
π1(PSO(1)) is trivial, as the group itself is trivial.
π1(PSO(2)) = Z, as the group is topologically a circle, though note that the preimage of the identity in Spin(2) is cyclic of order 4, as for other PSO(4k+2).
Homology groups
Bundles
Just as the orthogonal group is the structure group of vector bundles, the projective orthogonal group is the structure group of projective bundles, and the corresponding classifying space is denoted BPO.
Generalizations
As with the orthogonal group, the projective orthogonal group can be generalized in two main ways: changing the field or changing the quadratic form. Other than the real numbers, primary interest is in complex numbers or finite fields, while (over the reals) quadratic forms can also be indefinite forms, and are denoted PO(p,q) by their signature.
The complex projective orthogonal group, PO(n,C), should not be confused with the projective unitary group, PU(n): PO preserves a symmetric form, while PU preserves a hermitian form – PU is the symmetry group of complex projective space (preserving the Fubini–Study metric).
In fields of characteristic 2 there are added complications: quadratic forms and symmetric bilinear forms are no longer equivalent, −I = I (so the quotient maps O → PO and SO → PSO are isomorphisms), and the determinant needs to be replaced by the Dickson invariant.
Finite fields
The projective orthogonal group over a finite field is used in the construction of a family of finite simple groups of Lie type, namely the Chevalley groups of type Dn. The orthogonal group over a finite field, O(n,q) is not simple, since it has SO as a subgroup and a non-trivial center ({±I}) (hence PO as quotient). These are both fixed by passing to PSO, but PSO itself is not in general simple, and instead one must use a subgroup (which may be of index 1 or 2), defined by the spinor norm (in odd characteristic) or the quasideterminant (in even characteristic). The quasideterminant can be defined as (−1)D, where D is the Dickson invariant (it is the determinant defined by the Dickson invariant), or in terms of the dimension of the fixed space.
Notes
See also
Projective linear group
Projective unitary group
Orthogonal group
Spin group
References
Conway, J. H.; Curtis, R. T.; Norton, S. P.; Parker, R. A.; and Wilson, R. A. "The Groups GOn(q), SOn(q), PGOn(q), and PSOn(q), and On(q)." §2.4 in Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups. Oxford, England: Clarendon Press, pp. xi–xii, 1985.
External links
Lie groups
Projective geometry
Quadratic forms | Projective orthogonal group | Mathematics | 2,036 |
1,589,701 | https://en.wikipedia.org/wiki/Nemawashi | Nemawashi (根回し) is an informal Japanese business process of laying the foundation for a proposed change or project by talking to the people concerned and gathering support and feedback before a formal announcement. It is considered an important element in any major change in the Japanese business environment before any formal steps are taken. Successful nemawashi enables changes to be carried out with the consent of all sides, avoiding embarrassment.
Nemawashi literally translates as "turning the roots", from ne (根, "root") and mawasu (回す, "to turn something, to put something around something else"). Its original meaning was literal: in preparation for transplanting a tree, one would carefully dig around it some time before transplanting and trim the roots, encouraging the growth of smaller roots that would help the tree become established in its new location.
Nemawashi is often cited as an example of a Japanese word which is difficult to translate effectively, because it is tied so closely to Japanese culture itself, although it is often translated as "laying the groundwork."
In Japan, high-ranking people expect to be let in on new proposals prior to an official meeting. If they find out about something for the first time during the meeting, they will feel that they have been ignored, and they may reject it for that reason alone. It is therefore important to approach these people individually before the meeting. This provides an opportunity to introduce the proposal to them, gauge their reaction, and hear their input.
The term is associated with consensus-building, along with ringiseido (a more formal process). There is debate over whether nemawashi is truly co-operative, or whether those consulted sometimes have little choice but to agree. The process can be time-consuming.
See also
Japanese management culture
Lobbying
Polder model - Dutch form of consensus building
Toyota Production System
References
External links
Kirai, a geek in Japan: Nemawashi
Japanese words and phrases
Japanese business terms
Economy of Japan
Lean manufacturing | Nemawashi | Engineering | 415 |
1,159,051 | https://en.wikipedia.org/wiki/Anglo-American%20Cataloguing%20Rules | Anglo-American Cataloguing Rules (AACR) were an international library cataloging standard. First published in 1967 and edited by C. Sumner Spalding, a second edition (AACR2) edited by Michael Gorman and Paul W. Winkler was issued in 1978, with subsequent revisions (AACR2R) appearing in 1988 and 1998; all updates ceased in 2005.
Published jointly by the American Library Association, the Canadian Library Association, and the UK Chartered Institute of Library and Information Professionals, the rules were designed for the construction of library catalogs and similar bibliographic tools. The rules cover the physical description of library resources, as well as the provision of name and title access points.
AACR2 was issued in several print versions, including a concise edition and an online version. Various translations were also available. Principles of AACR included cataloguing based on the item 'in hand' rather than inferring information from external sources, and the concept of the 'chief source of information', which is preferred where conflicts exist.
Initial adoption
Despite the claim to be 'Anglo-American', the first edition of AACR was published in 1967 in somewhat distinct North American and British texts. The second edition of 1978 unified the two sets of rules (adopting the British spelling 'cataloguing') and brought them in line with the International Standard Bibliographic Description (ISBD). Libraries wishing to migrate from the previous North American text were obliged to implement 'desuperimposition', a substantial change in the form of headings for corporate bodies.
Successor
While the 2002 updates included substantial improvements to AACR's treatment of non-book materials, the proliferation of 21st century formats in a networked environment and the rise of electronic publishing signaled the necessity for significant change in the cataloging code. Plans for a third edition (AACR3) were abandoned in 2005.
The international cataloging community turned its attention to drafting a completely new standard to succeed AACR. Informed by the work of the International Federation of Library Associations and Institutions (IFLA) Functional Requirements for Bibliographic Records (FRBR), the new framework was crafted to be more flexible and suitable for use in a digital environment: Resource Description and Access (RDA) was released in June 2010. The Library of Congress, National Library of Medicine, National Agricultural Library, and several national libraries of other English-speaking countries performed a formal test of RDA, resulting in a June 2011 report of findings.
The physical description recorded in MARC field 300 illustrates some of the differences in cataloging practice between AACR2 and RDA. For example, many abbreviations, such as "p." for "pages," under AACR2 are spelled out under RDA. AACR2's General Material Designation (GMD), recorded in subfield h in MARC field 245, was superseded under RDA rules by content, media, and carrier types, recorded in MARC fields 336, 337, and 338. However, some libraries retained GMDs after adopting RDA.
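For example, the same printed book might be described as follows (an illustrative record fragment; the extent, punctuation, and vocabulary codes are typical values, not drawn from a specific catalogue):

```text
AACR2:  300  $a xii, 356 p. : $b ill. ; $c 23 cm.

RDA:    300  $a xii, 356 pages : $b illustrations ; $c 23 cm
        336  $a text $b txt $2 rdacontent
        337  $a unmediated $b n $2 rdamedia
        338  $a volume $b nc $2 rdacarrier
```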
See also
MARC standards
Regeln für die alphabetische Katalogisierung (RAK)
References
External links
AACR2
JSC (Joint Steering Committee for Development of RDA)
A Brief History of AACR
RDA: Resource Description and Access Prospectus
ALCTS Newsletter article on RDA
Cataloger's Desktop
1967 establishments
1967 introductions
2005 disestablishments
Archival science
Metadata
Metadata standards | Anglo-American Cataloguing Rules | Technology | 687 |
699,498 | https://en.wikipedia.org/wiki/Col%20%28game%29 | Col is a pencil and paper game, specifically a map-coloring game, involving the shading of areas in a line drawing according to the rules of graph coloring. With each move, the graph must remain proper (no two areas of the same colour may touch), and a player who cannot make a legal move loses. The game was described and analysed by John Conway, who attributed it to Colin Vout, in On Numbers and Games.
Example game
In the following game, the first of the two players is using red, and the second is using blue. The last move in each image is shown brighter than the other areas.
The starting graph:
The first player may colour any of the areas to begin. However, the region around the outside of the graph is not included as an area for this game.
After the first move:
The second player now colours a white cell. As no areas are currently blue, any white cell is allowed.
Two moves in:
At this point, the requirement that the graph be proper comes into effect, as a red area must be made which does not touch the existing one:
Once the third region is coloured:
Note that areas only count as touching if they share edges, not if they only share vertices, so this move is legal.
The game continues, players moving alternately, until one player cannot make a move. This player loses. A possible continuation of the game is as follows (with each move numbered for clarity):
Game over:
In this outcome, the blue player has lost.
Snort
Snort, invented by Simon P. Norton, uses a similar partisan assignment of two colors, but with the anticlassical constraint: neighboring regions are not allowed to be given different colors. Coloring the regions is explained as assigning fields to bulls and cows, where neighboring fields may not contain cattle of the opposite sex, lest they be distracted from their grazing.
Deciding the outcome in Snort is PSPACE-complete on general graphs. This is proven by reducing partizan node Kayles, which is PSPACE-complete, to a game of Snort.
Analysis
The value of a Col position is always either a number or a number plus star. This makes the game relatively simple compared with Snort, which features a much greater variety of values.
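For instance, a single isolated region that either player may legally colour is a worked example of this form: each player's only move leaves a position with no moves at all (value 0), so

```latex
% Either player's move in the lone region leaves value 0:
\[ \{\, 0 \mid 0 \,\} = \ast = 0 + \ast, \]
% a "number plus star", as the statement above requires.
```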
References
Revised and reprinted as
Revised and reprinted as
External links
Col and Snort games on Google Play
Paper-and-pencil games
Combinatorial game theory
Graph coloring | Col (game) | Mathematics | 488 |
3,731,311 | https://en.wikipedia.org/wiki/Mdm2 | Mouse double minute 2 homolog (MDM2) also known as E3 ubiquitin-protein ligase Mdm2 is a protein that in humans is encoded by the MDM2 gene. Mdm2 is an important negative regulator of the p53 tumor suppressor. Mdm2 protein functions both as an E3 ubiquitin ligase that recognizes the N-terminal trans-activation domain (TAD) of the p53 tumor suppressor and as an inhibitor of p53 transcriptional activation.
Discovery and expression in tumor cells
The murine double minute (mdm2) oncogene, which codes for the Mdm2 protein, was originally cloned, along with two other genes (mdm1 and mdm3) from the transformed mouse cell line 3T3-DM. Mdm2 overexpression, in cooperation with oncogenic Ras, promotes transformation of primary rodent fibroblasts, and mdm2 expression led to tumor formation in nude mice. The human homologue of this protein was later identified and is sometimes called Hdm2. Further supporting the role of mdm2 as an oncogene, several human tumor types have been shown to have increased levels of Mdm2, including soft tissue sarcomas and osteosarcomas as well as breast tumors.
An additional Mdm2 family member, Mdm4 (also called MdmX), has been discovered and is also an important negative regulator of p53.
Ubiquitination target: p53
The key target of Mdm2 is the p53 tumor suppressor. Mdm2 has been identified as a p53 interacting protein that represses p53 transcriptional activity. Mdm2 achieves this repression by binding to and blocking the N-terminal trans-activation domain of p53. Mdm2 is a p53 responsive gene—that is, its transcription can be activated by p53. Thus when p53 is stabilized, the transcription of Mdm2 is also induced, resulting in higher Mdm2 protein levels.
E3 ligase activity
The E3 ubiquitin ligase MDM2 is a negative regulator of the p53 tumor suppressor protein. MDM2 binds and ubiquitinates p53, marking it for degradation. p53 can induce transcription of MDM2, generating a negative feedback loop. Mdm2 also acts as an E3 ubiquitin ligase targeting both itself and p53 for degradation by the proteasome (see also ubiquitin). Several lysine residues in the p53 C-terminus have been identified as the sites of ubiquitination, and it has been shown that p53 protein levels are downregulated by Mdm2 in a proteasome-dependent manner. Mdm2 is capable of auto-polyubiquitination and, in complex with p300, a cooperating E3 ubiquitin ligase, is capable of polyubiquitinating p53. In this manner, Mdm2 and p53 are members of a negative feedback control loop that keeps the level of p53 low in the absence of p53-stabilizing signals. This loop can be interfered with by kinases and genes like p14arf when p53 activation signals, including DNA damage, are high.
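The feedback logic described above can be illustrated with a deliberately simplified two-variable rate model (a sketch only: the equations, rate constants, and initial values are illustrative assumptions, not measured biochemical parameters):

```python
import numpy as np

def p53_mdm2_toy(steps=5000, dt=0.01,
                 k_prod=1.0,    # basal p53 production (assumed)
                 k_deg=1.4,     # Mdm2-mediated p53 degradation rate (assumed)
                 k_induce=1.2,  # p53-driven Mdm2 transcription rate (assumed)
                 k_decay=0.8):  # Mdm2 turnover, e.g. auto-ubiquitination (assumed)
    """Toy negative-feedback loop: p53 induces Mdm2; Mdm2 degrades p53."""
    p53, mdm2 = 1.0, 0.1
    trace = []
    for _ in range(steps):
        d_p53 = k_prod - k_deg * mdm2 * p53       # degradation scales with Mdm2
        d_mdm2 = k_induce * p53 - k_decay * mdm2  # induction scales with p53
        p53 += d_p53 * dt
        mdm2 += d_mdm2 * dt
        trace.append((p53, mdm2))
    return np.array(trace)

# Both species settle toward a steady state in which production balances
# Mdm2-mediated degradation -- the hallmark of a negative feedback loop.
final_p53, final_mdm2 = p53_mdm2_toy()[-1]
print(f"final p53 = {final_p53:.3f}, final Mdm2 = {final_mdm2:.3f}")
```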
Structure and function
The full-length transcript of the mdm2 gene encodes a protein of 491 amino acids with a predicted molecular weight of 56 kDa. This protein contains several conserved structural domains, including an N-terminal p53 interaction domain, the structure of which has been solved using X-ray crystallography. The Mdm2 protein also contains a central acidic domain (residues 230–300). The phosphorylation of residues within this domain appears to be important for regulation of Mdm2 function. In addition, this region contains nuclear export and import signals that are essential for proper nuclear-cytoplasmic trafficking of Mdm2. Another conserved domain within the Mdm2 protein is a zinc finger domain, the function of which is poorly understood.
Mdm2 also contains a C-terminal RING domain (amino acid residues 430–480), which contains a Cys3-His2-Cys3 consensus that coordinates two zinc ions. These residues are required for zinc binding, which is essential for proper folding of the RING domain. The RING domain of Mdm2 confers E3 ubiquitin ligase activity and is sufficient for E3 ligase activity in Mdm2 RING autoubiquitination. The RING domain of Mdm2 is unique in that it incorporates a conserved Walker A or P-loop motif characteristic of nucleotide binding proteins, as well as a nucleolar localization sequence. The RING domain also binds specifically to RNA, although the function of this binding is poorly understood.
Regulation
There are several known mechanisms for regulation of Mdm2. One of these mechanisms is phosphorylation of the Mdm2 protein. Mdm2 is phosphorylated at multiple sites in cells. Following DNA damage, phosphorylation of Mdm2 leads to changes in protein function and stabilization of p53. Additionally, phosphorylation at certain residues within the central acidic domain of Mdm2 may stimulate its ability to target p53 for degradation. HIPK2 is a protein that regulates Mdm2 in this way. The induction of the p14arf protein, the alternate reading frame product of the p16INK4a locus, is also a mechanism of negatively regulating the p53-Mdm2 interaction. p14arf directly interacts with Mdm2 and leads to up-regulation of p53 transcriptional response. ARF sequesters Mdm2 in the nucleolus, resulting in inhibition of nuclear export and activation of p53, since nuclear export is essential for proper p53 degradation.
Inhibitors of the MDM2-p53 interaction include the cis-imidazoline analog nutlin.
Levels and stability of Mdm2 are also modulated by ubiquitylation. Mdm2 ubiquitylates itself (auto-ubiquitylation), which allows for its degradation by the proteasome. Mdm2 also interacts with a ubiquitin-specific protease, USP7, which can reverse Mdm2 ubiquitylation and prevent it from being degraded by the proteasome. USP7 also protects p53, a major target of Mdm2, from degradation. Thus Mdm2 and USP7 form an intricate circuit that finely regulates the stability and activity of p53, whose levels are critical for its function.
Interactions
Mdm2 has been shown to interact with:
ABL1,
ARRB1,
ARRB2,
CCNG1,
CTBP1,
CTBP2,
DAXX,
DHFR,
EP300,
ERICH3,
FKBP3,
FOXO4,
GNL3,
HDAC1,
HIF1A,
HTATIP,
IGF1R,
MDM4,
NUMB,
P16,
P53,
P73,
PCAF,
PSMD10,
PSME3,
RPL5,
RPL11,
PML,
RPL26,
RRM2B,
RYBP,
TBP, and
UBC.
Mdm2 p53-independent role
Mdm2 overexpression was shown to inhibit DNA double-strand break repair through a novel, direct interaction between Mdm2 and Nbs1 that is independent of p53. Regardless of p53 status, increased levels of Mdm2, but not of Mdm2 lacking its Nbs1-binding domain, caused delays in DNA break repair, chromosomal abnormalities, and genome instability. These data demonstrated that Mdm2-induced genome instability can be mediated through Mdm2:Nbs1 interactions, independently of Mdm2's association with p53.
References
Further reading
External links
NLM
NCBI-Gene
Nextbio
Genecards
Atlas of Genetics
Proteins
Oncogenes
Human proteins | Mdm2 | Chemistry | 1,665 |
34,226,680 | https://en.wikipedia.org/wiki/Alice%20%E2%80%93%20A%20Fight%20For%20Life | Alice – A Fight For Life is a documentary featuring 47-year-old Alice Jefferson, a British woman who developed malignant pleural mesothelioma thirty years after working for nine months at Cape Insulation's Acre Mill asbestos plant in Hebden Bridge, West Yorkshire. The film also explored the health issues surrounding the manufacture and use of asbestos products. Described by The Guardian newspaper as "a momentous film", the programme also explicitly linked asbestos with cancer, and attacked what it perceived as the government's complacency in limiting the manufacture and use of asbestos in Britain.
Alice died of mesothelioma in February 1982, a month after filming for the programme had ended. She left two children, Paul and Patsy, aged 15 and 5 respectively.
Background
John Willis, an investigative journalist described by The Times as "a digger of the first order" and by The Guardian as "one of TV's most courageous documentary writers", had previously produced two critically acclaimed exposés for Yorkshire Television: the BAFTA award-winning Johnny Go Home (1975) and Rampton - The Secret Hospital (1979), which received an International Emmy Award. Between 1964 and 1975 the media in the United Kingdom and the United States had kept asbestos "high on the political agenda", and in the early 1970s ITV and the BBC had broadcast two programmes examining working conditions and occupational health at the Cape plant at Acre Mill. The first was World in Action: The Dust At Acre Mill, produced by Granada TV and broadcast on ITV in June 1971. This was followed in January 1975 by BBC TV's Horizon: The Killer Dust.
As a result, West Yorkshire MP Max Madden lodged an official complaint alleging non-enforcement of the Asbestos Industry Regulations 1931, which led to an investigation and report by the Parliamentary Ombudsman, Sir Alan Marre. The report, which was highly critical of the behaviour of the Factory Inspectors at Acre Mill, found that since the plant's closure in 1970, 10% of the workforce had developed asbestosis, a figure far in excess of the previously estimated exposure-risk relationship. The government responded by launching a major inquiry, the Advisory Committee on Asbestos, or "Simpson Committee", in 1976. The committee delivered two interim reports and published its final report in October 1979.
The final report "lack[ed] scientific research independent of the industrial sector", which "conditioned some of the final recommendations (e.g. on reducing threshold values and on the search for products that could replace the mineral), which were very reconciling and had very little impact on working conditions." Concerns about the effect restrictions would have on employment figures also played a part in delaying implementation of the recommendations. Dissatisfied with the findings of the report, which appeared to favour the asbestos industry's position and contained "deceptive" statements from medical professionals in the pay of the industry, John Willis, assisted by researchers James Cutler and Peter Moore, began work on Alice... in 1980.
See also
Libby, Montana (2004) - American television documentary about asbestos exposure.
References
1980s British films
1982 television films
1982 films
1982 documentary films
Asbestos
Documentary films about health care
ITV documentaries | Alice – A Fight For Life | Environmental_science | 649 |
66,347,717 | https://en.wikipedia.org/wiki/Minister%20for%20Building | The Minister for Building is a minister in the Government of New South Wales with responsibility for building across New South Wales, Australia.
History
Building Materials
During World War II, building controls had been exercised by the Commonwealth government. A secondary industries section had been established in the Premier's department in 1944 with responsibility for developing manufacturing industries, and was transferred in 1945 to the Department of Labour and Industry. The functions of the section were to keep the Department informed about development and decentralisation of secondary industries, and to provide information, advice and assistance to those contemplating the establishment of new industries or the expansion and technical development of existing industries in NSW. The Section was responsible for the development and progressive implementation of various plans for industrial development, contact with overseas industries, negotiation for the establishment of factories in Australia, and movement towards the more rational and economic grouping of inter-related industries. The Division worked co-operatively with Commonwealth and other NSW agencies concerned with the development and decentralisation of secondary industries, and maintained contact with manufacturers for the purposes of information exchange, fostering expansion and efficiency, and encouraging maximum employment.
With post-war reconstruction, control over building materials returned to the state governments. Controls continued to be necessary in the post-war environment to ensure that State planning priorities (including the demands of population growth) were achieved and scarce resources were allocated equitably. The controls introduced by the Building Operations and Building Materials Control Act 1946 included requiring consent for building operations except those exempted under the Act; preventing architects, builders, contractors and engineers from commencing buildings which were unauthorised; and requiring them to conform to any conditions placed on the building authorisation. Local Government powers to approve building applications were subject to the Act. Consent was required to use bricks except for purposes defined by the Act. Restrictions were placed on the supply of a range of other building materials. Inspectors could visit building sites and places where building materials were manufactured, stored, sold or distributed, and require the production of relevant records. This was initially administered by the Building Materials Branch of the Department of Labour and Industry and timber distribution staff of the Forestry Commission. In June 1947 these staff were transferred to the new Department of Building Materials. A technical branch was established to stimulate and develop the various activities allied to the building industry, and to ensure the training of skilled tradesmen to enable the State's housing program to be achieved. The branch also controlled all building materials such as bricks, cement products, and timber. The various branches which had combined to create the Department were operationally restricted to coastal districts, while the new department's responsibilities covered the entire State. Bricks, for example, were not permitted to be used for the construction of fences or garages.
The principal responsibility of the Minister was the development, availability, production and standard of building materials particularly bricks, tiles and baths. It was established in the second McGirr ministry in May 1947, carved out of the responsibilities of the Minister for Labour and Industry. Additional responsibility for the encouragement and regulation of manufacturing, referred to as secondary industries, were added in November 1947 and the title of the portfolio was amended to reflect the additional responsibilities in March 1948.
On 4 November 1947 the secondary industries division was transferred from the Premier's department to the Department of Building Materials in order to achieve co-ordination between industrial and housing development. Despite the additional responsibilities, the portfolio remained named Building Materials until June 1950, when it was renamed Minister for Secondary Industries and Minister for Building Materials.
On 15 August 1952 William Dickson resigned from the ministry and was elected President of the Legislative Council. The portfolio was abolished, with responsibility for secondary industries returning to the Premier, while building materials returned to the responsibility of the Minister for Labour and Industry. Manufacturing was next represented at a portfolio level as Minister for Industrial Development and Decentralisation.
Infrastructure
Infrastructure was first represented at a portfolio level in the fourth Carr ministry, combined with Planning. The minister, Craig Knowles, also held the portfolio of Natural Resources and was responsible for the Department of Infrastructure, Planning and Natural Resources. The government's stated purpose in establishing a combined department was:
to form one department for the purpose of making integrated decisions about natural resource management and land use planning; that is to bring the social, economic and environmental agendas together to promote sustainability;
improve service delivery and provide clear, concise and co-ordinated information to customers;
to simplify policy and regulation to resolve confusion and duplication;
to reduce costs and redirect savings back to the community;
to link decisions about vital infrastructure with the broader plans for NSW; and
to devolve decision making to the communities that those decisions affect.
Infrastructure was established as a separate portfolio in the first Iemma ministry; however, it was responsible for neither a department nor legislation. The portfolio was combined with planning in the O'Farrell ministry before being split into separate portfolios in the first Baird ministry. The portfolio was then combined with Transport in the second Baird ministry, before being abolished in the second Berejiklian ministry, subsumed into Transport.
The portfolio was recreated in the second Perrottet ministry. In that ministry, from December 2021 to March 2023, the minister was responsible for Barangaroo and Infrastructure NSW. It was one of six ministries in the transport sector, and the Minister (for Infrastructure, Cities and Active Transport) worked with the Minister for Transport, the Minister for Metropolitan Roads and the Minister for Regional Transport and Roads. Together they administered the portfolio through the Department of Transport (Transport for NSW) and a range of other government agencies that coordinate funding arrangements for transport operators, including hundreds of local and community transport operators.
List of ministers
See also
List of New South Wales government agencies
References
External links
Transport for New South Wales
Infrastructure
Building
Infrastructure ministers | Minister for Building | Engineering | 1,155 |
76,206,157 | https://en.wikipedia.org/wiki/Birds%20of%20the%20World | Birds of the World (BoW) is an online database of ornithological data adapted from the Handbook of the Birds of the World and contemporary reference works, including Birds of North America, Neotropical Birds Online, and Bird Families of the World. The database is published and maintained by the Cornell Lab of Ornithology and collects data on bird observations through integration with eBird. The database requires a subscription to access the majority of its entries, but offers institutional access to many libraries and birding-related organizations, participating in the National Information Standards Organization's Shared E-Resource Understanding practice as a publisher.
The database is frequently cited in regional checklists and distribution map studies, either as a point of comparison or a source of data.
History
Birds of the World was originally developed in the early 1990s through collaboration between the American Ornithologists' Union, the Cornell Lab of Ornithology, and the Academy of Natural Sciences of Drexel University. The goal of the project was to produce an illustrated guide to all of the birds of the world; its first iteration was the 17-volume Handbook of the Birds of the World, published by Lynx Edicions over the course of 22 years, from 1992 to 2014. After the Cornell Lab of Ornithology acquired the rights to the contents of the Handbook of the Birds of the World, the online database was launched in March 2020.
A significant portion of the audiovisual content available in Birds of the World is collected through citizen-science data collection provided by eBird, but content is also included from the Macaulay Library, as gathered in the Internet Bird Collection by Josep del Hoyo, the founder of Lynx Edicions, and his colleagues in 2002.
Description
Birds of the World is a subscription-access database that aims to describe comprehensive life history information on birds. This includes:
Species accounts
Details on taxonomy, habitat, breeding, diet, and behaviors
Family accounts
Hybrid and subspecies descriptions and photos
Migration and range maps
IUCN Conservation Status
Literature cited
Common names in multiple languages
Free resources
Birds of the World provides various resources other than those provided with an institutional or individual subscription to the service. James A. Jobling's Dictionary of Scientific Bird Names, which would be published by Lynx Edicions as the HBW Alive Key to Scientific Names in Ornithology, is accessible as a searchable database on the Birds of the World website, allowing free access to the definitions of the scientific names of birds. The HBW Alive Key has been the underpinning for developments between the Cornell Lab and BirdLife International to produce a unified checklist of the birds of the world, and is currently used to form the list of bird species on the IUCN Red List.
References
External links
Birds of the World website
The Key to Scientific Names on Birds of the World
2020 introductions
Biodiversity databases
Birdwatching
Citizen science
Cornell University
Ornithological citizen science | Birds of the World | Biology,Environmental_science | 595 |
24,690,455 | https://en.wikipedia.org/wiki/C9H10Cl2N4 | The molecular formula C9H10Cl2N4 (molar mass: 245.10 g/mol, exact mass: 244.0283 u) may refer to:
Aganodine
Apraclonidine
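The quoted masses can be checked directly from standard atomic weights and principal-isotope masses (a quick verification sketch; the constants are rounded IUPAC values):

```python
# Verify the molar mass and monoisotopic (exact) mass of C9H10Cl2N4.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "Cl": 35.45, "N": 14.007}
ISOTOPE_MASS  = {"C": 12.0, "H": 1.00783, "Cl": 34.96885, "N": 14.00307}
FORMULA = {"C": 9, "H": 10, "Cl": 2, "N": 4}

molar = sum(n * ATOMIC_WEIGHT[el] for el, n in FORMULA.items())
exact = sum(n * ISOTOPE_MASS[el] for el, n in FORMULA.items())
print(f"molar mass ~ {molar:.2f} g/mol")  # ~ 245.10
print(f"exact mass ~ {exact:.4f} u")      # ~ 244.0283
```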
Molecular formulas | C9H10Cl2N4 | Physics,Chemistry | 66 |
7,071,096 | https://en.wikipedia.org/wiki/Engineering%20design%20process | The engineering design process, also known as the engineering method, is a common series of steps that engineers use in creating functional products and processes. The process is highly iterative – parts of the process often need to be repeated many times before another can be entered – though the part(s) that get iterated and the number of such cycles in any given project may vary.
It is a decision making process (often iterative) in which the engineering sciences, basic sciences and mathematics are applied to convert resources optimally to meet a stated objective. Among the fundamental elements of the design process are the establishment of objectives and criteria, synthesis, analysis, construction, testing and evaluation.
Common stages of the engineering design process
It is important to understand that there are various framings and articulations of the engineering design process. The terminology employed by different authors overlaps to varying degrees, which affects which steps are stated explicitly or deemed "high level" versus subordinate in any given model. This applies equally to the example stages and sequences given here.
One example framing of the engineering design process delineates the following stages: research, conceptualization, feasibility assessment, establishing design requirements, preliminary design, detailed design, production planning and tool design, and production. Others, noting that "different authors (in both research literature and in textbooks) define different phases of the design process with varying activities occurring within them," have suggested more simplified/generalized models – such as problem definition, conceptual design, preliminary design, detailed design, and design communication. Another summary of the process, from European engineering design literature, includes clarification of the task, conceptual design, embodiment design, detail design. (NOTE: In these examples, other key aspects – such as concept evaluation and prototyping – are subsets and/or extensions of one or more of the listed steps.)
Research
Various stages of the design process (and even earlier) can involve a significant amount of time spent on locating information and research. Consideration should be given to the existing applicable literature, problems and successes associated with existing solutions, costs, and marketplace needs.
The source of information should be relevant. Reverse engineering can be an effective technique if other solutions are available on the market. Other sources of information include the Internet, local libraries, available government documents, personal organizations, trade journals, vendor catalogs and individual experts available.
Design requirements
Establishing design requirements and conducting requirement analysis, sometimes termed problem definition (or deemed a related activity), is one of the most important elements in the design process in certain industries, and this task is often performed at the same time as a feasibility analysis. The design requirements control the design of the product or process being developed, throughout the engineering design process. These include basic things like the functions, attributes, and specifications – determined after assessing user needs. Some design requirements include hardware and software parameters, maintainability, availability, and testability.
Feasibility
In some cases, a feasibility study is carried out after which schedules, resource plans and estimates for the next phase are developed. The feasibility study is an evaluation and analysis of the potential of a proposed project to support the process of decision making. It outlines and analyses alternatives or methods of achieving the desired outcome. The feasibility study helps to narrow the scope of the project to identify the best scenario.
A feasibility report is generated following which Post Feasibility Review is performed.
The purpose of a feasibility assessment is to determine whether the engineer's project can proceed into the design phase. This is based on two criteria: the project needs to be based on an achievable idea, and it needs to be within cost constraints. It is important to have engineers with experience and good judgment to be involved in this portion of the feasibility study.
Concept generation
A concept study (conceptualization, conceptual design) is often a phase of project planning that includes producing ideas and taking into account the pros and cons of implementing those ideas. This stage of a project is done to minimize the likelihood of error, manage costs, assess risks, and evaluate the potential success of the intended project. In any event, once an engineering issue or problem is defined, potential solutions must be identified. These solutions can be found by using ideation, the mental process by which ideas are generated. In fact, this step is often termed Ideation or "Concept Generation." The following are widely used techniques:
trigger word – a word or phrase associated with the issue at hand is stated, and subsequent words and phrases are evoked.
morphological analysis – independent design characteristics are listed in a chart, and different engineering solutions are proposed for each solution. Normally, a preliminary sketch and short report accompany the morphological chart.
synectics – the engineer imagines himself or herself as the item and asks, "What would I do if I were the system?" This unconventional method of thinking may find a solution to the problem at hand. A vital aspect of the conceptualization step is synthesis: the process of taking the elements of the concept and arranging them in the proper way. This creative process of synthesis is present in every design.
brainstorming – this popular method involves thinking of different ideas, typically as part of a small group, and adopting these ideas in some form as a solution to the problem
Various generated ideas must then undergo a concept evaluation step, which utilizes various tools to compare and contrast the relative strengths and weaknesses of possible alternatives.
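One widely used evaluation tool is a weighted decision matrix; the sketch below scores candidate concepts against weighted criteria (the criteria, weights, and scores are invented purely for illustration):

```python
# Weighted decision matrix for concept evaluation (illustrative data only).
criteria = {"cost": 0.3, "reliability": 0.4, "ease_of_manufacture": 0.3}

# Scores from 1 (poor) to 5 (excellent) for each candidate concept.
concepts = {
    "concept_A": {"cost": 4, "reliability": 3, "ease_of_manufacture": 5},
    "concept_B": {"cost": 2, "reliability": 5, "ease_of_manufacture": 3},
}

for name, scores in concepts.items():
    total = sum(weight * scores[c] for c, weight in criteria.items())
    print(f"{name}: weighted score = {total:.2f}")
# The highest-scoring concept is carried forward to preliminary design.
```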
Preliminary design
The preliminary design, or high-level design (also called FEED or basic design), often bridges a gap between design conception and detailed design, particularly in cases where the level of conceptualization achieved during ideation is not sufficient for full evaluation. In this task, the overall system configuration is defined, and schematics, diagrams, and layouts of the project may provide early project configuration. (This notably varies a lot by field, industry, and product.) During detailed design and optimization, the parameters of the part being created will change, but the preliminary design focuses on creating the general framework to build the project on.
B. S. Blanchard and W. J. Fabrycky describe it as:
“The ‘whats’ initiating conceptual design produce ‘hows’ from the conceptual design evaluation effort applied to feasible conceptual design concepts. Next, the ‘hows’ are taken into preliminary design through the means of allocated requirements. There they become ‘whats’ and drive preliminary design to address ‘hows’ at this lower level.”
Detailed design
Following FEED is the detailed design (detailed engineering) phase, which may also include procurement of materials.
This phase further elaborates each aspect of the project or product through complete description via solid modeling, drawings, and specifications.
Computer-aided design (CAD) programs have made the detailed design phase more efficient. For example, a CAD program can provide optimization to reduce volume without hindering a part's quality. It can also calculate stress and displacement using the finite element method to determine stresses throughout the part.
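As a toy illustration of the kind of computation such tools automate, the axial stress and elongation of a uniform bar under load follow directly from sigma = F/A and delta = FL/(AE) (all values below are invented for the example):

```python
# Axial stress and elongation of a uniform bar (illustrative values).
F = 10_000.0  # applied load in newtons (assumed)
L = 2.0       # bar length in metres (assumed)
A = 4.0e-4    # cross-sectional area in m^2 (assumed)
E = 200.0e9   # Young's modulus of steel in pascals

stress = F / A                # sigma = F/A
elongation = F * L / (A * E)  # delta = FL/(AE)
print(f"stress = {stress/1e6:.1f} MPa")         # 25.0 MPa
print(f"elongation = {elongation*1e3:.3f} mm")  # 0.250 mm
```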
Production planning
The production planning and tool design consists of planning how to mass-produce the product and which tools should be used in the manufacturing process. Tasks to complete in this step include selecting materials, selection of the production processes, determination of the sequence of operations, and selection of tools such as jigs, fixtures, metal cutting and metal or plastics forming tools. This task also involves additional prototype testing iterations to ensure the mass-produced version meets qualification testing standards.
Comparison with the scientific method
Engineering is formulating a problem that can be solved through design. Science is formulating a question that can be solved through investigation.
The engineering design process bears some similarity to the scientific method. Both processes begin with existing knowledge, and gradually become more specific in the search for knowledge (in the case of "pure" or basic science) or a solution (in the case of "applied" science, such as engineering). The key difference between the engineering process and the scientific process is that the engineering process focuses on design, creativity and innovation while the scientific process emphasizes explanation, prediction and discovery (observation).
Degree programs
Methods are being taught and developed in Universities including:
Engineering Design, University of Bristol Faculty of Engineering
Dyson School of Design Engineering, Imperial College London
TU Delft, Industrial Design Engineering.
University of Waterloo, Systems Design Engineering
See also
Applied science
Computer-automated design
Design engineer
Engineering analysis
Engineering optimization
New product development
Systems engineering process
Surrogate model
Traditional engineering
References
External links
Ullman, David G. (2009) The Mechanical Design Process, Mc Graw Hill, 4th edition,
Eggert, Rudolph J. (2010) Engineering Design, Second Edition, High Peak Press, Meridian, Idaho,
Engineering concepts
Mechanical engineering
Systems engineering | Engineering design process | Physics,Engineering | 1,757 |
429,836 | https://en.wikipedia.org/wiki/Message-oriented%20middleware | Message-oriented middleware (MOM) is software or hardware infrastructure supporting sending and receiving messages between distributed systems. Message-oriented middleware is in contrast to streaming-oriented middleware, where data is communicated as a sequence of bytes with no explicit message boundaries. Note that streaming protocols are almost always built on top of protocols using discrete messages, such as frames (Ethernet), datagrams (UDP), packets (IP), and cells (ATM).
MOM allows application modules to be distributed over heterogeneous platforms and reduces the complexity of developing applications that span multiple operating systems and network protocols. The middleware creates a distributed communications layer that insulates the application developer from the details of the various operating systems and network interfaces. Application programming interfaces (APIs) that extend across diverse platforms and networks are typically provided by MOM.
This middleware layer allows software components (applications, servlets, and other components) that have been developed independently and that run on different networked platforms to interact with one another. Applications distributed on different network nodes use the application interface to communicate. In addition, by providing an administrative interface, this new, virtual system of interconnected applications can be made fault tolerant and secure.
MOM provides software elements that reside in all communicating components of a client/server architecture and typically support asynchronous calls between the client and server applications. MOM reduces the involvement of application developers with the complexity of the master-slave nature of the client/server mechanism.
Middleware categories
Remote procedure call or RPC-based middleware
Object request broker or ORB-based middleware
Message-oriented middleware or MOM-based middleware
All these models make it possible for one software component to affect the behavior of another component over a network. They differ in that RPC- and ORB-based middleware create systems of tightly coupled components, whereas MOM-based systems allow for loose coupling of components. In an RPC- or ORB-based system, when one procedure calls another, it must wait for the called procedure to return before it can do anything else. In these mostly synchronous messaging models, the middleware functions partly as a super-linker, locating the called procedure on a network and using network services to pass function or method parameters to the procedure and then to return results. Note that object request brokers also support fully asynchronous messaging via one-way invocations.
Advantages
Central reasons for using a message-based communications protocol include its ability to store (buffer), route, or transform messages while conveying them from senders to receivers.
Another advantage of messaging provider mediated messaging between clients is that by adding an administrative interface, you can monitor and tune performance. Client applications are thus effectively relieved of every problem except that of sending, receiving, and processing messages. It is up to the code that implements the MOM system and up to the administrator to resolve issues like interoperability, reliability, security, scalability, and performance.
Asynchronicity
Using a MOM system, a client makes an API call to send a message to a destination managed by the provider. The call invokes provider services to route and deliver the message. Once it has sent the message, the client can continue to do other work, confident that the provider retains the message until a receiving client retrieves it. The message-based model, coupled with the mediation of the provider, makes it possible to create a system of loosely coupled components.
MOM comprises a category of inter-application communication software that generally relies on asynchronous message-passing, as opposed to a request-response architecture. In asynchronous systems, message queues provide temporary storage when the destination program is busy or not connected. In addition, most asynchronous MOM systems provide persistent storage to back up the message queue. This means that the sender and receiver do not need to connect to the network at the same time (asynchronous delivery), and problems with intermittent connectivity are solved. It also means that should the receiver application fail for any reason, the senders can continue unaffected, as the messages they send will simply accumulate in the message queue for later processing when the receiver restarts.
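The decoupling can be sketched with Python's standard-library queue standing in for a broker-managed destination (an in-process, non-persistent illustration; a real MOM provider adds network transport, routing, and persistent storage):

```python
import queue
import threading
import time

mq = queue.Queue()  # stand-in for a provider-managed message queue

def sender():
    for i in range(3):
        mq.put(f"order-{i}")  # fire-and-forget: no waiting for a reply
        print(f"sent order-{i}; continuing with other work")

def receiver():
    time.sleep(0.5)  # receiver connects late: messages simply wait in the queue
    while True:
        msg = mq.get()
        if msg is None:  # sentinel value signals shutdown
            break
        print(f"processed {msg}")

worker = threading.Thread(target=receiver)
worker.start()
sender()
mq.put(None)  # tell the receiver to stop
worker.join()
```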
Routing
Many message-oriented middleware implementations depend on a message queue system. Some implementations permit routing logic to be provided by the messaging layer itself, while others depend on client applications to provide routing information or allow for a mix of both paradigms. Some implementations make use of broadcast or multicast distribution paradigms.
Transformation
In a message-based middleware system, the message received at the destination need not be identical to the message originally sent. A MOM system with built-in intelligence can transform and route messages to match the requirements of the sender or of the recipient. In conjunction with the routing and broadcast/multicast facilities, one application can send a message in its own native format, and two or more other applications may each receive a copy of the message in their own native format. Many modern MOM systems provide sophisticated message transformation (or mapping) tools that allow programmers to specify transformation rules via a simple GUI drag-and-drop operation.
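A minimal illustration of such transformation: one producer message fanned out to two consumers, each in its own native format (the payload, field names, and consumer names are invented for the sketch):

```python
import csv
import io
import json

def to_json(record):
    return json.dumps(record)

def to_csv(record):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(record.keys())
    writer.writerow(record.values())
    return buf.getvalue()

# One message in the producer's native format...
message = {"symbol": "XYZ", "price": 101.5, "qty": 200}

# ...delivered to each subscriber in its preferred representation.
subscribers = {"audit_service": to_json, "spreadsheet_feed": to_csv}
for name, transform in subscribers.items():
    print(f"--> {name}:\n{transform(message)}")
```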
Disadvantages
The primary disadvantage of many message-oriented middleware systems is that they require an extra component in the architecture, the message transfer agent (message broker). As with any system, adding another component can lead to reductions in performance and reliability, and can also make the system as a whole more difficult and expensive to maintain.
In addition, many inter-application communications have an intrinsically synchronous aspect, with the sender specifically wanting to wait for a reply to a message before continuing (see real-time computing and near-real-time for extreme cases). Because message-based communication inherently functions asynchronously, it may not fit well in such situations. That said, most MOM systems have facilities to group a request and a response as a single pseudo-synchronous transaction.
With a synchronous messaging system, the calling function does not return until the called function has finished its task. In a loosely coupled asynchronous system, the calling client can continue to load work upon the recipient until the resources needed to handle this work are depleted and the called component fails. Of course, these conditions can be minimized or avoided by monitoring performance and adjusting message flow, but this is work that is not needed with a synchronous messaging system. The important thing is to understand the advantages and liabilities of each kind of system. Each system is appropriate for different kinds of tasks. Sometimes, a combination of the two kinds of systems is required to obtain the desired behavior.
Standards
Historically, a lack of standards governing the use of message-oriented middleware caused problems. Most of the major vendors have their own implementations, each with its own application programming interface (API) and management tools.
One of the long-standing standards for message-oriented middleware is X/Open group's XATMI specification (Distributed Transaction Processing: The XATMI Specification), which standardizes an API for interprocess communications. Known implementations of this API include ATR Baltic's Enduro/X middleware and Oracle's Tuxedo.
The Advanced Message Queuing Protocol (AMQP) is an approved OASIS and ISO standard that defines the protocol and formats used between participating application components, so implementations are interoperable. AMQP may be used with flexible routing schemes, including common messaging paradigms like point-to-point, fan-out, publish/subscribe, and request-response (these are intentionally omitted from v1.0 of the protocol standard itself, but rely on the particular implementation and/or underlying network protocol for routing). It also supports transaction management, queuing, distribution, security, management, clustering, federation and heterogeneous multi-platform support. Java applications typically access AMQP through the JMS API. Other implementations provide APIs for C#, C++, PHP, Python, Ruby, and other programming languages.
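As a hedged illustration only, the sketch below uses the RabbitMQ Java client, a widely deployed broker client that implements the earlier AMQP 0-9-1 dialect rather than the AMQP 1.0 OASIS standard described above; the host, queue name, and payload are assumptions.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class AmqpSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");   // assumed broker location
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            // Declare a durable, non-exclusive, non-auto-delete queue
            channel.queueDeclare("orders", true, false, false, null);
            // Publishing to the default ("") exchange routes by queue name
            channel.basicPublish("", "orders", null, "order 42".getBytes("UTF-8"));
            channel.close();
            connection.close();
        }
    }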
The High Level Architecture (HLA IEEE 1516) is an Institute of Electrical and Electronics Engineers (IEEE) and Simulation Interoperability Standards Organization (SISO) standard for simulation interoperability. It defines a set of services, provided through an API in C++ or Java. The services offer publish/subscribe based information exchange, based on a modular Federation Object Model. There are also services for coordinated data exchange and time advance, based on logical simulation time, as well as synchronization points. Additional services provide transfer of ownership, data distribution optimizations and monitoring and management of participating Federates (systems).
The MQ Telemetry Transport (MQTT) is an ISO standard (ISO/IEC PRF 20922) supported by the OASIS organization. It provides a lightweight publish/subscribe reliable messaging transport protocol on top of TCP/IP suitable for communication in M2M/IoT contexts where a small code footprint is required and/or network bandwidth is at a premium.
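A minimal publish/subscribe sketch using the Eclipse Paho Java client appears below; the broker URI, client identifier, topic, and payload are assumptions rather than parts of the MQTT standard itself.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class MqttSketch {
        public static void main(String[] args) throws Exception {
            MqttClient client = new MqttClient("tcp://broker.example.org:1883", "sensor-42");
            client.connect();
            // Print every message the broker delivers for this topic
            client.subscribe("building/temperature", (topic, message) ->
                    System.out.println(topic + ": " + new String(message.getPayload())));
            MqttMessage reading = new MqttMessage("21.5".getBytes());
            reading.setQos(1);              // request at-least-once delivery
            client.publish("building/temperature", reading);
        }
    }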
The Object Management Group's Data Distribution Service (DDS) provides a message-oriented publish/subscribe (P/S) middleware standard that aims to enable scalable, real-time, dependable, high-performance and interoperable data exchanges between publishers and subscribers. The standard provides interfaces to C++, C++11, C, Ada, Java, and Ruby.
XMPP
The eXtensible Messaging and Presence Protocol (XMPP) is a communications protocol for message-oriented middleware based on Extensible Markup Language (XML). Designed to be extensible, the protocol has also been used for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet of Things applications such as the smart grid, and social networking services. Unlike most instant messaging protocols, XMPP is defined in an open standard and uses an open systems approach of development and application, by which anyone may implement an XMPP service and interoperate with other organizations' implementations. Because XMPP is an open protocol, implementations can be developed using any software license; although many server, client, and library implementations are distributed as free and open-source software, many freeware and proprietary software implementations also exist.
The Internet Engineering Task Force (IETF) formed an XMPP working group in 2002 to formalize the core protocols as an IETF instant messaging and presence technology. The XMPP working group produced four specifications (RFC 3920, RFC 3921, RFC 3922, RFC 3923), which were approved as Proposed Standards in 2004. In 2011, RFC 3920 and RFC 3921 were superseded by RFC 6120 and RFC 6121 respectively, with RFC 6122 specifying the XMPP address format. In addition to these core protocols standardized at the IETF, the XMPP Standards Foundation (formerly Jabber Software Foundation) is active in developing open XMPP extensions.
XMPP-based software is deployed widely across the Internet, according to the XMPP Standards Foundation, and forms the basis for the Department of Defense (DoD) Unified Capabilities Framework.
The Java EE programming environment provides a standard API called Java Message Service (JMS), which is implemented by most MOM vendors and aims to hide the particular MOM API implementations; however, JMS does not define the format of the messages that are exchanged, so JMS systems are not interoperable.
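To complement the sending sketch earlier in this article, a JMS receiving client might register an asynchronous listener as follows; the JNDI names mirror the earlier assumed ones, and the callback body is illustrative.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class Receiver {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("queue/orders");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            // The provider invokes this callback once for each delivered message
            consumer.setMessageListener(message -> {
                try {
                    System.out.println(((TextMessage) message).getText());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            connection.start();   // message delivery begins only after start()
        }
    }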
A similar effort is the actively evolving OpenMAMA project, which aims to provide a common API, especially to C clients. As of August 2012, it is mainly appropriate for distributing market-oriented data (e.g. stock quotes) over pub-sub middleware.
Message queuing
Message queues allow the exchange of information between distributed applications. A message queue can reside in memory or disk storage. Messages stay in the queue until they are processed by a service consumer. Through the message queue, applications can be implemented independently: they do not need to know each other's location, and the sender does not need to wait for the receiver to be available before handing over a message.
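The decoupling a queue provides can be shown in miniature with an in-process queue; a production MOM system layers network transport, persistence, and administration on top of the same idea. The class and message names below are illustrative.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueSketch {
        public static void main(String[] args) {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
            Thread producer = new Thread(() -> {
                try {
                    queue.put("message 1");   // blocks only if the queue is full
                    queue.put("message 2");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        // take() waits until a message is available, so the
                        // consumer processes at its own pace
                        System.out.println("processed " + queue.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();
            consumer.start();
        }
    }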
Trends
Advanced Message Queuing Protocol (AMQP) provides an open standard application layer protocol for message-oriented middleware.
The Object Management Group's Data Distribution Service (DDS) has added many new standards to the basic DDS specification. See Catalog of OMG Data Distribution Service (DDS) Specifications for more details.
The Object Management Group's Common Object Request Broker Architecture (CORBA) has recently added many new standards, including a new language mapping to C# and an update to the IDL to C++ mapping specification to support the latest updates to the C++ language standards. See Catalog of OMG CORBA Specifications for more details.
Extensible Messaging and Presence Protocol (XMPP) is a communications protocol for message-oriented middleware based on Extensible Markup Language (XML).
Streaming Text Oriented Messaging Protocol (STOMP), formerly named TTMP, is a simple text-based protocol that provides an interoperable wire format, allowing STOMP clients to talk with any message broker supporting the protocol.
An added trend sees message-oriented middleware functions being implemented in hardware, usually in a field-programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other specialized silicon chip.
See also
Enterprise Integration Patterns (book)
Enterprise messaging system
Enterprise service bus
Flow-based programming
Event-driven architecture
References
External links
Enterprise application integration
Middleware
Systems engineering | Message-oriented middleware | Technology,Engineering | 2,777 |
1,468,595 | https://en.wikipedia.org/wiki/MVCML | Multiple-valued current mode logic (MVCML) or current mode multiple-valued logic (CM-MVL) is a method of representing electronic logic levels in analog CMOS circuits. In MVCML, logic levels are represented by integer multiples of a base current Ibase, set to a certain value x. Thus, logic level 0 corresponds to zero current, level 1 to a current of Ibase = x, level 2 to a current of 2·Ibase = 2x, and so on.
References
See also
Many-valued logic
Digital electronics | MVCML | Engineering | 114 |
8,701,557 | https://en.wikipedia.org/wiki/Schizosaccharomycetales | Schizosaccharomycetales is an order in the kingdom of fungi that contains the family Schizosaccharomycetaceae.
References
Yeasts
Ascomycota
Ascomycota orders | Schizosaccharomycetales | Biology | 47 |
320,308 | https://en.wikipedia.org/wiki/North%20American%20Network%20Operators%27%20Group | The North American Network Operators' Group (NANOG) is a forum for the coordination and dissemination of technical information related to backbone/enterprise networking technologies and operational practices. It runs meetings, talks, surveys, and a mailing list for Internet service providers. The main method of communication is the NANOG mailing list (known informally as NANOG-l), a free mailing list to which anyone may subscribe or post.
History
NANOG evolved from the NSFNET "Regional-Techs" meetings, where technical staff from the regional networks met to discuss operational issues. At the February 1994 regional tech meeting in San Diego, the group revised its charter to include a broader base of network service providers and subsequently adopted NANOG as its new name. NANOG was organized by Merit Network, a non-profit Michigan organization, from 1994 through 2011, when it was transferred to NewNOG.
Funding
Funding for NANOG originally came from the National Science Foundation as part of two projects Merit undertook in partnership with NSF and other organizations: the NSFNET Backbone Service and the Routing Arbiter project. Later, all NANOG funds came from conference registration fees, donations from vendors and, starting in 2011, membership dues.
Meetings
NANOG meetings are held three times each year and include presentations, tutorials, and BOFs (Birds of a Feather meetings). There are also lightning talks, where speakers can submit brief presentations (no longer than 10 minutes) at very short notice. Conference participants typically include senior engineering staff from tier 1 and tier 2 ISPs. In addition to the conferences, NANOG On the Road events offer single-day networking events.
NANOG meetings are organized by NewNOG, Inc., a Delaware non-profit organization, which took over responsibility for NANOG from the Merit Network in February 2011. Meetings are hosted by NewNOG and other organizations from the U.S. and Canada. Overall leadership is provided by the NANOG Steering Committee, established in 2005, and a Program Committee.
See also
Internet network operators' group
References
External links
Routing Arbiter Project
CIO official website
Ripe Labs - Network operations
Computer networking
Electronic mailing lists
Internet Network Operators' Groups
History of the Internet | North American Network Operators' Group | Technology,Engineering | 447 |
433,101 | https://en.wikipedia.org/wiki/Neil%20Bartlett%20%28chemist%29 | Neil Bartlett (15 September 1932 – 5 August 2008) was a British chemist who specialized in fluorine and compounds containing fluorine, and became famous for creating the first noble gas compounds. He taught chemistry at the University of British Columbia and the University of California, Berkeley.
Biography
Neil Bartlett was born on 15 September 1932 in Newcastle-upon-Tyne, England. Bartlett's interest in chemistry dated back to an experiment at Heaton Grammar School when he was only eleven years old, in which he prepared "beautiful, well-formed" crystals by reaction of aqueous ammonia with copper sulfate. He explored chemistry by constructing a makeshift lab in his parents' home using chemicals and glassware he purchased from a local supply store. He went on to attend King's College, University of Durham (which later became Newcastle University) in the United Kingdom, where he obtained a Bachelor of Science (1954) and then a doctorate (1958) in the inorganic chemistry research group of Dr. P.L. Robinson.
Bartlett's career began in 1958, when he was appointed a lecturer in chemistry at the University of British Columbia in Vancouver, BC, Canada, where he would ultimately reach the rank of full professor. During his time at the university he made his discovery that noble gases were indeed reactive enough to form bonds. He remained there until 1966, when he moved to Princeton University as a professor of chemistry and a member of the research staff at Bell Laboratories. He then went on to join the chemistry department at the University of California, Berkeley in 1969 as a professor of chemistry until his retirement in 1993. He was also a staff scientist at Lawrence Berkeley National Laboratory from 1969 to 1999. In 2000, he became a naturalized citizen of the United States. He died on 5 August 2008 of a ruptured aortic aneurysm. He lived with his wife Christina Bartlett until his death. They had four children.
Research
Bartlett's main specialty was the chemistry of fluorine and of compounds containing fluorine. In 1962, Bartlett prepared the first noble gas compound, xenon hexafluoroplatinate, Xe+[PtF6]−. This contradicted established models of the nature of valency, as it was believed that all noble gases were entirely inert to chemical combination. His discovery incited other chemists to discover several other fluorides of xenon: XeF2, XeF4, and XeF6.
Honors
In 1968 he was awarded the Elliott Cresson Medal. In 1973, he was made a Fellow of the Royal Society (United Kingdom). In 1976 he received the Welch Award in Chemistry for his synthesis of chemical compounds of noble gases and the consequent opening of broad new fields of research in the inorganic chemistry. He was elected a Fellow of the American Academy of Arts and Sciences in 1977. In 1979, he was honored as a Foreign Associate of the National Academy of Sciences. He was awarded the prestigious Davy Medal in 2002 for his discovery that the noble gases were not that noble after all. Previous recipients of the Davy Medal had included people as diverse as Robert Wilhelm Bunsen, the inventor of the Bunsen burner, and Albert Ladenburg, who suggested the existence of the compound prismane. In 2006, his research into the reactivity of noble gases was designated jointly by the American Chemical Society and the Canadian Society for Chemistry (CSC) as an International Historic Chemical Landmark at the University of British Columbia in recognition of its significance, "fundamental to the scientific understanding of the chemical bond." Bartlett was nominated for a Nobel Prize in Chemistry every year between 1963 and 1966 but did not get the Prize.
Hospitalization
In January 1963, Bartlett and his graduate student, P. R. Rao, were hospitalized after an explosion in the laboratory. As they looked at what they thought might be the first crystals of XeF2, the compound exploded, getting shards of glass in the eyes of both men. According to Bartlett, he thought that the compound may have contained water molecules, and he and Rao took off their glasses to get a better look. They were both taken to the hospital for four weeks, and Bartlett was left with damaged vision in one eye. The last piece of glass from this accident was removed 27 years later.
References
Further reading
External links
1932 births
2008 deaths
Alumni of King's College, Newcastle
Deaths from aortic aneurysm
English chemists
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
Inorganic chemists
Members of the French Academy of Sciences
Foreign associates of the National Academy of Sciences
People educated at Heaton Grammar School
Princeton University faculty
Academic staff of the University of British Columbia Faculty of Science
UC Berkeley College of Chemistry faculty
Members of the Göttingen Academy of Sciences and Humanities | Neil Bartlett (chemist) | Chemistry | 966 |
92,665 | https://en.wikipedia.org/wiki/K%CA%BCin | A Kʼin is a part of the ancient Maya Long Count Calendar system which corresponds to one day. It is the smallest unit of Maya time to be counted as part of the long count and it usually appears as the last glyph in a long count date. Such long count dates can be seen on many inscriptions in the Mayan area at the start of the initial series which usually occurs at the beginning of an inscription.
"Kʼin" means "sun" in the Mayan language.
References
Maya calendars | Kʼin | Physics | 109 |
72,223,183 | https://en.wikipedia.org/wiki/Rockingham%20Kiln | The Rockingham, or Waterloo, Kiln in Swinton, South Yorkshire, England, is a pottery kiln dating from 1815. It formed part of the production centre for the Rockingham Pottery which, in the early 19th century, produced highly-decorative Rococo porcelain. The pottery failed in the mid-19th century, and the kiln is one of the few remaining elements of the Rockingham manufactory. It is a Grade II* listed building and forms part of the Rockingham Works Scheduled monument. The kiln is currently on the Historic England Heritage at Risk Register.
History
The original factory on the Swinton site produced simple earthenware pottery. The first recorded operator was a Joseph Flint, who in the 1740s was renting the site from the Marquess of Rockingham. A partnership with the Leeds Pottery failed and was dissolved by 1806. The subsequent owners, the Brameld family, built the Rockingham Kiln, and other structures on the site, in 1815. The date, the year of the Battle of Waterloo, led to the kiln's alternative name, the Waterloo Kiln. Despite the Brameld's investigations into the production of high-quality porcelain, the venture continued to be unsuccessful and the firm was extricated from a further bankruptcy in 1826 only by the intervention of William Fitzwilliam, 4th Earl Fitzwilliam, who had inherited the Wentworth Woodhouse estate from his uncle, the second Marquess of Rockingham.
The Earl's patronage, permitting the use of the Rockingham name and family crest, together with providing direct financial support, saw the Rockingham Pottery develop into a major producer of elaborate rococo-style porcelain, which enjoyed royal endorsement at home and considerable sales abroad. The factory produced major pieces including a full dessert service for William IV which took eight years to complete. Ruth Harman, in her 2017 revised volume, Yorkshire West Riding: Sheffield and the South, of the Pevsner Buildings of England series, notes that "perfection was their undoing" and by 1842 the Rockingham firm was again bankrupt and the site was closed.
The Pottery Ponds site is administered by Rotherham Museums. As of November 2022, the kiln is on Historic England's Heritage at Risk Register. Recent interest in the Rockingham Works has seen the erection of a commemorative sculpture in Swinton in 2003, and a community heritage project at the site in 2021, directed by the artist Carlos Cortes.
Architecture and description
The Rockingham Kiln is believed to be the only surviving such pottery kiln in Yorkshire, and one of the few remaining in England. The kiln is bottle-shaped and is constructed in English bond red brick. Harman records that the structure is more accurately described as a "bottle-shaped brick oven [containing] a kiln". The kiln is a Grade II* listed building and forms part of the Rockingham Works Scheduled monument.
Notes
References
Sources
Grade II* listed buildings in South Yorkshire
Swinton, South Yorkshire
Buildings and structures in the Metropolitan Borough of Rotherham
British porcelain
English pottery
Ceramics manufacturers of England
Structures on the Heritage at Risk register
Kilns | Rockingham Kiln | Chemistry,Engineering | 634 |
37,113,247 | https://en.wikipedia.org/wiki/Empagliflozin | Empagliflozin, sold under the brand name Jardiance, among others, is an antidiabetic medication used to improve glucose control in people with type 2 diabetes. It is taken by mouth.
Common side effects include hyperventilation, anorexia, abdominal pain, nausea, vomiting, lethargy, mental status changes, hypotension, acute kidney injury, and vaginal yeast infections. Rarer but more serious side effects include a skin infection of the groin called Fournier's gangrene and a form of diabetic ketoacidosis with normal blood sugar levels. Use during pregnancy or breastfeeding is not recommended. Empagliflozin sometimes causes a transient decline in kidney function, and on rare occasions causes acute kidney injury, so use should be monitored in those with kidney dysfunction. However, some trials have indicated that empagliflozin can be used in people with an eGFR as low as 20 mL/min/1.73 m², without increasing adverse kidney outcomes.
The use of empagliflozin has been shown to improve outcomes in people with established cardiovascular disease. There is evidence from high quality studies that empagliflozin can also help to slow the rate of kidney function decline. Irrespective of diabetes status, benefit was observed in those with mild, moderate or severe loss of kidney function. People started on empagliflozin may first see a decrease in kidney function before their glomerular filtration rate stabilises. Greatest benefit was demonstrated in those who had severe loss of kidney function, higher risk of kidney function worsening and background of diabetes.
Empagliflozin is an inhibitor of the sodium glucose co-transporter-2 (SGLT-2), and works by increasing sugar loss in urine.
Empagliflozin was approved for medical use in the United States and in the European Union in 2014. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 56th most commonly prescribed medication in the United States, with more than 12 million prescriptions. It has received approval as a generic medication from the US Food and Drug Administration (FDA).
Medical uses
In the United States, empagliflozin is indicated to reduce the risk of cardiovascular death and hospitalization for heart failure in adults with heart failure; to reduce the risk of sustained decline in eGFR, end-stage kidney disease, cardiovascular death, and hospitalization in adults with chronic kidney disease at risk of progression; to reduce the risk of cardiovascular death in adults with type 2 diabetes and established cardiovascular disease; and as an adjunct to diet and exercise to improve glycemic control in people aged ten years and older with type 2 diabetes.
In the European Union, empagliflozin is indicated in people aged ten years and older for the treatment of insufficiently controlled type 2 diabetes as an adjunct to diet and exercise; as monotherapy when metformin is considered inappropriate due to intolerance; and in addition to other medicinal products for the treatment of diabetes. It is indicated in adults for the treatment of symptomatic chronic heart failure; and it is indicated in adults for the treatment of chronic kidney disease.
Empagliflozin lowers risk of hospitalization and death in people with reduced heart function, when added to standard heart failure treatment with or without type 2 diabetes. It is indicated in adults with type 2 diabetes and established cardiovascular disease to reduce the risk of cardiovascular death; and as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes.
In June 2023, the US Food and Drug Administration (FDA) expanded the indication, as an addition to diet and exercise, to improve blood sugar control in children 10 years and older with type 2 diabetes.
Contraindications
History of a severe allergic reaction to empagliflozin
End-stage kidney disease
Diabetic ketoacidosis
Side effects
Common
Empagliflozin increases the risk of genital fungal infections. The risk is highest in people with a prior history of genital fungal infections.
Empagliflozin has been thought to be associated with increased risk of urinary tract infections. Reviews of clinical trials have shown there is no significant risk of developing urinary tract infections while taking empagliflozin when compared to placebo or other diabetic medications.
Empagliflozin reduces systolic and diastolic blood pressure and can increase the risk of low blood pressure, which can cause fainting and/or falls. The risk is higher in older people, people taking diuretics, and people with reduced kidney function.
Slight increases in low-density lipoprotein (LDL) cholesterol can be seen with empagliflozin, in the range of 2–4% from baseline.
Serious
Diabetic ketoacidosis, a rare but potentially life-threatening condition, may occur more commonly with empagliflozin and other SGLT-2 inhibitors. While diabetic ketoacidosis is usually associated with elevated blood glucose levels, in people taking SGLT-2 inhibitors diabetic ketoacidosis may be seen with uncharacteristically normal blood glucose levels, a phenomenon called euglycemic diabetic ketoacidosis. The absence of elevated blood glucose levels in people on an SGLT-2 inhibitor may make it more difficult to diagnose diabetic ketoacidosis. The risk of empagliflozin-associated euglycemic diabetic ketoacidosis may be higher in the setting of illness, dehydration, surgery, and/or alcohol consumption. It is also seen in people with type 1 diabetes who take empagliflozin, which notably is an unapproved or "off-label" use of the medication. To lessen the risk of developing ketoacidosis (a serious condition in which the body produces high levels of blood acids called ketones) after surgery, the FDA has approved changes to the prescribing information for SGLT2 inhibitor diabetes medicines to recommend they be stopped temporarily before scheduled surgery. Empagliflozin should be stopped at least three days before scheduled surgery. Symptoms of diabetic ketoacidosis include nausea, vomiting, abdominal pain, tiredness, and trouble breathing.
Fournier's gangrene, a rare but serious infection of the groin, occurs more commonly in people taking empagliflozin and other SGLT-2 inhibitors. Symptoms include feverishness, a general sense of malaise, and pain or swelling around the genitals or in the skin behind them. The infection progresses quickly and urgent medical attention is recommended.
Empagliflozin can increase the risk of low blood sugar when it is used together with a sulfonylurea or insulin. When used by itself or in addition to metformin it does not appear to increase the risk of hypoglycemia.
Mechanism of action
Empagliflozin is an inhibitor of the sodium glucose co-transporter-2 (SGLT-2), which is found almost exclusively in the proximal tubules of nephronic components in the kidneys. SGLT-2 accounts for about 90 percent of glucose reabsorption into the blood. Blocking SGLT-2 reduces blood glucose by blocking glucose reabsorption in the kidney and thereby excreting glucose (i.e., blood sugar) via the urine. Of all the SGLT-2 inhibitors currently available, empagliflozin has the highest degree of selectivity for SGLT-2 over SGLT-1, SGLT-4, SGLT-5 and SGLT-6.
History
It was developed by Boehringer Ingelheim and is co-marketed by Eli Lilly and Company. It is also available as the fixed-dose combinations empagliflozin/linagliptin, empagliflozin/metformin, and empagliflozin/linagliptin/metformin.
For cardiovascular death, the FDA based its decision on a postmarketing study it required when it approved empagliflozin in 2014, as an adjunct to diet and exercise to improve glycemic control in adults with type 2 diabetes. Empagliflozin was studied in a postmarket clinical trial of more than 7,000 participants with type 2 diabetes and cardiovascular disease. In the trial, empagliflozin was shown to reduce the risk of cardiovascular death compared to a placebo when added to standard of care therapies for diabetes and atherosclerotic cardiovascular disease.
For heart failure, the safety and effectiveness of empagliflozin were evaluated by the FDA as an adjunct to standard of care therapy in a randomized, double-blind, international trial comparing 2,997 participants who received empagliflozin, 10 mg, once daily to 2,991 participants who received the placebo. The main efficacy measurement was the time to death from cardiovascular causes or need to be hospitalized for heart failure. Of the individuals who received empagliflozin for an average of about two years, 14% died from cardiovascular causes or were hospitalized for heart failure, compared to 17% of the participants who received the placebo. This benefit was mostly attributable to fewer participants being hospitalized for heart failure.
The FDA granted the application for empagliflozin priority review and granted the approval of Jardiance to Boehringer Ingelheim.
Legal status
As of May 2013, Boehringer and Lilly had submitted applications for marketing approval to the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA). Empagliflozin was approved in the European Union in May 2014, and was approved in the United States in August 2014. The FDA required four postmarketing studies: a cardiovascular outcomes trial, two studies in children, and a toxicity study in animals related to the pediatric trials.
Research
A meta-analysis of short-term randomized controlled trials has shown similar efficacy on glycemic control between empagliflozin 10 mg and 25 mg in people with type 2 diabetes. While there may be a higher reduction in HbA1c with higher doses, this difference is more clinically significant when the patients' baseline HbA1c is ≥ 8.5%.
Weight and blood pressure
Empagliflozin causes moderate reductions in blood pressure and body weight. These effects are likely due to the excretion of glucose in the urine and a slight increase in urinary sodium excretion.
In clinical trials, participants with type 2 diabetes taking empagliflozin with other diabetic medications lost an average of 2% of their baseline body weight. A higher percentage of people taking empagliflozin achieved weight loss greater than 5% from their baseline, which has been associated with improved glucose control. The same extent of weight loss was also observed in a study with heart failure patients taking empagliflozin.
Empagliflozin has been shown to reduce systolic blood pressure by 3 to 5 millimeters of mercury (mmHg) without changes in pulse rate. A greater percentage of people with uncontrolled blood pressure at baseline, achieved controlled blood pressure (i.e. systolic blood pressure <130 mmHg and diastolic blood pressure <80 mmHg) after taking empagliflozin at 24 weeks. The effects on blood pressure and body weight are generally viewed as favorable, as many people with type 2 diabetes have high blood pressure or are overweight or obese.
References
Drugs developed by Boehringer Ingelheim
Drugs developed by Eli Lilly and Company
Ethers
Glucosides
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
SGLT2 inhibitors | Empagliflozin | Chemistry | 2,492 |
3,492,608 | https://en.wikipedia.org/wiki/Poly-Bernoulli%20number | In mathematics, poly-Bernoulli numbers, denoted as $B_n^{(k)}$, were defined by M. Kaneko as
$$\frac{\mathrm{Li}_k(1-e^{-x})}{1-e^{-x}} = \sum_{n=0}^{\infty} B_n^{(k)} \frac{x^n}{n!},$$
where Li is the polylogarithm. The $B_n^{(1)}$ are the usual Bernoulli numbers (with the convention $B_1 = \tfrac{1}{2}$).
Moreover, the generalization of poly-Bernoulli numbers with parameters a, b, c is defined as follows:
$$\frac{\mathrm{Li}_k(1-(ab)^{-x})}{b^x - a^{-x}}\, c^{xt} = \sum_{n=0}^{\infty} B_n^{(k)}(t; a, b, c) \frac{x^n}{n!},$$
where Li is the polylogarithm.
Kaneko also gave two combinatorial formulas:
$$B_n^{(-k)} = \sum_{m=0}^{n} (-1)^{m+n}\, m!\, S(n,m)\, (m+1)^k,$$
$$B_n^{(-k)} = \sum_{j=0}^{\min(n,k)} (j!)^2\, S(n+1,j+1)\, S(k+1,j+1),$$
where $S(n,m)$ is the number of ways to partition a size $n$ set into $m$ non-empty subsets (the Stirling number of the second kind).
A combinatorial interpretation is that the poly-Bernoulli numbers of negative index enumerate the set of $n$ by $k$ (0,1)-matrices uniquely reconstructible from their row and column sums. Also, $B_n^{(-k)}$ is the number of open tours by a biased rook on an $n \times k$ board (see A329718 for definition).
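As an illustration, the second closed formula above lends itself to direct computation. The following Java sketch (class and method names are illustrative) evaluates $B_n^{(-k)}$ exactly with BigInteger arithmetic, building Stirling numbers of the second kind from the standard recurrence $S(n,m) = m\,S(n-1,m) + S(n-1,m-1)$; for example, it returns 14 for $B_2^{(-2)}$.

    import java.math.BigInteger;

    public class PolyBernoulli {
        // Stirling number of the second kind S(n, m) via the standard recurrence
        static BigInteger stirling2(int n, int m) {
            BigInteger[][] s = new BigInteger[n + 1][m + 1];
            for (int i = 0; i <= n; i++)
                for (int j = 0; j <= m; j++)
                    s[i][j] = (i == 0 && j == 0) ? BigInteger.ONE : BigInteger.ZERO;
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= Math.min(i, m); j++)
                    s[i][j] = BigInteger.valueOf(j).multiply(s[i - 1][j]).add(s[i - 1][j - 1]);
            return s[n][m];
        }

        static BigInteger factorial(int n) {
            BigInteger f = BigInteger.ONE;
            for (int i = 2; i <= n; i++) f = f.multiply(BigInteger.valueOf(i));
            return f;
        }

        // B_n^{(-k)} = sum_{j=0}^{min(n,k)} (j!)^2 S(n+1, j+1) S(k+1, j+1)
        static BigInteger polyBernoulliNeg(int n, int k) {
            BigInteger sum = BigInteger.ZERO;
            for (int j = 0; j <= Math.min(n, k); j++) {
                BigInteger jf = factorial(j);
                sum = sum.add(jf.multiply(jf)
                        .multiply(stirling2(n + 1, j + 1))
                        .multiply(stirling2(k + 1, j + 1)));
            }
            return sum;
        }

        public static void main(String[] args) {
            System.out.println(polyBernoulliNeg(2, 2)); // prints 14
        }
    }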
The Poly-Bernoulli number satisfies the following asymptotic:
For a positive integer n and a prime number p, the poly-Bernoulli numbers satisfy
$$B_n^{(-p)} \equiv 2^n \pmod{p},$$
which can be seen as an analog of Fermat's little theorem. Further, the equation
$$B_x^{(-n)} + B_y^{(-n)} = B_z^{(-n)}$$
has no solution for integers x, y, z, n > 2; an analog of Fermat's Last Theorem.
Moreover, there is an analogue of Poly-Bernoulli numbers (like Bernoulli numbers and Euler numbers) which is known as Poly-Euler numbers.
See also
Bernoulli numbers
Stirling numbers
Gregory coefficients
Bernoulli polynomials
Bernoulli polynomials of the second kind
Stirling polynomials
References
.
.
.
.
Integer sequences
Enumerative combinatorics | Poly-Bernoulli number | Mathematics | 333 |
3,702,241 | https://en.wikipedia.org/wiki/Plantlet | A plantlet is a young or small plant, produced on the leaf margins or the aerial stems of another plant.
Many plants such as spider plants naturally create stolons with plantlets on the ends as a form of asexual reproduction. Vegetative propagules or clippings of mature plants may form plantlets.
An example is mother of thousands. Many plants reproduce by throwing out long shoots or runners that can grow into new plants. Mother of thousands appears to have lost the ability to reproduce sexually and make seeds, but transferred at least part of the embryo-making process to the leaves to make plantlets.
See also
Apomixis
Plant propagation
Plant reproduction
References
Plants | Plantlet | Biology | 140 |
18,862,994 | https://en.wikipedia.org/wiki/Science%20Communication%20Observatory | The Science Communication Observatory (OCC) is a Special Research Centre attached to the Department of Communication of the Pompeu Fabra University in Barcelona, Spain, set up in 1994. This centre is specialized in the study and analysis of the transmission of scientific, medical, environmental and technological knowledge to society. The journalist Vladimir de Semir, associate professor of Science Journalism at the Pompeu Fabra University, was the founder and is the current director of the centre. A multidisciplinary team of researchers coming from different backgrounds (i.e. journalists, biologists, physicians, linguists, historians, etc.) is working on various lines of research: science communication; popularization of sciences, risk and crisis communication; science communication and knowledge representation; journalism specialized in science and technology; scientific discourse analysis; health and medicine in the daily press; relationships between science journals and mass media; history of science communication; public understanding of science; gender and science in the mass media, promotion of scientific vocations, science museology, etc.
PCST Network & Academy
The Science Communication Observatory is linked to the international network on Public Communication of Science & Technology (PCST), which includes individuals from around the world who are active in producing and studying PCST through science journalism, science museums and science centers, academic researchers in social and experimental sciences, scientists who deal with the public, public information officers for scientific institutions and others related to science in society issues. The PCST Network sponsors international conferences, electronic discussions, and other activities to foster dialogue among the different groups of people interested in PCST, leading to cross-fertilization across professional, cultural, international, and disciplinary boundaries. The PCST Network seeks to promote new ideas, methods, intellectual and practical questions and perspectives.
The first conference held by the PCST Network was at Poitiers, France in 1989. Since then biennial conferences have been held in Madrid (1991), Montreal (1994), Melbourne (1996), Berlin (1998), Geneva (2000), Cape Town (2002), Barcelona (2004), Seoul (2006), Malmo/Copenhagen (2008) and New Delhi (2010). The 2012 conference is scheduled for Florence.
With events in Melbourne, Beijing, Seoul and Cape Town, the Network expanded from its European origins to become a truly international network. The Scientific Committee managing the organisation is drawn from 19 different countries ranging across the globe. The Committee is chaired by Mr Toss Gascoigne (Australia).
The Science Communication Observatory hosts the PCST Academy. The PCST Academy is responsible for the creation of the documentary basis of the Public Communication of Science and Technology network (PCST) and its main task is the selection and organized collection of articles, reports and resources on particular topics in the field of communication and social understanding of sciences. As stated by the Chair of the Network from 2004 to 2006, Vladimir de Semir, the Academy looks for the necessary resources at international level to guarantee the access to the network of representatives from those countries that currently have to face more difficulties: “The main aim is to represent and include the multiplicity of identities existing in the world, because the study and practice of science communication should respect the different cultural contexts and integrate the knowledge coming from all continents.”
Teaching, publishing and collaborative projects
The Science Communication Observatory has run a Master in Science, Medical and Environmental Communication in Barcelona (Spain) since 1995 and a Diploma in Science Communication in Buenos Aires (Argentina) since 2008, and offers other courses and workshops about science communication and the popularization of science. The Science Communication Observatory also publishes Quark, a journal about “Science, Medicine, Communication and Culture”, and carries out research and analysis in the Science in Society field, working with other European institutions and academic groups on several European projects such as:
• PLACES - Platform of Local Authorities and Communicators Engaged in Science, a four-year European project establishing and developing the concept of the European City of Scientific Culture. The project focuses on developing and strengthening City Partnerships, bringing together 67 science centres, museums, festivals and events, each partnering with local authorities, and 10 European regional networks. The project facilitates cooperation among these alliances to structure their science communication activities, sharing tools, resources and results.
• KiiCS - Knowledge Incubation in Innovation and Creation for Science, the project aims to build bridges between arts, science and technology by giving evidence of the positive impacts of their interaction for creativity as well as for triggering interest in science. The project will stimulate co-creation processes involving creators and scientists, and nurture youth interest in science in a creative way. (KiiCS starts 15 February 2012)
• MASIS - Monitoring Policy and Research Activities on Science in Society in Europe, project to develop structural links and interaction between scientists, policy-makers and society at large, therefore an instrumental tool in relation to stimulating further cooperation in Europe and reducing fragmentation through the identification of common resources, common trends, common interests, and common challenges.
• ESCITY - Europe, Science and the City: promoting scientific culture at local level, an initiative to create the core of a network for the exchange of information and best practices in the area of promoting scientific culture, with two particular characteristics; focusing on local and regional action and emplacing strategies that situate the promotion of scientific culture under the umbrella of cultural policies.
• ESConet - European Science Communication Network, which brings together experienced science communication lecturers, researchers and practitioners from across Europe to train natural scientists and technologists to communicate effectively with the media, policy-makers and the general public. As well as delivering these core communication skills, ESConet workshops encourage scientists to reflect critically on the social, cultural, and ethical dimensions of their scientific work.
• E-KNOWNET - Network for ICT-enabled non-formal science learning, a project supported by the Lifelong Learning Programme of the European Commission to develop an innovative and viable ICT-enabled mechanism for fast and efficient sharing of new knowledge among larger non-expert segments of society, in forms suitable for non-formal learning.
• STEPE - Sensitive Technologies and European Public Ethics project is innovative in contributing to the early identification of potentially controversial technological developments and related public ethics, by systematically considering both the view of key stakeholders in technological, political and societal life and the perceptions of European citizens in 25 European member states, thereby contextualising the findings by a systematic analysis of policy developments both on national and European levels. The interdisciplinary and multi-method approach will aim at establishing an integrated European Map of Public Ethics. It is the aim to stimulate new, empirically grounded, thinking on public ethics as a contribution to wider debates and policy making on responsible technological innovation. As a key data source, the proposal is based on the triennial Eurobarometer survey on the Biotechnology and the Life Sciences.
• Benchmarking the Promotion of RTD culture and Public Understanding of Science to establish the current state of RTD culture in Member States, to provide a survey of the ongoing activities, and to recommend measures to be followed to improve the present situation. In order to clarify the meaning behind the vocabulary used in different Member States, our introduction also contains an analysis of the concepts behind “Public Understanding of Science”, “Public Understanding of Science and the Humanities (Wissenschaft)” and “Culture Scientifique”.
Scientific Knowledge and Cultural Diversity
The Science Communication Observatory was responsible for the organization of the 8th International Conference of the PCST Network in Barcelona (Spain), June 2004. The main theme of the conference was "Scientific Knowledge and Cultural Diversity", which opened up a field to debate on the global discourse of science in a range of local culture and knowledge environments. When talking about various cultures we are referring to the different groups sharing the same language, traditions, ideology or religion, inhabiting a specific geographical environment, having the same job, or being a man or a woman, a young person, a child, an elder… All this rich cultural diversity also leaves its stamp on scientific knowledge, in its creation and application as well as in the whole process of public communication of science and technology.
The main theme of "Scientific Knowledge and Cultural Diversity", included 3 subthemes or discussion subjects.
Native Knowledge & Modern Science
Cultural diversity. Traditional knowledge. Local wisdom. Regional identity and globalization. Indigenous knowledge systems. Citizen participation in scientific decisions. Popular culture and scientific culture. Possibilities of native knowledge facing new technologies. Science ethics and beliefs. The influence of religion or morality on knowledge construction. Cohabitation between medicines with different evaluation systems. Knowledge, religion and beliefs. Parasciences. Science as a universal knowledge. Intellectual property. Gender and cultural approach. New models, trends and concepts in PCST.
Science Communication: Historical Perspectives And New Trends
Influences of historical processes on science communication. The greatest science communicators. The role of the mass media. The role of science centres and museums. Main initiatives in the promotion of scientific culture. Results analysis methodology. International networks. New models, trends and concepts in PCST.
Science Communication & Social Participation
Peripheral science and science in the outskirts. Science culture and cooperation with illiterate populations and marginal groups. Social inclusion. Public engagement with science policy (consensus conferences, citizen juries, deliberative polling). Science vocations in the changing world. Media impact on science opinion. Science festivals. Ethics of science communication. Public policies in scientific culture. Citizen participation in scientific decisions. Informal science education. Science centers and museums. Science communication training. New models, trends and concepts in PCST.
European Forum on Science Journalism
In December 2007, the Science Communication Observatory organized with the European Commission the European Forum on Science Journalism (EFSJ) where leading science journalists and editors of national newspapers and specialised science publications from across Europe and the world met in Barcelona to discuss the challenges in reporting on science, the impact of new technologies on the profession and importance of linking science to society and everyday life together with leading scientists and top science communication professionals from across Europe, the US, Canada, China and Australia. A Special Eurobarometer on scientific research in the media and a European Guide to Science Journalism Training were presented in this forum.
How to strengthen science coverage in the European press? How to convince editors to run science stories? How to assess the trustworthiness of scientific research? How to explain science in an understandable fashion? How to stimulate public interest in science news?... These were among the key questions addressed at the first European Forum on Science Journalism.
Media for Science Forum
In May 2010, the Science Communication Observatory was member of the scientific committee of the Media for Science Forum organised by the Spanish Foundation for Science and Technology with the collaboration of the European Commission in the context of the Spanish Presidency of Europe 2010.
References
External links
European Guide to Science Journalism Training - Second Edition, August 2008
Science communication
Pompeu Fabra University
Environmental communication | Science Communication Observatory | Environmental_science | 2,208 |
49,863,282 | https://en.wikipedia.org/wiki/Diversity%20in%20computing | Diversity in computing refers to the representation and inclusion of underrepresented groups, such as women, people of color, individuals with disabilities, and LGBTQ+ individuals, in the field of computing. The computing sector, like other STEM fields, lacks diversity in the United States.
Despite constituting around half of the U.S. population, women are still not proportionally represented in the computing sector. Racial minorities, such as African Americans, Hispanics, and American Indians or Alaska Natives, also remain significantly underrepresented in the computing sector.
Two issues that cause the lack of diversity are:
1. Pipeline: the lack of early access to resources
2. Culture: exclusivity and discrimination in the workplace
The lack of diversity can also be attributed to limited early exposure to resources, as students who do not already have computer skills upon entering college are at a disadvantage in computing majors. There is also the issue of discrimination and harassment in the workplace, which affects all underrepresented groups. For example, studies have shown that 50% of women reported experiencing sexual harassment in tech companies.
As technology becomes omnipresent, diversity in the tech field could help institutions reduce inequalities in society. To make the field more diverse, organizations need to address both issues. Multiple organizations and initiatives, such as EarSketch and Black Girls Code, are working towards increasing diversity in computing by providing resources, mentorship, and support, and by fostering a sense of belonging for minority groups. Institutions are also implementing strategies such as Summer Bridge programs, tutoring, academic advising, financial support, and curriculum reform to support diversity in STEM. Along with institutions, educators can help cultivate a sense of confidence in underrepresented students interested in pursuing computing, for example by emphasizing a growth mindset, rejecting the idea that some individuals have innate talent, and establishing inclusive learning environments.
Statistics
In 2019, women represented 50.8% of the total population of the United States, but made up only 25.6% of computer and mathematical occupations and 27% of computer and information systems manager occupations. African Americans represented 13.4% of the population, but held 8.4% of computer and mathematical occupations. Hispanic or Latino people made up 18.3% of the population, but constituted only 7.5% of the people in these jobs. Meanwhile, white people, standing at 60.4%-76.5% of the population of the United States, represented 67% of computer and mathematical occupations and 77% of computer and information systems manager occupations. Asians, representing 5.9% of the population, held 22% of computer and mathematical jobs and were 14.3% of all computer and information systems managers.
In 2021, women made up 51% of the total population aged 18 to 74 years old, yet only accounted for 35% of STEM occupations. Additionally, while individuals with disabilities made up 9% of the population, they accounted for 3% of STEM occupations. Hispanics, Blacks, and American Indians or Alaska Natives collectively only accounted for 24% of STEM occupations in 2021 while making up 31% of the total population.
In addition to occupational disparities, there are differences in representation in postsecondary science and engineering education. Women earning associate's or bachelor's degrees in science and engineering accounted for approximately half of the total number of degrees in 2020, which was proportional to their share of the population for the age range of 18 – 34 years. In contrast, women only accounted for 46% of science and engineering master's degrees and 41% of science and engineering doctoral degrees. Hispanics, Blacks, and American Indians or Alaska Natives as a group face a similar gap between their share of the population and proportion of degrees earned, with them collectively making up 37% of the college age population in 2021, yet only 26% of bachelor's degrees in science and engineering, 24% of master's degrees in science and engineering, and 16% of doctoral degrees in science and engineering awarded in 2020. On top of the degree gap, data indicates that only 38% of women who major in computer science actually end up working in the computer science field, in contrast to 53% of men.
A 2021 report indicates that approximately 57% of women working in tech responded that they have experienced gender discrimination in the workplace, in contrast with men, of whom approximately only 10% reported experiencing gender discrimination. Additionally, 48% of women reported experiencing discrimination over their technical abilities, in contrast with only 24% of men reporting the same discrimination. The report also found that 48% of Black respondents indicated that they experienced racial discrimination in the tech workplace. Hispanic respondents followed at 30%, Asian/Pacific Islanders at 25%, Asian Indians at 23%, and White respondents at 9%.
In a 2022 survey available on Stack Overflow, approximately 2% of all respondents identified either "in their own words" or "transgender." On top of that, approximately 16% of all respondents identified using an option other than "Straight/Heterosexual." Additionally, 10.6% of respondents identified as having a concentration and/or memory disorder, 10.3% identified as having an anxiety disorder, and 9.7% as having a mood or emotional disorder.
When it comes to career mobility, a 2022 report found that there is a gap in promotions given in the tech industry to women in comparison to men. The report found that for every 100 men promoted to manager, only 52 women were given the same promotion.
Factors contributing to underrepresentation
There are two reported reasons for the lack of participation of women and minorities in the computing sector. The first reason is the lack of early exposure to resources like computers, internet connections and experiences such as computer courses. Research shows that the digital divide acts as a factor; students who do not already have computer skills upon entering college are at a disadvantage in computing majors, and access to computers is influenced by demographics, such as ethnic background. The problem of limited resources is compounded by a lack of exposure to courses and information that can lead to a successful computing career. A survey of students at University of Maryland Eastern Shore and Howard University, two historically black universities, found that the majority of students were not "counseled about computer related careers" either before or during college. The same study (this time only surveying UMES students) found that fewer women than men had learned about computers and programming in high school. The researchers have concluded that these factors could contribute to lower numbers of women and minorities choosing to pursue computing degrees.
Another reported issue that leads to the homogeneity of the computing sector is the cultural issue of discrimination at the workplace and how minorities are treated. For participants to excel in a tech-related course or career, their sense of belonging matters more than pre-gained knowledge. That was reflected in “The Great Resignation” that took place in the US during the COVID-19 pandemic. In a survey of 2,030 workers between the ages of 18 and 28 conducted in July 2021, 50% said they had left or wanted to leave their tech or IT job “because the company culture made them feel unwelcome or uncomfortable,” with a higher percentage of women and Asian, Black, and Hispanic respondents each saying they had such an experience. In most cases, the workplaces not only lack a sense of belonging but are also unsafe. Research conducted by Dice, a tech career hub, showed that more than 50% of women faced sexual harassment in tech companies. A pilot program that was run to understand the different elements that affect minorities during a STEM course showed that increased mentorship and support were important factors in the completion of the course.
One of the biggest factors halting the increase of diversity in STEM education is awareness. Many experts feel that increasing awareness is a strong first step towards enacting change at a higher level. One of the most common outreach methods is on-campus workshops at colleges. These workshops are effective because they instill awareness in people who are just entering and learning about the field, fostering inclusivity. Students leaving a workshop at a West Virginia university reported that they had been unaware of the problems facing diverse people in STEM, particularly people with disabilities.
Effects on different groups
Black People
Gaming
Black gamers are put in a unique position when entering gaming spaces: they are often represented incorrectly while being constantly at risk of harassment for a wide variety of reasons. When they are represented, which is less often than in the real world, it typically comes at the price of being stereotyped into two categories: the athlete, the criminal, or both. If they decide to call out these issues, there is typically heavy backlash for their actions. One such example comes from The Sims community. When its black player base called out issues with the representation of various hair textures, entered Sims community spaces, or shared storylines about black Sims, they typically faced racial attacks, microaggressions, or saw storylines of characters that looked like them based on prevalent stereotypes of black people. The solution to these issues did not come from the creators, but rather from groups of black Sims players coming together to make spaces of their own. Moreover, black content creators occupy a unique space within the gaming world: they need to maintain a level of blackness that allows people to be comfortable watching their content, but in creating who they are as creators, they inherently create openings for racialized comments that fill their comment sections. Whenever they do ask for bigger changes, companies take a race-blind approach and ignore the problems within the communities they allow to exist. When black people are included, it is mostly because the games being played are already embedded in African American culture, and such streams are often considered "diversity nights" for black creators.
Artificial Intelligence
The issues that lie dormant within the training data of large language models such as ChatGPT can be seen in how they treat Black people. Former Google AI ethicist Timnit Gebru's time at Google ended over a dispute about a paper that described several concerns raised by AI ethicists: the carbon impact of training, which could soon become a serious problem; the risk that ever-greater datasets would absorb the insensitive vocabulary of the earlier internet; and the amount of effort required to train the model again if something were to fail. There has already been clear evidence that AI models hold latent biases, such as claiming that white men are the best scientists. When this was discovered, OpenAI quickly created a block for questions that directly pertained to race, rather than fixing the issue at hand. The idea of beauty shows the same pattern: when creating a supposedly unbiased judge for a beauty contest, Beauty.AI asked for submissions from throughout the world, and of the contest's 44 winners, 38 were white and only one finalist had a visibly darker skin tone. The submissions were also used to glean information about health factors affecting the users, and placing "healthy" people further to the front implied to the AI model that those with darker skin tones are generally less healthy. Within both of these models, the training data inherently presents biases against people of color. A lack of representation within the spaces developing these models creates an underlying lack of consideration for more people to be included: if initial testing is done only on coworkers, it is possible that these models are never tested on all scenarios.
Surveillance
Black and Latinx communities have frequently been the targets of new surveillance and risk-assessment technologies that have brought more arrests to these communities. Police have utilized tools to target communities of color for decades. One of the earliest examples of this occurring within the borders of the United States itself came directly after the attacks on the Twin Towers, when the New York Police Department used community leaders, taxi drivers, and extensive databases that managed to find ways of connecting people together in order to find more potential terrorists living within the United States. This has mostly been done through a program called CompStat, and many precincts have been encouraged to do the same because of its ability to find high-crime areas and put more police where they believe crime will happen, leading to even more arrests. In time, this has created systems in which entire states have attempted to create gang databases based on risk assessments, in turn creating situations where children less than a year old were recorded as "self-identified gang members". This creates a sense of both confusion and distrust among those within these communities, and in turn can lead to even more violence and arrests. These programs have been used throughout the United States, in places such as Boston, Massachusetts; Salinas, California; and, most clearly, Camden, New Jersey. Outside of Boston, most of these places have not provided social services to those who are part of these cycles of violence, preferring instead to put them in prison. This cycle is a positive feedback loop for the algorithms, and it does not help these communities.
Social Media
Africans throughout the world have a much higher risk of harassment through the internet:
The two countries with the highest levels of cyberbullying reports came from Kenya and Nigeria, with around 70% of all users claiming to have received hate throughout their time using the internet.
Tweets that contain discriminatory ideas are linked to rates of hate crimes in the area where the Tweet was made.
Black people are more likely to report that the attacks they receive on the internet are based mostly on their race.
There is an inherent tie between being Black on the internet and receiving racially charged hatred. Moreover, because of the lax moderation of many popular social media sites (such as Twitter), there exist many ways for white nationalists to come together and spread hatred through large hate waves that target people of color, and most especially Black women.
Increasing diversity
Institutions working to improve diversity in the computing sector are focusing on increasing access to resources and building a sense of belonging for minorities. One organization working toward this goal is EarSketch, an educational coding program that allows users to produce music by coding in JavaScript and Python. Its aim is to spark interest in programming and computer science for a wider range of students and "to attract different demographics, especially girls." The nonprofit Black Girls Code is working to encourage and empower black girls and girls of color to enter the world of computing by teaching them how to code. Another way to widen access to resources is by increasing equality in access to computers. Students who use computers in school settings are more likely to use them outside the classroom, so bringing computers into the classroom improves students' computer literacy.
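To illustrate the approach, here is a minimal sketch of what an EarSketch-style Python script looks like. The structure (init, setTempo, fitMedia, finish) follows EarSketch's documented script layout, but the sound-clip names below are placeholders for this example, not entries from the actual EarSketch sound library:

```python
from earsketch import *  # EarSketch's Python API, available in its browser environment

init()          # start a new song
setTempo(120)   # beats per minute

# The clip names are hypothetical placeholders; a real script would use
# sound constants chosen from the EarSketch library browser.
fitMedia(DRUM_LOOP_01, 1, 1, 9)   # place a drum loop on track 1, measures 1-9
fitMedia(BASS_LINE_01, 2, 1, 9)   # place a bass line on track 2, measures 1-9

finish()        # render the song
```

Because the result is immediately audible, scripts like this let students practice variables, loops, and functions in a creative context.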
Those who work in the field of education, primarily educators, have a significant impact on how students perceive the fields of engineering and computing, as well as their own capabilities within these fields. According to the American Association of University Women (AAUW), there are several things that teachers can do to cultivate a sense of confidence in underrepresented individuals interested in pursuing an education or career in the field of computing. Some of these things that educators can do are:
Emphasize that engineering skills and abilities can be acquired through learning. In other words, emphasize the idea of a growth mindset.
Portray obstacles and challenges as universal experiences, rather than indicators of unsuitability for engineering or computing.
Increase accessibility to computing for people from diverse backgrounds and reject the notion that some individuals are inherently better suited to the field.
Highlight the varied and extensive applications of engineering and computing.
Establish inclusive environments for girls in math, science, engineering, and computing where they're encouraged to tinker with technology and develop confidence in their programming and design skills.
Another way for educators to effect change and help resolve the problem is through certain intervention methods that have been shown to have a positive impact on the issue. These can be implemented by institutions rather than individuals and have shown a lot of promise. Of these, ten have been heavily researched; they are as follows:
Summer Bridge: Summer bridge programs are meant to help students from low-income families transition to college life. They take place between the end of a prospective student's senior year of high school and their freshman year of college, helping students adjust and get ahead in their college lives.
Mentoring: In this program each student is paired with a mentor they can trust to help them when they find themselves struggling, while also promoting individual successes.
Research Experience: Students participate in research on or off campus during their time as an undergraduate. This has been found to greatly increase a student's likelihood of pursuing a graduate degree compared to students who do not participate in research.
Tutoring: One of the most common academic intervention methods, in which a student seeks out a knowledgeable individual to provide extra instruction and practice.
Career Counseling and Awareness: Having a connection to someone in the field that a student is trying to join is extremely important. If an institution can help connect students with someone in their prospective career, those students are more likely to stay in the field.
Learning Center: An on-campus learning center is a place where students can go to learn skills that will help them succeed in school in general. Topics like study skills and note-taking skills are taught free of charge.
Workshops and Seminars: Short classes and meetings on campus that focus on skills, or research talks from visiting professors from other universities. Workshops can be used to learn knowledge that is outside of the curriculum.
Academic Advising: Higher-quality academic advising is a large factor in increasing student retention. If students feel adequately supported and are paced correctly throughout their experience, they are much more likely to finish their degree.
Financial Support: Giving financial aid to students through merit scholarships or other outside scholarship opportunities has been found to increase retention rates among students.
Curriculum and Instructional Reform: Find and isolate areas of the program that are meant to “weed out” students and refactor them to be challenging but rewarding.
These methods on their own are not enough to adequately increase the diversity of the talent pool but have shown promise as potential solutions. They can be most effective when used in an integrated manner, meaning the more that are studied and utilized the closer to a solution STEM educators will be.
Since workplace discrimination contributes to the lack of diversity in STEM, changing that would increase diversity in the sector. Big tech companies like Microsoft and Facebook are publishing diversity reports and investing in programs to make their companies more diverse.
Additionally, while companies dedicating resources to initiatives designed to promote diversity within their workplaces is a great start, there is more that tech companies can do. The AAUW published a set of proposals for STEM employers to adopt, aimed at enhancing diversity within their organizations:
Sustain effective management practices that are equitable, consistent, and promote a healthy work environment.
Administer and advocate for diversity and affirmative action policies.
Minimize the detrimental effects of gender bias.
Foster a sense of inclusion and belonging.
Allow employees the opportunity to work on projects or initiatives that have social significance.
See also
Diversity in open-source software
Gender disparity in computing
STEM pipeline
Women in computing
References
External links
Coalition for Cultural Diversity
UK Coalition for Cultural Diversity
Black Girls Code website
Computer science’s diversity gap starts early
More Students—But Few Girls, Minorities—Took AP Computer Science Exams
AP Archived Data 2014
Top and Bottom Five States for Minorities in Computing
Computer science education | Diversity in computing | Technology | 3,955 |
1,763,082 | https://en.wikipedia.org/wiki/History%20of%20genetics | The history of genetics dates from the classical era with contributions by Pythagoras, Hippocrates, Aristotle, Epicurus, and others. Modern genetics began with the work of the Augustinian friar Gregor Johann Mendel. His work on pea plants, published in 1866, provided the initial evidence that, upon its rediscovery in 1900, helped to establish the theory of Mendelian inheritance.
In ancient Greece, Hippocrates suggested that all organs of the body of a parent gave off invisible "seeds", miniaturised components, that were transmitted during sexual intercourse and combined in the mother's womb to form a baby. In early modern times, William Harvey's book On Animal Generation contradicted Aristotle's theories of genetics and embryology.
The 1900 rediscovery of Mendel's work by Hugo de Vries, Carl Correns and Erich von Tschermak led to rapid advances in genetics. By 1915 the basic principles of Mendelian genetics had been studied in a wide variety of organisms — most notably the fruit fly Drosophila melanogaster. Led by Thomas Hunt Morgan and his fellow "drosophilists", geneticists developed the Mendelian model, which was widely accepted by 1925. Alongside experimental work, mathematicians developed the statistical framework of population genetics, bringing genetic explanations into the study of evolution.
With the basic patterns of genetic inheritance established, many biologists turned to investigations of the physical nature of the gene. In the 1940s and early 1950s, experiments pointed to DNA as the portion of chromosomes (and perhaps other nucleoproteins) that held genes. A focus on new model organisms such as viruses and bacteria, along with the discovery of the double helical structure of DNA in 1953, marked the transition to the era of molecular genetics.
In the following years, chemists developed techniques for sequencing both nucleic acids and proteins, while many others worked out the relationship between these two forms of biological molecules and discovered the genetic code. The regulation of gene expression became a central issue in the 1960s; by the 1970s gene expression could be controlled and manipulated through genetic engineering. In the last decades of the 20th century, many biologists focused on large-scale genetics projects, such as sequencing entire genomes.
Pre-Mendel ideas on heredity
Ancient theories
The most influential early theories of heredity were those of Hippocrates and Aristotle. Hippocrates' theory (possibly based on the teachings of Anaxagoras) was similar to Darwin's later ideas on pangenesis, involving heredity material that collects from throughout the body. Aristotle suggested instead that the (nonphysical) form-giving principle of an organism was transmitted through semen (which he considered to be a purified form of blood) and the mother's menstrual blood, which interacted in the womb to direct an organism's early development. For both Hippocrates and Aristotle—and nearly all Western scholars through to the late 19th century—the inheritance of acquired characters was a supposedly well-established fact that any adequate theory of heredity had to explain. At the same time, individual species were taken to have a fixed essence; such inherited changes were merely superficial. The Athenian philosopher Epicurus observed families and proposed the contribution of both males and females of hereditary characters ("sperm atoms"), noticed dominant and recessive types of inheritance and described segregation and independent assortment of "sperm atoms".
The Roman poet and philosopher Lucretius describes heredity in his work "De rerum natura".
From this semen, Venus produces a variety of characteristics and reproduces ancestral traits of expression, voice or hair; these features, as well as our faces, bodies, and limbs, are also determined by the specific semen of our relatives.
Similarly, Marcus Terentius Varro in "Rerum rusticarum libri tres" and Publius Vergilius Maro propose that wasps and bees originate from animals like horses, calves, and donkeys, with wasps coming from horses and bees from calves or donkeys.
In 1000 CE, the Arab physician, Abu al-Qasim al-Zahrawi (known as Albucasis in the West) was the first physician to describe clearly the hereditary nature of haemophilia in his Al-Tasrif. In 1140 CE, Judah HaLevi described dominant and recessive genetic traits in The Kuzari.
Preformation theory
The preformation theory is a developmental biological theory, which was represented in antiquity by the Greek philosopher Anaxagoras. It reappeared in modern times in the 17th century and then prevailed until the 19th century. Another common term at that time was the theory of evolution, although "evolution" (in the sense of development as a pure growth process) had a completely different meaning than today. The preformists assumed that the entire organism was preformed in the sperm (animalkulism) or in the egg (ovism or ovulism) and only had to unfold and grow. This was contrasted by the theory of epigenesis, according to which the structures and organs of an organism only develop in the course of individual development (Ontogeny). Epigenesis had been the dominant opinion since antiquity and into the 17th century, but was then replaced by preformist ideas. Since the 19th century epigenesis was again able to establish itself as a view valid to this day.
Plant systematics and hybridisation
In the 18th century, with increased knowledge of plant and animal diversity and the accompanying increased focus on taxonomy, new ideas about heredity began to appear. Linnaeus and others (among them Joseph Gottlieb Kölreuter, Carl Friedrich von Gärtner, and Charles Naudin) conducted extensive experiments with hybridisation, especially hybrids between species. Species hybridisers described a wide variety of inheritance phenomena, including hybrid sterility and the high variability of back-crosses.
Plant breeders were also developing an array of stable varieties in many important plant species. In the early 19th century, Augustin Sageret established the concept of dominance, recognising that when some plant varieties are crossed, certain characteristics (present in one parent) usually appear in the offspring; he also found that some ancestral characteristics found in neither parent may appear in offspring. However, plant breeders made little attempt to establish a theoretical foundation for their work or to share their knowledge with contemporary work in physiology, although Gartons Agricultural Plant Breeders in England explained their system.
Mendel
Between 1856 and 1865, Gregor Mendel conducted breeding experiments using the pea plant Pisum sativum and traced the inheritance patterns of certain traits. Through these experiments, Mendel saw that the genotypes and phenotypes of the progeny were predictable and that some traits were dominant over others. These patterns of Mendelian inheritance demonstrated the usefulness of applying statistics to inheritance. They also contradicted 19th-century theories of blending inheritance, showing, rather, that genes remain discrete through multiple generations of hybridisation.
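As a simple illustration of those statistics, the following sketch simulates Mendel's monohybrid cross (Aa × Aa) and recovers the expected 3:1 ratio of dominant to recessive phenotypes; the sample size is an arbitrary choice for the example:

```python
import random

def monohybrid_cross(n_offspring):
    """Simulate an Aa x Aa cross, where 'A' (dominant) masks 'a' (recessive)."""
    counts = {"dominant": 0, "recessive": 0}
    for _ in range(n_offspring):
        # Each parent passes on one of its two alleles at random (segregation).
        genotype = random.choice("Aa") + random.choice("Aa")
        if "A" in genotype:
            counts["dominant"] += 1   # AA, Aa, and aA all show the dominant trait
        else:
            counts["recessive"] += 1  # only aa shows the recessive trait
    return counts

print(monohybrid_cross(10000))  # roughly 7500:2500, i.e. the 3:1 Mendelian ratio
```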
From his statistical analysis, Mendel defined a concept that he described as a character (which in his mind holds also for "determinant of that character"). In only one sentence of his historical paper, he used the term "factors" to designate the "material creating" the character: "So far as experience goes, we find it in every case confirmed that constant progeny can only be formed when the egg cells and the fertilising pollen are of like character, so that both are provided with the material for creating quite similar individuals, as is the case with the normal fertilisation of pure species. We must, therefore, regard it as certain that exactly similar factors must be at work also in the production of the constant forms in the hybrid plants." (Mendel, 1866).
Mendel's work was published in 1866 as "Versuche über Pflanzen-Hybriden" (Experiments on Plant Hybridisation) in the Verhandlungen des Naturforschenden Vereins zu Brünn (Proceedings of the Natural History Society of Brünn), following two lectures he gave on the work in early 1865.
Post-Mendel, pre-rediscovery
Pangenesis
Mendel's work was published in a relatively obscure scientific journal, and it was not given any attention in the scientific community. Instead, discussions about modes of heredity were galvanised by Darwin's theory of evolution by natural selection, in which mechanisms of non-Lamarckian heredity seemed to be required. Darwin's own theory of heredity, pangenesis, did not meet with any large degree of acceptance. A more mathematical version of pangenesis, one which dropped much of Darwin's Lamarckian holdovers, was developed as the "biometrical" school of heredity by Darwin's cousin, Francis Galton.
Germ plasm
In 1883 August Weismann conducted experiments involving breeding mice whose tails had been surgically removed. His results — that surgically removing a mouse's tail had no effect on the tail of its offspring — challenged the theories of pangenesis and Lamarckism, which held that changes to an organism during its lifetime could be inherited by its descendants. Weismann proposed the germ plasm theory of inheritance, which held that hereditary information was carried only in sperm and egg cells.
Rediscovery of Mendel
Hugo de Vries wondered what the nature of germ plasm might be, and in particular he wondered whether or not germ plasm was mixed like paint or whether the information was carried in discrete packets that remained unbroken. In the 1890s he was conducting breeding experiments with a variety of plant species and in 1897 he published a paper on his results that stated that each inherited trait was governed by two discrete particles of information, one from each parent, and that these particles were passed along intact to the next generation. In 1900 he was preparing another paper on his further results when he was shown a copy of Mendel's 1866 paper by a friend who thought it might be relevant to de Vries's work. He went ahead and published his 1900 paper without mentioning Mendel's priority. Later that same year another botanist, Carl Correns, who had been conducting hybridisation experiments with maize and peas, was searching the literature for related experiments prior to publishing his own results when he came across Mendel's paper, which had results similar to his own. Correns accused de Vries of appropriating terminology from Mendel's paper without crediting him or recognising his priority. At the same time another botanist, Erich von Tschermak was experimenting with pea breeding and producing results like Mendel's. He too discovered Mendel's paper while searching the literature for relevant work. In a subsequent paper de Vries praised Mendel and acknowledged that he had only extended his earlier work.
Emergence of molecular genetics
After the rediscovery of Mendel's work there was a feud between William Bateson and Karl Pearson over the hereditary mechanism, resolved by Ronald Fisher in his work "The Correlation Between Relatives on the Supposition of Mendelian Inheritance".
In 1910, Thomas Hunt Morgan showed that genes reside on specific chromosomes. He later showed that genes occupy specific locations on the chromosome. With this knowledge, Alfred Sturtevant, a member of Morgan's famous fly room, using Drosophila melanogaster, provided the first chromosomal map of any biological organism. In 1928, Frederick Griffith showed that genes could be transferred. In what is now known as Griffith's experiment, injections into a mouse of a deadly strain of bacteria that had been heat-killed transferred genetic information to a safe strain of the same bacteria, killing the mouse.
A series of subsequent discoveries led to the realization decades later that the genetic material is made of DNA (deoxyribonucleic acid) and not, as was widely believed until then, of proteins. In 1941, George Wells Beadle and Edward Lawrie Tatum showed that mutations in genes caused errors in specific steps of metabolic pathways. This showed that specific genes code for specific proteins, leading to the "one gene, one enzyme" hypothesis. Oswald Avery, Colin Munro MacLeod, and Maclyn McCarty showed in 1944 that DNA holds the gene's information. In 1952, Rosalind Franklin and Raymond Gosling produced a strikingly clear x-ray diffraction pattern indicating a helical form. Using these x-rays and information already known about the chemistry of DNA, James D. Watson and Francis Crick demonstrated the molecular structure of DNA in 1953.
Together, these discoveries established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses.
In 1947, Salvador Luria discovered the reactivation of irradiated phage, leading to many further studies on the fundamental processes of repair of DNA damage. In 1958, Meselson and Stahl demonstrated that DNA replicates semiconservatively, leading to the understanding that each of the individual strands in double-stranded DNA serves as a template for new strand synthesis. In 1960, Jacob and collaborators discovered the operon, which consists of a sequence of genes whose expression is coordinated by operator DNA. Between 1961 and 1967, through work in several different labs, the nature of the genetic code was determined.
In 1972, Walter Fiers and his team at the University of Ghent were the first to determine the sequence of a gene: the gene for bacteriophage MS2 coat protein. Richard J. Roberts and Phillip Sharp discovered in 1977 that genes can be split into segments. This led to the idea that one gene can make several proteins. The successful sequencing of many organisms' genomes has complicated the molecular definition of the gene. In particular, genes do not always sit side by side on DNA like discrete beads. Instead, regions of the DNA producing distinct proteins may overlap, so that the idea emerges that "genes are one long continuum". It was first hypothesised in 1986 by Walter Gilbert that neither DNA nor protein would be required in such a primitive system as that of a very early stage of the earth if RNA could serve both as a catalyst and as genetic information storage processor.
The modern study of genetics at the level of DNA is known as molecular genetics, and the synthesis of molecular genetics with traditional Darwinian evolution is known as the modern evolutionary synthesis.
See also
List of sequenced eukaryotic genomes
History of molecular biology
History of RNA Biology
History of evolutionary thought
One gene-one enzyme hypothesis
Phage group
References
Further reading
Elof Axel Carlson, Mendel's Legacy: The Origin of Classical Genetics (Cold Spring Harbor Laboratory Press, 2004.)
External links
Olby's "Mendel, Mendelism, and Genetics," at MendelWeb
http://www.accessexcellence.org/AE/AEPC/WWC/1994/geneticstln.html
http://www.sysbioeng.com/index/cta94-11s.jpg
http://www.esp.org/books/sturt/history/
http://cogweb.ucla.edu/ep/DNA_history.html
https://web.archive.org/web/20120323085256/http://www.hchs.hunter.cuny.edu/wiki/index.php?title=Modern_Science&printable=yes
http://www.nature.com/physics/looking-back/crick/Crick_Watson.pdf
http://www.genomenewsnetwork.org/resources/timeline/1960_mRNA.php
https://web.archive.org/web/20120403041525/http://www.molecularstation.com/molecular-biology-images/data/503/MRNA-structure.png
http://www.genomenewsnetwork.org/resources/timeline/1973_Boyer.php
Genetics
Gregor Mendel
Genetics | History of genetics | Biology | 3,368 |
22,571,802 | https://en.wikipedia.org/wiki/Nuclear%20Physics%20%28journal%29 | Nuclear Physics A, Nuclear Physics B, Nuclear Physics B: Proceedings Supplements and discontinued Nuclear Physics are peer-reviewed scientific journals published by Elsevier. The scope of Nuclear Physics A is nuclear and hadronic physics, and that of Nuclear Physics B is high energy physics, quantum field theory, statistical systems, and mathematical physics.
Nuclear Physics was established in 1956, and then split into Nuclear Physics A and Nuclear Physics B in 1967. A supplement series to Nuclear Physics B, called Nuclear Physics B: Proceedings Supplements, was published from 1987 until 2015 and continues as Nuclear and Particle Physics Proceedings.
Nuclear Physics B is part of the SCOAP3 initiative.
Abstracting and indexing
Nuclear Physics A
Current Contents/Physics, Chemical, & Earth Sciences
Nuclear Physics B
Current Contents/Physics, Chemical, & Earth Sciences
References
External links
Nuclear Physics
Nuclear Physics A
Nuclear Physics B
Nuclear Physics B: Proceedings Supplements
Elsevier academic journals
Nuclear physics journals
Academic journals established in 1956
English-language journals | Nuclear Physics (journal) | Physics | 197 |
1,021,210 | https://en.wikipedia.org/wiki/Immunohistochemistry | Immunohistochemistry is a form of immunostaining. It involves the process of selectively identifying antigens (proteins) in cells and tissue by exploiting the principle of antibodies binding specifically to antigens in biological tissues. Albert Hewett Coons, Ernest Berliner, Norman Jones and Hugh J Creech were the first to develop immunofluorescence in 1941. This led to the later development of immunohistochemistry.
Immunohistochemical staining is widely used in the diagnosis of abnormal cells such as those found in cancerous tumors; certain tumor antigens expressed by some cancer cells make this detection possible. Immunohistochemistry is also widely used in basic research to understand the distribution and localization of biomarkers and differentially expressed proteins in different parts of a biological tissue.
Sample preparation
Immunohistochemistry can be performed on tissue that has been fixed and embedded in paraffin, as well as on cryopreserved (frozen) tissue. The preparation steps differ with the way the tissue has been preserved, but the general method includes proper fixation, antigen retrieval, incubation with a primary antibody, and then incubation with a secondary antibody.
Tissue preparation and fixation
Fixation of the tissue is important for preserving the tissue and maintaining cellular morphology. The fixation formula, the ratio of fixative to tissue, and the time in the fixative all affect the result. The fixative is often 10% neutral buffered formalin. Normal fixation time is 24 hours at room temperature, and the ratio of fixative to tissue ranges from 1:1 to 1:20. After the tissue is fixed it can be embedded in paraffin wax.
For frozen sections, fixation is usually performed after sectioning (unless new antibodies are to be tested), using acetone or formalin.
Sectioning
Sectioning of the tissue sample is done using a microtome. For paraffin-embedded tissue 4 μm is a normal thickness, and for frozen sections 4–6 μm. The thickness of the sliced sections matters and is an important factor in immunohistochemistry: comparing a 4 μm section of brain tissue with a 7 μm section, some structures visible in the 7 μm section may be absent from the 4 μm section. This illustrates the importance of reporting detailed methods. Paraffin-embedded tissues must be deparaffinized to remove all the paraffin on and around the tissue sample, using xylene or a suitable substitute followed by alcohol.
Antigen retrieval
Antigen retrieval is required to make the epitopes accessible for immunohistochemical staining in most formalin-fixed tissue sections. The epitopes are the binding sites for the antibodies used to visualize the targeted antigen, and they may be masked by fixation: fixation of the tissue may cause formation of methylene bridges or crosslinking of amino groups, so that the epitopes are no longer available. Antigen retrieval can restore the masked antigenicity, possibly by breaking down the crosslinks caused by fixation. The most common way to perform antigen retrieval is high-temperature heating while soaking the slides in a buffer solution; this can be done in different ways, for example using a microwave oven, an autoclave, a heating plate, or a water bath. For frozen sections, antigen retrieval is generally not necessary, but for frozen sections that have been fixed in acetone or formalin, antigen retrieval can improve the immunohistochemistry signal.
Blocking
Non-specific binding of antibodies can cause background staining. Although antibodies bind to specific epitopes, they may also partially or weakly bind to sites on nonspecific proteins that are similar to the binding sites on the target protein. Background staining can be reduced by incubating the tissue with normal serum isolated from the species in which the secondary antibody was produced. It is also possible to use commercially available universal blocking buffers. Other common blocking buffers include normal serum, non-fat dry milk, BSA, and gelatin. Endogenous enzyme activity may also cause background staining but can be reduced if the tissue is treated with hydrogen peroxide.
Sample labeling
After preparing the sample, the target can be visualized by using antibodies labeled with fluorescent compounds, metals or enzymes. There are direct and indirect methods for labeling the sample.
Antibody types
The antibodies used for detection can be polyclonal or monoclonal. Polyclonal antibodies are made using animals such as guinea pigs, rabbits, mice, rats, or goats. The animal is injected with the antigen of interest, triggering an immune response, and the antibodies can then be isolated from the animal's whole serum. Polyclonal antibody production results in a mixture of different antibodies that recognize multiple epitopes. Monoclonal antibodies are made by injecting the animal with the antigen of interest and then isolating an antibody-producing B cell, typically from the spleen. The antibody-producing cell is then fused with a cancer cell line, and the resulting cell line produces antibodies with specificity for a single epitope.
For immunohistochemical detection strategies, antibodies are classified as primary or secondary reagents. Primary antibodies are raised against an antigen of interest and are typically unconjugated (unlabeled). Secondary antibodies are raised against immunoglobulins of the primary antibody species. The secondary antibody is usually conjugated to a linker molecule, such as biotin, that then recruits reporter molecules, or the secondary antibody itself is directly bound to the reporter molecule.
Detection methods
The direct method is a one-step staining method and involves a labeled antibody reacting directly with the antigen in tissue sections. While this technique utilizes only one antibody and therefore is simple and rapid, the sensitivity is lower due to little signal amplification, in contrast to indirect approaches.
The indirect method involves an unlabeled primary antibody that binds to the target antigen in the tissue. Then a secondary antibody, which binds with the primary antibody is added as a second layer. As mentioned, the secondary antibody must be raised against the antibody IgG of the animal species in which the primary antibody has been raised. This method is more sensitive than direct detection strategies because of signal amplification due to the binding of several secondary antibodies to each primary antibody.
The indirect method, aside from its greater sensitivity, also has the advantage that only a relatively small number of standard conjugated (labeled) secondary antibodies needs to be generated. For example, a labeled secondary antibody raised against rabbit IgG, is useful with any primary antibody raised in rabbit. This is particularly useful when a researcher is labeling more than one primary antibody, whether due to polyclonal selection producing an array of primary antibodies for a singular antigen or when there is interest in multiple antigens. With the direct method, it would be necessary to label each primary antibody for every antigen of interest.
Reporter molecules
Reporter molecules vary based on the nature of the detection method, the most common being chromogenic and fluorescence detection. In chromogenic immunohistochemistry an antibody is conjugated to an enzyme, such as alkaline phosphatase or horseradish peroxidase, that can catalyze a color-producing reaction in the presence of a chromogenic substrate like diaminobenzidine. The colored product can be analyzed with an ordinary light microscope. In immunofluorescence the antibody is tagged with a fluorophore, such as fluorescein isothiocyanate, tetramethylrhodamine isothiocyanate, aminomethylcoumarin acetate or Cyanine 5. Synthetic fluorochromes such as the Alexa Fluor dyes are also commonly used. The fluorochromes can be visualized with a fluorescence or confocal microscope.
For chromogenic and fluorescent detection methods, densitometric analysis of the signal can provide semi- and fully quantitative data, respectively, to correlate the level of reporter signal to the level of protein expression or localization.
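As an illustration of the quantification step, the following is a minimal densitometric sketch, assuming the stained image has already been loaded as a 2-D grayscale NumPy array and that a single fixed threshold separates stained from unstained pixels; real analyses typically use calibrated stain separation instead of a hand-picked threshold:

```python
import numpy as np

def positive_fraction(image, threshold):
    """Fraction of pixels whose staining signal exceeds a threshold.

    Assumes a grayscale array in which darker pixels (lower values) correspond
    to stronger chromogenic staining, as with DAB under brightfield.
    """
    signal = image.max() - image   # invert so that higher values mean more stain
    return float((signal > threshold).mean())

# Toy example: a synthetic 100x100 "tissue" image with one darkly stained patch.
rng = np.random.default_rng(0)
img = rng.normal(200.0, 5.0, size=(100, 100))  # bright, unstained background
img[40:60, 40:60] -= 120.0                     # darker (stained) region
print(positive_fraction(img, threshold=60.0))  # ~0.04, the stained area fraction
```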
Counterstains
After immunohistochemical staining of the target antigen, another stain is often applied. The counterstain provides contrast that helps the primary stain stand out and makes it easier to examine the tissue morphology; it also helps with orientation and visualization of the tissue section. Hematoxylin is commonly used.
Troubleshooting
In immunohistochemical techniques, several steps prior to the final staining of the tissue can cause a variety of problems, including strong background staining, weak target-antigen staining, and the presence of artifacts. It is important that antibody quality and the immunohistochemistry techniques are optimized. Endogenous biotin, reporter enzymes, or primary/secondary antibody cross-reactivity are common causes of strong background staining. Weak or absent staining may be caused by inadequate fixation of the tissue or by low antigen levels. These aspects of immunohistochemistry tissue preparation and antibody staining must be systematically addressed to identify and overcome staining issues.
Methods to eliminate background staining include dilution of the primary or secondary antibodies, changing the time or temperature of incubation, and using a different detection system or different primary antibody. Quality control should as a minimum include a tissue known to express the antigen as a positive control and negative controls of tissue known not to express the antigen, as well as the test tissue probed in the same way with omission of the primary antibody (or better, absorption of the primary antibody).
Diagnostic immunohistochemistry markers
Immunohistochemistry is an excellent detection technique and has the tremendous advantage of being able to show exactly where a given protein is located within the tissue examined. It is also an effective way to examine the tissues themselves. This has made it a widely used technique in neuroscience, enabling researchers to examine protein expression within specific brain structures. Its major disadvantage is that, unlike immunoblotting techniques where staining is checked against a molecular weight ladder, it is impossible to show in immunohistochemistry that the staining corresponds with the protein of interest. For this reason, primary antibodies must be well-validated in a Western blot or similar procedure. The technique is even more widely used in diagnostic surgical pathology for immunophenotyping tumors (e.g. immunostaining for E-cadherin to differentiate between ductal carcinoma in situ (stains positive) and lobular carcinoma in situ (does not stain positive)). More recently, immunohistochemical techniques have been useful in differential diagnoses of multiple forms of salivary gland, head, and neck carcinomas.
The diversity of immunohistochemistry markers used in diagnostic surgical pathology is substantial. Many clinical laboratories in tertiary hospitals will have menus of over 200 antibodies used as diagnostic, prognostic and predictive biomarkers. Examples of some commonly used markers include:
BrdU: used to identify replicating cells. Used to identify tumors as well as in neuroscience research.
Cytokeratins: used for identification of carcinomas but may also be expressed in some sarcomas.
CD15 and CD30: used for Hodgkin's disease.
Alpha fetoprotein: for yolk sac tumors and hepatocellular carcinoma.
CD117 (KIT): for gastrointestinal stromal tumors (GIST) and mast cell tumors.
CD10 (CALLA): for renal cell carcinoma and acute lymphoblastic leukemia.
Prostate specific antigen (PSA): for prostate cancer.
Estrogen and progesterone receptor (ER & PR) staining is used both diagnostically (breast and gynecologic tumors) and prognostically in breast cancer, as well as predictively for response to therapy (estrogen receptor).
Identification of B-cell lymphomas using CD20.
Identification of T-cell lymphomas using CD3.
PIN-4 cocktail, targeting p63, CK-5, CK-14 and AMACR (latter also known as P504S), and used to distinguish prostate adenocarcinoma from benign glands.
Directing therapy
A variety of molecular pathways are altered in cancer and some of the alterations can be targeted in cancer therapy. Immunohistochemistry can be used to assess which tumors are likely to respond to therapy, by detecting the presence or elevated levels of the molecular target.
Chemical inhibitors
Tumor biology allows for a number of potential intracellular targets. Many tumors are hormone dependent. The presence of hormone receptors can be used to determine if a tumor is potentially responsive to antihormonal therapy. One of the first therapies was the antiestrogen, tamoxifen, used to treat breast cancer. Such hormone receptors can be detected by immunohistochemistry.
Imatinib, an intracellular tyrosine kinase inhibitor, was developed to treat chronic myelogenous leukemia, a disease characterized by the formation of a specific abnormal tyrosine kinase. Imatinib has proven effective in tumors that express other tyrosine kinases, most notably KIT. Most gastrointestinal stromal tumors express KIT, which can be detected by immunohistochemistry.
Monoclonal antibodies
Many proteins shown to be highly upregulated in pathological states by immunohistochemistry are potential targets for therapies utilising monoclonal antibodies. Monoclonal antibodies, due to their size, are utilized against cell surface targets. Among the overexpressed targets are members of the EGFR family, transmembrane proteins with an extracellular receptor domain regulating an intracellular tyrosine kinase. Of these, HER2/neu (also known as Erb-B2) was the first to be developed. The molecule is highly expressed in a variety of cancer cell types, most notably breast cancer. As such, antibodies against HER2/neu have been FDA approved for clinical treatment of cancer under the drug name Herceptin. There are commercially available immunohistochemical tests, Dako HercepTest, Leica Biosystems Oracle and Ventana Pathway.
Similarly, epidermal growth factor receptor (HER-1) is overexpressed in a variety of cancers including head and neck and colon. Immunohistochemistry is used to determine patients who may benefit from therapeutic antibodies such as Erbitux (cetuximab). Commercial systems to detect epidermal growth factor receptor by immunohistochemistry include the Dako pharmDx.
Mapping protein expression
Immunohistochemistry can also be used for a more general protein profiling, provided the availability of antibodies validated for immunohistochemistry. The Human Protein Atlas displays a map of protein expression in normal human organs and tissues. The combination of immunohistochemistry and tissue microarrays provides protein expression patterns in a large number of different tissue types. Immunohistochemistry is also used for protein profiling in the most common forms of human cancer.
See also
Cutaneous conditions with immunofluorescence findings
Chromogenic in situ hybridization
Tissue cytometry, a technique that brings the concept of flow cytometry to tissue sections in situ, enabling whole-slide scanning and quantification of markers while maintaining spatial context, using machine learning and AI.
References
Further reading
External links
The Human Protein Atlas
Overview of Immunohistochemistry--describes all aspects of immunohistochemistry including sample prep, staining and troubleshooting
Immunofluorescent Staining of Paraffin-Embedded Tissue (IF-P)
IHC Tip 1: Antigen retrieval - should I do PIER or HIER?
Histochemical Staining Methods - University of Rochester Department of Pathology
Immunohistochemistry Staining Protocol
Histology
Immunologic tests
Protein methods
Anatomical pathology
Staining
Laboratory techniques
Pathology | Immunohistochemistry | Chemistry,Biology | 3,317 |
2,218,035 | https://en.wikipedia.org/wiki/LMS%20color%20space | LMS (long, medium, short) is a color space which represents the response of the three types of cones of the human eye, named for their responsivity (sensitivity) peaks at long, medium, and short wavelengths.
The numerical range is generally not specified, except that the lower end is generally bounded by zero. It is common to use the LMS color space when performing chromatic adaptation (estimating the appearance of a sample under a different illuminant). It is also useful in the study of color blindness, when one or more cone types are defective.
Definition
The cone response functions $\bar{l}(\lambda)$, $\bar{m}(\lambda)$, $\bar{s}(\lambda)$ are the color matching functions for the LMS color space. The chromaticity coordinates (L, M, S) for a spectral distribution $J(\lambda)$ are defined as:

$L = \int \bar{l}(\lambda)\,J(\lambda)\,d\lambda,\qquad M = \int \bar{m}(\lambda)\,J(\lambda)\,d\lambda,\qquad S = \int \bar{s}(\lambda)\,J(\lambda)\,d\lambda$
The cone response functions are normalized to have their maxima equal to unity.
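In practice the integrals above reduce to sums over sampled wavelengths. The following is a minimal sketch of that computation, assuming the cone fundamentals and the stimulus spectrum are sampled on a common wavelength grid; the arrays here are stand-ins, not real CMF data:

```python
import numpy as np

def lms_from_spectrum(cmfs, spectrum, d_lambda):
    """Approximate L, M, S as Riemann sums of cone fundamentals times J(lambda).

    cmfs:     (3, N) array holding lbar, mbar, sbar at N sample wavelengths
    spectrum: (N,) spectral distribution J(lambda) on the same grid
    d_lambda: wavelength step of the grid, in nm
    """
    return cmfs @ spectrum * d_lambda

# Stand-in data: a 5 nm grid from 390 to 830 nm with made-up cone fundamentals.
wavelengths = np.arange(390.0, 835.0, 5.0)
cmfs = np.random.default_rng(1).uniform(0.0, 1.0, size=(3, wavelengths.size))
flat_spectrum = np.ones_like(wavelengths)   # an equal-energy stimulus
print(lms_from_spectrum(cmfs, flat_spectrum, d_lambda=5.0))
```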
XYZ to LMS
Typically, colors to be adapted chromatically will be specified in a color space other than LMS (e.g. sRGB). The chromatic adaptation matrix in the diagonal von Kries transform method, however, operates on tristimulus values in the LMS color space. Since colors in most colorspaces can be transformed to the XYZ color space, only one additional transformation matrix is required for any color space to be adapted chromatically: to transform colors from the XYZ color space to the LMS color space.
In addition, many color adaption methods, or color appearance models (CAMs), run a von Kries-style diagonal matrix transform in a slightly modified, LMS-like, space instead. They may refer to it simply as LMS, as RGB, or as ργβ. The following text uses the "RGB" naming, but do note that the resulting space has nothing to do with the additive color model called RGB.
The chromatic adaptation transform (CAT) matrices for some CAMs in terms of CIEXYZ coordinates are presented here. The matrices, in conjunction with the XYZ data defined for the standard observer, implicitly define a "cone" response for each cell type.
Notes:
All tristimulus values are normally calculated using the CIE 1931 2° standard colorimetric observer.
Unless specified otherwise, the CAT matrices are normalized (the elements in a row add up to 1) so the tristimulus values for an equal-energy illuminant (X=Y=Z), like CIE Illuminant E, produce equal LMS values.
Hunt, RLAB
The Hunt and RLAB color appearance models use the Hunt–Pointer–Estevez transformation matrix (MHPE) for conversion from CIE XYZ to LMS. This is the transformation matrix which was originally used in conjunction with the von Kries transform method, and is therefore also called von Kries transformation matrix (MvonKries).
Equal-energy illuminants:

$M_{\mathrm{HPE}} = \begin{bmatrix} 0.38971 & 0.68898 & -0.07868 \\ -0.22981 & 1.18340 & 0.04641 \\ 0 & 0 & 1 \end{bmatrix}$

Normalized to D65:

$M_{\mathrm{HPE}}^{\mathrm{D65}} = \begin{bmatrix} 0.4002 & 0.7076 & -0.0808 \\ -0.2263 & 1.1653 & 0.0457 \\ 0 & 0 & 0.9182 \end{bmatrix}$
Bradford's spectrally sharpened matrix (LLAB, CIECAM97s)
The original CIECAM97s color appearance model uses the Bradford transformation matrix (MBFD) (as does the LLAB color appearance model). This is a “spectrally sharpened” transformation matrix (i.e. the L and M cone response curves are narrower and more distinct from each other). The Bradford transformation matrix was supposed to work in conjunction with a modified von Kries transform method which introduced a small non-linearity in the S (blue) channel. However, outside of CIECAM97s and LLAB this is often neglected and the Bradford transformation matrix is used in conjunction with the linear von Kries transform method, explicitly so in ICC profiles.

$M_{\mathrm{BFD}} = \begin{bmatrix} 0.8951 & 0.2664 & -0.1614 \\ -0.7502 & 1.7135 & 0.0367 \\ 0.0389 & -0.0685 & 1.0296 \end{bmatrix}$
A "spectrally sharpened" matrix is believed to improve chromatic adaptation especially for blue colors, but does not work as a real cone-describing LMS space for later human vision processing. Although the outputs are called "LMS" in the original LLAB incarnation, CIECAM97s uses a different "RGB" name to highlight that this space does not really reflect cone cells; hence the different names here.
LLAB proceeds by taking the post-adaptation XYZ values and performing a CIELAB-like treatment to get the visual correlates. On the other hand, CIECAM97s takes the post-adaptation XYZ value back into the Hunt LMS space, and works from there to model the vision system's calculation of color properties.
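The following is a minimal sketch of the linear von Kries transform method discussed above, using the Bradford matrix to adapt an XYZ color between white points; the white-point values are the standard 2° observer figures for D65 and D50, and the function name is this example's own:

```python
import numpy as np

# Bradford "spectrally sharpened" matrix: XYZ -> RGB-like space in which the
# diagonal von Kries scaling is applied.
M_BFD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt_xyz(xyz, white_src, white_dst, m=M_BFD):
    """Linear von Kries chromatic adaptation of one XYZ tristimulus vector."""
    gains = (m @ white_dst) / (m @ white_src)   # per-channel gain from the two whites
    return np.linalg.inv(m) @ (np.diag(gains) @ (m @ np.asarray(xyz)))

D65 = np.array([0.95047, 1.00000, 1.08883])  # source white point
D50 = np.array([0.96422, 1.00000, 0.82521])  # destination white point
print(adapt_xyz([0.5, 0.4, 0.3], D65, D50))  # the D50-adapted tristimulus values
```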
Later CIECAMs
A revised version of CIECAM97s switches back to a linear transform method and introduces a corresponding transformation matrix (MCAT97s):

$M_{\mathrm{CAT97s}} = \begin{bmatrix} 0.8562 & 0.3372 & -0.1934 \\ -0.8360 & 1.8327 & 0.0033 \\ 0.0357 & -0.0469 & 1.0112 \end{bmatrix}$
The sharpened transformation matrix in CIECAM02 (MCAT02) is:

$M_{\mathrm{CAT02}} = \begin{bmatrix} 0.7328 & 0.4296 & -0.1624 \\ -0.7036 & 1.6975 & 0.0061 \\ 0.0030 & 0.0136 & 0.9834 \end{bmatrix}$
CAM16 uses a different matrix:

$M_{16} = \begin{bmatrix} 0.401288 & 0.650173 & -0.051461 \\ -0.250268 & 1.204414 & 0.045854 \\ -0.002079 & 0.048952 & 0.953127 \end{bmatrix}$
As in CIECAM97s, after adaptation, the colors are converted to the traditional Hunt–Pointer–Estévez LMS for final prediction of visual results.
Physiological CMFs
From a physiological point of view, the LMS color space describes a more fundamental level of human visual response, so it makes more sense to define the physiopsychological XYZ by LMS, rather than the other way around.
A set of physiologically-based LMS functions were proposed by Stockman & Sharpe in 2000. The functions have been published in a technical report by the CIE in 2006 (CIE 170). The functions are derived from Stiles and Burch RGB CMF data, combined with newer measurements about the contribution of each cone in the RGB functions. To adjust from the 10° data to 2°, assumptions about photopigment density difference and data about the absorption of light by pigment in the lens and the macula lutea are used.
The Stockman & Sharpe functions can then be turned into a set of three color-matching functions similar to the CIE 1931 functions.
Let $\bar{l}(\lambda)$, $\bar{m}(\lambda)$, $\bar{s}(\lambda)$ be the three cone response functions, and let $\bar{x}_F(\lambda)$, $\bar{y}_F(\lambda)$, $\bar{z}_F(\lambda)$ be the new XYZ color matching functions. Then, by definition, the new XYZ color matching functions are:

$\begin{bmatrix} \bar{x}_F(\lambda) \\ \bar{y}_F(\lambda) \\ \bar{z}_F(\lambda) \end{bmatrix} = T \begin{bmatrix} \bar{l}(\lambda) \\ \bar{m}(\lambda) \\ \bar{s}(\lambda) \end{bmatrix}$

where the transformation matrix $T$ is defined as:

$T = \begin{bmatrix} 1.94735469 & -1.41445123 & 0.36476327 \\ 0.68990272 & 0.34832189 & 0 \\ 0 & 0 & 1.93485343 \end{bmatrix}$
For any spectral distribution $J(\lambda)$, let $(L, M, S)$ be the LMS chromaticity coordinates for $J(\lambda)$, and let $(X_F, Y_F, Z_F)$ be the corresponding new XYZ chromaticity coordinates. Then:

$\begin{bmatrix} X_F \\ Y_F \\ Z_F \end{bmatrix} = T \begin{bmatrix} L \\ M \\ S \end{bmatrix}$

or, explicitly:

$X_F = 1.94735469\,L - 1.41445123\,M + 0.36476327\,S$
$Y_F = 0.68990272\,L + 0.34832189\,M$
$Z_F = 1.93485343\,S$
The inverse matrix is shown here for comparison with the ones for traditional XYZ:

$T^{-1} = \begin{bmatrix} 0.210576 & 0.855098 & -0.0396983 \\ -0.417076 & 1.177260 & 0.0786283 \\ 0 & 0 & 0.516835 \end{bmatrix}$
The above development has the advantage of basing the new XFYFZF color matching functions on the physiologically-based LMS cone response functions. In addition, it offers a one-to-one relationship between the LMS chromaticity coordinates and the new XFYFZF chromaticity coordinates, which was not the case for the CIE 1931 color matching functions. The transformation for a particular color between LMS and the CIE 1931 XYZ space is not unique. It rather depends highly on the particular form of the spectral distribution $J(\lambda)$ producing the given color. There is no fixed 3x3 matrix which will transform between the CIE 1931 XYZ coordinates and the LMS coordinates, even for a particular color, much less the entire gamut of colors. Any such transformation will be an approximation at best, generally requiring certain assumptions about the spectral distributions producing the color. For example, if the spectral distributions are constrained to be the result of mixing three monochromatic sources (as was done in the measurement of the CIE 1931 and the Stiles and Burch color matching functions), then there will be a one-to-one relationship between the LMS and CIE 1931 XYZ coordinates of a particular color.
As of Nov 28, 2023, CIE 170-2 CMFs are proposals that have yet to be ratified by the full TC 1-36 committee or by the CIE.
Quantal CMF
For theoretical purposes, it is often convenient to characterize radiation in terms of photons rather than energy. The energy E of a photon is given by the Planck relation

$E = h\nu = \frac{hc}{\lambda}$

where E is the energy per photon, h is the Planck constant, c is the speed of light, ν is the frequency of the radiation and λ is the wavelength. A spectral radiative quantity in terms of energy, JE(λ), is converted to its quantal form JQ(λ) by dividing by the energy per photon:

$J_Q(\lambda) = J_E(\lambda)\,\frac{\lambda}{hc}$
For example, if JE(λ) is spectral radiance with the unit W/m2/sr/m, then the quantal equivalent JQ(λ) characterizes that radiation with the unit photons/s/m2/sr/m.
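A minimal sketch of this conversion, using the physical constants from SciPy; the radiance values are placeholders:

```python
import numpy as np
from scipy.constants import h, c  # Planck constant (J*s) and speed of light (m/s)

def energy_to_quantal(j_e, wavelengths_m):
    """Convert spectral radiance J_E (W/m^2/sr/m) to quantal form
    (photons/s/m^2/sr/m) by dividing by the energy per photon, hc/lambda."""
    return j_e * wavelengths_m / (h * c)

wavelengths = np.array([441e-9, 541e-9, 566e-9])  # meters
j_e = np.ones(3)                                  # placeholder radiance values
print(energy_to_quantal(j_e, wavelengths))        # ~2.2e18 to 2.9e18 photons each
```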
If $C_{E\lambda i}(\lambda)$ (i = 1, 2, 3) are the three energy-based color matching functions for a particular color space (the LMS color space for the purposes of this article), then the tristimulus values may be expressed in terms of the quantal radiative quantity by:

$C_i = \int C_{E\lambda i}(\lambda)\,J_E(\lambda)\,d\lambda = \int C_{E\lambda i}(\lambda)\,\frac{hc}{\lambda}\,J_Q(\lambda)\,d\lambda$
Define the quantal color matching functions:

$C_{Q\lambda i}(\lambda) = \frac{\lambda_{i\,\max}}{\lambda}\,\frac{C_{E\lambda i}(\lambda)}{C_{E\lambda i}(\lambda_{i\,\max})}$

where $\lambda_{i\,\max}$ is the wavelength at which $C_{E\lambda i}(\lambda)/\lambda$ is maximized. Define the quantal tristimulus values:

$C_{Qi} = \int C_{Q\lambda i}(\lambda)\,J_Q(\lambda)\,d\lambda$
Note that, as with the energy-based functions, the peak value of $C_{Q\lambda i}(\lambda)$ will be equal to unity. Using the above equation for the energy tristimulus values $C_{Ei}$,

$C_{Ei} = \frac{hc\,C_{E\lambda i}(\lambda_{i\,\max})}{\lambda_{i\,\max}}\,C_{Qi}$

For the LMS color space, $\lambda_{i\,\max} \approx \{566, 541, 441\}$ nm and

$\frac{hc}{\lambda_{i\,\max}} \approx \{3.51, 3.67, 4.50\} \times 10^{-19}$ J/photon
Applications
Color blindness
The LMS color space can be used to emulate the way color-blind people see color. An early emulation of dichromats was produced by Brettel et al. in 1997 and was rated favorably by actual patients. An example of a state-of-the-art method is Machado et al. 2009.
A related application is making color filters for color-blind people to more easily notice differences in color, a process known as daltonization.
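To make this concrete, the sketch below simulates protanopia by taking linear RGB into an LMS-like space, replacing the missing L response with a combination of M and S, and mapping back. The matrices follow the commonly cited Viénot, Brettel & Mollon (1999) formulation, but the exact constants should be verified against the original paper before serious use:

```python
import numpy as np

# Approximate linear-RGB -> LMS matrix, as commonly quoted for the
# Vienot-Brettel-Mollon dichromacy simulation.
RGB_TO_LMS = np.array([
    [17.8824,    43.5161,   4.11935],
    [ 3.45565,   27.1554,   3.86714],
    [ 0.0299566,  0.184309, 1.46709],
])

# Protanope projection: the absent L channel is reconstructed from M and S.
PROTAN = np.array([
    [0.0, 2.02344, -2.52581],
    [0.0, 1.0,      0.0    ],
    [0.0, 0.0,      1.0    ],
])

def simulate_protanopia(rgb_linear):
    """Project a linear-RGB color to an estimate of its protanopic appearance."""
    lms = RGB_TO_LMS @ np.asarray(rgb_linear)
    return np.linalg.inv(RGB_TO_LMS) @ (PROTAN @ lms)

print(simulate_protanopia([1.0, 0.0, 0.0]))  # pure red loses most of its distinction
```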
Image processing
JPEG XL uses an XYB color space derived from LMS. Its transform matrix is shown here:
This can be interpreted as a hybrid color theory where L and M are opponents but S is handled in a trichromatic way, justified by the lower spatial density of S cones. In practical terms, this allows for using less data for storing blue signals without losing much perceived quality.
The colorspace originates from Guetzli's butteraugli metric, and was passed down to JPEG XL via Google's Pik project.
See also
Color balance
Color vision
Luminous efficiency function
Trichromacy
References
Color space
Color blindness | LMS color space | Mathematics | 2,152 |
819,439 | https://en.wikipedia.org/wiki/June%20Gloom | June Gloom is a mainly Southern California term for a weather pattern that results in cloudy, overcast skies with cool temperatures during the late spring and early summer. While the marine layer is most common in the month of June, it can occur in surrounding months, giving rise to other colloquialisms, such as Graypril, May Gray, No-Sky July, and Summer Bummer. Low-altitude stratus clouds form over the cool water of the California Current, and spread overnight into the coastal regions of California.
The overcast skies often are accompanied by fog and drizzle, though usually not rain. June Gloom usually clears up between mid-morning and early afternoon, depending on the strength of the marine layer and the distance of the location from the Pacific Ocean, and gives way to sunny skies. May and June together are usually the cloudiest months in coastal California. June Gloom is stronger in years associated with a La Niña, and weaker or nonexistent in years with an El Niño. This weather pattern is relatively rare, and occurs only in a few other parts of the world where climates and conditions are similar. Scientists study the cloud fields that make up June Gloom to increase understanding of cloud behavior at the onset of precipitation.
Description
A typical June Gloom morning consists of marine stratus clouds covering the coast of southern California, extending a varying distance inland depending on the strength of the June Gloom effect that day. On a strong June Gloom day, the clouds and fog may cover the San Francisco Bay Area, penetrate far inland down valleys such as the Salinas Valley in central California, or extend into the Inland Empire of southern California. It is not uncommon for the layer to persist into the mid-afternoon or evening.
The clouds, which are formed by the marine layer, move in at night, usually after midnight, and typically dissipate in the late morning, giving way to clear, sunny skies. During a heavy June Gloom season, the condition may persist into the afternoon, or even all day during an exceptionally strong event. Often, the air is saturated with moisture, and fog also develops, along with frequent light mist and occasional drizzle. Fog and drizzle normally are found near the furthest inland extent of the gloom, where the cloud deck is closest to the ground.
By late morning to early afternoon, solar heating usually is sufficient to evaporate the clouds, and the sun emerges. The phenomenon forms earliest and lasts longest at the coast, with weaker effects as it moves further inland. When the marine layer is strong and deep, clouds can fill the Los Angeles Basin and spill over into the San Fernando Valley and San Gabriel Valley, even extending into the Santa Clarita Valley and Inland Empire on exceptionally strong June Gloom mornings. If conditions are not as strong, the Basin may be filled while the valleys may be clear. It is not uncommon for motorists to drive over the Sepulveda Pass from the clear, sunny San Fernando Valley and plunge into a cloudy, fog-filled Los Angeles. On a weak June Gloom morning, the clouds and fog may only be present within a mile or two of the coastline, affecting only the beach cities.
Climate effects
A combination of atmospheric and oceanic conditions must be just right in order for June Gloom to form, and these conditions usually align only around May and June of each year. These include the marine layer effect common to the West Coast of the United States, an atmospheric inversion caused by subsidence of high-pressure air from the subtropical ridge, and sufficiently cool ocean water off the coast. The June Gloom pattern is also enhanced by the Catalina eddy local to southern California.
The months of May and June are typically the cloudiest months of the year in coastal southern California, having only 59% and 58% sunny days, respectively, on average in San Diego. The number of days in May and June that are "gloomy" varies from year to year. Anomalies in sea surface temperature can be used to forecast the length and intensity of the June Gloom phenomenon in a given season. Years with warmer ocean temperatures, referred to as El Niño, may result in fewer gray days in May and June. Cooler ocean temperatures, associated with La Niña, usually foretell a more gray period.
The climate charts below show a clear drop in the mean monthly sunshine hours and percent possible sunshine for the months of May and June, which are the two months when the June Gloom pattern is the strongest.
June Gloom has been reported by some Californians to bring on symptoms consistent with seasonal affective disorder, although this is not well supported by evidence. However, the normally very sunny Los Angeles climate is also home to people who thrive during the brief seasonal respite the gloom provides from the otherwise unending sunshine and clear skies.
In the early 20th century, this phenomenon was sometimes known as the high fog. A long June Gloom season, extending late into the summer, is known as Summer Bummer. The negative effects of a long June Gloom on the coastal California tourism industry are often reported in the local news media. The phenomenon can be especially disorienting to visitors from inland areas who, coming from the summer heat, would not expect cool temperatures, clouds, and fog at the beach.
Formation
The low-altitude stratus clouds that make up the June Gloom cloud layer form over the nearby ocean, and are transported over the coastal areas by the region's prevailing westerly winds. The sheet-like stratus clouds are almost uniformly horizontal, covering large areas while having relatively shallow depth. These clouds begin to form when wind mixes moisture from the ocean surface into the air. The air cools and expands as it is mixed and moves upward, and this cooling increases the relative humidity. When the relative humidity reaches 100%, the water vapor condenses into liquid water droplets and the clouds begin to form. The stable top of the marine layer, a result of the temperature inversion, prevents any dry, warm air from above the inversion from mixing with the stratus deck. This confines the stratus deck to a relatively narrow vertical band in the atmosphere, allowing it to strengthen.
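The saturation step can be illustrated numerically. The short Python sketch below is illustrative only: the parcel temperature and vapor pressure are assumed values, and the Magnus coefficients used are one commonly published set. It shows relative humidity climbing toward 100% as moist marine air cools:

    import math

    def saturation_vapor_pressure_hpa(t_celsius):
        """Magnus approximation for saturation vapor pressure over water, in hPa."""
        return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    vapor_pressure = 14.0  # hPa of water vapor in the parcel (assumed value)
    for t in (18, 16, 14, 12):
        rh = 100.0 * vapor_pressure / saturation_vapor_pressure_hpa(t)
        print(f"{t:2d} C -> RH = {rh:5.1f}%")
    # cooling from 18 C to about 12 C takes this parcel from roughly 68% RH to
    # saturation, the point at which stratus cloud droplets begin to condense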
The inversion layer is crucial to the formation of the marine stratus that produce June Gloom. Compression and warming of air sinking out of the North Pacific High-pressure system (which is strongest during the summer) meets with the rising, cooling air from the sea surface, producing a very stable layer of air that caps the cool air from rising any further. The strength of this subsidence inversion affects the strength of the marine layer, and how long it will take the clouds to dissipate. Additionally, the cool ocean water of the California Current, which flows out of the cold Gulf of Alaska, enhances the contrast between the cool air below the inversion layer and the warm air above it. A stronger inversion layer – one with a greater difference in temperature between the air above and the air below – often results in more and deeper marine layer clouds that persist longer into the day. Upwelling of colder-than-normal ocean water, associated with a La Niña, can strengthen this effect even more.
Once this marine layer has formed, the prevailing westerly winds advect the clouds over the coastal lands. The extent of inland advection is limited by southern California's coastal mountain ranges. The winds will continue to push the cloud layer onshore until they encounter mountains at or above the altitude of the clouds themselves, with the mountains then preventing any further inland progress of the marine layer. The foothill regions of these mountains experience some of the thickest fog and drizzle, as they are essentially in the clouds at this point.
The marine layer clouds of a June Gloom day usually are at their maximum at dawn, when the surface air is at a minimum temperature and the temperature difference in the inversion layer is at its maximum. The air beneath the inversion base being at its coolest also makes it likely that it will be saturated with a relative humidity of 100%.
A sea breeze, which is caused by the temperature and pressure difference between warm areas inland and the cool air over the ocean, often develops on warm summer days as well, increasing the on-shore flow pattern and maintaining a constant flow of marine stratus clouds onto the coastal areas.
A strong low pressure system passing over southern California will produce the deepest and most extensive June Gloom marine layer. The marine layer effect is weakened when a weak high pressure system is in place over the region, and the marine layer will be very weak or nonexistent when there is a strong high-pressure system affecting southern California.
Similar weather elsewhere in the world
While many parts of the world commonly have an offshore marine layer of stratus or stratocumulus clouds, other locations matching the daily and seasonal effects of Southern California's June Gloom are relatively rare. These include the western coast of Peru, the Macaronesian Islands, the western coasts of Morocco and Portugal, and Namibia in southern Africa.
Actinoform clouds and drizzle prediction
Researchers have discovered that the cloud fields forming June Gloom and related phenomena from other west-coast marine-influenced climates are excellent places to find and study actinoform clouds. These clouds have been found to be present more often than expected in common stratocumulus layers. These clouds are persistent year-round off the coast, but are only drawn inland during June Gloom events and related phenomena elsewhere in the world. Observations suggest that when marine stratus is present alone, drizzle is minimized. However, scientists believe that the presence of actinoform clouds within the marine stratus is indicative of an increase in drizzle and the onset of precipitation. Observation and computer modeling have shown that the shape of the cloud fields actually rearrange themselves when the clouds start to rain.
See also
Gloom
Climate of Los Angeles
Climate of San Diego
Inversion (meteorology)
Marine stratocumulus
San Francisco fog
Tule fog
Notes
References
Weather lore
Fog
Climate of California
Rain
Southern California
Weather events in the United States | June Gloom | Physics | 2,064 |
723,712 | https://en.wikipedia.org/wiki/Sandbag | A sandbag or dirtbag is a bag or sack made of hessian (burlap), polypropylene or other sturdy materials that is filled with sand or soil and used for such purposes as flood control, military fortification in trenches and bunkers, shielding glass windows in war zones, ballast, counterweight, and in other applications requiring mobile fortification, such as adding improvised additional protection to armored vehicles or tanks.
The advantages are that the bags and sand are inexpensive. When empty, the bags are compact and lightweight for easy storage and transportation. They can be brought to a site empty and filled with local sand or soil. Disadvantages are that filling bags is labor-intensive. Without proper training, sandbag walls can be constructed improperly, causing them to fail at a lower height than expected when used for flood control. They can degrade prematurely in the sun and elements once deployed. They can also become contaminated by sewage in flood waters, making them difficult to deal with after flood waters recede. In a military context, improvised up-armouring of tanks or armored personnel carriers with sandbags is not effective against cannons (though it may offer protection against some small arms).
Sandbags have traditionally been filled manually using shovels. Since the 1990s, machine filling has become more common, allowing the work to be done more quickly and efficiently.
Usage
Flood control
Properly stacked sandbags are an effective deterrent against damaging flood waters. Sandbags can be used to build levees, barricades, dikes and berms to limit erosion from flooding. Sandbags can also be used to fortify existing flood control structures and limit the effects of sand boils. Sandbag structures do not prevent water seepage and therefore should be built with the central purpose of diverting flood water around or away from buildings.
Properly filled sandbags for flood control are filled one-half to two-thirds full with clean washed sand. In an emergency, if clean sand is in limited supply, gravel or dirt can also be used, with less effective results. When filled sandbags are stacked or laid in place, the contents need to settle flat to the ground. Sandbags filled more than two-thirds full will not form an adequate seal to the ground or structure. Likewise, sandbags filled less than one-half full will generally also form an inadequate seal to the ground when placed.
The best practice for filling sandbags requires a three-person team. One team member crouches down and holds open the bag to form a collar opening. The second team member shovels sand into the opened bag using a pointed shovel; a square shovel is not recommended, as its blade will not fit into the sandbag when filling. The third team member transports and stockpiles the filled sandbags.
Properly placed sandbags will be set lengthwise and parallel to the water flow with the folded or open end of the sandbag facing upstream. All loose debris should be removed from the placement surface and the lowest areas are the first spots to be filled in with sandbags. Each bag must be set consecutively with the tightly packed bottom slightly overlapping the previously placed sandbag. Subsequent layers of bags should be offset by 1/2 the length of a sandbag to eliminate voids and improve the wall seal. Each placed bag should be tamped and flattened to improve the seal.
The two primary methods for stacking sandbags to build flood control structures are (1) the single-stack placement and (2) the pyramid placement method.
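For rough planning, the number of bags a pyramid-placement dike requires can be estimated from simple geometry. The Python sketch below is illustrative only: the placed-bag dimensions are assumptions, and official guidance (such as the sandbagging references listed under External links) should be consulted for an actual flood fight.

    import math

    def pyramid_bag_count(length_m, height_m, bag_len=0.25, bag_ht=0.10):
        """Estimate bags for a pyramid dike; bag_len and bag_ht are the
        assumed footprint of a tamped, placed bag in metres."""
        courses = math.ceil(height_m / bag_ht)                   # number of layers
        bags_per_metre = courses * (courses + 1) / 2 / bag_len   # 1+2+...+n bags wide
        return math.ceil(bags_per_metre * length_m)

    # e.g. a 10 m long, 0.5 m high dike: 5 courses, 15 bags per 0.25 m slice
    print(pyramid_bag_count(10, 0.5))  # -> 600 bags

The quadratic growth of the bag count with wall height is one reason sandbag dikes become labor-intensive so quickly.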
Fortification
The military uses sandbags for field fortifications and as a temporary measure to protect civilian structures. Because burlap and sand are inexpensive, large protective barriers can be erected cheaply. The friction created by moving soil or sand grains and tiny air gaps makes sandbags an efficient dissipator of explosive blast. Sandbags are produced in a small range of standard sizes; these dimensions, and the weight of sand a bag of a given size can hold, allow for the construction of an interlocking wall like brickwork.
Individual filled bags are not too heavy to lift and move into place. They may be laid in excavated defences as revetment, or as free-standing walls above ground where excavations are impractical. As plain burlap sandbags deteriorate fairly quickly, sandbag structures meant to remain in place for a long time may be painted with a portland cement slurry to reduce the effects of rot and abrasion. Cotton duck sandbags last considerably longer than burlap and are hence preferable for long-term use. However, the vast majority of sandbags used by modern militaries and for flood prevention are made of circular woven polypropylene. Some of the World War I memorial trenches were rebuilt with concrete sandbags after the First World War; although criticized as looking unnatural, they have lasted well. During World War II in Great Britain, some aircraft revetments and pillboxes were made from concrete-filled sandbags; these too have lasted well.
Sandbag fortifications have been used since at least the late 16th century. For example, the rebellious Mughal governor Mirza Jani Beg used improvised sandbags made out of boat sails to construct a makeshift fort at Unarpur, Sindh, in 1592. Later, British loyalists used sandbag and log fortifications in the 1781 Siege of Ninety-Six during the American Revolutionary War. Nathanael Greene was familiar enough with the fortification technique to equip his troops with hooks to pull down the sandbag and log walls when they stormed the Star Redoubt in Ninety Six, South Carolina.
Temporary sandbag fortifications known as antestatures were historically established in haste by a retreating force to slow the progress of the enemy. The word comes from the Latin ante ("before") and statūra ("a standing").
Bulk bags
Bulk bags, also known as big bags, are much larger than traditional sandbags. Moving a bag of this size typically requires a forklift truck. Bulk bags are usually made of woven or non-woven geotextiles.
Large bags of sand are often used in flood control and making temporary patches to water barriers. For example, Thailand utilized bulk bags filled with sand to erect temporary walls to protect against the 2011 Thailand floods.
Other uses
Sandbags are also used for disposable ballast in gas balloons, and as counterweights for theatre sets. Some temporary construction signs or advertising signs are held in place and secured against being blown over with sandbags.
During World War II, sandbags were also used as extemporized "soft armor" on American tanks, with the goal of protecting the tanks from German anti-tank rounds, but they were largely ineffective.
Sandbags can also be carried within vehicles to provide improved traction during inclement weather (typically stored above the drive wheels where the increased weight improves traction). If ever stuck, sand can be removed and placed directly onto the slippery surface thereby providing greatly improved traction. Sandbags are also used by off-road enthusiasts instead of sand plates or sand ladders to assist the vehicle to get traction and momentum after being stuck in soft sand. The same sandbags can be used to bridge deep holes or ditches. Apart from being very light and taking very little space (when empty), the sandbags are a much cheaper option than any of the other options (sand plates, sand ladders, multipurpose bags, etc.).
Sandbags are often used to temporarily stabilize soil from erosion, such as oceanfront structures whose foundations have been undermined by heavy waves. Sandbags are also used in earthbag construction to make inexpensive, environmentally sustainable homes. In addition, sandbags are often used when shooting a long gun, specifically a rifle or sniper rifle, from a rest, as it provides support for the weapon, allowing for less movement during shooting.
Sandbags of various sizes and weights can be used for exercise or resistance training.
Sandbags are used for safety in film, video and theatrical production. They are often used as easily portable weights to lower the center of gravity of a light stand or a C-stand, where heavy items are placed at the top of a tall stand that often has a small base. Shot bags are another type of flexible weight used for the same purpose.
See also
Hesco bastion
HydroSack, brand name of an alternative sandless sandbag for flood control
Metalith, brand name/manufacturer of an alternative flood control technology
Sandbagging
References
External links
A guide from Sandbags Online: Everything you ever wanted to know about sandbags, but were too afraid to ask
California Department of Water Resources - Flood Fighting at home, How to fill and place sandbags (PDF)
California Department of Water Resources and the California Conservation Corps - Flood fighting Methods (PDF)
US Army Corps of Engineers Sandbagging pamphlet (PDF)
FEMA - Flood Response manual for Community Emergency Response Teams, including sandbagging techniques (PDF)
US OSHA recommendations for safely Filling, Moving and Placing Sandbags During Flooding Disasters
Articles containing video clips
Bags
Flood control
Fortifications by type
Soil
Soil-based building materials | Sandbag | Chemistry,Engineering | 1,851 |
1,742,660 | https://en.wikipedia.org/wiki/Dicyclopentadiene | Dicyclopentadiene, abbreviated DCPD, is a chemical compound with formula . At room temperature, it is a white brittle wax, although lower purity samples can be straw coloured liquids. The pure material smells somewhat of soy wax or camphor, with less pure samples possessing a stronger acrid odor. Its energy density is 10,975 Wh/l.
Dicyclopentadiene is co-produced in large quantities in the steam cracking of naphtha and gas oils to ethylene. The major use is in resins, particularly unsaturated polyester resins. It is also used in inks, adhesives, and paints.
The top seven suppliers worldwide together had an annual capacity in 2001 of 179 kilotonnes (395 million pounds).
DCPD was discovered in 1885 as a hydrocarbon among the products of pyrolysis of phenol by Henry Roscoe, who did not identify the structure (this was done during the following decade) but accurately assumed that it was a dimer of some hydrocarbon.
History and structure
For many years the structure of dicyclopentadiene was thought to feature a cyclobutane ring as the fusion between the two subunits. Through the efforts of Alder and coworkers, the correct structure was deduced in 1931.
The spontaneous dimerization of neat cyclopentadiene at room temperature to form dicyclopentadiene proceeds to around 50% conversion over 24 hours and yields the endo isomer in better than 99:1 ratio as the kinetically favored product (about 150:1 endo:exo at 80 °C). However, prolonged heating results in isomerization to the exo isomer. The pure exo isomer was first prepared by base-mediated elimination of hydroiodo-exo-dicyclopentadiene. Thermodynamically, the exo isomer is about 0.7 kcal/mol more stable than the endo isomer. The exo isomer also has a lower reported melting point of 19 °C. Both isomers are chiral.
Reactions
Above 150 °C, dicyclopentadiene undergoes a retro-Diels–Alder reaction at an appreciable rate to yield cyclopentadiene. The reaction is reversible and at room temperature cyclopentadiene dimerizes over the course of hours to re-form dicyclopentadiene. Cyclopentadiene is a useful diene in Diels–Alder reactions as well as a precursor to metallocenes in organometallic chemistry. It is not available commercially as the monomer, due to the rapid formation of dicyclopentadiene; hence, it must be prepared by "cracking" the dicyclopentadiene (heating the dimer and isolating the monomer by distillation) shortly before it is needed.
The thermodynamic parameters of this process have been measured. At temperatures above about 125 °C in the vapor phase, dissociation to cyclopentadiene monomer starts to become thermodynamically favored (the dissociation constant is Kd = [cyclopentadiene]²/[DCPD]). For instance, the values of Kd at 149 °C and 195 °C were found to be 277 and 2200, respectively. By extrapolation, Kd is on the order of 10⁻⁴ at 25 °C, and dissociation is disfavored. In accord with the negative values of ΔH° and ΔS° for the Diels–Alder reaction, dissociation of dicyclopentadiene is more thermodynamically favorable at high temperatures. Equilibrium constant measurements imply that ΔH° = –18 kcal/mol and ΔS° = –40 eu for cyclopentadiene dimerization.
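As a rough cross-check of these figures, the van 't Hoff relation can be used to follow the growth of the dissociation constant with temperature. The Python sketch below simply recomputes K from the quoted ΔH° and ΔS° (signs reversed for dissociation); the absolute values depend on the choice of standard state, so only the trend should be read from it:

    import math

    R = 1.987        # gas constant, cal/(mol K)
    dH = 18000.0     # cal/mol for dissociation (sign reversed from dimerization)
    dS = 40.0        # cal/(mol K) for dissociation

    for t_c in (25, 125, 149, 195):
        T = t_c + 273.15
        K = math.exp(-(dH - T * dS) / (R * T))   # K = exp(-dG/RT)
        print(f"{t_c:3d} C -> K ~ {K:.2e}")
    # ~3e-5 at 25 C (dissociation disfavored), rising past unity near 175 C;
    # the much larger Kd values quoted above presumably reflect a different,
    # pressure-based standard state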
Dicyclopentadiene polymerizes. Copolymers are formed with ethylene or styrene. The "norbornene double bond" participates. Using ring-opening metathesis polymerization a homopolymer polydicyclopentadiene is formed.
Hydroformylation of DCPD gives the dialdehyde called TCD dialdehyde (TCD = tricyclodecane). This dialdehyde can be oxidized to the dicarboxylic acid and reduced to the diol. All of these derivatives have some use in polymer science.
Hydrogenation of dicyclopentadiene gives tetrahydrodicyclopentadiene (), which is a component of jet fuel JP-10, and rearranges to adamantane with aluminium chloride or acid at elevated temperature.
References
External links
MSDS for dicyclopentadiene
Inchem fact sheet for dicyclopentadiene
CDC — NIOSH Pocket Guide to Chemical Hazards
Cyclopentadienes
Monomers
Dimers (chemistry)
Cyclopentenes | Dicyclopentadiene | Chemistry,Materials_science | 1,041 |
40,705,113 | https://en.wikipedia.org/wiki/Cenex | Cenex, the Low Carbon and Fuel Cells Centre of Excellence, is an independent non-profit research and consultancy organisation that helps private and public sector organisations devise ultra-low emission vehicle (ULEV) strategies. Founded in 2005, Cenex is headquartered in Loughborough, United Kingdom.
History
Cenex was established in April 2005 with support from the Automotive Unit of the British Department of Trade and Industry. Its goal was to assist British automakers in responding to the transition to low carbon and fuel cell technologies.
In 2008, Cenex founded the Low Carbon Vehicle Event (Cenex-LCV). The event includes exhibitions, seminars, networking, and opportunities to ride and drive prototype vehicles.
Transport Team
The Cenex Transport Team helps clients to implement low and ultra-low emission vehicle technologies into fleet, freight and logistics operations. These include hydrogen, gas and electric vehicles.
Cenex created the VC3 tool to calculate and compare the whole-life costs and carbon emissions of diesel, electric, gas and stop-start van technologies. The Cenex CLEAR Capture (Cost-effective Low Emissions Analysis from Real-world Data Capture) plug obtains drive-cycle data from a vehicle.
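For illustration only, a whole-life cost comparison of the general kind such a tool performs can be sketched in a few lines of Python; the cost model and every figure below are invented placeholders, not Cenex data or the VC3 methodology:

    def whole_life_cost(purchase, annual_km, cost_per_km, maint_per_year,
                        years, resale=0.0):
        """Net cost of ownership: depreciation plus running and maintenance."""
        return purchase - resale + years * (annual_km * cost_per_km + maint_per_year)

    vans = {  # hypothetical diesel vs electric vans over five years
        "diesel":   whole_life_cost(25000, 20000, 0.12, 900, 5, resale=8000),
        "electric": whole_life_cost(35000, 20000, 0.05, 500, 5, resale=12000),
    }
    for name, cost in sorted(vans.items(), key=lambda kv: kv[1]):
        print(f"{name}: {cost:,.0f} over 5 years")

A real comparison would also fold in tailpipe and well-to-wheel carbon emissions, which is the other output the tool reports.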
The Transport Team has worked with British Gas on an EV Deployment Risk Assessment, reducing construction carbon emissions in logistics, and Hydrogen Van trials.
Energy Systems Team
The Energy Systems Team works with developers of infrastructure to integrate vehicles with the National Grid. Cenex also supports and advises on the installation of low emission vehicle infrastructure across the UK and Europe.
Cenex chairs the UK Electric Vehicle Supply Equipment Association (UKEVSE).
Innovation Support Team
The Innovation Support Team runs programmes on behalf of Central and Local Governments to develop the UK supply chain of low emission vehicle technology.
Cenex has worked with Nottingham City Council, and led the InclusivEV project, which investigated the potential for electric vehicles to be used to tackle transport poverty.
See also
Energy Technologies Institute, also at Loughborough
References
2005 establishments in the United Kingdom
Automotive industry in the United Kingdom
Energy in the United Kingdom
Energy research institutes
Fuel cells
Loughborough University
Research institutes in Leicestershire
Vehicle emission controls | Cenex | Engineering | 426 |
23,241,707 | https://en.wikipedia.org/wiki/Working%20timetable | A working timetable (WTT; in French, horaire de service (HDS) or service annuel (SA); in North American practice, an employee timetable) is the data defining all planned train and rolling-stock movements which will take place on the relevant infrastructure during the period for which it is in force; within the EU, it is established once per calendar year. The trains included may be passenger trains, freight trains, empty stock movements, or even bus and/or ship connections or replacements.
Contents
The detail found in Working Timetables includes the timings at every major station, junction, or other significant location along the train's journey (including additional minutes inserted to allow for such factors as engineering work or particular train performance characteristics), which platforms are used at certain stations, and line codes where there is a choice of running line.
Further information may include the train's identification (or "reporting") number which, in Network Rail practice, consists of a four digit alpha-numeric code where the first number indicates the type of train (fast, stopping, Freightliner and so on), followed by a letter indicating the area of operation or destination and then two figures denoting the individual service; what service the train next forms; what formation ("consist") the train has, its maximum speed, and any other information relevant to the operation of the train. A WTT for the Parisian Petite Ceinture belt railway gives a gradient profile and track diagram for the entire railway.
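As an illustration, a reporting number of this form can be decoded mechanically. The Python sketch below uses a deliberately simplified class table and a hypothetical code ("1A23"); the real meanings of the letter and digits vary by area and period:

    import re

    TRAIN_CLASS = {  # first digit: broad type of train (simplified subset)
        "1": "express passenger",
        "2": "ordinary passenger",
        "5": "empty coaching stock",
    }

    def decode_reporting_number(code):
        m = re.fullmatch(r"([0-9])([A-Z])([0-9]{2})", code)
        if not m:
            raise ValueError(f"not a four-character reporting number: {code!r}")
        digit, letter, service = m.groups()
        return {
            "class": TRAIN_CLASS.get(digit, "other"),
            "area_or_destination": letter,  # letter for area of operation or destination
            "service": service,             # two digits identifying the individual train
        }

    print(decode_reporting_number("1A23"))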
In the USA, the New Haven Railroad Employee Timetable contained such information as: the maximum allowable speeds for different types of locomotives; electrical operating instructions concerning the operation of the AC catenary system and pantographs; designation of on which lines the different types of signalling were operational, e.g. manual block, automatic block and centralized traffic control.
Railway companies incorporate their philosophy of service provision into their timetable in numerical, chronological form. In the beginning of commercial railways, the timetable was the authority for a train to be at a particular location at a specified time, subject to any restrictions imposed by the rules, regulations and engineered safety controls (which were originally minimal). As such, instructional publications were often referred to as 'appendices' to the working timetable. As the rules and regulations gradually expanded following accidents, the working timetable became more of a guide than an absolute authority.
Safe working
The working timetable is effectively the foundation of railway safe operations and one of six main instructional publications which employees of Traffic departments in British style railways traditionally had at their disposal. The other publications were the Rule Book, General Appendix to the Working Timetable, Sectional or Local Appendix to the Working Timetable, Regulations for Train Signalling, circulars and weekly notices (names varied between companies).
Unscheduled or 'special' train movements are worked as margins in the timetable permit. Such movements are authorized and regulated by staff such as signalmen, station masters and train controllers.
Updating
Most railway companies revise their standard working timetable (SWTT) every few years, or as changes in their network require.
The daily working timetable (DWTT) consists of the standard working timetable (SWTT) as amended by publications such as Special Train Notices or telegrams. Special train notices are temporary amendments to the SWTT, issued as required for additional ('special') trains or alterations to the working of trains already in the SWTT.
Australia
Sydney Trains
The SWTT is updated every 2 to 3 years; it covers the full 7-day week and some 6,400 passenger trains and 1,000 freight trains.
The DWTT is constantly updated to include special events such as sport events, concerts (500+ requests); special trains i.e. train testing, school charters, crew training and heritage trains (700+ requests); and work trains i.e. inspections and maintenance (500+ requests).
The approval time for the 1700+ requests a year ranges from 4 weeks to 26 weeks depending on the impact on customers.
Germany
Most German motive power is now equipped with an electronic WTT, known as the EBuLa or "Elektronischer Buchfahrplan" which is kept constantly updated by GPS and is displayed on a screen in the driver's cab. This also incorporates speed restriction and non-standard signal stopping distance data from the "Langsamfahrstrecken" document, the near-equivalent of which in British terminology would be the Sectional Appendix.
Use of WTTs as historical documents
The railway historian Jack Simmons suggests that the WTTs are only a set of instructions issued to staff and indicate intended, not actual, train operations, and that this should be borne in mind when using them for historical research. However, Simmons also notes that, read with care, "they show us how railways were made to work, in normal service, as no other documents can."
Availability
Current British railway WTTs, compiled by Network Rail, are available online. The versions published in book form by the various pre-grouping railways, the "Big Four" railway companies, British Rail(ways), Railtrack and Network Rail, branded "Not for publication", can frequently be found at rail exhibitions, second-hand book shops, and auction websites. Some WTTs have been reprinted as commercial publications.
Britain's National Archives and National Railway Museum hold copies of many printed WTTs issued by the railways of Great Britain and Ireland and these are available for consultation by the public.
Transport for London has made available up-to-date working timetables for the London Underground, as well as every London bus route, on their website.
Notes
Public transport information systems
Railway safety
Scheduling (transportation) | Working timetable | Technology | 1,152 |
54,190,170 | https://en.wikipedia.org/wiki/Infrared%20compact%20catalogue | In astronomy, infrared compact or IRc designations refer to objects in several astronomical catalogues. The first is a list of near-infrared sources in the NGC 6334 molecular cloud. There is also a series of infrared catalogues of objects in Orion.
References
Astronomical catalogues | Infrared compact catalogue | Astronomy | 56 |
70,440,972 | https://en.wikipedia.org/wiki/Large%20Integrated%20Flexible%20Environment | The Large Integrated Flexible Environment (LIFE) is an inflatable space habitat design currently being developed by Sierra Space. The proposed Orbital Reef commercial space station would include multiple LIFE habitats.
Development
In September 2022 Sierra Space completed a successful sub-scale Ultimate Burst Pressure test of a LIFE prototype. A second successful test was completed later that year, exceeding NASA certification requirements. On 22 January 2024 the company announced a successful full scale burst test, exceeding safety margins by 27%.
Pathfinder
As early as the end of 2026, before using LIFE as part of Orbital Reef, Sierra Space is proposing to launch a “pathfinder” version of LIFE as a standalone space station.
See also
List of space stations
References
Space stations | Large Integrated Flexible Environment | Astronomy | 146 |
15,804,648 | https://en.wikipedia.org/wiki/Gran%20plot | A Gran plot (also known as Gran titration or the Gran method) is a common means of standardizing a titrate or titrant by estimating the equivalence volume or end point in a strong acid-strong base titration or in a potentiometric titration. Such plots have been also used to calibrate glass electrodes, to estimate the carbonate content of aqueous solutions, and to estimate the Ka values (acid dissociation constants) of weak acids and bases from titration data. Gran plots are named after Swedish chemist Gunnar Gran, who developed the method in 1950.
Gran plots use linear approximations of the a priori non-linear relationships between the measured quantity, pH or electromotive potential (emf), and the titrant volume. Other types of concentration measures, such as spectrophotometric absorbances or NMR chemical shifts, can in principle be similarly treated. These approximations are only valid near, but not at, the end point, and so the method differs from end point estimations by way of first- and second-derivative plots, which require data at the end point. Gran plots were originally devised for graphical determinations in pre-computer times, wherein an x-y plot on paper would be manually extrapolated to estimate the x-intercept. The graphing and visual estimation of the end point have been replaced by more accurate least-squares analyses since the advent of modern computers and enabling software packages, especially spreadsheet programs with built-in least-squares functionality.
Basis of the calculations
The Gran plot is based on the Nernst equation, which can be written as

E = E0 + s log{H+}

where E is a measured electrode potential, E0 is a standard electrode potential, s is the slope, ideally equal to RT/nF, and {H+} is the activity of the hydrogen ion. The expression rearranges to

{H+} = 10^((E − E0)/s) or pH = (E0 − E)/s

depending on whether the electrode is calibrated in millivolts or pH. For convenience the concentration, [H+], is used in place of activity. In a titration of strong acid with strong alkali, the analytical concentration of the hydrogen ion is obtained from the initial concentration of acid, Ci, and the amount of alkali added during titration:

[H+] = (Ci vi − cOH v)/(vi + v)

where vi is the initial volume of solution, cOH is the concentration of alkali in the burette and v is the titre volume. Equating the two expressions for [H+] and simplifying, the following expression is obtained

(vi + v) 10^(E/s) = 10^(E0/s) (Ci vi − cOH v)

A plot of (vi + v) 10^(E/s) against v will be a straight line. If E0 and s are known from electrode calibration, the point where the line crosses the x-axis indicates the volume at the equivalence point, ve = Ci vi/cOH. Alternatively, this plot can be used for electrode calibration by finding the values of E0 and s that give the best straight line.
Titrating strong acid with strong base
For a strong acid-strong base titration monitored by pH, we have at any ith point in the titration

[H+]i − Kw/[H+]i = (Ca V0 − Cb vi)/(V0 + vi)

where Kw is the water autoprotolysis constant.

If titrating an acid of initial volume V0 and concentration Ca with base of concentration Cb, then at any ith point in the titration with titrant volume vi,

[H+]i ≈ (Ca V0 − Cb vi)/(V0 + vi) before the equivalence point and [OH−]i ≈ (Cb vi − Ca V0)/(V0 + vi) after it.

At the equivalence point, the equivalence volume is ve = Ca V0/Cb.

Thus,

a plot of (V0 + vi) 10^(−pHi) against vi will have a linear region before equivalence, with slope −Cb

and a plot of (V0 + vi) 10^(pHi) against vi will have a linear region after equivalence, with slope Cb/Kw

both plots will have as x-intercept the equivalence volume ve

The equivalence volume is used to compute whichever of Ca or Cb is unknown.
The pH meter is usually calibrated with buffer solutions at known pH values before starting the titration. The ionic strength can be kept constant by judicious choice of acid and base. For instance, HCl titrated with NaOH of approximately the same concentration will replace H+ with an ion (Na+) of the same charge at the same concentration, to keep the ionic strength fairly constant. Otherwise, a relatively high concentration of background electrolyte can be used, or the activity quotient can be computed.
Titrating strong base with strong acid
Mirror-image plots are obtained if titrating the base with the acid, and the signs of the slopes are reversed.
Hence, for a base of initial volume V0 and concentration Cb titrated with acid of concentration Ca,

a plot of (V0 + vi) 10^(pHi) against vi will have a linear region before equivalence, with slope −Ca/Kw

and a plot of (V0 + vi) 10^(−pHi) against vi will have a linear region after equivalence, with slope Ca

both plots will have as x-intercept the equivalence volume ve = Cb V0/Ca
Figure 1 gives sample Gran plots of a strong base-strong acid titration.
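In modern least-squares practice the extrapolation is done numerically rather than graphically. The Python sketch below (synthetic data for a strong acid titrated with strong base; activity effects ignored) recovers the equivalence volume and the slope −Cb from the acid-side Gran function:

    import numpy as np

    V0, Ca, Cb = 50.0, 0.010, 0.100          # mL, mol/L, mol/L
    ve_true = Ca * V0 / Cb                    # 5.0 mL

    v = np.linspace(0.5, 4.0, 8)              # titrant volumes before equivalence
    pH = -np.log10((Ca * V0 - Cb * v) / (V0 + v))   # strong-acid mass balance

    F = (V0 + v) * 10.0**(-pH)                # acid-side Gran function
    slope, intercept = np.polyfit(v, F, 1)
    print(f"ve = {-intercept / slope:.3f} mL (true {ve_true:.1f} mL), "
          f"slope = {slope:.4f} (expected {-Cb})")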
Concentrations and dissociation constants of weak acids
The method can be used to estimate the dissociation constants of weak acids, as well as their concentrations (Gran, 1952). With an acid represented by HA, where

Ka = [H+][A−]/[HA],

we have at any ith point in the titration of a volume V0 of acid at a concentration Ca by base of concentration Cb. In the linear regions away from equivalence,

[A−]i ≈ Cb vi/(V0 + vi)

and

[HA]i ≈ (Ca V0 − Cb vi)/(V0 + vi)

are valid approximations, whence

Ka ≈ [H+]i Cb vi/(Ca V0 − Cb vi), or

vi [H+]i ≈ (Ka/Cb)(Ca V0 − Cb vi)

or, because ve = Ca V0/Cb,

vi 10^(−pHi) ≈ Ka (ve − vi).

A plot of vi 10^(−pHi) versus vi will have a slope −Ka over the linear acidic region and an extrapolated x-intercept ve, from which either Ka or Ca can be computed. The alkaline region is treated in the same manner as for a titration of strong acid. Figure 2 gives an example; in this example, the two x-intercepts differ by about 0.2 mL but this is a small discrepancy, given the large equivalence volume (0.5% error).
Similar equations can be written for the titration of a weak base by strong acid (Gran, 1952; Harris, 1998).
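A companion sketch for the weak-acid case (again synthetic data, with a hypothetical Ka) recovers both Ka and the equivalence volume from the function vi 10^(−pHi):

    import numpy as np

    V0, Ca, Cb, Ka = 50.0, 0.010, 0.100, 1.0e-5
    ve_true = Ca * V0 / Cb                          # 5.0 mL

    v = np.linspace(1.0, 4.0, 7)                    # buffer region, away from the ends
    pH = -np.log10(Ka * (Ca * V0 - Cb * v) / (Cb * v))   # buffer approximation

    G = v * 10.0**(-pH)                             # weak-acid Gran function
    slope, intercept = np.polyfit(v, G, 1)
    print(f"Ka = {-slope:.2e}, ve = {-intercept / slope:.2f} mL (true {ve_true:.1f})")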
Carbonate content
Martell and Motekaitis (1992) use the most linear regions and exploit the difference in equivalence volumes between acid-side and base-side plots during an acid-base titration to estimate the adventitious CO2 content in the base solution. In a titration of base by acid, the extra acid used to neutralize the carbonate, by double protonation, is Ca(ve2 − ve1), where ve1 and ve2 are the base-side and acid-side equivalence volumes, so the carbonate content is Ca(ve2 − ve1)/2. In the opposite case of a titration of acid by base, the carbonate content is similarly computed from Cb(ve2 − ve1)/2, where ve2 is the base-side equivalence volume (from Martell and Motekaitis).
When the total CO2 content is significant, as in natural waters and alkaline effluents, two or three inflections can be seen in the pH-volume curves owing to buffering by higher concentrations of bicarbonate and carbonate. As discussed by Stumm and Morgan (1981), the analysis of such waters can use up to six Gran plots from a single titration to estimate the multiple end points and measure the total alkalinity and the carbonate and/or bicarbonate contents.
Potentiometric monitoring of H+
To use potentiometric (e.m.f.) measurements in monitoring the H+ concentration in place of pH readings, one can trivially set pHi = (E0 − Ei)/s and apply the same equations as above, where E0 is the offset correction and 1/s is a slope correction (1/59.2 pH units/mV at 25 °C), such that (E0 − Ei)/s replaces pHi.
Thus, as before for a titration of strong acid by strong base,

a plot of (V0 + vi) 10^((Ei − E0)/s) vs. vi will have a linear region before equivalence, with slope −Cb

and a plot of (V0 + vi) 10^((E0 − Ei)/s) vs. vi will have a linear region after equivalence, with slope Cb/Kw

both plots will have as x-intercept the equivalence volume ve = Ca V0/Cb and, as before, the acid-side equivalence volume can be used to standardize whichever concentration is unknown, and the difference between acid-side and base-side equivalence volumes can be used to estimate the carbonate content
Analogous plots can be drawn using data from a titration of base by acid.
Electrode calibration
Note that the above analysis requires prior knowledge of E0 and s.

If a pH electrode is not well calibrated, an offset correction can be computed in situ from the acid-side Gran slope:

For a titration of acid by base, the acid-side slope can serve to compute the offset E0 using a known value of Cb, or using the value of Cb given by the equivalence volume. Kw can then be computed from the base-side slope.

For a titration of base by acid, as illustrated in the sample plots, the acid-side slope is similarly used to compute the offset, and the base-side slope is used to compute Kw using a known value of Ca or using the value given by the acid-side equivalence volume.
In the sample data illustrated in Figure 1, this offset correction was not insignificant, at -0.054 pH units.
The value of , however, may deviate from its theoretical value and can only be assessed by a proper calibration of the electrode. Calibration of an electrode is often performed using buffers of known pH, or by performing a titration of strong acid with strong base. In that case, a constant ionic strength can be maintained, and is known at all titration points if both and are known (and should be directly related to primary standards). For instance, Martell and Motekaitis (1992) calculated the pH value expected at the start of the titration, having earlier titrated the acid and base solutions against primary standards, then adjusted the pH electrode reading accordingly, but this does not afford a slope correction if one is needed.
Based on earlier work by McBryde (1969), Gans and O'Sullivan (2000) describe an iterative approach to arrive at both E0 and s values in the relation E = E0 + s log[H+], from a titration of strong acid by strong base:

1. with s set at its theoretical value, compute the acid-side Gran function, whose slope gives an initial estimate of E0 and whose x-intercept gives the equivalence volume;

2. compute [H+] at each point before equivalence from the equivalence volume and the known concentrations;

3. refine E0 and s by linear regression of the measured potentials against log[H+], and repeat from step 1 until the values converge.
The procedure could in principle be modified for titrations of base by acid. A computer program named GLEE (for GLass Electrode Evaluation) implements this approach on titrations of acid by base for electrode calibration. This program additionally can compute (by a separate, non-linear least-squares process) a 'correction' for the base concentration. An advantage of this method of electrode calibration is that it can be performed in the same medium of constant ionic strength which may later be used for the determination of equilibrium constants.
Note that the regular Gran functions will provide the required equivalence volumes and, as s is initially set at its theoretical value, the initial estimate for E0 in step 1 can be had from the slope of the regular acid-side Gran function as detailed earlier. Note too that this procedure computes the CO2 content and can indeed be combined with a complete standardization of the base, using the equivalence volume to compute the unknown concentration. Finally, the usable pH range could be extended by solving the quadratic for [H+].
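A minimal Python sketch of an iterative calibration of this general kind (not the GLEE implementation itself; synthetic strong acid-strong base data, with carbonate and activity effects ignored) might look like this:

    import numpy as np

    def calibrate(E, v, V0, Cb, s=59.2, n_iter=10):
        """Iteratively refine E0 and s (mV, mV/decade) from e.m.f. readings E
        taken before equivalence while titrating an acid with base."""
        for _ in range(n_iter):
            F = (V0 + v) * 10.0**(E / s)        # acid-side Gran function
            a, b = np.polyfit(v, F, 1)
            ve = -b / a                         # current equivalence-volume estimate
            h = Cb * (ve - v) / (V0 + v)        # [H+] from the mass balance
            s, E0 = np.polyfit(np.log10(h), E, 1)   # regress E on log10[H+]
        return E0, s, ve

    # synthetic demo with "unknown" electrode parameters E0 = 412.0, s = 58.1
    V0, Ca, Cb = 50.0, 0.010, 0.100
    v = np.linspace(0.5, 4.0, 8)
    E = 412.0 + 58.1 * np.log10((Ca * V0 - Cb * v) / (V0 + v))
    print(calibrate(E, v, V0, Cb))              # recovers ~ (412.0, 58.1, 5.0)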
Potentiometric monitoring of other species
Potentiometric data are also used to monitor species other than H+. When monitoring some species S by potentiometry, one can apply the same formalism with E = E0 + s log[S]. Thus, a titration of a solution of another species T by species S is analogous to a pH-monitored titration of base by acid, whence either (v0 + v) 10^(E/s) or (v0 + v) 10^(−E/s) plotted versus v will have an x-intercept at the equivalence volume ve = CT v0/CS. In the opposite titration of S by T, the equivalence volume will be CS v0/CT. The significance of the slopes will depend on the interactions between the two species, whether associating in solution or precipitating together (Gran, 1952). Usually, the only result of interest is the equivalence point. However, the before-equivalence slope could in principle be used to assess the solubility product in the same way as Kw can be determined from acid-base titrations, although other ion-pair association interactions may be occurring as well.
To illustrate, consider a titration of Cl− by Ag+ monitored potentiometrically, with E = E0 + s log[Ag+] and Ksp = [Ag+][Cl−], for a chloride solution of initial volume v0 and concentration CCl titrated with silver at concentration CAg:

Hence,

a plot of (v0 + v) 10^(−E/s) against v will have a linear region before equivalence, with slope −10^(−E0/s) CAg/Ksp

and a plot of (v0 + v) 10^(E/s) against v will have a linear region after equivalence, with slope 10^(E0/s) CAg

in both plots, the x-intercept is the equivalence volume ve = CCl v0/CAg
Figure 3 gives sample plots of potentiometric titration data.
Non-ideal behaviour
In any titration lacking buffering components, both before-equivalence and beyond-equivalence plots should ideally cross the x axis at the same point. Non-ideal behaviour can result from measurement errors (e.g. a poorly calibrated electrode, an insufficient equilibration time before recording the electrode reading, drifts in ionic strength), sampling errors (e.g. low data densities in the linear regions) or an incomplete chemical model (e.g. the presence of titratable impurities such as carbonate in the base, or incomplete precipitation in potentiometric titrations of dilute solutions, for which Gran et al. (1981) propose alternate approaches). Buffle et al. (1972) discuss a number of error sources.
Because the 10^(−pHi) and 10^(pHi) terms in the Gran functions only asymptotically tend toward, and never reach, the x axis, curvature approaching the equivalence point is to be expected in all cases. However, there is disagreement among practitioners as to which data to plot, whether using only data on one side of equivalence or on both sides, and whether to select data nearest equivalence or in the most linear portions; Butler (1991) discusses the issue of data selection, and also examines interferences from titratable impurities such as borate and phosphate. Using the data nearest the equivalence point will enable the two x-intercepts to be more coincident with each other and to better coincide with estimates from derivative plots, while using acid-side data in an acid-base titration presumably minimizes interference from titratable (buffering) impurities, such as bicarbonate/carbonate in the base (see Carbonate content), and the effect of a drifting ionic strength. In the sample plots displayed in the Figures, the most linear regions (the data represented by filled circles) were selected for the least-squares computations of slopes and intercepts. Data selection is always subjective.
References
Buffle, J., Parthasarathy, N. and Monnier, D. (1972): Errors in the Gran addition method. Part I. Theoretical Calculation of Statistical Errors; Anal. Chim. Acta 59, 427-438; Buffle, J. (1972): Anal. Chim. Acta 59, 439.
Butler, J. N. (1991): Carbon Dioxide Equilibria and Their Applications; CRC Press: Boca Raton, FL.
Butler, J. N. (1998): Ionic Equilibrium: Solubility and pH Calculations; Wiley-Interscience. Chap. 3.
Gans, P. and O'Sullivan, B. (2000): GLEE, a new computer program for glass electrode calibration; Talanta, 51, 33–37.
Gran, G. (1950): Determination of the equivalence point in potentiometric titrations, Acta Chemica Scandinavica, 4, 559-577.
Gran, G. (1952): Determination of the equivalence point in potentiometric titrations—Part II, Analyst, 77, 661-671.
Gran, G., Johansson, A. and Johansson, S. (1981): Automatic Titration by Stepwise Addition of Equal Volumes of Titrant Part VII. Potentiometric Precipitation Titrations, Analyst, 106, 1109-1118.
Harris, D. C.: Quantitative Chemical Analysis, 5th Ed.; W.H. Freeman & Co., New. York, NY, 1998.
Martell, A. E. and Motekaitis, R. J.: The determination and use of stability constants, Wiley-VCH, 1992.
McBryde, W. A. E. (1969): Analyst, 94, 337.
Rossotti, F. J. C. and Rossotti, H. (1965): J. Chem. Ed., 42, 375.
Skoog, D. A., West, D. M., Holler, F. J. and Crouch, S. R. (2003): Fundamentals of Analytical Chemistry: An Introduction, 8th Ed., Brooks and Cole, Chap. 37.
Stumm, W. and Morgan, J. J. (1981): Aquatic chemistry, 2nd Ed.; John Wiley & Sons, New York.
Notes
Plots (graphics)
Analytical chemistry
Titration | Gran plot | Chemistry | 3,328 |
33,716,371 | https://en.wikipedia.org/wiki/Galileo%20%28operating%20system%29 | Galileo was an unreleased 32-bit operating system that was under development by Acorn Computers as a long-term project to produce "an ultra-modern scalable, portable, multi-tasking, multi-threading, object-oriented, microkernel operating system", reportedly significant enough to Acorn's strategy to warrant a statement to the financial markets.
Announced in early 1997 as targeting "the next generation of smart appliances", running initially on ARM architecture devices but intended to be easily portable to "other RISC processors" (or even "a range of RISC and CISC processors"), the system was promoted with emphasis on its quality of service features that would guarantee system resources to critical tasks, as well as on its reliability, its sophistication relative to RISC OS (which was described as "too primitive to succeed as a 21st century operating system"), and its small footprint that would "enable Acorn to compete in the semi-embedded systems market". However, the system's "modular object-oriented" architecture gave it the scalability to potentially be deployed in devices ranging from "multimedia cellular phones" and network computers to desktop workstations and server platforms.
Features
The operating system was to offer an "innovative modular real time kernel", also described as a microkernel with a hardware abstraction layer, having a footprint of only 15 KB. The kernel itself supported preemptive multitasking, being "multi-threaded and fully pre-emptive", and was portable through extensive high-level language use (an estimated 95% of the code) in conjunction with the hardware abstraction layer. Kernel responsibilities included memory allocation, interrupt handling, direct memory access services, scheduling, and the resource allocation required by the quality of service functionality.
Systems using Galileo were to be able to leverage the modularity of the software architecture to deliver a "complete customisable software stack" that could be deployed in ROM, with system modules and applications being executed in-place to reduce RAM requirements. The architecture was also meant to allow additional components, such as multimedia codecs or network stacks, to be downloaded and deployed without the need to restart the system. It was noted that "virtually all Galileo tasks run in user mode", with "complete memory and CPU usage protection" enforced to uphold the quality of service regime.
The inclusion of quality of service features was intended to "eliminate the need for dedicated multimedia chips" in consumer-level Internet appliances, particularly those chips concerned with video compression and decompression that might instead be implemented in software, thus helping manufacturers to reduce system costs below an anticipated target given of $100 by 1998. Such objectives were to be achieved through collaboration with system-on-chip manufacturers, with a specific collaboration in progress mentioned in early 1997, and with "companies such as Hitachi" expected to release suitable hardware in 1998.
Fate
The operating system was scheduled to be the successor of RISC OS, although Acorn envisaged RISC OS remaining relevant for "high functionality ARM based devices" in the short to medium term, with Galileo being aimed at "portable and networked interactive media devices". Early versions for existing Acorn customers were anticipated by the second half of 1997, and the Galileo kernel was stated as having been "up and running" as a prototype, but the project was cancelled when the workstation division closed as part of Acorn's restructuring in 1998.
The commercial potential of Galileo had been put into some doubt by the announcement of the Symbian alliance which established Psion's EPOC operating system as the basis of a mobile communications platform to be adopted by Nokia and Ericsson, with Motorola having also announced a commitment to the initiative. Despite Galileo promising to be "technically better" than EPOC, the comparative readiness of the two offerings was summarised in one publication's remark that "EPOC has started the race while Galileo is still in the pits with its engine in bits". Nevertheless, at that time, hopes were expressed for opportunities for the product in set-top boxes and network computers.
References
Acorn Computers operating systems
ARM operating systems | Galileo (operating system) | Technology | 835 |
22,585,248 | https://en.wikipedia.org/wiki/Gennadi%20Sardanashvily | Gennadi Sardanashvily (March 13, 1950 – September 1, 2016) was a theoretical physicist, a principal research scientist of Moscow State University.
Biography
Gennadi Sardanashvily graduated from Moscow State University (MSU) in 1973, he was a Ph.D. student of the Department of Theoretical Physics (MSU) in 1973–76, where he held a position in 1976.
He attained his Ph.D. degree in physics and mathematics from MSU, in 1980, with Dmitri Ivanenko as his supervisor, and his D.Sc. degree in physics and mathematics from MSU, in 1998.
Gennadi Sardanashvily was the founder and Managing Editor (2003 - 2013) of the International Journal of Geometric Methods in Modern Physics (IJGMMP).
He was a member of Lepage Research Institute (Slovakia).
Research area
Gennadi Sardanashvily's research area was geometric methods in classical and quantum mechanics and field theory, and gravitation theory. His main achievement is the geometric formulation of classical field theory and non-autonomous mechanics, including:
gauge gravitation theory, where gravity is treated as a classical Higgs field associated to a reduced Lorentz structure on a world manifold
geometric formulation of classical field theory and Lagrangian BRST theory where classical fields are represented by sections of fiber bundles and their dynamics is described in terms of jet manifolds and the variational bicomplex (covariant classical field theory)
covariant (polysymplectic) Hamiltonian field theory, where momenta correspond to derivatives of fields with respect to all world coordinates
the second Noether theorem in a very general setting of reducible degenerate Grassmann-graded Lagrangian systems on an arbitrary manifold
geometric formulation of classical and quantum non-autonomous mechanics on fiber bundles over the time axis ℝ
generalization of the Liouville–Arnold, Nekhoroshev and Mishchenko–Fomenko theorems on completely and partially integrable and superintegrable Hamiltonian systems to the case of non-compact invariant submanifolds
cohomology of the variational bicomplex of graded differential forms of finite jet order on an infinite order jet manifold.
Gennadi Sardanashvily published more than 400 scientific works, including 28 books.
References
External links
Personal page at Moscow State University (in Russian)
Gennadi Sardanashvily's personal site
Gennadi Sardanashvily's site at Google
Scientific Biography
List of publications at ResearchGate
1950 births
2016 deaths
Russian physicists
Academic staff of Moscow State University
Theoretical physicists
Moscow State University alumni | Gennadi Sardanashvily | Physics | 558 |
827,069 | https://en.wikipedia.org/wiki/Bandsaw | A bandsaw (also written band saw) is a power saw with a long, sharp blade consisting of a continuous band of toothed metal stretched between two or more wheels to cut material. They are used principally in woodworking, metalworking, and lumbering, but may cut a variety of materials. Advantages include uniform cutting action as a result of an evenly distributed tooth load, and the ability to cut irregular or curved shapes like a jigsaw. The minimum radius of a curve is determined by the width of the band and its kerf. Most bandsaws have two wheels rotating in the same plane, one of which is powered, although some may have three or four to distribute the load. The blade itself can come in a variety of sizes and tooth pitches (teeth per inch, or TPI), which enables the machine to be highly versatile and able to cut a wide variety of materials including wood, metal and plastic. Bandsaws are also recommended for cutting metal because they produce far fewer toxic fumes and particulates than an angle grinder or a reciprocating saw.
Almost all bandsaws today are powered by an electric motor. Line shaft versions were once common but are now antiques.
History
The idea of the bandsaw dates back to at least 1809, when William Newberry received a British patent for it, but bandsaws remained impractical largely because of the inability to produce accurate and durable blades using the technology of the day. Constant flexing of the blade over the wheels caused either the material or the joint welding it into a loop to fail.
Nearly 40 years passed before Frenchwoman Anne Paulin Crepin devised a welding technique overcoming this hurdle. She applied for a patent in 1846, and soon afterward sold the right to employ it to manufacturer A. Perin & Company of Paris. Combining this method with new steel alloys and advanced tempering techniques allowed Perin to create the first modern bandsaw blade.
The first American bandsaw patent was granted to Benjamin Barker of Ellsworth, Maine, in January 1836. The first factory produced and commercially available bandsaw in the U.S. was by a design of Paul Prybil.
Power hacksaws (with reciprocating blades) were once common in the metalworking industries, but bandsaws and cold saws have mostly displaced them.
Types
Residential and light industry
Many workshops in residential garages or basements and in light industry contain small or medium-sized bandsaws that can cut wood, metal, or plastic. Often a general-purpose blade is left in place, although blades optimized for wood or metal can be switched out when volume of use warrants. Most residential and commercial bandsaws are of the vertical type mounted on a bench or a cabinet stand. Portable power tool versions, including cordless models, are also common in recent decades, allowing building contractors to bring them along on the truck to the jobsite.
Meat cutting
Saws for cutting meat are typically of all stainless steel construction with easy to clean features. The blades either have fine teeth with heat treated tips, or have plain or scalloped knife edges.
Metal fabrication shop and machine shop models
Bandsaws dedicated to industrial metal-cutting use, such as for structural steel in fabrication shops and for bar stock in machine shops, are available in vertical and horizontal designs. Typical band speeds vary with the material being cut, although specialized bandsaws built for friction cutting of hard metals run at much higher band speeds. Metal-cutting bandsaws are usually equipped with brushes or brushwheels to prevent chips from becoming stuck in between the blade's teeth. Systems which cool the blade with cutting fluid are also common equipment on metal-cutting bandsaws. The coolant washes away swarf and keeps the blade cool and lubricated.
Horizontal bandsaws hold the workpiece stationary while the blade swings down through the cut. This configuration is used to cut long materials such as pipe or bar stock to length. Thus it is an important part of the facilities in most machine shops. The horizontal design is not useful for cutting curves or complicated shapes. Small horizontal bandsaws typically employ a gravity feed alone, retarded to an adjustable degree by a coil spring; on industrial models, the rate of descent is usually controlled by a hydraulic cylinder that bleeds through an adjustable valve. When the saw is set up for a cut, the operator raises the saw, positions the material to be cut underneath the blade, and then turns on the saw. The blade slowly descends into the material, cutting it as the band blade moves. When the cut is complete, a switch is tripped and the saw automatically turns off. More sophisticated versions of this type of saw are partially or entirely automated (via PLC or CNC) for high-volume cutting of machining blanks. Such machines provide a stream of cutting fluid recirculated from a sump, in the same manner that a CNC machining center does.
A vertical bandsaw, also called a contour saw, keeps the blade's path stationary while the workpiece is moved across it. This type of saw can be used to cut out complex shapes and angles. The part may be fed into the blade manually or with a power assist mechanism. This type of metal-cutting bandsaw is often equipped with a built-in blade welder. This not only allows the operator to repair broken blades or fabricate new blades quickly, but also allows for the blade to be purposely cut, routed through the center of a part, and re-welded in order to make interior cuts. These saws are often fitted with a built-in air blower to cool the blade and to blow chips away from the cut area giving the operator a clear view of the work. This type of saw is also built in a woodworking version. The woodworking type is generally of much lighter construction and does not incorporate a power feed mechanism, coolant, or welder.
Advancements have also been made in the bandsaw blades used to cut metals. Bimetal blades with high speed steel teeth, including cobalt grades, are now the norm. The development of new tooth geometries and tooth pitches has produced increased production rates and greater blade life. New materials and processes such as M51 steel and the cryogenic treatment of blades have produced results that were thought impossible just a few years ago. New machines have been developed to automate the welding process of bandsaw blades as well.
Timber cutting
Timber mills use very large bandsaws for ripping lumber; they are preferred over circular saws for ripping because they can accommodate large-diameter timber and because of their smaller kerf (cut size), resulting in less waste.
There are also small portable sawmills consisting of a shop-size bandsaw mounted on a guiding table, which are called bandsaw mills (band saw mills, band sawmills). Like chain saw mills (a chainsaw on a guiding table), they can be used inexpensively by one or two people out in the field.
In a full-size sawmill, the blades are mounted on wheels with a diameter large enough not to cause metal fatigue due to flexing when the blade repeatedly changes from a circular to a straight profile. It is stretched very tight (with fatigue strength of the saw metal being the limiting factor). Bandsaws of this size need to have a deformation worked into them that counteracts the forces and heating of operation. This is called "benching". They also need to be removed and serviced at regular intervals. Sawfilers or sawdoctors are the craftsmen responsible for this work.
The shape of the tooth gullet is highly optimized and designed by the sawyer and sawfiler. It varies according to the mill, as well as the type and condition of the wood. Frozen logs often require a "frost notch" ground into the gullet to break the chips. The shape of the tooth gullet is created when the blade is manufactured and its shape is automatically maintained with each sharpening. The sawfiler will need to maintain the grinding wheel's profile with periodic dressing of the wheel.
Proper tracking of the blade is crucial to accurate cutting and considerably reduces blade breakage. The first step to ensuring good tracking is to check that the two bandwheels or flywheels are co-planar. This can be done by placing a straightedge across the front of the wheels and adjusting until each wheel touches it. Rotate the wheels with the blade in position and properly tensioned and check that the tracking is correct. Then install the blade guide rollers, leaving a gap of about 1 mm between the back of the blade and the guide flange. The teeth of blades that have become narrow through repeated sharpening will foul the front edge of the guide rollers due to their kerf set and force the blade out of alignment. This can be remedied by cutting a small step on the rollers' front edges to accommodate the protruding teeth. Ideally the rollers should be crowned (see belt and pulley systems), a configuration that assists in the proper tracking of bands and belts while still allowing clearance for the set of the teeth.
Head saws
Head saws are large bandsaws that make the initial cuts in a log. They generally have widely spaced teeth on the cutting edge and sliver teeth on the back. Sliver teeth are non-cutting teeth designed to wipe slivers out of the way when the blade needs to back out of a cut.
Resaws
A resaw is a large bandsaw optimized for cutting timber along the grain to reduce larger sections into smaller sections or veneers. Resawing veneers requires a wide blade with a small kerf to minimize waste. Wide resaw blades can also be fitted to a standard bandsaw.
Double cut saws
Double cut saws have cutting teeth on both sides. They are generally very large, similar in size to a head saw.
Construction
Feed mechanisms
Gravity feed saws fall under their own weight. Most such saws have a method to allow the cutting force to be adjusted, such as a movable counterbalancing weight, a coil spring with a screw-thread adjustment, or a hydraulic or pneumatic damper (speed control valve). The latter does not force the blade downwards, but rather simply limits the speed at which the saw can fall, preventing excessive feed on thin or soft parts. This is analogous to door closer hardware whose damping action keeps the door from slamming. Gravity feed designs are common in small saws.
Hydraulic feed saws use a positive pressure hydraulic piston to advance the saw through the work at variable pressure and rate. Common in production saws.
Screw feed saws employ a leadscrew to move the saw.
Fall mechanisms
Pivot saws hinge in an arc as they advance through the work.
Single column saws have a large diameter column that the entire saw rides up and down on, very similar to a drill press.
Dual column saws have a pair of large columns, one on either side of the work, for very high rigidity and precision. Because of their inherent design, dual column saws cannot make use of a miter base. They are the largest variety of machine bandsaws encountered, to the point where some make use of a rotary table and X axis to perform complex cutting.
Automated saws
Automatic bandsaws feature preset feed rate, return, fall, part feeding, and part clamping. These are used in production environments where having a machine operator per saw is not practical. One operator can feed and unload many automatic saws.
Some automatic saws rely on numerical control to not only cut faster, but to be more precise and perform more complex miter cuts.
Common tooth forms
Precision blade gives accurate cuts with a smooth finish.
Buttress blade provides faster cutting and large chip loads.
Claw tooth blade gives additional clearance for fast cuts in soft material.
At least two teeth must be in contact with the workpiece at all times to avoid stripping off the teeth.
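The two-teeth rule lends itself to a quick pitch check. The sketch below is illustrative only: the function names and example dimensions are invented for the example, and real blade selection also depends on the material and feed rate.

```python
# Rough check of the "at least two teeth in contact" rule.

def teeth_in_contact(thickness_mm: float, tpi: float) -> float:
    """Approximate number of teeth engaged in a workpiece of the given
    thickness, for a blade with `tpi` teeth per inch."""
    return thickness_mm * tpi / 25.4        # 25.4 mm per inch

def min_tpi(thickness_mm: float, min_teeth: int = 2) -> float:
    """Coarsest pitch (lowest TPI) that keeps `min_teeth` engaged."""
    return min_teeth * 25.4 / thickness_mm

# Example: 3 mm thin-wall tubing with a 10 TPI blade.
print(f"{teeth_in_contact(3.0, 10):.1f} teeth in contact")  # ~1.2: too coarse
print(f"need at least {min_tpi(3.0):.0f} TPI")              # ~17: pick an 18 TPI blade
```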
See also
Chainsaw
Bandsaw box
Portable sawmill
Band knife
Wire saw
References
Bibliography
Duginske, Mark (1989). The Bandsaw Handbook. Sterling Publishing.
External links
Cutting machines
Metalworking cutting tools
Saws
Woodworking machines | Bandsaw | Physics,Technology | 2,516 |
11,421,666 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20R43 | In molecular biology, Small nucleolar RNA R43 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA R43 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
Plant snoRNA R43 was identified in a screen of Arabidopsis thaliana.
References
External links
Small nuclear RNA | Small nucleolar RNA R43 | Chemistry | 195 |
37,953,311 | https://en.wikipedia.org/wiki/Allmyapps | Allmyapps was an application store for Microsoft Windows. The service allowed users to install, update and organize over 1,500 PC applications.
History
Allmyapps developed an application manager for Ubuntu before entering Microsoft's IDEEs program and receiving 1 million euro from Elaia Partners in 2010. The company launched the first Windows application store as a beta version at LeWeb in December 2010, for which it won the Startup Pitch competition. In December 2011, Allmyapps announced that they had 2.5 million registered users. On October 21, 2014, Allmyapps was bought by ironSource.
Features
Allmyapps supported the Windows 8, Windows 7, Windows Vista and Windows XP operating systems. The service let users initiate the installation of applications from its website and from its desktop client. The desktop client could detect software already installed on the computer in order to update it via the application store and to save the list of installed applications online as a backup.
References
External links
Allmyapps App Store for Windows
Freeware
Software distribution platforms
2010 software | Allmyapps | Technology | 217 |
15,845,985 | https://en.wikipedia.org/wiki/E%E2%88%9E-operad | In the theory of operads in algebra and algebraic topology, an E∞-operad is a parameter space for a multiplication map that is associative and commutative "up to all higher homotopies". (An operad that describes a multiplication that is associative but not necessarily commutative "up to homotopy" is called an A∞-operad.)
Definition
For the definition, it is necessary to work in the category of operads with an action of the symmetric group. An operad E is said to be an E∞-operad if all of its spaces E(n) are contractible; some authors also require the action of the symmetric group Sn on E(n) to be free. In categories other than topological spaces, the notion of contractibility has to be replaced by suitable analogs, such as acyclicity in the category of chain complexes.
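For readers who want the axioms spelled out, the following LaTeX fragment sketches the standard definition. The notation (γ for composition, Σn for the symmetric group, EΣn and BΣn for the universal bundle and classifying space) follows common usage but is not fixed across the literature.

```latex
% Sketch: an operad E in topological spaces consists of spaces E(n),
% a unit in E(1), right \Sigma_n-actions on each E(n), and composition maps
\[
  \gamma \colon E(k) \times E(n_1) \times \cdots \times E(n_k)
         \longrightarrow E(n_1 + \cdots + n_k)
\]
% satisfying associativity, unit and equivariance axioms.
% E is an $E_\infty$-operad when each E(n) is contractible,
\[
  E(n) \simeq \ast \quad \text{for all } n,
\]
% and (for many authors) the \Sigma_n-action on E(n) is free, in which
% case E(n) is a model for $E\Sigma_n$ and the quotient is a classifying space:
\[
  E(n)/\Sigma_n \simeq B\Sigma_n .
\]
```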
En-operads and n-fold loop spaces
The letter E in the terminology stands for "everything" (meaning associative and commutative), and the infinity symbol indicates that commutativity is required up to "all" higher homotopies. More generally, there is a weaker notion of En-operad (n ∈ N), parametrizing multiplications that are commutative only up to a certain level of homotopies. In particular,
E1-spaces are A∞-spaces;
E2-spaces are homotopy commutative A∞-spaces.
The importance of En- and E∞-operads in topology stems from the fact that iterated loop spaces, that is, spaces of continuous maps from an n-dimensional sphere to another space X starting and ending at a fixed base point, constitute algebras over an En-operad. (One says they are En-spaces.) Conversely, any connected En-space X is an n-fold loop space on some other space (called BnX, the n-fold classifying space of X).
Examples
The most obvious, if not particularly useful, example of an E∞-operad is the commutative operad c given by c(n) = *, a point, for all n. Note that according to some authors, this is not really an E∞-operad because the Sn-action is not free. This operad describes strictly associative and commutative multiplications. By definition, any other E∞-operad has a map to c which is a homotopy equivalence.
The operad of little n-cubes or little n-disks is an example of an En-operad that acts naturally on n-fold loop spaces.
See also
operad
A-infinity operad
loop space
References
Abstract algebra
Algebraic topology | E∞-operad | Mathematics | 591 |
7,854,229 | https://en.wikipedia.org/wiki/Vexillography | Vexillography ( ) is the art and practice of designing flags; a person who designs flags is a vexillographer. Vexillography is allied with vexillology, the scholarly study of flags, but is not synonymous with that discipline.
Background of flag design
Flag designs exhibit a number of regularities, arising from a variety of practical concerns, historical circumstances, and cultural prescriptions that have shaped and continue to shape their evolution.
Vexillographers face the necessity for the design to be manufactured (and often mass-produced) into or onto a piece of cloth, which will subsequently be hoisted aloft in the outdoors to represent an organization, individual, idea, or group. In this respect, flag design departs considerably from logo design: logos are predominantly still images suitable for reading off a page, screen, or billboard, while flags are alternately draped and fluttering images, visible from a variety of distances and angles (including the reverse). The prevalence of simple bold colors and shapes in flag design attests to these practical issues.
Flag design has a history, and new designs often refer back to previous designs, effectively quoting, elaborating, or commenting upon them. Families of current flags may derive from a few common ancestors - as in the cases of the Pan-African colours, the Pan-Arab colors, the Pan-Slavic colors, the Nordic Cross flag and the Ottoman flag.
Certain cultures prescribe the proper design of their own flags, through heraldic or other authoritative systems. Prescription may be based on religious principles: see, for example, Islamic flags. Vexillographers have begun to articulate design principles, such as those jointly published by the North American Vexillological Association and the Flag Institute in their Guiding Principles of Flag Design.
Principles of design
In 2006, the North American Vexillological Association published a booklet titled "Good" Flag, "Bad" Flag to aid those wishing to design or re-design a flag. Taking a minimalist approach, the booklet lists five basic flag design principles which have become a standard reference in the vexillographer community. In 2014, the North American Vexillological Association, together with the Flag Institute, created an updated booklet titled The Commission's Report on the Guiding Principles of Flag Design, which addresses issues present in "Good" Flag, "Bad" Flag and goes into more depth on the ideas set out in the earlier booklet. The guidelines in this booklet can be summarized as follows:
Basics
Keep in mind the physics of a flag in flight when designing a flag
Simple designs are more easily remembered while complex ones are harder to recall and recreate
Flags should have distinctive designs that separate them from others
Designs and trends should be avoided if there is a possibility that they can date quickly
Color
Using fewer colors keeps designs simple and bold (2-3 colors are strongly recommended)
Contrast is important; use light on dark and dark on light
Modern printing techniques have made more shades of color available than previously, and this can be used advantageously
Designs should make the edge of a flag be well-defined so as to not get visually lost in the background of where it is flying
Gradients on flags (like the flag of Guatemala) make them look computer-generated and are difficult to sew or draw; avoid gradients
Structure
Charges are best placed in the canton, hoist, or center of a design as these are the most visually prominent areas
Flag designs are usually longer than they are tall
Having different designs on the obverse and reverse of a flag undermines recognition and increases cost of production
Devices
A single device should be used in a prominent position to ensure that people can recognize the flag whether it is in flight or at rest
When multiple devices are included, different background colors can be used to "anchor" the devices into the overall design
Devices should be stylized graphical representations as opposed to realistic drawings, so the flag can easily be recreated and recognized by anyone
Avoid text on flags; it is difficult to read while the flag is in flight and will appear backwards on the flag's reverse
Charges with directionality traditionally face towards the hoist, or flagpole
Seals, coats of arms, or logos are usually too complex to be used effectively on a flag, although exceptions exist
Symbolism
Symbols should be both distinct and representative
A flag should represent the totality of any given community as opposed to its individual parts
A flag should emphasize its own identity over higher-level groupings, otherwise distinctiveness is lost
Symbolism relating to other entities should only be used if there is a clear, direct relevance
Designers should avoid representing any particular reference in multiple ways, and instead try to make a single definitive reference
Prominent vexillographers
Columbano Bordalo Pinheiro, designer of the flag of Portugal
Luis and Sabino Arana, designers of the Ikurriña (the flag of the Basque Country)
Graham Bartram, designer of the flag of Tristan da Cunha and others
Manuel Belgrano, designer of the flag of Argentina
Frederick 'Fred' Brownell, designer of the flags of South Africa and Namibia
Ron Cobb, designer of the American Ecology Flag
John Eisenmann, designer of the flag of the U.S. state of Ohio
Mohamed Hamzah, designer of the flag of Malaya
Quamrul Hassan, designer of the flag of Bangladesh
Cedric Herbert, designer of the flag of the short-lived Zimbabwe Rhodesia
Francis Hopkinson, generally acknowledged designer of the American flag
Friedensreich Hundertwasser, designer of a koru flag, among others
Susan K. Huhume, designer of the flag of Papua New Guinea
Sharif Hussein, designer of the flag of the Arab Revolt
James I of England, designer of the first flag of Great Britain
Syed Amir-uddin Kedwaii, designer of the flag of Pakistan
Lu Haodong, designer of the Blue Sky with a White Sun flag of the Republic of China
Nicola Marschall, designer of the "Stars and Bars", the First National Flag of the Confederate States of America
John McConnell, designer of a flag of the Earth
Fredrik Meltzer, designer of the flag of Norway
Raimundo Teixeira Mendes, designer of the flag of Brazil
William Porcher Miles, designer of the battle flag of the Confederate States of America
Francisco de Miranda, designer of the flag of Venezuela, upon which the present flags of Colombia and Ecuador are based.
Theodosia Okoh, designer of the flag of Ghana
Christopher Pratt, designer of the flag of the Canadian province of Newfoundland and Labrador
Orren Randolph Smith, citizen of North Carolina who is co-credited as being the father of the "Stars and Bars" flag, along with Nicola Marschall.
Whitney Smith, designer of the flag of Guyana and other flags
George Stanley, designer of the flag of Canada
Joaquín Suárez, designer of the flag of Uruguay
Pingali Venkayya, designer of the flag of India
Robert Watt, designer of the flag of Vancouver, British Columbia, Canada
Oliver Wolcott Jr., designer of the flag of the United States Customs Service
Zeng Liansong, designer of the flag of the People's Republic of China
İsmet Güney, designer of the flag of Cyprus
Nguyen Huu Tien, designer of the flag of Vietnam
Gilbert Baker, designer of the rainbow flag symbol of the LGBT Movement
Alexander Baretich, designer of the Cascadian bioregional flag AKA Doug Flag
Ralph Eugene Diffendorfer, co-designer of the Christian Flag
Christopher Gadsden, designer of the Gadsden flag
Monica Helms, designer of the Transgender flag
Catherine Rebecca Murphy Winborne - the "Betsy Ross of the Confederacy" - also co-credited as the designer of the "Stars and Bars" flag
Adolf Hitler, designer of the flag of Nazi Germany, the Reichskriegsflagge and his personal standard
Betsy Ross, designer, according to legend, of the American flag during the American Revolution
Theodore Sizer, designed of the flag of St. Louis
Gerard Slevin, former Chief Herald of Ireland reputed to have helped design the flag of Europe.
Emilio Aguinaldo, 1st president of the Republic of the Philippines, along with the designer of the countries' flag.
Notes | Vexillography | Engineering | 1,663 |
23,453,085 | https://en.wikipedia.org/wiki/Crop%20weed | Crop weeds are weeds that grow amongst crops.
Despite the potential for some crop weeds to be used as a food source, many can also prove harmful to crops, both directly and indirectly. Crop weeds can inhibit the growth of crops, contaminate harvested crops and often spread rapidly. They can also host crop pests such as aphids, fungal rots and viruses. Cost increases and yield losses occur as a result. Striga, one of the main cereal crop weeds in Sub-Saharan Africa, commonly causes yield losses of 40–100% and accounts for around $7 billion in losses annually. Around 100 million hectares of land in Sub-Saharan Africa are affected by striga. Barnyard grass has been identified as a culprit in global rice yield losses and certain species have been known to mimic rice.
Examples of crop weeds include chickweed, barnyard grass, dandelion, striga and Japanese knotweed. A less commonly known crop weed is Abelmoschus ficulneus.
See also
Weed of cultivation
References
Weeds
Agricultural pests
Crops | Crop weed | Biology | 218 |
19,587,932 | https://en.wikipedia.org/wiki/Gao%20Xing | Gao Xing (born 1974) is a Chinese amateur astronomer from Ürümqi, Xinjiang, China (astronomer name: Gaoxing). He built Xingming Observatory (星明天文台) in 2006 and discovered Comet C/2008 C1 (Chen-Gao) on February 1, 2008 with Chen Tao from Jiangsu; Comet P/2009 L2 (Yang-Gao) on June 15, 2009 with Yang Rui from Hangzhou, Zhejiang; and Comet C/2015 F5 (SWAN-Xingming) on April 4, 2015 with Sun Guoyou from Wenzhou, Zhejiang. He has won the Edgar Wilson Award multiple times, including for 2008 and 2015.
On the night of February 26, 2009, he and his partner Sun Guoyou from Wenzhou discovered a nova in Sagittarius, toward the central part of the Galaxy. Gao reported the discovery to the International Astronomical Union on May 29 and acquired the identification. On the night of October 3, 2010, he and Sun Guoyou discovered a new supernova in NGC 5430. He has also discovered several SOHO comets and NEAT asteroids. He currently works as a physics teacher at the Urumqi No.1 High School. In 2017 he was awarded the Gordon Myers Amateur Achievement Award.
See also
Urumqi No.1 High School
References
External links
Gao Xing's Homepage
Living people
1974 births
Discoverers of comets
Discoverers of supernovae
Discoverers of minor planets
Discoveries by Gao Xing
21st-century Chinese astronomers
Amateur astronomers
Chinese schoolteachers
People from Ürümqi
Educators from Xinjiang
Scientists from Xinjiang | Gao Xing | Astronomy | 346 |
11,333,018 | https://en.wikipedia.org/wiki/Communities%20Directory | The Communities Directory, A Comprehensive Guide to Intentional Community provides listing of intentional communities primarily from North America but also from around the world. The Communities Directory has both an online and a print edition, which is published based on data from the website.
History
The first version of the Communities Directory appeared in issue #1 of Communities magazine in December 1972. In all, ten versions were published in the magazine over the next 18 years. The Fellowship for Intentional Community became publisher of the magazine in 1989, and in 1990 released the first self-contained book-format edition of the directory (also distributed to magazine subscribers, counted as double issue #77/78).
The Communities Directory is now in its 7th edition. Editions were published in 1990, 1995, 2000, 2005, 2007, 2010 and 2016. The production cycle has been shortened due to the online collection of data. The 4th edition lists 600 communities in North America and another 130 worldwide. The 5th edition lists almost 1250 communities worldwide.
There is also a companion video Visions of Utopia: Experiments in Sustainable Culture that outlines the history of intentional shared living and profiles a diverse cross-section of contemporary groups (external link included below).
The Online Communities Directory database is shared by members of the Intentional Community Data Collective which includes the Fellowship for Intentional Community and Coho/US's Cohousing Directory.
Publisher
The Communities Directory is published by Fellowship for Intentional Community, which also publishes the quarterly magazine Communities.
See also
Cohousing
Commune (intentional community)
Diggers and Dreamers
Ecovillage
Fellowship for Intentional Community
Intentional Community
List of intentional communities
References
External links
Video: Visions of Utopia
Cohousing Directory
Intentional Communities Wiki
Intentional Communities Wiki page – on Communities Directory
Urban planning
Rural community development | Communities Directory | Engineering | 349 |
39,097,531 | https://en.wikipedia.org/wiki/OpenDaylight%20Project | The OpenDaylight Project is a collaborative open-source project hosted by the Linux Foundation. The project serves as a platform for software-defined networking (SDN) for customizing, automating and monitoring computer networks of any size and scale.
History
On April 8, 2013, The Linux Foundation announced the founding of the OpenDaylight Project. The goal was to create a community-led and industry-supported open-source platform to accelerate adoption of, and innovation in, software-defined networking (SDN) and network functions virtualization (NFV). The project's founding members were Big Switch Networks, Brocade, Cisco, Citrix, Ericsson, IBM, Juniper Networks, Microsoft, NEC, Red Hat and VMware.
Reaction to the goals of open architecture and administration by The Linux Foundation have been mostly positive. While initial criticism centered on concerns that this group could be used by incumbent technology vendors to stifle innovation, most of the companies signed up as members do not sell incumbent networking technology.
Technical steering committee
For governance of the project, the technical steering committee (TSC) provides technical oversight. The TSC votes on major changes to the project. As of June 2022, the TSC includes:
Anil Belur (The Linux Foundation)
Cedric Ollivier (Orange)
Guillaume Lambert (Orange)
Ivan Hrasko (PANTHEON.tech)
Luis Gomez (Kratos)
Manoj Chokka (Verizon)
Robert Varga (PANTHEON.tech)
Venkatrangan Govindarajan (Rakuten Mobile)
Code Contributions
By 2015, user companies had begun participating in upstream development. The largest, actively contributing companies include PANTHEON.tech, Orange, Red Hat, and Ericsson. At the time of the Carbon release in May 2017, the project estimated that over 1 billion subscribers were accessing OpenDaylight-based networks, in addition to its usage within large enterprises.
There is a dedicated OpenDaylight Wiki, and mailing lists.
Technology
Projects
The platform is described as a modular, open-source platform for automating networks. Central to this modularity are more than 50 projects that address and extend the capabilities of networks managed by OpenDaylight. Each project has a formal structure, teams and meetings to discuss releases, functionality and code. Projects include BGPCEP, TransportPCE, NETCONF, YANG Tools, and others.
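As an illustration of how the platform's northbound interfaces are typically driven, the hypothetical sketch below queries a controller's RESTCONF API for its network topology. The URL path, port 8181, the admin/admin credentials, and the JSON layout (following the ietf network-topology model) are installation defaults from older releases and are assumptions here, not a definitive API reference.

```python
# Hypothetical sketch: list topologies known to an OpenDaylight controller
# via RESTCONF. Endpoint path, port, and credentials are assumed defaults
# and differ between releases and deployments.
import requests

ODL = "http://localhost:8181"          # assumed controller address
AUTH = ("admin", "admin")              # assumed default credentials

resp = requests.get(
    f"{ODL}/restconf/operational/network-topology:network-topology",
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Assumed response layout per the ietf network-topology YANG model.
for topo in resp.json()["network-topology"]["topology"]:
    print(topo.get("topology-id"), "-", len(topo.get("node", [])), "nodes")
```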
Releases
Releases are named after chemical elements, in order of increasing atomic number.
Members
Originally there were three tiers of membership for OpenDaylight: Platinum, Gold and Silver, with varying levels of commitment.
As of January 2018, OpenDaylight became a project within the LF Networking Foundation, which consolidated membership across multiple projects into a common governance structure. Most OpenDaylight members became members of the new LF Networking Foundation.
See also
List of SDN controller software
References
External links
Computer networking
Linux Foundation projects | OpenDaylight Project | Technology,Engineering | 603 |
44,647,971 | https://en.wikipedia.org/wiki/Institute%20of%20Physics%20Joseph%20Thomson%20Medal%20and%20Prize | The Thomson Medal and Prize is an award which has been made, originally only biennially in even-numbered years, since 2008 by the British Institute of Physics for "distinguished research in atomic (including quantum optics) or molecular physics". It is named after Nobel prizewinner Sir J. J. Thomson, the British physicist who demonstrated the existence of electrons, and comprises a silver medal and a prize of £1000.
Not to be confused with the J. J. Thomson IET Achievement Medal for electronics.
Medallists
The following have received a medal:
2024: Janne Ruostekoski, for outstanding contributions to the fundamental understanding of cooperative interactions between light and atomic ensembles, as well as for pioneering efforts in harnessing these interactions for applications.
2023: Ulrich Schneider, for groundbreaking experiments on the collective dynamics of quantum gases in optical lattices, including fundamental studies of localization effects in both disordered and quasicrystalline systems.
2022: Michael Tarbutt, for pioneering experimental and theoretical work on the production of ultracold molecules by laser cooling, and the applications of those molecules to quantum science and tests of fundamental physics.
2021: Carla Faria, for distinguished contributions to the theory of strong-field laser-matter interactions.
2020: Michael Charlton, for scientific leadership in antimatter science.
2019: , for outstanding contributions to experiments on ultra-cold atoms and molecules
2016: Jeremy M. Hutson, for his pioneering work on the theory of ultracold molecules
2014: Charles S Adams, for his imaginative experiments which have pioneered the field of Rydberg quantum optics
2012: , for his pioneering experimental work in Bose-Einstein condensates and cold Fermi gases
2010: , for her contributions to the development of the world's only positronium beam
2008: Edward Hinds, for his important and elegant experimental investigations in the fields of atomic physics and quantum optics
See also
Institute of Physics Awards
List of physics awards
List of awards named after people
References
Awards established in 2008
Awards of the Institute of Physics
Quantum optics
Atomic, molecular, and optical physics | Institute of Physics Joseph Thomson Medal and Prize | Physics,Chemistry | 423 |
41,977,979 | https://en.wikipedia.org/wiki/Breton%20%28company%29 | Breton S.p.A. is an Italian, privately held company established in 1963 that produces machines and plants for engineered stone and metalworking.
Machines and plants by Breton can be used in diverse sectors such as die-making, aerospace, automotive, racing cars, energy, gears, general mechanics, stone processing and kitchen top manufacturing.
History
Breton was established in 1963 in Castello di Godego, Italy, by Marcello Toncelli, who started developing new technologies and manufacturing industrial plants for producing engineered stone. He invented Bretonstone technology, also known as vibrocompression under vacuum, a patented technology which is used today by engineered stone manufacturers.
Around the mid-1990s, the company decided to expand into the machine tool market, manufacturing machining centres for the mechanical industry.
In 2003, Marcello Toncelli died, and the control of the company passed to his sons Luca and Dario Toncelli, who have been running the company since together with Roberto Chiavacci, Vice President of the board of directors.
In 2011, the company acquired Bidese Impianti and signed a partnership with Boart & Wire, a diamond wires manufacturer.
In 2014, Breton became an official member of the Graphene Flagship Project, one of the largest research initiatives of the European Commission, which focuses on the potential applications of graphene.
Products
Breton manufactures machines and technology for following fields:
engineered stone processing
natural stone processing
ceramic materials processing
high-speed machining for aerospace, formula 1, automotive and die-mould sector
Awards
Breton's solution to connect through the cloud to manage tele-service for 4,000 machines for hundreds of customers worldwide was awarded by Microsoft in 2012 with the Windows Embedded Partner Excellence Award for manufacturing.
Footnotes
External links
Company's official website
Timeline of Breton's history
Engineering companies of Italy
Technology companies of Italy
Italian brands
Industrial machine manufacturers
Manufacturing companies established in 1963
Italian companies established in 1963
Companies based in Veneto | Breton (company) | Engineering | 393 |
32,361,704 | https://en.wikipedia.org/wiki/Slip%20ratio%20%28gas%E2%80%93liquid%20flow%29 | Slip ratio (or velocity ratio) in gas–liquid (two-phase) flow, is defined as the ratio of the velocity of the gas phase to the velocity of the liquid phase.
In the homogeneous model of two-phase flow, the slip ratio is by definition assumed to be unity (no slip). It is however experimentally observed that the velocity of the gas and liquid phases can be significantly different, depending on the flow pattern (e.g. plug flow, annular flow, bubble flow, stratified flow, slug flow, churn flow). The models that account for the existence of the slip are called "separated flow models".
The following identities can be written using the interrelated definitions:

S = uG / uL = (UG / α) / (UL / (1 − α)) = (x / (1 − x)) · (ρL / ρG) · ((1 − α) / α)
where:
S – slip ratio, dimensionless
indices G and L refer to the gas and the liquid phase, respectively
u – velocity, m/s
U – superficial velocity, m/s
α – void fraction, dimensionless
ρ – density of a phase, kg/m3
x – steam quality, dimensionless.
Correlations for the slip ratio
There are a number of correlations for slip ratio.
For homogeneous flow, S = 1 (i.e. there is no slip).
The Chisholm correlation is:

S = [1 − x · (1 − ρL / ρG)]^(1/2)

The Chisholm correlation is based on application of the simple annular flow model and equates the frictional pressure drops in the liquid and the gas phase.
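A minimal numerical sketch of these relations is shown below. The function and variable names are invented for the example, and the fluid properties are illustrative round numbers for saturated water/steam near atmospheric pressure, not measured data.

```python
# Sketch: slip ratio via the Chisholm correlation, and the void fraction
# implied by the slip identity S = (x/(1-x)) * (rho_l/rho_g) * ((1-a)/a).

def chisholm_slip_ratio(x: float, rho_l: float, rho_g: float) -> float:
    """Chisholm correlation: S = [1 - x * (1 - rho_l / rho_g)]**0.5."""
    return (1.0 - x * (1.0 - rho_l / rho_g)) ** 0.5

def void_fraction(x: float, rho_l: float, rho_g: float, s: float) -> float:
    """Solve the slip identity for the void fraction alpha."""
    r = (x / (1.0 - x)) * (rho_l / rho_g) / s   # r = alpha / (1 - alpha)
    return r / (1.0 + r)

# Illustrative properties: saturated water/steam near atmospheric pressure.
rho_l, rho_g, x = 958.0, 0.6, 0.1               # kg/m3, kg/m3, quality
s = chisholm_slip_ratio(x, rho_l, rho_g)
alpha = void_fraction(x, rho_l, rho_g, s)
print(f"S = {s:.1f}, void fraction = {alpha:.3f}")  # S ≈ 12.7, alpha ≈ 0.93
```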
The slip ratio for two-phase cross-flow over horizontal tube bundles may be determined using the following correlation:

S = 1 + 25.7 · (Ri · Cap)^0.5 · (P / D)^(−1)

where the Richardson and capillary numbers are defined as Ri = (ρL − ρG)² · g · a / G² and Cap = (μ · uG) / σ.
A similar correlation can be defined for enhanced-surface tube bundles.
Where:

S – slip ratio, dimensionless
P – tube centerline pitch
D – tube diameter
Subscript L – liquid phase
Subscript G – gas phase
g – gravitational acceleration
a – minimum distance (gap) between the tubes
G – mass flux (mass flow per unit area)
μ – dynamic viscosity
σ – surface tension
x – thermodynamic quality
α – void fraction
References
Fluid dynamics | Slip ratio (gas–liquid flow) | Chemistry,Engineering | 403 |
27,949,833 | https://en.wikipedia.org/wiki/Diodes%20Incorporated | Diodes Incorporated is a global manufacturer and supplier of application specific standard products within the discrete, logic, analog, and mixed-signal semiconductor markets. Diodes serves the consumer electronics, computing, communications, industrial, and automotive markets.
Diodes' products include diodes, rectifiers, transistors, MOSFETs, protection devices, functional specific arrays, single gate logic, amplifiers and comparators, Hall effect and temperature sensors; power management devices, including LED drivers, AC-DC converters and controllers, DC-DC switching and linear voltage regulators, and voltage references along with special function devices, such as USB power switches, load switches, voltage supervisors, and motor controllers. Diodes Incorporated also has timing, connectivity, switching, and signal integrity solutions for high-speed signals. In January 2024 the company announced three dual-channel power-switches.
The company's product focus is on end-user equipment markets such as satellite TV set-top boxes, portable DVD players, datacom devices, ADSL modems, power supplies, medical devices (non-life support devices/systems), PCs and notebooks, flat panel displays, digital cameras, mobile handsets, AC-to-DC and DC-to-DC conversion, Wireless 802.11 LAN access points, brushless DC motor fans, serial connectivity, and automotive applications.
Over the years, Diodes Incorporated grew by acquiring other semiconductor companies. Notable acquisitions include Zetex Semiconductors (2008), Power Analog Microelectronics, Inc. (2012), Pericom Semiconductor (2015), Texas Instruments' Greenock wafer fabrication plant (2019), and Lite-On Semiconductor (2020).
On 3 June 2022, Diodes completed the acquisition of onsemi's South Portland wafer fabrication facility, known as SPFAB, and its operations. The acquisition also included the transfer of all of the facility's employees.
On 26 December 2023, Diodes announced that Gary Yu would become president as of 2 January 2024 and Dr. Keh Shew Lu will remain chairman and CEO until at least 31 May 2027.
References
Semiconductor companies of the United States
Equipment semiconductor companies
Manufacturing companies based in Texas
Companies based in Plano, Texas
American companies established in 1959
Electronics companies established in 1959
1959 establishments in Texas
Companies listed on the Nasdaq | Diodes Incorporated | Engineering | 481 |
43,822,252 | https://en.wikipedia.org/wiki/Self-interference%20cancellation | Self-interference cancellation (SIC) is a signal processing technique that enables a radio transceiver to simultaneously transmit and receive on a single channel, a pair of partially-overlapping channels, or any pair of channels in the same frequency band. When used to allow simultaneous transmission and reception on the same frequency, sometimes referred to as “in-band full-duplex” or “simultaneous transmit and receive,” SIC effectively doubles spectral efficiency. SIC also enables devices and platforms containing two radios that use the same frequency band to operate both radios simultaneously.
Self-interference cancellation has applications in mobile networks, the unlicensed bands, cable TV, mesh networks, the military, and public safety.
In-band full-duplex has advantages over conventional duplexing schemes. A frequency division duplexing (FDD) system transmits and receives at the same time by using two (usually widely separated) channels in the same frequency band. In-band full-duplex performs the same function using half of the spectrum resources. A time division duplexing (TDD) system operates half-duplex on a single channel, creating the illusion of full-duplex communication by rapidly switching back-and-forth between transmit and receive. In-band full-duplex radios achieve twice the throughput using the same spectrum resources.
Techniques
A radio transceiver cannot cancel out its own transmit signal based solely on knowledge of what information is being sent and how the transmit signal is constructed. The signal that the receiver sees is not entirely predictable. The signal that appears at the receiver is subject to varying delays. It consists of a combination of leakage (the signal traveling directly from the transmitter to the receiver) and local reflections. In addition, transmitter components (such as mixers and power amplifiers) introduce non-linearities that generate harmonics and noise. These distortions must be sampled at the output of the transmitter. Finally, the self-interference cancellation solution must detect and compensate for real-time changes caused by temperature variations, mechanical vibrations, and the motion of things in the environment.
The transmit signal can be cancelled out at the receiver by creating an accurate model of the signal and using it to generate a new signal that when combined with the signal arriving at the receiver leaves only the desired receive signal. The precise amount of cancellation required will vary depending on the power of the transmit signal that is the source of the self-interference and the signal-to-noise ratio (SNR) that the link is expected to handle in half-duplex mode. A typical figure for Wi-Fi and cellular applications is 110 dB of signal cancellation, though some applications require greater cancellation.
Cancelling a local transmit signal requires a combination of analog and digital electronics. The strength of the transmit signal can be modestly reduced before it reaches the receiver by using a circulator (if a shared antenna is used) or antenna isolation techniques (such as cross polarization) if separate antennas are used. The analog canceller is most effective at handling strong signals with a short delay spread. A digital canceller is most effective at handling weak signals with delays greater than 1,000 nanoseconds. The analog canceller should contribute at least 60 dB of cancellation. The digital canceller must process both linear and non-linear signal components, producing about 50 dB of cancellation. Both the analog and digital cancellers consist of a number of “taps” composed of attenuators, phase shifters, and delay elements. The cost, size, and complexity of the SIC solution is primarily determined by the analog stage. Also essential are the tuning algorithms that enable the canceller to adapt to rapid changes. Cancellation algorithms typically need to adapt at the rate of once every few hundred microseconds to keep up with changes in the environment.
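To make the digital stage concrete, the sketch below estimates a purely linear self-interference channel by least squares from known transmit samples, reconstructs the interference, and subtracts it at the receiver. It is a toy model on synthetic data: an 8-tap FIR channel stands in for leakage and reflections, and the non-linear terms and real-time retuning described above are omitted.

```python
# Toy digital self-interference canceller: least-squares FIR estimate.
import numpy as np

rng = np.random.default_rng(0)
n, taps = 10_000, 8

# Known transmit samples and an (unknown to the canceller) linear channel.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
h = 0.1 * (rng.standard_normal(taps) + 1j * rng.standard_normal(taps))

si = np.convolve(x, h)[:n]                       # self-interference at the receiver
noise = 1e-4 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rx = si + noise                                  # received signal (no remote signal here)

# Convolution matrix: column k holds x delayed by k samples.
X = np.zeros((n, taps), dtype=complex)
for k in range(taps):
    X[k:, k] = x[: n - k]

h_est, *_ = np.linalg.lstsq(X, rx, rcond=None)   # least-squares channel estimate
residual = rx - X @ h_est                        # subtract the reconstructed SI

def power_db(sig: np.ndarray) -> float:
    return 10.0 * np.log10(np.mean(np.abs(sig) ** 2))

print(f"digital cancellation: {power_db(rx) - power_db(residual):.1f} dB")
```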
SIC can also be employed to reduce or eliminate adjacent channel interference. This allows a device containing two radios (such as a Wi-Fi access Point with two 5 GHz radios) to use any pair of channels regardless of separation. Adjacent channel interference consists of two main components. The signal on the transmit frequency, known as the blocker, may be so strong that it desensitizes a receiver listening on an adjacent channel. A strong, local transmitter also produces noise that spills over onto the adjacent channel. SIC may be used to reduce both the blocker and the noise that might otherwise prevent use of an adjacent channel.
Applications
In-band full duplex
Transmitting and receiving on exactly the same frequency at exactly the same time has multiple purposes. In-band full duplex can potentially double spectral efficiency. It permits true full duplex operation where only a single frequency is available. And it enables “listen while talking” operation (see cognitive radio, below).
Integrated access and backhaul
Though most small cells are expected to be fed using fiber optic cable, running fiber isn't always practical. Reuse of the frequencies used by a small cell to communicate with users (“access”) for communication between the small cell and the network (“backhaul”) will be part of the 3GPP's 5G standards. When implemented using SIC, the local backhaul radio's transmit signal is cancelled out at the small cell's receiver, and the small cell's transmit signal is cancelled out at the local backhaul radio's receiver. No changes are required to the users’ devices or the remote backhaul radio. The use of SIC in this applications has been successfully field-tested by Telecom Italia Mobile and Deutsche Telekom.
Satellite repeaters
SIC enables satellite repeaters to extend coverage to indoor, urban canyon, and other locations by reusing the same frequencies. This type of repeater is essentially two radios connected back-to-back. One radio faces the satellite, while the other radio faces the area not in direct coverage. The two radios relay the signals (rather than store-and-forward data bits) and must be isolated from each other to prevent feedback. The satellite-facing radio listens to the satellite and must be isolated from the transmitter repeating the signal. Likewise, the indoor-facing radio listens for indoor users and must be isolated from the transmitter repeating their signals to the satellite. SIC may be used to cancel out each radio's transmit signal at the other radio's receiver.
Full-duplex DOCSIS 3.1
Cable networks have traditionally allocated most of their capacity to downstream transmissions. The recent growth in user-generated content calls for more upstream capacity. Cable Labs developed the Full Duplex DOCSIS 3.1 standard to enable symmetrical service at speeds up to 10 Gbit/s in each direction. In DOCSIS 3.1, different frequencies are allocated for upstream and downstream transmissions, separated by a guard band. Full Duplex DOCSIS establishes a new band allowing a mix of upstream and downstream channels on adjacent channels. The headend must support simultaneous transmission and reception across the full duplex band, which requires SIC technology. The cable modems are not required to transmit and receive on the same channels simultaneously, but they are required to use different combinations of upstream and downstream channels as instructed by the headend.
Wireless mesh networks
Mesh networks are used to extend coverage (to cover entire homes) and for ad-hoc networking (emergency communication). Wireless mesh networks use a mesh topology to provide the desired coverage. The data travels from one node to another until it reaches its destination. In mesh networks using a single frequency, the data is typically store-and-forwarded, with each hop adding a delay. SIC can enable wireless mesh nodes to reuse frequencies so that the data is retransmitted (relayed) as it is received. In mesh networks using multiple frequencies, such as whole-home Wi-Fi networks using “tri-band” routers, SIC can enable greater flexibility in channel selection. Tri-band routers have one 2.4 GHz and one 5 GHz radio to communicate with client devices, and a second 5 GHz radio that is used exclusively for internode communication. Most tri-band routers use the same pair of 80 MHz channels (at opposite ends of the 5 GHz band) to minimize interference. SIC can allow tri-band routers to use any of the six 80-MHz channels in the 5 GHz band for coordination both within networks and between neighboring networks.
Military communication
The military frequently requires multiple, high power radios on the same air, land, or sea platform for tactical communication. These radios must be reliable even in the face of interference and enemy jamming. SIC enables multiple radios to operate on the same platform at the same time. SIC also has potential applications in military and vehicular radar, allowing radar systems to transmit and receive continuously rather than constantly switching between transmit and receive, yielding higher resolution. These new capabilities have been recognized as a potential 'superpower' for armed forces that may bring about a paradigm shift in tactical communications and electronic warfare.
Spectrum sharing
National regulatory agencies, such as the Federal Communications Commission in the U.S., often address the need for more spectrum resources by permitting sharing of underutilized spectrum. For instance, billions of Wi-Fi and Bluetooth devices compete for access to the ISM bands. Smartphones, Wi-Fi routers, and smart home hubs frequently support Wi-Fi, Bluetooth, and other wireless technologies in the same device. SIC technology enables these devices to operate two radios in the same band at the same time. Spectrum sharing is a topic of great interest to the mobile phone industry as it begins to deploy 5G systems.
Cognitive radio
Radios that dynamically select idle channels to make more efficient use of finite spectrum resources are the subject of considerable research. Traditional spectrum sharing schemes rely on Listen-before-talk protocols. However, when two or more radios choose to transmit on the same channel at the same time there is a collision. Collisions take time to detect and resolve. SIC enables listen-while-talking, ensuring immediate detection and faster resolution of collisions.
See also
Cognitive radio
DOCSIS
Duplex (telecommunications)
Mesh networking
Successive Interference Cancellation
References
Y. Hua, Y. Ma, A. Gholian, Y. Li, A. Cirik, P. Liang, “Radio Self-Interference Cancellation by Transmit Beamforming, All-Analog Cancellation and Blind Digital Tuning,” Signal Processing, Vol. 108, pp. 322–340, 2015.
External links
3GPP Integrated access and backhaul
CableLabs Full Duplex DOSCIS 3.1
Harris Corporation
IEEE 802.11 Full duplex topic of interest group
Kumu Networks
Radio technology
Wireless networking
Radio resource management
Radiofrequency receivers
Telecommunications engineering | Self-interference cancellation | Technology,Engineering | 2,162 |
19,167,644 | https://en.wikipedia.org/wiki/Toilet | A toilet is a piece of sanitary hardware that collects human waste (urine and feces), and sometimes toilet paper, usually for disposal. Flush toilets use water, while dry or non-flush toilets do not. They can be designed for a sitting position popular in Europe and North America with a toilet seat, with additional considerations for those with disabilities, or for a squatting posture more popular in Asia, known as a squat toilet. In urban areas, flush toilets are usually connected to a sewer system; in isolated areas, to a septic tank. The waste is known as blackwater and the combined effluent, including other sources, is sewage. Dry toilets are connected to a pit, removable container, composting chamber, or other storage and treatment device, including urine diversion with a urine-diverting toilet.
The technology used for modern toilets varies. Toilets are commonly made of ceramic (porcelain), concrete, plastic, or wood. Newer toilet technologies include dual flushing, low flushing, toilet seat warming, self-cleaning, female urinals and waterless urinals. Japan is known for its toilet technology. Airplane toilets are specially designed to operate in the air. The need to maintain anal hygiene after defecation is universally recognized; toilet paper (often held by a toilet roll holder), which may also be used to wipe the vulva after urination, is widely used, as are bidets.
In private homes, depending on the region and style, the toilet may exist in the same bathroom as the sink, bathtub, and shower. Another option is to have one room for body washing (also called "bathroom") and a separate one for the toilet and handwashing sink (toilet room). Public toilets (restrooms) consist of one or more toilets (and commonly single urinals or trough urinals) which are available for use by the general public. Products like urinal blocks and toilet blocks help maintain the smell and cleanliness of toilets. Toilet seat covers are sometimes used. Portable toilets (frequently chemical "porta johns") may be brought in for large and temporary gatherings.
Historically, sanitation has been a concern from the earliest stages of human settlements. However, many poor households in developing countries use very basic, and often unhygienic, toilets – and nearly one billion people have no access to a toilet at all; they must openly defecate and urinate. These issues can lead to the spread of diseases transmitted via the fecal-oral route, or the transmission of waterborne diseases such as cholera and dysentery. Therefore, the United Nations Sustainable Development Goal 6 wants to "achieve access to adequate and equitable sanitation and hygiene for all and end open defecation".
Overview
The number of different types of toilets used worldwide is large, but can be grouped by:
Having water (which seals in odor) or not (which usually relates to e.g. flush toilet versus dry toilet)
Being used in a sitting or squatting position (sitting toilet versus squat toilet)
Being located in the private household or in public (toilet room versus public toilet)
Toilets can be designed for use in a standing posture (for urination) or in a sitting or squatting posture (for defecation). Each type has its benefits. The sitting toilet, however, is essential for those with impaired movement. Sitting toilets are often referred to as "western-style toilets", and are more convenient than squat toilets for people with disabilities and the elderly.
People use different toilet types based on the country that they are in. In developing countries, access to toilets is also related to people's socio-economic status. Poor people in low-income countries often have no toilets at all and resort to open defecation instead. This is part of the sanitation crisis which international initiatives (such as World Toilet Day) draw attention to.
With water
Flush toilet
A typical flush toilet is a ceramic bowl (pan) connected on the "up" side to a cistern (tank) that enables rapid filling with water, and on the "down" side to a drain pipe that removes the effluent. When a toilet is flushed, the sewage should flow into a septic tank or into a system connected to a sewage treatment plant. However, in many developing countries, this treatment step does not take place.
The water in the toilet bowl is connected to a pipe shaped like an upside-down U. One side of the U channel is arranged as a siphon tube longer than the water in the bowl is high. The siphon tube connects to the drain. The bottom of the drain pipe limits the height of the water in the bowl before it flows down the drain. The water in the bowl acts as a barrier to sewer gas entering the building. Sewer gas escapes through a vent pipe attached to the sewer line.
The amount of water used by conventional flush toilets usually makes up a significant portion of personal daily water usage. However, modern low flush toilet designs allow the use of much less water per flush. Dual flush toilets allow the user to select between a flush for urine or feces, saving a significant amount of water over conventional units. One type of dual flush system allows the flush handle to be pushed up for one kind of flush and down for the other, whereas another design is to have two buttons, one for urination and the other for defecation. In some places, users are encouraged not to flush after urination. Flushing toilets can be plumbed to use greywater (water that was previously used for washing dishes, laundry, and bathing) rather than potable water (drinking water). Some modern toilets pressurize the water in the tank, which initiates flushing action with less water usage.
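As a rough sense of scale, the sketch below compares annual per-person water use for a conventional versus a dual-flush toilet. All figures (flush volumes, flushes per day, share of light flushes) are illustrative assumptions for the example, not standards.

```python
# Illustrative comparison of conventional vs. dual-flush water use.
FLUSHES_PER_PERSON_PER_DAY = 5          # assumed
URINE_FRACTION = 0.8                    # assumed share of reduced flushes

def annual_litres_conventional(litres_per_flush: float = 9.0) -> float:
    return FLUSHES_PER_PERSON_PER_DAY * litres_per_flush * 365

def annual_litres_dual(full: float = 6.0, reduced: float = 3.0) -> float:
    per_day = FLUSHES_PER_PERSON_PER_DAY * (
        URINE_FRACTION * reduced + (1 - URINE_FRACTION) * full
    )
    return per_day * 365

print(f"conventional: {annual_litres_conventional():,.0f} L/yr")  # ~16,425
print(f"dual flush:   {annual_litres_dual():,.0f} L/yr")          # ~6,570
```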
Another variant is the pour-flush toilet. This type of flush toilet has no cistern but is flushed manually with a few liters of water from a small bucket, and can use very little water per flush. This type of toilet is common in many Asian countries. The toilet can be connected to one or two pits, in which case it is called a "pour flush pit latrine" or a "twin pit pour flush to pit latrine". It can also be connected to a septic tank.
Flush toilets on ships are typically flushed with seawater.
Twin pit designs
Twin pit latrines use two pits used alternatively, when one pit gets full over a few months or years. The pits are of an adequate size to accommodate a volume of waste generated over one or two years. This allows the contents of the full pit enough time to transform into a partially sanitized, soil-like material that can be manually excavated. There is a risk of groundwater pollution when pits are located in areas with a high or variable water table, and/or fissures or cracks in the bedrock.
Vacuum toilet
A vacuum toilet is a flush toilet that is connected to a vacuum sewer system, and removes waste by suction. They may use very little water (less than a quarter of a liter per flush) or none (as in waterless urinals). Some flush with coloured disinfectant solution rather than with water. They may be used to separate blackwater and greywater, and process them separately (for instance, the fairly dry blackwater can be used for biogas production, or in a composting toilet).
Passenger train toilets, aircraft lavatories, bus toilets, and ships with plumbing often use vacuum toilets. The lower water usage saves weight, and avoids water slopping out of the toilet bowl in motion. Aboard vehicles, a portable collection chamber is used; if it is filled by positive pressure from an intermediate vacuum chamber, it need not be kept under vacuum.
Floating toilet
A floating toilet is essentially a toilet on a platform built above or floating on the water. Instead of the excreta going into the ground, they are collected in a tank or barrel. To reduce the amount of excreta that needs to be hauled to shore, many use urine diversion. The floating toilet was developed for residents without quick access to land or connection to a sewer system. It is also used in areas subject to prolonged flooding. The need for this type of toilet is high in areas like Cambodia.
Without water
Pit latrine
Vault toilet
A vault toilet is a non-flush toilet with a sealed container (or vault) buried in the ground to receive the excreta, all of which is contained underground until it is removed by pumping. A vault toilet is distinguished from a pit latrine because the waste accumulates in the vault instead of seeping into the underlying soil.
Urine-diverting toilet
Portable toilet
Chemical toilet
Toilet fed to animals
The pig toilet, which consists of a toilet linked to a pigsty by a chute, is still in use to a limited extent. It was common in rural China, and was known in Japan, Korea, and India. The fish pond toilet depends on the same principle, of livestock (often carp) eating human excreta directly.
"Flying toilet"
Squat toilets
Usage
Urination
There are cultural differences in socially accepted and preferred voiding positions for urination around the world: in the Middle East and Asia, the squatting position is more prevalent, while in the Western world the standing and sitting position are more common.
Anal cleansing habits
In the Western world, the most common method of cleaning the anal area after defecation is toilet paper, or sometimes a bidet. In many Muslim countries, the facilities are designed to enable people to follow Islamic toilet etiquette. For example, a bidet shower may be plumbed in. The left hand is used for cleansing, for which reason that hand is considered impolite or polluted in many Asian countries.
The use of water in many Christian countries is due in part to the biblical toilet etiquette which encourages washing after all instances of defecation. The bidet is common in predominantly Catholic countries where water is considered essential for anal cleansing, and in some traditionally Orthodox and Lutheran countries such as Greece and Finland respectively, where bidet showers are common.
There are toilets on the market with seats having integrated spray mechanisms for anal and genital water sprays (see for example Toilets in Japan). This can be useful for the elderly or people with disabilities.
Accessible toilets
An accessible toilet is designed to accommodate people with physical disabilities, such as age related limited mobility or inability to walk due to impairments. Additional measures to add toilet accessibility are providing more space and grab bars to ease transfer to and from the toilet seat, including enough room for a caregiver if necessary.
Public toilets
Communication through toilets
In prisons, inmates may utilize toilets and the associated plumbing to communicate messages and pass products. The acoustics of communicating through the toilet bowl, known as toilet talk, potty talk, or the toilet telephone, are influenced by flush patterns and bowl water volumes. Prisoners may also send binary signals by tapping on the sewage or water pipes. Toilet talk enables communication for those in solitary confinement. Toilets have been subject to wiretaps.
Public health aspects
To this day, 1 billion people in developing countries have no toilets in their homes and resort to open defecation instead. Providing toilets (sanitation services) to everyone by 2030 is therefore one of the targets of Sustainable Development Goal 6.
Toilets are one important element of a sanitation system, although other elements are also needed: transport, treatment, disposal, or reuse. Diseases, including cholera, which still affects some 3 million people each year, can be largely prevented when effective sanitation and water treatment prevent fecal matter from contaminating waterways, groundwater, and drinking water supplies.
History
Ancient history
The fourth millennium BC saw the invention of clay pipes, sewers, and toilets in Mesopotamia, with the city of Uruk today exhibiting the earliest known internal pit toilet. The Neolithic village of Skara Brae contains examples of internal small rooms over a communal drain, rather than a pit. The Indus Valley Civilisation in northwestern India and Pakistan was home to the world's first known urban sanitation systems. In Mohenjo-Daro, toilets were built into the outer walls of homes. These toilets had vertical chutes, via which waste was disposed of into cesspits or street drains. In the Indus city of Lothal, houses belonging to the upper class had private toilets connected to a covered sewer network constructed of brickwork held together with a gypsum-based mortar, which emptied either into the surrounding water bodies or into cesspits, the latter of which were regularly emptied and cleaned.
Other very early toilets that used flowing water to remove the waste are found at Skara Brae in Orkney, Scotland, which was occupied from about 3100 BC until 2500 BC. Some of the houses there have a drain running directly beneath them, and some of these had a cubicle over the drain. Around the 18th century BC, toilets started to appear in Minoan Crete, Pharaonic Egypt, and ancient Persia.
In 2012, archaeologists found what is believed to be Southeast Asia's earliest latrine during the excavation of a neolithic village in the Rạch Núi archaeological site, southern Vietnam. The toilet, dating back to 1500 BC, yielded important clues about early Southeast Asian society. More than 30 coprolites, containing fish and shattered animal bones, provided information on the diet of humans and dogs, and on the types of parasites each had to contend with.
In Sri Lanka, the techniques of the construction of toilets and lavatories developed over several stages. A highly developed stage in this process is discernible in the constructions at the Abhayagiri complex in Anuradhapura, where toilets and baths dating from the 2nd century BC to the 3rd century CE are known. Later forms of toilets, from the 5th century CE to the 13th century CE in Polonnaruwa and Anuradhapura, had elaborate decorative motifs carved around the toilets. Several types of toilets were developed; these include lavatories with ring-well pits, underground terracotta pipes that lead to septic pits, and urinary pits with large bottomless clay pots of decreasing size placed one above the other. These pots under urinals contained "sand, lime and charcoal" through which urine filtered down to the earth in a somewhat purified form.
In Roman civilization, latrines using flowing water were sometimes part of public bath houses. Roman latrines are commonly thought to have been used in the sitting position. The Roman toilets were probably elevated to raise them above open sewers which were periodically "flushed" with flowing water, rather than elevated for sitting. Romans and Greeks also used chamber pots, which they brought to meals and drinking sessions. Johan J. Mattelaer said, "Plinius has described how there were large receptacles in the streets of cities such as Rome and Pompeii into which chamber pots of urine were emptied. The urine was then collected by fullers." (Fulling was a vital step in textile manufacture.)
The Han dynasty in China two thousand years ago used pig toilets.
Post-classical history
Garderobes were toilets used during the post-classical period, most commonly found in upper-class dwellings. Essentially, they were flat pieces of wood or stone spanning from one wall to the other, with one or more holes to sit on. These were above chutes or pipes that discharged outside the castle or manor house. Garderobes would be placed in areas away from bedrooms because of the smell, and also near kitchens or fireplaces to keep their enclosures warm.
The other main way of handling toilet needs was the chamber pot, a receptacle, usually of ceramic or metal, into which one would excrete waste. This method was used for hundreds of years; shapes, sizes, and decorative variations changed throughout the centuries. Chamber pots were in common use in Europe from ancient times, even being taken to the Middle East by medieval pilgrims.
Modern history
By the Early Modern era, chamber pots were frequently made of china or copper and could include elaborate decoration. They were emptied into the gutter of the street nearest to the home.
In pre-modern Denmark, people generally defecated on farmland or other places where the human waste could be collected as fertilizer. The Old Norse language had several terms for referring to outhouses, including garðhús (yard house), náð-/náða-hús (house of rest), and annat hús (the other house). In general, toilets were functionally non-existent in rural Denmark until the 18th century.
By the 16th century, cesspits and cesspools were increasingly dug into the ground near houses in Europe as a means of collecting waste, as urban populations grew and street gutters became blocked with the larger volume of human waste. Rain was no longer sufficient to wash away waste from the gutters. A pipe connected the latrine to the cesspool, and sometimes a small amount of water washed waste through. Cesspools were cleaned out by tradesmen, known in English as gong farmers, who pumped out liquid waste, then shovelled out the solid waste and collected it during the night. This solid waste, euphemistically known as nightsoil, was sold as fertilizer for agricultural production (similarly to the closing-the-loop approach of ecological sanitation).
In the early 19th century, public officials and public hygiene experts studied and debated sanitation for several decades. The construction of an underground network of pipes to carry away solid and liquid waste was only begun in the mid 19th-century, gradually replacing the cesspool system, although cesspools were still in use in some parts of Paris into the 20th century. Even London, at that time the world's largest city, did not require indoor toilets in its building codes until after the First World War.
The water closet, with its origins in Tudor times, started to assume its currently known form, with an overhead cistern, s-bends, soil pipes and valves around 1770. This was the work of Alexander Cumming and Joseph Bramah. Water closets only started to be moved from outside to inside of the home around 1850. The integral water closet started to be built into middle-class homes in the 1860s and 1870s, firstly on the principal bedroom floor and in larger houses in the maids' accommodation, and by 1900 a further one in the hallway. A toilet would also be placed outside the back door of the kitchen for use by gardeners and other outside staff such as those working with the horses. The speed of introduction was varied, so that in 1906 the predominantly working-class town of Rochdale had 750 water closets for a population of 10,000.
The working-class home had transitioned from the rural cottage, to the urban back-to-back terraces with external rows of privies, to the through terraced houses of the 1880s with their sculleries and individual external WC. It was the Tudor Walters Report of 1918 that recommended that semi-skilled workers should be housed in suburban cottages with kitchens and internal WC. As recommended floor standards waxed and waned in the building standards and codes, the bathroom with a water closet and later the low-level suite became more prominent in the home.
Before the introduction of indoor toilets, it was common to use the chamber pot under one's bed at night and then to dispose of its contents in the morning. During the Victorian era, British housemaids collected all of the household's chamber pots and carried them to a room known as the housemaids' cupboard. This room contained a "slop sink", made of wood with a lead lining to prevent chipping china chamber pots, for washing the "bedroom ware" or "chamber utensils". Once running water and flush toilets were plumbed into British houses, servants were sometimes given their own lavatory downstairs, separate from the family lavatory. The practice of emptying one's own chamber pot, known as slopping out, continued in British prisons until as recently as 2014 and was still in use in 85 cells in Ireland in July 2017.
With rare exceptions, chamber pots are no longer used. Modern related implements are bedpans and commodes, used in hospitals and the homes of invalids.
Long-established sanitary wear manufacturers in the United Kingdom include Adamsez, founded in Newcastle-upon-Tyne in 1880, by M.J. and S.H. Adams, and Twyfords, founded in Hanley, Stoke-on-Trent in 1849, by Thomas Twyford and his son Thomas William Twyford.
Development of dry earth closets
Before the widespread adoption of the flush toilet, there were inventors, scientists, and public health officials who supported the use of "dry earth closets" – nowadays known either as dry toilets or composting toilets.
Development of flush toilets
Although a precursor to the flush toilet system which is widely used nowadays was designed in 1596 by John Harington, such systems did not come into widespread use until the late nineteenth century. With the onset of the Industrial Revolution and related advances in technology, the flush toilet began to emerge into its modern form. A crucial advance in plumbing was the S-trap, invented by the Scottish mechanic Alexander Cumming in 1775 and still in use today. This device uses the standing water to seal the outlet of the bowl, preventing the escape of foul air from the sewer. It was only in the mid-19th century, with growing levels of urbanisation and industrial prosperity, that the flush toilet became a widely used and marketed invention. This period coincided with the dramatic growth in the sewage system, especially in London, which made the flush toilet particularly attractive for health and sanitation reasons.
Flush toilets were also known as "water closets", as opposed to the earth closets described above. WCs first appeared in Britain in the 1880s, and soon spread to Continental Europe. In America, the chain-pull indoor toilet was introduced in the homes of the wealthy and in hotels in the 1890s. William Elvis Sloan invented the Flushometer in 1906, which used pressurized water directly from the supply line for faster recycle time between flushes.
High-tech toilet
"High-tech" toilets, which can be found in countries like Japan, include features such as automatic-flushing mechanisms; water jets or "bottom washers"; blow dryers, or artificial flush sounds to mask noises. Others include medical monitoring features such as urine and stool analysis and the checking of blood pressure, temperature, and blood sugar. Some toilets have automatic lid operation, heated seats, deodorizing fans, or automated replacement of paper toilet-seat-covers. Interactive urinals have been developed in several countries, allowing users to play video games. The "Toylet", produced by Sega, uses pressure sensors to detect the flow of urine and translates that into on-screen action.
Astronauts on the International Space Station use a space toilet with urine diversion which can recover potable water.
Names
Etymology
Toilet was originally a French loanword (first attested in 1540) that referred to the toilette ("little cloth") draped over one's shoulders during hairdressing. During the late 17th century, the term came to be used by metonymy in both languages for the whole complex of grooming and body care that centered at a dressing table (also covered by a cloth) and for the equipment composing a toilet service, including a mirror, hairbrushes, and containers for powder and makeup. The time spent at such a table also came to be known as one's "toilet"; it came to be a period during which close friends or tradesmen were received as "toilet-calls".
The use of "toilet" to describe a special room for grooming came much later (first attested in 1819), following the French . Similar to "powder room", "toilet" then came to be used as a euphemism for rooms dedicated to urination and defecation, particularly in the context of signs for public toilets, as on trains. Finally, it came to be used for the plumbing fixtures in such rooms (apparently first in the United States) as these replaced chamber pots, outhouses, and latrines. These two uses, the fixture and the room, completely supplanted the other senses of the word during the 20th century except in the form "toiletries".
Contemporary use
The word "toilet" was by etymology a euphemism, but is no longer understood as such. As old euphemisms have become the standard term, they have been progressively replaced by newer ones, an example of the euphemism treadmill at work. The choice of word relies not only on regional variation, but also on social situation and level of formality (register) or social class. American manufacturers show an uneasiness with the word and its class attributes: American Standard, the largest firm, sells them as "toilets", yet the higher-priced products of the Kohler Company, often installed in more expensive housing, are sold as commodes or closets, words which also carry other meanings. Confusingly, products imported from Japan such as TOTO are referred to as "toilets", even though they carry the cachet of higher cost and quality. Toto (an abbreviation of Tōyō Tōki, 東洋陶器, Oriental Ceramics) is used in Japanese comics to visually indicate toilets or other things that look like toilets (see Toilets in Japan).
Regional variants
Different dialects use "bathroom" and "restroom" (American English), "bathroom" and "washroom" (Canadian English), and "WC" (an initialism for "water closet"), "lavatory" and its abbreviation "lav" (British English). Euphemisms for the toilet that bear no direct reference to the activities of urination and defecation are ubiquitous in modern Western languages, reflecting a general attitude of unspeakability about such bodily function. These euphemistic practices appear to have become pronounced following the emergence of European colonial practices, which frequently denigrated colonial subjects in Africa, Asia and South America as 'unclean'.
Euphemisms
"Crapper" was already in use as a coarse name for a toilet, but it gained currency from the work of Thomas Crapper, who popularized flush toilets in England and held several patents on toilet improvements.
"The Jacks" is Irish slang for toilet. It perhaps derives from "jacques" and "jakes", an old English term.
"Loo" – The etymology of loo is obscure. The Oxford English Dictionary notes the 1922 appearance of "How much cost? Waterloo. Watercloset." in James Joyce's novel Ulysses and defers to Alan S. C. Ross's arguments that it derived in some fashion from the site of Napoleon's 1815 defeat. In the 1950s the use of the word "loo" was considered one of the markers of British upper-class speech, featuring in a famous essay, "U and non-U English". "Loo" may have derived from a corruption of French ("water"), – whence Scots gardy loo – ("mind the water", used in reference to emptying chamber pots into the street from an upper-story window), ("place"), ("place of ease", used euphemistically for a toilet), or ("English place", used from around 1770 to refer to English-style toilets installed for travelers). Other proposed etymologies include a supposed tendency to place toilets in room 100 (hence "loo") in English hotels, a sailors' dialectal corruption of the nautical term "lee" in reference to the shipboard need to urinate and defecate with the wind prior to the advent of head pumps, or the 17th-century preacher Louis Bourdaloue, whose long sermons at Paris's Saint-Paul-Saint-Louis prompted his parishioners to bring along chamber pots, and his surname was applied to the pots themselves.
Gallery
See also
Community toilet scheme
Electronic toilet
Green train corridor
Human right to water and sanitation
Improved sanitation
Sanisette
Sulabh International Museum of Toilets
Sustainable Sanitation Alliance
Swachh Bharat Mission
Toilet humour
Toilet-related injuries and deaths
Vermifilter toilet
Waste management
World Toilet Day
World Toilet Organization – organization which focuses on toilets and sanitation at the global level
Workers' right to access the toilet
Explanatory notes
References
External links
Ancient inventions
Articles containing video clips
Bathroom equipment
Bathrooms
Sanitation
Toilet types | Toilet | Biology | 5,870 |
12,231,952 | https://en.wikipedia.org/wiki/Pest-exclusion%20fence | A pest-exclusion fence is a barrier that is built to exclude certain types of animal pests from an enclosure. This may be to protect plants in horticulture, preserve grassland for grazing animals, separate species carrying diseases (vector species) from livestock, prevent troublesome species from entering roadways, or to protect endemic species in nature reserves. These fences are not necessarily traditional wire barriers, but may also include barriers of sound or smell.
Design techniques
Animals can be excluded by a fence's height, depth under the ground, and mesh size. It is also important to choose a construction material that cannot be climbed; furthermore, sometimes it is necessary to create a subsurface fencing element to prevent burrowing under the fence. Fences are usually designed with the target pest species (the species to be excluded) in mind, and the fences are made to effectively exclude those species. This results in a wide variety of designs for pest-exclusion fences (see examples below). Often the fence is encircled with electric wire to ensure that animals cannot climb over it.
Examples
The 1.9m-high fence at the Orokonui Ecosanctuary in Waitati, New Zealand is designed to keep out all introduced mammals such as possums, rats, stoats, ferrets and even mice. It uses stainless steel mesh that continues down to form a skirt at ground level that prevents animals from burrowing under it. On the top is a curved steel hood that prevents climbers like cats and possums from climbing over the top.
Agricultural exclusion fences in central-western Queensland vary between 1.6m and 2m in height. The fences have a single top barbed wire and ring-lock or hinge-joint wire underneath and steel fence posts. The ring-lock or hinge joint wire has smaller holes at the bottom, gradually increasing in size to be marginally larger at the top. A section of this mesh lays flat against the ground at the bottom of the posts to form a skirt (or radial-apron) on the outside of the fence.
In Africa and Asia, crop-raiding elephants are excluded using a variety of techniques. These include electric fencing, fences of cacti, chilli-greased rope, and bee-hives or sounds of disturbed bees.
Exclusion fences are also used in Australia at sanctuaries run by the Australian Wildlife Conservancy. Inside the fenced off zone captive breeding programs for endangered animals take place.
Use in Australia
Barrier fencing
Australia has utilised exclusion fencing since the 1860s. The most well known exclusion fences in Australia are the barrier fences. Barrier fences are long (usually linear) barriers erected for the purpose of excluding particular species from large portions of Australia. The most well known barrier fences are the Dingo Fence and the Rabbit-proof fence, but there are many others.
Agricultural exclusion fencing
In more recent years, pest-exclusion fences have been built around singular properties, or groups of properties. This practice is known as cluster fencing. Cluster fencing allows farmers to monitor and mitigate predation pressure on livestock, and monitor Total Grazing Pressure (TGP) through accurate abundance data of native, pest, and domestic herbivores.
Conservation fencing
Australia uses pest-exclusion fencing to separate several high-value or threatened species from introduced predators. One such example is Arid Recovery in South Australia, where feral cats, red foxes and rabbits have been removed for the conservation of five threatened species.
Use in New Zealand
Prior to human settlement, New Zealand had no land-based mammals apart from three bat species. The introduced mammal species, such as rabbits, deer, and possums, have since caused huge ecological changes to the biota of New Zealand. Pest-exclusion fences are increasingly used for conservation of indigenous species by excluding all mammals.
Locations of predator-proof fences include:
Cape Brett
Cape Farewell
Deans Bush, Christchurch
Zealandia, Wellington
Bushy Park
Maungatautari Restoration Project
Orokonui Ecosanctuary
Shakespear Regional Park
Styx Mill Reserve, Christchurch (under construction)
Stewart Island
Tawharanui Peninsula
Use in Japan
Conservation fencing
Deer-proof fencing was used in Nagano Prefecture, Japan in a conservation effort to maintain plant diversity. The methods were effective for increasing species richness, but not as effective for conserving rare plants.
See also
Deer fence
Rabbit-proof fence
Dingo fence
Ecological island
Exclosure
References
Fences
Animal migration
Land management | Pest-exclusion fence | Biology | 896 |
69,046,402 | https://en.wikipedia.org/wiki/Mercedes-Benz%20500I%20engine | The Mercedes-Benz 500I engine is a powerful turbocharged 3.4-liter Indy car racing V-8 engine, designed, developed, and built by Ilmor in partnership with Mercedes-Benz, specifically to compete in the 1994 Indianapolis 500.
The Mercedes-Benz 500I engine was slightly lighter than the Ilmor 265D Indy V8 it replaced in the Penske PC-23, although because of its longer inlets, the 500I had a higher overall centre of gravity, thus changing the overall balance of the car a bit. The development and testing of the 500I engine, at that time called Ilmor 265E, took place in the utmost secrecy because there was a possibility of the turbocharger boost level being changed, or the engine being banned by the Indy 500 sanctioning body.
Background
Mercedes-Benz 500I
Much to the surprise of competitors, media, and fans, Marlboro Team Penske arrived at the Indianapolis Motor Speedway with a brand new, secretly built 209 cid Mercedes-Benz pushrod engine, which was capable of a reported 1,000 horsepower. Despite reliability issues with the engine and handling difficulties with the chassis, the three-car Penske team (Unser, Emerson Fittipaldi and Paul Tracy) dominated most of the month, and nearly the entire race. This engine used a provision in the rules, intended for stock-block pushrod engines such as the Buick V-6, that allowed an extra 650 cm³ of displacement and an extra 10 inHg (4.9 psi/33.8 kPa) of boost. The extra power (1,024 horsepower, a 150–200 hp advantage over the conventional V-8s) allowed the Penskes to run significantly faster, giving them the pole and outside front row on the grid for the 78th Indianapolis 500. Al Unser Jr. and Emerson Fittipaldi dominated the race, eventually lapping the field; with 16 laps to go in the 200-lap race, Fittipaldi made contact with the wall coming out of Turn 4, giving Al Unser Jr. the lead and the win. The only other driver who finished on the lead lap was rookie Jacques Villeneuve.
In the summer and fall of 1993, Ilmor and Penske engaged in a new engine program. Under complete secrecy, a 209-CID purpose-built, pushrod engine was being developed. Mercedes stepped in near the end of development and paid a fee in order to badge the engine as the Mercedes-Benz 500I. The engine was designed to exploit a perceived "loophole" that existed in USAC's rulebook since 1991. While CART sanctioned the rest of the Indycar season, the Indianapolis 500 itself was conducted by USAC under slightly different rules.
In an effort to appeal to smaller engine-building companies, USAC had permitted "stock-block" pushrod engines (generally defined as single non-OHC units fitted with two valves per cylinder actuated by pushrod and rocker arm). The traditional "stock blocks" saw some limited use in the early 1980s, but became mainstream at Indy starting with the introduction of the Buick V-6 Indy engine. Initially, the stock blocks were required to have some production-based parts. However, in 1991, USAC quietly lifted the requirement, and purpose-built pushrod engines were permitted to be designed for racing at the onset. In an attempt to create an equivalency formula, pushrod engines of both formats were allowed increased displacement (209.3 cid vs. 161.7) and increased turbocharger boost (55 inHg vs. 45 inHg).
Team Penske mated the engine with the in-house Penske chassis, the PC-23. It was introduced to the public in April, just days before opening day at Indy.
Applications
Penske PC-23
References
Engines by model
Mercedes-Benz engines
IndyCar Series
Champ Car
V8 engines
Mercedes-Benz in motorsport | Mercedes-Benz 500I engine | Technology | 798 |
47,137,343 | https://en.wikipedia.org/wiki/European%20Information%20Technologies%20Certification%20Academy | European Information Technologies Certification Academy (EITCA) programme is an international professional ICT knowledge and skills certification standard, developed and governed by the EITCI Institute – a non-profit organization based in Brussels – that
provides certification of individuals' knowledge and skills in broad field-oriented areas of ICT expertise, such as computer graphics and information security. The EITCA programmes, referred to as EITCA Academies, include selected sets of several to over a dozen individual EITC programmes that together comprise a particular area of qualifications.
EITCA Academies
As of June 2015 the EITCA certification standard includes the following Academies:
See also
EITC programme
EITCI institute
References
External links
EITCI Official Website
EITCI certificate and accreditation validation page
Certification Academy
International standards
Computer standards
Cryptography standards
Information technology qualifications
Computer security qualifications
Professional titles and certifications
EITCI certification programmes
Digital divide | European Information Technologies Certification Academy | Technology | 181 |
252,064 | https://en.wikipedia.org/wiki/Stercobilin | Stercobilin is a tetrapyrrolic bile pigment and is one end-product of heme catabolism. It is the chemical responsible for the brown color of human feces and was originally isolated from feces in 1932. Stercobilin (and related urobilin) can be used as a marker for biochemical identification of fecal pollution levels in rivers.
Metabolism
Stercobilin results from the breakdown of the heme moiety of hemoglobin found in erythrocytes (red blood cells). Macrophages break down senescent erythrocytes and degrade the heme into biliverdin, which is rapidly reduced to free bilirubin. Bilirubin binds tightly to plasma proteins (especially albumin) in the blood stream and is transported to the liver, where it is conjugated with one or two glucuronic acid residues into bilirubin diglucuronide, and secreted into the small intestine as bile. In the small intestine, some bilirubin glucuronide is converted back to bilirubin via bacterial enzymes in the terminal ileum. This bilirubin is further converted to colorless urobilinogen by the bacterial enzyme bilirubin reductase. Urobilinogen that remains in the colon can either be reduced to stercobilinogen and finally oxidized to stercobilin, or it can be directly reduced to stercobilin. Stercobilin is responsible for the brown color of human feces and is excreted in the feces.
Role in disease
Obstructive jaundice
In obstructive jaundice, no bilirubin reaches the small intestine, meaning that there is no formation of stercobilinogen. The lack of stercobilin and other bile pigments causes feces to become clay-colored.
Brown pigment gallstones
An analysis of two infants suffering from cholelithiasis observed that a substantial amount of stercobilin was present in brown pigment gallstones. This study suggested that brown pigment gallstones could form spontaneously in infants suffering from bacterial infections of the biliary tract.
Role in treatment of disease
A 1996 study by McPhee et al. suggested that stercobilin and other related pyrrolic pigments — including urobilin, biliverdin, and xanthobilirubic acid — have the potential to function as a new class of HIV-1 protease inhibitors when delivered at low micromolar concentrations. These pigments were selected due to a similarity in shape to the successful HIV-1 protease inhibitor Merck L-700,417 (N,N-bis(2-hydroxy-1-indanyl)-2,6-diphenylmethyl-4-hydroxy-1,7-heptandiamide). Further research is suggested to study the pharmacological efficacy of these pigments.
See also
Bile pigment
Bilirubin
Biliverdin
Heme
Urobilin
References
Metabolism
Digestive system
Tetrapyrroles
Gamma-lactams | Stercobilin | Chemistry,Biology | 669 |
6,730,121 | https://en.wikipedia.org/wiki/Presentation%20of%20a%20monoid | In algebra, a presentation of a monoid (or a presentation of a semigroup) is a description of a monoid (or a semigroup) in terms of a set Σ of generators and a set of relations on the free monoid Σ∗ (or the free semigroup Σ⁺) generated by Σ. The monoid is then presented as the quotient of the free monoid (or the free semigroup) by these relations. This is an analogue of a group presentation in group theory.
As a mathematical structure, a monoid presentation is identical to a string rewriting system (also known as a semi-Thue system). Every monoid may be presented by a semi-Thue system (possibly over an infinite alphabet).
A presentation should not be confused with a representation.
Construction
The relations are given as a (finite) binary relation R on Σ∗. To form the quotient monoid, these relations are extended to monoid congruences as follows:
First, one takes the symmetric closure R ∪ R⁻¹ of R. This is then extended to a symmetric relation E ⊂ Σ∗ × Σ∗ by defining x E y if and only if x = sut and y = svt for some strings u, v, s, t ∈ Σ∗ with (u, v) ∈ R ∪ R⁻¹. Finally, one takes the reflexive and transitive closure of E, which is then a monoid congruence.
In the typical situation, the relation R is simply given as a set of equations, so that R = {u₁ = v₁, ..., uₙ = vₙ}. Thus, for example,
⟨p, q | pq = 1⟩ is the equational presentation for the bicyclic monoid, and
⟨a, b | aba = baa, bba = bab⟩ is the plactic monoid of degree 2 (it has infinite order). Elements of this plactic monoid may be written as aⁱbʲ(ba)ᵏ for integers i, j, k, as the relations show that ba commutes with both a and b.
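Concretely, the bicyclic presentation can be treated as a string rewriting system by orienting the relation pq = 1 as the rule pq → ε (the empty word): every word then reduces to a unique normal form qⁱpʲ, so two words represent the same element exactly when their normal forms agree. A minimal Python sketch (the function name is illustrative, not from any library):

def normalize(word):
    # Delete occurrences of "pq" until none remain; the result is the
    # normal form q^i p^j of the word in the bicyclic monoid.
    while "pq" in word:
        word = word.replace("pq", "", 1)
    return word

# Equality in the monoid is equality of normal forms.
assert normalize("qpqp") == "qp"   # q(pq)p reduces to qp
assert normalize("ppqq") == ""     # reduces to the identity (empty word)
assert normalize("qqp") == "qqp"   # already a normal form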
Inverse monoids and semigroups
Presentations of inverse monoids and semigroups can be defined in a similar way using a pair
(X; T)
where
(X ∪ X⁻¹)∗ is the free monoid with involution on X, and
T ⊆ (X ∪ X⁻¹)∗ × (X ∪ X⁻¹)∗ is a binary relation between words. We denote by Tᵉ (respectively Tᶜ) the equivalence relation (respectively, the congruence) generated by T.
We use this pair of objects to define an inverse monoid Inv¹⟨X | T⟩.
Let ρ_X be the Wagner congruence on X; we define the inverse monoid
Inv¹⟨X | T⟩
presented by (X; T) as
Inv¹⟨X | T⟩ = (X ∪ X⁻¹)∗ / (T ∪ ρ_X)ᶜ.
In the previous discussion, if we replace everywhere (X ∪ X⁻¹)∗ with (X ∪ X⁻¹)⁺ we obtain a presentation (for an inverse semigroup) (X; T) and an inverse semigroup Inv⟨X | T⟩ presented by (X; T).
A trivial but important example is the free inverse monoid (or free inverse semigroup) on X, usually denoted by FIM(X) (respectively FIS(X)) and defined by
FIM(X) = Inv¹⟨X | ∅⟩ = (X ∪ X⁻¹)∗ / ρ_X,
or
FIS(X) = Inv⟨X | ∅⟩ = (X ∪ X⁻¹)⁺ / ρ_X.
Notes
References
John M. Howie, Fundamentals of Semigroup Theory (1995), Clarendon Press, Oxford
M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, .
Ronald V. Book and Friedrich Otto, String-rewriting Systems, Springer, 1993, , chapter 7, "Algebraic Properties"
Semigroup theory | Presentation of a monoid | Mathematics | 603 |
39,447,416 | https://en.wikipedia.org/wiki/DEAP%20%28software%29 | Distributed Evolutionary Algorithms in Python (DEAP) is an evolutionary computation framework for rapid prototyping and testing of ideas. It incorporates the data structures and tools required to implement most common evolutionary computation techniques, such as genetic algorithms, genetic programming, evolution strategies, particle swarm optimization, differential evolution, and estimation of distribution algorithms. It has been developed at Université Laval since 2009.
Example
The following code gives a quick overview how the Onemax problem optimization with genetic algorithm can be implemented with DEAP.
import array
import random
from deap import creator, base, tools, algorithms
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", array.array, typecode="b", fitness=creator.FitnessMax)
toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register(
"individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, 100
)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
evalOneMax = lambda individual: (sum(individual),)
toolbox.register("evaluate", evalOneMax)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)
population = toolbox.population(n=300)
NGEN = 40
for gen in range(NGEN):
    # Apply crossover (probability 0.5) and mutation (probability 0.1)
    # to produce a new batch of candidate solutions.
    offspring = algorithms.varAnd(population, toolbox, cxpb=0.5, mutpb=0.1)
    # Evaluate each offspring and store its fitness.
    fits = toolbox.map(toolbox.evaluate, offspring)
    for fit, ind in zip(fits, offspring):
        ind.fitness.values = fit
    # Tournament selection (registered above) picks the next generation.
    population = toolbox.select(offspring, k=len(population))
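After the loop finishes, the fittest individuals can be retrieved with the selBest helper from deap.tools:

top = tools.selBest(population, k=1)
print(top[0], top[0].fitness.values)  # best bit string and its OneMax score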
See also
Python SCOOP (software)
References
External links
Articles with example Python (programming language) code
Evolutionary computation
Free science software
Python (programming language) scientific libraries | DEAP (software) | Biology | 460 |
729,899 | https://en.wikipedia.org/wiki/Fructose%20malabsorption | Fructose malabsorption, formerly named dietary fructose intolerance (DFI), is a digestive disorder in which absorption of fructose is impaired by deficient fructose carriers in the small intestine's enterocytes. This results in an increased concentration of fructose. Intolerance to fructose was first identified and reported in 1956.
Similarity in symptoms means that patients with fructose malabsorption often fit the profile of those with irritable bowel syndrome.
Fructose malabsorption is not to be confused with hereditary fructose intolerance, a potentially fatal condition in which the liver enzymes that break up fructose are deficient. Hereditary fructose intolerance is quite rare, affecting up to 1 in 20,000 to 30,000 people.
Symptoms and signs
Fructose malabsorption may cause gastrointestinal symptoms such as abdominal pain, bloating, flatulence or diarrhea.
Pathophysiology
Fructose is absorbed in the small intestine without help of digestive enzymes. Even in healthy persons, however, only about 25–50 g of fructose per sitting can be properly absorbed. People with fructose malabsorption absorb less than 25 g per sitting. Simultaneous ingestion of fructose and sorbitol seems to increase malabsorption of fructose. Fructose that has not been adequately absorbed is fermented by intestinal bacteria producing hydrogen, carbon dioxide, methane and short-chain fatty acids. This abnormal increase in hydrogen may be detectable with the hydrogen breath test.
The physiological consequences of fructose malabsorption include increased osmotic load, rapid bacterial fermentation, altered gastrointestinal motility, the formation of mucosal biofilm and altered profile of bacteria. These effects are additive with other short-chain poorly absorbed carbohydrates such as sorbitol. The clinical significance of these events depends upon the response of the bowel to such changes. Some effects of fructose malabsorption are decreased tryptophan, folic acid and zinc in the blood.
Restricting dietary intake of free fructose and/or fructans may provide symptom relief in a high proportion of patients with functional gut disorders.
Diagnosis
The diagnostic test, when used, is similar to that used to diagnose lactose intolerance. It is called a hydrogen breath test and is the method currently used for a clinical diagnosis. Nevertheless, some authors argue this test is not an appropriate diagnostic tool, because a negative result does not exclude a positive response to fructose restriction, implying a lack of sensitivity.
Treatment
Physical activity (avoiding sitting down for long periods)
Sitting down can compress the abdomen, which slows down digestion and can lead to issues such as bloating, heartburn and constipation, and may thus aggravate fructose malabsorption. A study showed that physical activity between long periods of sitting is not enough: "focusing on acquiring the recommended dose of exercise is not a strong enough of a stimulant to completely protect the body from physical inactivity the other 23+ h/day". "Reducing prolonged overall sitting time may reduce metabolic disturbances"
Dietary supplements
Xylose isomerase acts to convert fructose sugars into glucose. Dietary supplements of xylose isomerase may improve some symptoms of fructose malabsorption, although there is currently only a single scientific study available.
Diet
There is no known cure, but an appropriate diet and the enzyme xylose isomerase can help. The ingestion of glucose simultaneously with fructose improves fructose absorption and may prevent the development of symptoms. For example, people may tolerate fruits such as grapefruits or bananas, which contain similar amounts of fructose and glucose, whereas apples are not tolerated because they contain high levels of fructose and lower levels of glucose. However, a randomised controlled trial in patients with fructose malabsorption (a Cochrane review) found that "Adding glucose to food and solutions to enhance fructose absorption is not effective in preventing fructose-induced functional gastrointestinal symptoms".
Foods that should be avoided by people with fructose malabsorption include:
Foods and beverages containing greater than 0.5 g fructose in excess of glucose per 100 g and greater than 0.2 g of fructans per serving should be avoided. Foods with >3 g of fructose per serving are termed a 'high fructose load' and possibly present a risk of inducing symptoms. However, the concept of a 'high fructose load' has not been evaluated in terms of its importance in the success of the diet. (A short sketch applying these thresholds follows this list.)
Foods with a high fructose-to-glucose ratio. Glucose enhances absorption of fructose, so fructose from foods with a fructose-to-glucose ratio <1, like white potatoes, is readily absorbed, whereas foods with a fructose-to-glucose ratio >1, like apples and pears, are often problematic regardless of the total amount of fructose in the food.
Foods rich in fructans and other fermentable oligo-, di- and mono-saccharides and polyols (FODMAPs), including artichokes, asparagus, leeks, onions, and wheat-containing products, including breads, cakes, biscuits, breakfast cereals, pies, pastas, pizzas, and wheat noodles.
The role that fructans play in fructose malabsorption is still under investigation. However, it is recommended that fructan intake for fructose malabsorbers should be kept to less than 0.5 grams/serving, and supplements with inulin and fructooligosaccharide (FOS), both fructans, should be avoided.
Foods containing artificial sweeteners like sorbitol (present in some diet drinks and foods and occurring naturally in some stone fruits), xylitol (present in some berries) and other polyols (sugar alcohols, such as erythritol, mannitol and other ingredients that end with -tol, commonly added as in commercial foods).
Foods containing high fructose corn syrup.
Foods with a high glucose content ingested with foods containing excess fructose may help patients absorb the excess fructose.
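To make the numeric guidelines above concrete, the short Python sketch below applies them to per-100 g sugar figures; the function name and the sample values are illustrative only and are not a nutritional reference:

def flag_food(fructose_g, glucose_g, fructans_g_per_serving=0.0):
    # fructose_g and glucose_g are grams per 100 g of food;
    # fructans_g_per_serving is grams per serving.
    reasons = []
    if fructose_g - glucose_g > 0.5:
        reasons.append("fructose in excess of glucose > 0.5 g per 100 g")
    if fructans_g_per_serving > 0.2:
        reasons.append("fructans > 0.2 g per serving")
    if glucose_g > 0 and fructose_g / glucose_g > 1:
        reasons.append("fructose-to-glucose ratio > 1")
    return reasons

print(flag_food(5.9, 2.4))  # apple-like profile: flagged on both sugar counts
print(flag_food(2.5, 2.5))  # balanced profile: no flags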
Foods with high fructose content
According to the USDA database, foods with more fructose than glucose include:
The USDA food database reveals that many common fruits contain nearly equal amounts of fructose and glucose, and they do not present problems for those individuals with fructose malabsorption. Some fruits with a greater ratio of fructose to glucose are apples, pears and watermelon, which contain more than twice as much fructose as glucose. Fructose levels in grapes vary depending on ripeness and variety, with unripe grapes containing more glucose.
Dietary guidelines for management
Researchers at Monash University in Australia developed dietary guidelines for managing fructose malabsorption, particularly for individuals with IBS.
Unfavorable foods (i.e. more fructose than glucose)
Fruit – apple, pear, honeydew melon, nashi pear, pawpaw, papaya, quince, star fruit, watermelon;
Dried fruit – apple, currant, date, fig, pear, raisin, sultana;
Fortified wines
Foods containing added sugars, such as agave nectar, some corn syrups, and fruit juice concentrates.
Favorable foods (i.e. fructose equal to or less than glucose)
The following list of favorable foods was cited in the paper: "Fructose malabsorption and symptoms of Irritable Bowel Syndrome Guidelines for effective dietary management". The fructose and glucose contents of foods listed on the Australian food standards would appear to indicate that most of the listed foods have higher fructose levels.
Stone fruit: apricot, nectarine, peach, plum (caution – these fruits contain sorbitol);
Berry fruit: blackberry, boysenberry, cranberry, raspberry, strawberry, loganberry;
Citrus fruit: kumquat, grapefruit, lemon, lime, mandarin, orange, tangelo;
Other fruits: ripe banana, jackfruit, passion fruit, pineapple, rhubarb, tamarillo.
Food-labeling
Producers of processed food in most or all countries, including the US, are not currently required by law to mark foods containing "fructose in excess of glucose". This can cause some surprises and pitfalls for fructose malabsorbers.
Foods (such as bread) marked "gluten-free" are usually suitable for fructose malabsorbers, though they need to be careful of gluten-free foods that contain dried fruit or high fructose corn syrup or fructose itself in sugar form. However, fructose malabsorbers do not need to avoid gluten, as those with celiac disease must.
Many fructose malabsorbers can eat breads made from rye and corn flour. However, these may contain wheat unless marked "wheat-free" (or "gluten-free") (Note: Rye bread is not gluten-free.) Although often assumed to be an acceptable alternative to wheat, spelt flour is not suitable for people with fructose malabsorption, just as it is not appropriate for those with wheat allergies or celiac disease. However, some fructose malabsorbers do not have difficulty with fructans from wheat products while they may have problems with foods that contain excess free fructose.
There are many breads on the market that boast having no high fructose corn syrup. In lieu of high fructose corn syrup, however, one may find special breads with a high inulin content, in which inulin replaces high fructose corn syrup, flour and fat in the baking process. Because of the caloric reduction, lower fat content, dramatic fiber increase and prebiotic tendencies of the replacement inulin, these breads are considered a healthier alternative to traditionally prepared leavened breads. Though the touted health benefits may exist, people with fructose malabsorption will likely find no difference between these new breads and traditionally prepared breads in alleviating their symptoms, because inulin is a fructan, and consumption of fructans should be reduced dramatically in those with fructose malabsorption in an effort to appease symptoms.
Research
Fructose and fructans are FODMAPs (fermentable oligo-, di- and mono-saccharides and polyols) known to cause gastrointestinal discomfort in susceptible individuals. FODMAPs are not the cause of these disorders, but FODMAPs restriction (a low-FODMAP diet) might help to improve short-term digestive symptoms in adults with irritable bowel syndrome (IBS) and other functional gastrointestinal disorders (FGID). Nevertheless, its long-term follow-up can have negative effects because it causes a detrimental impact on the gut microbiota and metabolome.
See also
Hereditary fructose intolerance
Food intolerance
Gastroenterology
Invisible disability
References
External links
Membrane transport protein disorders
Inborn errors of carbohydrate metabolism | Fructose malabsorption | Chemistry | 2,443 |
51,523,353 | https://en.wikipedia.org/wiki/Khajidsuren%20Bolormaa | Khajidsuren Bolormaa, or Khajidsurengiin Bolormaa, (; born January 18, 1965) is a Mongolian mineralogical engineer, as well as a healthcare and children's rights advocate, who served as the First Lady of Mongolia from 2009 to 2017. Bolormaa is the wife of former President Tsakhiagiin Elbegdorj. In 2006, Bolormaa founded the Bolor Foundation, which cares for orphans in Mongolia.
Biography
Bolormaa was born on January 18, 1965, in Ulaanbaatar, Mongolia. She graduated high school in Mongolia. She then enrolled at Lviv State University in Lviv, Ukrainian S.S.R. (present-day Ukraine), from 1983 to 1988 to study geochemistry. Bolormaa met her future husband, Tsakhiagiin Elbegdorj, while both were students living in Lviv. The couple married and had their first son, who was born in Lviv. They returned to Mongolia in 1988.
Khajidsurengiin Bolormaa worked as a mineralogical engineer for the government-run Central Geological Laboratory of Mongolia. She then established and opened Ankh-Erdene, a private research laboratory focusing on mineralogy and Mongolia's mining industry.
Tsakhiagiin Elbegdorj was elected president in 2009, making Bolormaa the First Lady of Mongolia. Elbegdorj was re-elected in 2013.
In March 2010, First Lady Bolormaa established the Hope Cancer-free Mongolia National Foundation to improve cancer treatment services in the country. She called for increased cooperation between 38 Asian First Ladies to fight cancer on the continent, especially among women. The foundation retrained Mongolian doctors, nurses and other staff at both domestic and international medical facilities between 2010 and 2013.
References
Living people
1965 births
First ladies of Mongolia
Mongolian engineers
Mining engineers
Mongolian health activists
Children's rights activists
People from Ulaanbaatar
20th-century Mongolian women
20th-century Mongolian people
21st-century Mongolian women
21st-century Mongolian people | Khajidsuren Bolormaa | Engineering | 438 |
20,915,085 | https://en.wikipedia.org/wiki/Pre-start-up%20audit | The Pre-Start-Up Audit (PSUA) or Pre-startup review is a part of the planning process for projects during which the safety and functionality of the process itself is checked to ensure that all systems will function as intended and that potential hazards can be dealt with. This process is commonly used in projects related to construction, reconditioning or repair of equipment and facilities designed to process hazardous substances.
Overview
The PSUA process is commonly found in nuclear, oil, gas, pharmaceutical and petrochemical projects and comprises a combined safety and operability audit to ensure that hydrocarbons or other hazardous materials can be safely introduced into a newly constructed (or reconstructed) plant, asset or facility for the first time.
Furthermore, it is used to assure the owner or operator of that plant, asset or facility that it is capable of starting, controlling and stopping it safely, efficiently and without harm to people, the plant itself or to the environment.
A PSUA is often preceded by a Pre-Start-Up Review (PSUR), some months prior to the scheduled PSUA, in order to evaluate the magnitude of the outstanding work that will need to be completed prior to start-up. This will also provide feedback to the project team of any items that have been overlooked or may seem to be behind schedule.
The PSUA process is often carried out as a part of the operations readiness and assurance (OR&A) process.
References
Business terms
Process safety | Pre-start-up audit | Chemistry,Engineering | 294 |
55,500,494 | https://en.wikipedia.org/wiki/Iodine%20Satellite | Iodine Satellite (iSat) is a technology demonstration satellite of the CubeSat format that will undergo large changes in velocity (up to 300 meters/second) from a primary propulsion system by using a Hall thruster with iodine as the propellant. It will also change its orbital altitude and demonstrate deorbit capabilities to reduce space junk.
iSat was being developed by NASA's Glenn Research Center, and was initially planned as a secondary payload for launch in mid-2018, but launch was delayed to allow for the propulsion system development to mature. The mission is planned to last one year before deorbit.
Spacecraft
Electrically powered spacecraft propulsion uses electricity, typically from solar panels, to accelerate the propellant and produce thrust. The technology can be scaled up for use on small satellites. iSat will also demonstrate advanced power management and thermal control capabilities developed for spacecraft of its size.
The satellite is a 12U CubeSat format, with dimensions of about 20 cm × 20 cm × 30 cm. Its solar arrays aim to produce 100 W.
Propulsion
The propulsion maturation is a partnership between NASA and the U.S. Air Force. iSat's iodine propulsion system consists of a 200 watt Hall thruster (BHT-200-I) developed by Busek Co, a cathode, a tank to store solid iodine, a power processing unit (PPU) and the feed system to supply the iodine. The cathode technology is planned to enable heaterless cathode conditioning, significantly increasing total system efficiency.
A key advantage of using iodine as a propellant is that it provides a high product of density and specific impulse; it is three times as fuel-efficient as the commonly flown xenon, it may be stored in the tank as an unpressurized solid, and it is not a hazardous propellant. One unit (1U) holding 5 kg of iodine on a 12U vehicle can provide a change in velocity (ΔV) of 4 km/s, perform a 20,000 km altitude change, a 30° inclination change from LEO, or an 80° inclination change from GEO. During operations, the tank is heated to vaporize the propellant. The thruster then ionizes the vapor and accelerates it via magnetic and electrostatic fields, resulting in high specific impulse. The satellite has full three-axis attitude control capability, using momentum wheels and magnetic torque rods to rotate. iSat also has a passive thermal control system.
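As a rough sanity check, the quoted ΔV can be reproduced with the Tsiolkovsky rocket equation, ΔV = Isp · g0 · ln(m_wet / m_dry). In the Python sketch below, the specific impulse and wet mass are illustrative assumptions (BHT-200-class thrusters deliver a specific impulse on the order of 1,400 s), not published iSat specifications; only the 5 kg iodine load is taken from the text:

import math

G0 = 9.80665       # standard gravity, m/s^2
ISP = 1390.0       # assumed specific impulse, s (illustrative)
WET_MASS = 20.0    # assumed 12U spacecraft wet mass, kg (illustrative)
PROPELLANT = 5.0   # iodine load quoted above, kg

# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m_wet / m_dry)
delta_v = ISP * G0 * math.log(WET_MASS / (WET_MASS - PROPELLANT))
print(f"delta-v = {delta_v:.0f} m/s")  # about 3,900 m/s, consistent with ~4 km/s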
Mission
In the early 2010s, there was an emerging and rapidly growing market for small satellites, although they were often significantly limited by primary propulsion. In 2013, NASA Marshall Space Flight Center competitively selected a project for the maturation of an iodine flight operational feed system. This demonstration flight will address not just propulsion, but also the process of integrating commercial off-the-shelf components with custom-designed components, opening affordable options for utilizing iodine propulsion systems for national security and for NASA's Discovery-class missions.
The iodine thruster will allow iSat to alter its orbital inclination and elevation, opening up a wider range of mission objectives than previously possible with spacecraft of this size, such as transferring from a geosynchronous orbit to geostationary orbit, enter and manage lunar orbits, and be deployed to explore near Earth asteroids, Mars and Venus. As a demonstration, the spacecraft will use its propulsion to lower altitude from its initial orbit of about 600 km to a circular orbit of about 300 km. Then it will perform a plane change maneuver, complete any final operational maneuvers and continue to lower its closest approach to Earth, and de-orbit the spacecraft in less than 90 days following the end of the mission.
See also
List of spacecraft with electric propulsion
References
CubeSats
Ion engines
Hall effect
Proposed NASA space probes
Technology demonstration satellites
Proposed satellites | Iodine Satellite | Physics,Chemistry,Materials_science,Astronomy | 769 |
77,857,795 | https://en.wikipedia.org/wiki/Xevinapant | Xevinapant is an investigational new drug that is being evaluated to treat squamous cell cancer. By acting as a SMAC mimetic, it functions as an inhibitor of several members of the IAP protein family (including XIAP, c-IAP1, and c-IAP2).
References
Antineoplastic drugs
Amides
Amines
Benzene derivatives
Pyrroles
Pyrrolodiazocines
Carboxamides | Xevinapant | Chemistry | 97 |
16,252,996 | https://en.wikipedia.org/wiki/Neumann%E2%80%93Dirichlet%20method | In mathematics, the Neumann–Dirichlet method is a domain decomposition preconditioner which involves solving a Neumann boundary value problem on one subdomain and a Dirichlet boundary value problem on another, the two subdomains being adjacent across their interface. On a problem with many subdomains organized in a rectangular mesh, the subdomains are assigned Neumann or Dirichlet problems in a checkerboard fashion.
See also
Neumann–Neumann method
References
Domain decomposition methods | Neumann–Dirichlet method | Mathematics | 95 |
20,620,479 | https://en.wikipedia.org/wiki/Blood%20plasma%20fractionation | Blood plasma fractionation refers to the general processes of separating the various components of blood plasma, which in turn is a component of blood obtained through blood fractionation. Plasma-derived immunoglobulins are increasingly used across a wide range of autoimmune inflammatory diseases.
Blood plasma
Blood plasma is the liquid component of whole blood, and makes up approximately 55% of the total blood volume. It is composed primarily of water with small amounts of minerals, salts, ions, nutrients, and proteins in solution. In whole blood, red blood cells, leukocytes, and platelets are suspended within the plasma.
Plasma proteins
Plasma contains a large variety of proteins including albumin, immunoglobulins, and clotting proteins such as fibrinogen. Albumin constitutes about 60% of the total protein in plasma and is present at concentrations between 35 and 55 mg/mL. It is the main contributor to osmotic pressure of the blood and it functions as a carrier molecule for molecules with low water solubility such as lipid-soluble hormones, enzymes, fatty acids, metal ions, and pharmaceutical compounds. Albumin is structurally stable due to its seventeen disulfide bonds and unique in that it has the highest water solubility and the lowest isoelectric point (pI) of the plasma proteins. Due to the structural integrity of albumin it remains stable under conditions where most other proteins denature.
Plasma proteins for clinical use
Many of the proteins in plasma have important therapeutic uses. Albumin is commonly used to replenish and maintain blood volume after traumatic injury, during surgery, and during plasma exchange. Since albumin is the most abundant protein in the plasma its use may be the most well known, but many other proteins, although present in low concentrations, can have important clinical uses. See table below.
Plasma processing
When the ultimate goal of plasma processing is a purified plasma component for injection or transfusion, the plasma component must be highly pure. The first practical large-scale method of blood plasma fractionation was developed by Edwin J. Cohn during World War II. It is known as the Cohn process (or Cohn method). This process is also known as cold ethanol fractionation, as it involves gradually increasing the concentration of ethanol in the solution while the mixture is held cold, at about −5 °C to −3 °C. The Cohn process exploits differences in the properties of the various plasma proteins, specifically the high solubility and low pI of albumin. As the ethanol concentration is increased in stages from 0% to 40%, the pH is lowered from neutral (pH ~ 7) to about 4.8, which is near the pI of albumin. At each stage certain proteins are precipitated out of the solution and removed. The final precipitate is purified albumin. Several variations of this process exist, including an adapted method by Nitschmann and Kistler that uses fewer steps and replaces centrifugation and bulk freezing with filtration and diafiltration.
Some newer methods of albumin purification add additional purification steps to the Cohn process and its variations, while others incorporate chromatography, with some methods being purely chromatographic. Chromatographic albumin processing as an alternative to the Cohn process emerged in the early 1980s; however, it was not widely adopted until later due to the inadequate availability of large-scale chromatography equipment. Methods incorporating chromatography generally begin with cryodepleted plasma undergoing buffer exchange, via either diafiltration or buffer exchange chromatography, to prepare the plasma for the subsequent ion exchange chromatography steps. After ion exchange there are generally further chromatographic purification and buffer exchange steps.
For further information see chromatography in blood processing.
Plasma for analytical uses
In addition to the clinical uses of a variety of plasma proteins, plasma has many analytical uses. Plasma contains many biomarkers that can play a role in clinical diagnosis of diseases, and separation of plasma is a necessary step in the expansion of the human plasma proteome.
Plasma in clinical diagnosis
Plasma contains an abundance of proteins many of which can be used as biomarkers, indicating the presence of certain diseases in an individual. Currently, 2D Electrophoresis is the primary method for discovery and detection of biomarkers in plasma. This involves the separation of plasma proteins on a gel by exploiting differences in their size and pI. Potential disease biomarkers may be present in plasma at very low concentrations, so, plasma samples must undergo preparation procedures for accurate results to be obtained using 2D Electrophoresis. These preparation procedures aim to remove contaminants that may interfere with detection of biomarkers, solubilize the proteins so they are able to undergo 2D Electrophoresis analysis, and prepare plasma with minimal loss of low concentration proteins, but optimal removal of high abundance proteins.
The future of laboratory diagnostics is headed toward lab-on-a-chip technology, which will bring the laboratory to the point of care. This involves integrating all of the steps in the analytical process, from the initial removal of plasma from whole blood to the final analytical result, on a small microfluidic device. This is advantageous because it reduces turnaround time, allows variables to be controlled by automation, and removes the labor-intensive and sample-wasting steps of current diagnostic processes.
Expansion of the human plasma proteome
The human plasma proteome may contain thousands of proteins; however, identifying them presents challenges due to the wide range of concentrations present. Some low-abundance proteins may be present at picogram-per-millilitre (pg/mL) levels, while high-abundance proteins can be present at milligram-per-millilitre (mg/mL) levels. Many efforts to expand the human plasma proteome overcome this difficulty by coupling some type of high performance liquid chromatography (HPLC) or reverse phase liquid chromatography (RPLC) with high efficiency cation exchange chromatography and subsequent tandem mass spectrometry for protein identification.
See also
Blood fractionation
References
Medical technology
Blood
Fractionation | Blood plasma fractionation | Chemistry,Biology | 1,243 |
78,625,218 | https://en.wikipedia.org/wiki/M.G.%206669 | M.G. 6669 is an antidepressant and central nervous system equilibrating agent.
See also
EXP-561
References
Amines
Experimental antidepressants
Stimulants
Experimental drugs
Cyclohexanes | M.G. 6669 | Chemistry | 52 |
76,269,627 | https://en.wikipedia.org/wiki/Puff-puff%20%28onomatopoeia%29 | Puff-puff is an onomatopoeia that conveys a woman's breasts being rubbed in someone's face. It was first coined by Akira Toriyama, creator of Dragon Ball and lead artist of Dragon Quest, both of which featured it. In Dragon Quest, it appears in multiple games as a service a character may receive. It has also been featured in non-sexual ways in Dragon Quest, such as two Slimes being used to simulate the act, or the performer being swapped for a man, the latter of which critics have faulted for lacking consent. It was censored in most English releases of the Dragon Quest series until Dragon Quest XI. Multiple video games in other series include the puff-puff scene or make references to it, including 3D Dot Game Heroes, Yakuza: Like a Dragon, Final Fantasy XIV, and Dragon Ball Xenoverse.
History
Puff-puff is an onomatopoeia for the sound of a woman's breasts being rubbed in another person's face. The term was first used to convey this act by Dragon Ball creator and Dragon Quest artist Akira Toriyama, having been originally featured in the Dragon Ball manga. It was featured in the first Dragon Quest game as a service offered by a woman in the town of Kol in exchange for money. In addition to being a service offered by certain characters, some characters are able to use it as a special technique in battle, such as Jessica Albert from Dragon Quest VIII in order to make enemies "swoon" over her. It was also featured in the mobile game Dragon Quest Walk as a technique.
The "puff-puff" scene has been depicted in the Dragon Quest series in various ways, including women tricking the protagonist. Most puff-puff sessions in the series do not involve women's breasts; in both Dragon Quest II and Dragon Quest III, a woman tricks the player into having it performed by a man. In Dragon Quest VIII, a woman performs a "puff-puff" massage using two Slimes, while Dragon Quest IX: Sentinels of the Starry Skies depicts the character's face being rubbed between two sheep's rear ends. Dragon Quest XI features multiple such scenes, including bungee jumping, a makeup session, and a session where it was performed by a man. The mobile game Dragon Quest Walk features a recreation of the puff-puff scene from the first Dragon Quest. It has also been featured outside Dragon Quest video games, such as a Line sticker and a Puff-Puff Room offered as a reward in a Dragon Quest III-themed escape room.
Censorship
The scene has been censored outside Japan in multiple Dragon Quest games as well as in Dragon Ball. Dragon Quest III replaces it with a fortune teller, while Dragon Quest IV and Dragon Quest VI change it to the non-sexual "Pufpuf therapy" and a makeup session, respectively. The "Puff-Puff" technique used by Jessica and other characters was renamed "Pattycake". When asked by GamesRadar+ about the absence of "puff-puff" scenes from the Dragon Quest series outside Japan, Dragon Quest VI producer Noriyoshi Fujimoto expressed disappointment that these scenes could affect the games' age rating, causing them to be made more subtle in English. The English version of Dragon Quest XI did not censor the puff-puff scenes.
Impact
The scene has been a running joke in the Dragon Quest series. Inside Games writer Sushishi commented that, since chatting with the player's partner characters was not yet a feature by Dragon Quest III, the "puff-puff" scene offered a valuable character interaction experience. IGN writer Jared Petty was critical of the depiction of a "puff-puff" scene in Dragon Quest XI in which it turns out that a man performed the "puff-puff", arguing that it was not funny and had issues with consent. A writer for The Independent was also critical of its use, feeling that the "puff-puff" scenes in the game were forced in for "cheap, innocuous laughs", and criticized the scene discussed by Jared Petty for similar reasons. Author Daniel Andreyev discussed the various depictions of the act in the series, specifically how it evolved over time and manifested in Dragon Quest XI. He argued that its use contributes to a feeling of nostalgia, particularly for 1980s Japanese pop culture.
It has been referenced in multiple video games, including 3D Dot Game Heroes and Yakuza: Like a Dragon. In Like a Dragon, the game calls it "nigi-nigi", coming from the verb nigiru, meaning to grasp or grip. It is also called "honk-honk" in the sequel Like a Dragon: Infinite Wealth. The "puff-puff" scene is featured in Final Fantasy XIV as part of their Dragon Quest X collaboration. It also appeared in Dragon Ball Xenoverse as a gesture that can be performed by the character Master Roshi. A t-shirt with the words "puff puff" printed on it was released as part of a set of t-shirts by Zozotown, which were based on the Dragon Ball character Bulma.
See also
Cultural impact of Dragon Ball
Notes
References
Anime and manga terminology
Breast
Dragon Ball
Dragon Quest
Japanese sex terms
Onomatopoeia
Sex industry
Video game censorship
Video game terminology
Women-related neologisms | Puff-puff (onomatopoeia) | Technology | 1,094 |
24,507,900 | https://en.wikipedia.org/wiki/Gymnopilus%20robustus | Gymnopilus robustus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus robustus at Index Fungorum
robustus
Fungus species | Gymnopilus robustus | Biology | 49 |
38,436 | https://en.wikipedia.org/wiki/Half-reaction | In chemistry, a half reaction (or half-cell reaction) is either the oxidation or reduction reaction component of a redox reaction. A half reaction is obtained by considering the change in oxidation states of individual substances involved in the redox reaction.
Often, the concept of half reactions is used to describe what occurs in an electrochemical cell, such as a Galvanic cell battery. Half reactions can be written to describe both the metal undergoing oxidation (known as the anode) and the metal undergoing reduction (known as the cathode).
Half reactions are often used as a method of balancing redox reactions. For oxidation–reduction reactions in acidic conditions, after balancing the atoms and oxidation numbers, one will need to add H+ ions to balance the hydrogen in the half reaction. For oxidation–reduction reactions in basic conditions, after balancing the atoms and oxidation numbers, first treat the equation as if it were in acidic solution, and then add OH- ions to neutralize the H+ ions in the half reactions (which would give H2O).
Example: Zn and Cu Galvanic cell
Consider the Galvanic cell shown in the adjacent image: it is constructed with a piece of zinc (Zn) submerged in a solution of zinc sulfate (ZnSO4) and a piece of copper (Cu) submerged in a solution of copper(II) sulfate (CuSO4). The overall reaction is:
Zn_{(s)}{} + CuSO4_{(aq)} -> ZnSO4_{(aq)}{} + Cu_{(s)}
At the Zn anode, oxidation takes place (the metal loses electrons). This is represented in the following oxidation half reaction (note that the electrons are on the products side):
Zn_{(s)} -> Zn^2+ + 2e-
At the Cu cathode, reduction takes place (electrons are accepted). This is represented in the following reduction half reaction (note that the electrons are on the reactants side):
Cu^2+ + 2e- -> Cu_{(s)}
Example: oxidation of magnesium
Consider the example of burning magnesium ribbon (Mg). When magnesium burns, it combines with oxygen (O2) from the air to form magnesium oxide (MgO) according to the following equation:
2Mg_{(s)}{} + O2_{(g)} -> 2MgO_{(s)}
Magnesium oxide is an ionic compound containing Mg^2+ and O^2- ions, whereas Mg and O2 are elements with no charges.
The Mg with zero charge gains a +2 charge going from the reactant side to the product side, and the O with zero charge gains a -2 charge. This is because when Mg becomes Mg^2+, it loses 2 electrons. Since there are 2 Mg on the left side, a total of 4 electrons are lost according to the following oxidation half reaction:
2Mg_{(s)} -> 2Mg^2+ + 4e-
On the other hand, O2 was reduced: its oxidation state goes from 0 to -2. Thus, a reduction half reaction can be written for the O2 as it gains 4 electrons:
O2_{(g)}{} + 4e- -> 2O^2-
The overall reaction is the sum of both half reactions:
2Mg_{(s)}{} + O2_{(g)}{} + 4e- -> 2Mg^2+ + 2O^2- + 4e-
When a chemical reaction, especially a redox reaction, takes place, we do not see the electrons as they appear and disappear during the course of the reaction; what we see are the reactants (starting materials) and end products. Due to this, electrons appearing on both sides of the equation are canceled. After canceling, the equation is re-written as
2Mg_{(s)}{} + O2_{(g)} -> 2Mg^2+ + 2O^2-
Two ions, the positive Mg^2+ and the negative O^2-, exist on the product side, and they combine immediately to form the compound magnesium oxide (MgO) due to their opposite charges (electrostatic attraction). In any given oxidation–reduction reaction, there are two half reactions: an oxidation half reaction and a reduction half reaction. The sum of these two half reactions is the oxidation–reduction reaction.
Half-reaction balancing method
Consider the reaction below:
Cl2 + 2Fe^2+ -> 2Cl- + 2Fe^3+
The two elements involved, iron and chlorine, each change oxidation state; iron from +2 to +3, chlorine from 0 to -1. There are then effectively two half reactions occurring. These changes can be represented in formulas by inserting appropriate electrons into each half reaction:
2Fe^2+ -> 2Fe^3+ + 2e-
Cl2 + 2e- -> 2Cl-
Given two half reactions it is possible, with knowledge of appropriate electrode potentials, to arrive at the complete (original) reaction the same way. The decomposition of a reaction into half reactions is key to understanding a variety of chemical processes. For example, in the above reaction, it can be shown that this is a redox reaction in which Fe is oxidised, and Cl is reduced. Note the transfer of electrons from Fe to Cl. Decomposition is also a way to simplify the balancing of a chemical equation. A chemist can atom balance and charge balance one piece of an equation at a time.
For example, the chlorine half reaction is built up in steps:
Cl2 -> Cl-
becomes, after balancing the atoms,
Cl2 -> 2Cl-
then 2e- is added to the left-hand side to balance the charge, and the half reaction finally becomes
Cl2 + 2e- -> 2Cl-
It is also possible and sometimes necessary to consider a half reaction in either basic or acidic conditions, as there may be an acidic or basic electrolyte in the redox reaction. Due to this electrolyte it may be more difficult to satisfy the balance of both the atoms and charges. This is done by adding H+, OH-, and/or H2O to either side of the reaction until both atoms and charges are balanced.
Consider the half reaction below:
PbO2 -> PbO
OH-, H2O, and e- can be used to balance the charges and atoms in basic conditions, as long as it is assumed that the reaction is in water.
2e- + H2O + PbO2 -> PbO + 2OH-
Again consider the half reaction below:
PbO2 -> PbO
H+, H2O, and e- can be used to balance the charges and atoms in acidic conditions, as long as it is assumed that the reaction is in water.
2e- + 2H+ + PbO2 -> PbO + H2O
Notice that both sides are both charge balanced and atom balanced.
Often there will be both H+ and OH- present in acidic and basic conditions, but the resulting reaction of the two ions will yield water (shown below):
H+ + OH- -> H2O
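As a quick consistency check, balanced half reactions such as the acidic-condition one above can be verified programmatically. The sketch below uses an ad hoc species encoding (element counts, charge, stoichiometric coefficient) and simply confirms that atoms and net charge match on both sides.

```python
def totals(side):
    """Sum up element counts and net charge for one side of a half reaction."""
    atoms, charge = {}, 0
    for formula, q, n in side:
        for element, count in formula.items():
            atoms[element] = atoms.get(element, 0) + n * count
        charge += n * q
    return atoms, charge

# 2e- + 2H+ + PbO2 -> PbO + H2O
lhs = [({}, -1, 2),                  # 2 e-
       ({"H": 1}, +1, 2),            # 2 H+
       ({"Pb": 1, "O": 2}, 0, 1)]    # PbO2
rhs = [({"Pb": 1, "O": 1}, 0, 1),    # PbO
       ({"H": 2, "O": 1}, 0, 1)]     # H2O

print(totals(lhs))                   # ({'H': 2, 'Pb': 1, 'O': 2}, 0)
print(totals(lhs) == totals(rhs))    # True: atoms and charge both balance
```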
See also
Electrode potential
Standard electrode potential (data page)
References
Electrochemistry | Half-reaction | Chemistry | 1,377 |
38,094,287 | https://en.wikipedia.org/wiki/Simcenter%20Amesim | Simcenter Amesim is a commercial simulation software package for the modeling and analysis of multi-domain systems. It belongs to the systems engineering domain and falls within the field of mechatronic engineering.
The software package is a suite of tools used to model, analyze and predict the performance of mechatronic systems. Models are described using nonlinear, time-dependent analytical equations that represent the system's hydraulic, pneumatic, thermal, electric or mechanical behavior. Compared to 3D CAE modeling, this approach makes it possible to simulate the behavior of systems before detailed CAD geometry is available; hence it is used earlier in the system design cycle, or V-model.
To create a simulation model for a system, a set of libraries is used. These contain pre-defined components for different physical domains. The component icons have to be connected, and for this purpose each icon has ports, which expose several inputs and outputs. Causality is enforced by linking the inputs of one icon to the outputs of another icon (and vice versa).
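The following toy sketch (plain Python; not Simcenter Amesim's actual API or file format) illustrates this causal, port-based style: each lumped component maps its input port to its output port, the components are wired output-to-input, and the coupled system is advanced with a fixed integration step.

```python
def spring_force(x, k=100.0):
    """Spring component: displacement input port -> force output port (effort)."""
    return -k * x

def mass_velocity(v, force, m=1.0, dt=1e-3):
    """Mass component: force input port -> updated velocity output port (flow)."""
    return v + (force / m) * dt

x, v, dt = 0.1, 0.0, 1e-3        # initial displacement (m) and velocity (m/s)
for _ in range(5000):            # simulate 5 s with a fixed-step solver
    f = spring_force(x)          # the spring's output feeds the mass's input...
    v = mass_velocity(v, f)      # ...and the mass's output (velocity)...
    x += v * dt                  # ...integrates back into the spring's input
print(x, v)                      # undamped oscillation at sqrt(k/m) = 10 rad/s
```

In the real tool the wiring, causality assignment and solver choice are handled by the platform; the point here is only the input/output coupling between component ports.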
Simcenter Amesim libraries are written in C language, Python and also support Modelica, which is a non-proprietary, object-oriented, equation based language to model complex physical systems containing, e.g., mechanical, electrical, electronic, hydraulic, thermal, control, electric power or process-oriented subcomponents. The software runs on Linux and on Windows platforms.
Simcenter Amesim is a part of the Siemens Digital Industries Software Simcenter portfolio. This combines 1D simulation, 3D CAE and physical testing with intelligent reporting and data analytics. This portfolio is intended for development of complex products that include smart systems, through implementing a Predictive Engineering Analytics approach.
History
The Simcenter Amesim software was developed by Imagine S.A., a company which was acquired in June 2007 by LMS International, which itself was acquired in November 2012 by Siemens AG.
The Imagine S.A. company was created in 1987 by Dr Michel Lebrun from the University Claude Bernard in France, to control complex dynamic systems coupling hydraulic servo-actuators with finite-elements mechanical structures. The initial engineering project involved the deck elevation of the sinking Ekofisk North Sea petroleum platforms.
In the early 1990s, the association with Prof. C. W. Richards of the University of Bath in England led to the first commercial release of Simcenter Amesim in 1995, which was then dedicated to fluid control systems.
Simcenter Amesim is used by companies in the automotive, aerospace and other advanced manufacturing industries.
Usage
Simcenter Amesim is a multi-domain software package that supports modeling a variety of physical domains (hydraulic, pneumatic, mechanical, electrical, thermal, electromechanical). It is based on bond graph theory.
Under the Windows platform, Simcenter Amesim works with the free GCC compiler, which is provided with the software. It also works with the Microsoft Visual C++ compiler and its free Express edition. Since version 4.3.0, Simcenter Amesim has used the Intel compiler on all platforms.
Platform facilities
Simcenter Amesim features:
Platform facilities: graphical user interface, interactive help, supercomponents, post-processed variables, experiments management, meta-data, statechart designer
Analysis tools: table editor, plots, dashboard, 3D animation, replay of results, linear analysis (eigenvalues, modal shapes, transfer functions, root locus), activity index, power and energy computation
Optimization, robustness, DOE: design of experiments, optimization, Monte Carlo methods
Solvers and numerics: LSODA, DASSL, DASKR, fixed-step solvers, discrete partitioning, parallel processing, Simcenter Amesim/Simcenter Amesim co-simulations
Software interfaces: generic co-simulation (to couple any external software with Simcenter Amesim), functional mock-up interface (export)
MIL/SIL/HIL and real-time: plant/control, various real-time targets
Simulator scripting: scripting functions to pilot simulations from Microsoft Excel, MATLAB, Scilab and Python, plus support for C and Python development and reverse-engineering script generation from a model
Customization: user-customized pre- and post-processing tools with Python, script caller assistant, editor of parameter groups, app designer
Modelica platform: support of the Modelica modeling language
1D/3D CAE: CAD import, CFD software co-simulation, FEA import of reduced modal bases with pre-defined frontier nodes, MBS software co-simulation and import/export
Development
Users can develop submodels from different standard submodels (supercomponents) using the Component Customization functionality, or by programming them in C or Fortran with the Submodel Editor.
Physical libraries
Physical libraries from which models can be built include control, electrical networks, mechanical, fluid, thermodynamic, IC engine, and aerospace and defense libraries.
Education and research
Simcenter Amesim is used by engineering schools and universities.
It is also the reference framework for various research projects in Europe.
Release history
See also
Model-based design
Lumped-element model
Distributed-element model
Bond graphs
GT-SUITE
Mechatronics
Control theory
Real-time computing
Hardware-in-the-loop simulation
Systems engineering
Simulink
20-sim
Wolfram SystemModeler
References
Simulation software
Numerical software
Computer-aided engineering
Simulation programming languages
Fortran | Simcenter Amesim | Mathematics,Engineering | 1,113 |
18,339 | https://en.wikipedia.org/wiki/Law%20of%20multiple%20proportions | In chemistry, the law of multiple proportions states that in compounds which contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. For instance, the ratio of the hydrogen content in methane (CH4) and ethane (C2H6) per measure of carbon is 4:3. This law is also known as Dalton's Law, named after John Dalton, the chemist who first expressed it. The discovery of this pattern led Dalton to develop the modern theory of atoms, as it suggested that the elements combine with each other in multiples of a basic quantity. Along with the law of definite proportions, the law of multiple proportions forms the basis of stoichiometry.
The law of multiple proportions often does not apply when comparing very large molecules. For example, if one tried to demonstrate it using the hydrocarbons decane (C10H22) and undecane (C11H24), one would find that 100 grams of carbon could react with 18.46 grams of hydrogen to produce decane or with 18.31 grams of hydrogen to produce undecane, for a ratio of hydrogen masses of 121:120, which is hardly a ratio of "small" whole numbers.
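The arithmetic behind these figures is a one-liner; note that in the final ratio the atomic masses cancel, leaving exactly (22·11)/(24·10) = 242/240 = 121/120.

```python
# Hydrogen mass per 100 g of carbon (atomic masses C = 12.011, H = 1.008):
C, H = 12.011, 1.008
decane   = 100 * (22 * H) / (10 * C)   # ~18.46 g of hydrogen per 100 g of carbon
undecane = 100 * (24 * H) / (11 * C)   # ~18.31 g of hydrogen per 100 g of carbon
print(decane, undecane, decane / undecane)   # ratio ~1.00833, i.e. 121:120
```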
History
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, and it was not his only mistake. But in other cases, he got their formulas right. The following examples come from Dalton's own books A New System of Chemical Philosophy (in two volumes, 1808 and 1817):
Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. Tin oxides are actually crystals, they don't exist in molecular form.
Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder, which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron", which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide (FeO) and iron(III) oxide (Fe2O3). Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide". As with tin oxides, iron oxides are crystals.
Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 46.7% nitrogen and 53.3% oxygen, which means there are 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO, and NO2.
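The same per-100-g arithmetic reproduces the small whole-number ratios in Dalton's tin and iron examples above:

```python
def oxygen_per_100g_metal(metal_pct, oxygen_pct):
    """Grams of oxygen combined with 100 g of the metal."""
    return 100 * oxygen_pct / metal_pct

tin  = [oxygen_per_100g_metal(88.1, 11.9), oxygen_per_100g_metal(78.7, 21.3)]
iron = [oxygen_per_100g_metal(78.1, 21.9), oxygen_per_100g_metal(70.4, 29.6)]
print(tin)                        # ~[13.5, 27.1], a ratio of about 1:2
print(iron)                       # ~[28.0, 42.0], a ratio of about 2:3
print(tin[1] / tin[0], iron[1] / iron[0])   # ~2.00 and ~1.50
```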
The earliest definition of Dalton's observation appears in an 1807 chemistry encyclopedia.
The first known writer to refer to this principle as the "doctrine of multiple proportions" was Jöns Jacob Berzelius in 1813.
Dalton's atomic theory garnered widespread interest but not universal acceptance shortly after he published it because the law of multiple proportions by itself was not complete proof of the existence of atoms. Over the course of the 19th century, other discoveries in the fields of chemistry and physics would give atomic theory more credence, such that by the end of the 19th century it had found universal acceptance.
Footnotes
References
Bibliography
Physical chemistry
Stoichiometry | Law of multiple proportions | Physics,Chemistry | 1,288 |
520,754 | https://en.wikipedia.org/wiki/Yevgeny%20Primakov | Yevgeny Maksimovich Primakov (29 October 1929 – 26 June 2015) was a Russian politician and diplomat who served as Prime Minister of Russia from 1998 to 1999. During his long career, he also served as Minister of Foreign Affairs from 1996 to 1998, the Director of Foreign Intelligence from 1991 to 1996, and Speaker of the Supreme Soviet of the Soviet Union from 1990 to 1991. Primakov was an academician (Arabist) and a member of the Presidium of the Russian Academy of Sciences.
Personal life
Primakov was born in Kyiv in the Ukrainian SSR, and grew up in Tbilisi in the Georgian SSR.
Primakov's father's surname was Nemchenko; he had been imprisoned in the Gulag during the Stalinist purges. Primakov's mother was Jewish, named Anna Yakovlevna Primakova. She worked as an obstetrician and was a cousin of a famous physiologist.
Primakov was educated at the Moscow Institute of Oriental Studies, graduating in 1953, and carried out postgraduate work at Moscow State University. He graduated with a degree in Arabic.
His grandson is Yevgeny Primakov Jr., a journalist, TV host, politician and diplomat.
Early career
From 1956 to 1970, he worked as a journalist for Soviet radio and a Middle Eastern correspondent for Pravda newspaper. During this time, he was sent frequently on intelligence missions to the Middle East and the United States as a KGB co-optee under codename MAKSIM. Primakov reportedly may have been coerced into joining the intelligence services.
As a Senior Researcher at the Institute of World Economy and International Relations, Primakov entered into scientific society in 1962. From 30 December 1970 to 1977, he served as deputy director of the Institute of World Economy and International Relations, part of the Academy of Sciences of the Soviet Union. In this role he participated in the Dartmouth Conferences alongside, among others, Charles Yost. From 1977 to 1985 he was Director of the Institute of Oriental Studies of the USSR Academy of Sciences. During this time he was also First Deputy Chairman of the Soviet Peace Committee. In 1985 he returned to the Institute of World Economy and International Relations, serving as Director until 1989.
Primakov became involved in national politics in 1989, as the Chairman of the Soviet of the Union, one of two chambers of the Soviet parliament. From 1990 until 1991 he was a member of Soviet leader Mikhail Gorbachev's Presidential Council of the Soviet Union. He served as Gorbachev's special envoy to Iraq in the run-up to the Persian Gulf War, in which capacity he held talks with President Saddam Hussein to try to convince him to withdraw Iraqi forces from Kuwait.
Foreign intelligence chief (1991–1996)
After the failed August 1991 putsch, Primakov was appointed First Deputy Chairman of the KGB and Director of the KGB First Chief Directorate responsible for foreign intelligence.
After the formation of the Russian Federation, Primakov shepherded the transition of the KGB First Chief Directorate to the control of the Russian Federation government, under the new name Foreign Intelligence Service (SVR). Primakov preserved the old KGB foreign intelligence apparatus under the new SVR label, and led no personnel purges or structural reforms. He served as SVR director from 1991 until 1996.
Russian Minister of Foreign Affairs (1996–1998)
Primakov served as Minister of Foreign Affairs from January 1996 until September 1998. As foreign minister, he gained respect at home and abroad, with a reputation as a tough but pragmatic supporter of Russia's interests and as an opponent of NATO's expansion into the former Eastern Bloc, though on 27 May 1997, after five months of negotiation with NATO Secretary General Javier Solana, Russia signed the Founding Act, which is seen as marking the end of Cold War hostilities. He supported Slobodan Milošević during the Yugoslav Wars.
He was also famously an advocate of multilateralism as an alternative to American global hegemony following the collapse of the Soviet Union and the end of the Cold War. Primakov called for a Russian foreign policy based on low-cost mediation while expanding influence towards the Middle East and the former Soviet republics. Under what came to be called the "Primakov doctrine", beginning in 1999 he promoted Russia, China, and India as a "strategic triangle" to counterbalance the United States. The move was interpreted by some observers as an agreement to fight together against 'colour revolutions' in Central Asia.
Prime Minister of Russia (1998–1999)
After Yeltsin's bid to reinstate Viktor Chernomyrdin as Prime Minister of Russia was blocked by the State Duma in September 1998, the President turned to Primakov as a compromise figure whom he rightly judged would be accepted by the parliament's majority. As Prime Minister, Primakov was given credit for forcing some very difficult reforms in Russia; most of them, such as the tax reform, became major successes. Following the 1998 harvest, which was the worst in 45 years, coupled with a plummeting ruble, one of Primakov's first actions as Prime Minister, in October 1998, was to appeal to the United States and Canada for food aid, while also appealing to the European Union for economic relief.
While Primakov's opposition to perceived US unilateralism was popular among some Russians, it also led to a breach with the West during the 1999 NATO bombing of Yugoslavia, and isolated Russia during subsequent developments in the former Yugoslavia.
On 24 March 1999, Primakov was heading to Washington, D.C. for an official visit. Flying over the Atlantic Ocean, he learned that NATO had started to bomb Yugoslavia. Primakov decided to cancel the visit, ordered the plane to turn around over the ocean and returned to Moscow in a manoeuvre popularly dubbed "Primakov's Loop".
Yeltsin fired Primakov on 12 May 1999, ostensibly over the sluggish pace of the Russian economy. Many analysts believed the firing of Primakov reflected Yeltsin's fear of losing power to a more successful and popular figure, although sources close to Yeltsin said at the time that the president viewed Primakov as being too close to the Communist Party. Primakov himself would have had good chances as a candidate for the presidency. Primakov had refused to dismiss Communist ministers while the Communist Party was leading the preparation of unsuccessful impeachment proceedings against the president. Ultimately, Yeltsin resigned at the end of the year and was succeeded by his last prime minister, Vladimir Putin, whom Primakov had tried to fire from his role as head of the FSB when he tapped the phone of the Duma President. Primakov's dismissal was extremely unpopular with the Russian population: according to a poll, 81% of the population did not approve of the decision, and even among supporters of the liberal pro-Western party Yabloko, 84% did not approve of the dismissal.
Post-PM career
After 1988, Primakov held several roles: Academic Secretary of the World Economy and International Relations Division, director of the Institute of World Economy and International Relations and member of the Presidium of the Academy of Sciences of the Soviet Union.
Before Yeltsin's resignation, Primakov supported the Fatherland – All Russia electoral faction, which at that time was the major opponent of the pro-Putin Unity, and launched his presidential bid. Initially considered the man to beat, Primakov was rapidly overtaken by the factions loyal to Prime Minister Vladimir Putin in the December 1999 Duma elections. Primakov officially abandoned the presidential race in his TV address on 4 February 2000 less than two months before the 26 March presidential elections. Soon he became an adviser to Putin and a political ally. On 14 December 2001, Primakov became President of the Russian Chamber of Commerce and Industry, a position he held until 2011.
In February and March 2003, he visited Iraq and held talks with Iraqi President Saddam Hussein, as a special representative of President Putin. He passed on a message from Putin calling for Hussein to resign voluntarily. He tried to prevent the 2003 U.S.-led invasion of Iraq, a move which received some support from several nations opposed to the war. Primakov suggested that Hussein should hand over all of Iraq's weapons of mass destruction to the United Nations, among other things. However, Hussein told Primakov that he was confident that nothing would befall him personally, a belief that was later proven incorrect. Primakov later claimed Hussein's execution in 2006 was rushed to prevent him from revealing information on Iraq–United States relations that could embarrass the U.S. government. In a 2006 speech, Primakov thundered: "The collapse of the US policies pursued in Iraq delivered a fatal blow on the American doctrine of unilateralism."
On 26 May 2008, Primakov was elected as a member of the Presidium of the Russian Academy of Sciences. In 2009, the University of Niš, Serbia awarded Primakov an honorary doctorate.
In November 2004, Primakov testified in defense of the former Yugoslav President Slobodan Milošević, on trial for war crimes. He had earlier led a Russian delegation that met with Milošević during the NATO bombing of Yugoslavia in March 1999.
Death
Primakov died in Moscow on 26 June 2015, at the age of 85, after a prolonged illness (liver cancer). He was buried with military honours at Novodevichy Cemetery. He was lionized in Russian obituaries as "the Russian Kissinger", and President Vladimir Putin said Primakov had made a "colossal contribution to the formation of modern Russia... This is a sad, grievous loss for our society. … Yevgeny's authority was respected both in our country and abroad..." Indeed, "his death occurred at a time when his positions [were] very much the official line and the backbone for Putin's grand strategy."
In honor of Primakov, the Primakov Readings were established in October 2015: an annual international summit aimed at promoting dialogue on current global trends in the world economy, international politics and security among high-ranking experts, diplomats and decision-makers from around the globe, organized by the Institute of World Economy and International Relations and held in Moscow.
One of his credos was: "Those who do good will be rewarded. Life gets even with those who do bad."
Awards
Order of Merit for the Fatherland 1st Class (2009)
Order of Merit for the Fatherland 2nd Class (1998)
Order of Merit for the Fatherland 3rd Class (1995)
Medal "In Commemoration of the 850th Anniversary of Moscow" (1997)
Order of Honor (2004)
Order of Lenin (1986)
Order of the Badge of Honor (1985)
Order of Friendship of Peoples (1979)
Order of the Red Banner of Labour (1975)
Medal "Veteran of Labour" (1974)
Order of Friendship (Tajikistan, 1999)
Order of Yaroslav I the Wise 5th Class (Ukraine, 2004)
Order of Danaker (Kyrgyzstan) (2005)
Order of Friendship of Peoples (Belarus, 2005)
Order of Dostyk 1st Class (Kazakhstan, 2007)
Order of the Republic 1st Class (Transnistria, 2009)
Medal of Independence (Kazakhstan, 2012)
Order of Jerusalem 1st Class (Palestinian National Authority, 2014)
Demidov Prize (2012)
Recipient of the USSR State Prize (1980)
Recipient of the Nasser Prize (1974)
Recipient of the Avicenna Prize (1983)
Recipient of the George F. Kennan Prize (1990)
Recipient of the Hugo Grotius Prize for the huge contribution to the development of international law and for the creation of a multipolar world doctrine (2000).
Publications
1979: Anatomy of the Middle East Conflict
2003: A World Challenged: Fighting Terrorism in the Twenty-First Century
2004: Russian Crossroads: Toward the New Millennium
2009: Russia and the Arabs: Behind the Scenes in the Middle East from the Cold War to the Present
2014: Встречи на перекрестках (Meetings at the crossroads)
See also
Yevgeny Primakov's Cabinet
Yevgeny Primakov Jr.
Operation INFEKTION
Primakov Readings
Notes
References
External links
Yevgeny Primakov's Project Syndicate op/eds
Sergey Lavrov predicts historians may coin new term: Primakov Doctrine
1929 births
2015 deaths
21st-century Russian politicians
Politicians from Kyiv
Academic staff of the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation
Directors of the Foreign Intelligence Service (Russia)
Full Members of the Russian Academy of Sciences
Full Members of the USSR Academy of Sciences
Honorary members of the Russian Academy of Education
Members of the Tajik Academy of Sciences
Moscow Institute of Oriental Studies alumni
Moscow State University alumni
Ambassador Extraordinary and Plenipotentiary (Russian Federation)
Chairmen of the Soviet of the Union
Heads of government of the Russian Federation
Ministers of foreign affairs of Russia
Candidates of the Central Committee of the 27th Congress of the Communist Party of the Soviet Union
Candidates of the Politburo of the 27th Congress of the Communist Party of the Soviet Union
Members of the Central Committee of the 27th Congress of the Communist Party of the Soviet Union
Third convocation members of the State Duma (Russian Federation)
Eleventh convocation members of the Supreme Soviet of the Soviet Union
Demidov Prize laureates
Recipients of the Lomonosov Gold Medal
Recipients of the Order "For Merit to the Fatherland", 1st class
Recipients of the Order "For Merit to the Fatherland", 2nd class
Recipients of the Order "For Merit to the Fatherland", 3rd class
Recipients of the Order of Alexander Nevsky
Recipients of the Order of the Badge of Honour
Recipients of the Order of Friendship of Peoples
Recipients of the Order of Honour (Russia)
Recipients of the Order of Prince Yaroslav the Wise, 5th class
Recipients of the Order of the Red Banner of Labour
Recipients of the USSR State Prize
State Prize of the Russian Federation laureates
Geopoliticians
Jewish prime ministers
Jewish Russian politicians
KGB officers
Pravda people
Russian Arabists
Russian orientalists
Russian people of Jewish descent
Russian political scientists
Soviet Arabists
Soviet orientalists
Deaths from liver cancer
Burials at Novodevichy Cemetery | Yevgeny Primakov | Technology | 2,882 |
11,421,126 | https://en.wikipedia.org/wiki/Insulin-like%20growth%20factor%20II%20IRES | The insulin-like growth factor II (IGF-II) internal ribosome entry site (IRES) is found in the 5' UTR of IGF-II leader 2 mRNA. This RNA element allows cap-independent translation of the mRNA, and it is thought that this family may facilitate continuous IGF-II production in rapidly dividing cells during development. Conventional ribosomal scanning along the human IGF-II leader is thought to be hindered by an open reading frame and by the leader's ability to fold into a stable structure.
References
External links
IRESite page for IGF2 leader2
Cis-regulatory RNA elements | Insulin-like growth factor II IRES | Chemistry | 132 |
23,967,264 | https://en.wikipedia.org/wiki/Advanced%20Engineering%20Materials | Advanced Engineering Materials is a monthly peer-reviewed materials science journal.
Advanced Engineering Materials publishes peer-reviewed reviews, communications, and full papers on topics centered on structural materials, such as metals, alloys, ceramics, composites, and polymers.
Abstracting and indexing
Thomson Reuters
Current Contents / Engineering, Computing & Technology
Journal Citation Reports
Materials Science Citation Index
Science Citation Index Expanded
Elsevier
Compendex
SCOPUS
CSA Illumina
Advanced Polymer Abstracts
Ceramic Abstracts
Civil Engineering Abstracts
Computer & Information Systems Abstracts
Computer Information & Technology Abstracts
Earthquake Engineering Abstracts
Mechanical & Transportation Engineering Abstracts
Technology Research Database
Engineered Materials Abstracts
International Aerospace Abstracts
Materials Business File
METADEX
Other databases
Chemical Abstracts Service - SciFinder
PASCAL database
FIZ Karlsruhe
INSPEC
Polymer Library
See also
Advanced Materials
References
Chemistry journals
Materials science journals
English-language journals
Monthly journals
Wiley-Blackwell academic journals | Advanced Engineering Materials | Materials_science,Engineering | 172 |
40,133,278 | https://en.wikipedia.org/wiki/HD%2041742%20and%20HD%2041700 | HD 41742 and HD 41700 form a star system that lies approximately 88 light-years away in the constellation of Puppis. The system consists of two bright stars, where the primary is orbited by two fainter stars, making it a quadruple with an unequal hierarchy.
Component discovery
HD 41742 B was discovered early in the history of visual binary observation, owing to the brightness of the primary. The earliest measurement in the Washington Double Star Catalog (WDS) dates to 1837 and was made by John Herschel, stating a position angle of 246° and a separation of 1.1" for the companion. Surprisingly, recent measures suggest that the secondary has moved significantly in the nearly two centuries since, with it lying at a position angle of around 215° and a separation increasing from 5.30" in the late 1970s to 5.95" in 2010. This translates to a minimum change in physical separation from 142 to 159 AU over about 35 years, which suggests that HD 41742 B is moving quickly away from the primary.
Lying at a considerably wider separation, HD 41700 was first observed relative to HD 41742 later than the tighter binary, despite being much brighter. The first measurement in the WDS dates to 1854 and was again made by Herschel, giving a position angle of 320° and a separation of 174". More recent values agree on the position angle, but suggest a separation closer to 200". The wide separation of this tertiary component means that it has a separate Hipparcos entry from the primary, which confirms that the two stars lie at the same distance and are co-moving. The physical separation between the two is about 0.026 parsecs (0.084 light-years), or approximately 5,400 AU. This is somewhat smaller than, but of the same order as, the ~15000 AU separation between Alpha Centauri AB and Proxima Centauri; such wide separations between components are relatively rare, at least for solar-type stars.
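These projected separations follow from the small-angle relation s[AU] ≈ θ[arcsec] × d[pc]; a quick check using the figures quoted above (88 light-years ≈ 27 parsecs):

```python
d_pc = 88 / 3.2616                 # distance in parsecs from the quoted 88 ly
for component, theta in (("B, 2010", 5.95), ("C, wide pair", 200.0)):
    s_au = theta * d_pc            # projected separation in AU
    print(component, round(s_au), "AU")   # ~161 AU for B, ~5400 AU for C
```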
Radial velocity observations of HD 41742 A with the HARPS spectrograph detected variations at a level of several km/s over a period of months, indicating that the star is a single-lined spectroscopic binary (SB1). Though an orbital fit was not attempted in that work, a good fit is possible, constraining the minimum mass of HD 41742 Ab and implying that it is on a high-eccentricity orbit of roughly 222 days around the primary. Given that the lines of the secondary are not detected, it must have a significantly lower luminosity than the primary, indicating that it is of late spectral type.
Properties
On the celestial sphere, HD 41742/41700 can be seen as a 6th magnitude star (a magnitude barely observable by the naked eye under good conditions) lying close to the border between Puppis and Pictor. The nearest bright star to its location is the 4th magnitude Eta Columbae, approximately two degrees to the north; the system lies about a quarter of the distance between Eta Columbae and Canopus (Alpha Carinae) on the sky.
HD 41742 A and HD 41700 (C) are similar stars, with their colours indicating spectral types of F6 and F7.5; this means that the two stars are about 500 K hotter than the Sun, and that the difference in temperature between the stars is about 150 kelvins. The stars lie slightly below the main sequence on the Hertzsprung–Russell diagram, which is probably due to their sub-solar metallicity ([Fe/H] ≈ −0.2).
HD 41742 B is a much cooler star than the brighter components; its B−V colour indicates a spectral type of K3, making it approximately 1000 K cooler than the Sun. It lies on the main sequence of the Hertzsprung–Russell diagram, and its photometry is fully consistent with a K-type dwarf.
Indication of a young age for the HD 41742/41700 system was first found by Henry et al. (1996), who detected strong chromospheric activity in HD 41700; they measured a log R'HK of −4.35 for the star, significantly higher than a "quiet" value of < −4.70, indicating that the system is considerably younger than 1 Gyr. The brightest stars in the system are both moderately fast rotators for late-F dwarfs, again indicating that they are young. Finally, HD 41700 has a somewhat large lithium content; because lithium is used up by a star at an approximately constant rate over its lifetime, this can be used to estimate a star's age. For HD 41700, its lithium abundance indicates an age of 200 ± 50 million years.
Some young star systems remain loosely associated with other stars that formed in the same molecular cloud as they move through space, known as a moving group. The HD 41742/41700 system has space velocities of (UVW) = −37.8, −10.4, −14.6 km/s, which is similar to those of the Hyades (UVW = −39.7, −17.7, −2.4 km/s); however, the system is probably not a Hyad because it has a lower peculiar velocity than expected, as well as a lower metallicity and lithium age than the Hyades.
Another feature prevalent around young stars is debris disks. For HD 41742 A and HD 41700 (C), IRAS and ISO detected infra-red excesses, which are typically indicative of disks of material re-radiating absorbed light at redder wavelengths; however, in both cases evidence against the excesses has been found. For HD 41742 A, the excess is offset by 26", which is large enough that contamination from another object is likely responsible for the excess, while for HD 41700 (C) the excess has not been confirmed by Spitzer observations.
Planet searches
HD 41700 (C) is included on the CORALIE and Keck-HIRES planet search samples. No variability has been announced so far, so the star likely does not host a close-in, easily detectable giant planet.
HD 41742 A was included on a planet search around early-type (<~F7) stars with HARPS that detected its spectroscopic binarity, as discussed above.
Notes
References
Puppis
041700
028764
9200
2157
F-type main-sequence stars
K-type main-sequence stars
4
Spectroscopic binaries
Durchmusterung objects | HD 41742 and HD 41700 | Astronomy | 1,341 |
14,338,608 | https://en.wikipedia.org/wiki/Neural%20backpropagation | Neural backpropagation is the phenomenon in which, after the action potential of a neuron creates a voltage spike down the axon (normal propagation), another impulse is generated from the soma and propagates towards the apical portions of the dendritic arbor or dendrites (from which much of the original input current originated). In addition to active backpropagation of the action potential, there is also passive electrotonic spread. While there is ample evidence to prove the existence of backpropagating action potentials, the function of such action potentials and the extent to which they invade the most distal dendrites remain highly controversial.
Mechanism
When the graded excitatory postsynaptic potentials (EPSPs) depolarize the soma to spike threshold at the axon hillock, the axon first experiences a propagating impulse through the electrical properties of its voltage-gated sodium and voltage-gated potassium channels. An action potential occurs in the axon first because research illustrates that sodium channels on the dendrites exhibit a higher threshold than those on the membrane of the axon (Rapp et al., 1996). Moreover, the higher threshold of the voltage-gated sodium channels on the dendritic membranes helps prevent them from triggering an action potential directly from synaptic input. Instead, only when the soma depolarizes enough from accumulating graded potentials and fires an axonal action potential will these channels be activated to propagate a signal traveling backwards (Rapp et al., 1996). Generally, EPSPs from synaptic activation are not large enough to activate the dendritic voltage-gated calcium channels (usually on the order of a couple of millivolts each), so backpropagation is typically believed to happen only when the cell is activated to fire an action potential. These sodium channels on the dendrites are abundant in certain types of neurons, especially mitral and pyramidal cells, and quickly inactivate. Initially, it was thought that an action potential could only travel down the axon in one direction (towards the axon terminal where it ultimately signaled the release of neurotransmitters). However, recent research has provided evidence for the existence of backwards-propagating action potentials (Staley 2004).
To elaborate, neural backpropagation can occur in one of two ways. First, during the initiation of an axonal action potential, the cell body, or soma, can become depolarized as well. This depolarization can spread through the cell body towards the dendritic tree where there are voltage-gated sodium channels. The depolarization of these voltage-gated sodium channels can then result in the propagation of a dendritic action potential. Such backpropagation is sometimes referred to as an echo of the forward propagating action potential (Staley 2004). It has also been shown that an action potential initiated in the axon can create a retrograde signal that travels in the opposite direction (Hausser 2000). This impulse travels up the axon eventually causing the cell body to become depolarized, thus triggering the dendritic voltage-gated calcium channels. As described in the first process, the triggering of dendritic voltage-gated calcium channels leads to the propagation of a dendritic action potential.
It is important to note that the strength of backpropagating action potentials varies greatly between different neuronal types (Hausser 2000). Some types of neuronal cells show little to no decrease in the amplitude of action potentials as they invade and travel through the dendritic tree while other neuronal cell types, such as cerebellar Purkinje neurons, exhibit very little action potential backpropagation (Stuart 1997). Additionally, there are other neuronal cell types that manifest varying degrees of amplitude decrement during backpropagation. It is thought that this is due to the fact that each neuronal cell type contains varying numbers of the voltage-gated channels required to propagate a dendritic action potential.
Regulation and inhibition
Generally, synaptic signals that are received by the dendrite are combined in the soma in order to generate an action potential that is then transmitted down the axon toward the next synaptic contact. Thus, the backpropagation of action potentials poses the threat of initiating an uncontrolled positive feedback loop between the soma and the dendrites. For example, as an action potential is triggered, its dendritic echo could enter the dendrite and potentially trigger a second action potential. If left unchecked, an endless cycle of action potentials triggered by their own echo would be created. In order to prevent such a cycle, most neurons have a relatively high density of A-type K+ channels.
A-type K+ channels belong to the superfamily of voltage-gated ion channels and are transmembrane channels that help maintain the cell's membrane potential (Cai 2007). Typically, they play a crucial role in returning the cell to its resting membrane potential following an action potential by allowing an inhibitory current of K+ ions to flow quickly out of the neuron. The presence of these channels in such high density in the dendrites explains the dendrites' inability to initiate an action potential, even during synaptic input. Additionally, the presence of these channels provides a mechanism by which the neuron can suppress and regulate the backpropagation of action potentials through the dendrite (Vetter 2000). Pharmacological antagonists of these channels increased the frequency of backpropagating action potentials, demonstrating their importance in keeping the cell from excessive firing (Waters et al., 2004). Results have indicated a linear increase in the density of A-type channels with increasing distance into the dendrite away from the soma. The increase in the density of A-type channels results in a dampening of the backpropagating action potential as it travels into the dendrite. Essentially, inhibition occurs because the A-type channels facilitate the outflow of K+ ions in order to maintain the membrane potential below threshold levels (Cai 2007). Such inhibition limits the EPSP and protects the neuron from entering a never-ending positive feedback loop between the soma and the dendrites.
History
Since the 1950s, evidence has existed that neurons in the central nervous system generate an action potential, or voltage spike, that travels both through the axon to signal the next neuron and backpropagates through the dendrites sending a retrograde signal to its presynaptic signaling neurons. This current decays significantly with travel length along the dendrites, so effects are predicted to be more significant for neurons whose synapses are near the postsynaptic cell body, with magnitude depending mainly on sodium-channel density in the dendrite. It is also dependent on the shape of the dendritic tree and, more importantly, on the rate of signal currents to the neuron. On average, a backpropagating spike loses about half its voltage after traveling nearly 500 micrometres.
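Treating this attenuation as a simple exponential decay (a toy passive model, not a fitted cable equation), the quoted figure of half the voltage lost over roughly 500 micrometres corresponds to a length constant of 500/ln 2 ≈ 721 μm:

```python
import math

LAMBDA_UM = 500 / math.log(2)   # length constant so that V halves at 500 um

def remaining_fraction(x_um):
    """Fraction of the somatic spike amplitude left x_um along the dendrite."""
    return math.exp(-x_um / LAMBDA_UM)

for x in (100, 250, 500, 1000):
    print(x, round(remaining_fraction(x), 2))   # 0.87, 0.71, 0.5, 0.25
```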
Backpropagation occurs actively in the neocortex, hippocampus, substantia nigra, and spinal cord, while in the cerebellum it occurs relatively passively. This is consistent with observations that synaptic plasticity is much more apparent in areas like the hippocampus, which controls spatial memory, than the cerebellum, which controls more unconscious and vegetative functions.
The backpropagating current also causes a voltage change that increases the concentration of Ca2+ in the dendrites, an event which coincides with certain models of synaptic plasticity. This change also affects future integration of signals, leading to at least a short-term response difference between the presynaptic signals and the postsynaptic spike.
Functions
While many questions have yet to be answered regarding neural backpropagation, a number of hypotheses exist about its function. Proposed functions include involvement in synaptic plasticity, involvement in dendrodendritic inhibition, boosting synaptic responses, resetting the membrane potential, retrograde actions at synapses, and conditional axonal output.
Backpropagation is believed to help form LTP (long-term potentiation) and Hebbian plasticity at hippocampal synapses. Since artificial LTP induction (using microelectrode stimulation, voltage clamp, etc.) requires the postsynaptic cell to be slightly depolarized when EPSPs are elicited, backpropagation can serve as the means of depolarizing the postsynaptic cell.
Backpropagating action potentials can induce long-term potentiation by behaving as a signal that informs the presynaptic cell that the postsynaptic cell has fired. Moreover, spike-timing-dependent plasticity refers to the narrow time frame within which coincident firing of both the pre- and postsynaptic neurons will induce plasticity. Neural backpropagation occurring in this window interacts with NMDA receptors at the apical dendrites by assisting in the removal of the voltage-sensitive Mg2+ block (Waters et al., 2004). This process permits the large influx of calcium that provokes a cascade of events leading to potentiation.
Current literature also suggests that backpropagating action potentials are responsible for the release of retrograde neurotransmitters and trophic factors, which contribute to short-term and long-term synaptic efficacy between two neurons. Since backpropagating action potentials essentially exhibit a copy of the neuron's axonal firing pattern, they help establish synchrony between the pre- and postsynaptic neurons (Waters et al., 2004).
Importantly, backpropagating action potentials are necessary for the release of Brain-Derived Neurotrophic Factor (BDNF). BDNF is an essential component for inducing synaptic plasticity and development (Kuczewski N., Porcher C., Ferrand N., 2008). Moreover, backpropagating action potentials have been shown to induce BDNF-dependent phosphorylation of cyclic AMP response element-binding protein (CREB) which is known to be a major component in synaptic plasticity and memory formation (Kuczewski N., Porcher C., Lessmann V., et al. 2008).
Algorithm
While a backpropagating action potential can presumably cause changes in the weight of the presynaptic connections, there is no simple mechanism for an error signal to propagate through multiple layers of neurons, as in the computer backpropagation algorithm. However, simple linear topologies have shown that effective computation is possible through signal backpropagation in this biological sense.
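For contrast with the biological case discussed above, here is a minimal sketch of the computer backpropagation algorithm referenced in this paragraph: a hypothetical two-layer network learns XOR by propagating an error signal backwards through its layers, the multi-layer step for which no simple biological mechanism is known. Network size, learning rate, and the XOR task are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # forward pass: input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the error signal flows output -> hidden -> weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```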
References
Vetter P, et al. Propagation of Action Potentials in Dendrites Depends on Dendritic Morphology. American Physiological Society, 2000; 926–937
Neural circuitry
Neuroscience
Computational neuroscience | Neural backpropagation | Biology | 2,309 |
75,643,932 | https://en.wikipedia.org/wiki/Victor%20Aladjev | Victor Zakharovich Aladjev (; born June 14, 1942) is an Estonian mathematician and cybernetician, creator of the scientific school on the theory of homogeneous structures.
Early life and education
Victor Aladjev was born in 1942 in Grodno to parents Zakhar Ivanovich Aladjev and Maria Adolfovna Novogrotska. His father participated in the underground resistance organization during World War II while in German-occupied Grodno.
Aladjev entered the University of Grodno in 1959, later transferring to the University of Tartu, Estonia, in 1962, where he earned his degree in mathematics in 1966. He subsequently entered the graduate school of the Estonian Academy of Sciences in 1969, earning a doctorate in mathematics (DSc) in 1972, specializing in theoretical and technical cybernetics. His doctoral work focused on the mathematical theory of homogeneous structures, resulting in the award of a DSc under the guidance of Professor Richard E. Bellman.
Scientific career
In 1970, Aladjev became the President of the Tallinn Research Group (TRG), contributing to the mathematical theory of homogeneous structures, particularly Cellular Automata (CA). Between 1972 and 1990, Aladjev held various senior positions in design, technological, and research organizations in Tallinn.
His involvement in international mathematical publications includes serving as a referent and editorial board member for Zentralblatt für Mathematik since 1972 and being a member of International Association of Mathematical Modeling (IAMM) since 1980. In 1993, he was elected to the International Federation for Information Processing (IFIP) working group on the mathematical theory of homogeneous structures and its applications.
In 1994, Aladjev was honored with election as an academician of the Russian Academy of Cosmonautics and the International Academy of Noosphere (IAN). The following year, in 1995, he achieved full membership in the Russian Academy of Natural Sciences (RANS). By 1998, he rose to the position of First Vice-president of the IAN.
Research
Aladjev is the author of more than 500 scientific works, including 90 monographs, textbooks, and articles. Particularly noteworthy is his 1972 monograph on the theory of homogeneous structures, acknowledged as one of the finest monographic publications by the Estonian Academy of Sciences. It received recognition in the Encyclopedia of Physical Science and Technology. This monograph not only unveiled numerous original findings, but also introduced fundamental terminology on cellular automata, now widely accepted in the field.
Aladjev is a member of the editorial boards of a number of scientific journals. He created the Estonian school for the mathematical theory of homogeneous structures, whose fundamental results have received international recognition and contributed to the foundations of a new division of modern mathematical cybernetics. He also created the UserLib6789 library of new software (more than 850 tools), for which he won the Smart Award network award, and a large unified MathToolBox package (more than 1420 tools) for the Maple and Mathematica systems.
As part of the Visiting Professor program, Aladjev collaborated with various universities in computer science, delivering lectures on the Maple and Mathematica systems. In recognition of his contributions, he was awarded the Gold Medal "European Quality" in May 2015 by the European Scientific & Industrial Consortium (ESIC). Aladjev's work on cellular automata gained acknowledgment, with one publication listed among the top 100 e-books in discrete mathematics by BookAuthority.
Personal life
Apart from his academic pursuits, Aladjev actively participated in the annual international sport events (Spartakiad) from 1976 to 1990, achieving success and winning several medals in athletics and volleyball.
Selected publications
Computability in homogeneous structures. V Aladyev, Izv. Akad. Nauk. Estonian SSR, Fiz.-Mat, 1972
Survey of research in the theory of homogeneous structures and their applications. V Aladyev, Mathematical Biosciences, 1974.
Mathematical Theory of Homogeneous Structures and Their Applications. Victor Aladjev. Valgus Press, Tallinn, 1980.
Computer laboratory for engineering researches. VZ Aladjev, ML Shishakov, TA Trokhova, Intern. Conf. ACA-2000.–Saint-Petersburg, Russia, 2000.
A workstation for solution of systems of differential equations. VZ Aladjev, ML Shishakov, TA Trokhova - 3rd Int., 2000.
Educational computer laboratory of the engineer. VZ Aladjev, ML Shishakov, TA Trokhova - Proc. 8th Byelorussia Mathemat. Conf, 2000.
Maple 6: Solution of the Mathematical, Statistical and Engineering–Physical Problems. V Aladjev, M Bogdevicius, Laboratory of Basic Knowledge, Moscow, 2001.
New software for mathematical package Maple of releases 6, 7 and 8. V Aladjev, M Bogdevičius, O Prentkovskis, Technika, 2002.
Classical cellular automata. Homogeneous structures. Aladjev, V., Fultus Books, 2010. ISBN 9781596822221
Classical cellular automata: Mathematical theory and applications. Aladjev, V., Scholar's Press, 2014. ISBN 9783639713459
Toolbox for the Mathematica programmers. V. Aladjev, V. Vaganov., CreateSpace Independent Publishing Platform, 2016. ISBN 9781532748837
Software Etudes in the Mathematica: Tallinn Research Group. Aladjev, Victor; Shishakov, Michael, CreateSpace Independent Publishing Platform, 2017.
Selected problems in the theory of classical cellular automata. Aladjev, Victor Zachar; Shishakov, Michael Leonid; Vaganov, Vjacheslav Alexei, Independently published, 2018.
Functional and procedural programming in Mathematica. Aladjev, V.; Shishakov, M.; Vaganov, V., TRG press, 2020.
Cellular automata, mainframes, Maple, Mathematica and computer science in Tallinn Research Group. Aladjev, V., Kindle Press, 2022. ISBN 9798447660208
References
1942 births
Applied mathematicians
Living people
Estonian mathematicians | Victor Aladjev | Mathematics | 1,311 |
11,066,704 | https://en.wikipedia.org/wiki/Electricity%20Directive%202019 | The Electricity Directive 2019 (2019/944) is a Directive in EU law concerning rules for the internal market in electricity.
Background
The first Electricity Directive 96/92/EC on common rules for the internal market in electricity aimed to create an internal market for electricity. Concrete provisions were thought to be needed to ensure a level playing field in generation and to reduce the risks of market dominance and predatory behaviour; to ensure non-discriminatory transmission and distribution tariffs, through access to the network based on third-party access rights and on the basis of tariffs published prior to their entry into force; to ensure that the rights of small and vulnerable customers are protected; and to ensure that information on energy sources for electricity generation is disclosed, with reference to sources, where available, giving information on their environmental impact.
To ensure efficient and non-discriminatory network access, the updated Directive sought to ensure distribution and transmission systems are operated through legally separate entities where vertically integrated undertakings exist. Independent management structures had to be in place between the distribution system operators, the transmission system operators, and any generation/supply companies. Legal separation does not imply a change of ownership of assets and nothing prevents similar or identical employment conditions applying throughout the whole of the vertically integrated undertakings. However, a non-discriminatory decision-making process should be ensured through organisational measures regarding the independence of the decision-makers responsible.
The 1996 Directive was updated and replaced by the Electricity Directive 2003/54/EC, followed by Directive 2009/72/EC, and then the current Electricity Directive 2019/944.
Contents
Articles 3 to 6 require that different enterprises have rights to access infrastructure of network owners on fair and transparent terms, as a way to ensure different member state networks and supplies can become integrated across the EU.
Article 8 requires that electricity or gas enterprises acquire a licence from member state authorities.
Article 35 requires that there is legal separation into different entities of owners of networks from retailers, although they can be owned by the same enterprise, to ensure transparency of accounting.
See also
EU law
Energy policy of the European Union
References
External links
Directive (EU) 2019/944 of 5 June 2019 on common rules for the internal market for electricity on EUR-Lex
EU legislation summary
Electricity in the European Union.
Energy economics
Electric power in the European Union
European Union energy law
Energy policies and initiatives of the European Union
Economy of the European Union
Politics of the European Union
European Union directives
2003 in law
2003 in the European Union | Electricity Directive 2019 | Environmental_science | 497 |
25,925,761 | https://en.wikipedia.org/wiki/N-end%20rule | The N-end rule is a rule that governs the rate of protein degradation through recognition of the N-terminal residue of proteins. The rule states that the N-terminal amino acid of a protein determines its half-life (time after which half of the total amount of a given polypeptide is degraded). The rule applies to both eukaryotic and prokaryotic organisms, but with different strength, rules, and outcome. In eukaryotic cells, these N-terminal residues are recognized and targeted by ubiquitin ligases, mediating ubiquitination thereby marking the protein for degradation. The rule was initially discovered by Alexander Varshavsky and co-workers in 1986. However, only rough estimations of protein half-life can be deduced from this 'rule', as N-terminal amino acid modification can lead to variability and anomalies, whilst amino acid impact can also change from organism to organism. Other degradation signals, known as degrons, can also be found in sequence.
Rules in different organisms
The rule may operate differently in different organisms.
Yeast
N-terminal residues - approximate half-life of proteins for S. cerevisiae
Met, Gly, Ala, Ser, Thr, Val, Pro - > 20 hrs (stabilizing)
Ile, Glu - approx. 30 min (stabilizing)
Tyr, Gln - approx. 10 min (destabilizing)
Leu, Phe, Asp, Lys - approx. 3 min (destabilizing)
Arg - approx. 2 min (destabilizing)
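The yeast figures above amount to a simple lookup by N-terminal residue. Below is a minimal sketch of such a lookup table (one-letter amino-acid codes; values are the rough classes listed above, not precise predictions):

```python
# Approximate S. cerevisiae N-end-rule half-lives keyed by the
# one-letter code of the N-terminal residue (values from the list above).
YEAST_N_END_HALF_LIFE = {
    **dict.fromkeys("MGASTVP", "> 20 h"),
    **dict.fromkeys("IE", "~30 min"),
    **dict.fromkeys("YQ", "~10 min"),
    **dict.fromkeys("LFDK", "~3 min"),
    "R": "~2 min",
}

def predicted_half_life(sequence: str) -> str:
    """Class of half-life implied by the first (N-terminal) residue."""
    return YEAST_N_END_HALF_LIFE.get(sequence[0].upper(), "not listed")

print(predicted_half_life("MKTAYIAK"))  # > 20 h
print(predicted_half_life("RKTAYIAK"))  # ~2 min
```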
Mammals
N-terminal residues - approximate half-life of proteins in mammalian systems
Bacteria
In Escherichia coli, positively-charged and some aliphatic and aromatic residues on the N-terminus, such as arginine, lysine, leucine, phenylalanine, tyrosine, and tryptophan, have short half-lives of around 2 minutes and are rapidly degraded. These residues (when located at the N-terminus of a protein) are referred to as destabilising residues. In bacteria, destabilising residues can be further divided into primary destabilising residues (leucine, phenylalanine, tyrosine, and tryptophan) and secondary destabilising residues (arginine, lysine and, in a special case, methionine). Secondary destabilising residues are modified by the attachment of a primary destabilising residue by the enzyme leucyl/phenylalanyl-tRNA-protein transferase. All other amino acids, when located at the N-terminus of a protein, are referred to as stabilising residues and have half-lives of more than 10 hours. Proteins bearing an N-terminal primary destabilising residue are specifically recognised by the bacterial N-recognin (recognition component) ClpS. ClpS is a specific adaptor protein for the ATP-dependent AAA+ protease ClpAP, and hence ClpS delivers N-degron substrates to ClpAP for degradation.
A complicating issue is that the first residue of bacterial proteins is normally expressed with an N-terminal formylmethionine (f-Met). The formyl group of this methionine is quickly removed, and the methionine itself is then removed by methionyl aminopeptidase. The removal of the methionine is more efficient when the second residue is small and uncharged (for example, alanine), but inefficient when it is bulky and charged, such as arginine. Once the f-Met is removed, the second residue becomes the N-terminal residue and is subject to the N-end rule. Residues with middle-sized side-chains, such as leucine, as the second residue may therefore confer a short half-life.
Chloroplasts
There are several reasons why it is possible that the N-end rule functions in the chloroplast organelle of plant cells as well. The first piece of evidence comes from the endosymbiotic theory which encompasses the idea that chloroplasts are derived from cyanobacteria, photosynthetic organisms that can convert light into energy. It is thought that the chloroplast developed from an endosymbiosis between a eukaryotic cell and a cyanobacterium, because chloroplasts share several features with the bacterium, including photosynthetic capabilities. The bacterial N-end rule is already well documented; it involves the Clp protease system which consists of the adaptor protein ClpS and the ClpA/P chaperone and protease core. A similar Clp system is present in the chloroplast stroma, suggesting that the N-end rule might function similarly in chloroplasts and bacteria.
Additionally, a 2013 study in Arabidopsis thaliana revealed the protein ClpS1, a possible plastid homolog of the bacterial ClpS recognin. ClpS is a bacterial adaptor protein that is responsible for recognizing protein substrates via their N-terminal residues and delivering them to a protease core for degradation. This study suggests that ClpS1 is functionally similar to ClpS, also playing a role in substrate recognition via specific N-terminal residues (degrons) like its bacterial counterpart. It is posited that upon recognition, ClpS1 binds to these substrate proteins and brings them to the ClpC chaperone of the protease core machinery to initiate degradation.
In another study, Arabidopsis thaliana stromal proteins were analyzed to determine the relative abundance of specific N-terminal residues. This study revealed that Alanine, Serine, Threonine, and Valine were the most abundant N-terminal residues, while Leucine, Phenylalanine, Tryptophan, and Tyrosine (all triggers for degradation in bacteria) were among the residues that were rarely detected.
Furthermore, an affinity assay using ClpS1 and N-terminal residues was performed to determine whether ClpS1 did indeed have specific binding partners. This study revealed that Phenylalanine and Tryptophan bind specifically to ClpS1, making them prime candidates for N-degrons in chloroplasts.
Further research is currently being conducted to confirm whether the N-end rule operates in chloroplasts.
Apicoplast
An apicoplast is a derived non-photosynthetic plastid found in most Apicomplexa, including Toxoplasma gondii, Plasmodium falciparum and other Plasmodium spp. (parasites causing malaria). Similar to plants, several apicomplexan species, including Plasmodium falciparum, contain all of the necessary components required for an apicoplast-localized Clp protease, including a potential homolog of the bacterial ClpS N-recognin. In vitro data demonstrate that Plasmodium falciparum ClpS is able to recognize a variety of N-terminal primary destabilizing residues, not only the classic bacterial primary destabilizing residues (leucine, phenylalanine, tyrosine and tryptophan) but also N-terminal isoleucine, and hence exhibits broad specificity (in comparison to its bacterial counterpart).
References
Protein biosynthesis | N-end rule | Chemistry | 1,574 |
9,129,848 | https://en.wikipedia.org/wiki/CRT%20%28genetics%29 | CRT is the gene cluster responsible for the biosynthesis of carotenoids. Those genes are found in eubacteria, in algae and are cryptic in Streptomyces griseus.
Carotenoid synthesis is probably present in the common ancestor of Bacteria and Archaea; the phytoene synthase gene crtB is universal among carotenoid synthesizers. Among eukaryotes, plants and algae inherited the cyanobacterial pathway via the endosymbiosis that gave rise to their plastids, while fungi retain an archaeal-like pathway. Among all these synthesizers, several possible selections and arrangements of biosynthetic genes exist, consisting of one gene cluster, several clusters, or no clustering at all.
Role of CRT genes in carotenoid biosynthesis
The CRT gene cluster consists of twenty-five genes, including crtA, crtB, crtC, crtD, crtE, crtF, crtG, crtH, crtI, crtO, crtP, crtR, crtT, crtU, crtV, crtY, and crtZ. These genes play roles in various stages of astaxanthin biosynthesis and carotenoid biosynthesis.
crtE encodes an enzyme known as geranylgeranyl diphosphate synthase, which catalyzes the condensation of isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP) into geranylgeranyl diphosphate (GGDP). Two GGDP molecules are subsequently converted into a single phytoene molecule by phytoene synthase, an enzyme encoded by crtB, known as PSY in Chlorophyta. The following desaturation of phytoene into ζ-carotene is catalyzed by the phytoene desaturase encoded by crtI, crtP, and/or PDS. ζ-carotene can also be obtained from phytoene using the carotene 2,4-desaturase enzyme (crtD). Depending on the species, varying carotenoids are accumulated following these steps.
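As a reading aid, the early steps just described can be summarized as a small table in code; the gene-to-reaction pairings are taken from the paragraph above, and the representation itself is purely illustrative:

```python
# Early carotenoid pathway steps as described above: (gene, enzyme,
# substrate, product). Purely a summary data structure, not a model.
EARLY_CRT_STEPS = [
    ("crtE", "geranylgeranyl diphosphate synthase", "IPP + DMAPP", "GGDP"),
    ("crtB/PSY", "phytoene synthase", "2 GGDP", "phytoene"),
    ("crtI/crtP/PDS", "phytoene desaturase", "phytoene", "ζ-carotene"),
]

for gene, enzyme, substrate, product in EARLY_CRT_STEPS:
    print(f"{gene:14s} {enzyme:36s} {substrate:11s} -> {product}")
```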
Spirilloxanthin
Spirilloxanthin is obtained from lycopene following a hydration, desaturation, and methylation reaction. These reactions are catalyzed by carotene hydratase (crtC), carotene 3,4- desaturase (crtD), and carotene methyltransferase (crtF), respectively.
Canthaxanthin
Lycopene is cyclized through two enzymes, lycopene cyclase and β-C-4-oxygenase/β-carotene ketolase, encoded by crtY (in Chlorophyta)/crtL (in cyanobacteria) and crtW, respectively. crtY cyclizes lycopene into β-carotene, which is subsequently oxygenated by crtW to form canthaxanthin.
Zeaxanthin and lutein
Zeaxanthin and lutein are obtained through hydroxylation of α- and β-carotene. Zeaxanthin is produced by β-carotene hydroxylase, an enzyme encoded by the crtR gene (in cyanobacteria) and the crtZ gene (in Chlorophyta).
Other
Zeaxanthin can be further processed to obtain zeaxanthin-diglucoside by Zeaxanthin glucosyl transferase (crtX).
Echinenone is obtained from β-carotene through the catalyzing enzyme β-C-4-oxygenase/β-carotene ketolase (crtO). CrtO, also known as bkt2 in Chlorophyta, is also involved in the conversion of other carotenoids into canthaxanthin, 3-hydroxyechinenone, 3'-hydroxyechinenone, adonixanthin, and astaxanthin. CrtZ, similarly to crtO, is also capable of converting carotenoids into β-cryptoxanthin, zeaxanthin, 3-hydroxyechinenone, 3'-hydroxyechinenone, astaxanthin, adonixanthin, and adonirubin.
crtH catalyzes the isomerization of cis-carotenes into trans-carotenes through carotenoid isomerase.
crtG encodes carotenoid 2,2'-β-hydroxylase; this enzyme leads to the formation of 2-hydroxylated and 2,2′-dihydroxylated products in E. coli.
Phylogeny
Previous studies have indicated through phylogenetic analysis that evolutionary patterns of crt genes are characterized by horizontal gene transfer and gene duplication events.
Horizontal gene transfer has been hypothesized to have occurred between cyanobacteria and Chlorophyta, as similarities in these genes have been found across taxa; some cyanobacteria, however, retained their original genes. Horizontal gene transfer among species occurred with high probability in genes involved in the initial steps of the carotenoid biosynthesis pathway, such as crtE, crtB, crtY, crtL, PSY, and crtQ. These genes are often well conserved, while others involved in the later stages of carotenoid biosynthesis, such as crtW and crtO, are less conserved. The less conserved nature of these genes allowed for the expansion of the carotenoid biosynthesis pathway and its end products. Amino acid variations within crt genes have evolved due to purifying and adaptive selection.
Gene duplications are suspected to have occurred due to the presence of multiple copies of crt clusters or genes within a single species. An example of this can be seen in the Bradyrhizobium ORS278 strain, where the initial crt genes can be found (excluding the crtC, crtD, and crtF genes) as well as a second crt gene cluster. This second gene cluster has been shown to also be involved in carotenoid biosynthesis using its crt paralogs.
References
Carotenoids
Genetics | CRT (genetics) | Biology | 1,362 |
42,365,452 | https://en.wikipedia.org/wiki/Snow%20in%20Louisiana | Snow in Louisiana is a relatively rare but not unheard of sight because of Louisiana’s subtropical climate. For snow to push into Louisiana, extreme weather conditions for the area must be present, usually a low-pressure system coupled with unusually low temperatures. Average snowfall in Louisiana is approximately per year, a low figure rivaled only by the states of Florida and Hawaii.
According to the National Weather Service, measurable snowfall occurs on average only once every other year in Northwest Louisiana; many consecutive years may pass with no measurable snowfall. The heaviest snowstorm ever recorded in the state was in the Shreveport area, where of snow fell on December 21 and 22, 1929; half an inch remained on the ground on December 25, making this the only Christmas Day on record with snow on the ground. In January 1948, 12.4 inches of snow was measured, the greatest monthly amount on record. Occasional ice and sleet storms do considerable damage to trees and power and telephone lines, as well as making travel very difficult.
Notable events
1895: As part of the Great Freeze, a large snowstorm spanning from Texas to Alabama left New Orleans with approximately of snow, Lake Charles with of snow, and Rayne with of snow. However, these figures are unconfirmed.
1899: With the Great Blizzard of 1899, snowfall in New Orleans reached with strong winds and temperatures below .
2000: This snow was nationally televised as the 2000 Independence Bowl was being played on December 31, 2000, in Shreveport. The game was later referred to as "The Snow Bowl", as a snowstorm (rare for the Shreveport area) began just before kickoff, blanketing the field in powder, and continued throughout the entire game.
2004: The 2004 Christmas Eve snowstorm swept across southern Texas and Louisiana, leaving unprecedented amounts of snow in areas that had not seen snow in 15 to 120 years.
2008: It snowed in and around semi-tropical New Orleans on December 11, 2008.
2014: The early 2014 North American cold wave that blew through the eastern portion of the continental United States produced record low temperatures and brought freezing snow and sleet to Louisiana.
2017: Early in the morning on December 8, 2017, a winter storm dropped snow on much of south Louisiana. Snow continued falling throughout the day, leaving a depth of of snow in some places. Most schools across Louisiana closed due to the snow.
2021: Significant snow and ice was reported nearly statewide on February 14–15, 2021, with the February 13–17, 2021, North American winter storm and again on February 17 from the February 15–20, 2021, North American winter storm.
2025: The 2025 Gulf Coast blizzard swept through Louisiana on January 21, producing snow and ice throughout much of the state. Snowfall totaled near Chalmette, and in nearby New Orleans. Schools across the state closed, and a Blizzard Warning was issued for the first time in Louisiana’s history due to the heavy snowfall and winds exceeding .
State preparedness
Because of the scarcity of freezing temperatures in Louisiana, many citizens of the region are often left unprepared to handle what might be considered a storm of little consequence in more northern states. The region has developed a system of road and school closures with only minimal snowfall, as most drivers in the area are unprepared to deal with slick, frozen roads. In 2014, Gov. Bobby Jindal invoked the Louisiana Homeland Security and Emergency Assistance and Disaster Act in advance of the weather and assembled teams to assist in preparation and recovery.
Louisiana's environment
The state's typically humid subtropical climate rarely combines precipitation with freezing temperatures. The low latitudes and proximity to the Gulf of Mexico help maintain this climate, particularly closer to the coast. The normally extreme summers are rarely countered by cold winters, and snowfall is low in both intensity and frequency. The southern portions of the state typically have two seasons: a wet season from April to October and a dry season from November to March. The cooler season typically brings very little precipitation, further limiting snowfall. Average winter temperature normals in southern Louisiana vary from the 40s to the 60s Fahrenheit. Natural disasters such as hurricanes are far more common, and such an ecosystem is ill-prepared for snow, particularly the seafood supply on which Louisiana relies for much of its revenue. Little research has been done directly linking effects on Louisiana's ecosystem to snow conditions. However, the jet stream that created the 2014 North American cold wave has been linked to global warming, and resultant cold fronts have been linked to salt water intrusion in Louisiana's Atchafalaya Bay. One of Louisiana's most famous animals, the alligator, has proved versatile in adapting to cold weather conditions by burrowing in "alligator holes", which they usually use for waiting out a drought.
See also
Snow in Florida
References
External links
Louisiana Office of State Climatology. losc.lsu.edu
United Nations Environment Programme unep.org
Snow
Louisiana
Weather-related lists | Snow in Louisiana | Physics | 1,030 |
23,320,406 | https://en.wikipedia.org/wiki/List%20of%20microscopy%20visualization%20systems | This is a list of software systems that are used for visualizing microscopy data.
For each software system, the table below indicates which type of data can be displayed: EM = Electron microscopy; MG = Molecular graphics; Optical = Optical microscopy.
See also
Biological data visualization
List of molecular graphics systems
References
Electron microscopy
Microscopy
Molecular modelling software | List of microscopy visualization systems | Chemistry | 68 |
63,192,624 | https://en.wikipedia.org/wiki/Astranis | Astranis Space Technologies Corp. is an American company specializing in geostationary communications satellites. It is headquartered in San Francisco, California.
In 2018, Astranis launched DemoSat-2, a prototype 3U CubeSat. The launch aimed to test software-defined radio (SDR) technology for future larger communications satellites.
The company publicly disclosed its projects in March 2018, following a funding round that was aimed at the development of geostationary communications satellites.
In January 2019, Astranis initiated a commercial program with Pacific Dataport, Inc. to increase the satellite internet capacity in Alaska. A 350 kg satellite was launched on April 30, 2023, as part of a multi-satellite payload.
Astranis was part of the Winter 2016 cohort of the Y Combinator accelerator program and has raised over $350 million in venture funding from firms such as BlackRock, Venrock, and Andreessen Horowitz.
History
Demonstration satellite
On January 12, 2018, Astranis launched its first satellite, "DemoSat 2", using an Indian PSLV-XL rocket. The satellite was a 3U cubesat measuring 10 cm x 10 cm x 30 cm and weighing less than 3 kg. It carried a prototype of the company's software-defined radio.
Geostationary satellites
Block 1
In 2019, Astranis leased its first MicroGEO spacecraft to Pacific Dataport, Inc., a subsidiary of Microcom. The satellite, named Arcturus, initially had an anticipated launch date in early 2022, which was later delayed to April 2023. After the launch, the company confirmed successful communication with the satellite and hardware deployment. Subsequent tests showed the spacecraft could deliver up to 8.5 Gbit/s, compared to its design specification of 7.5 Gbit/s.
In July 2023, Astranis reported a malfunction in an externally supplied solar array drive assembly on Arcturus, which affected the spacecraft's ability to provide internet service. According to Astranis CEO John Gedmark, no hardware built by Astranis failed.
Block 2
In April 2022, Astranis signed a launch contract with SpaceX for their "Block 2" MicroGEO spacecraft. The company had previously initiated component orders for these spacecraft, with an initial aim to complete them by the end of 2022.
Block 3
Block 3, consisting of five satellites, was originally planned to launch in mid-2024 but is now scheduled for 2025. Customers include Orbits Corp of the Philippines, Thaicom of Thailand, Orbith of Argentina, and Apco Networks of Mexico.
Future
A replacement for Arcturus is scheduled for early 2025. Astranis CEO John Gedmark stated in April 2022 that the company aims to have over 100 satellites in active service by 2030.
Spacecraft
References
Space
Communications satellite operators
Spacecraft manufacturers
Satellite operators | Astranis | Physics,Mathematics | 588 |
7,857,011 | https://en.wikipedia.org/wiki/Marine%20transfer%20operations | Marine Transfer Operations are conducted at many ports around the world between tanker ships, barges, and marine terminals. Specifically, once the marine vessel is secure at the dock a loading arm or transfer hose is connected between a valve header on the dock and the manifold header on the vessel. A marine transfer of petroleum products cannot be conducted unless it is supervised by a person-in-charge (PIC) on the vessel who is seafarer in the Merchant Marine and another person-in-charge on the dock.
Person-in-charge
The person-in-charge on the dock is called a Loading Master-PIC, and the person-in-charge on the barge will be the Tankerman-PIC. The person-in-charge on a tanker ship will be the deck officer who monitors the transfer of product in the cargo control room. All persons-in-charge must have special training in order to obtain the proper credentials, such as licensing and endorsement on their merchant mariner documents.
Marine surveyor
Loading Masters work closely with the marine surveyor in agreeing to the sequence of the transfer, such as whether any product sampling will take place prior to commencement, whether a line displacement will occur, and whether the final stop at completion will be a shore stop or a draft stop on the vessel. The marine surveyor gauges the vessel's tanks and shore tanks to ensure the correct amount of product is transferred. Additionally, the surveyor or inspector will obtain product samples from the marine vessel and shore tank for laboratory analysis to ensure that the product meets all specifications of purity.
Regulations
Transfer operations and the commencement of a transfer are highly regulated throughout the world, in consideration of the environment and the potential for water pollution if petroleum product is released into the water during the transfer. Federal, state, and local laws must be observed during marine transfer operations.
Maritime security (USCG), occupational safety, and health regulations must be adhered to, in addition to environmental regulations, during marine transfer operations. These regulations are enforced by port state control organizations, such as the United States Coast Guard in the United States.
Marine transfer operators
A marine transfer operation occurs between three main stakeholders which includes the Loading Master-PIC, Vessel Person-In -Charge PIC, and marine surveyor or inspector. These individuals communicate prior to the transfer agreeing on the sequence of events that will occur before, during, and after the transfer. During the course of the transfer the Loading Master is in continuous two way radio contact with the vessel Person-In-Charge and standing by to stop the transfer immediately if any problems develop such as leaks at the transfer hose or loading arm.
See also
Barge
Tanker
Oil tanker
Marine loading arm
External links
Marine Transfer Operations
Barges News
Google Group Marine Transfer Operations
Water transport
Barges
Water pollution | Marine transfer operations | Chemistry,Environmental_science | 548 |
26,107,799 | https://en.wikipedia.org/wiki/Dextrose%20equivalent | Dextrose equivalent (DE) is a measure of the amount of reducing sugars present in a sugar product, expressed as a percentage on a dry basis relative to dextrose. The dextrose equivalent gives an indication of the average degree of polymerisation (DP) for starch sugars. As a rule of thumb, DE × DP = 120.
In all glucose polymers, from the native starch to glucose syrup, the molecular chain ends with a reducing sugar, containing a free aldehyde in its linear form. As the starch is hydrolysed, the molecules become shorter and more reducing sugars are present. Therefore, the dextrose equivalent describes the degree of conversion of starch to dextrose. The standard method of determining the dextrose equivalent is the Lane-Eynon titration, based on the reduction of copper(II) sulfate in an alkaline tartrate solution, an application of Fehling's test.
Examples:
A maltodextrin with a DE of 10 would have 10% of the reducing power of dextrose which has a DE of 100.
Maltose, a disaccharide made of two glucose (dextrose) molecules, has a DE of 52, after correcting for the water lost from the molecular weight when the two molecules are combined. Glucose (dextrose) has a molecular mass of 180, while water has a molecular mass of 18. For every two glucose monomers that bind, one water molecule is removed.
Therefore, the molecular mass of a glucose polymer can be calculated using the formula 180n − 18(n − 1), where n is the DP (degree of polymerisation) of the glucose polymer. The DE can then be calculated as 100 × (180 / molecular mass of the glucose polymer). In this example the DE is calculated as 100 × (180 / (180 × 2 − 18 × 1)) ≈ 52.
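A minimal sketch of this calculation in code (the function name is an illustrative choice):

```python
# DE of a linear glucose polymer of degree of polymerisation n:
# MW(polymer) = 180n - 18(n - 1); DE = 100 * 180 / MW(polymer).
GLUCOSE_MW = 180.0
WATER_MW = 18.0

def dextrose_equivalent(n: int) -> float:
    polymer_mw = GLUCOSE_MW * n - WATER_MW * (n - 1)
    return 100.0 * GLUCOSE_MW / polymer_mw

print(dextrose_equivalent(1))   # 100.0 (dextrose)
print(dextrose_equivalent(2))   # ~52.6 (maltose, quoted above as 52)
```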
Sucrose actually has a DE of zero even though it is a disaccharide, because both reducing groups of the monosaccharides that make it are connected, so there are no remaining reducing groups.
Because different reducing sugars (e.g. fructose and glucose) have different sweetness, it is incorrect to assume that there is any direct relationship between dextrose equivalent and sweetness.
References
Starch
Food science
Units of chemical measurement | Dextrose equivalent | Chemistry,Mathematics | 496 |
2,977,969 | https://en.wikipedia.org/wiki/Cycloamylose | Cycloamyloses are cyclic α-1,4 linked glucans comprising dozens or hundreds of glucose units. Chemically they are similar to the much smaller cyclodextrins, which are typically composed of 6, 7 or 8 glucose units.
Discovery
Cycloamyloses were discovered as a result of studies of the function of 4-α-glucanotransferase, also known as disproportionating enzyme or D-enzyme (EC 2.4.1.25) isolated from potato.
Synthesis
Upon incubation of D-enzyme with high molecular weight amylose, a product was obtained with decreased ability to form a blue complex with iodine, without reducing or non-reducing ends, and resistant to hydrolysis by glucoamylase (an exoamylase). Takaha and Smith deduced that the product was a cyclic polymer, which they confirmed by mass spectrometry and acid hydrolysis, and showed that it comprised between 17 and several hundred glucose units. It was subsequently shown that D-enzyme could create complex cycloglucans from amylopectin. Similar 4-α-glucanotransferases from bacteria and other organisms have also been shown to produce cycloglucans upon incubation with amylose or amylopectin.
Structure
While the structures of cyclodextrins are planar circles, the structure of cycloamyloses with 10 to 14 glucose units were determined to be circular with strain-induced band-flips and kinks. In contrast the structure of a larger cycloamylose with 26 glucose units was determined to comprise two short left-handed V-amylose helices in antiparallel arrangement.
Applications
Cycloamyloses contain cavities in the helices which are capable of accommodating guest molecules, which suggested applications in chemical technologies. Cycloamylose is used in artificial chaperone technology for the refolding of denatured proteins. Cycloglucans have physicochemical properties that make them useful in food and manufacturing.
References
Polysaccharides
Starch
Macrocycles | Cycloamylose | Chemistry | 458 |
9,712,439 | https://en.wikipedia.org/wiki/Filled%20Julia%20set | The filled-in Julia set of a polynomial is a Julia set and its interior, non-escaping set.
Formal definition
The filled-in Julia set $K(f)$ of a polynomial $f$ is defined as the set of all points $z$ of the dynamical plane that have bounded orbit with respect to $f$:
$K(f) \stackrel{\mathrm{def}}{=} \{ z \in \mathbb{C} : f^{(n)}(z) \not\to \infty \text{ as } n \to \infty \}$
where:
$\mathbb{C}$ is the set of complex numbers
$f^{(n)}$ is the $n$-fold composition of $f$ with itself = iteration of function $f$
Relation to the Fatou set
The filled-in Julia set is the (absolute) complement of the attractive basin of infinity.
The attractive basin of infinity is one of the components of the Fatou set.
In other words, the filled-in Julia set is the complement of the unbounded Fatou component:
$K(f) = F_\infty^C.$
Relation between Julia, filled-in Julia set and attractive basin of infinity
The Julia set is the common boundary of the filled-in Julia set and the attractive basin of infinity:
$J(f) = \partial K(f) = \partial A_f(\infty)$
where $A_f(\infty)$ denotes the attractive basin of infinity = exterior of the filled-in Julia set = set of escaping points for $f$:
$A_f(\infty) = \{ z \in \mathbb{C} : f^{(n)}(z) \to \infty \text{ as } n \to \infty \}.$
If the filled-in Julia set has no interior then the Julia set coincides with the filled-in Julia set. This happens when all the critical points of are pre-periodic. Such critical points are often called Misiurewicz points.
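These definitions translate directly into the standard escape-time computation. Below is a minimal sketch for $f_c(z) = z^2 + c$: a grid point is treated as belonging to $K(f_c)$ if its orbit has not left a fixed escape radius after a fixed number of iterations. The particular $c$, grid bounds, and iteration cap are illustrative choices.

```python
import numpy as np

def filled_julia(c: complex, n: int = 600, max_iter: int = 200, r: float = 2.0):
    """Boolean mask approximating K(f_c) on an n-by-n grid."""
    xs = np.linspace(-1.6, 1.6, n)
    z = xs[None, :] + 1j * xs[:, None]
    alive = np.ones(z.shape, dtype=bool)    # orbits not yet escaped
    for _ in range(max_iter):
        z[alive] = z[alive] ** 2 + c        # iterate only bounded orbits
        alive &= np.abs(z) <= r             # freeze points once they escape
    return alive                            # True ~ inside the filled Julia set

mask = filled_julia(-0.123 + 0.745j)        # parameter often used for the "Douady rabbit"
print(mask.mean())                          # fraction of the grid inside K
```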
Spine
The most studied polynomials are probably those of the form $f_c(z) = z^2 + c$, which are often denoted by $f_c$, where $c$ is any complex number. In this case, the spine $S_c$ of the filled Julia set $K$ is defined as the arc between the $\beta$-fixed point and $-\beta$,
with such properties:
the spine lies inside $K$ (this makes sense when $K$ is connected and full),
the spine is invariant under 180-degree rotation,
the spine is a finite topological tree,
the critical point $z_{cr} = 0$ always belongs to the spine,
the $\beta$-fixed point is the landing point of the external ray of angle zero,
$-\beta$ is the landing point of the external ray of angle $1/2$.
Algorithms for constructing the spine:
detailed version is described by A. Douady
Simplified version of algorithm:
connect $-\beta$ and $\beta$ within $K$ by an arc,
when $K$ has empty interior then the arc is unique,
otherwise take the shortest way that contains $0$.
The curve $R \stackrel{\mathrm{def}}{=} R_{1/2} \cup S_c \cup R_0$ (the spine together with the external rays of angles $1/2$ and $0$)
divides the dynamical plane into two components.
Images
Names
airplane
Douady rabbit
dragon
basilica or San Marco fractal or San Marco dragon
cauliflower
dendrite
Siegel disc
Notes
References
Peitgen, Heinz-Otto; Richter, P.H.: The Beauty of Fractals: Images of Complex Dynamical Systems. Springer-Verlag, 1986.
Branner, Bodil: Holomorphic dynamical systems in the complex plane. Department of Mathematics, Technical University of Denmark, MAT-Report no. 1996-42.
Fractals
Limit sets
Complex dynamics | Filled Julia set | Mathematics | 508 |
5,361,701 | https://en.wikipedia.org/wiki/XLD%20agar | Xylose lysine deoxycholate agar (XLD agar) is a selective growth medium used in the isolation of Salmonella and Shigella species from clinical samples and from food. The agar was developed by Welton Taylor in 1965. It has a pH of approximately 7.4, leaving it with a bright pink or red appearance due to the indicator phenol red. Sugar fermentation lowers the pH and the phenol red indicator registers this by changing to yellow. Most gut bacteria, including Salmonella, can ferment the sugar xylose to produce acid; Shigella colonies cannot do this and therefore remain red. After exhausting the xylose supply Salmonella colonies will decarboxylate lysine, increasing the pH once again to alkaline and mimicking the red Shigella colonies. Salmonellae metabolise thiosulfate to produce hydrogen sulfide, which leads to the formation of colonies with black centers and allows them to be differentiated from the similarly coloured Shigella colonies.
Other enterobacteria such as E. coli will ferment the lactose present in the medium to an extent that will prevent pH reversion by decarboxylation and acidify the medium, turning it yellow.
Salmonella species: red colonies, some with black centers. The agar itself will turn red due to the presence of Salmonella type colonies.
Shigella species: red colonies.
Coliforms: yellow to orange colonies.
Pseudomonas aeruginosa: pink, flat, rough colonies. This type of colony can be easily mistaken for Salmonella due to the color similarities.
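The colony descriptions above follow from three biochemical traits: sugar fermentation, lysine decarboxylation, and H2S production. Below is a hypothetical sketch of that decision logic; real identification requires confirmatory testing, and species can deviate from these rules.

```python
# Expected XLD colony appearance from the biochemistry described above.
def xld_colony(ferments_sugars: bool, decarboxylates_lysine: bool,
               produces_h2s: bool) -> str:
    if not ferments_sugars:
        return "red colonies (e.g. Shigella)"      # no acid, pH stays alkaline
    if decarboxylates_lysine:
        # acid from xylose is reverted to alkaline once xylose is exhausted
        centre = " with black centers" if produces_h2s else ""
        return f"red colonies{centre} (e.g. Salmonella)"
    return "yellow colonies (e.g. coliforms)"      # lactose/sucrose acidify

print(xld_colony(True, True, True))    # Salmonella-like
print(xld_colony(True, False, False))  # coliform-like
```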
XLD agar typically contains xylose, lactose, sucrose, L-lysine, sodium deoxycholate, sodium thiosulfate, ferric ammonium citrate, sodium chloride, yeast extract, phenol red, and agar.
See also
Agar plate
MRS agar
R2a agar
References
External links
Biochemistry detection reactions
Microbiological media | XLD agar | Chemistry,Biology | 372 |
11,720,339 | https://en.wikipedia.org/wiki/Cercospora%20gerberae | Cercospora gerberae is a fungal plant pathogen.
References
gerberae
Fungal plant pathogens and diseases
Fungus species | Cercospora gerberae | Biology | 27 |
12,010,723 | https://en.wikipedia.org/wiki/Ra%C3%ABlian%20beliefs%20and%20practices | Raëlian beliefs and practices are the concepts and principles of Raëlism, a new religious movement and UFO religion founded in 1974 by Claude Vorilhon, an auto racing journalist who changed his name to "Raël". The followers of the International Raëlian Movement believe in an advanced species of extraterrestrial aliens called Elohim who created life on Earth. Raëlians are individualists who believe in sexual self-determination. As advocates of the universal ethic and world peace, they believe the world would be better if geniuses had an exclusive right to govern in what Rael terms Geniocracy. As believers of life in outer space, they hope that human scientists will follow the path of the Elohim by achieving space travel through the cosmos and creating life on other planets. As believers in the resurrection of Jesus through a scientific cloning process (which includes memory transfer) by the Elohim, they encourage scientific research to extend life through cloning; however, critics outside are doubtful of its possibility.
Active followers of Raëlianism have exhibited their sex-positive feminism and pacifism through outdoor contacts such as parades. The major initiation rite in the Raëlian Church is the baptism or Transmission of the Cellular Plan and is enacted by upper-level members in the Raëlian clergy known as guides.
Beliefs
Structure of the Universe
Raël says that "everything is in everything". He says that inside the atoms of living things are living things that are made of atoms, which themselves contain living things that are made of atoms, and so on, to the infinitely small. The universe itself is contained in an atom inside another universe, and so on, to the infinitely large. Because of the difference of mass, the activity of life inside a living thing's atoms would undergo many millennia before enough time passes for that living thing to take a single step. Raëlians believe that the universe is infinite, and thus lacks a center. Because of this, one could not imagine where an ethereal soul would go, given the universe's infinite nature. They believe that infinity exists in time as well as in space, for all levels of life. Some critics argue that it is not rationally possible to extend the chain to infinity, and that the chain must ultimately lead somewhere.
Intelligent Design
Creation of life on Earth by extraterrestrials
In his book The Message Given to Me by Extraterrestrials (republished in 2006 as Intelligent Design: Message from the Designers), Vorilhon claims that on 13 December 1973, he found a spacecraft shaped like a flattened bell that landed inside Puy de Lassolas, a volcano near the capital city of Auvergne. A 25,000-year-old human-like extraterrestrial inside the spacecraft named Yahweh said that Elohim was the name that primitive people of Earth called members of his extraterrestrial race, who were seen as "those who came from the sky". Yahweh explained that Earth was originally void of land, but the Elohim came, broke apart the clouds, exposed the seas to sunlight, built a continent, and synthesized a global ecosystem. Solar astronomy, terraformation, nanotechnology, and genetic engineering allowed the Elohim to adapt life to Earth's thermal and chemical makeup.
Yahweh gave materialistic explanations of the Garden of Eden, a large laboratory that was based on an artificially constructed continent; Noah's Ark, a spaceship that preserved DNA that was used to resurrect animals through cloning; the Tower of Babel, a rocket that was supposed to reach the creators' planet; and the Great Flood, the byproduct of a nuclear missile explosion that the Elohim sent. After tidal wave floods following the explosions receded, Elohim scattered the Israelites and had them speak the language of other tribes.
According to Vorilhon, Elohim contacted about forty people to act as their prophets on Earth, including Moses, Elijah, Ezekiel, Buddha, John the Baptist, Jesus, Muhammad, and Joseph Smith. The religions thought to be of Elohimic origin include Judaism, Buddhism, Christianity, Islam, and Mormonism.
According to Vorilhon, multiple religious texts indicate that the Elohim would return at the age of Apocalypse or Revelation (unveiling of the truth). Humans from another world would appear to drop down from the sky, and meet in the embassy that they have asked Raël to build for them, and share their advanced scientific knowledge with humanity. Thus, one of the stated main goals of the Raëlian movement is to inform as many people as possible about this extraterrestrial race.
The controversy surrounding the origins of Raelian beliefs centers on the writings of several authors in the late 1960s. Jean Sendy, a French writer, translator, and author of books on the esoteric and UFOs wrote several novels detailing the creation of Earth by extraterrestrials. One of the best known researchers in this field is Erich von Däniken, the 'father' of the Ancient Astronauts theory, which postulates that Earth might have been visited by extraterrestrials in the remote past.
With the publication of Chariots of the Gods? in 1968, Erich von Däniken introduced the intervention theory to the general public. Von Däniken wrote that the technologies and religions of ancient civilizations were granted by extraterrestrials worshiped as gods. He argued that only extraterrestrial intervention can explain the higher technological knowledge presumed to be essential for the production of ancient artifacts such as the Egyptian pyramids, Stonehenge, and the Moai of Easter Island. Humans in ancient times considered this extraterrestrial high technology to be supernatural and the aliens themselves to be 'gods'. One can find direct parallels to the messages that Vorilhon claimed to have received and written about in his books. Marie-Hélène Parent, a former Raëlian guide (priest), describes Sendy and Vorilhon meeting several times for drinks and conversation throughout 1973 and 1974, prior to Vorilhon's claimed extraterrestrial encounter.
Humanity's chance of creating life on other planets
Raëlians believe that humanity would be able to create life on other planets only if it is peaceful enough to stop war. If done, humanity could travel the distances between stars and create life on another planet. Progress in terraformation, molecular biology, and cloning would enable these teams to create continents and life from scratch. Progress in social engineering would ensure that this creation would have a better chance of both surviving as well as having the potential to understand its creators. Research on how globalization would occur on another planet would allow scientists to decide what traces of their origin should be left behind so that their role in life creation would someday be revealed. The progress achieved by the science teams would ultimately sustain a perpetual chain of life.
A coming judgement
Raëlians do not believe in reincarnation as described by mystical writings, because they do not believe that an ethereal soul exists free of physical confinement. Instead, Raëlians think that advanced supercomputers of the Elohim are currently recording the memories and DNA of human beings. When the Elohim release this information for the coming resurrection, people will be brought back from the dead and judgments upon them will be realized based on actions in their past life. People excluded from physical re-creation would include those who achieved nothing positive but were not evil. Vorilhon expressed an interest in cloning Hitler for war trials and retroactive punishment. Raël also mentioned cloning as the solution to terrorism by suicide attacks, as the perpetrators would not be able to escape punishment by killing themselves if the Elohim recreated them after their attacks.
Practices
Initiation of new members
The major initiation rite in the Raëlian Church is the "baptism" or "transmission of the cellular plan". That rite is enacted by upper-level members of the Raëlian clergy who are called "guides". Canadian sociologist Susan J. Palmer says that in 1979, Raël introduced the "Act of Apostasy" as an obligation for people who are preparing for their Raëlian baptism. CTV Television Network states that apostasy from other religions is required for new Raëlian members. Joining the Raëlian Church through transmission of the cellular plan happens only on certain days of the year. There are four of such days, all of which mark anniversaries in the Raëlian calendar.
The Raëlian baptism is known as "transmission of the cellular plan", where "cellular" refers to the organic cells of the body, and the "plan" refers to the genetic makeup of the individual. That Raëlian baptism involves a guide member laying water onto the forehead of the new member. That practice began on "the first Sunday in April" of 1976, when Raël baptized 40 Raëlians. Raëlians believe that their genetic information is recorded by a remote computer, and would become recognized during their final hour, when they will be judged by the extraterrestrial Elohim.
There is continuing debate on whether Raëlians can be identified as a cult. The government of France classifies the Raelian Movement as a "secte" (French word for cult). However, according to Glenn McGee, the associate director of the Center for Bioethics at the University of Virginia, part of the sect is a cult, while the other part is a commercial website that collects large sums of money from people who are interested in human cloning. The Bureau of Democracy, Human Rights, and Labor of the United States Department of State, and sociologist Susan J. Palmer, have classified the International Raëlian Movement as a religion.
Activism
Raëlians routinely advocate sex-positive feminism and genetically modified food, and actively protest against wars as well as the Catholic Church. For example, a photographer of the Associated Press snapped a picture of half-naked Raëlian women wearing pasties as part of an anti-war demonstration in Seoul, Korea. A snapshot by Agence France-Presse revealed Raëlians in white alien costumes with signs bearing the message "NO WAR ... ET wants Peace, too!". On 6 August 2003, the first day of Raëlian year 58 AH, a tech article in the USA Today newspaper mentioned an "unlikely ally" of the Monsanto Company, the Raëlian Movement of Brazil. The movement gave vocal support in response to the company's support for genetically modified organisms, particularly in their country. Brazilian farmers have been using Monsanto's genetically engineered soy plant as well as the glyphosate herbicide to which it was artificially adapted. The Raëlians spoke against the Brazilian government's ban on GMOs.
In July 2001, Raëlians on the streets attracted Italians and Swiss people as they gave leaflets in protest to over a hundred child molesters in existence among Roman Catholic clergy in France. They recommended that parents should not send their children to Catholic confession. The Episcopal vicar of Geneva sued the Raëlian Church for libel but did not win. The judge did not accept the charges for the reason that the Raëlians were not attacking the whole of the Catholic Church.
In October 2002, Raëlians in a Canadian anti-clerical parade handed out Christian crosses to high school students. The students were invited to burn the crosses in a park not far from Montreal's Mount Royal and to sign letters of apostasy from the Roman Catholic Church. The Quebec Association of Bishops called this "incitement to hatred", and several school boards attempted to prevent their students from meeting Raëlians.
Topless Rights of Women
Several Raëlian groups in the United States have organized annual protests based on their claim that women should have the same legal right as men to go topless in public without fear of arrest for indecent exposure. Some people have called the protests a publicity stunt that serves to recruit new members. Their annual event, "Go Topless Day", features women protesting topless except for nipple pasties worn to avoid arrest. The event is held on or near August 26, the anniversary of the day American women won the right to vote.
Advocacy
Embassy for Extraterrestrials
Raëlians believe that life on earth (as well as many religions of the world) was the work of extraterrestrial influence. They believe these beings were scientists whom ancient people regarded as "gods" and named "Elohim". Raëlians believe that the Embassy for Extraterrestrials or "Third Temple" is intended to host official contact with the extraterrestrial Elohim and their messengers from the main religions at the "New Jerusalem". Writers who have influenced Raëlian beliefs include Zechariah Sitchin and Erich von Däniken.
The International Raëlian Movement envisions an entrance with an aseptic chamber leading to a conference room for twenty-one people, as well as a dining room of the same capacity. The plan includes seven rooms for receiving human guests into the embassy. The embassy building, along with a swimming pool, would sit at the center of a large park, protected from trespassing by a wall, a maximum of two stories high, surrounding the entire complex. Trees and bushes are to be planted around the outside of the wall, which is to have a northern and a southern entrance. The terrace should accommodate a landing pad for a spaceship twelve meters (39 ft 4 in) in diameter. The terrace is to sit above the rooms in the torus, which are reserved for extraterrestrials. The seven rooms directly underneath the landing pad would be separated from occupants of other rooms by a thick metal door. Finally, the International Raëlian Movement wants to avoid military and radar surveillance of the airspace above the embassy. Buildings for administration, food and water provisions, and state-of-the-art sanitation and communication systems are part of this vision. A nearby replica of the Raëlian Embassy for Extraterrestrials, open to the public, is expected to show visitors what the real one is like inside.
In February 1991, the Raëlian Church modified its symbol to remove the swastika. The official reason given was a telepathic request from the extraterrestrial Elohim to change the symbol in order to help negotiations with Israel for the building of a Raëlian "embassy", or "third temple of Israel", to greet the anticipated coming of the extraterrestrials and the founders of past religions; the country has nonetheless continued to deny the request.
On 13 December 1997, the leader of the International Raëlian Movement decided to extend the possibility of building the embassy outside of Jerusalem, and to allow a significant portion of the embassy property to be covered with water. The area of the proposed embassy property is still envisioned at a minimum of 3.47 square kilometers, with a radius of at least 1.05 kilometers.
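The two figures quoted above are mutually consistent: a circular plot of radius 1.05 km encloses an area of

A = \pi r^2 = \pi \times (1.05\ \text{km})^2 \approx 3.46\ \text{km}^2,

which matches the stated 3.47 km² minimum to within rounding.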
In 2005, the Israeli Raëlian Guide Kobi Drori stated that the Lebanese government was discussing proposals by the Raëlian movement to build their "interplanetary embassy" in Lebanon. However, one condition was that the Raëlians did not display their logo on top of the building because it mixes a swastika and a Star of David. According to Drori, the Raëlians involved declined this offer, as they wished to keep the symbol as is.
Ideas of how government and the economy should run
According to the book Geniocracy, creating a peaceful worldwide political union requires a form of government that favors intelligence over mediocrity. While having a democratic electoral apparatus, it differs from traditional liberal democracy by requiring members of the electorate to meet a minimum standard of intelligence. The thresholds proposed by the Raëlians are 50% above average for a candidate and 10% above average for a voter. Raëlians believe that a world government is only possible through establishing a global currency, a common language, and a transformation of militaries of the world into civil police.
In Raël's book, Extraterrestrials took me to their Planet, Raël claims that an extraterrestrial gave him the idea of Humanitarianism. Under the establishment of Humanitarianism, people would not have ownership of businesses or exploitable goods created by others. Instead, people would rent each of them for a period of 49 years. The founders would be able to receive the rents for up to 49 years or when they die, whichever is later. Any rents not inherited by relatives after 49 years would go to the State. By balancing inheritances, children would be born with enough financial means to forsake menial tasks for endeavors that may benefit the whole of humanity. Family houses could be inherited from generation to generation, free of rent.
In his much later book, Maitreya, Raël says that the road to a world without money is capitalism and globalization, as opposed to communism. Capitalism would allow people who contribute much to society to also contribute to its scientific and technological development. Under capitalism, society would produce as much money as it can. Money would remain important only in the short run, as nanotechnology quickly lowers the cost of goods while putting many people out of work.
An anti-cult organization called Info-Cult argued that Geniocracy was a fascist ideology. However, Geniocracy is not a political party, because it allows for differing ideologies.
Liberal sensuality
According to Vorilhon's book Sensual Meditation, one should develop the ability to break free of habitual thoughts that prevent one from appreciating everyday phenomena. The book describes in detail six different meditations involving making full use of the lungs' capacity to expand and contract, oxygenating the blood and the cells within, imagining heat travelling upward from the toes to the head, allowing the skin to feel what lies beneath it, and experiencing touch with another person's body while examining their figure.
According to the book Maitreya by Vorilhon, love involves experiencing different varieties and possibilities that allow one to break habits, making life more pleasant and interesting; he holds that love is the only thing that can stop the war and injustice that persist in today's world. Raëlians believe in the right to form new religions or new political parties as long as they do not promote violence. As individualists, Raëlians believe that the one who gives the order to harm others is less at fault than the one who executes it.
Raëlians say they encourage adult homosexual, bisexual, and heterosexual relationships and that society should recognize them legally. However, government authorities, such as those in Switzerland, fear that Raëlians are a threat to public morals because they support liberalized sex education for children. The authorities believe that such liberalized sex education teaches youth how to obtain sexual gratification, which would encourage sexual abuse of underage children. The Raëlians disagreed with those fears and stated that sex education done properly would involve educating parents as well as children.
Susan J. Palmer writes that in 1991, a French journalist went to a Raëlian Seminar and taped couples having sexual intercourse in tents. These tapes gained widespread publicity—with news stories describing these practices as perverted and a form of brainwashing.
Since 1991, Raël's teachings on sexual intercourse have caused controversy among other religious groups. The next year, Catholic schools in Montreal, Canada objected to a proposed condom vending machine as contrary to their mission. In response, Raëlian guides gave the Catholic students ten thousand condoms. The Commissioner of Catholic schools for Montreal said they could do nothing to stop them. Around this time, Raëlians dubbed the event "Operation Condom".
Cloning of humans
In the scientific community, reproductive cloning refers only to the creation of a genetically identical living thing. "Genetically identical" does not mean altogether identical; this kind of cloning does not reproduce a living thing's memories or experiences, for example. However, in discussions of Raëlianism, cloning sometimes refers not only to reproductive cloning, but also to reproductive human cloning combined with mind or brain transfer, or to a process of producing adult clones. Raëlians take this even further and say that humanity can attain eternal life through the science of cloning.
According to the book Yes to Human Cloning, the first stage of this extended cloning process is creating a human embryo through human cloning. Raëlian bishop and Clonaid CEO Brigitte Boisselier claimed that an American woman underwent a cloning procedure of this type that led to the birth of a girl named Eve on 26 December 2002. Vorilhon told lawmakers that banning the development of human cloning was comparable to outlawing medical advances such as "antibiotics, blood transfusions, and vaccines."
The second stage of cloning, according to Raëlians, is causing the clone to mature faster than normal. Raël says that in the future, scientists will discover an "accelerated-growth process", in which something like guided self-assembly of rapidly expanded cells, or even nanotechnological assembly, could form a whole human body in a very short time.
The third stage is the transfer of memory and personality from the original person to the mature clone. For the process to maintain one branch for personality and memory, as opposed to two, a recording of the individual's mind would be required before the time of death, and would be transferred to an adult clone body after the original has died.
In the final stages of development, hitherto unknown information contained within undamaged DNA would be enough to bring others back from the dead including their memories and personality. This would be done by taking a small sample from someone's body and preserving it at the time when the level of the brain's efficiency and knowledge is highest. On the day of death, a cell would be taken from the sample for the cloning to take place, and the memories and personality would be restored to their peak level.
The Raëlian Church has close links with the controversial company Clonaid. Brigitte Boisselier, a Raëlian and chief executive of Clonaid, made a controversial and unverified claim that a human baby was conceived through cloning technology. Around this time, Clonaid's subsidiary BioFusion Tech claimed to possess a cell fusion device that assisted the cloning of human embryos. The Vatican, however, said that the experimenters displayed a "brutal mentality" in attempting to clone human beings. Pope John Paul II criticized the experiment, which he believed threatened the dignity of human life. In response, the leader of the Raëlian Church dismissed the Pope's ethical concerns, calling them an "accumulation of religious prejudices."
See also
Geniocracy
History of Raëlism
Raëlism
Biogenesis
Exotheology
Directed Panspermia
Brigitte Boisselier – French CEO of Clonaid
Claude Vorilhon/Raël – French singer, guitarist, and former automobile journalist
Glenn Carter – British singer, actor
Nayah – French singer
References
Cited texts
Raël, Geniocracy. The Raëlian Foundation, 2004.
Raël, Intelligent Design. Nova Distribution, 2006.
Raël, Maitreya. The Raëlian Foundation, 2003.
Raël, Sensual Meditation. Tagman Press, 2001.
Raël, Yes to Human Cloning: Immortality Thanks to Science. Tagman Press, 2001.
Further reading
The 2005 novel The Possibility of an Island – (translated by Gavin Bowd, original title La Possibilité d'une île) by the French writer, Michel Houellebecq is seen by reviewers as a description of Raëlism in the future.
Raël, La géniocratie. L'Edition du message, 1977.
External links
Who are the Raëlians? David Chazan, BBC News 2002.
The Raëlian books compared to Jean Sendy's
Testimonies by ex-Raelians – a site by a former Raelian.
Intelligent design | Raëlian beliefs and practices | Engineering | 5,061 |
12,465,821 | https://en.wikipedia.org/wiki/C6H8O2 | The molecular formula C6H8O2 may refer to:
Cyclohexanediones
1,2-Cyclohexanedione
1,3-Cyclohexanedione
1,4-Cyclohexanedione
Cyclotene (maple lactone)
cis-1,2-Dihydrocatechol
Methylene cyclopropyl acetic acid
5-Methylfurfuryl alcohol
Parasorbic acid
Sorbic acid | C6H8O2 | Chemistry | 115 |
36,675,565 | https://en.wikipedia.org/wiki/Glossary%20of%20biology | This glossary of biology terms is a list of definitions of fundamental terms and concepts used in biology, the study of life and of living organisms. It is intended as introductory material for novices; for more specific and technical definitions from sub-disciplines and related fields, see Glossary of cell biology, Glossary of genetics, Glossary of evolutionary biology, Glossary of ecology, Glossary of environmental science and Glossary of scientific naming, or any of the organism-specific glossaries in :Category:Glossaries of biology.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
R
S
T
U
V
W
X
Y
Z
See also
Index of biology articles
Outline of biology
Glossaries of sub-disciplines and related fields:
Glossary of botany
Glossary of ecology
Glossary of entomology
Glossary of environmental science
Glossary of genetics
Glossary of ichthyology
Glossary of ornithology
Glossary of scientific naming
Glossary of speciation
Glossary of virology
References
biology
Wikipedia glossaries using description lists | Glossary of biology | Biology | 214 |
3,104,473 | https://en.wikipedia.org/wiki/Diurnality | Diurnality is a form of plant and animal behavior characterized by activity during daytime, with a period of sleeping or other inactivity at night. The common adjective used for daytime activity is "diurnal". The timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation, and the time of year. Diurnality is a cycle of activity within a 24-hour period; cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors except for a zeitgeber. Animals active during twilight are crepuscular, those active during the night are nocturnal and animals active at sporadic times during both night and day are cathemeral.
Plants that open their flowers during the daytime are described as diurnal, while those that bloom during nighttime are nocturnal. The timing of flower opening is often related to the time at which preferred pollinators are foraging. For example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths.
Animals
Many types of animals are classified as diurnal, meaning they are active during the daytime and inactive or resting at night. Commonly classified diurnal animals include mammals, birds, and reptiles. Most primates are diurnal, including humans. Scientifically classifying diurnality within animals can be a challenge, beyond the obvious increase in activity levels during daylight.
Evolution of diurnality
Initially, most animals were diurnal, but adaptations that allowed some animals to become nocturnal contributed to the success of many, especially mammals. This evolutionary movement to nocturnality allowed them to better avoid predators and gain resources with less competition from other animals, though it came with adaptations that mammals live with today. Vision is among the senses most affected by switching between diurnality and nocturnality, as can be seen through biological and physiological analysis of the rod nuclei of primate eyes. These changes include the loss of two of the four cone opsins that assist in colour vision, making many mammals dichromats. When early primates converted back to diurnality, better vision that included trichromatic colour vision became very advantageous, making diurnality and colour vision adaptive traits of Simiiformes, which includes humans. Studies using chromatin distribution analysis of rod nuclei from different simian eyes found that transitions between diurnality and nocturnality occurred several times within primate lineages, with the switch to diurnality being the most common transition.
Still today, diurnality seems to be reappearing in many lineages of other animals, including small rodents such as the Nile grass rat and the golden-mantled ground squirrel, as well as reptiles. More specifically, geckos, which were long thought to be naturally nocturnal, have shown many transitions to diurnality, with about 430 species of geckos now showing diurnal activity. With so many diurnal species recorded, comparative analyses of newer gecko lineages have been done to study the evolution of diurnality; about 20 transitions have been counted among gecko lineages, underlining the significance of diurnality. Strong environmental influences like climate change, predation risk, and competition for resources are all contributing factors. Using the example of geckos, it is thought that species living at higher altitudes, such as Mediodactylus amictopholis, switched to diurnality to help gain more heat through the day and therefore conserve more energy, especially in colder seasons.
Light
Light is one of the most defining environmental factors determining an animal's activity pattern. The photoperiod, or light–dark cycle, is determined by geographical location, with daytime associated with abundant ambient light and nighttime with little ambient light. Light is one of the strongest influences on the suprachiasmatic nucleus (SCN), the part of the hypothalamus that controls the circadian rhythm in most animals, and this is what determines whether an animal is diurnal or not. The SCN uses visual information such as light to initiate a cascade of hormone releases that act on many physiological and behavioural functions.
Light can produce powerful masking effects on an animal's circadian rhythm, meaning that it can "mask" or influence the internal clock, changing the activity patterns of an animal either temporarily or, with enough light over a long period, in the long term. Masking is referred to as positive or negative, according to whether it increases a diurnal animal's activity or decreases a nocturnal animal's activity, respectively. This can be demonstrated by exposing different types of rodents to the same photoperiod: when a diurnal Nile grass rat and a nocturnal mouse are exposed to the same photoperiod and light intensity, the grass rat shows increased activity (positive masking) and the mouse decreased activity (negative masking).
Even small changes in environmental light have been shown to affect the activity of mammals. An observational study of nocturnal owl monkeys in the Gran Chaco in South America showed that increased moonlight at night raised their nighttime activity levels, which led to a decrease in daytime activity; for this species, ambient moonlight is negatively correlated with diurnal activity. This is also connected to the monkeys' foraging behaviour: on nights with little to no moonlight, their ability to forage efficiently suffered, forcing them to be more active during the day to find food.
Other environmental influences
Diurnality has been shown to be an evolutionary trait in many animal species, with diurnality mostly reappearing in many lineages. Other environmental factors like ambient temperature, food availability, and predation risk can all influence whether an animal evolves to be diurnal or, if their effects are strong enough, mask its circadian rhythm and shift its activity pattern toward diurnality. All three factors often interact, and animals need to find a balance among them if they are to survive and thrive.
Ambient temperature has been shown to affect and even convert nocturnal animals to diurnality, as a way for them to conserve metabolic energy. Nocturnal animals are often energetically challenged because they are most active at night, when ambient temperatures are lower than during the day, so they lose a lot of energy as body heat. According to the circadian thermo-energetics (CTE) hypothesis, animals that expend more energy than they take in (through food and sleep) will be more active during the light phase, that is, during the day. This has been shown in laboratory studies of small nocturnal mice: when placed under a combination of sufficient cold and hunger stress, they converted to diurnality through temporal niche switching, as expected. Another similar study involving energetically challenged small mammals showed that diurnality is most beneficial when the animal has a sheltered location to rest in, reducing heat loss. Both studies concluded that nocturnal mammals do change their activity patterns to be more diurnal when energetically stressed (due to heat loss and limited food availability), but only when predation is also limited, meaning the risk of predation is less than the risk of freezing or starving to death.
Plants
Many plants are diurnal or nocturnal in the opening and closing of their flowers. Most angiosperm plants are visited by various insects, so the flower adapts its phenology to the most effective pollinators. For example, the baobab is pollinated by fruit bats and starts blooming in late afternoon; the flowers are dead within twenty-four hours.
In technology operations
Services that alternate between high and low utilization in a daily cycle are described as being diurnal. Many websites have the most users during the day and little utilization at night, or vice versa. Operations planners can use this cycle to plan, for example, maintenance that needs to be done when there are fewer users on the web site.
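As a minimal sketch of this kind of planning (not from the source; the hourly request counts below are invented for the example), one can scan a 24-hour diurnal utilization profile for the quietest consecutive hours and schedule maintenance there:

# Find the k consecutive hours with the lowest total traffic in a
# 24-hour diurnal utilization profile -- a candidate maintenance window.
def quietest_window(hourly_requests, k=3):
    assert len(hourly_requests) == 24
    best_start, best_load = 0, float("inf")
    for start in range(24):
        # Wrap around midnight so windows such as 23:00-02:00 are considered.
        load = sum(hourly_requests[(start + i) % 24] for i in range(k))
        if load < best_load:
            best_start, best_load = start, load
    return best_start, best_load

# Hypothetical diurnal profile: busy by day, quiet at night.
profile = [120, 80, 60, 50, 55, 90, 300, 800, 1500, 1800, 1900, 2000,
           2100, 2000, 1900, 1800, 1700, 1600, 1400, 1100, 800, 500, 300, 180]
start, load = quietest_window(profile)
print(f"Quietest 3-hour window starts at {start:02d}:00 ({load} requests)")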
Notes
See also
Diurnal cycle
Cathemeral
Chronotype
Crepuscular
Crypsis
Nocturnality
References
Ethology
Circadian rhythm
Day | Diurnality | Biology | 1,723 |
48,938,687 | https://en.wikipedia.org/wiki/Absement | In kinematics, absement (or absition) is a measure of sustained displacement of an object from its initial position, i.e. a measure of how far away and for how long. The word absement is a portmanteau of the words absence and displacement. Similarly, its synonym absition is a portmanteau of the words absence and position.
Absement changes as an object remains displaced and stays constant as the object resides at the initial position. It is the first time-integral of the displacement (i.e. absement is the area under a displacement vs. time graph), so the displacement is the rate of change (first time-derivative) of the absement. The dimension of absement is length multiplied by time. Its SI unit is meter second (m·s), which corresponds to an object having been displaced by 1 meter for 1 second. This is not to be confused with a meter per second (m/s), a unit of velocity, the time-derivative of position.
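In symbols, writing x(t) for the displacement at time t, the absement is

A(t) = \int_0^t x(\tau)\, d\tau, \qquad \frac{dA}{dt} = x(t),

with dimensions of length multiplied by time.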
For example, opening the gate of a gate valve (of rectangular cross section) by 1 mm for 10 seconds yields the same absement of 10 mm·s as opening it by 5 mm for 2 seconds. The amount of water having flowed through it is linearly proportional to the absement of the gate, so it is also the same in both cases.
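A quick numerical check of this equivalence, as a minimal sketch (the two gate-opening profiles below are the hypothetical ones just described):

def absement(times, displacements):
    # Trapezoidal time-integral of the displacement samples.
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (displacements[i] + displacements[i - 1]) * dt
    return total

t1 = [i * 0.01 for i in range(1001)]   # 0..10 s
x1 = [1.0] * len(t1)                   # gate held open 1 mm for 10 s
t2 = [i * 0.01 for i in range(201)]    # 0..2 s
x2 = [5.0] * len(t2)                   # gate held open 5 mm for 2 s

print(absement(t1, x1))  # ~10.0 mm·s
print(absement(t2, x2))  # ~10.0 mm·s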
Occurrence in nature
Whenever the rate of change q′ of a quantity q is proportional to the displacement of an object, q is a linear function of the object's absement. For example, when the fuel flow rate is proportional to the position of the throttle lever, the total amount of fuel consumed is proportional to the lever's absement.
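Concretely, if q′(t) = k·x(t) for a constant k, integrating both sides over time gives

q(T) = q(0) + k \int_0^T x(t)\, dt = q(0) + k\, A(T),

so q is a linear function of the absement A(T).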
The first published paper on the topic of absement introduced and motivated it as a way to study flow-based musical instruments such as the hydraulophone. It modelled empirical observations of some hydraulophones in which obstructing a water jet for a longer period of time resulted in a buildup in sound level as water accumulated in a sounding mechanism (reservoir), up to a maximum filling point beyond which the sound level peaked or fell off (with a slow decay once the water jet was unblocked). Absement has also been used to model artificial muscles, real muscle interaction in a physical fitness context, and human posture.
As the displacement can be seen as a mechanical analogue of electric charge, the absement can be seen as a mechanical analogue of the time-integrated charge, a quantity useful for modelling some types of memory elements.
Applications
In addition to modeling fluid flow and for Lagrangian modeling of electric circuits, absement is used in physical fitness and kinesiology to model muscle bandwidth, and as a new form of physical fitness training. In this context, it gives rise to a new quantity called actergy, which is to energy as energy is to power. Actergy has the same units as action (joule-seconds) but is the time-integral of total energy (time-integral of the Hamiltonian rather than time-integral of the Lagrangian).
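Viewed as a chain of successive time-integrals, with actergy as defined in the text above:

E(t) = \int_0^t P(\tau)\, d\tau, \qquad \text{actergy}(T) = \int_0^T E(t)\, dt,

so actergy stands to energy as energy stands to power, with units of joule-seconds (J·s).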
Just as displacement and its derivatives form kinematics, so do displacement and its integrals form "integral kinematics".
Fluid flow in a throttle, where the flow rate is proportional to the throttle opening x(t), illustrates this: the total volume delivered is proportional to the absement,

V(T) = \int_0^T k\, x(t)\, dt = k\, A(T).
Relation to PID controllers
PID controllers work on a signal proportional to a physical quantity (e.g. displacement, proportional to position) together with its integral(s) and derivative(s); in the Bratland sense, PID is thus defined through the integrals and derivatives of the position of a control element.
Example of a PID controller (Bratland 2014), with a minimal sketch following the list:
P, position;
I, absement;
D, velocity.
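A minimal discrete-time sketch of such a controller (the gains and sample time below are hypothetical, not from the source): the integral accumulator is exactly the absement of the position error, and the finite difference approximates its velocity.

# Position-based PID step in the Bratland sense: the I term accumulates
# the absement (time-integral) of the error; the D term is its velocity.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.absement = 0.0       # integral of position error (m·s)
        self.prev_error = 0.0

    def step(self, setpoint, position):
        error = setpoint - position
        self.absement += error * self.dt                 # I: absement
        velocity = (error - self.prev_error) / self.dt   # D: velocity
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.absement
                + self.kd * velocity)

# Hypothetical gains and a 10 ms sample time.
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
command = controller.step(setpoint=1.0, position=0.8)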
Strain absement
Strain absement is the time-integral of strain, and is used extensively in mechanical systems and memsprings, where it has been described as "a quantity called absement which allows mem-spring models to display hysteretic response in great abundance".
Anglement
Absement originally arose in situations involving valves and fluid flow, for which the opening of a valve was by a long, T-shaped handle, which actually varied in angle rather than position. The time-integral of angle is called "anglement" and it is approximately equal or proportional to absement for small angles, because the sine of an angle is approximately equal to the angle for small angles.
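For a handle whose tip sits at lever-arm length r (an assumed parameter, not given in the text), the tip displacement is x(t) = r·sin θ(t), so for small angles

\int_0^T x(t)\, dt = r \int_0^T \sin\theta(t)\, dt \approx r \int_0^T \theta(t)\, dt,

making the absement approximately proportional to the anglement.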
Phase space: Absement and momentement
In regard to a conjugate variable for absement, the time-integral of momentum, known as momentement, has been proposed.
This is consistent with Jeltsema's 2012 treatment with charge and flux as the base units rather than current and voltage.
References
External links
Motion (physics)
Vector physical quantities | Absement | Physics,Mathematics | 986 |
25,341,587 | https://en.wikipedia.org/wiki/Michael%20Hissmann | Michael Hissmann (1752, Hermannstadt – 1784, Göttingen) was a German philosopher, an advocate of French sensualism, and a radical materialist who translated Condillac, Charles de Brosses, and Joseph Priestley into German.
Hissmann studied philosophy at Erlangen and Göttingen. From 1778 to 1783 he edited the Magazin für die Philosophie und ihre Geschichte. He became an extraordinary professor at Göttingen in 1782, and a full professor in 1784.
Selected works
De Infinito, 1776
Geschichte der Lehre von der Association der Ideen, 1776.
Psychologische Versuche, ein Beytrag zur esoterischen Logik, 1777.
Anleitung zur Kenntniß der auserlesenen Literatur in allen Theilen der Philosophie, 1778.
Briefe über Gegenstände der Philosophie, 1778.
Magazin für die Philosophie und ihre Geschichte, 6 volumes, 1778–83.
Untersuchungen über den Stand der Natur, 1780.
Versuch über das Leben des Freyherrn von Leibnitz, 1783.
Ausgewählte Schriften, edited by Udo Roth and Gideon Stiening, Berlin 2013.
References
1752 births
1784 deaths
18th-century German philosophers
Materialists
University of Erlangen-Nuremberg alumni
University of Göttingen alumni
German male writers | Michael Hissmann | Physics | 289 |
16,712,120 | https://en.wikipedia.org/wiki/Ultra%2024 | The Ultra 24 is a family of computer workstations by Sun Microsystems based on the Intel Core 2 processor.
The Sun Ultra 24 launched in 2007, and shipped with Solaris 10 pre-installed. Other than Solaris, it is officially compatible with various flavours of Linux as well as Microsoft's Windows XP and Windows Vista.
Features
CPU: one Intel Core 2 processor, 2.0 GHz or higher:
Intel Core 2 Duo processor
Intel Core 2 Quad processor
Intel Core 2 Extreme processor
Memory—ECC unbuffered DDR2-667 DIMMs, 4 DIMM slots, 8 GB maximum. Three DIMM sizes, 512 MB, 1 GB, and 2 GB
Networking—Single Gigabit Ethernet integrated on motherboard, one RJ-45 port (rear)
Hard Disk Drives—Up to four internal drives:
either up to four SATA drives, 3 TB maximum: 250 GB, 750 GB (7,200 rpm)
or, with optional PCIe SAS HBA: Up to four SAS drives, 1.2 TB maximum: 146 GB, 300 GB (15,000 rpm)
Graphics: provided by a PCIe card
PCI Express Slots:
Two full-length x16 Gen-2 slots
One full-length x8 slot (electrically x4)
One full-length x1 slot
References
External links
System Specifications on the Oracle documentation website
Official Oracle Ultra 24 documentation
See also
Sun Ultra series: various Sun workstations and servers using SPARC, AMD or Intel processors.
Sun workstations | Ultra 24 | Technology | 312 |
1,110,270 | https://en.wikipedia.org/wiki/Plastoquinone | Plastoquinone (PQ) is a terpenoid-quinone (meroterpenoid) molecule involved in the electron transport chain in the light-dependent reactions of photosynthesis. The most common form of plastoquinone, known as PQ-A or PQ-9, is a 2,3-dimethyl-1,4-benzoquinone molecule with a side chain of nine isoprenyl units. There are other forms of plastoquinone, such as ones with shorter side chains like PQ-3 (which has 3 isoprenyl side units instead of 9) as well as analogs such as PQ-B, PQ-C, and PQ-D, which differ in their side chains. The benzoquinone and isoprenyl units are both nonpolar, anchoring the molecule within the inner section of a lipid bilayer, where the hydrophobic tails are usually found.
Plastoquinones are very structurally similar to ubiquinone, or coenzyme Q10, differing by the length of the isoprenyl side chain, replacement of the methoxy groups with methyl groups, and removal of the methyl group in the 2 position on the quinone. Like ubiquinone, it can come in several oxidation states: plastoquinone, plastosemiquinone (unstable), and plastoquinol, which differs from plastoquinone by having two hydroxyl groups instead of two carbonyl groups.
Plastoquinol, the reduced form, also functions as an antioxidant by reducing reactive oxygen species, some produced from the photosynthetic reactions, that could harm the cell membrane. One example of how it does this is by reacting with superoxides to form hydrogen peroxide and plastosemiquinone.
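One plausible balanced scheme for this scavenging step, consistent with the products named above (the exact stoichiometry here is an assumption, not stated in the text):

PQH2 + O2·− + H+ → PQH· + H2O2

where PQH· is the plastosemiquinone radical.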
The prefix plasto- means either plastid or chloroplast, alluding to its location within the cell.
Role in photosynthesis
The role that plastoquinone plays in photosynthesis, more specifically in the light-dependent reactions of photosynthesis, is that of a mobile electron carrier through the membrane of the thylakoid.
Plastoquinone is reduced when it accepts two electrons from photosystem II and two hydrogen cations (H+) from the stroma of the chloroplast, thereby forming plastoquinol (PQH2). It transfers the electrons further down the electron transport chain to plastocyanin, a mobile, water-soluble electron carrier, through the cytochrome b6f protein complex. The cytochrome b6f protein complex catalyzes the electron transfer between plastoquinone and plastocyanin, but also transports the two protons into the lumen of thylakoid discs. This proton transfer forms an electrochemical gradient, which is used by ATP synthase at the end of the light dependent reactions in order to form ATP from ADP and Pi.
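The reduction described above can be summarized as the half-reaction:

PQ + 2e− + 2H+ → PQH2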
Within photosystem II
Plastoquinone is found within photosystem II in two specific binding sites, known as QA and QB. The plastoquinone at QA, the primary binding site, is very tightly bound, compared to the plastoquinone at QB, the secondary binding site, which is much more easily removed. QA is only transferred a single electron, so it has to transfer an electron to QB twice before QB is able to pick up two protons from the stroma and be replaced by another plastoquinone molecule. The protonated QB then joins a pool of free plastoquinone molecules in the membrane of the thylakoid. The free plastoquinone molecules eventually transfer electrons to the water-soluble plastocyanin so as to continue the light-dependent reactions. There are additional plastoquinone binding sites within photosystem II (QC and possibly QD), but their function and/or existence have not been fully elucidated.
Biosynthesis
p-Hydroxyphenylpyruvate is synthesized from tyrosine, while solanesyl diphosphate is synthesized through the MEP/DOXP pathway. Homogentisate is formed from p-hydroxyphenylpyruvate and is then combined with solanesyl diphosphate through a condensation reaction. The resulting intermediate, 2-methyl-6-solanesyl-1,4-benzoquinol, is then methylated to form the final product, plastoquinol-9. This pathway is used in most photosynthetic organisms, like algae and plants. However, cyanobacteria appear not to use homogentisate for synthesizing plastoquinol, possibly resulting in a pathway different from the one shown below.
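In outline, the plant and algal route just described is:

tyrosine → p-hydroxyphenylpyruvate → homogentisate
MEP/DOXP pathway → solanesyl diphosphate
homogentisate + solanesyl diphosphate → 2-methyl-6-solanesyl-1,4-benzoquinol → (methylation) → plastoquinol-9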
Derivatives
Some derivatives that were designed to penetrate mitochondrial cell membranes (SkQ1 (plastoquinonyl-decyl-triphenylphosphonium), SkQR1 (the rhodamine-containing analog of SkQ1), SkQ3) have anti-oxidant and protonophore activity. SkQ1 has been proposed as an anti-aging treatment, with the possible reduction of age-related vision issues due to its antioxidant ability. This antioxidant ability results from both its antioxidant ability to reduce reactive oxygen species (derived from the part of the molecule containing plastoquinonol), which are often formed within mitochondria, as well as its ability to increase ion exchange across membranes (derived from the part of the molecule containing cations that can dissolve within membranes). Specifically, like plastoquinol, SkQ1 has been shown to scavenge superoxides both within cells (in vivo) and outside of cells (in vitro). SkQR1 and SkQ1 have also been proposed as a possible way to treat brain issues like Alzheimer's due to their ability to potentially fix damages caused by amyloid beta. Additionally, SkQR1 has been shown as a way to reduce the issues caused by brain trauma through its antioxidant abilities, which help prevent cell death signals by reducing the amounts of reactive oxygen species coming from mitochondria.
References
External links
Plastoquinones History, absorption spectra, and analogs.
Photosynthesis
Light reactions
1,4-Benzoquinones
Meroterpenoids | Plastoquinone | Chemistry,Biology | 1,367 |
42,726,277 | https://en.wikipedia.org/wiki/Beryllium%20sulfide | Beryllium sulfide (BeS) is an ionic compound from the sulfide group with the formula BeS. It is a white solid with a sphalerite structure that is decomposed by water and acids.
Preparation
Beryllium sulfide powders can be prepared by reacting sulfur and beryllium in a hydrogen atmosphere, heating the mixture for 10–20 minutes at temperatures of 1000–1300 °C. If the reaction is carried out at 900 °C, the product contains beryllium metal impurities.
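A balanced equation for this direct combination (the hydrogen serving as a protective atmosphere):

Be + S → BeS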
Alternatively, it can be prepared by the reaction of beryllium chloride and hydrogen sulfide at 900 °C.
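Balanced, this route is:

BeCl2 + H2S → BeS + 2HCl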
References
Beryllium compounds
Monosulfides
II-VI semiconductors
Zincblende crystal structure | Beryllium sulfide | Chemistry | 144 |