| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
14,655,528 | https://en.wikipedia.org/wiki/Machine%20Check%20Architecture | In computing, Machine Check Architecture (MCA) is an Intel and AMD mechanism in which the CPU reports hardware errors to the operating system.
Intel's P6 and Pentium 4 family processors, AMD's K7 and K8 family processors, as well as the Itanium architecture implement a machine check architecture that provides a mechanism for detecting and reporting hardware (machine) errors, such as: system bus errors, ECC errors, parity errors, cache errors, and translation lookaside buffer errors. It consists of a set of model-specific registers (MSRs) that are used to set up machine checking and additional banks of MSRs used for recording errors that are detected.
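On Linux, the msr driver exposes these registers to privileged software through /dev/cpu/N/msr, where a register is read as eight bytes at the file offset equal to its address. Below is a minimal diagnostic sketch of inspecting the error-reporting banks this way; the register addresses used (IA32_MCG_CAP at 0x179, IA32_MC0_STATUS at 0x401 with a stride of 4 per bank) follow the conventional x86 layout and should be verified against the vendor manuals for a given processor.

```python
import os
import struct

MSR_IA32_MCG_CAP = 0x179      # bits 7:0 report the number of error banks (assumed conventional layout)
MSR_IA32_MC0_STATUS = 0x401   # status MSR of bank 0; banks are spaced 4 MSRs apart

def rdmsr(cpu: int, reg: int) -> int:
    """Read one 64-bit MSR via the Linux msr driver (requires root and `modprobe msr`)."""
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        return struct.unpack("<Q", os.pread(f.fileno(), 8, reg))[0]

def dump_mca_banks(cpu: int = 0) -> None:
    """Print the status register of every bank that currently holds a logged error."""
    banks = rdmsr(cpu, MSR_IA32_MCG_CAP) & 0xFF
    for bank in range(banks):
        status = rdmsr(cpu, MSR_IA32_MC0_STATUS + 4 * bank)
        if status >> 63:                      # bit 63 (VAL) marks a valid logged error
            print(f"bank {bank}: status={status:#018x}")

if __name__ == "__main__":
    dump_mca_banks()
```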
See also
Machine-check exception (MCE)
High availability (HA)
Reliability, availability and serviceability (RAS)
Windows Hardware Error Architecture (WHEA)
References
External links
Microsoft's article on Itanium's MCA
Linux x86 daemon for processing of machine checks
Computer architecture
X86 architecture | Machine Check Architecture | [
"Technology",
"Engineering"
] | 203 | [
"Computer engineering",
"Computer architecture",
"Computer hardware stubs",
"Computing stubs",
"Computers"
] |
14,655,670 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282003%29 | This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2003. This is part of the Wikipedia summary of Oil Megaprojects—see that page for further details. 2003 saw 30 projects come on stream with an aggregate capacity of when full production was reached (which may not have been in 2003).
Quick Links to Other Years
Detailed Project Table for 2003
See also
2003 world oil market chronology
References
2003
Oil fields
Proposed energy projects
Projects established in 2003
2003 in the environment
2003 in technology | Oil megaprojects (2003) | [
"Engineering"
] | 112 | [
"Oil megaprojects",
"Megaprojects"
] |
14,655,680 | https://en.wikipedia.org/wiki/P-type%20ATPase | The P-type ATPases, also known as E1-E2 ATPases, are a large group of evolutionarily related ion and lipid pumps that are found in bacteria, archaea, and eukaryotes. P-type ATPases are α-helical bundle primary transporters named based upon their ability to catalyze auto- (or self-) phosphorylation (hence P) of a key conserved aspartate residue within the pump and their energy source, adenosine triphosphate (ATP). In addition, they all appear to interconvert between at least two different conformations, denoted by E1 and E2. P-type ATPases fall under the P-type ATPase (P-ATPase) Superfamily (TC# 3.A.3) which, as of early 2016, includes 20 different protein families.
Most members of this transporter superfamily catalyze cation uptake and/or efflux; however, one subfamily, the flippases (TC# 3.A.3.8), is involved in flipping phospholipids to maintain the asymmetric nature of the biomembrane.
In humans, P-type ATPases serve as a basis for nerve impulses, relaxation of muscles, secretion and absorption in the kidney, absorption of nutrients in the intestine and other physiological processes. Prominent examples of P-type ATPases are the sodium-potassium pump (Na+/K+-ATPase), the proton-potassium pump (H+/K+-ATPase), the calcium pump (Ca2+-ATPase) and the plasma membrane proton pump (H+-ATPase) of plants and fungi.
General transport reaction
The generalized reaction for P-type ATPases is
nLigand1 (out) + mLigand2 (in) + ATP → nLigand1 (in) + mLigand2 (out) + ADP + Pi.
where the ligand can be either a metal ion or a phospholipid molecule.
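For the sodium-potassium pump, for example, Ligand1 is K+ with n = 2 and Ligand2 is Na+ with m = 3 (the well-established stoichiometry, stated here for illustration), so the reaction reads 2K+ (out) + 3Na+ (in) + ATP → 2K+ (in) + 3Na+ (out) + ADP + Pi.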
Discovery
The first P-type ATPase discovered was the Na+/K+-ATPase, which Nobel laureate Jens Christian Skou isolated in 1957. The Na+/K+-ATPase was only the first member of a large and still-growing protein family (see Swiss-Prot Prosite motif PS00154).
Structure
P-type ATPases have a single catalytic subunit of 70–140 kDa. The catalytic subunit hydrolyzes ATP, contains the aspartyl phosphorylation site and binding sites for the transported ligand(s), and catalyzes ion transport. Various subfamilies of P-type ATPases also need additional subunits for proper function. Additional subunits that lack catalytic activity are present in the ATPase complexes of P1A, P2A, P2C and P4 ATPases. For example, the catalytic alpha subunit of the Na+/K+-ATPase is accompanied by two additional subunits, beta and gamma, involved in trafficking, folding, and regulation of these pumps. The first P-type ATPase to be crystallized was SERCA1a, a sarco(endo)plasmic reticulum Ca2+-ATPase of fast twitch muscle from adult rabbit. It is generally acknowledged that the structure of SERCA1a is representative for the superfamily of P-type ATPases.
The catalytic subunit of P-type ATPases is composed of a cytoplasmic section and a transmembrane section with binding sites for the transported ligand(s). The cytoplasmic section consists of three cytoplasmic domains, designated the P, N, and A domains, containing over half the mass of the protein.
Membrane section
The transmembrane section (M domain) typically has ten transmembrane helices (M1-M10), with the binding sites for transported ligand(s) located near the midpoint of the bilayer. While most subfamilies have 10 transmembrane helices, there are some notable exceptions: the P1A ATPases are predicted to have 7, the large subfamily of heavy metal pumps (P1B) is predicted to have 8, and the P5 ATPases appear to have a total of 12 transmembrane helices.
Common to all P-type ATPases is a core of 6 transmembrane-spanning segments (also called the 'transport (T) domain'; M1-M6 in SERCA) that harbors the binding sites for the translocated ligand(s). The ligand(s) enter through a half-channel to the binding site and leave on the other side of the membrane through another half-channel.
What varies among P-type ATPases is the number of additional transmembrane-spanning segments (also called the 'support (S) domain'), which ranges between subfamilies from 2 to 6. These extra transmembrane segments likely provide structural support for the T domain and can also have specialized functions.
Phosphorylation (P) domain
The P domain contains the canonical aspartic acid residue phosphorylated (in a conserved DKTGT motif; the 'D' is the one-letter abbreviation of the amino acid aspartate) during the reaction cycle. It is composed of two parts widely separated in sequence. These two parts assemble into a seven-strand parallel β-sheet with eight short associated α-helices, forming a Rossmann fold.
The folding pattern and the locations of the amino acids critical for phosphorylation in P-type ATPases correspond to the haloacid dehalogenase fold characteristic of the haloacid dehalogenase (HAD) superfamily, as predicted by sequence homology. The HAD superfamily operates on the common theme of aspartate ester formation by an SN2 reaction mechanism. This SN2 reaction is clearly observed in the solved structure of SERCA with ADP plus AlF4−.
Nucleotide binding (N) domain
The N domain serves as a built-in protein kinase that functions to phosphorylate the P domain. The N domain is inserted between the two segments of the P domain, and is formed of a seven-strand antiparallel β-sheet between two helix bundles. This domain contains the ATP-binding pocket, pointing out toward the solvent near the P-domain.
Actuator (A) domain
The A domain serves as a built-in protein phosphatase that functions to dephosphorylate the phosphorylated P domain. The A domain is the smallest of the three cytoplasmic domains. It consists of a distorted jellyroll structure and two short helices. It is the actuator domain modulating the occlusion of the transported ligand(s) in the transmembrane binding sites, and it is pivotal in transducing the energy from the hydrolysis of ATP in the cytoplasmic domains into the vectorial transport of cations in the transmembrane domain. The A domain dephosphorylates the P domain as part of the reaction cycle using a highly conserved TGES motif located at one end of the jellyroll.
Regulatory (R) domain
Some members of the P-type ATPase family have additional regulatory (R) domains fused to the pump. Heavy metal P1B pumps can have several N- and C-terminal heavy metal-binding domains that have been found to be involved in regulation. The P2B Ca2+ ATPases have autoinhibitory domains in their amino-terminal (plants) or carboxy-terminal (animals) regions, which contain binding sites for calmodulin, which, in the presence of Ca2+, activates P2B ATPases by neutralizing the terminal constraint. The P3A plasma membrane proton pumps have a C-terminal regulatory domain, which, when unphosphorylated, inhibits pumping.
Mechanism
All P-type ATPases use the energy derived from ATP to drive transport. They form a high-energy aspartyl-phosphoanhydride intermediate in the reaction cycle, and they interconvert between at least two different conformations, denoted by E1 and E2. The E1-E2 notation stems from the initial studies on this family of enzymes made on the Na+/K+-ATPase, where the sodium form and the potassium form are referred to as E1 and E2, respectively, in the "Post-Albers scheme". The E1-E2 scheme has been proven to work, but more than two major conformational states exist. The E1-E2 notation highlights the selectivity of the enzyme: in E1, the pump has high affinity for the exported substrate and low affinity for the imported substrate; in E2, it has low affinity for the exported substrate and high affinity for the imported substrate. Four major enzyme states form the cornerstones in the reaction cycle. Several additional reaction intermediates occur in between. These are termed E1~P, E2P, E2-P*, and E1/E2.
ATP hydrolysis occurs in the cytoplasmic headpiece at the interface between domain N and P. Two Mg-ion sites form part of the active site. ATP hydrolysis is tightly coupled to translocation of the transported ligand(s) through the membrane, more than 40 Å away, by the A domain.
Classification
A phylogenetic analysis of 159 sequences made in 1998 by Axelsen and Palmgren suggested that P-type ATPases can be divided into five subfamilies (types; designated as P1-P5), based strictly on a conserved sequence kernel excluding the highly variable N and C terminal regions. Chan et al. (2010) also analyzed P-type ATPases in all major prokaryotic phyla for which complete genome sequence data were available and compared the results with those for eukaryotic P-type ATPases. The phylogenetic analysis grouped the proteins independent of the organism from which they are isolated and showed that the diversification of the P-type ATPase family occurred prior to the separation of eubacteria, archaea, and eucaryota. This underlines the significance of this protein family for cell survival under stress conditions.
P1 ATPases
P1 ATPases (or Type I ATPases) consist of the transition/heavy metal ATPases. Topological type I (heavy metal) P-type ATPases predominate in prokaryotes (approx. tenfold).
P1A ATPases (potassium pumps)
P1A ATPases (or Type IA) are involved in K+ import (TC# 3.A.3.7). They are atypical P-type ATPases because, unlike other P-type ATPases, they function as part of a heterotetrameric complex (called KdpFABC), where the actual K+ transport is mediated by another subcomponent of the complex.
P1B ATPases (heavy metal pumps)
P1B ATPases (or Type IB ATPases) are involved in transport of the soft Lewis acids: Cu+, Ag+, Cu2+, Zn2+, Cd2+, Pb2+ and Co2+ (TC#s 3.A.3.5 and 3.A.3.6). They are key elements for metal resistance and metal homeostasis in a wide range of organisms.
Metal binding to transmembrane metal-binding sites (TM-MBS) in Cu+-ATPases is required for enzyme phosphorylation and subsequent transport. However, Cu+ does not access Cu+-ATPases in a free (hydrated) form but is bound to a chaperone protein. The delivery of Cu+ by Archaeoglobus fulgidus Cu+-chaperone, CopZ (see TC# 3.A.3.5.7), to the corresponding Cu+-ATPase, CopA (TC# 3.A.3.5.30), has been studied. CopZ interacted with and delivered the metal to the N-terminal metal binding domain(s) of CopA (MBDs). Cu+-loaded MBDs, acting as metal donors, were unable to activate CopA or a truncated CopA lacking MBDs. Conversely, Cu+-loaded CopZ activated the CopA ATPase and CopA constructs in which MBDs were rendered unable to bind Cu+. Furthermore, under nonturnover conditions, CopZ transferred Cu+ to the TM-MBS of a CopA lacking MBDs altogether. Thus, MBDs may serve a regulatory function without participating directly in metal transport, and the chaperone delivers Cu+ directly to transmembrane transport sites of Cu+-ATPases. Wu et al. (2008) have determined structures of two constructs of the Cu (CopA) pump from Archaeoglobus fulgidus by cryoelectron microscopy of tubular crystals, which revealed the overall architecture and domain organization of the molecule. They localized its N-terminal MBD within the cytoplasmic domains that use ATP hydrolysis to drive the transport cycle and built a pseudoatomic model by fitting existing crystallographic structures into the cryoelectron microscopy maps for CopA. The results also similarly suggested a Cu-dependent regulatory role for the MBD.
In the Archaeoglobus fulgidus CopA (TC# 3.A.3.5.7), invariant residues in helices 6, 7 and 8 form two transmembrane metal binding sites (TM-MBSs). These bind Cu+ with high affinity in a trigonal planar geometry. The cytoplasmic Cu+ chaperone CopZ transfers the metal directly to the TM-MBSs; however, loading both of the TM-MBSs requires binding of nucleotides to the enzyme. In agreement with the classical transport mechanism of P-type ATPases, occupancy of both transmembrane sites by cytoplasmic Cu+ is a requirement for enzyme phosphorylation and subsequent transport into the periplasmic or extracellular milieu. Transport studies have shown that most Cu+-ATPases drive cytoplasmic Cu+ efflux, albeit with quite different transport rates in tune with their various physiological roles. Archetypical Cu+-efflux pumps responsible for Cu+ tolerance, like the Escherichia coli CopA, have turnover rates ten times higher than those involved in cuproprotein assembly (or alternative functions). This explains the incapability of the latter group to contribute significantly to the metal efflux required for survival in high copper environments. Structural and mechanistic details of copper-transporting P-type ATPase function have been described.
P2 ATPases
P2 ATPases (or Type II ATPases) are split into four groups. Topological type II ATPases (specific for Na+, K+, H+, Ca2+, Mg2+ and phospholipids) predominate in eukaryotes (approx. twofold).
P2A ATPases (calcium pumps)
P2A ATPases (or Type IIA ATPases) are Ca2+ ATPases that transport Ca2+. P2A ATPases are split into two groups. Members of the first group are called sarco/endoplasmic reticulum Ca2+-ATPases (also referred to as SERCA). These pumps have two Ca2+ ion binding sites and are often regulated by inhibitory accessory proteins having a single transmembrane-spanning segment (e.g. phospholamban and sarcolipin). In the cell, they are located in the sarcoplasmic or endoplasmic reticulum. SERCA1a is a type IIA pump. The second group of P2A ATPases is called secretory pathway Ca2+-ATPases (also referred to as SPCA). These pumps have a single Ca2+ ion binding site and are located in secretory vesicles (animals) or the vacuolar membrane (fungi). (TC# 3.A.3.2)
Crystal structures of sarcoplasmic/endoplasmic reticulum ATP-driven calcium pumps can be found in the RCSB.
SERCA1a is composed of a cytoplasmic section and a transmembrane section with two Ca2+-binding sites. The cytoplasmic section consists of three cytoplasmic domains, designated the P, N, and A domains, containing over half the mass of the protein. The transmembrane section has ten transmembrane helices (M1-M10), with the two Ca2+-binding sites located near the midpoint of the bilayer. The binding sites are formed by side-chains and backbone carbonyls from M4, M5, M6, and M8. M4 is unwound in this region due to a conserved proline (P308). This unwinding of M4 is recognised as a key structural feature of P-type ATPases.
Structures are available for both the E1 and E2 states of the Ca2+ ATPase showing that Ca2+ binding induces major changes in all three cytoplasmic domains relative to each other.
In the case of SERCA1a, energy from ATP is used to transport 2 Ca2+-ions from the cytoplasmic side to the lumen of the sarcoplasmic reticulum, and to countertransport 1-3 protons into the cytoplasm. Starting in the E1/E2 state, the reaction cycle begins as the enzyme releases 1-3 protons from the cation-ligating residues, in exchange for cytoplasmic Ca2+-ions. This leads to assembly of the phosphorylation site between the ATP-bound N domain and the P domain, while the A domain directs the occlusion of the bound Ca2+. In this occluded state, the Ca2+ ions are buried in a proteinaceous environment with no access to either side of the membrane. The Ca2E1~P state is formed through a kinase reaction, in which the P domain becomes phosphorylated, producing ADP. The cleavage of the β-phosphodiester bond releases the gamma-phosphate from ADP and unleashes the N domain from the P domain.
This then allows the A domain to rotate toward the phosphorylation site, making a firm association with both the P and the N domains. This movement of the A domain exerts a downward push on M3-M4 and a drag on M1-M2, forcing the pump to open at the luminal side and forming the E2P state. During this transition, the transmembrane Ca2+-binding residues are forced apart, destroying the high-affinity binding site. This is in agreement with the general model for substrate translocation, showing that energy in primary transport is not used to bind the substrate but to release it again from the buried counter ions. At the same time the N domain becomes exposed to the cytosol, ready for ATP exchange at the nucleotide-binding site.
As the Ca2+ dissociate to the luminal side, the cation binding sites are neutralised by proton binding, which makes a closure of the transmembrane segments favourable. This closure is coupled to a downward rotation of the A domain and a movement of the P domain, which then leads to the E2-P* occluded state. Meanwhile, the N domain exchanges ADP for ATP.
The P domain is dephosphorylated by the A domain, and the cycle completes when the phosphate is released from the enzyme, stimulated by the newly bound ATP, while a cytoplasmic pathway opens to exchange the protons for two new Ca2+ ions.
Xu et al. proposed how Ca2+ binding induces conformational changes in TMS 4 and 5 in the membrane domain (M) that in turn induce rotation of the phosphorylation domain (P). The nucleotide binding (N) and β-sheet (β) domains are highly mobile, with N flexibly linked to P, and β flexibly linked to M. Modeling of the fungal H+ ATPase, based on the structures of the Ca2+ pump, suggested a comparable 70º rotation of N relative to P to deliver ATP to the phosphorylation site.
One report suggests that this sarcoplasmic reticulum (SR) Ca2+ ATPase is homodimeric.
Crystal structures have shown that the conserved TGES loop of the Ca2+-ATPase is isolated in the Ca2E1 state but becomes inserted in the catalytic site in E2 states. Anthonisen et al. (2006) characterized the kinetics of the partial reaction steps of the transport cycle and the binding of the phosphoryl analogs BeF, AlF, MgF, and vanadate in mutants with alterations to conserved TGES loop residues. The data provide functional evidence supporting a role of Glu183 in activating the water molecule involved in the E2P → E2 dephosphorylation and suggest a direct participation of the side chains of the TGES loop in the control and facilitation of the insertion of the loop in the catalytic site. The interactions of the TGES loop furthermore seem to facilitate its disengagement from the catalytic site during the E2 → Ca2E1 transition.
Crystal Structures of Calcium ATPase are available in RCSB and include: , , , , among others.
P2B ATPases (calcium pumps)
P2B ATPases (or Type IIB ATPases) are Ca2+ ATPases that transport Ca2+. These pumps have a single Ca2+ ion binding site and are regulated by binding of calmodulin to autoinhibitory built-in domains situated at either the carboxy-terminal (animals) or amino-terminal (plants) end of the pump protein. In the cell, they are situated in the plasma membrane (animals and plants) and the internal membranes (plants). The plasma membrane Ca2+-ATPase (PMCA) of animals is a P2B ATPase. (TC# 3.A.3.2)
P2C ATPases (sodium/potassium and proton/potassium pumps)
P2C ATPases (or Type IIC) include the closely related Na+/K+ and H+/K+ ATPases from animal cells. (TC# 3.A.3.1)
The X-ray crystal structure at 3.5 Å resolution of the pig renal Na+/K+-ATPase has been determined with two rubidium ions bound in an occluded state in the transmembrane part of the α-subunit. Several of the residues forming the cavity for rubidium/potassium occlusion in the Na+/K+-ATPase are homologous to those binding calcium in the Ca2+-ATPase of the sarco(endo)plasmic reticulum. The carboxy terminus of the α-subunit is contained within a pocket between transmembrane helices and seems to be a novel regulatory element controlling sodium affinity, possibly influenced by the membrane potential.
Crystal Structures are available in RCSB and include: , , , , among others.
P2D ATPases (sodium pumps)
P2D ATPases (or Type IID) include a small number of Na+ (and K+) exporting ATPases found in fungi and mosses. (Fungal K+ transporters; TC# 3.A.3.9)
P3 ATPases
P3 ATPases (or Type III ATPases) are split into two groups.
P3A ATPases (proton pumps)
P3A ATPases (or Type IIIA) contain the plasma membrane H+-ATPases from prokaryotes, protists, plants and fungi.
Plasma membrane H+-ATPase is best characterized in plants and yeast. It maintains the level of intracellular pH and transmembrane potential. Ten transmembrane helices and three cytoplasmic domains define the functional unit of ATP-coupled proton transport across the plasma membrane, and the structure is locked in a functional state not previously observed in P-type ATPases. The transmembrane domain reveals a large cavity, which is likely to be filled with water, located near the middle of the membrane plane where it is lined by conserved hydrophilic and charged residues. Proton transport against a high membrane potential is readily explained by this structural arrangement.
P3B ATPases (magnesium pumps)
P3B ATPases (or Type IIIB) are presumed Mg2+-ATPases found in eubacteria and plants. Fungal H+ transporters (TC# 3.A.3.3) and Mg2+ (TC# 3.A.3.4)
P4 ATPases (phospholipid flippases)
P4 ATPases (or Type IV ATPases) are flippases involved in the transport of phospholipids, such as phosphatidylserine, phosphatidylcholine and phosphatidylethanolamine.
P5 ATPases
P5 ATPases (or Type V ATPases) have unknown specificity. This large group is found only in eukaryotes and is further divided into two groups.
P5A ATPases
P5A ATPases (or Type VA) are involved in regulation of homeostasis in the endoplasmic reticulum.
P5B ATPases
P5B ATPases (or Type VB) are found in the lysosomal membrane of animals. Mutations in these pumps are linked to a variety of neurological diseases.
Further phylogenetic classification
In addition to the subfamilies of P-type ATPases listed above, several prokaryotic families of unknown function have been identified. The Transporter Classification Database provides a representative list of members of the P-ATPase superfamily, which as of early 2016 consisted of 20 families. Members of the P-ATPase superfamily are found in bacteria, archaea and eukaryotes. Clustering on the phylogenetic tree is usually in accordance with specificity for the transported ion(s).
In eukaryotes, they are present in the plasma membranes or endoplasmic reticular membranes. In prokaryotes, they are localized to the cytoplasmic membranes.
P-type ATPases from 26 eukaryotic species were analyzed later.
Chan et al. (2010) conducted an equivalent but more extensive analysis of the P-type ATPase superfamily in prokaryotes and compared them with those from eukaryotes. While some families are represented in both types of organisms, others are found in only one or the other type. The primary functions of prokaryotic P-type ATPases appear to be protection from environmental stress conditions. Only about half of the P-type ATPase families are functionally characterized.
Horizontal gene transfer
Many P-type ATPase families are found exclusively in prokaryotes (e.g. Kdp-type K+ uptake ATPases (type III) and all prokaryotic functionally uncharacterized P-type ATPase (FUPA) families), while others are restricted to eukaryotes (e.g. phospholipid flippases and all 13 eukaryotic FUPA families). Horizontal gene transfer has occurred frequently among bacteria and archaea, which have similar distributions of these enzymes, but rarely between most eukaryotic kingdoms, and even more rarely between eukaryotes and prokaryotes. In some bacterial phyla (e.g. Bacteroidota and Fusobacteriota), ATPase gene gain and loss as well as horizontal transfer occurred seldom in contrast to most other bacterial phyla. Some families (i.e., Kdp-type ATPases) underwent far less horizontal gene transfer than other prokaryotic families, possibly due to their multisubunit characteristics. Functional motifs are better conserved across family lines than across organismal lines, and these motifs can be family specific, facilitating functional predictions. In some cases, gene fusion events created P-type ATPases covalently linked to regulatory catalytic enzymes. In one family (FUPA Family 24), a type I ATPase gene (N-terminal) is fused to a type II ATPase gene (C-terminal) with retention of function only for the latter. Genome minimalization led to preferential loss of P-type ATPase genes. Chan et al. (2010) suggested that in prokaryotes and some unicellular eukaryotes, the primary function of P-type ATPases is protection from extreme environmental stress conditions. The classification of P-type ATPases of unknown function into phylogenetic families provides guides for future molecular biological studies.
Human genes
Human genes encoding P-type ATPases or P-type ATPase-like proteins include:
P1B: Cu++ ATPase: ATP7A, ATP7B
P2A: SERCA Ca2+ ATPase: ATP2A1, ATP2A2, ATP2A3
P2A: secretory pathway Ca2+-ATPase: ATP2C1, ATP2C2
P2B: Ca2+ ATPase: ATP2B1, ATP2B2, ATP2B3, ATP2B4
P2C: Na+/K+ ATPase: ATP1A1, ATP1A2, ATP1A3, ATP1A4, ATP1B1, ATP1B2, ATP1B3, ATP1B4
P2C: H+/K+ ATPase, gastric: ATP4A;
P2C: H+/K+ ATPase, nongastric: ATP12A
P4: Flippase: ATP8A1, ATP8B1, ATP8B2, ATP8B3, ATP8B4, ATP9A, ATP9B, ATP10A, ATP10B, ATP10D, ATP11A, ATP11B, ATP11C
P5: ATP13A1, ATP13A2, ATP13A3, ATP13A4, ATP13A5
See also
H+/ K+-ATPase
Na+/ K+-ATPase
Plasma membrane H+-ATPase
Sarco/endoplasmatic reticulum Ca2+-ATPase
V-ATPase
References
EC 3.6.3
Integral membrane proteins
Transport proteins
Physiology | P-type ATPase | [
"Biology"
] | 6,533 | [
"Physiology"
] |
14,655,845 | https://en.wikipedia.org/wiki/Complex-base%20system | In arithmetic, a complex-base system is a positional numeral system whose radix is an imaginary (proposed by Donald Knuth in 1955) or complex number (proposed by S. Khmelnik in 1964 and Walter F. Penney in 1965).
In general
Let be an integral domain , and the (Archimedean) absolute value on it.
A number in a positional number system is represented as an expansion
where
is the radix (or base) with ,
is the exponent (position or place),
are digits from the finite set of digits , usually with
The cardinality is called the level of decomposition.
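For orientation, such an expansion takes the usual positional form (the symbols x, ρ, ν and dν below are chosen for this sketch to match the roles just listed; they are illustrative rather than the original notation):

```latex
x \;=\; \pm \sum_{\nu} d_{\nu}\, \rho^{\nu}, \qquad d_{\nu} \in D, \quad |\rho| > 1,
```

with finitely many digits at non-negative positions and, for non-integer values, possibly infinitely many at negative positions (after the radix point).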
A positional number system or coding system is a pair
with radix and set of digits , and we write the standard set of digits with digits as
Coding systems with the following features are desirable:
Every number in , e. g. the integers , the Gaussian integers or the integers , is uniquely representable as a finite code, possibly with a sign ±.
Every number in the field of fractions , which possibly is completed for the metric given by yielding or , is representable as an infinite series which converges under for , and the measure of the set of numbers with more than one representation is 0. The latter requires that the set be minimal, i.e. for real numbers and for complex numbers.
In the real numbers
In this notation, our standard decimal coding scheme is denoted by (10, {0, 1, ..., 9}),
the standard binary system is (2, {0, 1}),
the negabinary system is (−2, {0, 1}),
and the balanced ternary system is (3, {−1, 0, 1}).
All these coding systems have the mentioned features for and , and the last two do not require a sign.
In the complex numbers
Well-known positional number systems for the complex numbers include the following ( being the imaginary unit):
, e.g. and
(2i, {0, 1, 2, 3}), the quater-imaginary base, proposed by Donald Knuth in 1955.
and
(see also the section Base −1 ± i below).
, where , and is a positive integer that can take multiple values at a given . For and this is the system
.
, where the set consists of complex numbers , and numbers , e.g.
, where
Binary systems
Binary coding systems of complex numbers, i.e. systems with the digits , are of practical interest.
Listed below are some coding systems (all are special cases of the systems above) and resp. codes for the (decimal) numbers .
The standard binary (which requires a sign, first line) and the "negabinary" systems (second line) are also listed for comparison. They do not have a genuine expansion for .
As in all positional number systems with an Archimedean absolute value, there are some numbers with multiple representations. Examples of such numbers are shown in the right column of the table. All of them are repeating fractions with the repetend marked by a horizontal line above it.
If the set of digits is minimal, the set of such numbers has a measure of 0. This is the case with all the mentioned coding systems.
The almost binary quater-imaginary system is listed in the bottom line for comparison purposes. There, real and imaginary part interleave each other.
Base −1 ± i
Of particular interest are the quater-imaginary base (base 2i) and the base −1 ± i systems discussed below, both of which can be used to finitely represent the Gaussian integers without sign.
Base −1 ± i, using the digits 0 and 1, was proposed by S. Khmelnik in 1964 and Walter F. Penney in 1965.
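To illustrate how a Gaussian integer acquires such a finite, sign-free code, here is a small sketch (the function name and digit-selection rule are this illustration's, not taken from the original text): the digit d ∈ {0, 1} is chosen with the same parity as a + b, so that (a − d) + bi is exactly divisible by −1 + i, and the quotient is then processed in the same way.

```python
def to_base_minus1_plus_i(a: int, b: int) -> str:
    """Digits (most significant first) of the Gaussian integer a + bi in base -1 + i."""
    if a == 0 and b == 0:
        return "0"
    digits = []
    while a != 0 or b != 0:
        d = (a + b) & 1            # choose d in {0, 1} so that (a - d) + bi is divisible by -1 + i
        a -= d
        # exact Gaussian-integer division: (a + bi) / (-1 + i) = (b - a)/2 + (-(a + b)/2) i
        a, b = (b - a) // 2, -(a + b) // 2
        digits.append(str(d))
    return "".join(reversed(digits))

print(to_base_minus1_plus_i(-1, 0))   # '11101'
```

For example, to_base_minus1_plus_i(-1, 0) returns '11101'; indeed (−1 + i)⁴ + (−1 + i)³ + (−1 + i)² + 1 = −4 + (2 + 2i) + (−2i) + 1 = −1.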
Connection to the twindragon
The rounding region of an integer – i.e., a set of complex (non-integer) numbers that share the integer part of their representation in this system – has in the complex plane a fractal shape: the twindragon (see figure). This set is, by definition, all points that can be written as with . can be decomposed into 16 pieces congruent to . Notice that if is rotated counterclockwise by 135°, we obtain two adjacent sets congruent to , because . The rectangle in the center intersects the coordinate axes counterclockwise at the following points: , , and , and . Thus, contains all complex numbers with absolute value ≤ .
As a consequence, there is an injection of the complex rectangle
into the interval of real numbers by mapping
with .
Furthermore, there are the two mappings
and
both surjective, which give rise to a surjective (thus space-filling) mapping
which, however, is not continuous and thus not a space-filling curve. But a very close relative, the Davis-Knuth dragon, is continuous and a space-filling curve.
See also
Dragon curve
References
External links
"Number Systems Using a Complex Base" by Jarek Duda, the Wolfram Demonstrations Project
"The Boundary of Periodic Iterated Function Systems" by Jarek Duda, the Wolfram Demonstrations Project
"Number Systems in 3D" by Jarek Duda, the Wolfram Demonstrations Project
"Large introduction to complex base numeral systems" with Mathematica sources by Jarek Duda
Non-standard positional numeral systems
Fractals
Ring theory | Complex-base system | [
"Mathematics"
] | 1,079 | [
"Functions and mappings",
"Mathematical analysis",
"Ring theory",
"Mathematical objects",
"Fractals",
"Fields of abstract algebra",
"Mathematical relations",
"Complex numbers",
"Numbers"
] |
14,655,933 | https://en.wikipedia.org/wiki/Max%20Born%20Medal%20and%20Prize | The Max Born Medal and Prize is a scientific prize awarded yearly by the German Physical Society (DPG) and the British Institute of Physics (IOP) in memory of the German-Jewish physicist Max Born, who was instrumental in the development of quantum mechanics. It was established in 1972 and first awarded in 1973.
The terms of the award are that it is "to be presented for outstanding contributions to physics". The award goes to physicists based in Germany and in the UK or Ireland in alternate years. The prize is accompanied by a silver medal "about 6 cm in diameter and 0.5 cm thick. One face carries a profile of Max Born and his name and dates. The other face carries the equation pq – qp = h/2πi and the full names of IOP and DPG. The recipient's full name and year of award is engraved around the rim." The medal is accompanied by €3000.
List of recipients
The following have received this award:
1973 Roger Cowley
1974 Walter Greiner
1975 Trevor Moss
1976 Hermann Haken
1977 Walter Spear
1978 Herbert Walther
1979 John Taylor
1980
1981 Cyril Domb
1982 Wolfgang Kaiser
1983 Andrew Keller
1984
1985 George Isaak
1986
1987 Cyril Hilsum
1988 Peter Armbruster
1989 Robert H. Williams
1990
1991 Gilbert Lonzarich
1992
1993 David C. Hanna
1994 Wolfgang Demtröder
1995 Michael H Key
1996 Jürgen Mlynek
1997 Robin Marshall
1998 Gerhard Abstreiter
1999 John Dainton
2000
2001 Volker Heine
2002
2003 Brian Foster
2004 Matthias Scheffler
2005
2006
2007 Alan D. Martin
2008 Hagen Kleinert
2009 Robin Devenish
2010 Simon White
2011 David Philip Woodruff
2012 Martin Bodo Plenio
2013
2014
2015 Andrea Cavalleri
2016 Christian Pfleiderer
2017 Carlos Silvestre Frenk
2018 Ángel Rubio
2019 Michael Coey
2020 Anna Köhler
2021 Hiranya Peiris
2022 Claudia Felser
2023
See also
Institute of Physics Awards
List of physics awards
List of awards named after people
References
Awards established in 1973
Awards of the Institute of Physics
Awards of the German Physical Society
Physics awards
Max Born | Max Born Medal and Prize | [
"Technology"
] | 436 | [
"Science and technology awards",
"Physics awards"
] |
14,656,037 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282005%29 | This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2005. This is part of the Wikipedia summary of Oil Megaprojects—see that page for further details. 2005 saw 23 projects come on stream with an aggregate capacity of when full production was reached (which may not have been in 2005).
Quick Links to Other Years
Detailed Project Table for 2005
This table is available in csv format here (updated daily).
References
Oil megaprojects
Oil fields
Proposed energy projects
2005 in technology | Oil megaprojects (2005) | [
"Engineering"
] | 115 | [
"Oil megaprojects",
"Megaprojects"
] |
14,656,069 | https://en.wikipedia.org/wiki/Crown%20sprouting | Crown sprouting is the ability of a plant to regenerate its shoot system after destruction (usually by fire) by activating dormant vegetative structures to produce regrowth from the root crown (the junction between the root and shoot portions of a plant). These dormant structures take the form of lignotubers or basal epicormic buds. Plant species that can accomplish crown sprouting are called crown resprouters (distinguishing them from stem or trunk resprouters) and, like them, are characteristic of fire-prone habitats such as chaparral.
In contrast to plant fire survival strategies that decrease the flammability of the plant or that require heat to germinate, crown sprouting allows for the total destruction of the above-ground growth. Crown sprouting plants typically have extensive root systems in which they store nutrients, allowing them to survive fires and sprout afterwards. Early researchers suggested that crown sprouting species might lack genetic diversity; however, research on Gondwanan shrubland suggests that crown sprouting species have genetic diversity similar to that of seed sprouters. Some genera, such as Arctostaphylos and Ceanothus, have both resprouting and non-resprouting species, both adapted to fire.
California Buckeye, Aesculus californica, is an example of a western United States tree which can regenerate from its root crown after a fire event, but can also regenerate by seed.
See also
Fire ecology
Lignotuber
Notes
References
Arthur W. Sampson and Arnold M. Schultz, Control of brush and undesirable trees
William J. Bond and Jeremy J. Midgley (2003) The Evolutionary Ecology of Sprouting in Woody Plants, Int. J Plant Sci. 164(S3):S103–S114. 2003, University of Chicago.
C. Michael Hogan. 2008. Aesculus californica, Globaltwitcher.com, ed. N. Stromberg
Wildfire ecology
Plant morphology
Plant physiology
Botanical nomenclature | Crown sprouting | [
"Biology"
] | 426 | [
"Plant physiology",
"Botanical nomenclature",
"Plants",
"Plant morphology",
"Botanical terminology",
"Biological nomenclature"
] |
14,656,113 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282004%29 | This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2004. This is part of the Wikipedia summary of Oil Megaprojects—see that page for further details. 2004 saw 24 projects come on stream with an aggregate capacity of when full production was reached (which may not have been in 2004).
Quick Links to Other Years
Detailed Project Table for 2004
See also the 2004 world oil market chronology
This table is available in csv format here (updated daily).
References
Oil megaprojects
Oil fields
Proposed energy projects
2004 in technology | Oil megaprojects (2004) | [
"Engineering"
] | 123 | [
"Oil megaprojects",
"Megaprojects"
] |
14,656,281 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282006%29 | This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2006. This is part of the Wikipedia summary of Oil Megaprojects. 2006 saw 30 projects come on stream with an aggregate capacity of when full production was reached (which may not have been in 2006); this list does not include any of the enormous projects developed in the United States, which dwarf these by approximately 5,000 BOE.
Quick links to other years
Detailed project table for 2006
Terminology
Year startup: year of first oil. Specific date if available.
Operator: company undertaking the project.
Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR).
Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil)
Grade: oil quality (light, medium, heavy, sour) or API gravity
2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb).
GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR.
Peak year: year of the production plateau/peak.
Peak: maximum production expected (thousand barrels/day).
Discovery: year of discovery.
Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID.
Notes: comments about the project (footnotes).
Ref.: list of sources.
References
Oil fields
Oil megaprojects
2006 in technology | Oil megaprojects (2006) | [
"Engineering"
] | 562 | [
"Oil megaprojects",
"Megaprojects"
] |
14,657,447 | https://en.wikipedia.org/wiki/Spectrochemistry | Spectrochemistry is the application of spectroscopy in several fields of chemistry. It includes the analysis of spectra in chemical terms and the use of spectra to derive the structure of chemical compounds, as well as to qualitatively and quantitatively analyze their presence in a sample. It is a method of chemical analysis that relies on the measurement of the wavelengths and intensity of electromagnetic radiation.
History
It was not until 1666 that Isaac Newton showed that the white light from the Sun could be dispersed into a continuous series of colors; Newton introduced the term spectrum to describe this phenomenon. He used a small aperture to define a beam of light, a lens to collimate it, a glass prism to disperse it, and a screen to display the resulting spectrum. Newton's analysis of light was the beginning of the science of spectroscopy. It later became clear that the Sun's radiation has components outside the visible portion of the spectrum: in 1800 William Herschel showed that the Sun's radiation extends into the infrared, and in 1801 Johann Wilhelm Ritter made a similar observation in the ultraviolet. Joseph von Fraunhofer extended Newton's discovery by observing that the Sun's spectrum, when sufficiently dispersed, is crossed by fine dark lines now known as Fraunhofer lines. Fraunhofer also developed the diffraction grating, which disperses light in much the same way as a glass prism but with some advantages: because the grating uses interference to produce diffraction, it allows the wavelengths of the diffracted beams to be measured directly. By extending Thomas Young's demonstration that a light beam passing through a slit emerges in a pattern of light and dark fringes, Fraunhofer was able to measure the wavelengths of spectral lines directly. Despite these achievements, however, Fraunhofer was unable to explain the origin of the spectral lines he observed. It was not until 33 years after his death that Gustav Kirchhoff established that each element and compound has its own unique spectrum, and that by studying the spectrum of an unknown source one could determine its chemical composition. With these advances, spectroscopy became a truly scientific method for analyzing the structure of chemical compounds. By recognizing that each atom and molecule has its own spectrum, Kirchhoff and Robert Bunsen established spectroscopy as a scientific tool for probing atomic and molecular structure and founded the field of spectrochemical analysis for determining the composition of materials.
IR Spectra Tables & Charts
IR Spectrum Table by Frequency
IR Spectra Table by Compound Class
To use an IR spectrum table, first find the frequency or the compound in the first column, depending on which type of chart is being used, then read off the corresponding values for absorption, appearance and other attributes. Absorption values are usually given in cm−1.
Note: not all frequencies have a related compound.
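As a toy illustration of this lookup, one might encode a few commonly quoted bands and query them by frequency; the values below are a small hand-picked subset given for illustration only, not a complete or authoritative chart.

```python
# A tiny stand-in for an "IR spectrum table by compound class": each entry maps a
# bond/compound class to a typical absorption range (in cm^-1) and its appearance.
IR_TABLE = {
    "O-H (alcohol)":  {"range_cm1": (3200, 3550), "appearance": "strong, broad"},
    "C=O (carbonyl)": {"range_cm1": (1670, 1780), "appearance": "strong"},
    "C-H (alkane)":   {"range_cm1": (2850, 2960), "appearance": "medium"},
}

def classes_matching(frequency_cm1: float):
    """Return the compound classes whose typical absorption range covers a frequency."""
    return [name for name, row in IR_TABLE.items()
            if row["range_cm1"][0] <= frequency_cm1 <= row["range_cm1"][1]]

print(classes_matching(1715))   # ['C=O (carbonyl)']
```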
Applications
Evaluation of Dual-Spectrum IR Spectrogram System on Invasive Ductal Carcinoma (IDC) Breast Cancer
Invasive ductal carcinoma (IDC) is one of the most common types of breast cancer, accounting for about 8 out of 10 of all invasive breast cancers. According to the American Cancer Society, more than 180,000 women in the United States find out that they have breast cancer each year, and most are diagnosed with this specific type of cancer. While it is essential to detect breast cancer early to reduce the death rate, a tumor may already contain more than 10,000,000 cells by the time it can be observed by X-ray mammography. The IR spectrogram approach proposed by Szu et al. appears more promising, potentially detecting breast cancer cells several months ahead of a mammogram. Clinical tests were carried out with the approval of the Institutional Review Board of National Taiwan University Hospital: from August 2007 to June 2008, 35 patients aged 30-66, with an average age of 49, were enrolled in the project. The results established that a success rate of about 63% could be achieved with the cross-sectional data, and the authors concluded that breast cancers may be detected more accurately by cross-referencing S1 maps of multiple three-points.
Molecular Spectroscopic Methods for Elucidation of Lignin Structure
Lignin in plant cells is a complex amorphous polymer biosynthesized from three aromatic alcohols, namely p-coumaryl, coniferyl, and sinapyl alcohols. Lignin is a highly branched polymer and accounts for 15-30% by weight of lignocellulosic biomass (LCBM); the structure of lignin varies significantly according to the type of LCBM, and its composition depends on the degradation process. The biosynthesis process mainly consists of radical coupling reactions and generates a particular lignin polymer in each plant species. Because of this complex structure, various molecular spectroscopic methods have been applied to resolve the aromatic units and the different interunit linkages in lignin from distinct plant species.
References
Spectroscopy | Spectrochemistry | [
"Physics",
"Chemistry"
] | 1,014 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
14,658,754 | https://en.wikipedia.org/wiki/Salmon%20Site | The Salmon Site is a tract of land in Lamar County, Mississippi, near Baxterville. The tract is located over a geological formation known as the Tatum Salt Dome and is the location of the only nuclear weapons test detonations known to have been performed in the eastern United States.
Two underground detonations, a joint effort of the United States Atomic Energy Commission and the United States Department of Defense, took place under the designation of Project Dribble, part of a larger program known as Vela Uniform (aimed at assessing remote detonation detection capabilities). The first test, known as the Salmon Event, took place on October 22, 1964. It involved detonation of a 5.3 kiloton device at a depth of . The second test, known as the Sterling Event, took place on December 3, 1966 and involved detonation of a 380-ton device suspended in the cavity left by the previous test. Further non-nuclear explosive tests were later conducted in the remaining cavity as part of the related Project Miracle Play.
In October 2006, responsibility for the site was transferred to the US Department of Energy's Office of Legacy Management. A plaque mounted on a short stone pillar marks the site.
On Wednesday, December 15, 2010, the United States Department of Energy transferred the Salmon Site back to the state of Mississippi. Mississippi Secretary of State Delbert Hosemann said in a press release that the majority of the will be used for timber but an undetermined portion will be open for public access. Access to the Salmon Site had previously been restricted and monitored by the federal government since the tests were first conducted in 1964 and 1966.
A granite monument surrounded by test wells marks the site of the nuclear bomb tests, in a clearing surrounded by a Mississippi state timber preserve.
The US government gave out more than $5 million as compensation for medical problems related to the Salmon Site.
See also
Operation Whetstone
References
American nuclear weapons testing
Nuclear test sites
American nuclear test sites
Salt domes | Salmon Site | [
"Chemistry"
] | 402 | [
"Salt domes",
"Salts"
] |
14,658,821 | https://en.wikipedia.org/wiki/Avotakka | Avotakka is a monthly Finnish interior design magazine published in Helsinki, Finland.
History and profile
Avotakka was first published in December 1967. In 1971 it merged with an older Finnish design magazine Kaunis Koti [Finnish: Beautiful Home], which had first been published in 1948. At the time, the merger represented the combination of a more middle- and professional class magazine (Kaunis Koti) with a more populist magazine (Avotakka). The owner and publisher is A-lehdet Oy and the headquarters of the magazine is in Helsinki. It is published on a monthly basis.
As of 2011 the editor-in-chief of the magazine was Soili Ukkola.
Circulation
In 2005 the annual circulation of Avotakka was 88,193 copies. Its circulation was 85,000 copies in 2007. In 2010 the magazine had a circulation of 85,104 copies. The 2011 circulation of the magazine was 85,431 copies. It fell to 82,245 copies in 2012 and to 71,911 copies in 2013.
See also
List of Finnish magazines
References
External links
Official website
1967 establishments in Finland
Design magazines
Finnish-language magazines
Magazines established in 1967
Magazines published in Helsinki
Monthly magazines published in Finland | Avotakka | [
"Engineering"
] | 252 | [
"Design magazines",
"Design"
] |
14,659,295 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282007%29 | This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2007. This is part of the Wikipedia summary of Oil Megaprojects.
Quick links to other years
Detailed project table for 2007
Terminology
Year Startup: year of first oil; specific date if available.
Operator: company undertaking the project.
Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR).
Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil)
Grade: oil quality (light, medium, heavy, sour) or API gravity
2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb).
GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR.
Peak year: year of the production plateau/peak.
Peak: maximum production expected (thousand barrels/day).
Discovery: year of discovery.
Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID.
Notes: comments about the project (footnotes).
Ref: list of sources.
References
2007
Oil fields
Proposed energy projects
Projects established in 2007
2007 in the environment
2007 in technology | Oil megaprojects (2007) | [
"Engineering"
] | 294 | [
"Oil megaprojects",
"Megaprojects"
] |
14,659,436 | https://en.wikipedia.org/wiki/HBE1 | Hemoglobin subunit epsilon is a protein that in humans is encoded by the HBE1 gene.
Function
The epsilon globin gene (HBE) is normally expressed in the embryonic yolk sac: two epsilon chains together with two zeta chains (an alpha-like globin) constitute the embryonic hemoglobin Hb Gower I; two epsilon chains together with two alpha chains form the embryonic Hb Gower II. Both of these embryonic hemoglobins are normally supplanted by fetal, and later, adult hemoglobin. The five beta-like globin genes are found within a 45 kb cluster on chromosome 11 in the following order: 5' - epsilon – gamma-G – gamma-A – delta – beta - 3'.
See also
Hemoglobin
Human β-globin locus
Hemoglobin alpha chains (two genes, same sequence):
HBA1
HBA2
References
Further reading
Hemoglobins | HBE1 | [
"Chemistry"
] | 207 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,659,505 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282008%29 | This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2008. This is part of the Wikipedia summary of Oil Megaprojects.
Quick links to other years
Detailed project table for 2008
Terminology
Year Startup: year of first oil; specific date if available.
Operator: company undertaking the project.
Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR).
Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil)
Grade: oil quality (light, medium, heavy, sour) or API gravity
2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb).
GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR.
Peak year: year of the production plateau/peak.
Peak: maximum production expected (thousand barrels/day).
Discovery: year of discovery.
Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID.
Notes: comments about the project (footnotes).
Ref: list of sources.
References
Oil fields
Proposed energy projects
Oil megaprojects
2008 in technology | Oil megaprojects (2008) | [
"Engineering"
] | 290 | [
"Oil megaprojects",
"Megaprojects"
] |
14,659,532 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282009%29 | This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2009. This is part of the Wikipedia summary of oil megaprojects.
Quick links to other years
Detailed project table for 2009
Terminology
Year startup: year of first oil, specific date if available
Operator: company undertaking the project
Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR)
Type: liquid category (i.e. natural gas liquids, natural gas condensate, crude oil)
Grade: oil quality (light, medium, heavy, sour) or API gravity
2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb)
GOR: the ratio of produced gas to produced oil, commonly abbreviated GOR
Peak year: year of the production plateau/peak
Peak: maximum production expected (thousand barrels/day)
Discovery: year of discovery
Capital investment: expected capital cost; FID (Final Investment Decision). If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID.
Notes: comments and sources
References
2009
Oil fields
Proposed energy projects
Projects established in 2009
2009 in the environment
2009 in technology | Oil megaprojects (2009) | [
"Engineering"
] | 275 | [
"Oil megaprojects",
"Megaprojects"
] |
14,659,610 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282010%29 | Following is a list of Oil megaprojects in the year 2010, projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel. This is part of the Wikipedia summary of Oil Megaprojects.
Quick links to other years
Detailed list of projects for 2010
Terminology
Year Startup: year of first oil; specific date if available.
Operator: company undertaking the project.
Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR).
Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil)
Grade: oil quality (light, medium, heavy, sour) or API gravity
2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb).
GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR.
Peak year: year of the production plateau/peak.
Peak: maximum production expected (thousand barrels/day).
Discovery: year of discovery.
Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID.
Notes: comments about the project (footnotes).
Ref: list of sources.
References
2010
Oil fields
Proposed energy projects
Projects established in 2010
2010 in the environment
2010 in technology | Oil megaprojects (2010) | [
"Engineering"
] | 304 | [
"Oil megaprojects",
"Megaprojects"
] |
14,659,630 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282011%29 | Following is a list of Oil megaprojects in the year 2011, projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel. This is part of the Wikipedia summary of Oil Megaprojects.
Quick links to other years
Detailed list of projects for 2011
Terminology
Year Startup: year of first oil; specific date if available.
Operator: company undertaking the project.
Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR).
Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil)
Grade: oil quality (light, medium, heavy, sour) or API gravity
2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb).
GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR.
Peak year: year of the production plateau/peak.
Peak: maximum production expected (thousand barrels/day).
Discovery: year of discovery.
Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID.
Notes: comments about the project (footnotes).
Ref: list of sources.
References
2011
Oil fields
Proposed energy projects
Projects established in 2011
2011 in the environment
2011 in technology | Oil megaprojects (2011) | [
"Engineering"
] | 304 | [
"Oil megaprojects",
"Megaprojects"
] |
14,659,653 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282012%29 | This page summarizes projects that propose to bring more than of new liquid fuel capacity to market, with the first production of fuel beginning in 2012. This is part of the Wikipedia summary of oil megaprojects.
Quick links to other years
Detailed list of projects for 2012
Terminology
Year startup: year of first oil; specific date if available
Operator: company undertaking the project
Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR)
Type: liquid category (i.e. natural gas liquids, natural gas condensate, crude oil)
Grade: oil quality (light, medium, heavy, sour) or API gravity
2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb)
GOR: ratio of produced gas to produced oil, commonly abbreviated GOR
Peak year: year of the production plateau/peak
Peak: maximum production expected (thousand barrels/day)
Discovery: year of discovery
Capital investment: expected capital cost; FID (Final Investment Decision); if no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID.
Notes: comments about the project (footnotes)
Ref.: sources
References
2012
Oil fields
Proposed energy projects
Projects established in 2012
2012 in the environment
2012 in technology | Oil megaprojects (2012) | [
"Engineering"
] | 285 | [
"Oil megaprojects",
"Megaprojects"
] |
14,660,112 | https://en.wikipedia.org/wiki/Journey%20planner | A journey planner, trip planner, or route planner is a specialized search engine used to find an optimal means of travelling between two or more given locations, sometimes using more than one transport mode. Searches may be optimized on different criteria, for example fastest, shortest, fewest changes, cheapest. They may be constrained, for example, to leave or arrive at a certain time, to avoid certain waypoints, etc. A single journey may use a sequence of several modes of transport, meaning the system may know about public transport services as well as transport networks for private transportation. Trip planning or journey planning is sometimes distinguished from route planning, which is typically thought of as using private modes of transportation such as cycling, driving, or walking, normally using a single mode at a time. Trip or journey planning, in contrast, would make use of at least one public transport mode which operates according to published schedules; given that public transport services only depart at specific times (unlike private transport which may leave at any time), an algorithm must therefore not only find a path to a destination, but seek to optimize it so as to minimize the waiting time incurred for each leg. In European Standards such as Transmodel, trip planning is used specifically to describe the planning of a route for a passenger, to avoid confusion with the completely separate process of planning the operational journeys to be made by public transport vehicles on which such trips are made.
Trip planners have been widely used in the travel industry since the 1970s, by booking agents. The growth of the internet, the proliferation of geospatial data, and the development of information technologies generally has led to the rapid development of many self-service app or browser-based, on-line intermodal trip planners.
A trip planner may be used in conjunction with ticketing and reservation systems.
History
First-generation systems
In the late 1980s and early 1990s, some national railway operators and major metropolitan transit authorities developed their own specialized trip planners to support their customer enquiry services. These typically ran on mainframes and were accessed internally with terminals by their own staff in customer information centers, call centers, and at ticket counters in order to answer customer queries. The data came from the timetable databases used to publish printed timetables and to manage operations, and some included simple route planning capabilities. The HAFAS timetable information system, developed in 1989 by the German company Hacon (now part of Siemens AG), is an example of such a system and was adopted by Swiss Federal Railways (SBB) and Deutsche Bahn in 1989. The "Routes" system of London Transport (now TfL), in use before the development of the on-line planner and covering all public transport services in London, was another example of a mainframe OLTP journey planner; it included a large database of tourist attractions and popular destinations in London.
Second-generation systems
In the 1990s, with the advent of personal computers with sufficient memory and processor power to undertake trip planning (which is relatively expensive computationally), systems were developed that could be installed and run on minicomputers and personal computers. The first digital public transport trip planner system for a microcomputer was developed by Eduard Tulp, an informatics student at Amsterdam University, on an Atari PC. He was hired by the Dutch Railways to build a digital trip planner for the train services. In 1990 the first digital trip planner for the Dutch Railways (on diskette) was sold to be installed on PCs for off-line consultation. The principles of his software program were published in a Dutch university paper in 1991. This was soon expanded to include all public transport in the Netherlands.
Another pioneer was Hans-Jakob Tobler in Switzerland. His product Finajour, which ran on PC DOS and MS-DOS, was the first electronic timetable for Switzerland. The first published version was sold for the timetable period 1989/1990. Other European countries soon followed with their own journey planners.
A further development of this trend was to deploy trip planners onto even smaller platforms such as mobile devices; a Windows CE version of HAFAS was launched in 1998, compressing the application and the entire railway timetable of Deutsche Bahn into six megabytes and running as a stand-alone application.
Early internet-based systems
The development of the internet allowed HTML-based user interfaces to be added to allow direct querying of trip planning systems by the general public. A test web interface for HAFAS was launched as Deutsche Bahn's official rail trip planner in 1995 and evolved over time into the main Deutsche Bahn website. In 2001 Transport for London launched the world's first large-scale multimodal trip planner for a world city, covering all of London's transport modes as well as rail routes to London; this used a trip planning engine supplied by Mentz GmbH of Munich after earlier attempts in the late 1990s to add a web interface to TfL's own mainframe internal trip planner failed to scale. Internet trip planners for major transport networks such as national railways and major cities must sustain very high query rates and so require software architectures optimized to sustain such traffic. The world's first mobile trip planner for a large metropolitan area, a WAP-based interface to the London trip planner using the Mentz engine, was launched in 2001 by London startup company Kizoom Ltd, which also launched the UK's first rail trip planner for the mobile internet in 2000, also as a WAP service, followed by an SMS service. Starting in 2000 the Traveline service provided all parts of the UK with regional multi-modal trip planning on bus, coach, and rail. A web-based trip planner for UK rail was launched by UK National Rail Enquiries in 2003.
Early public transport trip planners typically required a stop or station to be specified for the endpoints. Some also supported inputting the name of a tourist attraction or other popular destination places by keeping a table of the nearest stop to the destination. This was later extended with ability to add addresses or coordinates to offer true point to point planning.
Critical to the development of large-scale multi-modal trip planning in the late 1990s and early 2000s was the parallel development of standards for encoding stop and schedule data from many different operators, and the setting up of workflows to aggregate and distribute data on a regular basis. This is more challenging for modes such as bus and coach, where there tend to be a large number of small operators, than for rail, which typically involves only a few large operators who already have exchange formats and processes in place in order to operate their networks. In Europe, which has a dense and sophisticated public transport network, the CEN Transmodel Reference Model for Public Transport was developed to support the process of creating and harmonizing standard formats both nationally and internationally.
Distributed journey planners
In the 2000s, several major projects developed distributed trip planning architectures to allow the federation of separate trip planners, each covering a specific area, to create a composite engine covering a very large area.
The UK Transport Direct Portal, launched in 2004 by the UK Department for Transport, used the JourneyWeb protocol to link eight separate regional engines covering data from 140 local transport authorities in England, Scotland and Wales into a unified engine. The portal integrated both road and public transport planners, allowing a comparison between modes of travel times, footprint, etc.
The German Delfi project developed a distributed trip planning architecture used to federate the German regional planners, launched as a prototype in 2004. The interface was further developed by the German TRIAS project and led to the development of a CEN standard, "Open API for distributed journey planning" (CEN/TS 17118:2017), published in 2017 to provide a standard interface to trip planners, incorporating features from JourneyWeb and EU-Spirit and making use of the SIRI protocol framework and the Transmodel reference model.
The European EU-Spirit project developed long-distance trip planning between a number of different European regions.
Second-generation internet systems
Public transport trip planners proved to be immensely popular (for example, by 2005 Deutsche Bahn was already sustaining 2.8 million requests per day), and journey planning sites constitute some of the most highly trafficked information sites in every country that has them. The ability to purchase tickets for the journeys found has further increased the utility and popularity of the sites; early implementations such as the UK's Trainline offered delivery of tickets by mail, and this has been complemented in most European countries by self-service print and mobile fulfillment methods. Internet trip planners now constitute a primary sales channel for most rail and air transport operators.
Google started to add trip planning capabilities to its product set with a version of Google Transit in 2005, covering trips in the Portland region, as described by the TriMet agency manager Bibiana McHugh. This led to the development of the General Transit Feed Specification (GTFS), a format for collecting transit data for use in trip planners that has been highly influential in developing an ecosystem of PT data feeds covering many different countries. The successful uptake of GTFS as an available output format by large operators in many countries has allowed Google to extend its trip planner coverage to many more regions around the world. The Google Transit trip planning capabilities were integrated into the Google Map product in 2012.
Further evolution of trip planning engines has seen the integration of real-time data so that trip plans for the immediate future take into account real-time delays and disruptions. UK National Rail Enquiries added real-time data to its rail trip planner in 2007. Also significant has been the integration of other types of data into the trip planning results, such as disruption notices, crowding levels, CO2 costs, etc. The trip planners of some major metropolitan cities, such as the Transport for London trip planner, have the ability to dynamically suspend individual stations and whole lines so that modified trip plans are produced during major disruptions that omit the unavailable parts of the network. Another development has been the addition of accessibility data and the ability for algorithms to optimize plans to take into account the requirements of specific disabilities such as wheelchair access.
For the London 2012 Olympics, an enhanced London trip planner was created that allowed the proposed trip results to be biased to manage available capacity across different routes, spreading traffic to less congested routes. Another innovation was the detailed modelling of all the access paths into and out of every Olympic venue (from PT stop to individual arena entrance), with predicted and actual queueing times for security checks and other delays factored into the recommended travel times.
An initiative to develop an open-source trip planner, OpenTripPlanner, was seeded by Portland, Oregon's transit agency TriMet in 2009 and developed with the participation of agencies and operators in the US and Europe; a full version 1.0, released in September 2016, makes it possible for smaller transit agencies and operators to provide trip planning without paying proprietary license fees.
Mode-specific considerations
Public transport routing
A public transport route planner is an intermodal journey planner, typically accessed via the web, that provides information about available public transport services. The application prompts a user to input an origin and a destination, and then uses algorithms to find a good route between the two on public transit services. Time of travel may be constrained to either time of departure or arrival, and other routing preferences may be specified as well.
An intermodal journey planner supports intermodal journeys, i.e. those using more than one mode of transport, such as cycling, rapid transit, bus, ferry, etc. Many route planners support door-to-door planning, while others only work between stops on the transport network, such as stations, airports or bus stops.
For public transport routing the trip planner is constrained by times of arrival or departure. It may also support different optimization criteria – for example, fastest route, fewest changes, most accessible. Optimization by price (cheapest, most flexible fare, etc.) is usually done by a separate algorithm or engine, though trip planners that can return fare prices for the trips they find may also offer sorting or filtering of results by price and product type. For long-distance rail and air trip planning, where price is a significant consideration, price-optimizing trip planners may suggest the cheapest dates to travel for customers who are flexible as to travel time.
Car routing
The planning of road legs is sometimes done by a separate subsystem within a journey planner, but may consider both single-mode trip calculations as well as intermodal scenarios (e.g. park and ride, kiss and ride, etc.). Typical optimizations for car routing are shortest route, fastest route, cheapest route, and routing with constraints for specific waypoints. The rise of e-mobility poses new challenges to route planning: sparse charging infrastructure, limited range, and long charging times have to be taken into account and offer room for optimization. Some advanced journey planners can take into account average journey times on road sections, or even real-time predicted average journey times on road sections.
Pedestrian routing
A journey planner will ideally provide detailed routing for pedestrian access to stops, stations, points of interest etc. This will include options to take into account accessibility requirements for different types of users, for example; 'no steps', 'wheelchair access', 'no lifts', etc.
Bicycle routing
Some journey planning systems can calculate bicycle routes, integrating all paths accessible by bicycle and often including additional information like topography, traffic, on-street cycling infrastructure, etc. These systems assume, or allow the user to specify, preferences for quiet or safe roads, minimal elevation change, bicycle lanes, etc.
Data requirements
Trip planners depend on a number of different types of data and the quality and extent of this data limits their capability. Some trip planners integrate many different kinds of data from numerous sources. Others may work with one mode only, such as flight itineraries between airports, or using only addresses and the street network for driving directions.
Contextual data
Point of interest data
Passengers don't travel because they want to go to a particular station or stop, but because they want to go to some destination of interest, such as a sports arena, tourist attraction, shopping center, park, or law court. Many trip planners allow users to look for such "points of interest", either by name or by category (museum, stadium, prison, etc.). Data sets of systematically named, geocoded and categorized popular destinations can be obtained commercially, for example the UK PointX data set, or derived from open-source data sets such as OpenStreetMap. Major operators such as Transport for London or National Rail have historically had well-developed sets of such data for use in their customer call centers, along with information on the links to the nearest stops. For points of interest that cover a large area, such as parks, country houses or stadia, a precise geocoding of the entrances is important.
Gazetteer data
Trip planning user interfaces can be made more usable by integration of gazetteer data. This can be associated with stops to assist with stop finding, in particular for disambiguation: there are 33 places named Newport in the US and 14 in the UK, and a gazetteer can be used to distinguish which is which. It can also, in some cases, indicate the relationship of transport interchanges to the towns and urban centers that passengers are trying to reach – for example, only one of London's five or so airports is actually in London. Data for this purpose typically comes from additional layers in a map data set, such as those provided by Esri, Ordnance Survey or Navtech, or from specific data sets such as the UK National Public Transport Gazetteer.
Road data
Road network data
Road trip planners, sometimes referred to as route planners, use street and footpath network data to compute a route using simply the network connectivity (i.e. trips may run at any time and are not constrained by a timetable). Such data can come from one or more public, commercial or crowdsourced datasets such as TIGER, Esri or OpenStreetMap. The data is fundamental both for computing access legs to reach public transport stops, and for computing road trips in their own right. The fundamental representation is a graph of nodes and edges (i.e. points and links). The data may be further annotated to assist trip planning for different modes, as illustrated in the sketch after this list:
Road data may be characterized by road type (highway, major road, minor road, track, etc.), turn restrictions, speed restrictions etc., as well as average travel times at different times of day on different day types (Weekday, Weekend, Public Holiday, etc.), so that accurate travel time predictions can be offered
Cycle road and path data may be annotated with characteristics such as cycle route number, traffic levels, surface, lighting, etc. that affect its usability by cyclists.
Footpath data may be annotated with accessibility characteristics such as steps, lifts, wheelchair access and ramps, and also safety indicators (e.g. lighting, CCTV, help points) so that accessibility-constrained trip plans can be computed.
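A minimal in-memory sketch of such an annotated road/path graph might look like the following. This is illustrative only: the field names (road_type, speed_limit_kmh, wheelchair_accessible, cycle_route) are assumptions for the example and not the schema of any particular dataset such as TIGER or OpenStreetMap.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Node:
    node_id: int          # intersection or path junction
    lat: float
    lon: float

@dataclass
class Edge:
    from_node: int
    to_node: int
    length_m: float
    road_type: str = "minor"            # e.g. highway, major, minor, track, footpath
    speed_limit_kmh: float = 50.0
    wheelchair_accessible: bool = True  # accessibility annotation for footpath legs
    cycle_route: Optional[str] = None   # cycle route number, if part of one

@dataclass
class RoadGraph:
    nodes: Dict[int, Node] = field(default_factory=dict)
    adjacency: Dict[int, List[Edge]] = field(default_factory=dict)

    def add_edge(self, edge: Edge) -> None:
        # directed edge; a two-way street is stored as two opposite edges
        self.adjacency.setdefault(edge.from_node, []).append(edge)
```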
Real-time data for roads
Advanced road trip planners take into account the real-time state of the network. They use two main types of feed to do this, obtained from road data services using interfaces such as Datex II or UTMC.
Situation data, which describes the incidents, events and planned roadworks in a structured form that can be related to the network; this is used to decorate trip plans and road maps to show current bottlenecks and incident locations.
Link traffic flow data, which gives a quantitative measurement of the current flow on each monitored link of the network; this can be used to take actual current conditions into account when computing predicted journey times (a minimal sketch follows).
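As a rough illustration of how the second feed type might be used, a planner could derive a predicted traversal time per link from a live speed measurement, falling back to a historic time-of-day profile when no measurement is available. This is a hedged sketch only; the function name, inputs and fallback policy are assumptions and not part of Datex II or UTMC.

```python
def predicted_link_time_s(length_m, live_speed_kmh=None, historic_speed_kmh=40.0):
    """Predicted travel time in seconds for one monitored road link.

    Prefers the real-time measured average speed; falls back to a historic
    average for the current day type / time of day when no live data exists.
    """
    speed = live_speed_kmh if live_speed_kmh else historic_speed_kmh
    return (length_m / 1000.0) / speed * 3600.0

# 800 m link, currently congested at 12 km/h instead of the usual 40 km/h
print(round(predicted_link_time_s(800, live_speed_kmh=12)))   # 240 seconds
```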
Public transport data
For transit route planners to work, transit schedule data must always be kept up to date. To facilitate data exchange and interoperability between different trip planners, several standard data formats have emerged.
The General Transit Feed Specification, developed in 2006, is now used by hundreds of transit agencies around the world.
In the European Union all public passenger travel operators have the obligation to provide this information under the EU railway timetable data exchange format. In other parts of the world there are similar exchange standards.
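Because GTFS feeds are plain CSV files, a minimal reader is straightforward. The sketch below groups a feed's stop_times.txt file (required columns: trip_id, arrival_time, departure_time, stop_id, stop_sequence) by trip; it is illustrative only and ignores optional fields, calendar handling and validation that a real importer would need.

```python
import csv
from collections import defaultdict

def load_stop_times(path="stop_times.txt"):
    """Group a GTFS stop_times.txt file by trip, ordered by stop_sequence."""
    trips = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            trips[row["trip_id"]].append(
                (int(row["stop_sequence"]), row["stop_id"],
                 row["arrival_time"], row["departure_time"])
            )
    for stop_list in trips.values():
        stop_list.sort()           # order each trip by stop_sequence
    return trips
```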
Stop data
The location and identity of public transport access points such as bus, tram and coach stops, stations, airports, ferry landings and ports are fundamental to trip planning, and a stop data set is an essential layer of the transport data infrastructure. In order to integrate stops with spatial searches and road routing engines they are geocoded. In order to integrate them with timetables and routes they are given a unique identifier within the transport network. In order to be recognizable to passengers they are given official names and may also have a public short code (for example the three-letter IATA codes for airports) to use in interfaces. Historically, different operators quite often used a different identifier for the same stop, and stop numbers were not unique within a country or even a region. Systems for managing stop data, such as the International Union of Railways (UIC) station location code set or the UK's NaPTAN (National Public Transport Access Point) system, provide a means of ensuring numbers are unique and stops are fully described, greatly facilitating the integration of data. Timetable exchange formats such as GTFS, TransXChange or NeTEx include stop data in their formats, and spatial data sets such as OpenStreetMap allow stop identifiers to be geocoded.
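A stop data set therefore boils down to records combining a unique identifier, passenger-facing names and a geocoded position. The toy record below shows the idea; the field names and example values are illustrative and are not the NaPTAN or NeTEx schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StopPoint:
    stop_id: str      # identifier unique within the whole transport network
    name: str         # official passenger-facing name
    short_code: str   # public short code, e.g. a three-letter airport-style code
    lat: float        # geocoded position, used for spatial search and access legs
    lon: float

example_stop = StopPoint("stop-0001", "Example Interchange", "EXI", 51.5, -0.1)
```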
Public transport network topology data
For public transport networks with a very high frequency of service, such as urban metros and inner-city bus services, the topology of the network can also be used for route planning, with an average interval being assumed rather than specific departure times. Data on the routes of trains and buses is also useful for providing visualization of results, for example to plot the route of a train on a map. National mapping bodies, such as the UK's Ordnance Survey, typically include a transport layer in their data sets, and the European INSPIRE framework includes public transport infrastructure links in its set of strategic digital data. The CEN NeTEx format allows both the physical layer (e.g. road and railway track infrastructure links) and the logical layer (e.g. links between scheduled stopping points on a given line) of the transport infrastructure to be exchanged.
In the UK the Online Journey Planner (OJP) is the engine used by National Rail to plan routes, calculate fares and establish ticket availability. OJP obtains its route information from SilverRail’s planning engine known as IPTIS (Integrated Passenger Transport Information System). The National Rail website provides information on how businesses can access this data directly via online data feed xml files. However, OJP was switched off in 2023 in favour of a new journey planner which is currently integrated into nationalrail.co.uk.
Public transport timetables
Data on public transport schedules is used by trip planners to determine the available journeys at specific times. Historically rail data has been widely available in national formats, and many countries also have bus and other mode data in national formats such as VDV 452 (Germany), TransXChange (UK) and Neptune (France). Schedule data is also increasingly becoming available in international formats such as GTFS and NeTEx. To allow a route to be projected onto a map, GTFS allows the specification of a simple shape plot, whilst Transmodel-based standards such as CEN NeTEx and TransXChange additionally allow a more detailed representation which can recognize the constituent links and distinguish several different semantic layers.
Real-time prediction information for public transport
Trip planners may be able to incorporate real-time information into their database and consider it in the selection of optimal routes for travel in the immediate future. Automatic vehicle location (AVL) systems monitor the position of vehicles using GPS systems and can pass on real-time and forecast information to the journey planning system. A trip planner may use a real-time interface such as the CEN Service Interface for Real Time Information to obtain this data.
Situation information
A situation is a software representation of an incident or event that is affecting or is likely to affect the transport network. A trip planner can integrate situation information and use it both to revise its trip planning computations and to annotate its responses so as to inform users through both text and map representations. A trip planner will typically use a standard interface such as SIRI, TPEG or Datex II to obtain situation information.
Incidents are captured through an incident capturing system (ICS) by different operators and stakeholders, for example in transport operator control rooms, by broadcasters or by the emergency services. Text and image information can be combined with the trip result. Recent incidents can be considered within the routing as well as visualized in an interactive map.
Technology
Typically journey planners use an efficient in-memory representation of the network and timetable to allow the rapid searching of a large number of paths. Database queries may also be used where the number of nodes needed to compute a journey is small, and to access ancillary information relating to the journey. A single engine may contain the entire transport network, and its schedules, or may allow the distributed computation of journeys using a distributed journey planning protocol such as JourneyWeb or Delfi Protocol. A journey planning engine may be accessed by different front ends, using a software protocol or application program interface specialized for journey queries, to provide a user interface on different types of device.
The development of journey planning engines has gone hand in hand with the development of data standards for representing the stops, routes and timetables of the network, such as TransXChange, NaPTAN, Transmodel or GTFS that ensure that these fit together. Journey planning algorithms are a classic example of problems in the field of Computational complexity theory. Real-world implementations involve a tradeoff of computational resources between accuracy, completeness of the answer, and the time required for calculation.
The sub-problem of route planning is an easier problem to solve as it generally involves less data and fewer constraints. However, with the development of "road timetables", associating different journey times for road links at different times of day, time of travel is increasingly relevant for route planners as well.
Algorithms
Journey planners use a routing algorithm to search a graph representing the transport network. In the simplest case where routing is independent of time, the graph uses (directed) edges to represent street/path segments and nodes to represent intersections. Routing on such a graph can be accomplished effectively using any of a number of routing algorithms such as Dijkstra's, A*, Floyd–Warshall, or Johnson's algorithm. Different weightings such as distance, cost or accessibility may be associated with each edge, and sometimes with nodes.
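For the time-independent case, a compact textbook version of Dijkstra's algorithm over such an adjacency structure is sketched below in Python; it is not taken from any particular journey planning product, and the graph values are made up for illustration.

```python
import heapq

def dijkstra(adjacency, source, target):
    """Shortest path on a weighted directed graph.

    adjacency: dict mapping node -> list of (neighbour, edge_weight) pairs,
    where the weight could be distance, time or any other non-negative cost.
    Returns (total_cost, path), or (inf, []) if the target is unreachable.
    """
    best = {source: 0.0}
    previous = {}
    queue = [(0.0, source)]
    visited = set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            path = [node]
            while path[-1] != source:
                path.append(previous[path[-1]])
            return cost, path[::-1]
        for neighbour, weight in adjacency.get(node, []):
            new_cost = cost + weight
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                previous[neighbour] = node
                heapq.heappush(queue, (new_cost, neighbour))
    return float("inf"), []

# Example: four intersections with travel times in minutes
graph = {"A": [("B", 5), ("C", 10)], "B": [("C", 3), ("D", 11)], "C": [("D", 4)]}
print(dijkstra(graph, "A", "D"))   # (12.0, ['A', 'B', 'C', 'D'])
```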
When time-dependent features such as public transit are included, there are several proposed ways of representing the transport network as a graph, and different algorithms may be used, such as RAPTOR.
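For the time-dependent case, one simple formulation scans timetabled connections in departure-time order and keeps the earliest known arrival per stop, in the spirit of the Connection Scan Algorithm. The code below is a deliberately simplified sketch: it ignores transfers, footpaths and minimum change times, and it is not RAPTOR itself; the timetable values are invented for illustration.

```python
def earliest_arrival(connections, origin, destination, start_time):
    """Earliest-arrival query over a public transport timetable.

    connections: iterable of (dep_stop, arr_stop, dep_time, arr_time) tuples,
    pre-sorted by departure time; times are minutes after midnight.
    """
    INF = float("inf")
    arrival = {origin: start_time}
    for dep_stop, arr_stop, dep_time, arr_time in connections:
        # The connection is usable only if we can reach its departure stop in time.
        if arrival.get(dep_stop, INF) <= dep_time and arr_time < arrival.get(arr_stop, INF):
            arrival[arr_stop] = arr_time
    return arrival.get(destination, INF)

timetable = [                       # sorted by departure time
    ("X", "Y", 480, 495),           # 08:00 -> 08:15
    ("Y", "Z", 500, 520),           # 08:20 -> 08:40
    ("X", "Z", 505, 545),           # 08:25 -> 09:05 direct but slower
]
print(earliest_arrival(timetable, "X", "Z", 470))   # 520, i.e. arrive 08:40 via Y
```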
Automated trip planner
Automated trip planners generate an itinerary automatically, based on the information the user provides. One way is to submit the desired destination, travel dates and interests, after which the plan is generated. Another way is to provide the necessary information by forwarding confirmation e-mails from airlines, hotels and car rental companies.
Custom trip planner
With a custom trip planner the user creates their own travel itinerary by picking the appropriate activities from a database. Some of these websites, like Triphobo.com, offer pre-built databases of points of interest, while others rely on user-generated content.
In 2017, Google released a mobile app called Google Trips. Custom trip planning startups saw renewed interest from investors with the advent of data science, AI and voice technologies in 2018. Lola.com, an AI-based travel planning startup, and Hopper.com have managed to raise significant funding for developing trip planning apps.
When bookings and payments are added to a mobile trip planner app, then the result is considered mobility as a service.
Commercial software
Distribution companies may incorporate route planning software into their fleet management systems to optimize route efficiency. A route planning setup for distribution companies will often include GPS tracking capability and advanced reporting features which enable dispatchers to prevent unplanned stops, reduce mileage, and plan more fuel-efficient routes.
See also
Automotive navigation system
Intelligent transportation system
Multimodal transport
Online diary planners for trips and holidays
Pathfinding
Public transport route planner
Service Interface for Real Time Information
TPEG
Transmodel
Travel technology
Travel website
Public transport timetable
Sustainable transport
Transit district
References
Public transport information systems
Sustainable transport
Route planning software
Cartography
Intermodal passenger transport
Scheduling (transportation)
Travel websites | Journey planner | [
"Physics",
"Technology"
] | 5,435 | [
"Public transport information systems",
"Physical systems",
"Transport",
"Information systems",
"Sustainable transport"
] |
14,660,345 | https://en.wikipedia.org/wiki/Realistic%20conflict%20theory | Realistic conflict theory (RCT), also known as realistic group conflict theory (RGCT), is a social psychological model of intergroup conflict. The theory explains how intergroup hostility can arise as a result of conflicting goals and competition over limited resources, and it also offers an explanation for the feelings of prejudice and discrimination toward the outgroup that accompany the intergroup hostility. Groups may be in competition for a real or perceived scarcity of resources such as money, political power, military protection, or social status.
Feelings of resentment can arise when groups see the competition over resources as zero-sum, in which only one group wins (obtains the needed or wanted resources) and the other loses (is unable to obtain the limited resource because the "winning" group obtained it first). The length and severity of the conflict are based upon the perceived value and scarcity of the given resource. According to RCT, positive relations can only be restored if superordinate goals are in place.
Concept
History
The theory was officially named by Donald Campbell, but has been articulated by others since the middle of the 20th century. In the 1960s, this theory developed from Campbell's recognition of social psychologists' tendency to reduce all human behavior to hedonistic goals. He criticized psychologists like John Thibaut, Harold Kelley, and George Homans, who emphasized theories that place food, sex, and pain avoidance as central to all human processes. According to Campbell, hedonistic assumptions do not adequately explain intergroup relations. Campbell believed that these social exchange theorists oversimplified human behavior by likening interpersonal interaction to animal behavior. Similar to the ideas of Campbell, other researchers also began recognizing a problem in the psychological understanding of intergroup behavior. These researchers noted that, prior to Campbell, social exchange theorists ignored the essence of social psychology and the importance of interchanges between groups. In contrast to prior theories, RCT takes into account the sources of conflict between groups, which include incompatible goals and competition over limited resources.
Robbers Cave study
The 1954 Robbers Cave experiment (or Robbers Cave study) by Muzafer Sherif and Carolyn Wood Sherif represents one of the most widely known demonstrations of RCT. The Sherifs' study was conducted over three weeks in a 200-acre summer camp in Robbers Cave State Park, Oklahoma, focusing on intergroup behavior. In this study, researchers posed as camp personnel, observing 22 eleven- and twelve-year-old boys who had never previously met and had comparable backgrounds (each subject was a white eleven to twelve-year-old boy of average to slightly above average intelligence from a Protestant, middle-class, two-parent home).
The experiments were conducted within the framework of regular camp activities and games. The experiment was divided into three stages. The first stage was "in-group formation", in which, upon arrival, the boys were housed together in one large bunkhouse. The boys quickly formed particular friendships. After a few days the boys were split randomly into two approximately equal groups. Each group was unaware of the other group's presence. The second stage was the "friction phase", wherein the groups were entered into competition with one another in various camp games. Valued prizes were awarded to the winners. This caused both groups to develop negative attitudes and behaviors towards the outgroup. At this stage 93% of the boys' friendships were within their in-group. The third and final stage was the "integration stage". During this stage, tensions between the groups were reduced through teamwork-driven tasks that required intergroup cooperation.
The Sherifs made several conclusions based on the three-stage Robbers Cave experiment. From the study, they determined that because the groups were created to be approximately equal, individual differences are not necessary or responsible for intergroup conflict to occur. As seen in the study when the boys were competing in camp games for valued prizes, the Sherifs noted that hostile and aggressive attitudes toward an outgroup arise when groups compete for resources that only one group can attain. The Sherifs also established that contact with an outgroup is insufficient, by itself, to reduce negative attitudes. Finally, they concluded that friction between groups can be reduced and positive intergroup relations can be maintained, only in the presence of superordinate goals that promote united, cooperative action.
However, a further review of the Robbers Cave experiments, which were in fact a series of three separate experiments carried out by the Sherifs and colleagues, reveals additional considerations. In two earlier studies the boys ganged up on a common enemy, and in fact on occasion ganged up on the experimenters themselves, showing an awareness of being manipulated. In addition, Michael Billig argues that the experimenters themselves constitute a third group, one that is arguably the most powerful of the three, and that they in fact become the outgroup in the aforementioned experiment.
Lutfy Diab repeated the experiment with 18 boys from Beirut. The 'Blue Ghost' and 'Red Genies' groups each contained 5 Christians and 4 Muslims. Fighting soon broke out, not between the Christians and Muslims but between the Red and Blue groups.
Extensions and applications
Implications for diversity and integration
RCT offers an explanation for negative attitudes toward racial integration and efforts to promote diversity. This is illustrated in the data collected from the Michigan National Election Studies survey. According to the survey, most whites held negative attitudes toward school districts' attempts to integrate schools via school busing in the 1970s. In these surveys, there was a general perceived threat that whites had of African Americans. It can be concluded that, contempt towards racial integration was due to a perception of blacks as a danger to valued lifestyles, goals, and resources, rather than symbolic racism or prejudice attitudes formulated during childhood.
RCT can also provide an explanation for why competition over limited resources in communities can present potentially harmful consequences in establishing successful organizational diversity. In the workplace, this is depicted by the concept that increased racial heterogeneity among employees is associated with job dissatisfaction among majority members. Since organizations are affixed in the communities to which their employees belong, the racial makeup of employees' communities affect attitudes toward diversity in the workplace. As racial heterogeneity increases in a white community, white employees are less accepting of workplace diversity. RCT provides an explanation of this pattern because in communities of mixed races, members of minority groups are seen as competing for economic security, power, and prestige with the majority group.
RCT can help explain discrimination against different ethnic and racial groups. An example of this is shown in cross-cultural studies that determined that violence between different groups escalates in relationship to shortages in resources. When a group has a notion that resources are limited and only available for possession by one group, this leads to attempts to remove the source of competition. Groups can attempt to remove their competition by increasing their group's capabilities (e.g., skill training), decreasing the abilities of the outgroup's competition (e.g., expressing negative attitudes or applying punitive tariffs), or by decreasing proximity to the outgroup (e.g., denying immigrant access).
An extension to unequal groups
Realistic conflict theory originally only described the results of competition between two groups of equal status. John Duckitt suggests that the theory be expanded to include competition between groups of unequal status. To demonstrate this, Duckitt created a scheme of types of realistic conflict with groups of unequal status and their resulting correlation with prejudice.
Duckitt concluded that there are at least two types of conflict based on an ingroup's competition with an outgroup. The first is 'competition with an equal group', which is explained by realistic conflict theory: group-based threat leads ingroup members to feel hostile towards the outgroup, which can lead to conflict as the ingroup focuses on acquiring the threatened resource. The second type of conflict is 'domination of the outgroup by the ingroup'. This occurs when the ingroup and outgroup do not have equal status. If domination occurs, there are two responses the subordinate group may have. One is stable oppression, in which the subordinate group accepts the dominating group's attitudes on some focal issue, and sometimes the dominant group's deeper values, to avoid further conflict. The second response that may occur is unstable oppression. This occurs when the subordinate group rejects the lower status forced upon them, and sees the dominating group as oppressive. The dominant group then may view the subordinates' challenge as either justified or unjustified. If it is seen as unjustified, the dominant group will likely respond to the subordinates' rebellion with hostility. If the subordinates' rebellion is viewed as justified, the subordinates are given the power to demand change. An example of this would be the eventual recognition of the civil rights movement in the 1960s in the United States.
An extension to nations
When group conflict extends to nations or tribes, Regality Theory argues that the collective danger leads citizens to start having strong feelings of national or tribal identity, preferring strong, hierarchical political system, adopting strict discipline and punishment of deviants, and expressing xenophobia and strict religious and sexual morality.
See also
Amity-enmity complex
Discrimination
Group conflict
Group threat theory
Intergroup relations
Minimal group paradigm
Prejudice
Social psychology
Stereotypes
References
Group processes
Conflict (process)
Psychological theories | Realistic conflict theory | [
"Biology"
] | 1,900 | [
"Behavior",
"Aggression",
"Human behavior",
"Conflict (process)"
] |
14,660,718 | https://en.wikipedia.org/wiki/Geometallurgy | Geometallurgy relates to the practice of combining geology or geostatistics with metallurgy, or, more specifically, extractive metallurgy, to create a spatially or geologically based predictive model for mineral processing plants. It is used in the hard rock mining industry for risk management and mitigation during mineral processing plant design. It is also used, to a lesser extent, for production planning in more variable ore deposits.
There are four important components or steps to developing a geometallurgical program:
the geologically informed selection of a number of ore samples
laboratory-scale test work to determine the ore's response to mineral processing unit operations
the distribution of these parameters throughout the orebody using an accepted geostatistical technique
the application of a mining sequence plan and mineral processing models to generate a prediction of the process plant behavior
Sample selection
The sample mass and size distribution requirements are dictated by the kind of mathematical model that will be used to simulate the process plant, and the test work required to provide the appropriate model parameters. Flotation testing usually requires several kg of sample, and grinding/hardness testing can require between 2 and 300 kg.
The sample selection procedure is performed to optimize granularity, sample support, and cost. Samples are usually core samples composited over the height of the mining bench. For hardness parameters, the variogram often increases rapidly near the origin and can reach the sill at distances significantly smaller than the typical drill hole collar spacing. For this reason the incremental model precision due to additional test work is often simply a consequence of the central limit theorem, and secondary correlations are sought to increase the precision without incurring additional sampling and testing costs. These secondary correlations can involve multi-variable regression analysis with other, non-metallurgical, ore parameters and/or domaining by rock type, lithology, alteration, mineralogy, or structural domains.
Test work
The following tests are commonly used for geometallurgical modeling:
Bond ball mill work index test
Modified or comparative Bond ball mill index
Bond rod mill work index and Bond low energy impact crushing work index
SAGDesign test
SMC test
JK drop-weight test
Point load index test
Sag Power Index test (SPI(R))
MFT test
FKT, SKT, and SKT-WS tests
Geostatistics
Block kriging is the most common geostatistical method used for interpolating metallurgical index parameters, and it is often applied on a domain basis. Classical geostatistics requires that the estimation variable be additive, and there is currently some debate on the additive nature of the metallurgical index parameters measured by the above tests. The Bond ball mill work index test is thought to be additive because of its units of energy; nevertheless, experimental blending results show non-additive behavior. The SPI(R) value is known not to be an additive parameter; however, errors introduced by block kriging are not thought to be significant. These issues, among others, are being investigated as part of the Amira P843 research program on geometallurgical mapping and mine modelling.
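As an illustration of the interpolation step, the sketch below performs ordinary point kriging of a metallurgical index at a single location using a spherical variogram. The variogram parameters and sample values are made up for the example, and true block kriging would additionally average the right-hand side over points discretising the block; this is not a production geostatistics routine.

```python
import numpy as np

def spherical_variogram(h, sill=1.0, rng=500.0, nugget=0.0):
    """Spherical variogram model, a common choice for metallurgical indices."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, nugget + sill, g)

def ordinary_kriging(sample_xy, sample_values, target_xy, variogram=spherical_variogram):
    """Estimate a value at target_xy from irregularly spaced samples.

    Solves the ordinary kriging system with a Lagrange multiplier so the
    weights sum to one (unbiased estimate).
    """
    xy = np.asarray(sample_xy, float)
    z = np.asarray(sample_values, float)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)      # sample-sample distances
    d0 = np.linalg.norm(xy - np.asarray(target_xy, float), axis=1)   # sample-target distances
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = variogram(d)
    a[n, n] = 0.0
    b = np.append(variogram(d0), 1.0)
    weights = np.linalg.solve(a, b)[:n]
    return float(weights @ z)

# Three drill-hole composites with a hardness index, estimated at a block centre
print(ordinary_kriging([(0, 0), (100, 0), (0, 100)], [12.0, 15.0, 13.0], (40, 40)))
```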
Mine plan and process models
The following process models are commonly applied to geometallurgy:
The Bond equation (see the sketch after this list)
The SPI calibration equation, CEET
FLEET*
SMC model
Aminpro-Grind, Aminpro-Flot models
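Of these, the Bond equation is the simplest to state: the specific grinding energy is W = 10·Wi·(1/√P80 − 1/√F80) in kWh/t, where Wi is the Bond work index and F80/P80 are the 80% passing sizes of feed and product in micrometres. The one-line implementation below uses illustrative values only.

```python
def bond_specific_energy(work_index_kwh_t, f80_um, p80_um):
    """Bond's 'third theory' comminution equation.

    W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), with W in kWh per tonne,
    Wi the Bond work index (kWh/t) and F80/P80 the 80% passing sizes of
    feed and product in micrometres.
    """
    return 10.0 * work_index_kwh_t * (p80_um ** -0.5 - f80_um ** -0.5)

# A block with Wi = 14 kWh/t ground from F80 = 10 000 um to P80 = 150 um
print(round(bond_specific_energy(14.0, 10_000.0, 150.0), 2))   # ~10.03 kWh/t
```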
See also
Extractive metallurgy
Geostatistics
Mining
Mineral Processing
Notes
General references
Isaaks, Edward H., and Srivastava, R. Mohan. An Introduction to Applied Geostatistics. Oxford University Press, Oxford, NY, USA, 1989.
David, M., Handbook of Applied Advanced Geostatistical Ore Reserve Estimation. Elsevier, Amsterdam, 1988.
Mineral Processing Plant Design, Practice, and Control - Proceedings. Ed. Mular, A., Halbe, D., and Barratt, D. Society for Mining, Metallurgy, and Exploration, Inc. 2002.
Mineral Comminution Circuits - Their Operation and Optimisation. Ed. Napier-Munn, T.J., Morrell, S., Morrison, R.D., and Kojovic, T. JKMRC, The University of Queensland, 1996
Economic geology
Metallurgy
Mining
Materials science | Geometallurgy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 889 | [
"Metallurgy",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
14,660,860 | https://en.wikipedia.org/wiki/DNA%20beta-glucosyltransferase | In enzymology, a DNA beta-glucosyltransferase () is an enzyme that catalyzes the chemical reaction in which a beta-D-glucosyl residue is transferred from UDP-glucose to an hydroxymethylcytosine residue in DNA. It is analogous to the enzyme DNA alpha-glucosyltransferase.
This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is UDP-glucose:DNA beta-D-glucosyltransferase. Other names in common use include T4-HMC-beta-glucosyl transferase, T4-beta-glucosyl transferase, T4 phage beta-glucosyltransferase, UDP glucose-DNA beta-glucosyltransferase, and uridine diphosphoglucose-deoxyribonucleate beta-glucosyltransferase.
Structural studies
As of late 2007, 20 structures had been solved for this class of enzymes and deposited in the Protein Data Bank.
Bacteriophage T4 beta-glucosyltransferase
In molecular biology, Bacteriophage T4 beta-glucosyltransferase refers to a protein domain found in a virus of Escherichia coli named bacteriophage T4. Members of this family are enzymes encoded by bacteriophage T4, which modify DNA by transferring glucose from uridine diphosphoglucose to 5-hydroxymethyl cytosine bases of phage T4 DNA.
Function
Beta-glucosyltransferase is an enzyme, more specifically an inverting glycosyltransferase (GT). In other words, it transfers glucose from uridine diphosphoglucose (UDP-glucose) to an acceptor, the modified DNA, through a beta-glycosidic bond. The role of the enzyme is to protect the infecting viral DNA from the bacterium's restriction enzymes. Glucosylation prevents the viral DNA from being cut up. Furthermore, glucosylation may aid gene expression of the bacteriophage by influencing transcription.
Structure
This structure has both alpha helices and beta strands.
References
Protein domains
EC 2.4.1
Enzymes of known structure
Viral enzymes | DNA beta-glucosyltransferase | [
"Biology"
] | 547 | [
"Protein domains",
"Protein classification"
] |
14,660,945 | https://en.wikipedia.org/wiki/Polychlorinated%20dibenzofurans | Polychlorinated dibenzofurans (PCDFs) are a family of organic compounds with one or several of the hydrogens in the dibenzofuran structure replaced by chlorines. For example, 2,3,7,8-tetrachlorodibenzofuran (TCDF) has chlorine atoms substituted for each of the hydrogens on the number 2, 3, 7, and 8 carbons (see structure in the upper left corner of the second image). Polychlorinated dibenzofurans with chlorines at least in positions 2,3,7 and 8 are much more toxic than the parent compound dibenzofuran, with properties and chemical structures similar to polychlorinated dibenzodioxins. These groups together are often inaccurately called dioxins. They are known developmental toxicants, and suspected human carcinogens. PCDFs tend to co-occur with polychlorinated dibenzodioxins (PCDDs). PCDFs can be formed by pyrolysis or incineration at temperatures below 1200 °C of chlorine containing products, such as PVC, PCBs, and other organochlorides, or of non-chlorine containing products in the presence of chlorine donors. Dibenzofurans are known persistent organic pollutants (POP), classified among the dirty dozen in the Stockholm Convention on Persistent Organic Pollutants.
Congeners
Safety, toxicity, regulation
Occupational exposure to PCDFs may occur through inhalation and contact with the skin, although intake even in workers at waste incineration plants is not particularly high. For the general population the most important source is food of animal origin, as with other dioxin-like compounds. The most relevant congener is 2,3,4,7,8-pentachlorodibenzofuran (2,3,4,7,8-PCDF), which is more toxic and, based on relative toxicity, more prevalent than other PCDFs.
See also
Dioxins and dioxin-like compounds
Polychlorinated biphenyl
References
Further resources
Synopsis on dioxins and PCBs
Chemical Profile: DIBENZOFURANS (CHLORINATED)
NPI Polychlorinated dioxins and furans fact sheet
Chloroarenes
Incineration
Immunotoxins
Dibenzofurans
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
Persistent organic pollutants under the Stockholm Convention | Polychlorinated dibenzofurans | [
"Chemistry",
"Engineering"
] | 529 | [
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Combustion engineering",
"Persistent organic pollutants under the Stockholm Convention",
"Incineration"
] |
14,661,057 | https://en.wikipedia.org/wiki/Nanomechanics | Nanomechanics is a branch of nanoscience studying fundamental mechanical (elastic, thermal and kinetic) properties of physical systems at the nanometer scale. Nanomechanics has emerged on the crossroads of biophysics, classical mechanics, solid-state physics, statistical mechanics, materials science, and quantum chemistry. As an area of nanoscience, nanomechanics provides a scientific foundation of nanotechnology.
Nanomechanics is that branch of nanoscience which deals with the study and application of fundamental mechanical properties of physical systems at the nanoscale, such as elastic, thermal and kinetic material properties.
Often, nanomechanics is viewed as a branch of nanotechnology, i.e., an applied area with a focus on the mechanical properties of engineered nanostructures and nanosystems (systems with nanoscale components of importance). Examples of the latter include nanomachines, nanoparticles, nanopowders, nanowires, nanorods, nanoribbons, nanotubes, including carbon nanotubes (CNT) and boron nitride nanotubes (BNNTs); nanoshells, nanomembranes, nanocoatings, nanocomposite/nanostructured materials, nanofluids (fluids with dispersed nanoparticles), nanomotors, etc.
Some of the well-established fields of nanomechanics are: nanomaterials, nanotribology (friction, wear and contact mechanics at the nanoscale), nanoelectromechanical systems (NEMS), and nanofluidics.
As a fundamental science, nanomechanics is based on some empirical principles (basic observations), namely general mechanics principles and specific principles arising from the smallness of physical sizes of the object of study.
General mechanics principles include:
Energy and momentum conservation principles
Variational Hamilton's principle
Symmetry principles
Due to smallness of the studied object, nanomechanics also accounts for:
Discreteness of the object, whose size is comparable with the interatomic distances
Plurality, but finiteness, of degrees of freedom in the object
Importance of thermal fluctuations
Importance of entropic effects (see configuration entropy)
Importance of quantum effects (see quantum machine)
These principles serve to provide a basic insight into the novel mechanical properties of nanometer-scale objects. Novelty is understood in the sense that these properties are not present in similar macroscale objects, or are much different from the properties of those (e.g., nanorods vs. usual macroscopic beam structures). In particular, the smallness of the object itself gives rise to various surface effects determined by the higher surface-to-volume ratio of nanostructures, and thus affects the mechanoenergetic and thermal properties (melting point, heat capacitance, etc.) of nanostructures. Discreteness is a fundamental reason, for instance, for the dispersion of mechanical waves in solids and for some special behavior of basic elastomechanics solutions at small scales. The plurality of degrees of freedom and the rise of thermal fluctuations are the reasons for thermal tunneling of nanoparticles through potential barriers, as well as for the cross-diffusion of liquids and solids. Smallness and thermal fluctuations provide the basic reasons for the Brownian motion of nanoparticles. The increased importance of thermal fluctuations and configuration entropy at the nanoscale gives rise to superelasticity, entropic elasticity (entropic forces), and other exotic types of elasticity of nanostructures. Aspects of configuration entropy are also of great interest in the context of self-organization and cooperative behavior of open nanosystems.
Quantum effects determine forces of interaction between individual atoms in physical objects, which are introduced in nanomechanics by means of some averaged mathematical models called interatomic potentials.
Subsequent utilization of the interatomic potentials within classical multibody dynamics provides deterministic mechanical models of nanostructures and systems at the atomic scale/resolution. Numerical methods of solution of these models are called molecular dynamics (MD), and sometimes molecular mechanics (especially in relation to statically equilibrated (still) models). Non-deterministic numerical approaches include Monte Carlo, kinetic Monte Carlo (KMC), and other methods. Contemporary numerical tools also include hybrid multiscale approaches allowing concurrent or sequential utilization of atomistic scale methods (usually MD) with continuum (macro) scale methods (usually the finite element method) within a single mathematical model. Development of these complex methods is a separate subject of applied mechanics research.
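As a concrete and deliberately minimal illustration of the MD approach, the sketch below integrates a three-atom Lennard-Jones cluster with the velocity Verlet scheme in reduced (dimensionless) units. The potential parameters and starting geometry are arbitrary examples; production MD codes add neighbour lists, periodic boundaries, thermostats and far more efficient force evaluation.

```python
import numpy as np

def lj_forces(positions, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, the classic model interatomic potential."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = positions[i] - positions[j]
            r = np.linalg.norm(r_vec)
            # Force on atom i from U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
            f = 24.0 * epsilon * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r ** 2 * r_vec
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(positions, velocities, mass=1.0, dt=1e-3, steps=1000):
    """Integrate Newton's equations for the cluster with the velocity Verlet scheme."""
    forces = lj_forces(positions)
    for _ in range(steps):
        positions = positions + velocities * dt + 0.5 * forces / mass * dt ** 2
        new_forces = lj_forces(positions)
        velocities = velocities + 0.5 * (forces + new_forces) / mass * dt
        forces = new_forces
    return positions, velocities

# A three-atom cluster started near the pair-equilibrium spacing of ~1.12 sigma
pos = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.56, 0.97, 0.0]])
vel = np.zeros_like(pos)
print(velocity_verlet(pos, vel)[0])
```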
Quantum effects also determine novel electrical, optical and chemical properties of nanostructures, and therefore they find even greater attention in adjacent areas of nanoscience and nanotechnology, such as nanoelectronics, advanced energy systems, and nanobiotechnology.
See also
Molecular machine
Geometric phase (section Stochastic Pump Effect)
Nanoelectromechanical relay
References
Sattler KD. Handbook of Nanophysics: Vol. 1 Principles and Methods. CRC Press, 2011.
Bhushan B (editor). Springer Handbook of Nanotechnology, 2nd edition. Springer, 2007.
Liu WK, Karpov EG, Park HS. Nano Mechanics and Materials: Theory, Multiscale Methods and Applications. Wiley, 2006.
Cleland AN. Foundations of Nanomechanics. Springer, 2003.
Valeh I. Bakhshali. Nanomechanics and its applications: mechanical properties of materials. International E-Conference on Engineering, Technology and Management - ICETM 2020.
Nanotechnology
| Nanomechanics | [
"Materials_science",
"Engineering"
] | 1,162 | [
"Nanotechnology",
"Materials science"
] |
14,661,752 | https://en.wikipedia.org/wiki/Nicotinamide%20phosphoribosyltransferase | Nicotinamide phosphoribosyltransferase (NAmPRTase or NAMPT), formerly known as pre-B-cell colony-enhancing factor 1 (PBEF1) or visfatin for its extracellular form (eNAMPT), is an enzyme that in humans is encoded by the NAMPT gene. The intracellular form of this protein (iNAMPT) is the rate-limiting enzyme in the nicotinamide adenine dinucleotide (NAD+) salvage pathway that converts nicotinamide to nicotinamide mononucleotide (NMN) which is responsible for most of the NAD+ formation in mammals. iNAMPT can also catalyze the synthesis of NMN from phosphoribosyl pyrophosphate (PRPP) when ATP is present. eNAMPT has been reported to be a cytokine (PBEF) that activates TLR4, that promotes B cell maturation, and that inhibits neutrophil apoptosis.
Reaction
iNAMPT catalyzes the following chemical reaction:
nicotinamide + 5-phosphoribosyl-1-pyrophosphate (PRPP) ⇌ nicotinamide mononucleotide (NMN) + pyrophosphate (PPi)
Thus, the two substrates of this enzyme are nicotinamide and 5-phosphoribosyl-1-pyrophosphate (PRPP), whereas its two products are nicotinamide mononucleotide and pyrophosphate.
This enzyme belongs to the family of glycosyltransferases, to be specific, the pentosyltransferases. This enzyme participates in nicotinate and nicotinamide metabolism.
Expression and regulation
The liver has the highest iNAMPT activity of any organ, about 10-20 times greater activity than kidney, spleen, heart, muscle, brain or lung. iNAMPT is downregulated by an increase of miR-34a in obesity via a 3'UTR functional binding site of iNAMPT mRNA resulting in a reduction of NAD(+) and decreased SIRT1 activity.
Endurance-trained athletes have twice the expression of iNAMPT in skeletal muscle compared with sedentary type 2 diabetic persons. In a six-week study comparing legs trained by endurance exercise with untrained legs, iNAMPT was increased in the endurance-trained legs. A study of 21 young (under 36) and 22 old (over 54) adults subject to 12 weeks of aerobic and resistance exercise showed aerobic exercise to increase skeletal muscle iNAMPT 12% and 28% in young and old (respectively) and resistance exercise to increase skeletal muscle iNAMPT 25% and 30% in young and old (respectively).
Aging, obesity, and chronic inflammation all reduce iNAMPT (and consequently NAD+) in multiple tissues, and NAMPT activity was shown to promote a proinflammatory transcriptional reprogramming of immune cells (e.g. macrophages) and brain-resident astrocytes.
Function
iNAMPT catalyzes the condensation of nicotinamide (NAM) with 5-phosphoribosyl-1-pyrophosphate to yield nicotinamide mononucleotide (NMN), the first step in the biosynthesis of nicotinamide adenine dinucleotide (NAD+). This salvage pathway, reusing NAM from enzymes using NAD+ (sirtuins, PARPs, CD38) and producing NAM as a waste product, is the major source of NAD+ production in the body. De novo synthesis of NAD+ from tryptophan occurs only in the liver and kidney, overwhelmingly in the liver.
Nomenclature
The systematic name of this enzyme class is nicotinamide-nucleotide:diphosphate phospho-alpha-D-ribosyltransferase. Other names in common use include:
NMN pyrophosphorylase,
nicotinamide mononucleotide pyrophosphorylase,
nicotinamide mononucleotide synthetase, and
NMN synthetase.
Extracellular NAMPT
Extracellular NAMPT (eNAMPT) is functionally different from intracellular NAMPT (iNAMPT), and less well understood (which is why the enzyme has been given so many names: NAMPT, PBEF and visfatin). iNAMPT is secreted by many cell types (notably adipocytes) to become eNAMPT. The sirtuin 1 (SIRT1) enzyme is required for eNAMPT secretion from adipose tissue. eNAMPT may act more as a cytokine, although its receptor (possibly TLR4) has not been definitively identified. It has been demonstrated that eNAMPT can bind to and activate TLR4.
eNAMPT can exist as a dimer or as a monomer, but is normally a circulating dimer. As a monomer, eNAMPT has pro-inflammatory effects that are independent of NAD+, whereas the dimeric form of eNAMPT protects against these effects.
eNAMPT/PBEF/visfatin was originally cloned as a putative cytokine shown to enhance the maturation of B cell precursors in the presence of Interleukin-7 (IL-7) and stem cell factor, it was therefore named "pre-B cell colony-enhancing factor" (PBEF). When the gene encoding the bacterial nicotinamide phosphoribosyltransferase (nadV) was first isolated in Haemophilus ducreyi, it was found to exhibit significant homology to the mammalian PBEF gene. Rongvaux et al. demonstrated genetically that the mouse PBEF gene conferred Nampt enzymatic activity and NAD-independent growth to bacteria lacking nadV. Revollo et al. determined biochemically that the mouse PBEF gene product encodes an eNAMPT enzyme, capable of modulating intracellular NAD levels. Others have since confirmed these findings. More recently, several groups have reported the crystal structure of Nampt/PBEF/visfatin and they all show that this protein is a dimeric type II phosphoribosyltransferase enzyme involved in NAD biosynthesis.
eNAMPT has been shown to be more enzymatically active than iNAMPT, supporting the proposal that eNAMPT from adipose tissue enhances NAD+ in tissues with low levels of iNAMPT, notably pancreatic beta cells and brain neurons.
Hormone claim retracted
Although the original cytokine function of PBEF has not been confirmed to date, others have since reported or suggested a cytokine-like function for this protein. In particular, Nampt/PBEF was recently re-identified as a "new visceral fat-derived hormone" named visfatin. It is reported that visfatin is enriched in the visceral fat of both humans and mice and that its plasma levels increase during the development of obesity. Noteworthy is that visfatin is reported to exert insulin-mimetic effects in cultured cells and to lower plasma glucose levels in mice by binding to and activating the insulin receptor. However, the physiological relevance of visfatin is still in question because its plasma concentration is 40 to 100-fold lower than that of insulin despite having similar receptor-binding affinity. In addition, the ability of visfatin to bind and activate the insulin-receptor has yet to be confirmed by other groups.
On 26 October 2007, A. Fukuhara (first author), I.Shimomura (senior author) and the other co-authors of the paper, who first described Visfatin as a visceral-fat derived hormone that acts by binding and activating the insulin receptor, retracted the entire paper at the suggestion of the editor of the journal 'Science' and recommendation of the Faculty Council of Osaka University Medical School after a report of the Committee for Research Integrity.
As a drug target
Because cancer cells utilize increased glycolysis, and because NAD enhances glycolysis, iNAMPT is often amplified in cancer cells. APO866 is an experimental drug that inhibits this enzyme. It is being tested for treatment of advanced melanoma, cutaneous T-cell lymphoma (CTL), and refractory or relapsed B-chronic lymphocytic leukemia.
The NAMPT inhibitor FK866 has been shown to inhibit epithelial–mesenchymal transition (EMT), and may also inhibit tumor-associated angiogenesis.
Anti-aging biomedical company Calico has licensed the experimental P7C3 analogs involved in enhancing iNAMPT activity. P7C3 compounds have been shown in a number of publications to be beneficial in animal models for age-related neurodegeneration.
References
Further reading
External links
EC 2.4.2
Enzymes of known structure
Cytokines
Obesity | Nicotinamide phosphoribosyltransferase | [
"Chemistry"
] | 1,867 | [
"Cytokines",
"Signal transduction"
] |
9,496,122 | https://en.wikipedia.org/wiki/Supertramp%20%28ecology%29 | In ecology, a supertramp species is any type of animal which follows the "supertramp" strategy of high dispersion among many different habitats, towards none of which it is particularly specialized. Supertramp species are typically the first to arrive in newly available habitats, such as volcanic islands and freshly deforested land; they can have profoundly negative effects on more highly specialized flora and fauna, both directly through predation and indirectly through competition for resources.
The name was coined by Jared Diamond in 1974, as an allusion to both the itinerant lifestyle of the tramp, and the then-popular band Supertramp. Although Diamond originally applied the term only to birds, the term has since been applied to insects and reptiles as well, among others; any species which can migrate can be a supertramp.
In an evolutionary context, the supertramp may represent the first stage of the taxon cycle.
See also
Assembly rules
References
Ecological processes
Ecology terminology
Itinerant living | Supertramp (ecology) | [
"Physics",
"Biology"
] | 201 | [
"Ecology terminology",
"Physical phenomena",
"Ecological processes",
"Earth phenomena"
] |
9,496,834 | https://en.wikipedia.org/wiki/Carry%20operator | The carry operator, symbolized by the ¢ sign, is an abstraction of the operation of determining whether a portion of an adder network generates or propagates a carry. It is defined as follows:
(g, p) ¢ (g′, p′) = (g + p·g′, p·p′)

where (g, p) and (g′, p′) are the (generate, propagate) signal pairs of a more significant and a less significant group of bit positions respectively, "+" denotes logical OR and "·" denotes logical AND.
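As an illustrative sketch (not part of the original article), the operator can be written directly in code and folded across the bit positions of two operands to obtain the carry out of each position; the function and variable names below are chosen for illustration only.

    def carry_operator(hi, lo):
        """(g, p) ¢ (g', p'): combine the (generate, propagate) pair of a more
        significant block (hi) with that of a less significant block (lo)."""
        g, p = hi
        g2, p2 = lo
        return (g | (p & g2), p & p2)

    def carries(a, b, width=8):
        """Carry out of each bit position of a + b, computed with the ¢ operator."""
        acc, out = None, []
        for i in range(width):
            ai, bi = (a >> i) & 1, (b >> i) & 1
            gp = (ai & bi, ai | bi)                      # generate and propagate of bit i
            acc = gp if acc is None else carry_operator(gp, acc)
            out.append(acc[0])                           # group generate = carry out of bit i
        return out

    print(carries(0b0011, 0b0001, width=4))   # [1, 1, 0, 0]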
External links
http://www.aoki.ecei.tohoku.ac.jp/arith/mg/algorithm.html
Computer arithmetic | Carry operator | [
"Mathematics"
] | 76 | [
"Computer arithmetic",
"Arithmetic"
] |
9,497,451 | https://en.wikipedia.org/wiki/Levobetaxolol | Levobetaxolol is a drug used to lower the pressure in the eye in treating conditions such as glaucoma. It is marketed as a 0.25 or 0.5% ophthalmic solution of levobetaxolol hydrochloride under the trade name Betaxon. Levobetaxolol is a beta-adrenergic receptor inhibitor (beta blocker).
Indications
It is indicated for intraocular pressure reduction in patients with open-angle glaucoma or ocular hypertension.
Effect
Levobetaxolol inhibits the beta-1-adrenergic receptor. When applied topically, it reduces intra-ocular pressure (IOP) by 16-23% depending on time of day and the individual. It also has neuroprotective effects. Levobetaxolol has fewer cardiovascular side effects than other beta blockers.
Contraindications and side effects
Levobetaxolol should not be used by people who have sinus bradycardia, atrioventricular block, cardiogenic shock, or overt cardiac failure. The drug has been associated with bradycardia and hypertension.
History
Levobetaxolol was developed in the 1980s. It was FDA approved in 2000.
References
Beta blockers
N-isopropyl-phenoxypropanolamines
Enantiopure drugs
Isopropylamino compounds
Cyclopropyl compounds | Levobetaxolol | [
"Chemistry"
] | 305 | [
"Stereochemistry",
"Enantiopure drugs"
] |
9,497,546 | https://en.wikipedia.org/wiki/Universal%20Interface%20Language | A Universal Interface Language is a language that allows for an interchange of deep information between objects. It does this by allowing an object to experiment on another object to determine what it thinks the object is capable of.
The concept was introduced by Alan Kay as early as 1997 in his keynote speech at OOPSLA.
The goal of a Universal Interface Language is to achieve (automatic) interoperability beyond that provided by an Interface description language such as CORBA or a message exchange protocol such as SOAP.
There are currently no known implementations of a Universal Interface Language. Based on Kay's description, we would expect each object involved in the conversation to have a URL or IP address.
References
Alan Kay: The Computer Revolution Hasn't Happened Yet (Keynote OOPSLA 1997)
Component-based software engineering | Universal Interface Language | [
"Technology",
"Engineering"
] | 161 | [
"Software engineering",
"Component-based software engineering",
"Software engineering stubs",
"Components"
] |
9,497,908 | https://en.wikipedia.org/wiki/Nanotoxicology | Nanotoxicology is the study of the toxicity of nanomaterials. Because of quantum size effects and large surface area to volume ratio, nanomaterials have unique properties compared with their larger counterparts that affect their toxicity. Of the possible hazards, inhalation exposure appears to present the most concern, with animal studies showing pulmonary effects such as inflammation, fibrosis, and carcinogenicity for some nanomaterials. Skin contact and ingestion exposure are also a concern.
Background
Nanomaterials have at least one primary dimension of less than 100 nanometers, and often have properties different from those of their bulk components that are technologically useful. Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are not yet fully understood. Nanoparticles can be divided into combustion-derived nanoparticles (like diesel soot), manufactured nanoparticles like carbon nanotubes and naturally occurring nanoparticles from volcanic eruptions, atmospheric chemistry etc. Typical nanoparticles that have been studied are titanium dioxide, alumina, zinc oxide, carbon black, carbon nanotubes, and buckminsterfullerene.
Nanotoxicology is a sub-specialty of particle toxicology. Nanomaterials appear to have toxicity effects that are unusual and not seen with larger particles, and these smaller particles can pose more of a threat to the human body due to their ability to move with a much higher level of freedom while the body is designed to attack larger particles rather than those of the nanoscale. For example, even inert elements like gold become highly active at nanometer dimensions. Nanotoxicological studies are intended to determine whether and to what extent these properties may pose a threat to the environment and to human beings. Nanoparticles have much larger surface area to unit mass ratios which in some cases may lead to greater pro-inflammatory effects in, for example, lung tissue. In addition, some nanoparticles seem to be able to translocate from their site of deposition to distant sites such as the blood and the brain.
Nanoparticles can be inhaled, swallowed, absorbed through skin and deliberately or accidentally injected during medical procedures. They might be accidentally or inadvertently released from materials implanted into living tissue. One study considers release of airborne engineered nanoparticles at workplaces, and associated worker exposure from various production and handling activities, to be very probable.
Properties that affect toxicity
Size is a key factor in determining the potential toxicity of a particle. However, it is not the only important factor. Other properties of nanomaterials that influence toxicity include: chemical composition, shape, surface structure, surface charge, aggregation and solubility, and the presence or absence of functional groups of other chemicals.
The large number of variables influencing toxicity means that it is difficult to generalise about health risks associated with exposure to nanomaterials – each new nanomaterial must be assessed individually and all material properties must be taken into account.
Composition
Metal-based
Metal based nanoparticles (NPs) are a prominent class of NPs synthesized for their functions as semiconductors, electroluminescents, and thermoelectric materials. Biomedically, these antibacterial NPs have been utilized in drug delivery systems to access areas previously inaccessible to conventional medicine. With the recent increase in interest and development of nanotechnology, many studies have been performed to assess whether the unique characteristics of these NPs, namely their large surface area to volume ratio, might negatively impact the environment upon which they were introduced. Researchers have found that some metal and metal oxide NPs may affect cells inducing DNA breakage and oxidation, mutations, reduced cell viability, warped morphology, induced apoptosis and necrosis, and decreased proliferation. Moreover, metal nanoparticles may persist in the organisms after administration if not carefully engineered.
Carbon-based
The latest toxicology studies on mice as of 2013 involving exposure to carbon nanotubes (CNT) showed a limited pulmonary inflammatory potential of MWCNT at levels corresponding to the average inhalable elemental carbon concentrations observed in U.S.-based CNT facilities. The study estimated that considerable years of exposure are necessary for significant pathology to occur.
One review concludes that the evidence gathered since the discovery of fullerenes overwhelmingly points to C60 being non-toxic. As is the case for toxicity profile with any chemical modification of a structural moiety, the authors suggest that individual molecules be assessed individually.
Other
Other classes of nanomaterials include polymers such as nanocellulose, and dendrimers.
Size
There are many ways that size can affect the toxicity of a nanoparticle. For example, particles of different sizes can deposit in different places in the lungs, and are cleared from the lungs at different rates. Size can also affect the particles' reactivity and the specific mechanism by which they are toxic.
Dispersion state
Many nanoparticles agglomerate or aggregate when they are placed in environmental or biological fluids. The terms agglomeration and aggregation have distinct definitions according to the standards organizations ISO and ASTM, where agglomeration signifies more loosely bound particles and aggregation signifies very tightly bound or fused particles (typically occurring during synthesis or drying). Nanoparticles frequently agglomerate due to the high ionic strength of environmental and biological fluids, which shields the repulsion due to charges on the nanoparticles. Unfortunately, agglomeration has frequently been ignored in nanotoxicity studies, even though agglomeration would be expected to affect nanotoxicity since it changes the size, surface area, and sedimentation properties of the nanoparticles. In addition, many nanoparticles will agglomerate to some extent in the environment or in the body before they reach their target, so it is desirable to study how toxicity is affected by agglomeration.
The agglomeration/deagglomeration (mechanical stability) potentials of airborne engineered nanoparticle clusters also have significant influences on their size distribution profiles at the end-point of their environmental transport routes. Different aerosolization and deagglomeration systems have been established to test stability of nanoparticle agglomerates.
Surface chemistry and charge
NPs, in their implementation, are covered with coatings and sometimes given positive or negative charges depending upon the intended function. Studies have found that these external factors affect the degree of toxicity of NPs.
Routes of administration
Respiratory
Inhalation exposure is the most common route of exposure to airborne particles in the workplace. The deposition of nanoparticles in the respiratory tract is determined by the shape and size of particles or their agglomerates, and they are deposited in the lungs to a greater extent than larger respirable particles. Based on animal studies, nanoparticles may enter the bloodstream from the lungs and translocate to other organs, including the brain. The inhalation risk is affected by the dustiness of the material, the tendency of particles to become airborne in response to a stimulus. Dust generation is affected by the particle shape, size, bulk density, and inherent electrostatic forces, and whether the nanomaterial is a dry powder or incorporated into a slurry or liquid suspension.
Animal studies indicate that carbon nanotubes and carbon nanofibers can cause pulmonary effects including inflammation, granulomas, and pulmonary fibrosis, which were of similar or greater potency when compared with other known fibrogenic materials such as silica, asbestos, and ultrafine carbon black. Some studies in cells or animals have shown genotoxic or carcinogenic effects, or systemic cardiovascular effects from pulmonary exposure. Although the extent to which animal data may predict clinically significant lung effects in workers is not known, the toxicity seen in the short-term animal studies indicate a need for protective action for workers exposed to these nanomaterials. As of 2013, further research was needed in long-term animal studies and epidemiologic studies in workers. No reports of actual adverse health effects in workers using or producing these nanomaterials were known as of 2013. Titanium dioxide (TiO2) dust is considered a lung tumor risk, with ultrafine (nanoscale) particles having an increased mass-based potency relative to fine TiO2, through a secondary genotoxicity mechanism that is not specific to TiO2 but primarily related to particle size and surface area.
Dermal
Some studies suggest that nanomaterials could potentially enter the body through intact skin during occupational exposure. Studies have shown that particles smaller than 1 μm in diameter may penetrate into mechanically flexed skin samples, and that nanoparticles with varying physicochemical properties were able to penetrate the intact skin of pigs. Factors such as size, shape, water solubility, and surface coating directly affect a nanoparticle's potential to penetrate the skin. At this time, it is not fully known whether skin penetration of nanoparticles would result in adverse effects in animal models, although topical application of raw SWCNT to nude mice has been shown to cause dermal irritation, and in vitro studies using primary or cultured human skin cells have shown that carbon nanotubes can enter cells and cause release of pro-inflammatory cytokines, oxidative stress, and decreased viability. It remains unclear, however, how these findings may be extrapolated to a potential occupational risk. In addition, nanoparticles may enter the body through wounds, with particles migrating into the blood and lymph nodes.
Gastrointestinal
Ingestion can occur from unintentional hand-to-mouth transfer of materials; this has been found to happen with traditional materials, and it is scientifically reasonable to assume that it also could happen during handling of nanomaterials. Ingestion may also accompany inhalation exposure because particles that are cleared from the respiratory tract via the mucociliary escalator may be swallowed.
Biodistribution
The extremely small size of nanomaterials also means that they much more readily gain entry into the human body than larger sized particles. How these nanoparticles behave inside the body is still a major question that needs to be resolved. The behavior of nanoparticles is a function of their size, shape and surface reactivity with the surrounding tissue. In principle, a large number of particles could overload the body's phagocytes, cells that ingest and destroy foreign matter, thereby triggering stress reactions that lead to inflammation and weaken the body's defense against other pathogens. In addition to questions about what happens if non-degradable or slowly degradable nanoparticles accumulate in bodily organs, another concern is their potential interaction or interference with biological processes inside the body. Because of their large surface area, nanoparticles will, on exposure to tissue and fluids, immediately adsorb onto their surface some of the macromolecules they encounter. This may, for instance, affect the regulatory mechanisms of enzymes and other proteins.
Nanomaterials are able to cross biological membranes and access cells, tissues and organs that larger-sized particles normally cannot.
Nanomaterials can gain access to the blood stream via inhalation or ingestion. Broken skin is an ineffective particle barrier, suggesting that acne, eczema, shaving wounds or severe sunburn may accelerate skin uptake of nanomaterials. Then, once in the blood stream, nanomaterials can be transported around the body and be taken up by organs and tissues, including the brain, heart, liver, kidneys, spleen, bone marrow and nervous system.
Nanomaterials can be toxic to human tissue and cell cultures (resulting in increased oxidative stress, inflammatory cytokine production and cell death) depending on their composition and concentration.
Mechanisms of toxicity
Oxidative stress
For some types of particles, the smaller they are, the greater their surface area to volume ratio and the higher their chemical reactivity and biological activity. The greater chemical reactivity of nanomaterials can result in increased production of reactive oxygen species (ROS), including free radicals.
ROS production has been found in a diverse range of nanomaterials including carbon fullerenes, carbon nanotubes and nanoparticle metal oxides. ROS and free radical production is one of the primary mechanisms of nanoparticle toxicity; it may result in oxidative stress, inflammation, and consequent damage to proteins, membranes and DNA. For example, metal oxide nanoparticles applied together with magnetic fields that modulate ROS have been reported to lead to enhanced tumor growth.
Cytotoxicity
A primary marker for the damaging effects of NPs has been cell viability, as determined by the state and exposed surface area of the cell membrane. Cell populations exposed to metallic NPs have, in the case of copper oxide, had up to 60% of their cells rendered unviable. When diluted, the positively charged metal ions often experience an electrostatic attraction to the cell membrane of nearby cells, covering the membrane and preventing it from passing the necessary fuels and wastes. With less exposed membrane available for transport and communication, the cells are often rendered inactive.
NPs have been found to induce apoptosis in certain cells primarily due to the mitochondrial damage and oxidative stress brought on by the foreign NPs electrostatic reactions.
Genotoxicity
Metal and metal oxide NPs such as silver, zinc, copper oxide, uraninite, and cobalt oxide have also been found to cause DNA damage. The damage done to the DNA will often result in mutated cells and colonies as found with the HPRT gene test.
Methods and standards
Characterization of a nanomaterial's physical and chemical properties is important for ensuring the reproducibility of toxicology studies, and is also vital for studying how the properties of nanomaterials determine their biological effects. The properties of a nanomaterial such as size distribution and agglomeration state can change as a material is prepared and used in toxicology studies, making it important to measure them at different points in the experiment.
In comparison with more conventional toxicology studies, characterisation of the potential contaminants in nanotoxicology is challenging. The biological systems themselves are still not completely known at this scale. Visualisation methods such as electron microscopy (SEM and TEM) and atomic force microscopy (AFM) allow visualisation of the nano world. Further nanotoxicology studies will require precise characterisation of the specificities of a given nano-element: size, chemical composition, detailed shape, level of aggregation, combination with other vectors, etc. Above all, these properties would have to be determined not only on the nanocomponent before its introduction into the living environment but also in the (mostly aqueous) biological environment.
There is a need for new methodologies to quickly assess the presence and reactivity of nanoparticles in commercial, environmental, and biological samples since current detection techniques require expensive and complex analytical instrumentation.
Policy and regulatory aspects
Toxicology studies of nanomaterials are a key input into determining occupational exposure limits.
The Royal Society identifies the potential for nanoparticles to penetrate the skin, and recommends that the use of nanoparticles in cosmetics be conditional upon a favorable assessment by the relevant European Commission safety advisory committee.
The Woodrow Wilson Centre's Project on Emerging Technologies concludes that there is insufficient funding for human health and safety research, and as a result there is currently limited understanding of the human health and safety risks associated with nanotechnology. While the US National Nanotechnology Initiative reports that around four percent (about $40 million) is dedicated to risk-related research and development, the Woodrow Wilson Centre estimates that only around $11 million is actually directed towards risk-related research. They argued in 2007 that it would be necessary to increase funding to a minimum of $50 million in the following two years so as to fill the gaps in knowledge in these areas.
The potential for workplace exposure was highlighted by the 2004 Royal Society report which recommended a review of existing regulations to assess and control workplace exposure to nanoparticles and nanotubes. The report expressed particular concern for the inhalation of large quantities of nanoparticles by workers involved in the manufacturing process.
Stakeholders concerned by the lack of a regulatory framework to assess and control risks associated with the release of nanoparticles and nanotubes have drawn parallels with bovine spongiform encephalopathy (‘mad cow's disease'), thalidomide, genetically modified food, nuclear energy, reproductive technologies, biotechnology, and asbestosis. In light of such concerns, the Canadian-based ETC Group have called for a moratorium on nano-related research until comprehensive regulatory frameworks are developed that will ensure workplace safety.
See also
International Center for Technology Assessment
Toxicology
References
External links
The Center for Biological and Environmental Nanotechnology (CBEN), Rice University
Toxicology
Nanomedicine | Nanotoxicology | [
"Materials_science",
"Environmental_science"
] | 3,510 | [
"Nanomedicine",
"Nanotechnology",
"Toxicology"
] |
9,498,847 | https://en.wikipedia.org/wiki/Pyruvate%20dehydrogenase%20kinase | Pyruvate dehydrogenase kinase (also pyruvate dehydrogenase complex kinase, PDC kinase, or PDK; ) is a kinase enzyme which acts to inactivate the enzyme pyruvate dehydrogenase by phosphorylating it using ATP.
PDK thus participates in the regulation of the pyruvate dehydrogenase complex, of which pyruvate dehydrogenase is the first component. Both PDK and the pyruvate dehydrogenase complex are located in the mitochondrial matrix of eukaryotes. The complex acts to convert pyruvate (a product of glycolysis in the cytosol) to acetyl-CoA, which is then oxidized in the mitochondria to produce energy in the citric acid cycle. By downregulating the activity of this complex, PDK will decrease the oxidation of pyruvate in mitochondria and increase the conversion of pyruvate to lactate in the cytosol.
The opposite action of PDK, namely the dephosphorylation and activation of pyruvate dehydrogenase, is catalyzed by a phosphoprotein phosphatase called pyruvate dehydrogenase phosphatase.
(Pyruvate dehydrogenase kinase should not be confused with Phosphoinositide-dependent kinase-1, which is also sometimes known as "PDK1".)
Phosphorylation sites
PDK can phosphorylate a serine residue on pyruvate dehydrogenase at three possible sites. Some evidence has shown that phosphorylation at site 1 will nearly completely deactivate the enzyme while phosphorylation at sites 2 and 3 had only a small contribution to complex inactivation. Therefore, it is phosphorylation at site 1 that is responsible for pyruvate dehydrogenase deactivation.
Isozymes
There are four known isozymes of PDK in humans:
PDK1
PDK2
PDK3
PDK4
The primary sequences of the four isozymes are conserved, with 70% identity. The greatest differences occur near the N-terminus.
PDK1 is the largest of the four with 436 residues while PDK2, PDK3 and PDK4 have 407, 406, and 411 residues respectively. The isozymes have different activity and phosphorylation rates at each site. At site 1 in order from fastest to slowest, PDK2 > PDK4 ≈ PDK1 > PDK3. For site 2, PDK3 > PDK4 > PDK2 > PDK1. Only PDK1 can phosphorylate site 3. However, it has been shown that these activities are sensitive to slight changes in pH so the microenvironment of the PDK isozymes may change the reaction rates.
Isozyme abundance has also been shown to be tissue specific. PDK1 is ample in heart cells. PDK3 is most abundant in testis. PDK2 is present in most tissues but low in spleen and lung cells. PDK4 is predominantly found in skeletal muscle and heart tissues.
Mechanism
Pyruvate dehydrogenase is deactivated when phosphorylated by PDK. Normally, the active site of pyruvate dehydrogenase is in a stabilized and ordered conformation supported by a network of hydrogen bonds. However, phosphorylation by PDK at site 1 causes steric clashes with another nearby serine residue due to both the increased size and negative charges associated with the phosphorylated residue. This disrupts the hydrogen bond network and disorders the conformation of two phosphorylation loops. These loops prevent the reductive acetylation step, thus halting overall activity of the enzyme. The conformational changes and mechanism of deactivation for phosphorylation at sites 2 and 3 are not known at this time.
Regulation
Pyruvate dehydrogenase kinase is activated by ATP, NADH and acetyl-CoA. It is inhibited by ADP, NAD+, CoA-SH and pyruvate.
Each isozyme responds to each of these factors slightly differently. NADH stimulates PDK1 activity by 20% and PDK2 activity by 30%. NADH with acetyl-CoA increases activity in these enzymes by 200% and 300% respectively. In similar conditions, PDK3 is unresponsive to NADH and inhibited by NADH with acetyl-CoA. PDK4 has a 200% activity increase with NADH, but adding acetyl-CoA does not increase activity further.
Disease relevance
PDK isoforms are elevated in obesity, diabetes, heart failure, and cancer. Some studies have shown that cells that lack insulin (or are insensitive to insulin) overexpress PDK4. As a result, the pyruvate formed from glycolysis cannot be oxidized which leads to hyperglycaemia due to the fact that glucose in the blood cannot be used efficiently. Therefore, several drugs target PDK4 hoping to treat type II diabetes.
PDK1 has shown to have increased activity in hypoxic cancer cells due to the presence of HIF-1. PDK1 shunts pyruvate away from the citric acid cycle and keeps the hypoxic cell alive. Therefore, PDK1 inhibition has been suggested as an antitumor therapy since PDK1 prevents apoptosis in these cancerous cells. Similarly, PDK3 has been shown to be overexpressed in colon cancer cell lines. Three proposed inhibitors are AZD7545 and dichloroacetate which both bind to PDK1, and Radicicol which binds to PDK3.
Mutations in the PDK3 gene are a rare cause of X-linked Charcot-Marie-Tooth disease (CMTX6).
In dogs, specifically Doberman Pinschers, a mutation in the PDK4 gene is associated with dilated cardiomyopathy (DCM).
References
External links
EC 2.7.11
Citric acid cycle
Glycolysis | Pyruvate dehydrogenase kinase | [
"Chemistry"
] | 1,316 | [
"Carbohydrate metabolism",
"Glycolysis",
"Citric acid cycle"
] |
9,499,251 | https://en.wikipedia.org/wiki/Antinaturalism%20%28politics%29 | Antinaturalism, or anti-naturalism, is the opposition to essentialist invocations of nature or natural order. It is associated with antispeciesism, anti-racism, feminism, and transhumanism.
Antinaturalist philosophy is closely linked to the French animal rights movement and materialist feminism. It is also supported by xenofeminists, who advocate for a form of feminism holding that if nature is unjust, it should be changed. Notable advocates include David Olivier and Yves Bonnardel.
Philosophy
Antinaturalists defend the inherent and absolute moral permissibility of abortion, body modification, divorce, contraception, sex reassignment surgery, and other means by which they believe human beings can assume control of their own bodies and their own environments. Antinaturalism stands in contrast to some radical environmentalist movements, which state that nature itself is sacred and should be preserved for its own sake; instead it advances the idea that all human acts are natural and that ecological preservation is important inasmuch as it is necessary for the well-being of sentient beings, not because of some inherently sacred attribute of nature as a whole. Yves Bonnardel argues that naturalist ideology "goes hand in hand with and legitimises speciesist oppression of non-human sentient beings", and that using natural law to justify the reintroduction of predatory animals to control populations of other animals is a form of speciesism.
See also
Appeal to nature
Culturalism
Gender essentialism
Gnosticism
Morphological freedom
Naturalistic fallacy
Predation problem
Wild animal suffering
References
Further reading
Haber, Stéphane (2006). Critique de l'antinaturalisme. Études sur Foucault, Butler, Habermas ["Critique of Antinaturalism. Studies on Foucault, Butler, Habermas"]. France University Press (1, 2).
Animal rights movement
Applied ethics
Feminist movements and ideologies
Transhumanism | Antinaturalism (politics) | [
"Technology",
"Engineering",
"Biology"
] | 404 | [
"Behavior",
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology",
"Human behavior",
"Applied ethics"
] |
9,499,590 | https://en.wikipedia.org/wiki/Postnaturalism | Postnaturalism is the theory of the postnatural, a term coined to describe organisms that have been intentionally and heritably altered by humans. Postnaturalism is a cultural process whereby organisms are bred to satisfy a specific cultural purpose. It can be used to read these organisms, which serve as insights into our culture by reflecting desires and beliefs prevalent at the time of breeding. This has direct implications for the evolutionary path of these organisms, whittling down undesirable traits to leave only those culturally sought out. Postnaturalism argues that in so doing, humans have and continue to actively alter the evolutionary path of a postnatural organism to suit our cultural desires. The agricultural practice of monoculture, for instance, is just one example of postnatural organisms who have been bred to such an extent that the modern-day species look nothing like their pre-neolithic counterparts. The breeding of these species for this purpose can be seen to be reflected in notable diet changes during this period, which proliferated during ensuing sedentism and urbanisation.
Postnaturalism is a highly selective process. For every organism that has become used in our society, there are countless more that have remained non-postnatural for whatever reason ranging from a perceived lack of future use from them or traits that make them too difficult to farm. One such example is the golden orb-weaver spider which produces a strong, light and useful silk, however they are known to be cannibalistic and thus impossible to farm on a large scale.
Postnatural History
Postnatural history is defined as the "study of the origins, habitats, and evolution of organisms that have been intentionally and heritably altered by humans", which serves as a "record of the influence of human culture on evolution". So termed to differentiate it from traditional natural history studies, it is the subject of a storefront in Pittsburgh, United States called the Center for PostNatural History, which builds on this concept to produce an array of displays of organisms which are all postnatural.
Domestication
The commencement of the postnatural can be considered to date back to prehistoric civilisation's early interaction with wild species. Here, domestication occurred as a way of adapting the environment around prehistoric people to suit their needs and desires through a gradual process of refinement. Domestication succeeds through the heritable continuation of an organism's phenotype and genotype, allowing each generation to continue on from the previous one. In its simplest terms, domestication alters a species through its survival of a change in habitat, food source, or other significant conditions. These changes can be sought for a variety of reasons, including physical attributes, behavioural characteristics, lifespan, and adaptability to a change in environment.
It is thought human beings have been experimenting with selectively breeding organisms for around 10,000 years, and over thousands of years humans have influenced many taxonomic groups, with bioengineering now representing new forms of genetic information transfer, creation, and inheritance. Coupled with climate change, this has led scientists and policy makers to prioritise “ecosystem services” essential to humans, such as pollination and the replenishment of fish stocks, and to research phenomena such as the terminal decline of coral in the Great Barrier Reef.
Selective breeding
Postnatural practices include selective breeding, a process by which humans purposefully breed certain organisms for particular biological traits. The practice was known to the Romans and has remained in common use to this day. Michael Pollan argues that Charles Darwin saw this process and considered it artificial, rather than natural, selection, but in terms of evolutionary progress this distinction becomes irrelevant to the species; the change is irreversible all the same. This is simply because evolution is understood to be unable to 'undo' previous changes, but, particularly in proteins, continues along in a progression of its biological structure depending on what traits are required for survival.
Induced Mutation
Induced mutation in the context of postnaturalism is the process whereby a specific genetic mutation - usually a rare occurrence - is selectively isolated by people and encouraged to reproduce in future offspring. This differs from the general understanding of a mutation that was induced by treatment from a particular chemical agent in a living species.
A good example of this is the albino rat, which in the wild possesses a notoriously fatal genetic make-up, since its coat makes it much easier for predators to identify; that same coat drew the interest of breeders, who wished to distinguish these rats from the more unhygienic-looking sewer rats prevalent in major cities in the 1800s. So numerous were rats in industrial cities that they became the subject of a sport based on their extermination, rat-baiting. The albino rat thus became distinguished from the regular unsightly sewer rat and even came to be sold as a pet; the owning of one as a child was allegedly the basis for Beatrix Potter's book Samuel Whiskers.
Genetic Engineering
Genetic Engineering can be considered to be the purposeful alteration of the genetic makeup of an organism through the introduction of genes from sources not belonging to that organism. The isolation of a particular gene and introduction of another is often achieved through the use of biotechnology more broadly.
Genetic engineering is a contentious topic and even in searching for a definition, there are several alternatives available highlighting the variation in perceptions around what is considered to be the goal and process of genetic engineering. However, genetic engineering embodies much of what is considered postnatural, but doing so with the next level of technology than used in previous methods such as those mentioned in the above sections. Increased use of advanced technology allows for improved precision and accuracy of methods, and carrying out of transgenics, whilst the use of laboratories as settings is designed to prevent the release of experimented organisms into the wild unless cleared with the relevant protocols and regulations beforehand.
Concerns over postnaturalism's proliferation in monoculture
The current agricultural practice of monoculture is intricately connected with postnaturalism, particularly now that much of contemporary agricultural practice has become mechanised. This mechanisation sometimes requires uniformity to comply with existing tools, machines and practices, such as the breeding of chickens to a uniform size to ensure they fit into chicken harvesting machines. On other occasions, rather than breeding a particular species to a uniform size, some parts of organisms have been so heavily bred and altered that dystocia regularly occurs, as with Belgian Blue cattle, whose calves have grown so large relative to the reduced size of the cow's birth canal that the canal regularly becomes constricted or even entirely blocked. The result is the routine scheduling of Caesarean sections.
See also
Monoculture
Anthropocene
Center for PostNatural History
Culture
Cultural artifact
Natural environment
References
Biological evolution
Breeding
Selection | Postnaturalism | [
"Biology"
] | 1,362 | [
"Evolutionary processes",
"Behavior",
"Selection",
"Reproduction",
"Breeding"
] |
9,499,731 | https://en.wikipedia.org/wiki/Raja%20Ampat%20Islands | Raja Ampat, or the Four Kings, is an archipelago located off of the northwest tip of Bird's Head Peninsula (on the island of New Guinea), Southwest Papua province, Indonesia. It comprises over 1,500 small islands, cays, and shoals around the four main islands of Misool, Salawati, Batanta, and Waigeo, and the smaller island of Kofiau.
The Raja Ampat archipelago straddles the equator and forms part of the Coral Triangle, an area of Southeast Asian seas containing the richest marine biodiversity on earth. The Coral Triangle itself is an approximate area west-southwest of the Philippines, east-northeast and southeast of the island of Borneo, and north, east and west of the island of New Guinea, including the seas in between. Thousands of species of marine organisms, from the tiniest cleaner shrimp and camouflaged pygmy seahorses to the majestic cetaceans and whale sharks, thrive in these waters.
Administratively, the archipelago is part of the province of Southwest Papua. Most of the islands constitute the Raja Ampat Regency, which was separated from Sorong Regency in 2004. The regency encompasses around of land and sea, of which 8,034.44 km2 constitutes the land area and has a population of 64,141 at the 2020 Census; the official estimate as at mid 2022 was 66,839. This excludes the southern half of Salawati Island, which is not part of this regency but instead constitutes the Salawati Selatan and Salawati Tengah Districts of Sorong Regency.
History
Archaeological evidence indicates that the Raja Ampat Islands were first visited by humans over 50,000 years ago. At this time, Misool and Salawati were connected to New Guinea, while Waigeo and Batanta formed an island called Waitanta. At the Mololo Cave site, excavations show that early people were processing tree resins and hunting native animals. Pottery-making communities moved into Raja Ampat about 3500–3000 years ago and may have brought Austronesian languages to the area.
The name of Raja Ampat (Raja means king, and empat means four) comes from local mythology that told of a woman who found seven eggs, in one version this woman was Boki Tabai, daughter of Al-Mansur of Tidore and wife to Gurabesi. Three of the seven hatched and became kings who occupied Raja Ampat Islands, the fourth hatched and settled in Waigama but later migrated to Kalimuri (Seram). In another version, the fifth egg hatched into a woman (Pin Take) who later washed away to Biak, married Manar Makeri, and later gave birth to Gurabesi. The sixth egg hatched into a spirit, while the seventh egg did not hatch and turned to stone and worshipped as a king in Kali Raja (Wawiyai, Waigeo). Historically the 'four' kingdoms were Waigeo, Salawati, Sailolof, Misool, and Waigama. Locally Waigama is not considered one of the Raja Ampat, while Sailolof is not considered one of the Raja Ampat by Tidore.
The first recorded sighting and landing by Europeans of the Raja Ampat Islands was by the Portuguese navigator Jorge de Menezes and his crew in 1526, en route from Biak, the Bird's Head Peninsula, and Waigeo, to Halmahera (Ternate).
Islam first arrived in the Raja Ampat Islands in the 15th century due to political and economic contacts with the Bacan Sultanate. During the 16th and 17th centuries, the Maluku-based Sultanate of Tidore had close economic and political ties with the islands, especially with Gurabesi. During this period, Islam became firmly established, and local chiefs began adopting Islam.
As a consequence of these ties, Raja Ampat was considered a part of the Sultanate of Tidore. After the Dutch invaded Maluku, it was claimed by the Netherlands.
The English explorer William Dampier gave his name to Dampier Strait, which separates Batanta Island from Waigeo Island. To the east, there is a strait that separates Batanta from Salawati. In 1759 Captain William Wilson sailing in the East Indiaman Pitt navigated these waters and named a strait the 'Pitt Strait', after his vessel; this was probably the channel between Batanta and Salawati.
Climate
The islands have a tropical climate, with temperatures ranging from .
Water temperature in North Raja Ampat ranges from , while in the South in Misool, it ranges from (Water temperature chart in Misool).
Ecology
Terrestrial
The islands are part of the Vogelkop-Aru lowland rain forests ecoregion. The rainforests that cover the islands are the natural habitat of many species of birds, mammals, reptiles, and insects. Two species of bird-of-paradise, the red bird-of-paradise (Paradisaea rubra) and Wilson's bird-of-paradise (Diphyllodes respublica), are endemic to the islands of Waigeo, Gam, and Batanta.
The recently discovered palm tree Wallaceodoxa raja-ampat is endemic to the Raja Ampat Islands.
Marine
Raja Ampat is considered the global epicentre of tropical marine biodiversity and is referred to as "The Crown Jewel" of the Bird's Head Seascape, which also includes Cenderawasih Bay and Triton Bay. The region contains more than 600 species of hard corals, constituting about 75% of the world's known species, and more than 1,700 species of reef fish – including on both shallow and mesophotic reefs. Compared to similarly-sized ecosystems elsewhere in the world, Raja Ampat's biodiversity is arguably the richest in the world. Endangered and rare marine mammals, such as dugongs, whales (such as blue, pygmy blue, Bryde's, Omura's, sperm), dolphins, and even orcas occur here. Endangered whale sharks, the largest extant fish species on earth, also thrive in this region.
In the northeast region of Waigeo Island, local villagers have been involved in turtle conservation initiatives by protecting nests or relocating eggs of leatherback, olive ridley, and hawksbill turtles. Their works are supported by the local government and NGOs.
Raja Ampat Marine Recreation Park was designated in 2009. It is composed of four marine areas – the waters around northern Salawati, Batanta, and southwestern Waigeo, Mayalibit Bay in central Waigeo, the waters southeast of Misool, and waters around the Sembilan Islands north of Misool and west of Salawati.
The oceanic natural resources around Raja Ampat give the area significant potential as a tourist area, drawing divers, researchers and others with an interest in the marine life there.
According to Conservation International, marine surveys suggest that the marine life diversity in the Raja Ampat area is the highest recorded on Earth. Diversity is considerably greater than any other area sampled in the Coral Triangle composed of Indonesia, Malaysia, Philippines, Papua New Guinea, the Solomon Islands, and East Timor. The Coral Triangle is the heart of the world's coral reef biodiversity, making Raja Ampat quite possibly the richest coral reef ecosystem in the world.
The area's massive coral colonies, along with relatively high sea surface temperatures, also suggest that its reefs may be relatively resistant to threats like coral bleaching and coral disease, which now jeopardize the survival of other coral ecosystems around the world. The Raja Ampat islands are remote and relatively undisturbed by humans.
The crown-of-thorns starfish eats Raja Ampat's corals, and the destruction this causes among reefs has posed a threat to tourism. The crown-of-thorns starfish, which "can grow around as big as a trash-can lid" and is covered in sharp, stinging spines, has proliferated due to increasing nitrogen in the water from human waste, which in turn causes a spike in phytoplankton on which the starfish feed. In 2019, local divers began the task of reducing starfish populations by injecting the starfish with a 10% vinegar solution; the dead starfish can then be eaten by local fish.
The high marine diversity in Raja Ampat is strongly influenced by its position between the Indian and Pacific Oceans, as coral and fish larvae are more easily shared between the two oceans. Raja Ampat's coral diversity, resilience, and role as a source for larval dispersal make it a global priority for marine protection. Its location results in it being a biogeographic crossroads between Indonesia, Micronesia and the Arafura Sea.
With 1,508 fish species, 537 coral species (a remarkable 96% of all scleractinians recorded from Indonesia are likely to occur in these islands, and 75% of all species that exist in the world), and 699 mollusk species, the variety of marine life is staggering. Raja Ampat is identified as the epicenter of restricted-range reef fishes in the Coral Triangle, with over 100 species of endemic reef fishes, and also an extremely high diversity of reef coral species (over 600 species).
The Raja Ampat Islands have at least three ponds containing harmless jellyfish, all in the Misool area.
The submarine world around the islands was the subject of the documentary film Edies Paradies 3 (by Otto C. Honegger), which has been broadcast by the Swiss television network Schweizer Fernsehen.
In March 2017 the cruise ship Caledonian Sky owned by British tour operator Noble Caledonia got caught in a low tide and ran aground in the reef. An evaluation team estimated that of the reef was destroyed, which will likely result in a compensation claim of $1.28 million – $1.92 million. A team of environmentalists and academics estimated much more substantial damage, with potential losses to Indonesia estimated at $18.6 million and a recovery time for the reef spanning decades.
A zebra shark breeding and release initiative started in 2022, aiming to release 500 sharks by 2032. The wild population was formerly abundant, but a fishing industry that ballooned starting in the 1990s reduced the population to perhaps just 20 individuals.
Population
The main occupation for people in this area is fishing, since the area is dominated by the sea. They live in small tribal communities spread around the area. Although traditional culture still strongly exists, they are very welcoming to visitors. Raja Ampat people have similarities with the surrounding Moluccan people and Papuan people, as they speak Papuan and Austronesian languages. The Muslim proportion is much higher compared with other Papuan areas. However, the West Papua province as a whole has a larger Muslim population because of its extensive history with the Sultanate of Tidore.
Administration
Most of the islands make up the Raja Ampat Regency, a regency () forming part of Southwest Papua. It came into existence in 2004, before which the archipelago was part of Sorong Regency. The southern part of the island of Salawati is not part of the Raja Ampat Regency. Instead, it constitutes the Salawati Selatan and Salawati Tengah Districts of Sorong Regency.
Raja Ampat Regency is subdivided into the following districts (kecamatan):
Note: (a) including the Boo Islands, which lie some distance to the west of Kofiau. (b) Not to be confused with Salawati Tengah District of Sorong Regency; Salawati Tengah District of Raja Ampat Regency actually forms the southeast portion of Salawati Island. (c) the Ayau Islands (including Ayau District) lie some distance to the north of Waigeo.
Taking account of the 2,757 people of Salawati Selatan and Salawati Tengah Districts which are administratively in Sorong Regency, the total population of the archipelago added up to 69,596 in mid 2022.
There are proposals to divide the current Raja Ampat Regency into three, with Waigeo and its surrounding small islands forming a new North Raja Ampat Regency (Kabupaten Raja Ampat Utara), and with Misool and Kofiau and their surrounding small islands forming a new South Raja Ampat Regency (Kabupaten Raja Ampat Selatan), leaving the residue of the existing Regency to cover the northern part of Salawati Island (the rest of Salawati Island still lies within Sorong Regency) and Batanta Island (which forms Selat Sagawin District).
See also
Raja Ampat languages
References
External links
Bird's Head Seascape
Raja Ampat dive sites, map, videos and pictures
Population Viability Analysis (PVA) Report for Population Augmentation of Zebra Sharks (Stegostoma tigrinum) in Raja Ampat, Indonesia
Coral reefs
Islands of Western New Guinea
Landforms of Southwest Papua
Archipelagoes of Indonesia
Regencies of Southwest Papua | Raja Ampat Islands | [
"Biology"
] | 2,695 | [
"Biogeomorphology",
"Coral reefs"
] |
9,499,804 | https://en.wikipedia.org/wiki/Probabilistic%20design | Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability upon the performance of an engineering system during the design phase. Typically, these effects studied and optimized are related to quality and reliability. It differs from the classical approach to design by assuming a small probability of failure instead of using the safety factor. Probabilistic design is used in a variety of different applications to assess the likelihood of failure. Disciplines which extensively use probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (particularly useful in limit state design) and manufacturing.
Objective and motivations
When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system.
Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.
Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors, while also providing a much more precise determination of failure probability. This could be the one design option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma.
Sources of variability
Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships.
The first source of variability is statistical, due to the limitations of having a finite sample size to estimate parameters such as yield stress, Young's modulus, and true strain. Measurement uncertainty is the most easily minimized out of these three sources, as variance is proportional to the inverse of the sample size.
We can represent variance due to measurement uncertainties as a corrective factor B, which is multiplied by the true mean μ to yield the measured mean x̄. Equivalently, x̄ = B μ.
This yields the result B = x̄ / μ, and the variance of the corrective factor is given as:
Var(B) ≈ σ² / (n μ²),
where B is the correction factor, μ is the true mean, x̄ is the measured mean, σ² is the variance of a single measurement, and n is the number of measurements made.
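As a quick numerical check of this 1/n behaviour (a minimal sketch with made-up numbers, not taken from the source), the variance of the sample mean can be estimated for a few sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma = 250.0, 25.0   # hypothetical yield stress (MPa) and single-measurement scatter

for n in (5, 50, 500):
    # Repeat the n-measurement experiment many times and record the sample mean.
    means = rng.normal(true_mean, sigma, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:3d}   var(sample mean) = {means.var():6.2f}   sigma^2 / n = {sigma**2 / n:6.2f}")
```

The empirical variance of the mean tracks sigma^2/n, which is why taking more measurements is the easiest way to shrink this source of variability.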
The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models we use to understand loading and their associated effects in materials. The uncertainty from the model of a physical measurable can be determined if both theoretical values according to the model and experimental results are available.
The measured value is equivalent to the theoretical model prediction multiplied by a model error B_m, plus the experimental error ε; equivalently, x_measured = B_m · x_model + ε,
and the model error takes a general regression form,
whose coefficients a_i are determined from experimental data.
Finally, the last variability source comes from the intrinsic variability of any physical measurable. There is a fundamental random uncertainty associated with all physical phenomena, and it is comparatively the most difficult to minimize this variability. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variability.
Comparison to classical design principles
Consider the classical approach to performing tensile testing in materials. The stress experienced by a material is given as a singular value (i.e., force applied divided by the cross-sectional area perpendicular to the loading axis). The yield stress, which is the maximum stress a material can support before plastic deformation, is also given as a singular value. Under this approach, there is a 0% chance of material failure below the yield stress, and a 100% chance of failure above it. However, these assumptions break down in the real world.
The yield stress of a material is often only known to a certain precision, meaning that there is an uncertainty and therefore a probability distribution associated with the known value. Let the probability distribution function of the yield strength be given as f_S(x).
Similarly, the applied load or predicted load can also only be known to a certain precision, and the range of stress which the material will undergo is unknown as well. Let this probability distribution be given as f_L(x).
The probability of failure is equivalent to the area of overlap (interference) between these two distribution functions; mathematically:
P_f = P(S < L) = ∫ F_S(x) f_L(x) dx,
where F_S is the cumulative distribution of the strength; or equivalently, if we let the difference between yield stress and applied load equal a third random variable Z = S − L, then:
P_f = P(Z < 0),
where the variance of the difference (for independent S and L) is given by σ_Z² = σ_S² + σ_L².
The probabilistic design principles allow for precise determination of failure probability, whereas the classical model assumes absolutely no failure before yield strength. It is clear that the classical applied load vs. yield stress model has limitations, so modeling these variables with a probability distribution to calculate failure probability is a more precise approach. The probabilistic design approach allows for the determination of material failure under all loading conditions, associating quantitative probabilities to failure chance in place of a definitive yes or no.
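As an illustration of the interference calculation sketched above (a hedged example with invented numbers; the assumption of normal distributions is mine, not the article's), the failure probability can be obtained analytically and checked by simulation:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical distributions (MPa): yield strength S and applied stress L.
mu_S, sd_S = 350.0, 25.0
mu_L, sd_L = 280.0, 30.0

# Z = S - L is normal (assuming independence); failure means Z < 0.
mu_Z = mu_S - mu_L
sd_Z = np.hypot(sd_S, sd_L)
p_fail = norm.cdf(0.0, loc=mu_Z, scale=sd_Z)

# Monte Carlo check of the same interference calculation.
rng = np.random.default_rng(1)
S = rng.normal(mu_S, sd_S, 1_000_000)
L = rng.normal(mu_L, sd_L, 1_000_000)
p_fail_mc = np.mean(S < L)

print(f"analytic  P(failure) = {p_fail:.4%}")
print(f"simulated P(failure) = {p_fail_mc:.4%}")
```

A classical safety-factor check (mean strength over mean load, here about 1.25) would simply report "no failure", whereas the probabilistic calculation attaches a small but non-zero failure probability (a few percent in this example) to the same design.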
Methods used to determine variability
In essence, probabilistic design focuses upon the prediction of the effects of variability. In order to be able to predict and calculate variability associated with model uncertainty, many methods have been devised and utilized across different disciplines to determine theoretical values for parameters such as stress and strain. Examples of theoretical models used alongside probabilistic design include:
Finite element analysis
Stochastic finite element method
Boundary element method
Meshfree methods
Analytical methods (refer to classical design principles)
Additionally, there are many statistical methods used to quantify and predict the random variability in the desired measurable. Some methods that are used to predict the random variability of an output include:
the Monte Carlo method (including Latin hypercubes);
propagation of error;
design of experiments (DOE)
the method of moments
Statistical interference
quality function deployment
Failure mode and effects analysis
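As a minimal sketch of the first two entries in this list (the beam geometry and the input distributions are hypothetical, chosen only for illustration), the variability of an output quantity such as a bending stress can be propagated from the input distributions either by Monte Carlo sampling or by a first-order propagation-of-error estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# Inputs as random variables: tip load F (N), beam width b and height h (m).
F = rng.normal(10_000, 800, N)
b = rng.normal(0.050, 0.001, N)
h = rng.normal(0.100, 0.002, N)

# Output: maximum bending stress of a 1 m cantilever, sigma = 6 F L / (b h^2).
L_beam = 1.0
stress = 6 * F * L_beam / (b * h**2)
print(f"Monte Carlo : mean = {stress.mean()/1e6:.1f} MPa, std = {stress.std()/1e6:.1f} MPa")

# First-order propagation of error: for a product/quotient of independent factors,
# the relative variances add, weighted by the squared exponent of each factor.
rel_var = (800 / 10_000)**2 + (0.001 / 0.050)**2 + (2 * 0.002 / 0.100)**2
mean_est = 6 * 10_000 * L_beam / (0.050 * 0.100**2)
print(f"First order : mean ~ {mean_est/1e6:.1f} MPa, std ~ {mean_est*np.sqrt(rel_var)/1e6:.1f} MPa")
```

Both routes give essentially the same answer here; Monte Carlo remains usable when the model is too nonlinear, or the inputs too strongly correlated, for the first-order formula to hold.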
See also
Interval finite element
Stochastic modeling
First-order second-moment method
Weibull distribution
Footnotes
References
Ang and Tang (2006) Probability Concepts in Engineering: Emphasis on Applications to Civil and Environmental Engineering. John Wiley & Sons.
Ash (1993) The Probability Tutoring Book: An Intuitive Course for Engineers and Scientists (and Everyone Else). Wiley-IEEE Press.
Clausing (1994) Total Quality Development: A Step-By-Step Guide to World-Class Concurrent Engineering. American Society of Mechanical Engineers.
Haugen (1980) Probabilistic mechanical design. Wiley.
Papoulis (2002) Probability, Random Variables and Stochastic Process. McGraw-Hill Publishing Co.
Siddall (1982) Optimal Engineering Design. CRC.
Dodson, B., Hammett, P., and Klerx, R. (2014) Probabilistic Design for Optimization and Robustness for Engineers John Wiley & Sons, Inc.
Cederbaum G., Elishakoff I., Aboudi J. and Librescu L., Random Vibration and Reliability of Composite Structures, Technomic, Lancaster, 1992, XIII + pp. 191;
Elishakoff I., Lin Y.K. and Zhu L.P., Probabilistic and Convex Modeling of Acoustically Excited Structures, Elsevier Science Publishers, Amsterdam, 1994, VIII + pp. 296;
Elishakoff I., Probabilistic Methods in the Theory of Structures: Random Strength of Materials, Random Vibration, and Buckling, World Scientific, Singapore, , 2017
External links
Probabilistic design
Non Deterministic Approaches in Engineering
Engineering statistics
Design
Quality | Probabilistic design | [
"Engineering"
] | 1,634 | [
"Design",
"Engineering statistics"
] |
9,500,194 | https://en.wikipedia.org/wiki/Valsa%20sordida | Valsa sordida is a species of fungus within the family Valsaceae. A plant pathogen, it causes dieback of small branches and twigs of broad-leaved trees, usually poplar. It is found in Africa, Australasia, Europe, and North and South America. The anamorph is Cytospora chrysosperma.
References
Sordariomycetes
Fungi described in 1870
Fungi of Africa
Fungi of Europe
Fungi of Australia
Fungi of North America
Fungi of South America
Fungal tree pathogens and diseases
Diaporthales
Taxa named by Theodor Rudolph Joseph Nitschke
Fungus species | Valsa sordida | [
"Biology"
] | 127 | [
"Fungi",
"Fungus species"
] |
9,500,213 | https://en.wikipedia.org/wiki/Ophiostoma%20ulmi | Ophiostoma ulmi is a species of fungus in the family Ophiostomataceae. It is one of the causative agents of Dutch elm disease. It was first described under the name Graphium ulmi, and later transferred to the genus Ophiostoma.
Dutch elm disease originated in Europe in the early 1900s. Elm trees were once ecologically valuable trees that dominated mixed broadleaf forests, floodplains, and low areas near rivers and streams. They were planted in urban settings because of their aesthetic appeal and their ability to provide shade, due to their V-shaped, vase-like form. Outbreaks of Dutch elm disease in the 1920s and again in the 1970s were responsible for the death of more than 40 million American elm trees.
Ophiostoma ulmi was the first known cause of Dutch elm disease. Since its discovery in 1910, new forms of the fungus, specifically Ophiostoma novo-ulmi, have emerged and appear to be more resistant to control measures and more aggressive in their infection.
Host range and symptoms
Ophiostoma ulmi has a relatively narrow host range as it infects only elm trees (Ulmus spp.) and Zelkova carpinifolia. Habitat preferences of elms play a large part in determining their susceptibility as a host for Dutch elm disease. For example, of the three native European elm species (Ulmus glabra Huds., Ulmus laevis Pall. and Ulmus minor), all are susceptible to infection by O. ulmi, but Ulmus glabra has a much smaller chance of being inoculated than Ulmus minor. This is because the insect vector prefers the warm humid habitat of Ulmus minor to the cold hemiboreal habitat of Ulmus glabra. For this reason, Ulmus minor has been almost eliminated by the disease. In North America, Ulmus americana, U. thomasii, U. alata, U. serotina and U. rubra are listed as highly susceptible to Dutch Elm disease, while U. crassifolia is less threatened.
Ophiostoma ulmi causes symptoms commonly associated with most vascular wilts. Trees that have been infected by a vector will exhibit symptoms of leaf wilting and yellowing on branches and twigs that have been colonized by the Scolytid beetle. These symptoms are most often apparent from July into the autumn months. Trees that have contracted the disease via root grafts will often proceed much more quickly because the whole tree is compromised at once. Diagnosis of this disease is usually done by examining the xylem tissue of twigs and branches of the trees. Symptoms of brown streaking that runs in the direction of the grain of the wood, and tylosis formation by the tree as a reaction to the fungal infection are characteristic of this disease.
Breeding efforts began as early as the 1920s to try to combat this disease, and some crosses bred from resistant Asian species of elm and susceptible European species have shown a decrease in susceptibility to the pathogen. However, with the introduction of Ophiostoma novo-ulmi many of these resistant species struggle to survive.
Environment
Ophiostoma ulmi infects the bark and xylem tissue of elm trees. It has been found in northern Africa and Oceania, but the vast majority of elms that are or have been colonized by O. ulmi can be found in Europe, west central Asia, and North America. While there is some speculation about how the disease traveled to North America, most experts agree that it was the fault of humans.
In the spring, trees produce what is known as “springwood” from the stored starches of the previous growing season. This tissue is characterized by long xylem vessels with relatively thin walls, making it the ideal habitat for the pathogen. In springwood, the fungus spreads rapidly, and it is likely that the tree will die. Later in the growing season, the elm will utilize sugars produced by the leaves to nurture the formation of “summerwood”. Summerwood vessels are typically shorter with thicker walls, making it harder for the infection to spread.
The pathogen enters its host with assistance from the Scolytid beetle, and will colonize the tunnels, or breeding galleries, made by the insect. The greatest impact of this disease is seen in urban settings and in trees that have previously been impaired by drought or insects.
O. ulmi prefers a subtropical climate for sporulation, with optimal temperatures between 27.5 °C and 30 °C and high moisture, which has largely limited the pathogen's reach in high altitudes and northern latitudes. The formation of other structures can tolerate cooler environments. Conidia will form at or around 20 °C, while perithecia form at 8-10 °C. Subjection to high summer temperatures combined with low moisture content and ensuing low nutrient levels in the bark of elm trees greatly restricts sporulation of the fungi. Because of this, it is common for the fungus to avoid branches with small diameters and localize in areas with thick bark, high moisture, and abundant nutrients.
Chemical control of this disease through insecticides and fungicides has not proven successful in the past and is often expensive. Many communities have adopted cultural practices to help manage the spread of this disease. This includes sanitation, avoiding planting elm monocultures and breaking root grafts between elms.
Disease cycle
Ophiostoma ulmi can reproduce asexually by overwintering in both the bark and upper layers of dead or dying elm wood as mycelia and synnemata. Synnemata produce conidia that are sticky and can be spread by vectors. In Dutch elm disease, the vectors that transmit Ophiostoma ulmi are Scolytid beetles. The conidia stick to the bodies of adult beetles and are spread throughout the tunnels (galleries) the beetle makes as it eats. Once in a tunnel, the spores will germinate to produce mycelium. During the late winter months and early spring, mycelia spread rapidly. At the same time, the fungus secretes enzymes that break down the cell walls of the tree and allow the mycelia to grow into the xylem tissue. Here, it will release millions of conidia that travel with the xylem sap. As the fungus grows it creates blockages in the vascular system of the tree, causing the characteristic symptom of wilting in the leaves. As new beetles bore through the xylem tissue, they come into contact with conidia in the sap which stick to their bodies and can be transmitted to other trees that they feed on. The disease can also be spread if mature roots of an infected tree graft to another tree and the conidia travel through xylem sap to the new host.
The fungi can also reproduce sexually. O. ulmi is a heterothallic ascomycete disease with mating types A and B. When these mating types are present in the same host, ascospores will be produced inside of perithecia. The perithecia can form singly or in large groups, and typically will have a long neck like structure with a black ball at the top. This ball contains the asci and ascospores. Once they are mature, the ascospores are released from an opening in the perithecia in a sticky liquid that can attach to the body of the Scolytid beetle and be spread throughout the host or to new hosts.
References
Fungi described in 1922
Fungal tree pathogens and diseases
Ophiostomatales
Fungus species | Ophiostoma ulmi | [
"Biology"
] | 1,552 | [
"Fungi",
"Fungus species"
] |
9,500,362 | https://en.wikipedia.org/wiki/Dalmarnock%20fire%20tests | The Dalmarnock fire tests are a series of fire experiments that were conducted in a real high-rise building in the United Kingdom.
In 2006, the BRE Centre for Fire Safety Engineering at the University of Edinburgh conducted this series of large-scale fire tests in a high-rise building in collaboration with the BBC series Horizon, EPSRC, Glasgow Housing Association, Strathclyde Fire Brigade, Glasgow Caledonian University, Lion TV, Arup, BRE, Xtralis and Powerwall Systems Ltd, among other contributors.
Description
The building, a 23-storey reinforced-concrete tower located at 4 Millerfield Place in Dalmarnock, Glasgow, was scheduled for demolition and hence was empty of tenants. Three main experiments were conducted over two days, from 25 to 26 July 2006. Tests One and Two took place in identical flats, the main compartment of which had been fitted with regular living room and office furniture, arranged to provide conditions that favour repeatability. Test Three was a smaller smoke management experiment held in one of the two main emergency exit stairwells.
Tests One and Two were fully instrumented with a high sensor density, including measurements of temperature, incident heat, gas velocities, smoke obscuration, wall temperature and structure deflection, among others. The tests varied in that Test One allowed the fire to develop freely to post-flashover conditions, while Test Two incorporated sensor-informed ventilation management and was extinguished before post-flashover conditions were attained. Both tests had approximately 300 sensors monitoring several different characteristics of the fire environment, but Test One had about 160 additional sensors monitoring the structural response.
These experiments endeavoured to establish a highly monitored fire in a realistic residential scenario, allowing for several different modern fire safety engineering tools to be tested. The comprehensive set of data collected is being used for validation of different mathematical models of fire dynamics. Because the data has a spatial resolution high enough to be comparable to typical resolutions of CFD fire models, the tests are specially well suited for validation of this type of models. The tests also form an integral part of the research conducted for the FireGrid research project, that is, the development of emergency control systems that use continuous sensor data and computer modelling, to provide forecast of the fire evolution and aid the efficient deployment of resources.
Analysis and dissemination
The overall aim of the Dalmarnock fire tests is to improve understanding of how building fire emergencies can be handled in the most effective manner, while providing further insights into the fundamentals of compartment fire dynamics and fire-induced structural behavior.
Dissemination of the findings includes several related journal and conference papers; a one-day seminar introducing the tests and highlighting the analysis and conclusions (Edinburgh, November 2007); and a book detailing the tests and analysis published in November 2007. The book, entitled The Dalmarnock Fire Tests: Experiments and Modelling, comprises material covering characterization and comparison of both the Test One and Test Two fires; experimental error analysis; evaluation of the fire detection systems; calculation of heat transfer to the structure; analysis of the structural behavior; evaluation of ceiling fiber reinforced polymer (FRP) performance; and comparison of both a priori and a posteriori computational fire modelling against experimental data.
See also
Fire test
BRE Centre for Fire Safety Engineering
References
External links
FireGrid Project (web.archive.org)
"Skyscraper Fire Fighters", Horizon, aired 24 April 2007 on BBC Two
"Glasgow tower block to shed light on 9/11 fire", The Scotsman
Digital collection of papers on The Dalmarnock Fire Tests at the Edinburgh Research Archive
Fire protection
Building engineering
Fire and rescue in the United Kingdom
Fire prevention
Bridgeton–Calton–Dalmarnock
2006 in Scotland | Dalmarnock fire tests | [
"Engineering"
] | 743 | [
"Building engineering",
"Fire protection",
"Civil engineering",
"Architecture"
] |
9,500,396 | https://en.wikipedia.org/wiki/Mobilome | The mobilome is the entire set of mobile genetic elements in a genome. Mobilomes are found in eukaryotes, prokaryotes, and viruses. The compositions of mobilomes differ among lineages of life, with transposable elements being the major mobile elements in eukaryotes, and plasmids and prophages being the major types in prokaryotes. Virophages contribute to the viral mobilome.
Mobilome in eukaryotes
Transposable elements are elements that can move about or propagate within the genome, and are the major constituents of the eukaryotic mobilome. Transposable elements can be regarded as genetic parasites because they exploit the host cell's transcription and translation mechanisms to extract and insert themselves in different parts of the genome, regardless of the phenotypic effect on the host.
Eukaryotic transposable elements were first discovered in maize (Zea mays) in which kernels showed a dotted color pattern. Barbara McClintock described the maize Ac/Ds system in which the Ac locus promotes the excision of the Ds locus from the genome, and excised Ds elements can mutate genes responsible for pigment production by inserting into their coding regions.
Other examples of transposable elements include: yeast (Saccharomyces cerevisiae) Ty elements, a retrotransposon which encodes a reverse transcriptase to convert its mRNA transcript into DNA which can then insert into other parts of the genome; and fruit fly (Drosophila melanogaster) P-elements, which randomly inserts into the genome to cause mutations in germ line cells, but not in somatic cells.
Mobilome in prokaryotes
Plasmids were discovered in the 1940s as genetic materials outside of bacterial chromosomes. Prophages are genomes of bacteriophages (a type of virus) that are inserted into bacterial chromosomes; prophages can then be spread to other bacteria through the lytic cycle and lysogenic cycle of viral replication.
While transposable elements are also found in prokaryotic genomes, the most common mobile genetic elements in the prokaryotic genome are plasmids and prophages.
Plasmids and prophages can move between genomes through bacterial conjugation, allowing horizontal gene transfer. Plasmids often carry genes that are responsible for bacterial antibiotic resistance; as these plasmids replicate and pass from one genome to another, the whole bacterial population can quickly adapt to the antibiotic. Prophages can loop out of bacterial chromosomes to produce bacteriophages that go on to infect other bacteria with the prophages; this allows prophages to propagate quickly among the bacterial population, to the harm of the bacterial host.
Mobilome in viruses
Discovered in 2008 in a strain of Acanthamoeba castellanii mimivirus, virophages are an element of the virus mobilome. Virophages are viruses that replicate only when host cells are co-infected with helper viruses. Following co-infection, helper viruses exploit the host cell's transcription/translation machinery to produce their own machinery; virophages replicate through the machinery of either the host cell or the viruses. The replication of virophages can negatively impact the replication of helper viruses.
Sputnik and mavirus are examples of virophages.
References
Genetics | Mobilome | [
"Biology"
] | 721 | [
"Genetics"
] |
9,500,722 | https://en.wikipedia.org/wiki/Monilinia%20fructicola | Monilinia fructicola is a species of fungus in the order Helotiales. A plant pathogen, it is the causal agent of brown rot of stone fruits.
Stone fruit (summer fruit)
Stone fruits such as apricot and peaches originated in China and spread through old trade routes 3–4000 years ago. Nectarines are more recent (at least 2000 years). Cherries and European plums originated in Europe, although the Japanese plum originated in China.
Trees exposed to cold in autumn and early spring can develop cankers under the bark of the trunk or branches. Cankers are usually associated with the production of amber-coloured gum that contains bacteria and oozes on to the outer bark. Unfortunately, there are few control methods for fungal spores apart from copper sprays.
Symptoms
Brown rot causes blossom blight, twig blight; twig canker and fruit rot. Brown rot is caused by a fungus that produces spores, and can be a major problem during particularly wet seasons. Prolonged wet weather during bloom may result in extensive blossom infection. The length of wet periods required for blossom infection depends upon the temperature. Humid wet conditions are when the fruit trees are most at risk from infection. Young green fruit can be infected just before autumn, but the infection often remains inactive until near maturity of the fruit. Brown rot can spread after harvest. Mature fruit can decay in only 2 days under warm conditions.
Blossom Blight: Infected blossoms wilt, shrivel and become covered with greyish mould. Petals may appear light brown or water-soaked. Blighted blossoms do not produce fruit. Dead blossoms may stick to spurs and twigs until harvest, providing a source of spores for the fruit rot phase.
Twig Blight and Canker: On peaches and apricots the infection may spread to twigs, causing brownish, oval cankers that may girdle and kill twigs.
Fruit rot
Fruit rot appears as small, circular brown spots that increase rapidly in size causing the entire fruit to rot. Greyish spores appear in tufts on rotted areas. Infected fruit eventually turn into shrivelled, black mummies that may drop or remain attached to the tree through the winter. Brown rot can be serious on injured fruit such as cherries split by rain.
Life cycle
Overwintering: The fungus over-winters in mummified fruit on the ground or in the tree and in twig cankers.
Spring Infection: two types of spores are produced in spring which can infect blossoms. Conidia are produced on cankers and fruit mummies on the tree. Apothecia (small mushroom-like structures) form on mummies lying on the ground. The apothecia discharge ascospores during the bloom period, but don't contribute to fruit infection later in season.
Secondary Infection: Spores produced on blighted blossoms provide a source of infection for ripening fruit. Infected fruit become covered with greyish spores which spread by wind and rain to healthy fruit. Insects may also contribute to the spread of brown rot spores.
Plant defenses
A plant's first line of defense against infection is the physical barrier of the plant's “skin”, the epidermis of the primary plant body and the periderm of the secondary plant body. This first defense system, however, is not impenetrable. Viruses, bacteria, and the spores and hyphae of fungi can still enter the plant through injuries or through the natural openings in the epidermis, such as stomata. Once a pathogen invades, the plant mounts a chemical attack as a second line of defense that destroys the pathogens and prevents their spread from the site of infection. This second defense system is enhanced by the plant's inherited ability to recognize certain pathogens.
Elicitors: Oligosaccharins, derived from cellulose fragments released by cell wall damage, are one of the major classes of elicitors. Elicitors stimulate the production of antimicrobial compounds called phytoalexins. Infections also activate genes that produce PR proteins (pathogenesis-related proteins). Some of these proteins are antimicrobial, attacking molecules in the cell wall of a bacterium. Others may function as signals that spread “news” of the infection to nearby cells. Infection also stimulates the cross-linking of molecules in the cell wall and the deposition of lignin, responses that set up a local barricade that slows spread of the pathogen to other parts of the plant.
Control
Orchard sanitation, removing fruit mummies and pruning any cankered or dead twigs will reduce inoculum levels, which will improve the effectiveness of fungicide sprays.
Treatment is primarily chemical, using fungicidal sprays to control the spread of the fungus. Spraying occurs during all phases: blossom, green fruit, and mature fruit. Stone fruit trees' only natural defences are their "skin" and chemical responses to attack by the fungus, but these provide only limited protection, so spraying and orchard sanitation are the best ways to control the spread of the fungus.
References
Fungi described in 1883
Fungal plant pathogens and diseases
Stone fruit tree diseases
Sclerotiniaceae
Fungus species | Monilinia fructicola | [
"Biology"
] | 1,055 | [
"Fungi",
"Fungus species"
] |
9,501,159 | https://en.wikipedia.org/wiki/Well-pointed%20category | In category theory, a category with a terminal object is well-pointed if for every pair of arrows such that , there is an arrow such that . (The arrows are called the global elements or points of the category; a well-pointed category is thus one that has "enough points" to distinguish non-equal arrows.)
See also
Pointed category
References
Category theory | Well-pointed category | [
"Mathematics"
] | 75 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
9,501,745 | https://en.wikipedia.org/wiki/Thermal%20effusivity | In thermodynamics, a material's thermal effusivity, also known as thermal responsivity, is a measure of its ability to exchange energy with its surroundings. It is an intensive quantity defined as the square root of the product of the material's thermal conductivity () and its volumetric heat capacity () or as the ratio of thermal conductivity to the square root of thermal diffusivity ().
Some authors use the symbol e to denote the thermal responsivity, although its usage along with an exponential becomes difficult. The SI units for thermal effusivity are W·s^(1/2)·m^(-2)·K^(-1) or, equivalently, J·m^(-2)·K^(-1)·s^(-1/2).
Thermal effusivity can also be a measure of a solid or rigid material's thermal inertia.
Thermal effusivity is a parameter that emerges upon applying solutions of the heat equation to heat flow through a thin surface-like region. It becomes particularly useful when the region is selected adjacent to a material's actual surface. Knowing the effusivity and equilibrium temperature of each of two material bodies then enables an estimate of their interface temperature when placed into thermal contact.
If T_1 and T_2 are the temperatures of the two bodies, then upon contact, the temperature of the contact interface (assumed to be a smooth surface) becomes
T_m = (e_1 T_1 + e_2 T_2) / (e_1 + e_2).
Specialty sensors have also been developed based on this relationship to measure effusivity.
Thermal effusivity and thermal diffusivity are related quantities; respectively a product versus a ratio of a material's intensive heat transport and storage properties. The diffusivity appears explicitly in the heat equation, which is an energy conservation equation, and measures the speed at which thermal equilibrium can be reached by a body. By contrast a body's effusivity (also sometimes called inertia, accumulation, responsiveness etc.) is its ability to resist a temperature change when subjected to a time-periodic, or similarly perturbative, forcing function.
Applications
Temperature at a contact surface
If two semi-infinite bodies initially at temperatures T_1 and T_2 are brought in perfect thermal contact, the temperature at the contact surface will be a weighted mean based on their relative effusivities. This relationship can be demonstrated with a very simple "control volume" back-of-the-envelope calculation:
Consider the following 1D heat conduction problem. Region 1 is material 1, initially at uniform temperature T_1, and region 2 is material 2, initially at uniform temperature T_2. Given some period of time t after being brought into contact, heat will have diffused across the boundary between the two materials. The thermal diffusivity of a material is α = k/(ρ c). From the heat equation (or diffusion equation), a characteristic diffusion length into material 1 is
Δx_1 ≈ (α_1 t)^(1/2), where α_1 = k_1/(ρ_1 c_1).
Similarly, a characteristic diffusion length into material 2 is
Δx_2 ≈ (α_2 t)^(1/2), where α_2 = k_2/(ρ_2 c_2).
Assume that the temperature within the characteristic diffusion length on either side of the boundary between the two materials is uniformly at the contact temperature T_m (this is the essence of a control-volume approach). Conservation of energy dictates that
ρ_1 c_1 Δx_1 (T_1 − T_m) = ρ_2 c_2 Δx_2 (T_m − T_2).
Substitution of the expressions above for Δx_1 and Δx_2 and elimination of t yields the expression for the contact temperature given above, T_m = (e_1 T_1 + e_2 T_2)/(e_1 + e_2).
This expression is valid for all times for semi-infinite bodies in perfect thermal contact. It is also a good first guess for the initial contact temperature for finite bodies.
Even though the underlying heat equation is parabolic and not hyperbolic (i.e. it does not support waves), if we in some rough sense allow ourselves to think of a temperature jump as two materials are brought into contact as a "signal", then the fraction of the temperature signal transmitted from 1 to 2 is e_1/(e_1 + e_2), since T_m − T_2 = [e_1/(e_1 + e_2)](T_1 − T_2). Clearly, this analogy must be used with caution; among other caveats, it only applies in a transient sense, to media which are large enough (or time scales short enough) to be considered effectively infinite in extent.
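A small numerical sketch of the contact-temperature rule (the property values below are rough, illustrative figures, not reference data):

```python
import math

def effusivity(k, rho, cp):
    """Thermal effusivity e = sqrt(k * rho * cp), in W.s^0.5/(m^2.K)."""
    return math.sqrt(k * rho * cp)

def contact_temperature(e1, T1, e2, T2):
    """Interface temperature of two semi-infinite bodies brought into contact."""
    return (e1 * T1 + e2 * T2) / (e1 + e2)

# Approximate room-temperature properties (illustrative values only).
skin  = effusivity(k=0.37, rho=1_000, cp=3_500)   # skin-like tissue
wood  = effusivity(k=0.17, rho=750,   cp=2_400)   # dry wood
steel = effusivity(k=45.0, rho=7_800, cp=490)     # carbon steel

T_skin, T_room = 34.0, 20.0   # degrees Celsius
print(f"skin on wood : {contact_temperature(skin, T_skin, wood, T_room):.1f} C")
print(f"skin on steel: {contact_temperature(skin, T_skin, steel, T_room):.1f} C")
```

Because the effusivity of steel is far larger than that of skin, the interface temperature sits close to the steel's own temperature, which is the effect discussed in the next section: the metal "feels" colder than the wood even though both start at 20 °C.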
Heat sensed by human skin
An application of thermal effusivity is the quasi-qualitative measurement of coolness or warmth "feel" of materials, also known as thermoception. It is a particularly important metric for textiles, fabrics, and building materials. Rather than temperature, skin thermoreceptors are highly responsive to the inward or outward flow of heat. Thus, despite having similar temperatures near room temperature, a high effusivity metal object is detected as cool while a low effusivity fabric is sensed as being warmer.
Diathermal walls
For a diathermal wall having a stepped "constant heat" boundary condition imposed at t = 0 onto one side, thermal effusivity performs nearly the same role in limiting the initial dynamic thermal response (rigorously, during times less than the heat diffusion time to transit the wall) as the insulation U-factor plays in defining the static temperature obtained by the other side after a long time. For a wall of thickness L, thermal diffusivity α and thermal conductivity k, the relevant diffusion time scales as L²/α; at times shorter than this, the wall's response is governed by a dynamic, effusivity-limited heat flow rather than by the static U-factor.
Planetary science
For planetary surfaces, thermal inertia is a key phenomenon controlling the diurnal and seasonal surface temperature variations. The thermal inertia of a terrestrial planet such as Mars can be approximated from the thermal effusivity of its near-surface geologic materials. In remote sensing applications, thermal inertia represents a complex combination of particle size, rock abundance, bedrock outcropping and the degree of induration (i.e. thickness and hardness).
A rough approximation to thermal inertia is sometimes obtained from the amplitude of the diurnal temperature curve (i.e. maximum minus minimum surface temperature). The temperature of a material with low thermal effusivity changes significantly during the day, while the temperature of a material with high thermal effusivity does not change as drastically. Deriving and understanding the thermal inertia of the surface can help to recognize small-scale features of that surface. In conjunction with other data, thermal inertia can help to characterize surface materials and the geologic processes responsible for forming these materials.
On Earth, thermal inertia of the global ocean is a major factor influencing climate inertia. Ocean thermal inertia is much greater than land inertia because of convective heat transfer, especially through the upper mixed layer. The thermal effusivities of stagnant and frozen water underestimate the vast thermal inertia of the dynamic and multi-layered ocean.
Thermographic inspection
Thermographic inspection encompasses a variety of nondestructive testing methods that utilize the wave-like characteristics of heat propagation through a transfer medium. These methods include Pulse-echo thermography and thermal wave imaging. Thermal effusivity and diffusivity of the materials being inspected can serve to simplify the mathematical modelling of, and thus interpretation of results from these techniques.
Measurement interpretation
When a material is measured from the surface with short test times by any transient method or instrument, the heat transfer mechanisms generally include thermal conduction, convection, radiation and phase changes. The diffusive process of conduction may dominate the thermal behavior of solid bodies near and below room temperature.
A contact resistance (due to surface roughness, oxidation, impurities, etc.) between the sensor and sample may also exist. Evaluations with high heat dissipation (driven by large temperature differentials) can likewise be influenced by an interfacial thermal resistance. All of these factors, along with the body's finite dimensions, must be considered during execution of measurements and interpretation of results.
Thermal effusivity of selected materials and substances
This is a list of the thermal effusivity of some common substances, evaluated at room temperature unless otherwise indicated.
See also
Thermal contact conductance
Thermal diffusivity
Heat equation
Heat capacity
References
External links
Thermodynamic properties
Physical quantities
Heat conduction
Materials testing | Thermal effusivity | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,590 | [
"Physical phenomena",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Materials science",
"Materials testing",
"Thermodynamics",
"Heat conduction",
"Physical properties"
] |
9,501,827 | https://en.wikipedia.org/wiki/Algebraic%20differential%20equation | In mathematics, an algebraic differential equation is a differential equation that can be expressed by means of differential algebra. There are several such notions, according to the concept of differential algebra used.
The intention is to include equations formed by means of differential operators, in which the coefficients are rational functions of the variables (e.g. the hypergeometric equation). Algebraic differential equations are widely used in computer algebra and number theory.
A simple concept is that of a polynomial vector field, in other words a vector field expressed with respect to a standard co-ordinate basis as the first partial derivatives with polynomial coefficients. This is a type of first-order algebraic differential operator.
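As a worked illustration (my own example, not one drawn from the sources above), a polynomial vector field in the plane and the algebraic system of differential equations it defines can be written as:

```latex
v \;=\; (x^{2}+y)\,\frac{\partial}{\partial x} \;+\; x y\,\frac{\partial}{\partial y},
\qquad\text{with integral curves solving}\qquad
\frac{dx}{dt} = x^{2}+y, \quad \frac{dy}{dt} = x y .
```

Applying v to any polynomial in x and y again yields a polynomial, which is what makes v a first-order algebraic differential operator in the sense above.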
Formulations
Derivations D can be used as algebraic analogues of the formal part of differential calculus, so that algebraic differential equations make sense in commutative rings.
The theory of differential fields was set up to express differential Galois theory in algebraic terms.
The Weyl algebra W of differential operators with polynomial coefficients can be considered; certain modules M can be used to express differential equations, according to the presentation of M.
The concept of Koszul connection is something that transcribes easily into algebraic geometry, giving an algebraic analogue of the way systems of differential equations are geometrically represented by vector bundles with connections.
The concept of jet can be described in purely algebraic terms, as was done in part of Grothendieck's EGA project.
The theory of D-modules is a global theory of linear differential equations, and has been developed to include substantive results in the algebraic theory (including a Riemann-Hilbert correspondence for higher dimensions).
Algebraic solutions
It is usually not the case that the general solution of an algebraic differential equation is an algebraic function: solving equations typically produces novel transcendental functions. The case of algebraic solutions is however of considerable interest; the classical Schwarz list deals with the case of the hypergeometric equation. In differential Galois theory the case of algebraic solutions is that in which the differential Galois group G is finite (equivalently, of dimension 0, or of a finite monodromy group for the case of Riemann surfaces and linear equations). This case stands in relation with the whole theory roughly as invariant theory does to group representation theory. The group G is in general difficult to compute, the understanding of algebraic solutions is an indication of upper bounds for G.
External links
Differential equations
Differential algebra | Algebraic differential equation | [
"Mathematics"
] | 486 | [
"Differential algebra",
"Mathematical objects",
"Differential equations",
"Equations",
"Fields of abstract algebra"
] |
9,501,969 | https://en.wikipedia.org/wiki/Boyle%20temperature | The Boyle temperature, named after Robert Boyle, is formally defined as the temperature for which the second virial coefficient, , becomes zero.
It is at this temperature that the attractive forces and the repulsive forces acting on the gas particles balance out
This is the virial equation of state and describes a real gas.
Since higher-order virial coefficients are generally much smaller than the second coefficient, the gas tends to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature (or when the pressure or the density is low).
In any case, when the pressures are low, the second virial coefficient will be the only relevant one, because the remaining terms concern higher orders of the pressure. Also, at the Boyle temperature the dip in a pV diagram tends to a straight line over a range of pressures. We then have
(∂Z/∂p)_T = 0 as p → 0,
where Z = pV_m/(RT) is the compressibility factor.
Expanding the van der Waals equation in 1/V_m, one finds that T_B = a/(Rb).
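Under the van der Waals model this becomes a one-line calculation; the constants below for nitrogen are approximate textbook values used only for illustration:

```python
R = 8.314  # J/(mol K)

def boyle_temperature_vdw(a, b):
    """Boyle temperature T_B = a / (R b) predicted by the van der Waals equation."""
    return a / (R * b)

# Approximate van der Waals constants for N2 (SI units).
a_n2 = 0.1370     # Pa m^6 / mol^2
b_n2 = 3.87e-5    # m^3 / mol

print(f"van der Waals Boyle temperature of N2 ~ {boyle_temperature_vdw(a_n2, b_n2):.0f} K")
```

The prediction (roughly 430 K) overshoots the measured Boyle temperature of nitrogen (around 330 K), which reflects the limits of the van der Waals model rather than of the definition itself.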
See also
Virial equation of state
References
Temperature
Thermodynamics
Robert Boyle | Boyle temperature | [
"Physics",
"Chemistry",
"Mathematics"
] | 203 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities",
"Dynamical systems"
] |
9,502,303 | https://en.wikipedia.org/wiki/Flux-corrected%20transport | Flux-corrected transport (FCT) is a conservative shock-capturing scheme for solving Euler equations and other hyperbolic equations which occur in gas dynamics, aerodynamics, and magnetohydrodynamics. It is especially useful for solving problems involving shock or contact discontinuities. An FCT algorithm consists of two stages, a transport stage and a flux-corrected anti-diffusion stage. The numerical errors introduced in the first stage (i.e., the transport stage) are corrected in the anti-diffusion stage.
References
Jay P. Boris and David L. Book, "Flux-corrected transport, I: SHASTA, a fluid transport algorithm that works", J. Comput. Phys. 11, pp. 38 (1973).
External links
Fully multidimensional flux-corrected transport algorithms for fluids
See also
Computational fluid dynamics
Computational magnetohydrodynamics
Shock capturing methods
Volume of fluid method
Computational fluid dynamics | Flux-corrected transport | [
"Physics",
"Chemistry"
] | 192 | [
"Computational physics stubs",
"Computational fluid dynamics",
"Computational physics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
9,502,948 | https://en.wikipedia.org/wiki/Bottleneck%20%28network%29 | In a communication network, sometimes a max-min fairness of the network is desired, usually opposed to the basic first-come first-served policy. With max-min fairness, data flow between any two nodes is maximized, but only at the cost of more or equally expensive data flows. To put it another way, in case of network congestion any data flow is only impacted by smaller or equal flows.
In such context, a bottleneck link for a given data flow is a link that is fully utilized (is saturated) and of all the flows sharing this link, the given data flow achieves maximum data rate network-wide. Note that this definition is substantially different from a common meaning of a bottleneck. Also note, that this definition does not forbid a single link to be a bottleneck for multiple flows.
A data rate allocation is max-min fair if and only if a data flow between any two nodes has at least one bottleneck link. This concept is critical in understanding network efficiency and fairness, as it ensures that no single flow can monopolize network resources to the detriment of others.
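The max-min fair allocation implied by this definition can be computed with the classical progressive-filling (water-filling) procedure: all flows are increased at the same rate until some link saturates; the flows crossing that link are frozen at their current rate (that link is their bottleneck), and the process repeats with the remaining flows. A small sketch in Python (the link and flow data structures are purely illustrative):

```python
def max_min_fair(link_capacity, flow_paths):
    """Progressive filling: returns a max-min fair rate for each flow.

    link_capacity: dict mapping link name -> capacity
    flow_paths:    dict mapping flow name -> list of links it traverses
    """
    rate = {f: 0.0 for f in flow_paths}
    cap = dict(link_capacity)
    active = set(flow_paths)
    while active:
        # For each link still used by active flows, the extra rate it can
        # grant equally to those flows before it saturates.
        share = {}
        for link, c in cap.items():
            users = [f for f in active if link in flow_paths[f]]
            if users:
                share[link] = c / len(users)
        if not share:
            break
        bottleneck = min(share, key=share.get)
        inc = share[bottleneck]
        for f in active:                      # raise every active flow equally
            rate[f] += inc
            for link in flow_paths[f]:
                cap[link] -= inc
        active -= {f for f in active if bottleneck in flow_paths[f]}
    return rate

# Two links AB (10) and BC (6); f1 crosses both, f2 only AB, f3 only BC.
print(max_min_fair({"AB": 10.0, "BC": 6.0},
                   {"f1": ["AB", "BC"], "f2": ["AB"], "f3": ["BC"]}))
# f1 and f3 are bottlenecked on BC at 3.0 each; f2 then takes the remaining 7.0 on AB.
```

In the resulting allocation every flow crosses at least one saturated link on which it has the largest rate, which is exactly the bottleneck condition stated above.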
Bottleneck links are significant in network design and management because they determine the maximum throughput of a network. Identifying and managing bottlenecks is crucial for maintaining optimal performance in networked systems. Strategies to mitigate the impact of bottleneck links include increasing the capacity of the bottleneck link, optimizing traffic management, and using load-balancing techniques to distribute data flows across multiple paths.
See also
Fairness measure
Max-min fairness
References
Notes
Network performance | Bottleneck (network) | [
"Technology"
] | 317 | [
"Computing stubs",
"Computer network stubs"
] |
9,503,180 | https://en.wikipedia.org/wiki/Generation%20time | In population biology and demography, generation time is the average time between two consecutive generations in the lineages of a population. In human populations, generation time typically has ranged from 20 to 30 years, with wide variation based on gender and society. Historians sometimes use this to date events, by converting generations into years to obtain rough estimates of time.
Definitions and corresponding formulas
The existing definitions of generation time fall into two categories: those that treat generation time as a renewal time of the population, and those that focus on the distance between individuals of one generation and the next. Below are the three most commonly used definitions:
Time for a population to grow by a factor of its net reproductive rate
The net reproductive rate R_0 is the number of offspring an individual is expected to produce during its lifetime: R_0 = 1 means demographic equilibrium. One may then define the generation time T as the time it takes for the population to increase by a factor of R_0. For example, in microbiology, a population of cells undergoing exponential growth by mitosis replaces each cell by two daughter cells, so that R_0 = 2 and T is the population doubling time.
If the population grows with exponential growth rate r, so the population size at time t is given by
N(t) = N(0) e^(rt),
then generation time is given by
T = ln(R_0) / r.
That is, T is such that N(T) = R_0 N(0), i.e. e^(rT) = R_0.
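As a tiny numerical sketch of this definition (the growth parameters are invented for illustration):

```python
import math

def generation_time(R0, r):
    """Renewal-time definition: the T for which exp(r * T) = R0."""
    return math.log(R0) / r

# Dividing cells: R0 = 2 and r = 0.02 per minute give a doubling time of about 35 minutes.
print(f"{generation_time(2.0, 0.02):.1f} minutes")

# A slowly growing animal population: R0 = 1.5 and r = 0.05 per year give T of about 8.1 years.
print(f"{generation_time(1.5, 0.05):.1f} years")
```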
Average difference in age between parent and offspring
This definition is a measure of the distance between generations rather than a renewal time of the population. Since many demographic models are female-based (that is, they only take females into account), this definition is often expressed as a mother-daughter distance (the "average age of mothers at birth of their daughters"). However, it is also possible to define a father-son distance (average age of fathers at the birth of their sons) or not to take sex into account at all in the definition. In age-structured population models, an expression is given by:
T = ∫_0^∞ x e^(-rx) l(x) m(x) dx,
where r is the growth rate of the population, l(x) is the survivorship function (probability that an individual survives to age x) and m(x) the maternity function (birth function, age-specific fertility). For matrix population models, there is a general formula:
T = λ v w / (v F w) = 1 / Σ e_ij,
where λ is the discrete-time growth rate of the population, F is its fertility matrix, v its reproductive value (row-vector) and w its stable stage distribution (column-vector); the e_ij are the elasticities of λ to the fertilities.
Age at which members of a cohort are expected to reproduce
This definition is very similar to the previous one but the population need not be at its stable age distribution. Moreover, it can be computed for different cohorts and thus provides more information about the generation time in the population. This measure is given by:
T_c = ( Σ_x x l_x m_x ) / ( Σ_x l_x m_x ) = ( Σ_x x l_x m_x ) / R_0.
Indeed, the numerator is the sum of the ages at which a member of the cohort reproduces, and the denominator is R0, the average number of offspring it produces.
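A short sketch contrasting the cohort measure with the stable-population measure, using an invented discrete life table l_x, m_x (ages in years); the numbers are purely illustrative:

```python
import numpy as np

# Hypothetical female life table: survivorship l(x) and fertility m(x) at age x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
l = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
m = np.array([0.0, 0.5, 1.0, 1.0, 0.5])

R0 = np.sum(l * m)                    # net reproductive rate
T_cohort = np.sum(x * l * m) / R0     # cohort generation time (third definition)

# Solve the Euler-Lotka equation  sum exp(-r x) l(x) m(x) = 1  for r by bisection,
# then evaluate the stable-population generation time (second definition).
def euler_lotka(r):
    return np.sum(np.exp(-r * x) * l * m) - 1.0

lo_r, hi_r = -1.0, 1.0
for _ in range(60):
    r_mid = 0.5 * (lo_r + hi_r)
    if euler_lotka(r_mid) > 0:   # reproduction still exceeds replacement: r is larger
        lo_r = r_mid
    else:
        hi_r = r_mid
r = 0.5 * (lo_r + hi_r)
T_stable = np.sum(x * np.exp(-r * x) * l * m)

print(f"R0 = {R0:.2f}, cohort T = {T_cohort:.2f} y, r = {r:.3f}/y, stable-population T = {T_stable:.2f} y")
```

For a growing population (r > 0), the stable-population measure weights early reproduction more heavily, so it comes out slightly smaller than the cohort measure.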
References
Ecology
Population dynamics
Time in life | Generation time | [
"Physics",
"Biology"
] | 589 | [
"Physical quantities",
"Time",
"Ecology",
"Spacetime",
"Time in life"
] |
9,503,346 | https://en.wikipedia.org/wiki/Multiple%20EM%20for%20Motif%20Elicitation | Multiple Expectation maximizations for Motif Elicitation (MEME) is a tool for discovering motifs in a group of related DNA or protein sequences.
A motif is a sequence pattern that occurs repeatedly in a group of related protein or DNA sequences and is often associated with some biological function. MEME represents motifs as position-dependent letter-probability matrices which describe the probability of each possible letter at each position in the pattern. Individual MEME motifs do not contain gaps. Patterns with variable-length gaps are split by MEME into two or more separate motifs.
MEME takes as input a group of DNA or protein sequences (the training set) and outputs as many motifs as requested. It uses statistical modeling techniques to automatically choose the best width, number of occurrences, and description for each motif.
MEME is the first of a collection of tools for analyzing motifs called the MEME suite.
Definition
The MEME algorithm could be understood from two different perspectives. From a biological point of view, MEME identifies and characterizes shared motifs in a set of unaligned sequences. From the computer science aspect, MEME finds a set of non-overlapping, approximately matching substrings given a starting set of strings.
Use
MEME can be used to find similar biological functions and structures in different sequences. It must be taken into account that sequence variation can be significant, that motifs are sometimes very short, and that protein binding sites are highly specific. Discovering motifs computationally makes it possible to reduce the number of wet-lab experiments (saving cost and time). Indeed, to discover motifs that are biologically relevant, it is necessary to choose carefully: the best width of the motifs, the number of occurrences in each sequence, and the composition of each motif.
Algorithm components
The algorithm uses several types of well known functions:
Expectation maximization (EM).
EM based heuristic for choosing the EM starting point.
Maximum likelihood ratio based (LRT-based) heuristic for determining the best number of model-free parameters.
Multi-start for searching over possible motif widths.
Greedy search for finding multiple motifs.
However, one often does not know where the motif occurrences start, or how many there are. Several possibilities exist: exactly one motif occurrence per sequence, one or zero occurrences per sequence, or any number of occurrences per sequence.
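A stripped-down sketch of the first possibility (exactly one motif occurrence per sequence, MEME's "OOPS" model) shows how the EM component works. This is my own illustrative reconstruction, not MEME's actual implementation: it omits the starting-point heuristics, the width selection and the greedy multi-motif search listed above, and assumes DNA sequences of length at least w over the letters ACGT:

```python
import numpy as np

ALPHABET = "ACGT"

def oops_em(seqs, w, n_iter=50, pseudo=0.1, seed=0):
    """EM for a single motif, one occurrence per sequence (OOPS-style sketch).

    Returns a w x 4 position-probability matrix and, for each sequence,
    the posterior distribution over motif start positions."""
    rng = np.random.default_rng(seed)
    idx = [[ALPHABET.index(c) for c in s] for s in seqs]

    # Background letter frequencies, estimated from all sequences.
    counts = np.bincount([c for s in idx for c in s], minlength=4)
    bg = counts / counts.sum()

    # Initialise the motif model from one random window plus pseudocounts.
    pwm = np.full((w, 4), pseudo)
    s0 = idx[rng.integers(len(idx))]
    j0 = rng.integers(len(s0) - w + 1)
    for k in range(w):
        pwm[k, s0[j0 + k]] += 1.0
    pwm /= pwm.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # E-step: posterior z[j] that the motif starts at position j,
        # proportional to P(window | motif) / P(window | background).
        posteriors = []
        for s in idx:
            n_pos = len(s) - w + 1
            z = np.empty(n_pos)
            for j in range(n_pos):
                z[j] = np.prod([pwm[k, a] / bg[a] for k, a in enumerate(s[j:j + w])])
            z /= z.sum()
            posteriors.append(z)

        # M-step: rebuild the matrix from the expected letter counts.
        new = np.full((w, 4), pseudo)
        for s, z in zip(idx, posteriors):
            for j, zj in enumerate(z):
                for k in range(w):
                    new[k, s[j + k]] += zj
        pwm = new / new.sum(axis=1, keepdims=True)

    return pwm, posteriors
```

Each E-step scores every candidate start position against the current position-probability matrix, and each M-step rebuilds the matrix from the expected letter counts; MEME additionally restarts this from many starting points and widths and keeps the statistically best motif.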
See also
Sequence motif
Sequence alignment
References
External links
The MEME Suite — Motif-based sequence analysis tools
GPU Accelerated version of MEME
EXTREME — An online EM implementation of the MEME model for fast motif discovery in large ChIP-Seq and DNase-Seq Footprinting data
Bioinformatics | Multiple EM for Motif Elicitation | [
"Engineering",
"Biology"
] | 533 | [
"Bioinformatics",
"Biological engineering"
] |
9,503,407 | https://en.wikipedia.org/wiki/Standard%20map | The standard map (also known as the Chirikov–Taylor map or as the Chirikov standard map) is an area-preserving chaotic map from a square with side onto itself. It is constructed by a Poincaré's surface of section of the kicked rotator, and is defined by:
where and are taken modulo .
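A few lines of Python suffice to iterate the map and generate the phase-space portraits discussed below (the specific initial conditions and plotting choices are mine):

```python
import numpy as np

def standard_map_orbit(theta0, p0, K, n_steps):
    """Iterate the Chirikov standard map, keeping theta and p in [0, 2*pi)."""
    two_pi = 2.0 * np.pi
    theta, p = theta0, p0
    orbit = np.empty((n_steps, 2))
    for n in range(n_steps):
        p = (p + K * np.sin(theta)) % two_pi
        theta = (theta + p) % two_pi
        orbit[n] = theta, p
    return orbit

# Two sample orbits at K = 0.97: one started away from the origin and one near
# the unstable fixed point at (0, 0), whose neighbourhood typically lies in the
# chaotic separatrix layer.
orbit_a = standard_map_orbit(theta0=2.0, p0=0.5, K=0.97, n_steps=5_000)
orbit_b = standard_map_orbit(theta0=0.1, p0=0.1, K=0.97, n_steps=5_000)
```

Scatter-plotting many such orbits for different initial conditions (for example with matplotlib) reproduces the familiar mixture of closed invariant curves and chaotic seas, with the chaotic fraction growing as K increases.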
The properties of chaos of the standard map were established by Boris Chirikov in 1969.
Physical model
This map describes the Poincaré's surface of section of the motion of a simple mechanical system known as the kicked rotator. The kicked rotator consists of a stick that is free of the gravitational force, which can rotate frictionlessly in a plane around an axis located in one of its tips, and which is periodically kicked on the other tip.
The standard map is a surface of section applied by a stroboscopic projection on the variables of the kicked rotator. The variables and respectively determine the angular position of the stick and its angular momentum after the n-th kick. The constant K measures the intensity of the kicks on the kicked rotator.
The kicked rotator approximates systems studied in the fields of mechanics of particles, accelerator physics, plasma physics, and solid state physics. For example, circular particle accelerators accelerate particles by applying periodic kicks, as they circulate in the beam tube. Thus, the structure of the beam can be approximated by the kicked rotor. However, this map is interesting from a fundamental point of view in physics and mathematics because it is a very simple model of a conservative system that displays Hamiltonian chaos. It is therefore useful to study the development of chaos in this kind of system.
Main properties
For K = 0 the map is linear and only periodic and quasiperiodic orbits are possible. When plotted in phase space (the θ–p plane), periodic orbits appear as closed curves, and quasiperiodic orbits as necklaces of closed curves whose centers lie in another larger closed curve. Which type of orbit is observed depends on the map's initial conditions.
Nonlinearity of the map increases with K, and with it the possibility to observe chaotic dynamics for appropriate initial conditions. This is illustrated in the figure, which displays a collection of different orbits of the standard map for various values of K. All the orbits shown are periodic or quasiperiodic, with the exception of the green one, which is chaotic and develops in a large region of phase space as an apparently random set of points. Particularly remarkable is the extreme uniformity of the distribution in the chaotic region, although this can be deceptive: even within the chaotic regions, there are an infinite number of diminishingly small islands that are never visited during iteration, as shown in the close-up.
Circle map
The standard map is related to the circle map, which has a single, similar iterated equation:
θ_{n+1} = θ_n + Ω - (K/2π) sin(2π θ_n)
as compared to
θ_{n+1} = θ_n + p_n + K sin(θ_n)
p_{n+1} = θ_{n+1} - θ_n
for the standard map, the equations reordered to emphasize similarity. In essence, the circle map forces the momentum to a constant.
See also
Ushiki's theorem
Notes
References
link
Springer link
External links
Standard map at MathWorld
Chirikov standard map at Scholarpedia
Website dedicated to Boris Chirikov
Interactive Java Applet visualizing orbits of the Standard Map, by Achim Luhn
Mac Application for the Standard Map, by James Meiss
Interactive Javascript Applet Standard Map on experiences.math.cnrs.fr
Chaotic maps
Articles containing video clips | Standard map | [
"Mathematics"
] | 694 | [
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
9,503,418 | https://en.wikipedia.org/wiki/Marlette%20Lake%20Water%20System | The Marlette Lake Water System was created to provide water for the silver mining boom in Virginia City, Nevada. These structures are now listed as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers, and are also listed on the National Register of Historic Places. The listed area included two contributing buildings and 12 contributing structures on . It has also been known historically as the Virginia and Gold Hill Water Company Water System.
The mines required large amounts of water and timber to supply the houses and mines in Virginia City and Gold Hill. To feed these mines, the dam at Carson Tahoe Lumber and Fluming Company's Marlette Lake was increased, and Hobart Reservoir was created, and a number of flumes and pipelines were built to transport water down to Virginia City. This included a 3,994-foot-long tunnel through the watershed basin divide, and an ingenious inverted siphon pipe to get water through Washoe Valley. The Virginia and Gold Hill Water Company Marlette flume location is now a trail for mountain biking and hiking.
The collection portion of the water system is now located inside Lake Tahoe-Nevada State Park.
History
Civil engineer Hermann Schussler was hired in 1871 as a consultant by the Virginia and Gold Hill Water Company to design a pipeline to carry water from the east slope of the Carson Range to a ridge above the town of Gold Hill, approximately 7 miles. The maximum head at the low point of the siphon was approximately 1,870 feet, or 810 psi. This pressure, which was the highest head pipeline in the world when the project was completed in 1873, was double the next highest head pipeline, the Cherokee Mining Company inverted siphon in California.
Virginia City was the biggest high-grade silver and gold ore producer of the United States in the mid-1800s. Natural springs supplied water to the camps at the beginning of the mining activities. To address the need for more water caused by the population growth, the Virginia and Gold Hill Water Company was established. Water was primarily drawn from tunnels that had been driven into the mountains by prospectors. The water was stored in wooden tanks, and later was sent to the towns through pipes. As demand for the water increased, additional sources of water were needed. The Virginia and Gold Hill Water Company determined that it needed to bring water from the eastern Sierra Nevada and hired Hermann Schussler to design a system. The 7 miles of pipeline were constructed in 6 weeks, a significant accomplishment in a time before powered construction equipment.
Description
The ultimate Marlette Lake Water System, completed in 1887, involved a pipeline that was 21.5 miles long, a 45.7-mile-long flume, an inclined tunnel 3,994 feet long, and a storage reservoir with a capacity of over 6,200 acre feet. This water system could deliver around 6 million gallons of water per day (GPD). The initial stage of the project included the construction of a diversion dam on Hobart Creek, a 4.6-mile-long wooden flume from the dam to an inlet tank, and the 7 miles of twelve-inch riveted wrought iron pipeline, the inverted siphon. Another flume from the Hobart Diversion Reservoir and a second inverted siphon were completed in 1875 by the water company. The inclined tunnel through the Sierra, about 4,000 feet long, was completed in 1877.
See also
List of Historic Civil Engineering Landmarks
References
Lake Tahoe-Nevada State Park website
Lake Tahoe-Nevada State Park - Marlette-Hobart Backcountry
Flume Trail Ride Description
External links
American Society of Civil Engineers - Marlette Lake Water System
United States Geologic Survey (USGS), professional paper series- The story of the water supply for the Comstock.
Marlette Flume Trail
Infrastructure completed in 1873
Lake Tahoe
History of Storey County, Nevada
Buildings and structures in Storey County, Nevada
National Register of Historic Places in Carson City, Nevada
Water supply infrastructure on the National Register of Historic Places
Industrial buildings and structures on the National Register of Historic Places in Nevada
Reservoirs in Nevada
Historic Civil Engineering Landmarks
Historic districts on the National Register of Historic Places in Nevada
1873 establishments in Nevada | Marlette Lake Water System | [
"Engineering"
] | 842 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
9,503,459 | https://en.wikipedia.org/wiki/Emilio%20Zavattini | Emilio Zavattini (March 14, 1927 – January 9, 2007) was an Italian particle physicist.
Biography
He was born in Rimini, Italy and enrolled in the University of Rome La Sapienza as a physics student in 1950 and earned his doctorate in 1954.
Zavattini joined CERN in 1955 and remained a staff member until he retired in 1992. Early in this period he made a short post-doctoral visit to Nevis Laboratories at Columbia University, where he worked with Leon Lederman. He also held a position as a professor at the University of Trieste from 1988 to 1999.
Zavattini is known for the muon g-2 experiment and the PVLAS experiment at the INFN Laboratory in Legnaro (Padua, Italy). He made contributions within the fields of strong, weak and electromagnetic interactions—especially using muons—both at CERN and at other European and U.S. laboratories. In later years his studies focused on a better understanding of the structure of vacuum.
He was a member of the Accademia dei Lincei.
Zavattini died at the age of 79 of a heart attack.
External links
Scientific publications of Emilio Zavattini on INSPIRE-HEP
Links to scientific papers (partial list)
Homepage of PVLAS experiment
References
1927 births
2007 deaths
People from Rimini
20th-century Italian physicists
People associated with CERN
Particle physicists | Emilio Zavattini | [
"Physics"
] | 290 | [
"Particle physicists",
"Particle physics"
] |
9,503,702 | https://en.wikipedia.org/wiki/Fauna%20of%20Great%20Britain | The island of Great Britain, along with the rest of the archipelago known as the British Isles, has a largely temperate climate. It contains a relatively small fraction of the world's wildlife. The biota was severely diminished in the last ice age, and shortly (in geological terms) thereafter was separated from the continent by the English Channel's formation. Since then, humans have hunted the most dangerous forms (the wolf, the brown bear and the wild boar) to extinction, though domesticated forms such as the dog and the pig remain. The wild boar has subsequently been reintroduced as a meat animal.
Overview
In most of Great Britain there is a temperate climate, with high levels of precipitation and medium levels of sunlight. Further northwards, the climate becomes colder and coniferous forests appear, replacing the largely deciduous forests of the south. There are a few variations in the generally temperate British climate, with some areas of subarctic conditions, such as the Scottish Highlands and Teesdale, and even sub-tropical in the Isles of Scilly. Plants have to cope with seasonal changes across the British Isles, such as in levels of sunlight, rainfall and temperature, as well as the risk of snow and frost during the winter.
Since the mid 18th century, Great Britain has gone through industrialisation and increasing urbanisation. A DEFRA study from 2006 suggested that 100 species became extinct in the UK during the 20th century: about 100 times the background extinction rate. This has had a major impact on indigenous animal populations. Song birds in particular are becoming scarcer, and habitat loss has affected larger mammalian species. Some species have however adapted to the expanding urban environment, particularly the red fox, which is the most successful urban mammal after the brown rat, and other creatures such as common wood pigeon.
Invertebrates
Molluscs
There are 220 species of non-marine molluscs that have been recorded as living in the wild in Britain. Two of them (Fruticicola fruticum and Cernuella neglecta) are locally extinct. In addition there are 14 gastropod species that live only in greenhouses.
Insects
Vertebrates
Amphibians
The species of amphibian native to Britain are the great crested newt, smooth newt, palmate newt, common toad, natterjack toad, common frog and the pool frog. Several other species have become naturalised.
Reptiles
Like many temperate areas, Great Britain has few snake species: the European adder is the only venomous snake to be found there. The other notable snakes found in Great Britain are the barred grass snake and the smooth snake. Great Britain has three native species of lizard: slowworms, sand lizards and viviparous lizards. There are also turtles, such as leatherback turtles to be found in the Irish Sea, although these are rarely seen. Other reptile species exist but are not native: aesculapian snake, wall lizard and green lizard.
Birds
In general the avifauna of Britain is similar to that of Europe, consisting largely of Palaearctic species. As an island, it has fewer breeding species than continental Europe. Some species, like the crested lark, breed as close as northern France, but have not colonised Britain. The mild winters mean that many species that cannot cope with harsher conditions can winter in Britain, and also that there is a large influx of wintering birds from the European continent and beyond. There are about 250 species regularly recorded in Great Britain, and another 350 that occur with varying degrees of rarity.
Mammals
Large mammals are not particularly numerous in Great Britain. Many of the large mammal species, such as the grey wolf and the brown bear, were hunted to extinction many centuries ago. However, in recent times some of these large mammals have been tentatively reintroduced to some areas of Britain. The largest wild mammals that remain in Britain today are predominantly members of the deer family. The red deer is the largest native mammal species, and is common throughout England, Scotland and Wales.
The other indigenous species is the roe deer. The common fallow deer was not naturally present in Britain during the Holocene, having been brought over from France by the Normans in the late 11th century. It has become well established, though the fallow deer was naturally present in Britain during the previous Eemian interglacial. The sika deer is another small species of deer which is not indigenous, originating from Japan. It is widespread and expanding in Scotland from west to east, with a strong population in Peeblesshire. Bands of sika exist across the north and south of England, though the species is absent in Wales.
There are also several species of insectivore found in Britain. The hedgehog is probably the most widely known as it is a regular visitor to urban gardens. The mole is also widely recognised and its subterranean lifestyle causes much damage to garden lawns. Shrews are also fairly common, and the smallest, the pygmy shrew, is one of the smallest mammals in the world. There are also seventeen species of bat found in Britain: the pipistrelle is the smallest and the most common.
Rodents are also numerous across Britain, particularly the brown rat which is by far the most abundant urban mammal after humans. Some however, are becoming increasingly rare. Habitat destruction has led to a decrease in the population of dormice and bank voles found in Britain. Due to the introduction of the North American grey squirrel, the red squirrel had become largely extinct in England and Wales, with the last populations existing in parts of North West England and on the Isle of Wight. European rabbit and European hare were introduced in Roman times, while the indigenous mountain hare remains only in Scotland and a small re-introduced population in Derbyshire.
Eurasian beavers were formerly native to Britain before becoming extinct by the early 16th century due to hunting. Efforts are being made to reintroduce beavers.
There are a variety of carnivores, especially from the weasel family (ranging in size from the weasel, stoat and European polecat to the European badger, pine marten, recently introduced mink and semiaquatic otter). In the absence of the locally extinct grey wolf and brown bear, the largest carnivores are the badger; the red fox, whose adaptability and opportunism have allowed it to proliferate in the urban environment; and the European wildcat, whose elusiveness has caused some confusion over population numbers and which is believed to be highly endangered, partly through hybridisation with the domestic cat.
Various species of seal and dolphin are found seasonally on British shores and coastlines, along with harbour porpoises, orcas, and many other sea mammals.
Fish
Great Britain has about forty species of native freshwater fish, of which the largest is the salmon. The saltwater fish include some larger species such as sharks.
Extinct or extirpated animals
During the previous Eemian Interglacial (130-115,000 years ago) when Britain had a similar or slightly warmer temperate climate as it does today, the large mammal fauna of Britain was considerably more diverse than it is at present or earlier in the Holocene. Large herbivore species present during the Eemian not present in Britain during the Holocene include the large straight-tusked elephant, the narrow-nosed rhinoceros, the hippopotamus, Irish elk and bison, in addition to the currently present roe, fallow and red deer. Large carnivores present during this time include hyenas (Crocuta spelaea) and lions (Panthera spelaea) in addition to wolves and brown bears. During the Holocene, Britain was inhabited by the aurochs (the wild ancestor of modern domestic cattle) until its extinction around 3,500 years ago. The Eurasian lynx was also formerly native to Britain during the Holocene, with its youngest records dating to around 1,500 years ago during the early Medieval period. The moose/elk was present in Britain during the early Holocene, but became extinct by around 5600 years ago. The European pond turtle was also present in Britain during the Holocene (as it had been during the Eemian), with the youngest radiocarbon-dated records dating to around 5,500 years ago.
See also
Biodiversity in British Overseas Territories
Fauna of Europe
Fauna of Scotland
Flora and fauna of the Outer Hebrides
Flora and fauna of Cornwall
Fauna of the Isles of Scilly
Fauna of Ireland
Fauna of England
Atlases of the flora and fauna of Britain and Ireland
Biota of the Isle of Man
List of endangered species in the British Isles
Introduced species of the British Isles
List of extinct animals of Britain
Animal welfare in the United Kingdom
References
Footnotes
Bibliography
Clarke, Philip; Jackman, Brian; and Mercer, Derrik (eds): The Sunday Times Book of the Countryside. London: Macdonald General Books, 1980.
Citations
External links
Warmer seas bring rare turtles to Britain
More than a quarter of UK mammals face extinction
Great Britain
Biota of archipelagoes | Fauna of Great Britain | [
"Biology"
] | 1,830 | [
"Biota of archipelagoes",
"Biota by biogeographic realm"
] |
9,503,943 | https://en.wikipedia.org/wiki/Equine%20nutrition | Equine nutrition is the feeding of horses, ponies, mules, donkeys, and other equines. Correct and balanced nutrition is a critical component of proper horse care.
Horses are non-ruminant herbivores of a type known as a "hindgut fermenter." Horses have only one stomach, as do humans. However, unlike humans, they also need to digest plant fiber (largely cellulose) that comes from grass or hay. Ruminants like cattle are foregut fermenters, and digest fiber in plant matter by use of a multi-chambered stomach, whereas horses use microbial fermentation in a part of the digestive system known as the cecum (or caecum) to break down the cellulose.
In practical terms, horses prefer to eat small amounts of food steadily throughout the day, as they do in nature when grazing on pasture lands. Although this is not always possible with modern stabling practices and human schedules that favor feeding horses twice a day, it is important to remember the underlying biology of the animal when determining what to feed, how often, and in what quantities.
The digestive system of the horse is somewhat delicate. Horses are unable to regurgitate food, except from the esophagus. Thus, if they overeat or eat something poisonous, vomiting is not an option. They also have a long, complex large intestine and a balance of beneficial microbes in their cecum that can be upset by rapid changes in feed. Because of these factors, they are very susceptible to colic, which is a leading cause of death in horses. Therefore, horses require clean, high-quality feed provided at regular intervals, plus water, and may become ill if subjected to abrupt changes in their diets. Horses are also sensitive to molds and toxins. For this reason, they must never be fed contaminated fermentable materials such as lawn clippings. Fermented silage or "haylage" is fed to horses in some places; however, contamination or failure of the fermentation process that allows any mold or spoilage may be toxic.
The digestive system
Horses and other members of the genus Equus are adapted by evolutionary biology to eating small amounts of the same kind of food all day long. In the wild, horses ate prairie grasses in semi-arid regions and traveled significant distances each day in order to obtain adequate nutrition. Therefore, their digestive system was made to work best with a small but steady flow of food that does not change much from day to day.
Chewing and swallowing
Digestion begins in the mouth. First, the animal selects pieces of forage and picks up finer foods, such as grain, with sensitive, prehensile, lips. The front teeth of the horse, called incisors, nip off forage, and food is ground up for swallowing by the premolars and molars.
The esophagus carries food to the stomach. The esophagus enters the stomach at an acute angle, creating a one-way valve, with a powerful sphincter mechanism at the gastroesophageal junction, which is why horses cannot vomit. The esophagus is also the area of the digestive tract where horses may suffer from choke. (see Illnesses related to improper feeding below)
The stomach and small intestine
Horses have a small stomach for their large size, which limits the amount of food that can be taken in at one time. The average sized horse has a stomach with a capacity of only , and works best when it contains about . One reason continuous foraging or several small feedings per day are better than one or two large meals is because the stomach begins to empty when it is two-thirds full, whether the food in the stomach is processed or not.
The small intestine is long and holds to . This is the major digestive organ where 50 to 70 percent of all nutrients are absorbed into the bloodstream. Bile from the liver acts here, combined with enzymes from the pancreas and small intestine itself. Equids do not have a gall bladder, so bile flows constantly, an adaptation to a slow but steady supply of food, and another reason for providing fodder to horses in several small feedings.
The cecum and large intestine
The cecum is the first section of the large intestine. It is also known as the "water gut" or "hind gut." It is a blind-ended pouch, about long that holds to . The small intestine opens into the cecum, and the cellulose plant fiber in the food is fermented by microbes for approximately seven hours. The fermented material leaves the cecum through another orifice and passes to the large colon. The microbes in the cecum produce vitamin K, B-complex vitamins, proteins, and fatty acids. The reason horses must have their diets changed slowly is so the microbes in the cecum are able to modify and adapt to the different chemical structure of new feedstuffs. Too abrupt a change in diet can cause colic, because new materials are not properly digested.
The large colon, small colon, and rectum make up the remainder of the large intestine. The large colon is long and holds up to of semi-liquid matter. Its main purpose is to absorb carbohydrates which were broken down from cellulose in the cecum. Due to its many twists and turns, it is a common place for a type of horse colic called an impaction. The small colon is also long, holds about , is the area where the majority of water is absorbed, and where fecal balls are formed. The rectum is about one foot long, and acts as a holding chamber for waste, which is then expelled from the body via the anus.
Nutrients
Like all animals, equines require five main classes of nutrients to survive: water, energy (primarily in the form of fats and carbohydrates), proteins, vitamins, and minerals.
Water
Water makes up between 62-68% of a horse's body weight and is essential for life. Horses can only live a few days without water, becoming dangerously dehydrated if they lose 8-10% of their natural body water. Therefore, it is critically important for horses to have access to a fresh, clean, and adequate supply of water.
An average horse drinks of water per day, more in hot weather, when eating dry forage such as hay, or when consuming high levels of salt, potassium, and magnesium. Horses drink less water in cool weather or when on lush pasture, which has a higher water content. When under hard work, or if a mare is lactating, water requirements may be as much as four times greater than normal. In the winter, snow is not a sufficient source of water for horses. Though they need a great deal of water, horses spend very little time drinking; usually 1–8 minutes a day, spread out in 2-8 episodes.
Water plays an important part in digestion. The forages and grains horses eat are mixed with saliva in the mouth to make a moist bolus that can be easily swallowed. Therefore, horses produce up to 85 lb of saliva per day.
Energy nutrients and protein
Nutritional sources of energy are fat and carbohydrates. Protein is a critical building block for muscles and other tissues. Horses that are heavily exercised, growing, pregnant or lactating need increased energy and protein in their diet. However, if a horse has too much energy in its diet and not enough exercise, it can become too high-spirited and difficult to handle.
Fat exists in low levels in plants and can be added to increase the energy density of the diet. Per kilogram, fat provides about 2.25 times the energy of any carbohydrate source. Because equids have no gall bladder to store large quantities of bile, which flows continuously from the liver directly into the small intestine, fat, though a necessary nutrient, is difficult for them to digest and utilize in large quantities. However, they are able to digest a greater amount of fat than can cattle. Horses benefit from up to 8% fat in their diets, but more does not always provide a visible benefit. Horses can only have 15-20% fat in their diet without the risk of developing diarrhea.
Carbohydrates, the main energy source in most rations, are usually fed in the form of hay, grass, and grain. Soluble carbohydrates such as starches and sugars are readily broken down to glucose in the small intestine and absorbed. Insoluble carbohydrates, such as fiber (cellulose), are not digested by the horse's own enzymes, but are fermented by microbes in the cecum and large colon to break down and release their energy sources, volatile fatty acids.
Soluble carbohydrates are found in nearly every feed source; corn has the highest amount, then barley and oats. Forages normally have only 6-8% soluble carbohydrate, but under certain conditions can have up to 30%. Sudden ingestion of large amounts of starch or high sugar feeds can cause at the least an indigestion colic, and at the worst potentially fatal colitis or laminitis.
Protein is used in all parts of the body, especially muscle, blood, hormones, hooves, and hair cells. The main building blocks of protein are amino acids. Alfalfa and other legumes in hay are good sources of protein that can be easily added to the diet. Most adult horses only require 8-10% protein in their diet; however, higher protein is important for lactating mares and young growing foals.
Vitamins and minerals
Horses that are not subjected to hard work or extreme conditions usually have more than adequate amounts of vitamins in their diet if they are receiving fresh, green, leafy forages. Sometimes a vitamin/mineral supplement is needed when feeding low-quality hay, if a horse is under stress (illness, traveling, showing, racing, and so on), or not eating well. Grain has a different balance of nutrients than forage, and so requires specialized supplementation to prevent an imbalance of vitamins and minerals.
Minerals are required for maintenance and function of the skeleton, nerves, and muscles. These include calcium, phosphorus, sodium, potassium, and chloride, and are commonly found in most good-quality feeds. Horses also need trace minerals such as magnesium, selenium, copper, zinc, and iodine. Normally, if adult animals at maintenance levels are consuming fresh hay or are on pasture, they will receive adequate amounts of minerals in their diet, with the exception of sodium chloride (salt), which needs to be provided, preferably free choice. Some pastures are deficient in certain trace minerals, including selenium, zinc, and copper, and in such situations, health problems, including deficiency diseases, may occur if horses' trace mineral intake is not properly supplemented.
Calcium and phosphorus are needed in a specific ratio of between 1:1 and 2:1. Adult horses can tolerate up to a 5:1 ratio, foals no more than 3:1. A total ration with a higher ratio of phosphorus than calcium is to be avoided. Over time, imbalance will ultimately lead to a number of possible bone-related problems such as osteoporosis.
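As a small illustration of the ratio rules above, the following sketch checks a total ration's calcium-to-phosphorus ratio; the nutrient amounts in the example calls are made-up numbers for demonstration, not feeding recommendations.

```python
def ca_p_ratio_acceptable(calcium_g, phosphorus_g, is_foal=False):
    """Check a total ration's Ca:P ratio against the tolerances described above:
    never more phosphorus than calcium, and a ratio no higher than about 5:1
    for adult horses or 3:1 for foals (the ideal range being 1:1 to 2:1)."""
    if phosphorus_g <= 0 or calcium_g < phosphorus_g:
        return False
    upper_limit = 3.0 if is_foal else 5.0
    return (calcium_g / phosphorus_g) <= upper_limit

print(ca_p_ratio_acceptable(calcium_g=40, phosphorus_g=25))               # 1.6:1 -> True
print(ca_p_ratio_acceptable(calcium_g=60, phosphorus_g=15, is_foal=True)) # 4:1  -> False
```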
Foals and young growing horses through their first three to four years have special nutritional needs and require feeds that are balanced with a proper calcium:phosphorus ratio and other trace minerals. A number of skeletal problems may occur in young animals with an unbalanced diet. Hard work increases the need for minerals; sweating depletes sodium, potassium, and chloride from the horse's system. Therefore, supplementation with electrolytes may be required for horses in intense training, especially in hot weather.
Types of feed
Equids can consume approximately 2–2.5% of their body weight in dry feed each day. Therefore, a adult horse could eat up to of food. Foals less than six months of age eat 2-4% of their weight each day.
Solid feeds are placed into three categories: forages (such as hay and grass), concentrates (including grain or pelleted rations), and supplements (such as prepared vitamin or mineral pellets). Equine nutritionists recommend that 50% or more of the animal's diet by weight should be forages. If a horse is working hard and requires more energy, the use of grain is increased and the percentage of forage decreased so that the horse obtains the energy content it needs for the work it is performing. However, forage amount should never go below 1% of the horse's body weight per day.
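A minimal sketch of the intake arithmetic described above; the 500 kg body weight is an arbitrary example value, not a figure from the article.

```python
def daily_dry_feed_kg(body_weight_kg, intake_fraction=0.025):
    """Approximate daily dry-feed consumption for an adult at 2-2.5% of body weight."""
    return body_weight_kg * intake_fraction

def minimum_forage_kg(body_weight_kg, total_feed_kg):
    """Forage should make up at least 50% of the ration by weight and
    never fall below 1% of body weight per day."""
    return max(0.5 * total_feed_kg, 0.01 * body_weight_kg)

weight_kg = 500  # example horse
feed_kg = daily_dry_feed_kg(weight_kg)
print(f"Up to {feed_kg:.1f} kg dry feed/day, of which at least "
      f"{minimum_forage_kg(weight_kg, feed_kg):.1f} kg should be forage")
```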
Forages
Forages, also known as "roughage," are plant materials classified as legumes or grasses, found in pastures or in hay. Often, pastures and hayfields will contain a blend of both grasses and legumes. Nutrients available in forage vary greatly with maturity of the grasses, fertilization, management, and environmental conditions. Grasses are tolerant of a wide range of conditions and contain most necessary nutrients. Some commonly used grasses include timothy, brome, fescue, coastal Bermuda, orchard grass, and Kentucky bluegrass. Another type of forage sometimes provided to horses is beet pulp, a byproduct left over from the processing of sugar beets, which is high in energy as well as fiber.
Legumes such as clover or alfalfa are usually higher in protein, calcium, and energy than grasses. However, they require warm weather and good soil to produce the best nutrients. Legume hays are generally higher in protein than the grass hays. They are also higher in minerals, particularly calcium, but have an incorrect ratio of calcium to phosphorus. Because they are high in protein, they are very desirable for growing horses or those subjected to very hard work, but the calcium:phosphorus ratio must be balanced by other feeds to prevent bone abnormalities.
Hay is a dried mixture of grasses and legumes. It is cut in the field and then dried and baled for storage. Hay is most nutritious when it is cut early on, before the seed heads are fully mature and before the stems of the plants become tough and thick. Hay that is very green can be a good indicator of the amount of nutrients in the hay; however, color is not the sole indicator of quality—smell and texture are also important. Hay can be analyzed by many laboratories and that is the most reliable way to tell the nutritional values it contains.
Hay, particularly alfalfa, is sometimes compressed into pellets or cubes. Processed hay can be of more consistent quality and is more convenient to ship and to store. It is also easily obtained in areas that may be suffering localized hay shortages. However, these more concentrated forms can be overfed and horses are somewhat more prone to choke on them. On the other hand, hay pellets and cubes can be soaked until they break apart into a pulp or thick slurry, and in this state are a very useful source of food for horses with tooth problems such as dental disease, tooth loss due to age, or structural anomalies.
Haylage, also known as Round bale silage is a term for grass sealed in airtight plastic bags, a form of forage that is frequently fed in the United Kingdom and continental Europe, but is not often seen in the United States. Because haylage is a type of silage, hay stored in this fashion must remain completely sealed in plastic, as any holes or tears can stop the preservation properties of fermentation and lead to mold or spoilage. Rodents chewing through the plastic can also spoil the hay introducing contamination to the bale. If a rodent dies inside the plastic, the subsequent botulism toxins released can contaminate the entire bale.
Sometimes, straw or chaff is fed to animals. However, this is roughage with little nutritional value other than providing fiber. It is sometimes used as a filler; it can slow down horses who eat their grain too fast, or it can provide additional fiber when the horse must meet most nutritional needs via concentrated feeds. Straw is more often used as a bedding in stalls to absorb wastes.
Concentrates
Grains
Whole or crushed grains are the most common form of concentrated feed, sometimes referred to generically as "oats" or "corn" even if those grains are not present, also sometimes called straights in the UK.
Oats are the most popular grain for horses. Oats have a lower digestible energy value and higher fiber content than most other grains. They form a loose mass in the stomach that is well suited to the equine digestive system. They are also more palatable and digestible than other grains.
Corn (USA), or maize (British English), is the second most palatable grain. It provides twice as much digestible energy as an equal volume of oats and is low in fiber. Because of these characteristics, it is easy to over-feed, causing obesity, so horses are seldom fed corn all by itself. Nutritionists caution that moldy corn is poisonous if fed to horses.
Barley is also fed to horses, but needs to be processed to crack the seed hull and allow easier digestibility. It is frequently fed in combination with oats and corn, a mix informally referred to by the acronym "COB" (for Corn, Oats and Barley).
Wheat is generally not used as a concentrate. However, wheat bran is sometimes added to the diet of a horse for supplemental nutrition, usually moistened and in the form of a bran mash. Wheat bran is high in phosphorus, so must be fed carefully so that it does not cause an imbalance in the Ca:P ratio of a ration. Once touted for a laxative effect, this use of bran is now considered unnecessary, as horses, unlike humans, obtain sufficient fiber in their diets from other sources.
Mixes and pellets
Many feed manufacturers combine various grains and add additional vitamin and mineral supplements to create a complete premixed feed that is easy for owners to feed and of predictable nutritional quality. Some of these prepared feeds are manufactured in pelleted form, others retain the grains in their original form. In many cases molasses is used as a binder to keep down dust and for increased palatability. Grain mixes with added molasses are usually called "sweet feed" in the United States and "coarse mix" in the United Kingdom. Pelleted or extruded feeds (sometimes referred to as "nuts" in the UK) may be easier to chew and result in less wasted feed. Horses generally eat pellets as easily as grain. However, pellets are also more expensive, and even "complete" rations do not eliminate the necessity for forage.
Supplements
The average modern horse on good hay or pasture with light work usually does not need supplements; however, horses subjected to stress due to age, intensive athletic work, or reproduction may need additional nutrition. Extra fat and protein are sometimes added to the horse's diet, along with vitamin and mineral supplements. There are hundreds, if not thousands of commercially prepared vitamin and mineral supplements on the market, many tailored to horses with specialized needs.
Soybean meal is a common protein supplement, and averages about 44% crude protein. The protein in soybean meal is high-quality, with the proper ratio of dietary essential amino acids for equids. Cottonseed meal, linseed meal, and peanut meal are also used, but are not as common.
Feeding practices
Most horses only need quality forage, water, and a salt or mineral block. Grain or other concentrates are often not necessary. But, when grain or other concentrates are fed, quantities must be carefully monitored. To do so, horse feed is measured by weight, not volume. For example, of oats has a different volume than of corn. When continuous access to feed is not possible, it is more consistent with natural feeding behavior to provide three small feedings per day instead of one or two large ones. However, even two daily feedings is preferable to only one. To gauge the amount to feed, a weight tape can be used to provide a reasonably accurate estimate of a horse's weight. The tape measures the circumference of the horse's barrel, just behind the withers and elbows, and the tape is calibrated to convert circumference into approximate weight.
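The article does not give the calibration a weight tape uses, so the sketch below uses the common heart-girth estimator as a stand-in; both the formula and the measurements are assumptions for illustration, not a substitute for an actual weight tape or scale.

```python
def estimate_weight_lb(heart_girth_in, body_length_in):
    """Common heart-girth weight estimate (an assumed formula, not from the article):
    weight (lb) ~ girth^2 * body length / 330, with measurements in inches."""
    return (heart_girth_in ** 2) * body_length_in / 330.0

# Hypothetical measurements for illustration only.
print(round(estimate_weight_lb(heart_girth_in=75, body_length_in=63)), "lb")  # ~1074 lb
```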
Actual amounts fed vary by the size of the horse, the age of the horse, the climate, and the work to which the animal is put. In addition, genetic factors play a role. Some animals are naturally easy keepers (good doers), which means that they can thrive on small amounts of food and are prone to obesity and other health problems if overfed. Others are hard keepers (poor doers), meaning that they are prone to be thin and require considerably more food to maintain a healthy weight.
Veterinarians are usually a good source for recommendations on appropriate types and amounts of feed for a specific horse. Animal nutritionists are also trained in how to develop equine rations and make recommendations. There are also numerous books written on the topic. Feed manufacturers usually offer very specific guidelines for how to select and properly feed products from their company, and in the United States, the local office of the Cooperative Extension Service can provide educational materials and expert recommendations.
Feeding forages
Equids always require forage. When possible, nutritionists recommend it be available at all times, at least when doing so does not overfeed the animal and lead to obesity. It is safe to feed a ration that is 100% forage (along with water and supplemental salt), and any feed ration should be at least 50% forage. Hay with alfalfa or other legumes has more concentrated nutrition and so is fed in smaller amounts than grass hay, though many hays have a mixture of both types of plant.
When beet pulp is fed, a ration of to is usually soaked in water for 3 to 4 hours prior to feeding in order to make it more palatable, and to minimize the risk of choke and other problems. It is usually soaked in a proportion of one part beet pulp to two parts water. Beet pulp is usually fed in addition to hay, but occasionally is a replacement for hay when fed to very old horses who can no longer chew properly. It is available in both pelleted and shredded form, pellets must be soaked significantly longer than shredded beet pulp.
Some pelleted rations are designed to be a "complete" feed that contains both hay and grain, meeting all the horse's nutritional needs. However, even these rations should have some hay or pasture provided, a minimum of a half-pound of forage for every of horse, in order to keep the digestive system functioning properly and to meet the horse's urge to graze.
When horses graze under natural conditions, they may spend up to 18 hours per day doing so. However, on modern irrigated pastures, they may have their nutritional needs for forage met in as little as three hours per day, depending on the quality of grass available.
Recent studies address the level of various non-structural carbohydrates (NSC), such as fructan, in forages. Too high an NSC level causes difficulties for animals prone to laminitis or equine polysaccharide storage myopathy (EPSM). NSC cannot be determined by looking at forage, but hay and pasture grasses can be tested for NSC levels.
Feeding concentrates
Concentrates, when fed, are recommended to be provided in quantities no greater than 1% of a horse's body weight per day, and preferably in two or more feedings of no more than 0.5% of body weight each. If a ration needs to contain a higher percent of concentrates, such as that of a race horse, bulky grains such as oats should be used as much as possible; a loose mass of feed helps prevent impaction colic. Peptic ulcers are linked to a too-high concentration of grain in the diet, particularly noticed in modern racehorses, where some studies show such ulcers affecting up to 90% of all race horses.
In general, the portion of the ration that should be grain or other concentrated feed is 0-10% grain for mature idle horses; between 20-70% for horses at work, depending on age, intensity of activity, and energy requirements. Concentrates should not be fed to horses within one hour before or after a heavy workout. Concentrates also need to be adjusted to level of performance. Not only can excess grain and inadequate exercise lead to behavior problems, it may also trigger serious health problems that include Equine Exertional Rhabdomyolysis, or "tying up," in horses prone to the condition. Another possible risk are various forms of horse colic. A relatively uncommon, but usually fatal concern is colitis-X, which may be triggered by excess protein and lack of forage in the diet that allows for the multiplication of clostridial organisms, and is exacerbated by stress.
Access to water
Horses normally require free access to all the fresh, clean water they want, and to avoid dehydration, should not be kept from water longer than four hours at any one time. However, water may need to be temporarily limited in quantity when a horse is very hot after a heavy workout. As long as a hot horse continues to work, it can drink its fill at periodic intervals, provided that common sense is used and that an overheated horse is not forced to drink from extremely cold water sources. But when the workout is over, a horse needs to be cooled out and walked for 30–90 minutes before it can be allowed all the water it wants at one time. However, dehydration is also a concern, so some water needs to be offered during the cooling off process. A hot horse will properly rehydrate while cooling off if offered a few swallows of water every three to five minutes while being walked. Sometimes the thirst mechanism does not immediately kick in following a heavy workout, which is another reason to offer periodic refills of water throughout the cooling down period.
Even a slightly dehydrated horse is at higher risk of developing impaction colic. Additionally, dehydration can lead to weight loss because the horse cannot produce adequate amounts of saliva, thus decreasing the amount of feed and dry forage consumed. Thus, it is especially important for horse owners to encourage their horses to drink when there is a risk of dehydration; when horses are losing a great deal of water in hot weather due to strenuous work, or in cold weather due to horses' natural tendency to drink less when in a cold environment. To encourage drinking, owners may add electrolytes to the feed, additives to make the water especially palatable (such as apple juice), or, when it is cold, to warm the water so that it is not at a near-freezing temperature.
Special feeding issues for ponies
Ponies and miniature horses are usually easy keepers and need less feed than full-sized horses. This is not only because they are smaller, but also, because they evolved under harsher living conditions than horses, they use feed more efficiently. Ponies easily become obese from overfeeding and are at high risk for colic and, especially, laminitis. Fresh grass is a particular danger to ponies; they can develop laminitis in as little as one hour of grazing on lush pasture.
Incorrect feeding is also as much a concern as simple overfeeding. Ponies and miniatures need a diet relatively low in sugars and starches and calories, but higher in fibers. Miniature horses in particular need fewer calories pound for pound than a regular horse, and are more prone to hyperlipemia than regular horses, and are also at higher risk of developing equine metabolic syndrome.
It is important to track the weight of a pony carefully, by use of a weight tape. Forages may be fed based on weight, at a rate of about of forage for every . Forage, along with water and a salt and mineral block, is all most ponies require. If a hard-working pony needs concentrates, a ratio of no more than 30% concentrates to 70% forage is recommended. Concentrates designed for horses, with added vitamins and minerals, will often provide insufficient nutrients at the small serving sizes needed for ponies. Therefore, if a pony requires concentrates, feed and supplements designed specially for ponies should be used. In the UK, extruded pellets designed for ponies are sometimes called "pony nuts".
Special feeding issues for mules and donkeys
Like ponies, mules and donkeys are also very hardy and generally need less concentrated feed than horses. Mules need less protein than horses and do best on grass hay with a vitamin and mineral supplement. If mules are fed concentrates, they only need about half of what a horse requires. Like horses, mules require fresh, clean water, but are less likely to over-drink when hot.
Donkeys, like mules, need less protein and more fiber than horses. Although the donkey's gastrointestinal tract has no marked differences in structure to that of the horse, donkeys are more efficient at digesting food and thrive on less forage than a similar sized pony. They only need to eat 1.5% of their body weight per day in dry matter. It is not fully understood why donkeys are such efficient digestors, but it is thought that they may have a different microbial population in the large intestine than do horses, or possibly an increased gut retention time.
Donkeys do best when allowed to consume small amounts of food over long periods, as is natural for them in an arid climate. They can meet their nutritional needs on 6 to 7 hours of grazing per day on average dryland pasture that is not stressed by drought. If they are worked long hours or do not have access to pasture, they require hay or a similar dried forage, with no more than a 1:4 ratio of legumes to grass. They also require salt and mineral supplements, and access to clean, fresh water. Like ponies and mules, in a lush climate, donkeys are prone to obesity and are at risk of laminitis.
Treats
Many people like to feed horses special treats such as carrots, sugar cubes, peppermint candies, or specially manufactured horse "cookies." Horses do not need treats, and due to the risk of colic or choke, many horse owners do not allow their horses to be given treats. There are also behavioral issues that some horses may develop if given too many treats, particularly a tendency to bite if hand-fed, and for this reason many horse trainers and riding instructors discourage the practice.
However, if treats are allowed, carrots and compressed hay pellets are common, nutritious, and generally not harmful. Apples are also acceptable, though it is best if they are first cut into slices. Horse "cookies" are often specially manufactured out of ordinary grains and some added molasses. They generally will not cause nutritional problems when fed in small quantities. However, many types of human foods are potentially dangerous to a horse and should not be fed. This includes bread products, meat products, candy, and carbonated or alcoholic beverages.
It was once a common practice to give horses a weekly bran mash of wheat bran mixed with warm water and other ingredients. It is still done regularly in some places. While a warm, soft meal is a treat many horses enjoy, and was once considered helpful for its laxative effect, it is not nutritionally necessary. An old horse with poor teeth may benefit from food softened in water, a mash may help provide extra hydration, and a warm meal may be comforting in cold weather, but horses have far more fiber in their regular diet than do humans, and so any assistance from bran is unnecessary. There is also a risk that too much wheat bran may provide excessive phosphorus, unbalancing the diet, and a feed of unusual contents fed only once a week could trigger a bout of colic.
Feed storage
All hay and concentrated feeds must be kept dry and free of mold, rodent feces, and other types of contamination that may cause illness in horses. Feed kept outside or otherwise exposed to moisture can develop mold quite quickly. Due to fire hazards, hay is often stored under an open shed or under a tarp, rather than inside a horse barn itself, but should be kept under some kind of cover. Concentrates take up less storage space, are less of a fire hazard, and are usually kept in a barn or enclosed shed. A secure door or latched gate between the animals and any feed storage area is critical. Horses accidentally getting into stored feed and eating too much at one time is a common but preventable way that horses develop colic or laminitis. (see Illnesses related to improper feeding below)
It is generally not safe to give a horse feed that was contaminated by the remains of a dead animal. This is a potential source of botulism. This is not an uncommon situation. For example, mice and birds can get into poorly stored grain and be trapped; hay bales sometimes accidentally contain snakes, mice, or other small animals that were caught in the baling machinery during the harvesting process.
Feeding behavior
Horses can become anxious or stressed if there are long periods of time between meals. They also do best when they are fed on a regular schedule; they are creatures of habit and easily upset by changes in routine. When horses are in a herd, their behavior is hierarchical; the higher-ranked animals in the herd eat and drink first. Low-status animals, who eat last, may not get enough food, and if there is little available feed, higher-ranking horses may keep lower-ranking ones from eating at all. Therefore, unless a herd is on pasture that meets the nutritional needs of all individuals, it is important to either feed horses separately, or spread feed out in separate areas to be sure all animals get roughly equal amounts of food to eat. In some situations where horses are kept together, they may still be placed into separate herds, depending on nutritional needs; overweight horses are kept separate from thin horses so that rations may be adjusted accordingly. Horses may also eat in undesirable ways, such as bolting their feed, or eating too fast. This can lead to either choke or colic under some circumstances.
Dental issues
Horses' teeth continually erupt throughout their life, are worn down as they eat, and can develop uneven wear patterns that can interfere with chewing. For this reason, horses need a dental examination at least once a year, and particular care must be paid to the dental needs of older horses. The process of grinding off uneven wear patterns on a horse's teeth is called floating and can be performed by a veterinarian or a specialist in equine dentistry.
Illnesses related to improper feeding
Colic, choke, and laminitis can be life-threatening when a horse is severely affected, and veterinary care is necessary to properly treat these conditions. Other conditions, while not life-threatening, may have serious implications for the long-term health and soundness of a horse.
Colic
Horse colic itself is not a disease, but rather a description of symptoms connected to abdominal pain. It can occur due to any number of digestive upsets, from mild bloating due to excess intestinal gas to life-threatening impactions. Colic is most often caused by a change in diet, either a planned change that takes place too quickly, or an accidental change, such as a horse getting out of its barn or paddock and ingesting unfamiliar plants. But colic has many other possible triggers including insufficient water, an irregular feeding schedule, stress, and illness. Because the horse cannot vomit and has a limited capacity to detoxify harmful substances, anything upsetting to the horse must travel all the way through the digestive system to be expelled.
Choke
Choke is not as common as colic, but is nonetheless commonly considered a veterinary emergency. The most common cause of choke is horses not chewing their food thoroughly, usually because of eating their food too quickly, especially if they do not have sufficient access to water, but also sometimes due to dental problems that make chewing painful. It is exceedingly difficult for a horse to expel anything from the esophagus, and immediate treatment is often required. Unlike choking in humans, choke in horses does not cut off respiration.
Laminitis
Horses are also susceptible to laminitis, a disease of the lamina of the hoof. Laminitis has many causes, but the most common is related to a sugar and starch overload from a horse overeating certain types of food, particularly too much pasture grass high in fructan in early spring and late fall, or by consuming excessive quantities of grain.
Growth disorders
Young horses that are overfed or are fed a diet with an improper calcium:phosphorus ratio over time may develop a number of growth and orthopedic disorders, including osteochondrosis (OCD), angular limb deformities (ALD), and several conditions under the umbrella of the developmental orthopedic diseases (DOD). If not properly treated, damage can be permanent. However, they can be treated if caught in time, given proper veterinary care, and any improper feeding practices are corrected. Young horses being fed for rapid growth in order to be shown or sold as yearlings are at particularly high risk. Adult horses with an improper diet may also develop a range of metabolic problems.
Heaves
Moldy or dusty hay fed to horses is the most common cause of Recurrent airway obstruction, also known as COPD or "heaves." This is a chronic condition of horses involving an allergic bronchitis characterized by wheezing, coughing, and labored breathing.
"Tying up"
Equine exertional rhabdomyolysis, also known as "tying up" or azoturia, is a condition to which only some horses are susceptible and most cases are linked to a genetic mutation. In horses prone to the condition, it usually occurs when a day of rest on full grain ration is followed by work the next day. This pattern of clinical signs led to the archaic nickname "Monday morning sickness". The condition may also be related to electrolyte imbalance. Proper diet management may help minimize the risk of an attack.
See also
Easy keeper (US) Good doer (UK)
Fodder
Forage
Geriatric horses
Grain
Hard keeper (US) Poor doer (UK)
Hay
Henneke horse body condition scoring system
Horse body mass
Horse tongue
Horse care
List of plants poisonous to equines
Footnotes and other references
"Horse Nutrition - Table of Contents." Bulletin 762-00, Ohio State University. Web site accessed February 9, 2007.
Mowrey, Robert A. "Horse Feeding Management - Nutrient Requirements for Horses." from North Carolina Cooperative Extension Center (PDF) Web site accessed July 4, 2009.
Horse management
Animal nutrition | Equine nutrition | [
"Biology"
] | 8,167 | [
"Animals",
"Animal nutrition"
] |
9,504,079 | https://en.wikipedia.org/wiki/BT%20Smart%20Hub | The BT Smart Hub (formerly BT Home Hub) is a family of wireless residential gateway router modems distributed by BT for use with their own products and services and those of wholesale resellers (i.e. LLUs) but not with other Internet services. Since v 5, Home/Smart Hubs support the faster Wi-Fi 802.11ac standard, in addition to the 802.11b/g/n standards. All models of the Home Hub prior to Home Hub 3 support VoIP Internet telephony via BT's Broadband Talk service, and are compatible with DECT telephone handsets. Since the Home Hub 4, all models have been dual band (i.e. both 2.4GHz and 5GHz).
The BT Home Hub works with the now defunct BT Fusion service and with the BT Vision video on demand service. The BT Home Hub 1.0, 1.5 and 2.0 devices connect to the Internet using a standard ADSL connection. The BT Home Hub 3 and 4 models support PPPoA for ADSL and PPPoE for VDSL2, in conjunction with an Openreach-provided VDSL2 modem to support BT's FTTC network (BT Infinity). Version 5 of the Home Hub, released in August 2013, includes a VDSL2 modem for fibre-optic connections. New firmware is pushed out to Home Hubs connected to the Internet automatically by BT.
The Home Hub 5 was followed on 20 June 2016 by the Smart Hub, a further development of the Home Hub, internally referred to as "Home Hub 6". It has more WiFi antennas than its predecessor. It supports Wave 2 802.11ac WiFi, found on review to be 50% faster than non-Wave 2. The Smart Hub was subsequently replaced with the Smart Hub 2 (Home Hub 6DX).
History
Prior to release of the Home Hub (2004–2005), BT offered a product based on the 2Wire 1800HG, and manufactured by 2Wire. This was described as the "BT Wireless Hub 1800HG", or in some documentation as the "BT Wireless Home Hub 1800". This provided one USB connection, four Ethernet ports and Wi-Fi 802.11b or 802.11g wireless connection. A total of ten devices in any combination of these was supported.
The Home Hub 3B was manufactured by Huawei and also supports ADSL2+. The Home Hub 3B is powered by a highly integrated Broadcom BCM6361 System-on-a-chip (SoC). The BCM6361 has a 400 MHz dual MIPS32 core processor as well as an integrated DSL Analog Front End (AFE) and line driver, gigabit Ethernet switch controller and 802.11 Wi-Fi transceiver.
Features
The BT Home Hub 2.0 was a combined wireless router and phone. It supports the 802.11b/g/n wireless networking standards, and the WEP and WPA security protocols. It supports many of BT's services such as BT Fusion, BT Vision and BT Broadband Anywhere. It can also be used as a VOIP phone through BT Broadband Talk.
The BT Home Hub 3 incorporated WPS functionality, seen on other routers, which enables the user to connect to their encrypted network by the use of a "one touch" button, and also includes "smart wireless technology", which automatically chooses the wireless channel to give the strongest possible wireless signal. WPS has since been (temporarily) disabled by firmware updates due to security issues with the standard.
The BT Home Hub supports port forwarding.
The BT Home Hub versions 3, 4 and 5 may be used for access to files stored on an attached USB stick - USB 2.0 is supported. The server by default has the address File://192.168.1.254 and is available to the entire network.
The BT Smart Hub (initially branded Home Hub 6) upgraded the wifi provision to Wave 2 of the 802.11ac specification, and increased the number of antennae for improved MIMO.
The BT Ultra Smart Hub appeared visually similar to the Smart Hub, but featured a G.fast-capable modem and included a BS6312 socket which subscribers to BT Digital Voice can use to attach an analogue telephone. Digital Voice launched in January 2020 as the replacement for the analogue voice service, which was planned to be turned off by 2025.
The BT Smart Hub 2 provided the same technical features as the Ultra Smart Hub in a redesigned body, as well as supporting BT's "Complete Wifi" mesh product.
Hub Phone
The BT Hub Phone is an optional handset that can be bought to work in conjunction with the BT Home Hub 1, 1.5, and 2.0. It calls using the BT Broadband Talk service, and may sit in a dock in the front of the Home Hub or be used on its own stand. It uses Hi-def sound technology when calls between Hub Phones are made. A DECT telephone may be used instead.
With each BT Home Hub released up to 2.0, a new phone model was made to accompany it:
BT Home Hub 1.0: was supplied with the BT Hub Phone 1010
BT Home Hub 1.5: was supplied with the BT Hub Phone 1020 (The only difference between the 1010 and the 1020 was the lack of the colour screen and supporting features on the 1020.)
BT Home Hub 2.0: was supplied with the BT Hub Phone 2.1
The BT Home Hub 3 and 4 do not work with the BT Broadband Talk service or DECT telephones. After 29 January 2011, BT Broadband Talk was no longer provided as part of BT's broadband packages.
The phones are only partially compatible with newer or older versions of the hub, able to make and receive calls, but with the loss of features including call waiting, call transfer, internal calls, phonebook, call lists and Hi-def sound.
Design
The following versions of the BT Home/Smart Hub have been released:
Version 0.5: grey (no Hub Phone was available, not technically a Home Hub but rather BT Fusion Hub)
Version 1.0: white (matching Hub Phone was available)
Version 1.5: white or black (matching Hub Phone was available)
Version 2.0: black (matching black Hub Phone was available)
Version 3.0: black (Hub Phones and DECT phones are not compatible) released on 29 January 2011.
Version 4.0: black (Hub Phones and DECT phones are not compatible) released on 10 May 2013.
Version 5.0 (HH5A/5B): black, released in mid-October 2013
Smart Hub (Home Hub 6A /6B), mid-2016
Smart Hub 2 (Home Hub 6DX), early 2019
There were two different versions of the BT Home Hub 2.0: v2.0A (2.0 Type A), manufactured by Thomson, and v2.0B (2.0 Type B), manufactured by Gigaset Communications (now Sagem Communications, Sagem having acquired Gigaset's broadband business in July 2009). Whilst the looks and functionality appear to be identical, the Home Hub 2.0A has been plagued with problems relating to poorly tested firmware upgrades which, amongst other problems, cause the Home Hub 2.0A to restart when uploading files using the wireless connection.
There are also two versions of the BT Home Hub 3: v3A (by Gigaset, now Sagem) and v3B (Huawei).
The BT Home Hub can only be used with the BT Total Broadband package without modification; the 1.0, 1.5, 2A, 2B and 3A versions can be unlocked. The BT Home Hub configuration software is compatible with both Macintosh and Windows operating systems, although use of this is optional and computers without the BT software will still be able to connect to the Hub and browse the Internet normally.
The 4th generation of the BT Home Hub was released on 10 May 2013. It was built with smart dual-band technology, making it unique amongst routers provided by UK-based ISPs. The Home Hub 4 was supplied free of charge to new customers, with a £35 charge to existing customers. It has intelligent power management technology which monitors the hub functions and puts them individually into power-save mode when not in use. There are two variants of the Hub 4, Type A and Type B.
The 5th generation Home Hub was released in mid-October 2013 and is an upgrade to the Home Hub 4, with Gigabit Ethernet connections, 802.11ac Wi-Fi (Wave 1) and an integrated VDSL modem. Customers upgrading from ADSL Broadband pay only a delivery charge; existing Broadband customers pay a £45 upgrade charge. There are two variants of the Hub 5: Type A with a Lantiq chipset (ECI), and Type B with Broadcom. It is possible to replace the firmware of the Hub 5 Type A (and the identical 'Plusnet Hub One' and 'BT Business Hub 5' Type A) with OpenWrt, unlocking it from BT and providing the features of OpenWrt. In April 2018, scripts for modifying the stock firmware of a BT Hub 5 Type A to enable SSH access were published in a GitHub repository; this enables access to the native OpenRG command-line interface.
Models and technical specifications
The BT Home Hub package includes:
Broadband cable (RJ11 to RJ11)
Ethernet cable (RJ45 to RJ45) (Cat5e)
Power adapter
2× ADSL microfilters
Phone to RJ11 converter
User guide and CD
A USB lead was provided with the Home Hub 1 only.
Reported issues
The security of older BT Home Hubs has been questioned.
In May 2017, it was reported that many BT Smart Hub customers were suffering problems with the router constantly rebooting and being unable to maintain a reliable internet connection.
In May 2021, it was reported that the "BT Smart Hub 2 router [was] 'disrupting' home networks".
References
External links
Official BT Home Hub Page
BT Group
Linux-based devices
Broadband
Digital subscriber line
Wireless networking hardware
Hardware routers | BT Smart Hub | [
"Technology"
] | 2,120 | [
"Wireless networking hardware",
"Wireless networking"
] |
9,504,117 | https://en.wikipedia.org/wiki/Charge%20density%20wave | A charge density wave (CDW) is an ordered quantum fluid of electrons in a linear chain compound or layered crystal. The electrons within a CDW form a standing wave pattern and sometimes collectively carry an electric current. The electrons in such a CDW, like those in a superconductor, can flow through a linear chain compound en masse, in a highly correlated fashion. Unlike a superconductor, however, the electric CDW current often flows in a jerky fashion, much like water dripping from a faucet, due to its electrostatic properties. In a CDW, the combined effects of pinning (due to impurities) and electrostatic interactions (due to the net electric charges of any CDW kinks) likely play critical roles in the CDW current's jerky behavior, as discussed in sections 4 & 5 below.
Most CDW's in metallic crystals form due to the wave-like nature of electrons – a manifestation of quantum mechanical wave–particle duality – causing the electronic charge density to become spatially modulated, i.e., to form periodic "bumps" in charge. This standing wave affects each electronic wave function, and is created by combining electron states, or wavefunctions, of opposite momenta. The effect is somewhat analogous to the standing wave in a guitar string, which can be viewed as the combination of two interfering, traveling waves moving in opposite directions (see interference (wave propagation)).
The CDW in electronic charge is accompanied by a periodic distortion – essentially a superlattice – of the atomic lattice. The metallic crystals look like thin shiny ribbons (e.g., quasi-1-D NbSe3 crystals) or shiny flat sheets (e.g., quasi-2-D, 1T-TaS2 crystals). The CDW's existence was first predicted in the 1930s by Rudolf Peierls. He argued that a 1-D metal would be unstable to the formation of energy gaps at the Fermi wavevectors ±kF, which reduce the energies of the filled electronic states at ±kF as compared to their original Fermi energy EF. The temperature below which such gaps form is known as the Peierls transition temperature, TP.
The electron spins are spatially modulated to form a standing spin wave in a spin density wave (SDW). A SDW can be viewed as two CDWs for the spin-up and spin-down subbands, whose charge modulations are 180° out-of-phase.
Fröhlich model of superconductivity
In 1954, Herbert Fröhlich proposed a microscopic theory, in which energy gaps at ±kF would form below a transition temperature as a result of the interaction between the electrons and phonons of wavevector Q=2kF. Conduction at high temperatures is metallic in a quasi-1-D conductor, whose Fermi surface consists of fairly flat sheets perpendicular to the chain direction at ±kF. The electrons near the Fermi surface couple strongly with the phonons of 'nesting' wave number Q = 2kF. The 2kF mode thus becomes softened as a result of the electron-phonon interaction. The 2kF phonon mode frequency decreases with decreasing temperature, and finally goes to zero at the Peierls transition temperature. Since phonons are bosons, this mode becomes macroscopically occupied at lower temperatures, and is manifested by a static periodic lattice distortion. At the same time, an electronic CDW forms, and the Peierls gap opens up at ±kF. Below the Peierls transition temperature, a complete Peierls gap leads to thermally activated behavior in the conductivity due to normal uncondensed electrons.
However, a CDW whose wavelength is incommensurate with the underlying atomic lattice, i.e., where the CDW wavelength is not an integer multiple of the lattice constant, would have no preferred position, or phase φ, in its charge modulation ρ0 + ρ1cos[2kFx – φ]. Fröhlich thus proposed that the CDW could move and, moreover, that the Peierls gaps would be displaced in momentum space along with the entire Fermi sea, leading to an electric current proportional to dφ/dt. However, as discussed in subsequent sections, even an incommensurate CDW cannot move freely, but is pinned by impurities. Moreover, interaction with normal carriers leads to dissipative transport, unlike a superconductor.
CDWs in quasi-2-D layered materials
Several quasi-2-D systems, including layered transition metal dichalcogenides, undergo Peierls transitions to form quasi-2-D CDWs. These result from multiple nesting wavevectors coupling different flat regions of the Fermi surface. The charge modulation can either form a honeycomb lattice with hexagonal symmetry or a checkerboard pattern. A concomitant periodic lattice displacement accompanies the CDW and has been directly observed in 1T-TaS2 using cryogenic electron microscopy. In 2012, evidence for competing, incipient CDW phases was reported for layered cuprate high-temperature superconductors such as YBCO.
CDW transport in linear chain compounds
Early studies of quasi-1-D conductors were motivated by a proposal, in 1964, that certain types of polymer chain compounds could exhibit superconductivity with a high critical temperature Tc. The theory was based on the idea that pairing of electrons in the BCS theory of superconductivity could be mediated by interactions of conducting electrons in one chain with nonconducting electrons in some side chains. (By contrast, electron pairing is mediated by phonons, or vibrating ions, in the BCS theory of conventional superconductors.) Since light electrons, instead of heavy ions, would lead to the formation of Cooper pairs, their characteristic frequency and, hence, energy scale and Tc would be enhanced. Organic materials, such as TTF-TCNQ were measured and studied theoretically in the 1970s. These materials were found to undergo a metal-insulator, rather than superconducting, transition. It was eventually established that such experiments represented the first observations of the Peierls transition.
The first evidence for CDW transport in inorganic linear chain compounds, such as transition metal trichalcogenides, was reported in 1976 by Monceau et al., who observed enhanced electrical conduction at increased electric fields in NbSe3. The nonlinear contribution to the electrical conductivity σ vs. field E was fit to a Landau-Zener tunneling characteristic ~ exp[-E0/E] (see Landau–Zener formula), but it was soon realized that the characteristic Zener field E0 was far too small to represent Zener tunneling of normal electrons across the Peierls gap. Subsequent experiments showed a sharp threshold electric field, as well as peaks in the noise spectrum (narrow band noise) whose fundamental frequency scales with the CDW current. These and other experiments confirm that the CDW collectively carries an electric current in a jerky fashion above the threshold field.
Classical models of CDW depinning
Linear chain compounds exhibiting CDW transport have CDW wavelengths λcdw = π/kF incommensurate with (i.e., not an integer multiple of) the lattice constant. In such materials, pinning is due to impurities that break the translational symmetry of the CDW with respect to φ. The simplest model treats the pinning as a sine-Gordon potential of the form u(φ) = u0[1 – cosφ], while the electric field tilts the periodic pinning potential until the phase can slide over the barrier above the classical depinning field. Known as the overdamped oscillator model, since it also models the damped CDW response to oscillatory (AC) electric fields, this picture accounts for the scaling of the narrow-band noise with CDW current above threshold.
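The depinning threshold in this overdamped oscillator picture can be illustrated numerically. The sketch below is a minimal, hypothetical example (not drawn from the literature cited above): it integrates a dimensionless form of the tilted washboard equation, dφ/dt = e − sin φ, where e is the electric field normalised to the classical depinning field. Below e = 1 the phase settles into a pinned minimum; above it the phase slides, and its time-averaged velocity plays the role of the CDW current (and of the narrow-band-noise frequency).

```python
import numpy as np

def washboard_velocity(e, t_max=2000.0, dt=0.01):
    """Time-averaged phase velocity of the dimensionless overdamped washboard
    model d(phi)/dt = e - sin(phi), integrated with a simple Euler scheme."""
    phi = 0.0
    for _ in range(int(t_max / dt)):
        phi += dt * (e - np.sin(phi))
    return phi / t_max  # average d(phi)/dt, the analogue of the CDW current

if __name__ == "__main__":
    for e in (0.5, 0.9, 1.01, 1.5, 2.0):
        print(f"normalised field e = {e:4.2f} -> mean d(phi)/dt = {washboard_velocity(e):6.3f}")
    # Fields below e = 1 give essentially zero average velocity (pinned CDW);
    # above threshold the average velocity approaches sqrt(e**2 - 1).
```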
However, since impurities are randomly distributed throughout the crystal, a more realistic picture must allow for variations in optimum CDW phase φ with position – essentially a modified sine-Gordon picture with a disordered washboard potential. This is done in the Fukuyama-Lee-Rice (FLR) model, in which the CDW minimizes its total energy by optimizing both the elastic strain energy due to spatial gradients in φ and the pinning energy. Two limits that emerge from FLR include weak pinning, typically from isoelectronic impurities, where the optimum phase is spread over many impurities and the depinning field scales as ni2 (ni being the impurity concentration) and strong pinning, where each impurity is strong enough to pin the CDW phase and the depinning field scales linearly with ni. Variations of this theme include numerical simulations that incorporate random distributions of impurities (random pinning model).
Quantum models of CDW transport
Early quantum models included a soliton pair creation model by Maki and a proposal by John Bardeen that condensed CDW electrons tunnel coherently through a tiny pinning gap, fixed at ±kF unlike the Peierls gap. Maki's theory lacked a sharp threshold field and Bardeen only gave a phenomenological interpretation of the threshold field. However, a 1985 paper by Krive and Rozhavsky pointed out that nucleated solitons and antisolitons of charge ±q generate an internal electric field E* proportional to q/ε. The electrostatic energy (1/2)ε[E ± E*]2 prevents soliton tunneling for applied fields E less than a threshold ET = E*/2 without violating energy conservation. Although this Coulomb blockade threshold can be much smaller than the classical depinning field, it shows the same scaling with impurity concentration since the CDW's polarizability and dielectric response ε vary inversely with pinning strength.
Building on this picture, as well as a 2000 article on time-correlated soliton tunneling, a more recent quantum model proposes Josephson-like coupling (see Josephson effect) between complex order parameters associated with nucleated droplets of charged soliton dislocations on many parallel chains. Following Richard Feynman in The Feynman Lectures on Physics, Vol. III, Ch. 21, their time-evolution is described using the Schrödinger equation as an emergent classical equation. The narrow-band noise and related phenomena result from the periodic buildup of electrostatic charging energy and thus do not depend on the detailed shape of the washboard pinning potential. Both a soliton pair-creation threshold and a higher classical depinning field emerge from the model, which views the CDW as a sticky quantum fluid or deformable quantum solid with dislocations, a concept discussed by Philip Warren Anderson.
Aharonov–Bohm quantum interference effects
The first evidence for phenomena related to the Aharonov–Bohm effect in CDWs was reported in a 1997 paper, which described experiments showing oscillations of period h/2e in CDW (not normal electron) conductance versus magnetic flux through columnar defects in NbSe3. Later experiments, including some reported in 2012, show oscillations in CDW current versus magnetic flux, of dominant period h/2e, through TaS3 rings up to 85 μm in circumference above 77 K. This behavior is similar to that of the superconducting quantum interference device (see SQUID), lending credence to the idea that CDW electron transport is fundamentally quantum in nature (see quantum mechanics).
See also
Spin density wave
High-temperature superconductivity
References
Cited references
General references
Grüner, George. Density Waves in Solids. Addison-Wesley, 1994.
Review of experiments as of 2013 by Pierre Monceau. Electronic crystals: an experimental overview.
Superconductivity
Phases of matter
Condensed matter physics | Charge density wave | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,503 | [
"Electrical resistance and conductance",
"Physical quantities",
"Superconductivity",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
9,504,249 | https://en.wikipedia.org/wiki/Marvell%20Software%20Solutions%20Israel | Marvell Software Solutions Israel is an Israeli technology company headquartered in Tel Aviv. It is an wholly owned subsidiary of Marvell Technology, that specializes in local area network (LAN) technologies.
History
The company was founded in 1998 as a spin-off from RND, which was founded by brothers Yehuda and Zohar Zisapel. RND was also the product of a spin-off, from the Zisapel brothers' RAD Group. Eventually, RND was split into two companies, Radware and RADLAN.
In February 2003, the integrated circuit (IC) designer Marvell Technology Group closed the deal to acquire RADLAN Computer Communications for $49.7 million in cash and shares.
California-based Marvell said it would incorporate its mixed-signal ICs with RADLAN's networking infrastructure drivers, interfaces and software modules to make improved networking communications products like routers. Currently, Marvell's product lineup includes read channels (which convert analog data from a magnetic disk into digital data for computing), preamplifiers, and Ethernet switch controllers and transceivers.
In May 2007 Radlan was officially renamed Marvell Software Solutions Israel (MSSI), to complete the integration into Marvell.
The company is located in the Petah Tikva technology park, Ezorim.
Yuval Cohen replaced Jacob Zankel as chief executive in late 2006.
Technology
RADLAN's core technology, Open and Portable Embedded Networking System (OpENS), provided IP-routed core software coupled with a customizable management application, development environment and testing tools. RADLAN's product lines were divided into three areas of development: Intelligent Intranet Switching; Intranet Accelerator Engines; and Intelligent Network Services.
See also
Economy of Israel
References
External links
Marvell's Official Web-Site
Technology companies of Israel
Telecommunications equipment vendors
Computer hardware companies
Networking hardware companies
Software companies of Israel | Marvell Software Solutions Israel | [
"Technology"
] | 391 | [
"Computer hardware companies",
"Computers"
] |
9,504,656 | https://en.wikipedia.org/wiki/One%20Thousand%20Children | The One Thousand Children (OTC) is a designation, created in 2000, which is used to refer to the approximately 1,400 Jewish children who were rescued from Nazi Germany and other Nazi-occupied or threatened European countries, and who were taken directly to the United States during the period 1934–1945. The phrase "One Thousand Children" only refers to those children who came unaccompanied and left their parents behind back in Europe. In nearly all cases, their parents were not able to escape with their children, because they could not get the necessary visas among other reasons. Later, nearly all these parents were murdered by the Nazis.
The United States Holocaust Memorial Museum (USHMM), in its online "Holocaust Encyclopedia," in the article on "Immigration of Refugee Children to the United States," recognizes this official name: the "One Thousand Children," for this group of children.
The archives of the "One Thousand Children," which contain much documentary material, including audio and video of both the One Thousand Children 2002 Conference, and individual interviews of OTC children, as well as many other original materials, and which all together are the fundamental reference-source, are held by the YIVO Institute.
The OTC children were rescued by both American and European organizations, as well as by individuals.
Originally only about one thousand such children had been identified as OTC children — hence the name "The One Thousand Children". By 2017 about 1,400 have been identified.
The One Thousand Children, Inc. (OTC, Inc.) was an organization created for further welfare of the OTC children.
Definition and early history
Some 1.5 million children, nearly all of them Jewish, were deliberately murdered by the Nazis in the Holocaust. (This includes those who died of starvation, or illness due to inhumane conditions in the ghettoes.)
A relatively few Jewish children were saved by being hidden by courageous Gentiles in various ways, in or close to their Nazi-occupied hometown (see Hidden Children).
Another relatively few Jewish children were saved by moving to non-Nazi-occupied lands. Naturally this required the aid of adults - these were saved by the efforts of programs, groups, individuals, or actual parents. In western Europe these would include the Kindertransport program which included the individual efforts of Sir Nicholas Winton; and the work of the French Jewish organization Œuvre de Secours aux Enfants (OSE). Most of the programs that worked specifically to save children had the children remain within Western Europe.
Other well organized programs prepared and sent children to Palestine, for example Youth Aliyah, Youth village and Sh'erit ha-Pletah.
In contrast, in the One Thousand Children "program," approximately 1,400 children, nearly all Jewish, were successfully rescued and brought across the ocean to the United States.
In general, they were brought in quiet operations designed to avoid negative attention from isolationist and other antisemitic forces. Originally 1,177 such children had been identified as OTC — hence the name the "One Thousand Children" (OTC).
These OTC children:
either came from Europe directly to the United States during the period 1934 to 1945;
were of age up to sixteen (the cut-off age, before they were considered adults). The youngest was fourteen months old;
arrived unaccompanied, leaving their parents behind; and
were usually placed with foster families, schools and other facilities across the U.S. However, some came under individual arrangements, with a variety of final placements.
The OTC history is divided into four periods:
the first: 1934, until Kristallnacht on November 9/10, 1938 – during which there were very few OTC;
second: starting with Kristallnacht November 9/10, 1938, which had strongly alerted the American public to the oppression of the Jews in Nazi Germany, until the outbreak of the European War on September 1, 1939;
third: September 1, 1939 until Pearl Harbor, December 7, 1941, the period during which Europe was at war but the United States was officially neutral, before America joined the war. This was a period when travel from all of Europe to the neutral United States was still permitted, but only if one could obtain the required travel documents; and
fourth: December 7, 1941 until Victory in Europe May 8, 1945. During this period America and Germany were at war, so that legal travel from the Nazi-occupied lands to America was not available.
The first small group of six children arrived at New York City in November 1934. This was followed by further small groups, totaling about 100 children annually in the early years of operation; the children were taken to foster homes arranged through appeals to congregations and other organizations' members.
Most of the children came through programs run by private refugee agencies such as the German Jewish Children's Aid (GJCA), the Hebrew Immigrant Aid Society (HIAS), the American Jewish Joint Distribution Committee (colloquially known as "the Joint"), HICEM, and the Society of Friends (the Quakers). Many of these efforts were combined to form the U.S. Committee for the Care of European Children (USCOM), which was registered with the US government and later became part of the National War Fund. Fundraising efforts were assisted by the American Jewish Committee, the American Jewish Congress, and the National Council of Jewish Women.
For instance, many of the OTC were initially gathered together, supported, taken care of, and educated by the French Oeuvre de Secours aux Enfants (OSE), sometimes for many months in the OSE "chateaux" (these were typically very large houses and grounds). They stayed with OSE until OSE was able to pass them on to "the Joint" or the Quakers, which then took them to the United States. Under the leadership of Andree Salomon, OSE did manage to gather together about 350 such children in three large groups, who travelled to America with the aid of the organizations mentioned. Many of these children came from the Gurs internment camp.
Other OTC came under private arrangements and sponsorship, typically made by the parent(s) with a family relative or friend. Such children would live with their sponsor, or sometimes live in a boarding school in close contact with their sponsor.
Before 1938, only small groups were brought into the country by such organizations, because of concern for anti-semitism and social hostility to allowing foreigners to enter the U.S. during the Depression. The sponsoring organizations wanted to avoid drawing undue attention to the children. Furthermore, their immigration was limited by the U.S. immigration quota system for their countries of origin.
The demand on these organizations increased markedly after Kristallnacht on November 9/10, 1938 convinced more European parents that the destruction of Jews was an element of the Nazi agenda.
In the later period of 1941–1942, when news of Nazi atrocities was more widely circulated, larger groups of OTC were organized and arrived in the U.S. A few of the OTC came under the British Children's Overseas Reception Board (CORB) program, as well as the "U.S. Committee for the Care of European Children" (USCOM).
In the OTC programs under the Hebrew Immigrant Aid Society (now nearly always contracted to HIAS), German Jewish Children's Aid Society, (GJCA), the Quakers, etc., foster families in the U.S. agreed to care for the children until age twenty-one, see that they were educated, and provided a guarantee that they would not become public charges. Most of these children were assigned a social worker from a local social service agency to oversee the child's resettlement process. Jewish children were generally placed in Jewish homes. These children, and their sponsors, expected that they would be reunited with their own families at the end of the war. Most of the children lost one or both parents, and most of their extended families, by the time World War II had ended.
Where the Children Came From, And Some of Their Journeys
Most of the OTC children came from Germany, Austria, and Czechoslovakia. Some came from France, Belgium, Holland and Luxembourg. Only very few came directly from Eastern Europe. However, some families of OTC had previously made it from Eastern to Western Europe, and the OTC child then fled from Western Europe. (For instance, this was the case for Wulf Wolodia Grajonca, who later became the "rock-and-roll" impresario Bill Graham.)
Before the war, many simply managed to get to Hamburg or another port and sail from there, though this itself was not easy.
After the German blitzkrieg of May 1940 through Belgium, Holland, and Luxembourg, and rapidly into France, many OTC children fled from Occupied France or Vichy France by going south and west to the Spanish border. Then they made the difficult climb over the Pyrenees, usually guided by a passeur guide/smuggler. From Spain, they traveled to neutral Portugal and Lisbon. From there, they sailed to America, often on one of the Portuguese liners Serpa Pinto (also known as the RMS Ebro), or Mouzinho, or Nyassa. This escape route was also taken by many families.
This escape route, through France and over the Pyrenees to Spain, then Portugal and Lisbon, was also followed by those who fled from Belgium, Holland, and Luxembourg.
Some other OTC children managed to get to Casablanca in North Africa, and sailed from there.
It was often the case that whole families made some or all the journey to the port, before the sad parting when the OTC child continued alone. For instance, some French intact families followed a trajectory that led them to one of the French concentration camps such as Gurs. Then OSE was able to extract the child(ren) from that camp, but with the parents still interned. Then OSE would aid the child(ren) over the next OTC stages to their final transport across the Atlantic. In the later stages of this journey, often HIAS, or the "Joint" would also assist the OTC children.
Remarkably, a similar small group of about 6-8 unaccompanied Jewish children fled to the United States from Venezuela. Their parents, in the small Jewish community of Maracaibo in Venezuela, were well aware of Hitler's possible global threat, which included German submarines off the Venezuelan coast.
The "OTC" Children
For many of the OTC children, the period before they reached America was very difficult. Before World War II, most were simply assembled by rescue agencies directly from their home towns in Germany and Austria, and then easily escorted to America. But after the war started, nearly all of them went through extreme hardships and dangers before they boarded ship for the United States. Some did travel to the port with parents, but many traveled alone, at least for part of their flight. Some were smuggled over the Pyrenees (usually with their parents). Some were incarcerated for a time in concentration camps such as Gurs internment camp in southern France, while some spent time in a French "château" (large mansion) run by the Oeuvre de Secours aux Enfants or OSE. It was usually only late in a journey that a Rescue Agency would start positively escorting the children.
Some of the OTC children came by individual arrangements made by their family, in which the child would be sent into the care of a relative in America. In America, they would either live with that family, or perhaps be placed in a boarding school.
Many OTC children made notable contributions to American society. Among them are:
One OTC (and Kindertransport) child, Jack Steinberger, became a Nobel Laureate in physics. His experiment, done with two others, greatly clarified the understanding of fundamental elementary particle physics at the time.
Another OTC child, Ambassador Richard Schifter, during World War II and shortly afterwards, was one of the Ritchie Boys. The Ritchie Boys were a unit of the U.S. Army, who were chosen because of their excellent language skills in German, French, etc., and were trained in military intelligence at Fort Ritchie. They then operated primarily in interrogation on the battlefield during World War II. There were about 20,000, of whom about 2,000 were German or Austrian refugees to America. Several OTC children became Ritchie Boys, or served in the Armed Forces. After his Ritchie Boy stint, Ambassador Schifter had a very significant diplomatic and legal career, and was the U.S. Ambassador for Human Relations at the United Nations. As a child in Austria, his father told him that no Jew could become an Ambassador (in Austria). Schifter did become an American Ambassador to the United Nations, but his father had been already murdered by the Nazis.
Another child Wulf Wolodia Grajonca renamed himself Bill Graham and became prominent in the 1960s as the concert promoter and music venue operator in the rock music and psychedelic rock scene in San Francisco. Bill Graham was the promoter for the musical groups Grateful Dead and Jefferson Airplane. He operated the Fillmore East, Fillmore West, The Fillmore and Winterland Ballroom rock music venues, and organized the large Summer Jam at Watkins Glen and US Festival events before his death in a helicopter crash in 1991. He was posthumously inducted into the Rock and Roll Hall of Fame in 1992.
Henri Parens became a child psychiatrist, and wrote several professional books that presented methods he had developed to help children who had been traumatized, such as OTC children. Naturally he made use of his own OTC experience at ages 11–12, and subsequently. In his autobiographical book "Renewal of Life: Healing from the Holocaust", Parens importantly explicitly presents his feelings and emotions during his OTC experiences and his later slow "healing." (Most autobiographies are purely factual.)
Herbert Freudenberger, a psychologist and author of the book "Burn-out: The High Cost of High Achievement" (1980)
Arthur Hans Weiss, one of the Ritchie Boys. As a United States military counter intelligence figure, who found Adolf Hitler's last will and political testament in autumn 1945. He later became a lawyer.
Guy Stern (born Günther Stern) was one of the Ritchie Boys, like Ambassador Schifter (above). He then had a distinguished career as a university professor and Holocaust Museum director.
Harry Eckstein (born Horst Eckstein) became a political science professor at Princeton University and the University of California, Irvine.
Emotional and Practical Effects
At the time, the OTC children went through much emotional stress and trauma and practical difficulties in Europe before their arrival in America, also during their initial period in America, and even in later war-time years. At war's end, most OTC children would learn that their parents had been murdered by the Nazis. At that time, they had to adjust to, and create for themselves, another new set of life experiences. As is now well-understood, such stress and trauma will continue to create conscious and unconscious trauma, and their repercussions, for many years.
These OTC trauma, both at the time and afterwards, are closely similar to those met by other Child Holocaust Survivors who survived in Europe. These OTC trauma were caused by the Holocaust. The OTC child's situation in the United States was very different than that of a child who remained in Europe, but it was equally damaging to his childhood development.
Very importantly, these trauma were different and more extreme than those of an adult Holocaust Survivor. An adult had developed coping methods to survive through life. Children are not yet adults. Children still have to learn a full set of mature coping skills - that is why their parents support, teach, and protect their children. Society also accepts an important responsibility in this regard.
Like other Child Holocaust Survivors, the actual experiences and psychological trauma received by the OTC Child Survivors differed fundamentally, crucially and negatively from that of an adult Holocaust Survivor. An adult generally would have developed a sense of self and ego, which would provide him or her with a way of attempting to deal with the practical and emotional trauma. Usually a child has not yet developed a strong ego nor sense of self. Also, a child depends on his or her parents for support and protection, and especially love. Yet for these Child Holocaust Survivors, they lacked their parents and their parents' love; and they only had weakly developed egos and sense of self, at the very time that the external and psychological trauma were most extreme. (Some exceptions may have been older teenage OTCers, such as those who joined the Ritchie Boys).
The first very significant trauma occurred at the moment of parting from their parent(s), whether when boarding ship or at an earlier time and event. For the younger OTC children, regardless of whatever reassuring words the parent(s) might say, the child would feel abandoned by their parents. Even older children, who could understand the reality "intellectually," none-the-less would feel as having been abandoned.
For the very young children, even though in later years they would have no actual memory of the parting and of these feelings, and also no memory of (part of) the later years, yet the trauma of parting and of continued separation will still have current and continuing subconscious impact.
Most OTC children were placed in "foster-families," some of which were loving and some not; or sometimes they were placed in various types of institutions, some caring, and some not. But in most cases these could not replace the love and support from his own family; and the new relationship would take time to develop. In some fortunate instances, the OTC child would grow to totally blend into the foster family, and learn to love them as if they were his own parents and siblings.
The older OTC children fully knew the dangers their left-behind parents faced from the Nazi threat. And then, at the end of the war, nearly always the OTC child would find out, sooner or later, that his or her parents had been murdered by the Nazis; and there would also have been the prior stress of waiting and hoping, before that final factual discovery.
Not surprisingly, some of the OTC children became very angry with their situation. Some would act out, sometimes so much so that their "foster-parents" decided they had to return the OTC child to the original Organization, such as HIAS, to be placed elsewhere – but then the cycle might repeat.
As an example, one OTC child, Phyllis Helene Mattson, also acted out at the "Orphanage" Institution, where twice she had to be placed – she ultimately stayed with four "foster-families," and was in the "Orphanage" twice. She ran away from two of her "foster-families," and one "foster-family" sent her back to the "Orphanage." As she herself writes, it is somewhat remarkable that she managed to become a responsible mature adult. She recounts all this in her book "War Orphan in San Francisco".
At a more practical level, nearly every OTC child arrived in America not being able to speak English, and so he or she was held behind in school grade placement (though most rapidly learned English, and then advanced rapidly into his proper school-grade). He or she had to adapt to a new culture and way of behaving.
Emotional Trauma Continued After The War Was Over
Like other Holocaust child survivors, after the war the OTC child had traumatic life adjustments to make. The child's parents had nearly certainly been murdered by the Nazis while he or she was still young; the child either learned this soon after the war ended, or, where no information was available, was left to hope that they were still alive and to fear that they were dead.
Yet, even after the war, they most emphatically still needed their parents. At best, he found a good "substitute" in what now would become permanent "foster parents." This "switch" was naturally also traumatic.
He, too, had to create a sense of self-identity and autonomy as he moved towards adulthood, from his under-developed posture as an OTC.
Descriptions: Holocaust Child Survivors, Hidden Children, OTC, Kindertransport
Hidden Children of the Holocaust; OTC as Holocaust Child Survivors
Hidden Children of the Holocaust are those children who were hidden in some way during the Holocaust from the Nazis in occupied Europe, hidden so as to avoid capture by the Nazis.
One sub-group even of Hidden Children are children who, during the Holocaust, were placed into the care of a "foster-family," usually Catholic, and raised as-if one of the family.
These Hidden Children were saved from murder by the Nazis. Nonetheless, all of them, including those in "foster-families," suffered great trauma at the time, and also later both consciously and subconsciously.
OTC children went through very significant trauma, both in terms of the psychological and the practical – caused by the Holocaust. These trauma in large part correspond to many aspects of those trauma developed by the "foster-family" sub-group of the Hidden Children – Child Survivors who had been raised as-if one of a (generally Catholic) family. The OTC trauma similarly was caused by the Holocaust. For this reason alone, we see that OTC are Child Survivors of the Holocaust.
"American Kindertransport"
The OTC are the "American Kindertransport." The Kindertransport Program is discussed in more detail in a later section. Here, we only need to know that, during the period between Kristallnacht and the start of World War II, the Kindertransport Program brought about 10,000 Jewish children to England from Germany, German-annexed Austria, and German-occupied Czechoslovakia - but the children (kinder) had been forced to flee by themselves, and forced to leave their parents behind. (Nearly all the children's parents were later murdered by the Nazis.)
In November 2018, the German Government announced a "Kindertransport Fund" that would pay each surviving Kindertransport "child" a token symbolic amount of 2,500 Euros (about $2,850 at that time). This was intended to be in recognition of the especial trauma these Kindertransport had suffered as children during their flight from Hitler, in that they had had to flee unaccompanied and were forced to leave their parents behind. The German Government created this Fund precisely to recognize that the Kinder were Child Survivors of the Holocaust.
The OTC, as children, suffered exactly the same trauma as did the kinder. This is the second reason that we see that the OTC are Child Survivors of the Holocaust.
Research and discovery
The fact that some unaccompanied children fled from Europe directly to the U.S.A. was first researched by Judith Baumel-Schwartz in a doctoral thesis and related book.
However, only in 2000 did Iris Posner realize, and then act on the realization, that these children should be considered a significant distinct group of Holocaust Survivors, one which should be discussed publicly.
Specifically, in 2000, Iris Posner had learned of the British Government-assisted Kindertransport effort, and was intrigued by the question of whether there was a similar actual official American Government effort. Posner wrote letters to newspapers asking any such children to contact her. Posner and Leonore Moskowitz also researched ship manifests and other documents. In this way, Posner "created" the story of this group of unaccompanied children to America ("The One Thousand Children," as she later named them). At that time, they managed to identify slightly over 1,000, hence the name. Posner and Moskowitz managed to locate about 500 of these who were still alive, and invited each of them to the 2002 OTC Conference.
Soon after, in 2001, Posner and Moskowitz jointly founded the non-profit organization The One Thousand Children, Inc.
Posner and Moskowitz, under the aegis of their organization "The One Thousand Children, Inc" organized a three-day International OTC Conference and Reunion in Chicago in 2002. Approximately 200 attendees had the opportunity to listen and interact with over 50 speakers drawn from OTC children, their children and grandchildren, and "foster" family members and rescuers from rescue organizations.
At the time of the Conference in 2002, they had found the names of about 1,200 OTC'ers. Since that time the known number has increased to about 1,400.
American "Response"
The OTC story is unlike that of the Kindertransport, in which unaccompanied children came from mainland Europe to Great Britain. In contrast to the OTC "program," the British Kindertransport program was officially created by the British Government in very speedy response to Kristallnacht on November 9/10, 1938. Within six days the British Government presented an official bill in Parliament, which was rapidly passed, and which waived all immigration and visa requirements for unaccompanied children, though it left actual arrangements to private relief organizations and individual sponsors.
In contrast, the United States did not change any immigration laws. In 1939, the proposed Wagner–Rogers Bill to admit 20,000 Jewish refugees under the age of 14 to the United States from Nazi Germany, cosponsored by Sen. Robert F. Wagner (D-N.Y.) and Rep. Edith Rogers (R-Mass.), failed to get Congressional approval. Jewish organizations did not challenge this decision. The full story of the failure of the Wagner-Rogers bill shows the power of the isolationist forces at that time, which included "an undercurrent of resentment toward Jews". Even the Ickes plan for settling Jews in Alaska, known as the Slattery Report, did not come to any success.
Furthermore, the State Department had a deliberately obstructionist "Paper Walls" policy in operation to delay or prevent the issuing of any officially permitted visas for all refugees who desired entry to America. This Paper Wall contributed to the low number of refugees. From July 1941 all immigration applications went to a special inter-departmental committee, and under the "relatives rule" special scrutiny was given to any applicant with relatives in German, Italian or Russian territory.
Beginning in July 1943, a new State Department visa application form over four feet long was used, with details required of the refugee and of the two sponsors; and six copies had to be submitted. Applications took about nine months, and were not expedited even in cases of imminent danger. Furthermore, from fall 1943, applications from refugees "not in acute danger" could be refused (e.g. people who had reached Spain, Portugal or North Africa). This created a huge barrier, since many of these children (usually with their parents) had fled there from other parts of Europe, some by being smuggled over the Pyrenees.
The American public also resisted the OTC program, because of social hostility to allowing foreigners to enter the U.S. during the Depression, and generally from isolationist and antisemitic forces.
Some of the Groups of OTC Sailings, and their Rescuers
The "Brith Sholom" Group of 50 OTC, who were rescued in 1938 by Gilbert and Eleanor Kraus. Their detailed story is presented in a later section.
In June 1942, about 50 OTC sailed from Casablanca for New York on board the Serpa Pinto. They had been helped originally by OSE in France. Then the American Friends Service Committee helped them to leave France from Marseilles to Casablanca, under the auspices of the U.S. Committee for the Care of European Children (USCOM) (and see that WIKI section).
3 groups totalling 311 OTC, who sailed from Lisbon, who were helped by OSE and specifically by Andree Salomon.
Iris Posner's contributions to the OTC story
Iris Posner's contributions started with her "discovering" and "creating" the One Thousand Children as a concept.
Posner and Moskowitz then went on to search for information about these OTC children – their names and other OTC information – and then searched for the actual OTC "children."
Posner and Moskowitz then put on the 2002 OTC Conference (see above).
Posner created the OTC story. Posner provided the new "One Thousand Children" group-identity for these children. Posner enabled these children to realize they were "Child Survivors of the Holocaust." Posner caused all Holocaust groups and scholars to recognize this "new" group of Holocaust Child Survivors.
Personal Realization that They were "Child Survivors of the Holocaust"
It was at this 2002 OTC Conference that many of the OTC Children first realized that they had two new identities – both as OTC, and as "Child Survivors of the Holocaust." They realized that they truly were Child Survivors of the Holocaust. For a very emphatic audio-visual statement by an OTC that she indeed was a Child Survivor of the Holocaust, listen to her YouTube testimony "I am a Holocaust Survivor! Hitler wanted to put me on the dung-heap of History! He failed!"
One Thousand Children, Inc (OTC, Inc.)
Iris Posner and Leonore Moskowitz created the non-profit research and education organization One Thousand Children, Inc (OTC, Inc.), whose primary purposes are to maintain a connection between the OTC children, to explore this little-known segment of American history, and to create archival materials and depositories. OTC, Inc's print, photo, and audio-visual archives, and some of its activities have been transferred to the "YIVO Institute for Jewish Research" though OTC, Inc itself has ceased to exist (see next section).
Video Summation of the OTC Experience, and the "Disbanding" of OTC, Inc in Oct 2013
OTC, Inc. formally disbanded in Oct 2013, but its work goes on. The "closing" took place at a two-hour conference at the YIVO Institute for Jewish Research. Testimony by individual OTC'ers make up a significant part of this conference.
British Kindertransport, and Compared with the One Thousand Children Effort
A larger but similar British program, the Kindertransport, is more well-known. That effort brought approximately 10,000 similarly defined mainly Jewish children to the United Kingdom, between November 21, 1938, and September 3, 1939. It had to stop at that date, since that was the beginning of World War II.
The Kindertransport program was created by the British Government which within six days of Kristallnacht presented an Act in the British Parliament. This act waived all visa and immigration requirements for an unspecified number of unaccompanied children. Naturally, the children had to be privately financed and guaranteed, and placed by various British Jewish Organizations. (Some of the "kinder" from Britain subsequently migrated to America, e.g. the Nobel Prize-winning scientists Arno Penzias and Walter Kohn.)
In contrast, the United States Government did nothing to aid any of the OTC children, and did not waive any quota or immigration requirements. The 12-year OTC effort required each OTC child to meet the American immigration requirements.
OTC Archived Documents and Other Media Are Now Mainly at the "YIVO Institute for Jewish Research"
The Organization's archives have been donated to and now reside at the YIVO Institute for Jewish Research in New York City. These primary archives include video-recordings of the complete 2002 OTC Conference as well as partial written transcripts. Many artifacts, including personal diaries written as children or later as adults, are included; as well as data about each individual (identified) child, other information, and photographs. This archive is open to scholars.
The list of the 1177 originally-identified OTC'ers, with names and many other details, is at YIVO. It is also at the United States Holocaust Memorial Museum (In both cases, it is confidential, and is only available to researchers.)
Other documents and artifacts are located at the United States Holocaust Memorial Museum (USHMM), and the National Museum of American Jewish History (NMAJH) in Philadelphia.
Story of the Rescue of 50 Brith Sholom OTC Children
Some of the OTC children were rescued by Jewish organizations such as HIAS. But some were rescued by individuals. For example, one 51-year-old American bachelor, a distant cousin, sponsored and took official responsibility for a six-year-old OTC'er, but the OTC'er had to be placed in a small year-round boarding school.
A remarkable rescue was made by a private wealthy Philadelphia family, Gilbert and Eleanor Kraus. On their own, they rescued 25 boys and 25 girls from Vienna after Kristallnacht, but before the war in Europe started. They had many practical difficulties, including those to obtain the necessary 50 American visas from within the immigration quota system. On arrival, these children were first placed in the summer camp facilities of the fraternal order Brith Sholom, and then were placed into the homes of Philadelphia families.
See also
Kindertransport
Hidden children during the Holocaust
Children in the Holocaust
References
Further reading
Fern Schumer Chapman. "Is It Night or Day?"
Thea Kahn Lindauer. "There Must Be An Ocean Between Us" (iUniverse)
Louis Maier. "In Lieu of Flowers"
Parens, Henri. "Renewal of Life: Healing from the Holocaust." Schreiber Publishing, Rockville MD, 2004. In this important book, Parens explicitly presents his feelings, such as fear, disgust, and anxiety at being alone and only twelve, in his very dangerous OTC situations in France. These feelings led to the trauma, and we need to understand the trauma in order to understand the full OTC experience. (And see Phyllis Helene Mattson, below.) Parens became a psychiatrist and worked to help his patients overcome childhood trauma, drawing on his own trauma.
Phyllis Helene Mattson. "War Orphan in San Francisco" (Stevens Creek Press) . This book presents an example of the possible psychological effects of the OTC experience. The author describes her many behavioral issues and placement transitions that she went through, because of the drastic disruption in her life at age 12. She had become an orphan.
"Forced Journey: The Saga of Werner Berlinger" (2013), by Rosemary Zibart. (hardback), (paperback). Written for teenagers, this tells the fictionized OTC story of "Werner," from Hamburg to America.
External links
The official One Thousand Children web-page: www.onethousandchildren.yivo.org
Jewish emigration from Nazi Germany
International response to the Holocaust
Kindertransport
Rescue of Jews during the Holocaust | One Thousand Children | [
"Biology"
] | 7,211 | [
"Rescue of Jews during the Holocaust",
"Behavior",
"Altruism"
] |
9,504,741 | https://en.wikipedia.org/wiki/Copiotroph | A copiotroph is an organism found in environments rich in nutrients, particularly carbon. They are the opposite to oligotrophs, which survive in much lower carbon concentrations.
Copiotrophic organisms tend to grow in high organic substrate conditions. For example, copiotrophic organisms grow in sewage lagoons. They grow in organic substrate concentrations up to 100x higher than oligotrophs. Because of this preference for high substrate concentrations, copiotrophs are often found in nutrient-rich waters near coastlines or estuaries.
Classification and Identification
Bacterial phyla can be differentiated into copiotrophic or oligotrophic categories, and these categories correspond to and structure the functions of soil bacterial communities.
Interaction with other organisms
The balance between copiotrophic and oligotrophic bacteria depends on the concentration of carbon (C) compounds in the soil. If the soil has large amounts of organic C, it favors the copiotrophic bacteria.
Ecology
Copiotrophic bacteria are a key component in the soil C cycle. They are most important during the period of the year when vegetation is photosynthetically active and exudes large amounts of simple C compounds such as sugars, amino acids, and organic acids. Copiotrophic bacteria are also found in marine environments.
Lifestyle
Copiotrophs have a higher Michaelis-Menten constant than oligotrophs. This constant is directly correlated to environmental substrate preference. In these high resource environments, copiotrophs exhibit a “feast-and-famine” lifestyle. They utilize the available nutrients in the environment rapidly resulting in nutrient depletion which forces them to starve. This is possible through increasing their growth rate with nutrient uptake. However, when nutrients in the environment get depleted, copiotrophs struggle to survive for long periods of time. Copiotrophs do not have the ability to respond to starvation. It is hypothesized that this may be a lost trait. Another possibility is that microbes never evolved to survive these extreme conditions. Oligotrophs can outcompete copiotrophs in low-nutrient environments. This causes low-nutrient conditions to continue for extended periods of time, making it difficult for copiotrophs to sustain life. Copiotrophs are larger than oligotrophs and need more energy, requiring larger concentrations of substrate for survival.
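The feast-and-famine trade-off can be illustrated with a simple Monod-type growth model, the growth-rate analogue of the Michaelis-Menten relation mentioned above. The sketch below is only an illustration: the parameter values are hypothetical assumptions, chosen to show the qualitative pattern in which a copiotroph (high maximum growth rate, high half-saturation constant) outgrows an oligotroph when substrate is abundant, while the oligotroph grows faster once substrate becomes scarce.

```python
def monod_growth_rate(substrate, mu_max, k_s):
    """Monod growth rate mu = mu_max * S / (K_s + S)."""
    return mu_max * substrate / (k_s + substrate)

# Illustrative (hypothetical) parameters, not measured values:
copiotroph = {"mu_max": 1.0, "k_s": 50.0}   # fast growth, poor substrate affinity
oligotroph = {"mu_max": 0.2, "k_s": 0.5}    # slow growth, high substrate affinity

for substrate in (0.1, 1.0, 10.0, 100.0, 1000.0):  # arbitrary concentration units
    mu_c = monod_growth_rate(substrate, **copiotroph)
    mu_o = monod_growth_rate(substrate, **oligotroph)
    winner = "copiotroph" if mu_c > mu_o else "oligotroph"
    print(f"S = {substrate:7.1f}: copiotroph mu = {mu_c:5.3f}, "
          f"oligotroph mu = {mu_o:5.3f} -> faster: {winner}")
```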
Copiotrophs are motile. Copiotrophs can have external organelles such as flagella that extend out of a microbe’s cell to facilitate movement. Copiotrophs are also chemotactic, meaning they can detect nutrients in the environment. These traits help the microbes travel quickly to nearby food sources. Chemotaxis also enables the organism to travel away from a restricting compound. There are multiple strategies for chemotaxis in these organisms. One is the “run and tumble” strategy, in which the organism randomly picks a direction to move in; if it senses that the concentration gradient is decreasing, it stops and chooses another random direction to travel in. Another strategy is the “run and reverse”, in which the organism runs towards a nutrient; if it notices the gradient decreasing, it moves back to where the gradient is larger and heads in another direction from this new position.
Through their motility and chemotaxis, copiotrophic microbes respond quickly to nutrients in their environment. With the help of these mechanisms, copiotrophs can travel to and stay in nutrient dense areas long enough for transcriptional regulatory systems to increase gene expression. This in turn helps them increase metabolic processes in high nutrient areas allowing them to maximize their growth during these patches.
Growth characteristics
Copiotrophs are characterized by a high maximum growth rate. This high growth rate allows for copiotrophs to have a larger genome and cell size than their oligotrophic counterparts.
The copiotrophic genome encompasses more ribosomal RNA operons than the oligotrophic genome. Ribosomal RNA operons are linearly related to growth rate. The ribosomal RNA operons are responsible for expression of genes in clusters. The larger amount of ribosomal content allows for more rapid growth. Oligotrophs have one ribosomal RNA operon while copiotrophs can contain up to fifteen operons.
Copiotrophs tend to have a lower carbon use efficiency than oligotrophs. This is the ratio of carbon used for production of biomass per total carbon consumed by the organism. Carbon use efficiency can be used to understand organisms lifestyles, whether they primarily create biomass or require carbon for maintenance energy. Energy is necessary for the copiotrophic lifestyle which includes motility and chemotaxis. This energy could otherwise be used for biomass production. This results in a lower efficiency than the oligotrophic lifestyle which primarily uses energy for the creation of biomass.
Copiotrophs have a lower protein yield than oligotrophs. Protein yield is the amount of protein synthesized per O2 consumed. This is also associated with the higher number of ribosomal RNA operons. Overall, copiotrophs create more protein than their oligotrophic peers; however, due to the copiotrophs' lower carbon use efficiency, less protein is produced per gram of O2 consumed by the organisms.
References
Fierer, N., Bradford, M. A., & Jackson, R. B. (2007). Toward an ecological classification of soil bacteria. Ecology, 88(6), 1354-1364.
Ivars-Martinez, E., Martin-Cuadrado, A. B., D'auria, G., Mira, A., Ferriera, S., Johnson, J., ... & Rodriguez-Valera, F. (2008). Comparative genomics of two ecotypes of the marine planktonic copiotroph Alteromonas macleodii suggests alternative lifestyles associated with different kinds of particulate organic matter. The ISME journal, 2(12), 1194-1212.
Lladó, S., & Baldrian, P. (2017). Community-level physiological profiling analyses show potential to identify the copiotrophic bacteria present in soil environments. PLoS One, 12(2), e0171638.
Organisms by adaptation
Trophic ecology | Copiotroph | [
"Biology"
] | 1,285 | [
"Organisms by adaptation"
] |
9,504,881 | https://en.wikipedia.org/wiki/Lax%E2%80%93Wendroff%20method | The Lax–Wendroff method, named after Peter Lax and Burton Wendroff, is a numerical method for the solution of hyperbolic partial differential equations, based on finite differences. It is second-order accurate in both space and time. This method is an example of explicit time integration where the function that defines the governing equation is evaluated at the current time.
Definition
Suppose one has an equation of the following form:
\[ \frac{\partial u(x,t)}{\partial t} + \frac{\partial f(u(x,t))}{\partial x} = 0 \]
where $x$ and $t$ are independent variables, and the initial state, $u(x, 0)$, is given.
Linear case
In the linear case, where $f(u) = Au$, and $A$ is a constant, the scheme reads
\[ u_i^{n+1} = u_i^n - \frac{\Delta t}{2\Delta x}\, A \left[ u_{i+1}^n - u_{i-1}^n \right] + \frac{\Delta t^2}{2\Delta x^2}\, A^2 \left[ u_{i+1}^n - 2 u_i^n + u_{i-1}^n \right] \]
Here $i$ refers to the $x$ dimension and $n$ refers to the $t$ dimension.
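As a concrete illustration of the linear scheme, the short Java sketch below advances the scalar advection equation ($f(u) = au$ with constant $a$) on a periodic grid. It is not part of the original article; the grid size, wave speed, time step and initial profile are arbitrary example choices.
// Illustrative sketch: Lax–Wendroff update for linear advection u_t + a u_x = 0
// on a periodic grid. All numerical parameters are arbitrary example values.
public class LaxWendroffDemo {
    public static void main(String[] args) {
        int n = 100;                          // number of grid points (assumed)
        double a = 1.0;                       // constant wave speed (assumed)
        double dx = 1.0 / n;
        double dt = 0.5 * dx / Math.abs(a);   // keeps the Courant number below 1
        double c = a * dt / dx;               // Courant number

        double[] u = new double[n];
        for (int i = 0; i < n; i++)           // initial condition: a smooth bump
            u[i] = Math.exp(-100.0 * Math.pow(i * dx - 0.5, 2));

        double[] next = new double[n];
        for (int step = 0; step < 200; step++) {
            for (int i = 0; i < n; i++) {
                int ip = (i + 1) % n;         // periodic neighbours
                int im = (i - 1 + n) % n;
                next[i] = u[i]
                        - 0.5 * c * (u[ip] - u[im])
                        + 0.5 * c * c * (u[ip] - 2.0 * u[i] + u[im]);
            }
            System.arraycopy(next, 0, u, 0, n);
        }
        System.out.println("u at midpoint after 200 steps: " + u[n / 2]);
    }
}
The time step is chosen so that the Courant number $|a|\Delta t / \Delta x$ stays below 1, which is required for stability of the scheme.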
This linear scheme can be extended to the general non-linear case in different ways. One of them is letting
\[ A(u) = f'(u) = \frac{\partial f}{\partial u} \]
Non-linear case
The conservative form of Lax–Wendroff for a general non-linear equation is then:
\[ u_i^{n+1} = u_i^n - \frac{\Delta t}{2\Delta x}\left[ f(u_{i+1}^n) - f(u_{i-1}^n) \right] + \frac{\Delta t^2}{2\Delta x^2}\left[ A_{i+1/2}\left( f(u_{i+1}^n) - f(u_i^n) \right) - A_{i-1/2}\left( f(u_i^n) - f(u_{i-1}^n) \right) \right] \]
where $A_{i\pm 1/2}$ is the Jacobian matrix evaluated at $\tfrac{1}{2}\left( u_i^n + u_{i\pm 1}^n \right)$.
Jacobian free methods
To avoid the Jacobian evaluation, use a two-step procedure.
Richtmyer method
What follows is the Richtmyer two-step Lax–Wendroff method. The first step in the Richtmyer two-step Lax–Wendroff method calculates values for $u(x,t)$ at half time steps, $t_{n+1/2}$, and half grid points, $x_{i+1/2}$. In the second step, values at $t_{n+1}$ are calculated using the data for $t_n$ and $t_{n+1/2}$.
First (Lax) steps:
\[ u_{i+1/2}^{n+1/2} = \frac{1}{2}\left( u_{i+1}^n + u_i^n \right) - \frac{\Delta t}{2\Delta x}\left( f(u_{i+1}^n) - f(u_i^n) \right) \]
\[ u_{i-1/2}^{n+1/2} = \frac{1}{2}\left( u_i^n + u_{i-1}^n \right) - \frac{\Delta t}{2\Delta x}\left( f(u_i^n) - f(u_{i-1}^n) \right) \]
Second step:
\[ u_i^{n+1} = u_i^n - \frac{\Delta t}{\Delta x}\left( f(u_{i+1/2}^{n+1/2}) - f(u_{i-1/2}^{n+1/2}) \right) \]
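The following sketch (again not from the article) applies one Richtmyer two-step update to the inviscid Burgers equation, for which $f(u) = u^2/2$; the grid, time step and initial data are arbitrary example values.
// Illustrative sketch: one Richtmyer two-step update for Burgers' equation,
// f(u) = u^2 / 2, on a periodic grid. Numerical parameters are example values.
public class RichtmyerStep {
    static double f(double u) { return 0.5 * u * u; }

    public static void main(String[] args) {
        int n = 200;
        double dx = 1.0 / n, dt = 0.4 * dx;   // small step chosen for stability (assumed)
        double[] u = new double[n], half = new double[n], next = new double[n];
        for (int i = 0; i < n; i++)           // smooth initial profile
            u[i] = Math.sin(2.0 * Math.PI * i * dx);

        // First (Lax) step: half[i] approximates u_{i+1/2}^{n+1/2}.
        for (int i = 0; i < n; i++) {
            int ip = (i + 1) % n;
            half[i] = 0.5 * (u[ip] + u[i]) - 0.5 * dt / dx * (f(u[ip]) - f(u[i]));
        }
        // Second step: full update from the half-step fluxes.
        for (int i = 0; i < n; i++) {
            int im = (i - 1 + n) % n;
            next[i] = u[i] - dt / dx * (f(half[i]) - f(half[im]));
        }
        System.out.println("u[0] after one step: " + next[0]);
    }
}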
MacCormack method
Another method of this same type was proposed by MacCormack. MacCormack's method uses first forward differencing and then backward differencing:
First step:
\[ u_i^{*} = u_i^n - \frac{\Delta t}{\Delta x}\left( f(u_{i+1}^n) - f(u_i^n) \right) \]
Second step:
\[ u_i^{n+1} = \frac{1}{2}\left( u_i^n + u_i^{*} \right) - \frac{\Delta t}{2\Delta x}\left( f(u_i^{*}) - f(u_{i-1}^{*}) \right) \]
Alternatively,
First step:
\[ u_i^{*} = u_i^n - \frac{\Delta t}{\Delta x}\left( f(u_i^n) - f(u_{i-1}^n) \right) \]
Second step:
\[ u_i^{n+1} = \frac{1}{2}\left( u_i^n + u_i^{*} \right) - \frac{\Delta t}{2\Delta x}\left( f(u_{i+1}^{*}) - f(u_i^{*}) \right) \]
References
Michael J. Thompson, An Introduction to Astrophysical Fluid Dynamics, Imperial College Press, London, 2006.
Numerical differential equations
Computational fluid dynamics | Lax–Wendroff method | [
"Physics",
"Chemistry"
] | 351 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
9,505,399 | https://en.wikipedia.org/wiki/Electrochromatography | Electrochromatography is a chemical separation technique in analytical chemistry, biochemistry and molecular biology used to resolve and separate mostly large biomolecules such as proteins. It is a combination of size exclusion chromatography (gel filtration chromatography) and gel electrophoresis. These separation mechanisms operate essentially in superposition along the length of a gel filtration column to which an axial electric field gradient has been added. The molecules are separated by size due to the gel filtration mechanism and by electrophoretic mobility due to the gel electrophoresis mechanism. Additionally there are secondary chromatographic solute retention mechanisms.
Capillary electrochromatography
Capillary electrochromatography (CEC) is an electrochromatography technique in which the liquid mobile phase is driven through a capillary containing the chromatographic stationary phase by electroosmosis. It is a combination of high-performance liquid chromatography and capillary electrophoresis. The capillary is packed with an HPLC stationary phase and a high voltage is applied; separation is achieved by electrophoretic migration of the analyte and differential partitioning in the stationary phase.
See also
Chromatography
Protein electrophoresis
Electrofocusing
Two-dimensional gel electrophoresis
Temperature gradient gel electrophoresis
References
Chromatography
Protein methods
Molecular biology
Laboratory techniques
Electrophoresis
Biological techniques and tools | Electrochromatography | [
"Chemistry",
"Biology"
] | 304 | [
"Biochemistry methods",
"Chromatography",
"Separation processes",
"Instrumental analysis",
"Protein methods",
"Protein biochemistry",
"Biochemical separation processes",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry",
"Electrophoresis"
] |
9,505,522 | https://en.wikipedia.org/wiki/ISO/IEC%2019770 | International standards in the ISO/IEC 19770 family of standards for IT asset management address both the processes and technology for managing software assets and related IT assets. Broadly speaking, the standard family belongs to the set of Software Asset Management (or SAM) standards and is integrated with other Management System Standards.
ISO/IEC 19770-1: Processes
ISO/IEC 19770-1 is a framework of ITAM processes to enable an organization to prove that it is performing software asset management that meets corporate governance standards. ISO/IEC 19770-1:2017 specifies the requirements for the establishment, implementation, maintenance and improvement of a management system for IT asset management (ITAM), referred to as an “IT asset management system” (ITAMS).
While ISO 55001:2014 specifies the requirements for the establishment, implementation, maintenance and improvement of a management system for asset management, referred to as an “asset management system”, it is primarily focused on physical assets with little provision for the management of software assets. There are a number of characteristics of IT assets which create additional or more detailed requirements. As a result of these characteristics of IT assets, the 19770-1 management system for IT assets has explicit additional requirements dealing with:
controls over software modification, duplication and distribution, with particular emphasis on access and integrity controls;
audit trails of authorizations and of changes made to IT assets;
controls over licensing, underlicensing, overlicensing, and compliance with licensing terms and conditions;
controls over situations involving mixed ownership and responsibilities, such as in cloud computing and with ‘Bring-Your-Own-Device’ (BYOD) practices; and
reconciliation of IT asset management data with data in other information systems when justified by business value, in particular with financial information systems recording assets and expenses.
Updates to 19770-1
The first generation was published in 2006.
The second generation was published in 2012. It retained the original content (with only minor changes) but split the standard into four tiers which can be attained sequentially. These tiers are:
Tier 1: Trustworthy Data
Tier 2: Practical Management
Tier 3: Operational Integration
Tier 4: Full ISO/IEC ITAM Conformance
ISO 19770-1 Edition 3 (current version)
The most recent version, known as ISO 19770-1:2017 and published in December 2017, specifies the requirements for the establishment, implementation, maintenance, and improvement of a management system for IT asset management (ITAM), referred to as an IT asset management system. ISO 19770-1:2017 was a major update and rewrote the standard to conform to the ISO Management System Standards (MSS) format. The tiered structure from 19770-1:2012 was moved to an appendix within the updated standard.
ISO/IEC 19770-2: software identification tag
ISO/IEC 19770-2 provides an ITAM data standard for software identification (SWID) tags. Software ID tags provide authoritative identifying information for installed software or other licensable items (such as fonts or copyrighted papers).
Overview of SWID tags in use
Providing accurate software identification data improves organizational security, and lowers the cost and increases the capability of many IT processes such as patch management, desktop management, help desk management, software policy compliance, etc.
Discovery tools and processes utilize SWID tag data to determine the normalized names and values associated with a software application, ensuring that all tools and processes used by an organization refer to software products with the exact same names and values.
Standards development information
This standard was first published in November 2009. A revision of this standard was published in October 2015.
Steve Klos is the editor of 19770-2 and works for 1E, Inc as a SAM Subject Matter Expert.
ISO/IEC 19770-3: software entitlement schema (ENT)
This part of ISO/IEC 19770 does not provide requirements or recommendations for processes related to software asset management or ENTs. The software asset management processes are in the scope of ISO/IEC 19770-1.
Standards development information
The ISO/IEC 19770-3 Other Working Group ("OWG") was convened by teleconference call on 9 September 2008.
John Tomeny of Sassafras Software Inc served as the convener and lead author of the ISO/IEC 19770-3 "Other Working Group" (later renamed the ISO/IEC 19770-3 Development Group). Mr Tomeny was appointed by Working Group 21 (ISO/IEC JTC 1/SC 7/WG 21) together with Krzysztof Bączkiewicz of Eracent who served as Project Editor concurrent with Mr. Tomeny's leadership. In addition to WG21 members, other participants in the 19770-3 Development Group served as "individuals considered to have relevant expertise by the Convener".
Jason Keogh of 1E and part of the delegation from Ireland is the current editor of 19770-3.
ISO/IEC 19770-3 was published on April 15, 2016.
Principles
This part of ISO/IEC 19770 has been developed with the following practical principles in mind:
Maximum possible usability with legacy entitlement information
The ENT, or software entitlement schema, is intended to provide the maximum possible usability with existing entitlement information, including all historical licensing transactions. While the specifications provide many opportunities for improvement in entitlement processes and practices, they must be able to handle existing licensing transactions without imposing requirements which would prevent such transactions being codified into Ent records.
Maximum possible alignment with the software identification tag specification (ISO/IEC 19770-2)
This part of ISO/IEC 19770 (entitlement schema) is intended to align closely with part 2 of the standard (software identification tags). This should facilitate both understanding and their joint use. Furthermore, any of the elements, attributes, or other specifications of part 2 which the ENT creator may wish to utilize may be used in this part as well.
ISO/IEC 19770-3: Entitlement Management
ISO 19770-3 relates to Entitlement tags - encapsulations of licensing terms, rights and limitations in a machine-readable, standardized format. The transport method (XML, JSON, etc.) is not defined; rather, the meaning and names of specific data stores are outlined to facilitate a common schema between vendors, customers and tool providers.
The first commercial SAM tool to encapsulate ISO 19770-3 was AppClarity by 1E. Since then K2 by Sassafras Software has also encompassed 19770-3. As of February 2018, other tool vendors had indicated interest in the standard but had not implemented it.
It is of note that Jason Keogh, editor of the released 19770-3, works for 1E, and that John Tomeny (initial editor of 19770-3) worked for Sassafras Software.
ISO/IEC 19770-5: overview and vocabulary
ISO/IEC 19770-5:2015 provides an overview of ITAM.
References
External links
ISO/IEC 19770-1:2017
ISO/IEC 19770-2:2015
ISO/IEC 19770-3:2016
ISO/IEC 19770-4:2017
ISO/IEC 19770-5:2015
Official WG21 web site
Business Software Alliance
International Association of Information Technology Asset Managers
National Cybersecurity Center of Excellence
National Institute for Standards and Technology
Trusted Computing Group
ITAM.ORG - Organization for IT Asset Management Professionals and ITAM Providers
Australian Software Asset Management Association (ASAMA)
Information technology management
19770 | ISO/IEC 19770 | [
"Technology"
] | 1,563 | [
"Information technology",
"Information technology management"
] |
9,505,941 | https://en.wikipedia.org/wiki/List%20of%20quantum-mechanical%20systems%20with%20analytical%20solutions | Much insight in quantum mechanics can be gained from understanding the closed-form solutions to the time-dependent non-relativistic Schrödinger equation. It takes the form
\[ i\hbar \frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H}\,\Psi(\mathbf{r},t) \]
where $\Psi$ is the wave function of the system, $\hat{H}$ is the Hamiltonian operator, and $t$ is time. Stationary states of this equation are found by solving the time-independent Schrödinger equation,
\[ \hat{H}\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}), \]
which is an eigenvalue equation. Very often, only numerical solutions to the Schrödinger equation can be found for a given physical system and its associated potential energy. However, there exists a subset of physical systems for which the form of the eigenfunctions and their associated energies, or eigenvalues, can be found. These quantum-mechanical systems with analytical solutions are listed below.
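As a worked illustration of what such a closed-form solution looks like (standard textbook results, added here for illustration rather than taken from the list's sources), the one-dimensional particle in a box of width $L$, which appears in the list below, has the stationary states and energies
\[ \psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right), \qquad 0 \le x \le L, \qquad E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \dots \]
Each system in the list admits an analogous closed-form family of eigenfunctions and eigenvalues, though usually expressed in terms of special functions.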
Solvable systems
The two-state quantum system (the simplest possible quantum system)
The free particle
The one-dimensional potentials
The particle in a ring or ring wave guide
The delta potential
The single delta potential
The double-well delta potential
The steps potentials
The particle in a box / infinite potential well
The finite potential well
The step potential
The rectangular potential barrier
The triangular potential
The quadratic potentials
The quantum harmonic oscillator
The quantum harmonic oscillator with an applied uniform field
The Inverse square root potential
The periodic potential
The particle in a lattice
The particle in a lattice of finite length
The Pöschl–Teller potential
The quantum pendulum
The three-dimensional potentials
The rotating system
The linear rigid rotor
The symmetric top
The particle in a spherically symmetric potential
The hydrogen atom or hydrogen-like atom e.g. positronium
The hydrogen atom in a spherical cavity with Dirichlet boundary conditions
The Mie potential
The Hooke's atom
The Morse potential
The Spherium atom
Zero range interaction in a harmonic trap
Multistate Landau–Zener models
The Luttinger liquid (the only exact quantum mechanical solution to a model including interparticle interactions)
Solutions
See also
List of quantum-mechanical potentials – a list of physically relevant potentials without regard to analytic solubility
List of integrable models
WKB approximation
Quasi-exactly-solvable problems
References
Reading materials
Quantum models
Quantum-mechanical systems with analytical solutions
Exactly solvable models | List of quantum-mechanical systems with analytical solutions | [
"Physics"
] | 454 | [
"Quantum models",
"Quantum mechanics"
] |
9,506,236 | https://en.wikipedia.org/wiki/Truncated%20differential%20cryptanalysis | In cryptography, truncated differential cryptanalysis is a generalization of differential cryptanalysis, an attack against block ciphers. Lars Knudsen developed the technique in 1994. Whereas ordinary differential cryptanalysis analyzes the full difference between two texts, the truncated variant considers differences that are only partially determined. That is, the attack makes predictions of only some of the bits instead of the full block. This technique has been applied to SAFER, IDEA, Skipjack, E2, Twofish, Camellia, CRYPTON, and even the stream cipher Salsa20.
References
Cryptographic attacks | Truncated differential cryptanalysis | [
"Technology"
] | 117 | [
"Cryptographic attacks",
"Computer security exploits"
] |
9,506,496 | https://en.wikipedia.org/wiki/TGF%20beta%20receptor | Transforming growth factor beta (TGFβ) receptors are single pass serine/threonine kinase receptors that belong to TGFβ receptor family. They exist in several different isoforms that can be homo- or heterodimeric. The number of characterized ligands in the TGFβ superfamily far exceeds the number of known receptors, suggesting the promiscuity that exists between the ligand and receptor interactions.
TGFβ is a growth factor and cytokine involved in paracrine signalling and can be found in many different tissue types, including brain, heart, kidney, liver, bone, and testes. Over-expression of TGFβ can induce renal fibrosis, causing kidney disease, as well as diabetes, and ultimately end-stage renal disease. Recent developments have found that certain types of protein antagonists directed against TGFβ receptors can halt, and in some cases reverse, the effects of renal fibrosis.
Three TGFβ superfamily receptors specific for TGFβ, the TGFβ receptors, can be distinguished by their structural and functional properties. TGFβR1 (ALK5) and TGFβR2 have similar ligand-binding affinities and can be distinguished from each other only by peptide mapping. Both TGFβR1 and TGFβR2 have a high affinity for TGFβ1 and low affinity for TGFβ2. TGFβR3 (β-glycan) has a high affinity for both homodimeric TGFβ1 and TGFβ2 and in addition the heterodimer TGF-β1.2. The TGFβ receptors also bind TGFβ3.
See also
TGFβ superfamily receptor
TGFβ signaling pathway
TGFβ superfamily
References
External links
Transmembrane receptors
TGF beta receptors | TGF beta receptor | [
"Chemistry"
] | 373 | [
"Transmembrane receptors",
"Signal transduction"
] |
9,506,545 | https://en.wikipedia.org/wiki/JGroups | JGroups is a library for reliable one-to-one or one-to-many communication written in the Java language.
It can be used to create groups of processes whose members send messages to each other. JGroups enables developers to create reliable multipoint (multicast) applications where reliability is a deployment issue. JGroups also relieves the application developer from implementing this logic themselves. This saves significant development time and allows for the application to be deployed in different environments without having to change code.
Features
Group creation and deletion. Group members can be spread across LANs or WANs
Joining and leaving of groups
Membership detection and notification about joined/left/crashed members
Detection and removal of crashed members
Sending and receiving of member-to-group messages (point-to-multipoint)
Sending and receiving of member-to-member messages (point-to-point)
Code sample
The code below demonstrates the implementation of a simple command-line chat client using JGroups:
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Chat extends ReceiverAdapter {
private JChannel channel;
public Chat(String props, String name) throws Exception {
// Create the channel from the given protocol stack configuration, register
// this object as the receiver and join the cluster.
channel = new JChannel(props);
if (name != null)
channel.setName(name); // logical name shown to other members
channel.setReceiver(this);
channel.connect("ChatCluster");
}
public void viewAccepted(View view) {
System.out.printf("** view: %s\n", view);
}
public void receive(Message msg) {
System.out.printf("from %s: %s\n", msg.getSource(), msg.getObject());
}
private void send(String line) {
try {
channel.send(new Message(null, line));
} catch (Exception e) {}
}
public void run() throws Exception {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
while (true) {
System.out.print("> ");
System.out.flush();
String line = in.readLine();
if (line == null) break; // end of input
send(line.toLowerCase());
}
}
public void end() throws Exception {
channel.close();
}
public static void start(Chat client) throws Exception {
try {
client.run();
} catch (Exception e) {
} finally {
client.end();
}
}
public static void main(String[] args) throws Exception {
String props = "udp.xml";
String name = null;
for (int i = 0; i < args.length; i++) {
if (args[i].equals("-props")) {
props = args[++i];
continue;
}
if (args[i].equals("-name")) {
name = args[++i];
continue;
}
System.out.println("Chat [-props XML config] [-name name]");
return;
}
start(new Chat(props, name));
}
}
A JChannel is instantiated from an XML configuration (e.g. udp.xml). The channel is the endpoint for joining a cluster.
Next, the receiver is set, which means that two callbacks will be invoked:
when a new member joins, or an existing member leaves the cluster
when a message from some other cluster member is received
Then, the channel joins cluster "ChatCluster". From now, messages can be sent and received, plus a new view (including this member) will be installed in all cluster members (including the newly joined member).
Anything typed in the main loop results in the creation of a message to be sent to all cluster members, including the sender.
Instances of the chat application can be run in the same process, on the same box, on different hosts in the local network, on hosts in different networks, or in the cloud. The code remains the same; only the configuration needs to be changed.
For example, in a local network, IP multicasting might be used. When IP multicasting is disabled, TCP can be used as transport. When run in the cloud, TCP plus a cloud discovery protocol would be used and so on...
Flexible protocol stack
The most powerful feature of JGroups is its flexible protocol stack, which allows developers to adapt it to exactly match their application requirements and network characteristics. The benefit of this is that you only pay for what you use. By mixing and matching protocols, various differing application requirements can be satisfied. JGroups comes with a number of protocols (but anyone can write their own), for example
Transport protocols: UDP (IP multicast), TCP
Fragmentation of large messages
Discovery protocols to discover the initial membership for a joining node
Reliable unicast and multicast message transmission. Lost messages are retransmitted
Failure detection: crashed members are excluded from the membership
Ordering protocols: Fifo, Total Order (sequencer or token based)
Membership and notification of joined or crashed members
Network partition (split brain) detection and merging
Flow control
Encryption and authentication (including SASL support)
Compression
Building blocks
Building blocks are classes layered over JGroups channels, which provide higher-level abstractions such as
RPCs to individual or all cluster nodes (see the sketch after this list)
Distributed caches
Distributed locks
Distributed atomic counters
Distributed task execution
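As a hedged illustration of the RPC building block mentioned above, the sketch below uses RpcDispatcher to invoke a method on all cluster members. The class, cluster name and parameter values are invented for the example, and method signatures can differ between JGroups versions, so the manual for the version in use should be consulted.
// Illustrative sketch: cluster-wide method invocation with RpcDispatcher.
import org.jgroups.JChannel;
import org.jgroups.blocks.RequestOptions;
import org.jgroups.blocks.RpcDispatcher;
import org.jgroups.util.RspList;

public class RpcExample {
    // Method that other cluster members may invoke remotely.
    public int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();                 // default protocol stack
        RpcDispatcher dispatcher = new RpcDispatcher(channel, new RpcExample());
        channel.connect("RpcCluster");

        // null destination list = invoke on all members; collect all responses.
        RspList<Integer> responses = dispatcher.callRemoteMethods(
                null, "add",
                new Object[]{2, 3},
                new Class[]{int.class, int.class},
                RequestOptions.SYNC());
        System.out.println("add(2, 3) on all members: " + responses);

        channel.close();
    }
}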
References
External links
The JGroups website
A simple request distribution example in JGroups
A slideshow presenting JGroups
Computer networking
Java (programming language) software | JGroups | [
"Technology",
"Engineering"
] | 1,151 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
8,819,015 | https://en.wikipedia.org/wiki/Drug%20Master%20File | Drug Master File or DMF is a document prepared by a pharmaceutical manufacturer and submitted solely at its discretion to the appropriate regulatory authority in the intended drug market. There is no regulatory requirement to file a DMF. However, the document provides the regulatory authority with confidential, detailed information about facilities, processes, or articles used in the manufacturing, processing, packaging, and storing of one or more human drugs. Typically, a DMF is filed when two or more firms work in partnership on developing or manufacturing a drug product. The DMF filing allows a firm to protect its intellectual property from its partner while complying with regulatory requirements for disclosure of processing details.
Description
Drug Master File (DMF) is a document containing complete information on an Active Pharmaceutical Ingredient (API) or finished drug dosage form. It is known as the European Drug Master File (EDMF) or Active Substance Master File (ASMF) in Europe, and as the US Drug Master File (US-DMF) in the United States.
DMFs in the United States
In the United States, DMFs are submitted to the Food and Drug Administration (FDA). The main objective of the DMF is to support regulatory requirements and to prove the quality, safety and efficacy of the medicinal product when obtaining an Investigational New Drug Application (IND), a New Drug Application (NDA), an Abbreviated New Drug Application (ANDA), another DMF, or an Export Application.
In United States there are 5 types of Drug Master file:
Type I Manufacturing Site, Facilities, Operating Procedures and Personnel
Type II Drug Substance, Drug Substance Intermediate, and Material Used in Their Preparation, or Drug Product
Type III Packaging Material
Type IV Excipient, Colorant, Flavor, Essence, or Material Used in Their Preparation
Type V FDA Accepted Reference Information
DMFs in Europe
The content and the format for drug master file used in United States differs from that used in European Countries to obtain market authorization (MA). The Main Objective of the EDMF is to support regulatory requirements of a medicinal product to prove its quality, safety and efficacy. This helps to obtain a Marketing Authorisation grant.
References
External links
FDA: Guideline for drug master files
DMF search engine
Draft Note for Guidance on the EDMF procedure
Pharmaceutical industry | Drug Master File | [
"Chemistry",
"Biology"
] | 473 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
8,819,112 | https://en.wikipedia.org/wiki/Rockin%27%20Tug | Rockin' Tug is a flat tugboat ride manufactured by Zamperla. The ride is manufactured in both traveling and park versions. It is the first of a line of new "halfpiperides". Zamperla's Disk'O is another popular ride from that "family". The difference is that the Rockin' Tug has a friction wheel, while the Disk'O is power-driven.
Design and operation
Twenty-four riders are loaded into a tugboat-shaped gondola, in six rows of four. The rows face into the center of the ride. The gondola is driven back and forth along a track shaped in a concave arc, rocking as it goes. While this is happening, the entire tugboat rotates around its center.
The traveling version of the ride racks onto a single 28 foot trailer.
Variations
Several theme variations exist. The most common one is a tug boat, but other versions include a longboat, a pirate ship, and a skateboard.
Appearances
Australia - Two, a travelling model owned by Better Amusement Hire and Big Red Boat Ride at Dreamworld
Austria - At least two travelling models (Grubelnik)
Belgium - At least one travelling model
Canada - One: Canada's Wonderland where it is known as "Lucy's Tugboat". A former Rockin' Tug was found at Galaxyland in West Edmonton Mall, named the "Rockin' Rocket", which closed in 2006.
Germany - At least five: one travelling model (Schäfer), and at least four permanent versions in Germany (Kernwasserwunderland, Legoland)
Ireland - in 2016 a Rockin' Tug (named that way too) was added to Tayto Park in the Eagle's Nest nearby Shot Tower
Japan - At least one permanent version at Toshimaen Amusement Park, Tokyo. Acquired in 2005.
The Netherlands - At least six: Rolling Stones at Drouwenerzand, Dolle Dobber at DippieDoe, Alpenrutsche at Toverland, Moby Dick at Deltapark Neeltje Jans and Koning Juliana Toren, Fogg's Trouble at Attractiepark Slagharen.
New Zealand - One, a traveling model owned by Mahons Amusements
Sweden - At least one: Lilla Lots at Liseberg.
Switzerland - At least one travelling model (Lang)
United Arab Emirates - Magic Planet, Mall of the Emirates, Dubai. 24 Seater Rocking Tug. Stationary model.
United Kingdom - At least eleven: Rockin` Tug at Butlins Skegness, Butlins Minehead and Butlins Bognor Regis, Rocking Tug at Flamingo Land Resort., Rocking Bulstrode at Drayton Manor's Thomas Land (themed after the Thomas and Friends character Bulstrode the barge), Heave Ho at Alton Towers, Longboat Invader at Legoland Windsor (a Lego/Viking themed longship), Rockin' Tug at The Flambards Experience, Trawler Trouble at Chessington World of Adventures, Rockin' Tug at Woodlands Family Theme Park Devon, Timber Tug Boat at Thorpe Park, Sk8boarda at Adventure Island (debated - it's unknown if it's built by Zamperla or by the park itself), and Kontiki at Paultons Park.
United States of America - Thirteen: traveling models owned by Murphy Brothers Exposition, Shamrock Shows, American Traveling Shows, and by D & K Amusements; and park models owned by Alabama Splash Adventure, Six Flags over Georgia, Valleyfair (as Lucy's Tugboat), Edaville (as Rockin' Bulstrode), Knott's Berry Farm (as Rapid River Fun), Knoebels, Elitch Gardens Theme Park, Kennywood (as SS Kenny), Trimper's Rides, Waldameer & Water World (as SS Wally), SeaWorld San Antonio, Oaks Amusement Park and Santa's Village (as SS Peppermint Twist).
References
External links
Zamperla page on Rockin' Tug
Schwarzkopf.Coaster.net
Amusement rides
Zamperla
Articles containing video clips | Rockin' Tug | [
"Physics",
"Technology"
] | 855 | [
"Physical systems",
"Machines",
"Amusement rides"
] |
8,819,388 | https://en.wikipedia.org/wiki/Rajiv%20Gandhi%20Centre%20for%20Biotechnology | Rajiv Gandhi Centre for Biotechnology is a research institute in India, exclusively devoted to research in molecular biology and biotechnology. It is located at Thiruvananthapuram, the capital city of the state of Kerala in India. The centre is an autonomous institute under the Department of Biotechnology of the Govt. of India. Previously, it was an R&D centre under the Kerala State Council for Science, Technology and Environment, which is a funding agency for research institutes and centres in Kerala.
History
The centre was inaugurated on 18 November 2002 by the then President of India, Dr. APJ Abdul Kalam. The institute has highly focused research departments working in the following areas of the biological sciences:
Cancer Research
Cardiovascular Disease & Diabetes Biology
Pathogen Biology
Regenerative Biology
Plant Biotechnology & Disease Biology
Neurobiology
Reproduction Biology
Transdisciplinary Biology
The Center has a regional facility for Genetic Fingerprinting, which provides DNA analysis services for forensic & criminal investigations, paternity disputes, identification of wildlife remains, authentication of plants and seeds besides a battery of molecular diagnostics for genetic and infectious diseases. RGCB is also a major provider of laboratory and infrastructure services to other academic and research institutions. RGCB has a strength of 25 scientists, 120 Ph.D. students and around 100 research project staff. The centre has good infrastructural facilities for carrying out research in the field of Biotechnology. Financial support of Rs. 100 crores sanctioned by the Govt. of India in 2008, for a period of 3 years, apart from the yearly allocation of Rs. 25 crores, aims at making RGCB a world class research centre in the near future. RGCB is set to expand further into a second campus at Aakkulum shortly. It would focus on R & D and also provide a unique "TEST & PROVE" facility to encourage biotechnology.
BioSpectrum magazine ranked the biotechnology course at RGCB as the second best in the country, behind only the Institute of Chemical Technology, Mumbai.
RGCB started setting up a BSL4 lab in 2020.
References
External links
See also
Rajiv Gandhi Cancer Institute and Research Centre
Research institutes in Thiruvananthapuram
Medical research institutes in India
Biotechnology in India
2002 establishments in Kerala | Rajiv Gandhi Centre for Biotechnology | [
"Biology"
] | 454 | [
"Biotechnology in India",
"Biotechnology by country"
] |
8,820,363 | https://en.wikipedia.org/wiki/Dunt | Dunting is a fault that can occur during the firing of ceramic articles. It is the "cracking that occurs in fired ceramic bodies as a result of a thermally induced stress" and is caused by a "ware cooled too quickly after it has been fired".
Although they usually occur during cooling, dunts can also be caused by excessively fast heating rates. Heating dunts can be recognised by the rounded edges of the cracks, as the glaze matured after they occurred, whereas cooling dunts have sharp edges.
It has been found that bodies formulated with quartz rather than flint were more susceptible to dunting, especially on re-fire. It was postulated this may be related to the lower Young's modulus of the quartz based bodies.
Dunting also occurs in wares produced from montmorillonite clay bodies due to the abrupt volume change of cristobalite during its inversion upon cooling. "The release of free silica takes place in montmorillonite above 950 °C, but almost double the silica is released, compared to kaolin. Therefore, clay bodies with high amounts of montmorillonite contain a high percentage of free silica after firing, which may cause the ware to crack during cooling (dunting)."
See also
Pottery
Ceramics (art)
References
Pottery
Ceramic engineering | Dunt | [
"Engineering"
] | 264 | [
"Ceramic engineering"
] |
8,821,011 | https://en.wikipedia.org/wiki/Windows%20Home%20Server | Windows Home Server (code-named Quattro) is a home server operating system from Microsoft. It was announced on 7 January 2007 at the Consumer Electronics Show by Bill Gates, released to manufacturing on 16 July 2007 and officially released on 4 November 2007.
Windows Home Server was based on Windows Server 2003 R2 and was intended to be a solution for homes with multiple connected PCs to offer file sharing, automated backups, print server, and remote access. It is paired with the Windows Home Server Console—client software accessed from another computer on the network to provide a graphical management interface.
Power Pack 1 for Windows Home Server was released on 20 July 2008. Power Pack 2 was released on 24 March 2009 and Power Pack 3 was released on 24 November 2009.
Windows Home Server 2011, the next version of this operating system, was released on 6 April 2011. Microsoft confirmed Windows Home Server 2011 to be last release in the Windows Home Server product line.
Windows Home Server was the brainchild of Charlie Kindel who was the General Manager for the product from 2005 through 2009.
Microsoft has ended support for Windows Home Server on 8 January 2013.
Features
10 computers and 10 users: Allows a maximum of ten user accounts to be created on the server console and ten computers to have WHS connector installed, without any client access licenses.
Centralized backup: Allows backup of up to 10 PCs, using Single-instance storage technology to avoid multiple copies of the same file, even if that file exists on multiple PCs.
Health monitoring: Can centrally track the health of all PCs on the network, including antivirus and firewall status.
File sharing: Creates and operates network shares for computers to store the files remotely, acting as a network-attached storage device. Separate categories are provided for common file types like Documents, Music, Pictures and Videos. The files are indexed for fast searching.
Printer sharing: Allows a print server to handle print jobs for all users.
Shadow Copy: Uses Volume Shadow Copy Service to take point in time snapshots that allow older versions of files to be recovered.
Headless operation: No monitor or keyboard is required to manage the device. Remote administration is performed by using the Windows Home Server Console client software provided in the bundle. Remote Desktop Services connections to the server are supported while connected to the same LAN.
Remote access gateway: Allows remote access to any connected PC on the network, including the server itself, over the Internet.
Media streaming: Can stream media to an Xbox 360 or other devices supporting Windows Media Connect.
Selective data redundancy: Guards against a single drive failure by duplicating selected data across multiple drives.
Expandable storage: Provides a unified single and easily expandable storage space, removing the need for drive letters.
Extensibility through add-ins: Add-ins allow third-party developers to extend the features and functionality of the server. Add-Ins can be developed using the Windows Home Server SDK, to provide additional services to the client computers or work with the data already on the server. Add-ins can also be ASP.NET applications, hosted in IIS 6 running on WHS.
Server backup: Backs up files which are stored within shared folders on the server to an external hard drive.
Technology
Home Server Console
The configuration interface was designed to be user-friendly enough that it could be set up without prior knowledge of server administration. The configuration interface, called the Home Server Console, was delivered as a Remote Desktop Protocol application to remote PCs: while the application ran on the server itself, the GUI was rendered on the remote system. The Home Server Console client application could be accessed from any Windows PC. The server itself required no video card or peripherals; it was designed to require only an Ethernet card and at least one Windows XP, Windows Vista or Windows 7 computer.
Drive Extender
Windows Home Server Drive Extender was a file-based replication system that provided three key capabilities:
Multi-disk redundancy so that if any given disk failed, data was not lost
Arbitrary storage expansion by supporting any type of hard disk drive (e.g. Serial ATA, USB, FireWire) in any mixture and capacity, similar in concept to JBOD
A single folder namespace (no drive letters)
With Drive Extender, users could add larger-capacity hard disk drives and then take lesser-capacity drives offline to upgrade capacity online. For example, if the user was reaching the capacity of the share, with five terabytes used of the six-terabyte capacity provided by six one-terabyte drives, the user could take one of the one-terabyte drives offline and physically replace it with a two-terabyte drive. WHS automatically equalized the distribution of used space across all available drives on a regular basis. The offline process would consolidate the used data onto the minimum number of drives, allowing for the removal of one of the lesser-capacity drives. Once it was replaced with a drive of higher capacity, the system would automatically redistribute used capacity among the pool to ensure spare capacity on each drive.
Users (specifically those who configure a family's home server) dealt with storage at two levels: Shared Folders and Disks. The only concepts relevant regarding disks was whether they had been "added" to the home server's storage pool or not and whether the disk appeared healthy to the system or not. This was in contrast with Windows' Logical Disk Manager which requires a greater degree of technical understanding in order to correctly configure a RAID array.
Shared Folders had a name, a description, permissions, and a flag indicating whether duplication (redundancy) was on or off for that folder.
If duplication was on for a Shared Folder (which was the default on multi-disk Home Server systems and not applicable to single disk systems) then the files in that Shared Folder were duplicated and the effective storage capacity was halved. However, in situations where a user may not have wanted data duplicated (e.g. TV shows that had been archived to a Windows Home Server from a system running Windows Media Center), Drive Extender provided the capability to not duplicate such files if the server was short on capacity or manually mark a complete content store as not for duplication.
A known limitation of Drive Extender was that it in some cases changed the timestamps of directories and files when data was moved around between disks. According to Microsoft this was expected behavior. This caused unexpected behavior when using clients that sort media based on date. Examples are XBMC, MediaPortal, and Squeezebox Server. The aforementioned programs worked fine with WHS; however, files may have appeared out of order due to this caveat.
Cancellation
On 23 November 2010, Microsoft announced that Drive Extender would be removed from Windows Home Server 2011. This led to public outcry in the announcement's comments section. Criticism of Drive Extender's removal mainly related to it being seen as a core feature of Windows Home Server and a key reason for adoption. As a replacement for Drive Extender, Microsoft stated that OEMs would use RAID on their Windows Home Server products.
Computer Backup and Restore
Windows Home Server Computer Backup automatically backs up all of the computers in a home to the server using an image-based system that ensures point-in-time-based restoration of either entire PCs or specific files and folders. Complete bare-metal restores are initiated through a restore bootable CD, file based restores are initiated through the WHS client software which allows the users to open a backup and "drag and drop" files from it. This technology uses Volume Shadow Services (VSS) technology on the client computer to take an image based backup of a running computer. Because the backup operates on data at the cluster level, single instancing can be performed to minimize the amount of data that travels over the network and that will ultimately be stored on the home server. This single instancing gives the server the ability to store only one instance of data, no matter if the data originated from another computer, another file, or even data within the same file.
Computer backup images are not duplicated on the server, so if a server hard drive fails, backups could be lost. The "Server Backup" feature added in Power Pack 1 does not include duplication of backup images.
Remote File Access
The system also offers an SSL secured web browser based interface over the Internet to the shared file stores. The release version offers access to the web interface via a free Windows Live-provided URL, which uses Dynamic DNS. The web interface also allows the uploading to and downloading of files from the content stores. However, there is a limit of 2 GB for a single batch of upload.
Remote Desktop Services
The system also supports Terminal Services Gateway, allowing remote control of the desktop of any Windows computer on the home network. Currently supported systems are those which would normally support Remote Desktop: Windows XP Professional, Tablet and Media Center editions, Windows Vista Business, Enterprise and Ultimate editions and Windows 7 Professional, Enterprise and Ultimate editions. The web interface also supports embedding the Remote Desktop ActiveX control, to provide remote access to home computers from within the web interface directly. Remote sessions can also connect to the Home Server console to configure the server over the internet.
Add-Ins
Windows Home Server allows for developers to publish community and commercial add-ins designed to enhance the Windows Home Server with added functionality. As of January 2010, nearly 100 of these add-ins have been developed for WHS, including applications for antivirus & security, backups, disk management, automation, media, network/power management, remote access, BitTorrent and more. The Windows Home Server SDK (Software Development Kit) provides developers with a set of APIs and tools to use when developing for and extending Windows Home Server.
Compatibility
Windows Home Server features integration with Windows XP (SP2 or newer), Windows Vista, and Windows 7 (after the release of Power Pack 3) through a software installation, either from a client CD or via a network share. The connector software may also be installed by accessing yourserver:55000 through a web browser, where a link is provided to download the connector software and to install troubleshooting tools. Files stored on Windows Home Server are also available through a Windows share, opening compatibility to a wide variety of operating systems. Also, the Administration console is available via Remote Desktop, allowing administration from unsupported platforms.
Windows Home Server does not support Microsoft Security Essentials.
64-bit Windows client support was introduced in Power Pack 1, though the Restore Wizard on the Windows Home Server Restore CD is unable to restore clients running 64-bit operating systems, due to the fact that the Restore CD does not support 64-bit drivers. Windows XP Professional x64 isn't officially supported. However, unofficial workarounds allow Connector software to work on XP x64.
Integration of the file sharing service as a location for Mac OS X's Time Machine was apparently being considered, but upon Mac OS X Leopard's release, Apple had removed the ability to use the SMB file sharing protocol for Time Machine backups. One WHS provider, HP, provides their own plug-in with their home server line capable of Time Machine backup to a home server.
Windows Home Server has not officially supported Domain Controller capability and cannot readily join a Windows Server domain. Wireless networking is supported.
Dedicated devices will have the operating system pre-installed and may be supplied with a server recovery disk which reloads the OS over a network connection. This is utilized on the HP MediaSmart Server, and the Fujitsu Siemens Scaleo Home Server.
Resolved issues
File corruption
The first release of Windows Home Server, RTM (release to manufacturing), suffered from a file corruption flaw whereby files saved directly to or edited on shares on a WHS device could become corrupted. Only the files that had NTFS Alternate Data Streams were susceptible to the flaw. The flaw led to data corruption only when the server was under heavy load at the time when the file (with ADS) was being saved onto a share.
Backups of client PCs made by Windows Home Server were not susceptible to the flaw.
Even though the issue was first acknowledged in October 2007, Microsoft formally warned users of the seriousness of the flaw on 20 December 2007. Microsoft then issued a list of applications, including Windows Live Photo Gallery, Microsoft OneNote, Microsoft Outlook and SyncToy 2.0, which might have triggered the flaw if they were used to edit the files on a WHS share directly.
This issue was fixed by Power Pack 1, released on 21 July 2008.
No native backup
Windows Home Server RTM did not include a mechanism for backing up the server. Power Pack 1 added the ability to back up files stored in the Shared Folders to an external drive. Users can also subscribe to third-party online services, for a fee. However, there remains no way to back up the installed server operating system. The client backup database can be backed up either manually, using instructions provided by Microsoft, or by using the WHS BDBB add-in written by Alex Kuretz.
Pricing
While some hardware manufacturers have developed dedicated boxes, Microsoft has also released Windows Home Server under the OEM/System Builder license. In November 2008, Microsoft lowered the price of the WHS System Builder SKU to US$100.
Users can also choose to use an existing PC or build their own systems, which would include the use of WHS System Builder.
As of 23 March 2009, Microsoft has also made Windows Home Server available to MSDN and Microsoft Technet subscribers.
Some computer systems are available only with a bundled Windows Home Server license. As is the case with other versions of Windows it is possible to request a refund of the license fees paid for Windows Home Server.
See also
File server
Media server
References
Further reading
External links
Official
(from archive.org)
Home Server
Backup software
Home servers
Home Server | Windows Home Server | [
"Technology"
] | 2,859 | [
"Computing platforms",
"Microsoft Windows"
] |
8,821,432 | https://en.wikipedia.org/wiki/Barium%20star | Barium stars are spectral class G to K stars whose spectra indicate an overabundance of s-process elements by the presence of singly ionized barium, Ba II, at λ 455.4 nm. Barium stars also show enhanced spectral features of carbon, the bands of the molecules CH, CN and C2. The class was originally recognized and defined by William P. Bidelman and Philip Keenan. Initially, after their discovery, they were thought to be red giants, but the same chemical signature has been observed in main-sequence stars as well.
Observational studies of their radial velocity suggested that all barium stars are binary stars. Observations in the ultraviolet using International Ultraviolet Explorer detected white dwarfs in some barium star systems.
Barium stars are believed to be the result of mass transfer in a binary star system. The mass transfer occurred when the now-observed giant star was on the main sequence. Its companion, the donor star, was a carbon star on the asymptotic giant branch (AGB), and had produced carbon and s-process elements in its interior. These nuclear fusion products were mixed by convection to its surface. Some of that matter "polluted" the surface layers of the main-sequence star as the donor star lost mass at the end of its AGB evolution, and it subsequently evolved to become a white dwarf. These systems are being observed at an indeterminate amount of time after the mass transfer event, when the donor star has long been a white dwarf. Depending on the initial properties of the binary system, the polluted star can be found at different evolutionary stages.
During its evolution, the barium star will at times be larger and cooler than the limits of the spectral types G or K. When this happens, ordinarily such a star is spectral type M, but its s-process excesses may cause it to show its altered composition as another spectral peculiarity. While the star's surface temperature is in the M-type regime, the star may show molecular features of the s-process element zirconium, zirconium oxide (ZrO) bands. When this happens, the star will appear as an "extrinsic" S star.
Historically, barium stars posed a puzzle, because in standard stellar evolution theory G and K giants are not far enough along in their evolution to have synthesized carbon and s-process elements and mix them to their surfaces. The discovery of the stars' binary nature resolved the puzzle, putting the source of their spectral peculiarities into a companion star which should have produced such material. The mass transfer episode is believed to be quite brief on an astronomical timescale.
Prototypical barium stars include Zeta Capricorni, HR 774, and HR 4474.
The CH stars are Population II stars with similar evolutionary state, spectral peculiarities, and orbital statistics, and are believed to be the older, metal-poor analogs of the barium stars.
References
Star types
Barium | Barium star | [
"Astronomy"
] | 610 | [
"Star types",
"Astronomical classification systems"
] |
8,821,670 | https://en.wikipedia.org/wiki/Voglibose | Voglibose (INN and USAN, trade name Voglib, marketed by Mascot Health Series) is an alpha-glucosidase inhibitor used for lowering postprandial blood glucose levels in people with diabetes mellitus. Voglibose is a research product of Takeda Pharmaceutical Company, Japan's largest pharmaceutical company. Voglibose was discovered in 1981, and was first launched in Japan in 1994, under the trade name BASEN, to improve postprandial hyperglycemia in diabetes mellitus.
Postprandial hyperglycemia (PPHG) is primarily due to impaired first-phase insulin secretion. Alpha-glucosidase inhibitors delay glucose absorption at the intestinal level and thereby prevent a sudden surge of glucose after a meal.
There are three major drugs which belong to this class, acarbose, miglitol and voglibose, of which voglibose is the newest.
Efficacy
A Cochrane systematic review assessed the effect of alpha-glucosidase inhibitors in people with impaired glucose tolerance, impaired fasting blood glucose, elevated glycated hemoglobin A1c (HbA1c). It was found that there was no conclusive evidence that voglibose compared to diet and exercise or placebo reduced incidence of diabetes mellitus type 2, improved all-cause mortality, reduced or increased risk of cardiovascular mortality, serious or non-serious adverse events, non-fatal stroke, congestive heart failure, or non-fatal myocardial infarction.
References
Further reading
Alpha-glucosidase inhibitors
Amino sugars | Voglibose | [
"Chemistry"
] | 336 | [
"Amino sugars",
"Carbohydrates"
] |
8,822,386 | https://en.wikipedia.org/wiki/Random%20dopant%20fluctuation | Random dopant fluctuation (RDF) is a form of process variation resulting from variation in the implanted impurity concentration. In MOSFET transistors, RDF in the channel region can alter the transistor's properties, especially threshold voltage. In newer process technologies RDF has a larger effect because the total number of dopants is fewer, and the addition or deletion of a few impurity atoms can significantly alter transistor properties. RDF is a local form of process variation, meaning that two neighbouring transistors may have significantly different dopant concentrations.
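As a back-of-the-envelope illustration of why small devices are more affected (the device dimensions and doping level below are arbitrary example values, not data for any particular process), the expected number of dopant atoms in a channel scales with its volume, and simple Poisson statistics give a relative fluctuation of roughly one over the square root of that number:
// Illustrative sketch: expected dopant count and its statistical fluctuation
// for a small MOSFET channel. All numbers are invented example values.
public class DopantFluctuation {
    public static void main(String[] args) {
        double width = 20e-7;      // channel width in cm  (20 nm, assumed)
        double length = 20e-7;     // channel length in cm (20 nm, assumed)
        double depth = 5e-7;       // channel depth in cm  (5 nm, assumed)
        double doping = 1e19;      // dopant concentration in atoms per cm^3 (assumed)

        double volume = width * length * depth;                 // cm^3
        double expectedAtoms = doping * volume;                  // mean dopant count
        double relativeSigma = 1.0 / Math.sqrt(expectedAtoms);   // Poisson estimate

        System.out.printf("Expected dopant atoms: %.1f%n", expectedAtoms);
        System.out.printf("Relative fluctuation:  %.0f%%%n", relativeSigma * 100);
    }
}
With only a few tens of atoms expected in such a volume, adding or removing a handful of them shifts the effective concentration by tens of percent, which is why threshold voltage variation from RDF grows as devices shrink.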
References
Semiconductor device fabrication | Random dopant fluctuation | [
"Materials_science"
] | 129 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
8,822,417 | https://en.wikipedia.org/wiki/Flatbed%20digital%20printer | Flatbed digital printers, also known as flatbed printers or flatbed UV printers, are printers characterized by a flat surface upon which a material is placed to be printed on. Flatbed printers are capable of printing on a wide variety of materials such as photographic paper, film, cloth, plastic, PVC, acrylic, glass, ceramic, metal, wood, and leather. Flatbed digital printers usually use UV-curable inks made of acrylic monomers that are then exposed to strong UV light to cure, or polymerize, them. This process allows for printing on a wide variety of surfaces such as wood or canvas, carpet, tile, and even glass. The adjustable printing bed makes it possible to print on surfaces ranging in thickness from a sheet of paper up to as much as several inches. Typically used for commercial applications (retail and event signage), flatbed printing is often a substitute for screen-printing. Since no printing plates or silkscreens must be produced, digital printing technology allows shorter runs of signs to be produced economically. Many of the high-end flatbed printers allow for roll-feed, allowing for unattended printing.
Environmentally, flatbed digital printing is based on a more sustainable system than its commercial predecessor, solvent printing, as it produces fewer waste cartridges and less indoor air pollution. The resolution of flatbed printers ranges from 72 DPI (dots per inch) to about 2400 DPI. One of the advantages of a flatbed printer is the versatility of printable materials, although printing is limited to flat materials and the machine occupies a lot of surface area.
"Hybrid" Flatbed Digital Printers
Although most flatbed printers are limited to printing on flat surfaces, some are capable of printing on cylindrical objects, such as bottles and cans, using rotary attachments that position the object and rotate it while the printhead applies ink. Flatbed printers have sometimes been used to print on small spherical objects such as ping pong balls; however, the print resolution tends to decrease around the edges of the printed image because the inkjets fire ink onto a surface that is inclined and farther away.
Flatbed printers can sometimes execute multiple passes on a surface to achieve a 3D embossing effect. This is done either with colored inks or with a clear varnish, which is used to create glossy finishes or highlights on the print.
"Hybrid" UV printers may also refer to printers capable of printing of a flatbed surface as well as roll-to-roll, which enables the use of flexible substrates stored in rolls.
See also
Digital printing
Variable Data Printing
Digital image processing
Graphical output device
References
Printing devices | Flatbed digital printer | [
"Physics",
"Technology"
] | 531 | [
"Physical systems",
"Machines",
"Printing devices"
] |
8,824,490 | https://en.wikipedia.org/wiki/2007%20Russia%E2%80%93Belarus%20energy%20dispute | The Russia–Belarus energy dispute began when Russian state-owned gas supplier Gazprom demanded an increase in gas prices paid by Belarus, a country which has been closely allied with Moscow and forms a loose union state with Russia. It escalated on 8 January 2007, when the Russian state-owned pipeline company Transneft stopped pumping oil into the Druzhba pipeline which runs through Belarus because Belarus was siphoning the oil off the pipe without mutual agreement.
On 10 January, Transneft resumed oil exports through the pipeline after Belarus ended the tariff that sparked the shutdown, despite differing messages from the parties on the state of negotiations.
The Druzhba pipeline, the world's longest, supplies around 20% of Germany's oil. It also supplies oil to Poland, Ukraine, Slovakia, the Czech Republic, and Hungary.
Background
For a long time, the gas price for most of the former USSR republics was significantly lower than for the Western European countries. In 2006 Belarus paid only $46 per 1000 m³, a fraction compared to $290 per 1000 m³ paid by Germany. The annual Russian subsidies to the Belarusian economy were around $4 billion, as Russian president Vladimir Putin said on 9 January 2007. In 2006 Russia announced a higher price for 2007. After Alexander Lukashenko, President of Belarus, rejected this price change, and without a new treaty, Gazprom threatened to cut gas supplies to Belarus from 10:00 MSK on 1 January 2007. Both sides finally agreed on the following terms:
Russian gas to be sold to Belarus for $100 per 1000 m³ (compared to Gazprom's original request of $200 per 1000 m³)
Belarus to sell Gazprom 50% of its national gas supplier Beltransgaz for the maximal price of $2.5 billion
Gas prices for Belarus to gradually rise to the European market price by 2011
Belarus's transit fees for Russian gas to increase by around 70%
Another part of the energy dispute is the dispute over oil. In 1995, Russia and Belarus agreed that no customs duties would be imposed on oil exported to Belarus. In exchange, the revenues from this oil processed in Belarus would be shared, with 15% going to Belarus and 85% to Russia. In 2001, Belarus unilaterally canceled this agreement while Russia continued its duty-free exports. Lukashenko's state kept all the revenues, and many Russian oil companies moved their processing capacities to Belarus. Under this arrangement, Russia also lost billions of dollars annually. Belarus imposed a tariff of US$45 per ton of oil flowing through the Druzhba pipeline, prompting Russia to claim that the move was illegal and to threaten retaliation, since it contradicted bilateral trade agreements and worldwide practice, under which only imported or exported goods are subject to tariffs, not goods in transit. Russia refused to pay the newly imposed Belarusian tariffs.
In compensation, Belarus began siphoning off oil from the pipeline. According to Semyon Vainshtok, the head of Russia's pipeline monopoly Transneft, Belarus had siphoned off 79,900 metric tons of oil since 6 January. Vainshtok said this was illegal and the move was made "without warning anyone." In response, Russia stopped oil transport on 8 January.
A Belarusian team led by Vice-Premier Andrei Kobyakov flew to Moscow on 9 January to pursue a solution but initially reported that they had not been able to start negotiations.
On 10 January, the Belarusian government lifted the tariff, and Russia agreed to start negotiations. The oil flow was resumed at 05:30 GMT on 11 January. In the wake of the dispute, Gazprom acquired 50% stake in the Belarusian gas pipeline operator Beltransgaz for US$2.5 Billion.
August 2007 developments
Following the alleged violation of previous agreements and the failure of negotiations, on 1 August 2007 Gazprom announced that it would cut gas supplies to Belarus by 45% from 3 August over a $456 million debt. Talks continued and Belarus asked for more time to pay. Although the revived dispute was not expected to hit supplies to Europe, the European Commission was said to view the situation 'very seriously'.
Following overnight negotiations in Moscow, on 3 August, $190 million of the debt was repaid, and Belarus was given a further week to pay the remainder or face a 30% cut in supplies.
By 8 August, Belarus had fully paid its $460 million debt for Russian natural gas supplies, ending the dispute between the country and Gazprom [RTS: GAZP].
Related disputes
The situation is reminiscent of other recent price tensions between Russia, one of the world's energy superpowers, and other states since the start of 2005. These have resulted in increases in the prices paid for gas by Moldova (now paying US$170 per 1,000 cubic meters), Georgia (US$235 per 1,000 cubic meters) and Ukraine (following the 2006 Russia-Ukraine gas dispute, which also resulted in a 4-day cut to European gas supplies), with Azerbaijan having recently stopped oil exports to Russia.
On 29 July 2006 Russia shut down oil export to Mažeikių oil refinery in Lithuania after an oil spill on the Druzhba pipeline system occurred in Russia's Bryansk oblast, near the point where a line to Belarus and Lithuania branches off the main export pipeline. Transneft said it would need one year and nine months to repair the damaged section. Although Russia cited technical reasons for stopping oil deliveries to Lithuania, Lithuania claims that the oil supply was stopped because Lithuania sold the Mažeikių refinery to Polish company PKN Orlen.
Impact
All IEA member countries that are net oil importers have a legal obligation to hold emergency oil reserves equivalent to at least 90 days of the previous year's net oil imports. Furthermore, under EU regulations there is an obligation to hold reserves equivalent to 90 days of consumption, so unlike the gas dispute with Ukraine in 2006, consumers were not affected. Poland had an 80-day oil reserve. The Czech Republic reported drawing oil from its 100-day reserves. Had the dispute been prolonged, it is likely that alternative supplies would have been secured. International oil prices were not significantly affected.
The involved countries have, however, expressed concerns about the reliability of the Russia-Belarus oil pipeline and Belarus as an oil middleman supplier.
The events have also provoked renewed discussion on the government policy of phasing out nuclear power in Germany.
Reaction
The European Union has demanded an "urgent and detailed" explanation, according to a spokesman for Energy Commissioner Andris Piebalgs.
Piotr Naimski, Poland's deputy economics minister who is responsible for energy security, stated "This shows once again that arguments among various countries of the former Soviet Union between suppliers and transit countries mean that these deliveries are unreliable from our perspective."
German Economy Minister Michael Glos stated that the dispute showed that "one-side dependencies must not be allowed to develop."
Following a meeting with European Commission President José Manuel Barroso in Berlin, German Chancellor Angela Merkel condemned the action, stating "It is not acceptable when there are no consultations about such actions". Commenting on the importance of trust in energy security, she said "That always destroys trust and no trusting, undisturbed cooperation can be built on that." Merkel continued by saying "We will certainly say to our Russian partners but also to Belarus that such consultations are the minimum when there are problems, and I think that that must become normality, as it would be within the European Union." Barroso said that "while there is no immediate risk to supplies, it is not acceptable" for such actions to be undertaken without prior consultation.
See also
Foreign relations of Russia towards Belarus
2004 Russia–Belarus gas dispute
Russia-Ukraine gas dispute
Energy crisis
Nord Stream 1, an undersea pipeline constructed to bypass transit countries.
Energy Charter Treaty, including principles for energy trade and transit which Russia is refusing to ratify.
Energy policy of Russia
Energy policy of the European Union
Milk War
References
External links
Transneft, Russian state owned pipeline monopoly.
Beltransgaz, Belarusian gas pipeline company.
Belarus–Russia relations
Energy in Belarus
Energy policy
Energy policy of Russia
Russia-Belarus Energy Dispute, 2007
Natural resource conflicts
Political history of Belarus
Political history of Russia
Price disputes involving Gazprom
Transneft | 2007 Russia–Belarus energy dispute | [
"Environmental_science"
] | 1,717 | [
"Environmental social science",
"Energy policy"
] |
8,824,828 | https://en.wikipedia.org/wiki/Huarizo | A huarizo, also known as a llapaca, is a hybrid cross between a male llama and a female alpaca. Misti is a similar hybrid; it is a cross between a male alpaca and a female llama. The most common hybrid between South American camelids, huarizo tend to be much smaller than llamas, with their fibre being longer. Huarizo are sterile, but recent genetic research conducted at the University of Minnesota Rochester suggests that it may be possible to preserve fertility with minimal genetic modification.
However, many owners have reported that their Huarizos and Mistis are fertile.
Other camelidae hybridizations
Camel hybrids
Cama, a hybrid with camel and llama.
Llamanaco, a cross between a guanaco and a llama, has been reported in the wild in the Magallanes Region of Chile.
See also
Mule and Hinny – two equine cross-species between a horse and a donkey which are also unable to reproduce.
References
Camelid hybrids
Intergeneric hybrids | Huarizo | [
"Biology"
] | 213 | [
"Intergeneric hybrids",
"Hybrid organisms"
] |
8,825,092 | https://en.wikipedia.org/wiki/Compression%20member | A compression member is a structural element that primarily resists forces which act to shorten or compress the member along its length. Commonly found in engineering and architectural structures, such as columns, struts, and braces, compression members are designed to withstand loads that push or press on them without buckling or failing. The behavior and strength of a compression member depend on factors such as material properties, cross-sectional shape, length, and the type of loading applied. These components are critical in frameworks like bridges, buildings, and towers, where they provide stability and support against vertical and lateral forces. In buildings, posts and columns are almost always compression members, as are the top chords of trusses in bridges.
Design
For a compression member, such as a column, the principal stress primarily arises from axial forces, which act along a single axis, typically through the centroid of the member cross section. As detailed in the article on buckling, the slenderness of a compression member, defined as the ratio of its effective length to its radius of gyration (λ = L_e/r), has a critical role in determining its strength and behavior under axial loading:
The load capacity of low slenderness (stocky) members is governed by their material compressive strength;
Both material strength and buckling influence the load capacity of intermediate members; and
The strength of slender (long) members is dominated by their buckling load.
Formulas for calculating the buckling strength of slender members were first developed by Euler, while equations like the Perry-Robertson formula are commonly applied to describe the behavior of intermediate members. The Eurocodes published by the Comité Européen de Normalisation provide guidance of the calculation of strength for compression members in concrete, masonry, steel and timber. There are other codes for steel compression members only.
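As a rough illustration of the relationships above, the following sketch computes the Euler critical load for a slender pin-ended column and its slenderness ratio. The section properties and length used are illustrative assumptions rather than values taken from any design code or from this article.

```python
import math

def euler_buckling_load(E, I, L_eff):
    """Euler critical load for a slender, pin-ended column:
    P_cr = pi^2 * E * I / L_eff^2  (E in Pa, I in m^4, L_eff in m)."""
    return math.pi ** 2 * E * I / L_eff ** 2

def slenderness_ratio(L_eff, I, A):
    """Slenderness = effective length / radius of gyration, r = sqrt(I / A)."""
    return L_eff / math.sqrt(I / A)

# Hypothetical 3 m steel column: E = 200 GPa, I = 8e-6 m^4, A = 5e-3 m^2.
print(euler_buckling_load(200e9, 8e-6, 3.0))   # ~1.75e6 N
print(slenderness_ratio(3.0, 8e-6, 5e-3))      # ~75
```

For stocky or intermediate members the Euler value overestimates the real capacity, which is why design practice applies formulas such as Perry-Robertson instead.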
See also
Arch
Brown truss
List of structural elements
Strut
Notes
External links
Columns and other compression members
Bicycle compression members
Numerical load numbers for reinforced concrete compression members
Columns and entablature | Compression member | [
"Technology"
] | 404 | [
"Structural system",
"Columns and entablature"
] |
8,826,171 | https://en.wikipedia.org/wiki/List%20of%20substances%20used%20in%20rituals | This page lists substances used in ritual context.
Psychoactive substances may be illegal to obtain, while non-psychoactive substances are legal, generally.
Psychoactive use
This sections lists entheogens; drugs that are consumed for their intoxicating effect in combination with spiritual practice.
Hallucinogens used in rituals
This is a list of species and genera that are used as entheogens or are used in an entheogenic concoction (such as ayahuasca). For ritualistic use they may be classified as hallucinogens. The active principles and historical significance of each are also listed to illustrate the requirements necessary to be categorized as an entheogen. The psychoactive substances are usually classified as soft drugs in terms of drug harmfulness.
Animal
Mushroom
Plant
Chemicals
Many man-made chemicals with little human history have been recognized to catalyze intense spiritual experiences, and many synthetic entheogens are simply slight modifications of their naturally occurring counterparts. Some synthetic substances like 4-AcO-DMT are thought to be prodrugs that metabolize into psychoactive substances that have been used as entheogens. While synthetic DMT and mescaline are reported to have identical entheogenic qualities as extracted or plant-based sources, the experience may wildly vary due to the lack of numerous psychoactive alkaloids that constitute the material. This is similar to how isolated THC produces very different effects than an extract that retains the many cannabinoids of the plant such as cannabidiol and cannabinol. A pharmaceutical version of the entheogenic brew ayahuasca is called Pharmahuasca.
Prodrugs
This page lists non-psychedelic psychoactive substances which are consumed in ritual contexts for their consciousness-altering effects. Non-psychoactive consumption like symbolic ingestion of psychoactive substances is not mentioned here.
Non-hallucinogenic substances used in rituals
This is a list of psychoactive substances which are consumed in ritual contexts for their consciousness-altering effects. Some of these drugs are classified as hard drugs in terms of drug harmfulness.
Plant
The plant parts are listed to prevent accidents. For example, kava roots should always be used because the leaves of the plant are known to cause hepatotoxicity and death.
Alcohol
Chemicals
Poly drug use
Alternative medicine
Animal
Complements to psychoactive substances
Sober use
Non-psychoactive substances
Psychoactive substances
Shown in the table below, Aztec tobacco, morning glories, and Syrian rue (also listed in the table), and cacao beans are (mildly) psychoactive when consumed.
Psychedelic substances used in sober rituals
Flora
Non-psychedelic substances used in sober rituals
Alcohol
See also
Entheogenic drugs and the archaeological record
Ethnobotany
God in a Pill?
List of Acacia species known to contain psychoactive alkaloids
List of plants used for smoking
N,N-Dimethyltryptamine
Psilocybin mushrooms
Psychoactive cacti
Psychoactive plant
References
Works cited
Internet Archive EPub file – freely downloadable (37Mb)
Databases
USDA online database compiled from:
Further reading
Biological sources of psychoactive drugs
Hallucinations
Rituals
Comparison of psychoactive substances
Religious rituals | List of substances used in rituals | [
"Chemistry"
] | 651 | [
"Drug-related lists"
] |
8,826,196 | https://en.wikipedia.org/wiki/Boudoir | A boudoir is a woman's private sitting room or salon in a furnished residence, usually between the dining room and the bedroom, but the term can also refer to a woman's private bedroom. The term derives from the French verb bouder (to sulk or pout) or adjective boudeur (sulking)—the room was originally a space to withdraw to.
Architecture
A cognate of the English "bower", historically, the boudoir formed part of the private suite of rooms of a "lady" or upper-class woman, for bathing and dressing, adjacent to her bedchamber, being the female equivalent of the male cabinet. In later periods, the boudoir was used as a private drawing room, and was used for other activities, such as embroidery or spending time with one's husband.
English-language usage varies between countries, and is now largely historical. In the United Kingdom, in the period when the term was most often used (Victorian era and early 20th century), a boudoir was a lady's evening sitting room, and was separate from her morning room, and her dressing room. As this multiplicity of rooms with overlapping functions suggests, boudoirs were generally found only in grand houses. In the United States, in the same era, boudoir was an alternative term for dressing room, favored by those who felt that French terms conferred more prestige.
In Caribbean English, a boudoir is the front room of the house where women entertain family and friends.
Furnishing
Recently, the term boudoir has come to denote a style of furnishing for the bedroom that is traditionally described as ornate or busy. The plethora of links available on the Internet to furnishing sites using the term boudoir tend to focus on Renaissance and French inspired bedroom styles. In recent times, they have also been used to describe the 'country cottage' style with whitewashed-style walls, large and heavy bed furniture, and deep bedding.
Gallery
See also
Harem
Ladyfinger (biscuit), which translates as boudoirs in French
References
Rooms
Women's quarters | Boudoir | [
"Engineering"
] | 441 | [
"Rooms",
"Architecture"
] |
8,827,849 | https://en.wikipedia.org/wiki/Assyrian%20eclipse | The Assyrian eclipse, also known as the Bur-Sagale eclipse, was a solar eclipse recorded in Assyrian eponym lists that most likely dates to the tenth year of the reign of king Ashur-dan III. The eclipse is identified with the one that occurred on 15 June 763 BC in the proleptic Julian calendar.
Historical account
The entry from Assyrian records is short and reads:
"[year of] Bur-Sagale of Guzana. Revolt in the city of Assur. In the month Simanu an eclipse of the sun took place."
The phrase used – shamash ("the sun") akallu ("bent", "twisted", "crooked", "distorted", "obscured") – has been interpreted as a reference to a solar eclipse since the first decipherment of cuneiform in the mid 19th century.
The name Bur-Sagale (also rendered Bur-Saggile, Pur-Sagale or Par-Sagale) is the name of the limmu official in the eponymous year.
Modern research
In 1867, Henry Rawlinson identified the near-total eclipse of 15 June 763 BC as the most likely candidate (the month Simanu corresponding to the May/June lunation), visible in northern Assyria just before noon.
This date has been widely accepted ever since; the identification is also substantiated by other astronomical observations from the same period.
This record is one of the crucial pieces of evidence that anchor the absolute chronology of the ancient Near East for the Assyrian period.
Role in the Bible
The Bur-Sagale eclipse occurred over the Assyrian capital city of Nineveh in the middle of the reign of Jeroboam II, who ruled Israel from 786 to 746 B.C. According to 2 Kings 14:25, the prophet Jonah lived and prophesied in Jeroboam's reign. The biblical scholar Donald Wiseman has speculated that the eclipse took place around when Jonah arrived in Nineveh and urged the people to repent, otherwise the city would be destroyed. This would explain the dramatic repentance of the people of Nineveh as described in the Book of Jonah. Ancient cultures, including Assyria, viewed eclipses as omens of imminent destruction, and the empire was in chaos at this time, struggling with revolts, famines and two separate outbreaks of plague.
This eclipse is also mentioned by the prophet Amos. Amos was also preaching during the reign of Jeroboam II and refers to the eclipse in Amos 5:8 & 8:5,9. In these passages Amos uses the eclipse as a prophecy of doom, and exhorts Judeans to repentance.
See also
Chronology of the ancient Near East
Akitu
Historical astronomy
Eclipse of Thales
Mursili's eclipse
References
External links
Path map of eclipses 780–761 BCE (NASA) – Includes total eclipse of June 15, 763 BC (labeled -0762 June 15)
Path map of eclipses 800–781 BCE (NASA) – includes annular eclipse of June 24, 791 BC (labeled -0790 June 24)
Five Millennium (-1999 to +3000) Canon of Solar Eclipses Database – maps the visibility of the total solar eclipse of June 15, 763 BC.
Five Millennium (-1999 to +3000) Canon of Solar Eclipses Database – maps the visibility of the annular solar eclipse of June 24, 791 BC.
Chronology
Solar eclipses
Ancient astronomy
Eclipse
760s BC
Nineveh | Assyrian eclipse | [
"Physics",
"Astronomy"
] | 731 | [
"Chronology",
"Physical quantities",
"Time",
"History of astronomy",
"Spacetime",
"Ancient astronomy"
] |
8,828,258 | https://en.wikipedia.org/wiki/MOS-controlled%20thyristor | An MOS-controlled thyristor (MCT) is a voltage-controlled fully controllable thyristor, controlled by MOSFETs (metal–oxide–semiconductor field-effect transistors). It was invented by V.A.K. Temple in 1984, and was principally similar to the earlier insulated-gate bipolar transistor (IGBT). MCTs are similar in operation to GTO thyristors, but have voltage controlled insulated gates. They have two MOSFETs of opposite conductivity types in their equivalent circuits. One is responsible for turn-on and the other for turn-off. A thyristor with only one MOSFET in its equivalent circuit, which can only be turned on (like normal SCRs), is called an MOS-gated thyristor.
Positive voltage on the gate terminal with respect to the cathode turns the thyristor to the on state.
Negative voltage on the gate terminal with respect to the anode, which is close to cathode voltage during the on state, turns the thyristor to the off state.
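As a rough illustration of the gate behaviour described above, the sketch below models the MCT as a latch whose state changes only on a sufficiently positive gate-to-cathode voltage (turn-on) or a sufficiently negative gate-to-anode voltage (turn-off). The threshold value and the function itself are illustrative assumptions, not device specifications.

```python
def mct_state(is_on, v_gate_cathode, v_gate_anode, threshold=5.0):
    """Toy latching model of MCT gate control (illustrative only)."""
    if v_gate_cathode >= threshold:
        return True       # the on-FET triggers the thyristor
    if v_gate_anode <= -threshold:
        return False      # the off-FET diverts current and unlatches it
    return is_on          # no gate command: the device stays latched

state = False
for vgk, vga in [(8, 0), (0, 0), (0, -8), (0, 0)]:
    state = mct_state(state, vgk, vga)
    print(state)          # True, True, False, False
```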
MCTs were commercialized only briefly.
External links
Field-effect-controlled thyristor
"MOS GTO—A Turn Off Thyristor with MOS-Controlled Emitter Shorts," IEDM 85, M. Stoisiek and H. Strack, Siemens AG, Munich FRG pp. 158–161.
"MOS-Controlled Thyristors—A New Class of Power Devices", IEEE Transactions on Electron Devices, Vol. ED-33, No. 10, Oct. 1986, Victor A. K. Temple, pp. 1609 through 1618.
References
Solid state switches
Power electronics
MOSFETs | MOS-controlled thyristor | [
"Engineering"
] | 360 | [
"Electronic engineering",
"Power electronics"
] |
8,828,449 | https://en.wikipedia.org/wiki/Public%20health%20laboratory | Public health laboratories (PHLs) or National Public Health Laboratories (NPHL) are governmental reference laboratories that protect the public against diseases and other health hazards. The 2005 International Health Regulations came into force in June 2007, with 196 binding countries that recognised that certain public health incidents, extending beyond disease, ought to be designated as a Public Health Emergency of International Concern (PHEIC), as they pose a significant global threat. The PHLs serve as national hazard detection centres, and forward these concerns to the World Health Organization.
International accreditation
In 2007, Haim Hacham et al. published a paper addressing the need for and the process of international standardised accreditation for laboratory proficiency in Israel. With similar efforts, both the Japan Accreditation Board for Conformity Assessment (JAB) and the European Communities Confederation of Clinical Chemistry and Laboratory Medicine (EC4) have validated and convened ISO 15189 Medical laboratories — Requirements for quality and competence, respectively.
In 2006, Spitzenberger and Edelhäuser expressed concerns that ISO accreditation may include obstacles arising from new emerging medical devices and the new approach of assessment; in so doing, they indicate the time dependence of standards.
Africa
WHO-Afro HIV/AIDS Laboratory Network
East African Laboratory Network
African Society for Laboratory Medicine
National Public Health Laboratory (Sudan)
Canada
Canadian Public Health Laboratory Network
Europe
European Union Reference Laboratories cf. Commission Regulation (EC) No 776/2006 and Commission Regulation (EC) No 882/2004
EpiSouth Network
United Kingdom
The Public Health Laboratory Service (PHLS) was established as part of the National Health Service in 1946. An Emergency Public Health Laboratory Service was established in 1940 as a response to the threat of bacteriological warfare. There was originally a central laboratory at Colindale and a network of regional and local laboratories. By 1955 there were about 1000 staff. These laboratories were primarily preventive with an epidemiological focus. They were, however, in some places located with hospital laboratories which had a diagnostic focus.
The PHLS was replaced by the Health Protection Agency in 2003; the HPA was disbanded and in its stead was constituted Public Health England, which later became the UK Health Security Agency in 2021.
United States
United States laboratory networks and organizations
Association of Public Health Laboratories
Laboratory Response Network (CDC)
PulseNet (CDC)
Integrated Consortium of Laboratory Networks
Food Emergency Response Network
Environmental Laboratory Response Network
Council to Improve Foodborne Outbreak Response
US State Public Health Laboratories
US City and County Public Health Laboratories
US State Environmental and Agriculture Laboratories
Other international laboratory networks
WHO Global Influenza Surveillance and Response System
WHO H5 Reference Laboratories
WHO Emerging and Dangerous Pathogens Laboratory Network
See also
Association of Public Health Laboratories
ISO 9000
ISO 15189
ISO/IEC 17025
References
Clinical pathology
Laboratory types
Public health organizations
Public health emergencies of international concern | Public health laboratory | [
"Chemistry"
] | 563 | [
"Laboratory types"
] |
8,828,688 | https://en.wikipedia.org/wiki/Fluid%20animation | Fluid animation refers to computer graphics techniques for generating realistic animations of fluids such as water and smoke. Fluid animations are typically focused on emulating the qualitative visual behavior of a fluid, with less emphasis placed on rigorously correct physical results, although they often still rely on approximate solutions to the Euler equations or Navier–Stokes equations that govern real fluid physics. Fluid animation can be performed with different levels of complexity, ranging from time-consuming, high-quality animations for films, or visual effects, to simple and fast animations for real-time animations like computer games.
Relationship to computational fluid dynamics
Fluid animation differs from computational fluid dynamics (CFD) in that fluid animation is used primarily for visual effects, whereas computational fluid dynamics is used to study the behavior of fluids in a scientifically rigorous way.
Development
The development of fluid animation techniques based on the Navier–Stokes equations began in 1996, when Nick Foster and Dimitris Metaxas implemented solutions to 3D Navier-Stokes equations in a computer graphics context, basing their work on a scientific CFD paper by Harlow and Welch from 1965. Up to that point, a variety of simpler methods had primarily been used, including ad-hoc particle systems, lower dimensional techniques such as height fields, and semi-random turbulent noise fields.
In 1999, Jos Stam published the "Stable Fluids" method, which exploited a semi-Lagrangian advection technique and implicit integration of viscosity to provide unconditionally stable behaviour. This allowed for much larger time steps and therefore faster simulations. This general technique was extended by Ronald Fedkiw and co-authors to handle more realistic smoke and fire, as well as complex 3D water simulations using variants of the level-set method.
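A minimal sketch of the semi-Lagrangian advection step at the heart of the "Stable Fluids" approach is shown below: each grid value is traced backwards along the velocity field and the advected quantity is sampled at the departure point with bilinear interpolation, which keeps the step stable for any time step. Grid layout, boundary handling and the rest of the solver (pressure projection, diffusion) are omitted; this is an illustrative sketch, not Stam's original code.

```python
import numpy as np

def advect(field, u, v, dt, dx):
    """One semi-Lagrangian advection step of a scalar field on a 2D grid."""
    n, m = field.shape
    j, i = np.meshgrid(np.arange(m), np.arange(n))
    # Backtrace each cell centre through the velocity field (grid units).
    x = np.clip(i - dt * u / dx, 0, n - 1.001)
    y = np.clip(j - dt * v / dx, 0, m - 1.001)
    i0, j0 = x.astype(int), y.astype(int)
    s, t = x - i0, y - j0
    # Bilinear interpolation at the departure point.
    return ((1 - s) * (1 - t) * field[i0, j0]
            + s * (1 - t) * field[i0 + 1, j0]
            + (1 - s) * t * field[i0, j0 + 1]
            + s * t * field[i0 + 1, j0 + 1])
```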
Some notable academic researchers in this area include Jerry Tessendorf, James F. O'Brien, Ron Fedkiw, Mark Carlson, Greg Turk, Robert Bridson, Ken Museth, and Jos Stam.
Software
Many 3D computer graphics programs implement fluid animation techniques. RealFlow is a standalone commercial package that has been used to produce visual effects in movies, television shows, commercials, and games. RealFlow implements a fluid-implicit particle (FLIP) solver, an extension of the particle-in-cell method that combines a grid with particles and allows for advanced features such as foam and spray. Maya and Houdini are two other commercial 3D computer graphics programs that allow for fluid animation.
Blender is an open-source 3D computer graphics program that utilized a particle-based Lattice Boltzmann method for animating fluids until the integration of the open-source mantaflow project in 2020 with a wide range of Navier-Stokes solver variants.
See also
RealFlow
Maya
Houdini
Physically based animation
References
External links
RealFlow Homepage
Blender Homepage
Berkeley Computer Animation Homepage
3D computer graphics
Computational fluid dynamics | Fluid animation | [
"Physics",
"Chemistry"
] | 592 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
8,828,948 | https://en.wikipedia.org/wiki/Heterochromatin%20protein%201 | The family of heterochromatin protein 1 (HP1) ("Chromobox Homolog", CBX) consists of highly conserved proteins, which have important functions in the cell nucleus. These functions include gene repression by heterochromatin formation, transcriptional activation, regulation of binding of cohesion complexes to centromeres, sequestration of genes to the nuclear periphery, transcriptional arrest, maintenance of heterochromatin integrity, gene repression at the single nucleosome level, gene repression by heterochromatization of euchromatin, and DNA repair. HP1 proteins are fundamental units of heterochromatin packaging that are enriched at the centromeres and telomeres of nearly all eukaryotic chromosomes with the notable exception of budding yeast, in which a yeast-specific silencing complex of SIR (silent information regulatory) proteins serve a similar function. Members of the HP1 family are characterized by an N-terminal chromodomain and a C-terminal chromoshadow domain, separated by a hinge region. HP1 is also found at some euchromatic sites, where its binding can correlate with either gene repression or gene activation. HP1 was originally discovered by Tharappel C James and Sarah Elgin in 1986 as a factor in the phenomenon known as position effect variegation in Drosophila melanogaster.
Paralogs and orthologs
Three different paralogs of HP1 are found in Drosophila melanogaster, HP1a, HP1b and HP1c. Subsequently orthologs of HP1 were also discovered in S. pombe (Swi6), Xenopus (Xhp1α and Xhp1γ), Chicken (CHCB1, CHCB2 and CHCB3), Tetrahymena (Pdd1p) and many other metazoans. In mammals, there are three paralogs: HP1α, HP1β and HP1γ. In Arabidopsis thaliana (a plant), there is one structural homolog: Like Heterochromatin Protein 1 (LHP1), also known as Terminal Flower 2 (TFL2).
HP1β in mammals
HP1β interacts with the histone methyltransferase (HMTase) Suv(3-9)h1 and is a component of both pericentric and telomeric heterochromatin. HP1β is a dosage-dependent modifier of pericentric heterochromatin-induced silencing and silencing is thought to involve a dynamic association of the HP1β chromodomain with the tri-methylated histone H3 K9me3. The binding of the K9me3-modified H3 N-terminal tail by the chromodomain is a defining feature of HP1 proteins.
Interacting proteins
HP1 interacts with numerous other proteins/molecules (in addition to H3K9me3) with different cellular functions in different organisms. Some of these HP1 interacting partners are: histone H1, histone H3, histone H4, histone methyltransferase, DNA methyltransferase, methyl CpG binding protein MeCP2, and the origin recognition complex protein ORC2.
Binding affinity and cooperativity
HP1 has a versatile structure with three main components: a chromodomain, a chromoshadow domain, and a hinge domain. The chromodomain is responsible for the specific binding affinity of HP1 to histone H3 when tri-methylated at the 9th lysine residue. HP1 binding affinity to nucleosomes containing histone H3 methylated at lysine K9 is significantly higher than to those with unmethylated lysine K9. HP1 binds nucleosomes as a dimer and in principle can form multimeric complexes. Some studies have interpreted HP1 binding in terms of nearest-neighbor cooperative binding. However, the analysis of available data on HP1 binding to nucleosomal arrays in vitro shows that experimental HP1 binding isotherms can be explained by a simple model without cooperative interactions between neighboring HP1 dimers. Nevertheless, favorable interactions between nearest neighbors of HP1 lead to limited spreading of HP1 and its marks along the nucleosome chain in vivo.
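To make the distinction between cooperative and non-cooperative binding concrete, the sketch below evaluates the simplest non-cooperative (Langmuir-type) isotherm, in which the occupancy of nucleosome sites depends only on the free HP1 dimer concentration and a single dissociation constant. The numbers are illustrative placeholders, not measured affinities from the studies cited above.

```python
import numpy as np

def langmuir_occupancy(hp1_conc, kd):
    """Fraction of nucleosome sites bound, assuming independent sites:
    theta = [HP1] / (Kd + [HP1]).  Units of concentration are arbitrary."""
    hp1_conc = np.asarray(hp1_conc, dtype=float)
    return hp1_conc / (kd + hp1_conc)

concentrations = np.logspace(-2, 2, 9)       # 0.01x to 100x of Kd
print(langmuir_occupancy(concentrations, kd=1.0))
```

A cooperative model would replace this single-site expression with one that couples the occupancy of neighboring sites; the point made above is that the in vitro array data do not require that extra coupling.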
The binding affinity of the HP1 chromodomain has also been implicated in regulation of alternative splicing. HP1 can act as both an enhancer and silencer of splicing alternative exons. The exact role it plays in regulation varies by gene and is dependent on the methylation patterns within the gene body. In humans, the role of HP1 on splicing has been characterized for alternative splicing of the EDA exon from the fibronectin gene. In this pathway HP1 acts as a mediator protein for repression of alternative splicing of the EDA exon. When the chromatin within the gene body is not methylated, HP1 does not bind and the EDA exon is transcribed. When the chromatin is methylated, HP1 binds the chromatin and recruits the splicing factor SRSF3 which binds HP1 and splices the EDA exon from the mature transcript. In this mechanism HP1 recognizes the H3K9me3 methylated chromatin and recruits a splicing factor to alternatively splice the mRNA, thereby excluding the EDA exon from the mature transcript.
Role in DNA repair
All HP1 isoforms (HP1-alpha, HP1-beta, and HP1-gamma) are recruited to DNA at sites of UV-induced damages, at oxidative damages and at DNA breaks. The HP1 protein isoforms are required for DNA repair of these damages. The presence of the HP1 protein isoforms at DNA damages assists with the recruitment of other proteins involved in subsequent DNA repair pathways. The recruitment of the HP1 isoforms to DNA damage is rapid, with half maximum recruitment (t1/2) by 180 seconds in response to UV damage, and a t1/2 of 85 seconds in response to double-strand breaks. This is a bit slower than the recruitment of the very earliest proteins recruited to sites of DNA damage, though HP1 recruitment is still one of the very early steps in DNA repair. Other earlier proteins may be recruited with a t1/2 of 40 seconds for UV damage and a t1/2 of about 1 second in response to double-strand breaks (see DNA damage response).
See also
Epigenetics
nucleosome
Heterochromatin
References
Further reading
Review
Transcription factors
Epigenetics | Heterochromatin protein 1 | [
"Chemistry",
"Biology"
] | 1,407 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
8,829,251 | https://en.wikipedia.org/wiki/Bryant%20Electric%20Company | The Bryant Electric Company was a manufacturer of wiring devices, electrical components, and switches founded in 1888 in Bridgeport, Connecticut. It grew to become for a time both the world's largest plant devoted to the manufacture of wiring devices and Bridgeport's largest employer and was involved in a number of notable strikes, before being closed in 1988 and having its remaining interests sold to Hubbell in 1991.
Founding and growth
Bryant was founded by Waldo Calvin Bryant in 1888 (incorporated 1889) in Bridgeport, Connecticut, with seven employees working in a loft on John Street in Bridgeport. Waldo Bryant and others at Bryant invented and patented a number of switch and electrical component designs, including "the first push-pull switch". Although responsible for more than 500 patents by 1935, Bryant's most significant contribution to the wiring devices industry was the idea of standardization. For example, in 1888 there were eight different types of electrical light bases. Bryant led the industry to accept standardized devices.
Bryant grew quickly and, in 1890, acquired the Standard Electric Time Company and Empire China Works. In 1891, Bryant relocated to a former school building owned by P. T. Barnum off State Street and, by 1905, employed 700 people. Perkins Electric Switch Company was acquired in 1899, with the employees and plant relocating to Bridgeport. Waldo Bryant needed more capital for expansion and sold the majority interest to Westinghouse Electric in 1901, though he continued to run the company as the Bryant Electric subsidiary of Westinghouse until 1927. One reason for downplaying the Westinghouse ownership was to keep Bryant distributors who had exclusive franchises to sell products of Westinghouse's competitors from dropping the Bryant line.
For a time, Bryant was Bridgeport's largest employer and, by 1912, its plant in Bridgeport's West End neighborhood was the largest in the world "devoted exclusively to the manufacture of wiring devices". As electrical components began to be made with plastic, Bryant acquired Hemco Plastics Company in 1928. By that year, Bryant was selling over 4,000 different products. By 1938, the plant had grown further and employed 1,537 people, a figure that increased to 1,600 by 1946.
Labor relations
At the time of Bryant's founding and rapid growth Bridgeport's West End was a dense, congested working-class neighborhood and a large population of mostly Hungarian immigrants, as well as Swedes, Slovenians and French Canadians, lived to the south of the industrial zone where Bryant was located. Subsequently, a large number of Hungarians were employed by the company in its early days. In 1944, in an effort to maintain good relations with its Hungarian employees, Bryant transferred a strip of land to the Hungarian Reformed Church to be used for construction of a basketball court, gymnasium and auditorium.
Workers at the Bryant plant were involved in a number of notable strikes over the years, including a 1915 strike when a number of Bridgeport companies were closed down amid demands for union representation and an eight-hour day and a 1955 United Electrical Workers strike over working conditions and pay.
1915 strike
While thousands took part in the Bridgeport strikes of 1915, few were actually union members and many were women who had been denied membership in craft unions. The Bryant Electric strike was started by five hundred women assemblers and a handful of men who walked off the job on August 20, marched downtown for a mass meeting at Eagle's Hall and elected a strike committee with equal representation for women. The company responded by shutting the plant and charging the strikers with "rioting." The remaining two thirds of the plant joined the strikers and after two weeks the company acceded to the workers' demands for an eight-hour day, overtime pay and union representation.
Deindustrialization and plant closing
As part of a larger process of regional deindustrialization, Westinghouse shut down the Bryant Electric plant in 1988 after transferring most of the work to non-union plants in North Carolina, Puerto Rico and the Dominican Republic. The closing exacerbated the neighborhood's already bleak economic situation. Westinghouse sold its remaining interests in Bryant Electric to Hubbell Incorporated in 1991, with the rebranded Distribution and Controls Business Unit going to Eaton in 1994. Bryant's 20-building site in Bridgeport's West End was torn down in 1996 to make way for a new industrial park.
See also
History of Bridgeport, Connecticut
Manufacturing in the U.S.
References
External links
Bridgeport Working: Voices from the 20th Century - Bridgeport Public Library
Keep Bryant In Bridgeport Photograph (1986)
Westinghouse Electric Company
Electrical engineering companies of the United States
Defunct manufacturing companies based in Connecticut
Companies based in Bridgeport, Connecticut
Electronics companies established in 1888
1888 establishments in Connecticut
History of labor relations in the United States
Labor disputes in the United States
Historic American Engineering Record in Connecticut
1991 mergers and acquisitions
Electrical equipment manufacturers | Bryant Electric Company | [
"Engineering"
] | 964 | [
"Electrical engineering organizations",
"Electrical equipment manufacturers"
] |
606,874 | https://en.wikipedia.org/wiki/Einstein%E2%80%93Cartan%20theory | In theoretical physics, the Einstein–Cartan theory, also known as the Einstein–Cartan–Sciama–Kibble theory, is a classical theory of gravitation, one of several alternatives to general relativity. The theory was first proposed by Élie Cartan in 1922.
Overview
Einstein–Cartan theory differs from general relativity in two ways:
(1) it is formulated within the framework of Riemann–Cartan geometry, which possesses a locally gauged Lorentz symmetry, while general relativity is formulated within the framework of Riemannian geometry, which does not;
(2) an additional set of equations are posed that relate torsion to spin.
This difference can be factored into
general relativity (Einstein–Hilbert) → general relativity (Palatini) → Einstein–Cartan
by first reformulating general relativity onto a Riemann–Cartan geometry, replacing the Einstein–Hilbert action over Riemannian geometry by the Palatini action over Riemann–Cartan geometry; and second, removing the zero torsion constraint from the Palatini action, which results in the additional set of equations for spin and torsion, as well as the addition of extra spin-related terms in the Einstein field equations themselves.
The theory of general relativity was originally formulated in the setting of Riemannian geometry by the Einstein–Hilbert action, out of which arise the Einstein field equations. At the time of its original formulation, there was no concept of Riemann–Cartan geometry. Nor was there a sufficient awareness of the concept of gauge symmetry to understand that Riemannian geometries do not possess the requisite structure to embody a locally gauged Lorentz symmetry, such as would be required to be able to express continuity equations and conservation laws for rotational and boost symmetries, or to describe spinors in curved spacetime geometries. The result of adding this infrastructure is a Riemann–Cartan geometry. In particular, to be able to describe spinors requires the inclusion of a spin structure, which suffices to produce such a geometry.
The chief difference between a Riemann–Cartan geometry and Riemannian geometry is that in the former, the affine connection is independent of the metric, while in the latter it is derived from the metric as the Levi-Civita connection, the difference between the two being referred to as the contorsion. In particular, the antisymmetric part of the connection (referred to as the torsion) is zero for Levi-Civita connections, as one of the defining conditions for such connections.
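As a compact summary of the paragraph above, the full connection of a Riemann–Cartan geometry can be written as the Levi-Civita connection plus the contorsion, with the torsion defined as the antisymmetric part of the connection. Index placement and signs vary between references; the contorsion formula below is one common convention and is stated here as an assumption rather than a form quoted from a specific source.

```latex
\Gamma^{\lambda}{}_{\mu\nu} \;=\; \mathring{\Gamma}^{\lambda}{}_{\mu\nu} \;+\; K^{\lambda}{}_{\mu\nu},
\qquad
T^{\lambda}{}_{\mu\nu} \;=\; \Gamma^{\lambda}{}_{\mu\nu} - \Gamma^{\lambda}{}_{\nu\mu},
\qquad
K^{\lambda}{}_{\mu\nu} \;=\; \tfrac{1}{2}\bigl(T^{\lambda}{}_{\mu\nu} + T_{\mu}{}^{\lambda}{}_{\nu} + T_{\nu}{}^{\lambda}{}_{\mu}\bigr)
```

where \mathring{\Gamma} denotes the Levi-Civita connection; setting the torsion to zero makes the contorsion vanish and reduces the connection to the Levi-Civita one.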
Because the contorsion can be expressed linearly in terms of the torsion, it is also possible to directly translate the Einstein–Hilbert action into a Riemann–Cartan geometry, the result being the Palatini action (see also Palatini variation). It is derived by rewriting the Einstein–Hilbert action in terms of the affine connection and then separately posing a constraint that forces both the torsion and contorsion to be zero, which thus forces the affine connection to be equal to the Levi-Civita connection. Because it is a direct translation of the action and field equations of general relativity, expressed in terms of the Levi-Civita connection, this may be regarded as the theory of general relativity, itself, transposed into the framework of Riemann–Cartan geometry.
Einstein–Cartan theory relaxes this condition and, correspondingly, relaxes general relativity's assumption that the affine connection have a vanishing antisymmetric part (torsion tensor). The action used is the same as the Palatini action, except that the constraint on the torsion is removed. This results in two differences from general relativity:
(1) the field equations are now expressed in terms of affine connection, rather than the Levi-Civita connection, and so have additional terms in Einstein's field equations involving the contorsion that are not present in the field equations derived from the Palatini formulation;
(2) an additional set of equations are now present which couple the torsion to the intrinsic angular momentum (spin) of matter, much in the same way in which the affine connection is coupled to the energy and momentum of matter.
In Einstein–Cartan theory, the torsion is now a variable in the principle of stationary action that is coupled to a curved spacetime formulation of spin (the spin tensor). These extra equations express the torsion linearly in terms of the spin tensor associated with the matter source, which entails that the torsion generally be non-zero inside matter.
A consequence of the linearity is that outside of matter there is zero torsion, so that the exterior geometry remains the same as what would be described in general relativity. The differences between Einstein–Cartan theory and general relativity (formulated either in terms of the Einstein–Hilbert action on Riemannian geometry or the Palatini action on Riemann–Cartan geometry) rest solely on what happens to the geometry inside matter sources. That is: "torsion does not propagate". Generalizations of the Einstein–Cartan action have been considered which allow for propagating torsion.
Because Riemann–Cartan geometries have Lorentz symmetry as a local gauge symmetry, it is possible to formulate the associated conservation laws. In particular, regarding the metric and torsion tensors as independent variables gives the correct generalization of the conservation law for the total (orbital plus intrinsic) angular momentum to the presence of the gravitational field.
History
The theory was first proposed by Élie Cartan in 1922 and expounded in the following few years. Albert Einstein became affiliated with the theory in 1928 during his unsuccessful attempt to match torsion to the electromagnetic field tensor as part of a unified field theory. This line of thought led him to the related but different theory of teleparallelism.
Dennis Sciama and Tom Kibble independently revisited the theory in the 1960s, and an important review was published in 1976.
Einstein–Cartan theory has been historically overshadowed by its torsion-free counterpart and other alternatives like Brans–Dicke theory because torsion seemed to add little predictive benefit at the expense of the tractability of its equations. Since the Einstein–Cartan theory is purely classical, it also does not fully address the issue of quantum gravity. In the Einstein–Cartan theory, the Dirac equation becomes nonlinear. Even though renowned physicists such as Steven Weinberg "never understood what is so important physically about the possibility of torsion in differential geometry", other physicists claim that theories with torsion are valuable.
The theory has indirectly influenced loop quantum gravity (and seems also to have influenced twistor theory).
Field equations
The Einstein field equations of general relativity can be derived by postulating the Einstein–Hilbert action to be the true action of spacetime and then varying that action with respect to the metric tensor.
The field equations of Einstein–Cartan theory come from exactly the same approach,
except that a general asymmetric affine connection is assumed rather than the symmetric Levi-Civita connection
(i.e., spacetime is assumed to have torsion in addition to curvature),
and then the metric and torsion are varied independently.
Let \mathcal{L}_\mathrm{M} represent the Lagrangian density of matter and \mathcal{L}_\mathrm{G} represent the Lagrangian density of the gravitational field. The Lagrangian density for the gravitational field in the Einstein–Cartan theory is proportional to the Ricci scalar:

\mathcal{L}_\mathrm{G} = \frac{1}{2\kappa}\, R\, \sqrt{|g|}

where g is the determinant of the metric tensor, and \kappa = 8\pi G/c^4 is a physical constant involving the gravitational constant and the speed of light. By Hamilton's principle, the variation of the total action S = \int (\mathcal{L}_\mathrm{G} + \mathcal{L}_\mathrm{M})\, d^4x for the gravitational field and matter vanishes:

\delta S = 0.
The variation with respect to the metric tensor yields the Einstein equations:
R_{ab} - \frac{1}{2}\, R\, g_{ab} = \kappa\, P_{ab}
where R_{ab} is the Ricci tensor and P_{ab} is the canonical stress–energy–momentum tensor.
The Ricci tensor is no longer symmetric because the connection contains a nonzero torsion tensor; therefore, the right-hand side of the equation cannot be symmetric either, implying that must include an asymmetric contribution that can be shown to be related to the spin tensor. This canonical energy–momentum tensor is related to the more familiar symmetric energy–momentum tensor by the Belinfante–Rosenfeld procedure.
The variation with respect to the torsion tensor yields the Cartan spin connection equations
T_{ab}{}^{c} + g_{a}{}^{c}\, T_{db}{}^{d} - g_{b}{}^{c}\, T_{da}{}^{d} = \kappa\, \sigma_{ab}{}^{c}
where \sigma_{ab}{}^{c} is the spin tensor. Because the torsion equation is an algebraic constraint rather than a partial differential equation, the torsion field does not propagate as a wave, and vanishes outside of matter. Therefore, in principle the torsion can be algebraically eliminated from the theory in favor of the spin tensor, which generates an effective "spin–spin" nonlinear self-interaction inside matter. Torsion is equal to its source term and can be replaced by a boundary or a topological structure with a throat such as a "wormhole".
Avoidance of singularities
Recently, interest in Einstein–Cartan theory has been driven toward cosmological implications, most importantly, the avoidance of a gravitational singularity at the beginning of the universe, such as in the black hole cosmology, static universe, or cyclic model.
Singularity theorems which are premised on and formulated within the setting of Riemannian geometry (e.g. Penrose–Hawking singularity theorems) need not hold in Riemann–Cartan geometry. Consequently, Einstein–Cartan theory is able to avoid the general-relativistic problem of the singularity at the Big Bang. The minimal coupling between torsion and Dirac spinors generates an effective nonlinear spin–spin self-interaction, which becomes significant inside fermionic matter at extremely high densities. Such an interaction is conjectured to replace the singular Big Bang with a cusp-like Big Bounce at a minimum but finite scale factor, before which the observable universe was contracting. This scenario also explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic, providing a physical alternative to cosmic inflation. Torsion allows fermions to be spatially extended instead of "pointlike", which helps to avoid the formation of singularities such as black holes, removes the ultraviolet divergence in quantum field theory, and leads to the toroidal ring model of electrons. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular black hole. In the Einstein–Cartan theory, instead, the collapse reaches a bounce and forms a regular Einstein–Rosen bridge (wormhole) to a new, growing universe on the other side of the event horizon; pair production by the gravitational field after the bounce, when torsion is still strong, generates a finite period of inflation.
See also
Alternatives to general relativity
Metric-affine gravitation theory
Gauge theory gravity
Loop quantum gravity
References
Further reading
Lord, E. A. (1976). "Tensors, Relativity and Cosmology" (McGraw-Hill).
de Sabbata, V. and Gasperini, M. (1985). "Introduction to Gravitation" (World Scientific).
de Sabbata, V. and Sivaram, C. (1994). "Spin and Torsion in Gravitation" (World Scientific).
Theories of gravity
Albert Einstein | Einstein–Cartan theory | [
"Physics"
] | 2,332 | [
"Theoretical physics",
"Theories of gravity"
] |
606,970 | https://en.wikipedia.org/wiki/Minimal%20Supersymmetric%20Standard%20Model | The Minimal Supersymmetric Standard Model (MSSM) is an extension to the Standard Model that realizes supersymmetry. The MSSM is the minimal supersymmetric model in that it considers only "the [minimum] number of new particle states and new interactions consistent with 'Reality'". Supersymmetry pairs bosons with fermions, so every Standard Model particle has a (yet undiscovered) superpartner. If discovered, such superparticles could be candidates for dark matter, and could provide evidence for grand unification or the viability of string theory. The failure to find evidence for the MSSM using the Large Hadron Collider has strengthened an inclination to abandon it.
Background
The MSSM was originally proposed in 1981 to stabilize the weak scale, solving the hierarchy problem. The Higgs boson mass of the Standard Model is unstable to quantum corrections, and the theory predicts that the weak scale should be much weaker than it is observed to be. In the MSSM, the Higgs boson has a fermionic superpartner, the Higgsino, that has the same mass as it would have if supersymmetry were an exact symmetry. Because fermion masses are radiatively stable, the Higgs mass inherits this stability. However, in the MSSM there is a need for more than one Higgs field, as described below.
The only unambiguous way to claim discovery of supersymmetry is to produce superparticles in the laboratory. Because superparticles are expected to be 100 to 1000 times heavier than the proton, producing them requires a huge amount of energy, which can only be achieved at particle accelerators. The Tevatron was actively looking for evidence of the production of supersymmetric particles before it was shut down on 30 September 2011. Most physicists believe that supersymmetry must be discovered at the LHC if it is responsible for stabilizing the weak scale. There are five classes of particle into which the superpartners of the Standard Model fall: squarks, gluinos, charginos, neutralinos, and sleptons. These superparticles have their interactions and subsequent decays described by the MSSM, and each has characteristic signatures.
The MSSM imposes R-parity to explain the stability of the proton. It adds supersymmetry breaking by introducing explicit soft supersymmetry breaking operators into the Lagrangian that is communicated to it by some unknown (and unspecified) dynamics. This means that there are 120 new parameters in the MSSM. Most of these parameters lead to unacceptable phenomenology such as large flavor changing neutral currents or large electric dipole moments for the neutron and electron. To avoid these problems, the MSSM takes all of the soft supersymmetry breaking to be diagonal in flavor space and for all of the new CP violating phases to vanish.
Theoretical motivations
There are three principal motivations for the MSSM over other theoretical extensions of the Standard Model, namely:
Naturalness
Gauge coupling unification
Dark Matter
These motivations come out without much effort and they are the primary reasons why the MSSM is the leading candidate for a new theory to be discovered at collider experiments such as the Tevatron or the LHC.
Naturalness
The original motivation for proposing the MSSM was to stabilize the Higgs mass to radiative corrections that are quadratically divergent in the Standard Model (the hierarchy problem). In supersymmetric models, scalars are related to fermions and have the same mass. Since fermion masses are logarithmically divergent, scalar masses inherit the same radiative stability. The Higgs vacuum expectation value (VEV) is related to the negative scalar mass in the Lagrangian. In order for the radiative corrections to the Higgs mass to not be dramatically larger than the actual value, the mass of the superpartners of the Standard Model should not be significantly heavier than the Higgs VEV – roughly 100 GeV. In 2012, the Higgs particle was discovered at the LHC, and its mass was found to be 125–126 GeV.
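Schematically (this follows the standard textbook presentation of the argument above and is not a formula quoted in this article), a Dirac fermion with Yukawa coupling \lambda_f and a pair of complex scalars with quartic coupling \lambda_S contribute to the Higgs mass-squared as

```latex
\Delta m_H^2 \;\approx\; -\,\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda_{\mathrm{UV}}^2
\;+\; 2\times\frac{\lambda_S}{16\pi^2}\,\Lambda_{\mathrm{UV}}^2 \;+\; \cdots
```

so the quadratic sensitivity to the cutoff \Lambda_{\mathrm{UV}} cancels when every fermion is accompanied by superpartner scalars with \lambda_S = |\lambda_f|^2, which is precisely the relation enforced by unbroken supersymmetry.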
Gauge-coupling unification
If the superpartners of the Standard Model are near the TeV scale, then measured gauge couplings of the three gauge groups unify at high energies. The beta-functions for the MSSM gauge couplings are given by

\frac{d g_i}{d t} = \frac{b_i}{16\pi^2}\, g_i^3,
\qquad (b_1, b_2, b_3) = \left(\tfrac{33}{5},\; 1,\; -3\right)

where \alpha_1 is measured in SU(5) normalization—a factor of \tfrac{5}{3} different
than the Standard Model's normalization and predicted by Georgi–Glashow SU(5).
The condition for gauge coupling unification at one loop is whether the following expression is satisfied

\frac{\alpha_2^{-1} - \alpha_3^{-1}}{\alpha_1^{-1} - \alpha_2^{-1}} = \frac{b_2 - b_3}{b_1 - b_2} = \frac{5}{7}.

Remarkably, this is precisely satisfied to experimental errors in the measured values of the three gauge couplings. There are two loop corrections and both TeV-scale and GUT-scale threshold corrections that alter this condition on gauge coupling unification, and the results of more extensive calculations reveal that gauge coupling unification occurs to an accuracy of 1%, though this is about 3 standard deviations from the theoretical expectations.
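A back-of-the-envelope numerical check of the one-loop condition above can be done directly from the weak-scale couplings. The input numbers below are approximate values of the couplings at the Z mass and are assumptions for illustration, not values quoted in this article.

```python
# One-loop MSSM gauge coupling unification check (illustrative sketch).
alpha_em_inv = 127.9      # ~1/alpha_em at the Z mass (assumed)
sin2_thetaW  = 0.231      # ~sin^2(theta_W) at the Z mass (assumed)
alpha_s      = 0.118      # ~alpha_s at the Z mass (assumed)

# Inverse couplings, with alpha_1 in SU(5) normalization (factor 5/3).
alpha2_inv = sin2_thetaW * alpha_em_inv                        # SU(2)_L
alpha1_inv = (3.0 / 5.0) * (1.0 - sin2_thetaW) * alpha_em_inv  # U(1)_Y, GUT-normalized
alpha3_inv = 1.0 / alpha_s                                     # SU(3)_c

# One-loop MSSM beta coefficients.
b1, b2, b3 = 33.0 / 5.0, 1.0, -3.0

measured  = (alpha2_inv - alpha3_inv) / (alpha1_inv - alpha2_inv)
predicted = (b2 - b3) / (b1 - b2)                              # = 5/7 ~ 0.714

print(f"measured ratio:  {measured:.3f}")    # close to the predicted value
print(f"predicted ratio: {predicted:.3f}")
```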
This prediction is generally considered as indirect evidence for both the MSSM and SUSY GUTs. Gauge coupling unification does not necessarily imply grand unification and there exist other mechanisms to reproduce gauge coupling unification. However, if superpartners are found in the near future, the apparent success of gauge coupling unification would suggest that a supersymmetric grand unified theory is a promising candidate for high scale physics.
Dark matter
If R-parity is preserved, then the lightest superparticle (LSP) of the MSSM is stable and is a Weakly interacting massive particle (WIMP) – i.e. it does not have electromagnetic or strong interactions. This makes the LSP a good dark matter candidate, and falls into the category of cold dark matter (CDM).
Predictions of the MSSM regarding hadron colliders
The Tevatron and LHC have active experimental programs searching for supersymmetric particles. Since both of these machines are hadron colliders – proton–antiproton for the Tevatron and proton–proton for the LHC – they search best for strongly interacting particles. Therefore, most experimental signatures involve production of squarks or gluinos. Since the MSSM has R-parity, the lightest supersymmetric particle is stable, and after the squarks and gluinos decay each decay chain will contain one LSP that will leave the detector unseen. This leads to the generic prediction that the MSSM will produce a 'missing energy' signal from these particles leaving the detector.
Neutralinos
There are four neutralinos that are fermions and are electrically neutral, the lightest of which is typically stable. They are typically labeled \tilde{N}_1, \tilde{N}_2, \tilde{N}_3 and \tilde{N}_4 (although sometimes the notation \tilde{\chi}^0_1, \ldots, \tilde{\chi}^0_4 is used instead). These four states are mixtures of the bino and the neutral wino (which are the neutral electroweak gauginos), and the neutral higgsinos. As the neutralinos are Majorana fermions, each of them is identical with its antiparticle. Because these particles only interact with the weak vector bosons, they are not directly produced at hadron colliders in copious numbers. They primarily appear as particles in cascade decays of heavier particles usually originating from colored supersymmetric particles such as squarks or gluinos.
In R-parity conserving models, the lightest neutralino is stable and all supersymmetric cascade decays end up decaying into this particle, which leaves the detector unseen; its existence can only be inferred by looking for unbalanced momentum in a detector.
The heavier neutralinos typically decay through a Z⁰ to a lighter neutralino or through a W± to a chargino. Thus a typical decay chain is
Ñ₂ → Ñ₁ + Z⁰ → Missing energy + ℓ⁺ + ℓ⁻
Ñ₂ → C̃₁± + W∓ → Ñ₁ + W± + W∓ → Missing energy + ℓ⁺ + ℓ⁻
Note that the “Missing energy” byproduct represents the mass-energy of the neutralino (Ñ₁) and, in the second line, the mass-energy of a neutrino-antineutrino pair (ν + ν̄) produced together with the lepton and antilepton in the final decay, all of which are undetectable in individual reactions with current technology.
The mass splittings between the different neutralinos will dictate which patterns of decays are allowed.
Charginos
There are two charginos that are fermions and are electrically charged. They are typically labeled C̃₁± and C̃₂± (although sometimes χ̃₁± and χ̃₂± is used instead). The heavier chargino can decay through a Z⁰ to the lighter chargino. Both can decay through a W± to a neutralino.
Squarks
The squarks are the scalar superpartners of the quarks and there is one version for each Standard Model quark. Due to phenomenological constraints from flavor changing neutral currents, typically the lighter two generations of squarks have to be nearly the same in mass and therefore are not given distinct names. The superpartners of the top and bottom quark can be split from the lighter squarks and are called stop and sbottom.
In the other direction, there may be substantial left-right mixing of the stops and of the sbottoms because of the high masses of the partner quarks top and bottom: the off-diagonal entry of the stop mass-squared matrix is proportional to m_t(A_t − μ cot β), so the mass eigenstates t̃₁ and t̃₂ are mixtures of the left- and right-handed stops.
A similar story holds for the sbottom with its own parameters A_b and μ tan β.
Squarks can be produced through strong interactions and therefore are easily produced at hadron colliders. They decay to quarks plus neutralinos or charginos, which decay further. In R-parity conserving scenarios, squarks are pair produced, and therefore typical signals are
2 jets + missing energy
2 jets + 2 leptons + missing energy
Gluinos
Gluinos are Majorana fermionic partners of the gluon which means that they are their own antiparticles. They interact strongly and therefore can be produced significantly at the LHC. They can only decay to a quark and a squark and thus a typical gluino signal is
4 jets + Missing energy
Because gluinos are Majorana, gluinos can decay to either a quark+anti-squark or an anti-quark+squark with equal probability. Therefore, pairs of gluinos can decay to
4 jets + ℓ± ℓ± (a same-sign lepton pair) + Missing energy
This is a distinctive signature because it has same-sign di-leptons and has very little background in the Standard Model.
Sleptons
Sleptons are the scalar partners of the leptons of the Standard Model. They are not strongly interacting and therefore are not produced very often at hadron colliders unless they are very light.
Because of the high mass of the tau lepton there will be left-right mixing of the stau similar to that of stop and sbottom (see above).
Sleptons will typically be found in decays of charginos and neutralinos if they are light enough to be a decay product.
MSSM fields
Fermions have bosonic superpartners (called sfermions), and bosons have fermionic superpartners (called bosinos). For most of the Standard Model particles, doubling is very straightforward. However, for the Higgs boson, it is more complicated.
A single Higgsino (the fermionic superpartner of the Higgs boson) would lead to a gauge anomaly and would cause the theory to be inconsistent. However, if two Higgsinos are added, there is no gauge anomaly. The simplest theory is one with two Higgsinos and therefore two scalar Higgs doublets.
Another reason for having two scalar Higgs doublets rather than one is in order to have Yukawa couplings between the Higgs and both down-type quarks and up-type quarks; these are the terms responsible for the quarks' masses. In the Standard Model the down-type quarks couple to the Higgs field (which has Y = −1/2) and the up-type quarks to its complex conjugate (which has Y = +1/2). However, in a supersymmetric theory this is not allowed, because the superpotential must be holomorphic and cannot contain the conjugate field, so two types of Higgs fields are needed.
MSSM superfields
In supersymmetric theories, every field and its superpartner can be written together as a superfield. The superfield formulation of supersymmetry is very convenient to write down manifestly supersymmetric theories (i.e. one does not have to tediously check that the theory is supersymmetric term by term in the Lagrangian). The MSSM contains vector superfields associated with the Standard Model gauge groups which contain the vector bosons and associated gauginos. It also contains chiral superfields for the Standard Model fermions and Higgs bosons (and their respective superpartners).
MSSM Higgs mass
The MSSM Higgs mass is a prediction of the Minimal Supersymmetric Standard Model. The mass of the lightest Higgs boson is set by the Higgs quartic coupling. Quartic couplings are not soft supersymmetry-breaking parameters, since they would lead to a quadratic divergence of the Higgs mass. Furthermore, there are no free supersymmetric parameters that make the Higgs mass an independent parameter in the MSSM (though not in non-minimal extensions). This means that the Higgs mass is a prediction of the MSSM. The LEP II experiments placed a lower limit on the Higgs mass of 114.4 GeV. This lower limit is significantly above where the MSSM would typically predict it to be but does not rule out the MSSM; the discovery of the Higgs with a mass of 125 GeV is within the maximal upper bound of approximately 130 GeV to which loop corrections within the MSSM can raise the Higgs mass. Proponents of the MSSM point out that a Higgs mass within the upper bound of the MSSM calculation is a successful prediction, albeit pointing to more fine-tuning than expected.
Formulas
The only susy-preserving operator that creates a quartic coupling for the Higgs in the MSSM arises from the D-terms of the SU(2) and U(1) gauge sectors, and the magnitude of the quartic coupling is set by the size of the gauge couplings.
This leads to the prediction that the Standard Model-like Higgs mass (the scalar that couples approximately to the VEV) is limited to be less than the Z mass:
m_h² ≤ m_Z² cos²(2β).
Since supersymmetry is broken, there are radiative corrections to the quartic coupling that can increase the Higgs mass. These dominantly arise from the 'top sector':
Δm_h² ≈ (3 m_t⁴ / (4 π² v²)) ln(m_t̃² / m_t²),
where m_t is the top mass, m_t̃ is the mass of the top squark and v ≈ 174 GeV is the Higgs VEV. This result can be interpreted as the RG running of the Higgs quartic coupling from the scale of supersymmetry down to the top mass; since the top squark mass should be relatively close to the top mass, this is usually a fairly modest contribution and increases the Higgs mass only to roughly the LEP II bound of 114 GeV before the top squark becomes too heavy.
Finally there is a contribution from the top squark A-terms:
Δm_h² ≈ (3 m_t⁴ / (4 π² v²)) · x_t² (1 − x_t²/12),
where x_t = (A_t − μ cot β)/m_t̃ is a dimensionless number. This contributes an additional term to the Higgs mass at loop level but is not logarithmically enhanced.
By pushing x_t toward √6 (known as 'maximal mixing') it is possible to push the Higgs mass to 125 GeV without decoupling the top squark or adding new dynamics to the MSSM.
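A minimal numerical sketch of these approximate formulas is shown below; the input masses and the leading-log approximation itself are assumptions for illustration, and this crude formula tends to overshoot more complete two-loop calculations by several GeV for the same spectrum.

  import numpy as np

  MZ, MT, V = 91.19, 173.0, 174.0   # GeV; v = 174 GeV convention

  def higgs_mass(tan_beta, m_stop, x_t=0.0):
      # Tree-level bound plus the dominant one-loop top/stop contribution.
      cos2b = np.cos(2.0 * np.arctan(tan_beta))
      tree = MZ**2 * cos2b**2
      loop = (3.0 * MT**4) / (4.0 * np.pi**2 * V**2) * (
          np.log(m_stop**2 / MT**2) + x_t**2 * (1.0 - x_t**2 / 12.0)
      )
      return np.sqrt(tree + loop)

  print(higgs_mass(tan_beta=10.0, m_stop=1000.0))                    # no stop mixing
  print(higgs_mass(tan_beta=10.0, m_stop=1000.0, x_t=np.sqrt(6.0)))  # 'maximal mixing'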
As the Higgs was found at around 125 GeV (along with no other superparticles) at the LHC, this strongly hints at new dynamics beyond the MSSM, such as the 'Next to Minimal Supersymmetric Standard Model' (NMSSM); and suggests some correlation to the little hierarchy problem.
MSSM Lagrangian
The Lagrangian for the MSSM contains several pieces.
The first is the Kähler potential for the matter and Higgs fields which produces the kinetic terms for the fields.
The second piece is the gauge field superpotential that produces the kinetic terms for the gauge bosons and gauginos.
The next term is the superpotential for the matter and Higgs fields. These produce the Yukawa couplings for the Standard Model fermions and also the mass term for the Higgsinos. After imposing R-parity, the renormalizable, gauge-invariant operators in the superpotential are the three Yukawa terms coupling the quark and lepton superfields to the two Higgs doublets, the μ term coupling the two Higgs doublets to each other, and a possible constant term (schematically, W = y_u ū Q H_u − y_d d̄ Q H_d − y_e ē L H_d + μ H_u H_d + const., with flavor indices and sign conventions suppressed).
The constant term is unphysical in global supersymmetry (as opposed to supergravity).
Soft SUSY breaking
The last piece of the MSSM Lagrangian is the soft supersymmetry-breaking Lagrangian. The vast majority of the parameters of the MSSM are in the susy-breaking Lagrangian. The soft susy-breaking terms are divided into roughly three pieces.
The first are the gaugino masses,
(1/2) M_a λ_a λ_a + c.c.,
where the λ_a are the gauginos and the mass parameter M_a is different for the wino, bino and gluino.
The next are the soft masses for the scalar fields,
m²_ij φ_i* φ_j,
where the φ_i are any of the scalars in the MSSM and the m²_ij are Hermitian matrices for the squarks and sleptons of a given set of gauge quantum numbers. The eigenvalues of these matrices are actually the masses squared, rather than the masses.
There are also the A and B terms: trilinear scalar couplings that mirror the Yukawa couplings of the superpotential (schematically a_u ũ Q̃ H_u + a_d d̃ Q̃ H_d + a_e ẽ L̃ H_d) and a bilinear term B μ H_u H_d that mirrors the μ term.
The A terms are complex matrices, much as the scalar masses are.
Although not often mentioned with regard to soft terms, to be consistent with observation one must also include gravitino and goldstino soft mass terms.
The reason these soft terms are not often mentioned is that they arise through local supersymmetry rather than global supersymmetry; they are nonetheless required, because if the goldstino were massless it would contradict observation. The goldstino mode is eaten by the gravitino to become massive, through a gauge shift, which also absorbs the would-be "mass" term of the goldstino.
Problems
There are several problems with the MSSM—most of them falling into understanding the parameters.
The mu problem: The Higgsino mass parameter μ appears as the following term in the superpotential: μHuHd. It should have the same order of magnitude as the electroweak scale, many orders of magnitude smaller than that of the Planck scale, which is the natural cutoff scale. The soft supersymmetry breaking terms should also be of the same order of magnitude as the electroweak scale. This brings about a problem of naturalness: why are these scales so much smaller than the cutoff scale yet happen to fall so close to each other?
Flavor universality of soft masses and A-terms: since no flavor mixing additional to that predicted by the standard model has been discovered so far, the coefficients of the additional terms in the MSSM Lagrangian must be, at least approximately, flavor invariant (i.e. the same for all flavors).
Smallness of CP violating phases: since no CP violation additional to that predicted by the standard model has been discovered so far, the additional terms in the MSSM Lagrangian must be, at least approximately, CP invariant, so that their CP violating phases are small.
Theories of supersymmetry breaking
A large amount of theoretical effort has been spent trying to understand the mechanism for soft supersymmetry breaking that produces the desired properties in the superpartner masses and interactions. The three most extensively studied mechanisms are:
Gravity-mediated supersymmetry breaking
Gravity-mediated supersymmetry breaking is a method of communicating supersymmetry breaking to the supersymmetric Standard Model through gravitational interactions. It was the first method proposed to communicate supersymmetry breaking. In gravity-mediated supersymmetry-breaking models, there is a part of the theory that only interacts with the MSSM through gravitational interaction. This hidden sector of the theory breaks supersymmetry. Through the supersymmetric version of the Higgs mechanism, the gravitino, the supersymmetric version of the graviton, acquires a mass. After the gravitino has a mass, gravitational radiative corrections to soft masses are incompletely cancelled beneath the gravitino's mass.
It is currently believed that it is not generic to have a sector completely decoupled from the MSSM and there should be higher dimension operators that couple different sectors together with the higher dimension operators suppressed by the Planck scale. These operators give as large of a contribution to the soft supersymmetry breaking masses as the gravitational loops; therefore, today people usually consider gravity mediation to be gravitational sized direct interactions between the hidden sector and the MSSM.
mSUGRA stands for minimal supergravity. The construction of a realistic model of interactions within the supergravity framework, where supersymmetry breaking is communicated through the supergravity interactions, was carried out by Ali Chamseddine, Richard Arnowitt, and Pran Nath in 1982. mSUGRA is one of the most widely investigated models of particle physics due to its predictive power, requiring only 4 input parameters and a sign to determine the low energy phenomenology from the scale of Grand Unification. The most widely used set of parameters is: a common scalar mass m_0, a common gaugino mass m_1/2, a common trilinear coupling A_0, the ratio of Higgs vacuum expectation values tan β, and the sign of the Higgsino mass parameter μ.
Gravity-Mediated Supersymmetry Breaking was assumed to be flavor universal because of the universality of gravity; however, in 1986 Hall, Kostelecky, and Raby showed that Planck-scale physics that are necessary to generate the Standard-Model Yukawa couplings spoil the universality of the supersymmetry breaking.
Gauge-mediated supersymmetry breaking (GMSB)
Gauge-mediated supersymmetry breaking is a method of communicating supersymmetry breaking to the supersymmetric Standard Model through the Standard Model's gauge interactions. Typically a hidden sector breaks supersymmetry and communicates it to massive messenger fields that are charged under the Standard Model. These messenger fields induce a gaugino mass at one loop, and this is then transmitted to the scalar superpartners at two loops. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.5 GeV. With the Higgs having been discovered at 125 GeV, this model requires stops above 2 TeV.
Anomaly-mediated supersymmetry breaking (AMSB)
Anomaly-mediated supersymmetry breaking is a special type of gravity-mediated supersymmetry breaking in which supersymmetry breaking is communicated to the supersymmetric Standard Model through the conformal anomaly. Requiring stop squarks below 2 TeV, the maximum Higgs boson mass predicted is just 121.0 GeV. With the Higgs having been discovered at 125 GeV, this scenario requires stops heavier than 2 TeV.
Phenomenological MSSM (pMSSM)
The unconstrained MSSM has more than 100 parameters in addition to the Standard Model parameters.
This makes any phenomenological analysis (e.g. finding regions in parameter space consistent
with observed data) impractical. Under the following three assumptions:
no new source of CP-violation
no Flavour Changing Neutral Currents
first and second generation universality
one can reduce the number of additional parameters to the following 19 quantities of the phenomenological MSSM (pMSSM): tan β; the Higgs-sector parameters m_A and μ; the gaugino masses M_1, M_2 and M_3; the common first- and second-generation squark masses m_q̃, m_ũR and m_d̃R and slepton masses m_l̃ and m_ẽR; the third-generation squark masses m_Q̃3, m_t̃R and m_b̃R and slepton masses m_L̃3 and m_τ̃R; and the third-generation trilinear couplings A_t, A_b and A_τ.
The large parameter space of the pMSSM makes searches extremely challenging and makes the pMSSM difficult to exclude.
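To illustrate why this parameter space is hard to cover, the short sketch below counts the points in a coarse grid scan and then draws a random sample instead, which is how pMSSM studies are usually organized; the number of grid points per parameter and the sampling ranges are arbitrary assumptions.

  import numpy as np

  n_params, points_per_param = 19, 10
  print(f"coarse grid: {points_per_param**n_params:.1e} parameter points")   # ~1e19, infeasible

  # In practice, studies draw random points and keep those passing experimental constraints.
  rng = np.random.default_rng(0)
  sample = rng.uniform(-1.0, 1.0, size=(100_000, n_params))   # placeholder, dimensionless ranges
  print(sample.shape)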
Experimental tests
Terrestrial detectors
XENON1T (a dark matter WIMP detector - being commissioned in 2016) is expected to explore/test supersymmetry candidates such as CMSSM.
See also
Desert (particle physics)
References
External links
MSSM on arxiv.org
Particle Data Group review of MSSM and search for MSSM predicted particles
Supersymmetric quantum field theory
Physics beyond the Standard Model | Minimal Supersymmetric Standard Model | [
"Physics"
] | 4,992 | [
"Supersymmetric quantum field theory",
"Unsolved problems in physics",
"Particle physics",
"Supersymmetry",
"Physics beyond the Standard Model",
"Symmetry"
] |
606,999 | https://en.wikipedia.org/wiki/Pioglitazone | Pioglitazone, sold under the brand name Actos among others, is an anti-diabetic medication used to treat type 2 diabetes. It may be used with metformin, a sulfonylurea, or insulin. Use is recommended together with exercise and diet. It is not recommended in type 1 diabetes. It is taken by mouth.
Common side effects include headaches, muscle pains, inflammation of the throat, and swelling. Serious side effects may include bladder cancer, low blood sugar, heart failure, and osteoporosis. Use is not recommended in pregnancy or breastfeeding. It is in the thiazolidinedione (TZD) class and works by improving sensitivity of tissues to insulin.
Pioglitazone was patented in 1985, and came into medical use in 1999. It is available as a generic medication. In 2022, it was the 120th most commonly prescribed medication in the United States, with more than 5million prescriptions. It was withdrawn in France and Germany in 2011.
Medical uses
Pioglitazone is used to lower blood glucose levels in type 2 diabetes, either alone or in combination with a sulfonylurea, metformin, or insulin. A Cochrane systematic review compared the effects of pioglitazone with those of other blood sugar-lowering medicines, including metformin, acarbose, and repaglinide, as well as with appropriate diet and exercise, and found no additional benefit in reducing the chance of developing type 2 diabetes in people at risk. It did, however, show a reduction in the risk of developing type 2 diabetes when compared to a placebo or to no treatment. These results should be interpreted considering that most of the data in the studies included in this review were of low or very low certainty.
While pioglitazone does decrease blood sugar levels, the main study that looked at the medication found no difference in the primary cardiovascular outcomes examined. The secondary composite outcome of death from all causes, myocardial infarction, and stroke was, however, lower.
Pioglitazone has been found to reduce all-cause mortality in type 2 diabetic patients compared to other therapies, with a 60% reduction in mortality in those exposed to pioglitazone compared to those never exposed. Another study found an all-cause mortality hazard ratio of 0.33 for pioglitazone, compared to insulin, after adjusting for more than 40 covariates. Due to insufficient data on all-cause mortality, cardiovascular mortality, myocardial infarction and stroke, these outcomes could not be compared in a more recent review.
Contraindications
Pioglitazone cannot be used in patients with a known hypersensitivity to pioglitazone, other thiazolidinediones or any of components of its pharmaceutical forms. It is ineffective and possibly harmful in diabetes mellitus type 1 and diabetic ketoacidosis. Its safety in pregnancy, lactation (breastfeeding) and people under 18 is not established.
Given previous experiences with the related drug troglitazone, acute diseases of the liver are regarded as a contraindication for pioglitazone.
Side effects
A press release by GlaxoSmithKline in February 2007 noted that there is a greater incidence of fractures of the upper arms, hands and feet in female diabetics given rosiglitazone compared with those given metformin or glyburide. The information was based on data from the ADOPT trial. Following release of this statement, Takeda Pharmaceutical Company, the developer of pioglitazone (sold as Actos in many markets) admitted that it has similar implications for female patients.
The risk of hypoglycemia is low in the absence of other drugs that lower blood glucose.
Pioglitazone can cause fluid retention and peripheral edema. As a result, it may precipitate congestive heart failure (which worsens with fluid overload in those at risk). It may cause anemia. Mild weight gain is common due to increase in subcutaneous adipose tissue. In studies, patients on pioglitazone had an increased proportion of upper respiratory tract infection, sinusitis, headache, myalgia and tooth problems.
Chronic administration of the drug has led to occasional instances of cholestatic hepatitis, reversible upon drug discontinuation.
On 30 July 2007, an Advisory Committee of the Food and Drug Administration concluded that the use of rosiglitazone for the treatment of type 2 diabetes was associated with a greater risk of "myocardial ischemic events" when compared to placebo, but when compared to other diabetes drugs, there was no increased risk. Pioglitazone is currently being reviewed. A meta-analysis released subsequently showed that pioglitazone reduced the risk of ischemic cardiac events rather than increased the risk, but increased CHF.
A 2020 Cochrane systematic review assessed occurrence of adverse effects with use of pioglitazone, but was not able to reach any conclusions due to insufficient data on included studies.
Bladder cancer
On 9 June 2011, the French Agency for the Safety of Health Products decided to withdraw pioglitazone due to a high risk of bladder cancer. This suspension was based on the results of an epidemiological study conducted by the French National Health Insurance. According to the results of that study, the French agency found that patients who had been taking Actos for a long time for type 2 diabetes mellitus had a significantly increased risk of bladder cancer compared with patients taking other diabetes medications. On 10 June 2011, Germany's Federal Institute for Drugs and Medical Devices also advised doctors not to prescribe the medication until further investigation of the cancer risk had been conducted.
On 15 June 2011, the U.S. FDA announced that pioglitazone use for more than one year may be associated with an increased risk of bladder cancer, and two months later the label was updated with an additional warning about this risk.
A 2017 meta-analysis found no difference in the rates of bladder cancer attributed to pioglitazone.
Drug interactions
Combining pioglitazone with sulfonylureas or insulin increases the risk of hypoglycemia. Therapy with pioglitazone increases the chance of pregnancy in individuals taking oral contraceptives.
Mechanism of action
Pioglitazone selectively stimulates the nuclear receptor peroxisome proliferator-activated receptor gamma (PPAR-γ) and to a lesser extent PPAR-α. It modulates the transcription of the genes involved in the control of glucose and lipid metabolism in the muscle, adipose tissue, and the liver. As a result, pioglitazone reduces insulin resistance in the liver and peripheral tissues, decreases gluconeogenesis in the liver, and reduces quantity of glucose and glycated hemoglobin in the bloodstream.
Since 2004, pioglitazone and other active TZDs have been shown to bind to the outer mitochondrial membrane protein mitoNEET with affinity comparable to that of pioglitazone for PPARγ.
Leriglitazone is a metabolite.
Society and culture
Economics
In 2008, pioglitazone generated the tenth-highest revenue among medications in the U.S., with sales exceeding $2.4 billion.
As of 2020, no study had examined the socioeconomic effects of the utilization of pioglitazone.
Brand names
Pioglitazone is marketed as Actos in the United States, Canada, the UK and Germany, Glustin in the European Union, Glizone and Pioz in India by Zydus Cadila and USV Limited, respectively and Zactos in Mexico by Takeda Pharmaceuticals. On 17 August 2012, the US FDA announced its approval of the first generic version of Actos.
Research
Psychiatry
Bipolar disorder
Pioglitazone has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder. However, meta-analytic evidence is based on very few studies and does not suggest any efficacy of pioglitazone in the treatment of bipolar depression.
Major depression
There is research that suggests that pioglitazone may be useful for treating major depression.
Other illnesses
Pioglitazone has been found to exert anti-ageing effects in Drosophila.
Pioglitazone has been tried for non-alcoholic fatty liver disease, showing promising results according to several meta-analyses.
Because it is thought to reduce inflammatory activity in neuroglia, it was studied in a small clinical trial involving children with autism, under the autoimmune/inflammatory hypotheses of the causes of autism.
Pioglitazone may improve symptoms of psoriasis.
Pioglitazone is also being researched as a potential treatment for Alzheimer's disease in preclinical studies, however testing for the efficacy of Pioglitazone has been fraught with failure and confusing results from clinical trials.
Pioglitazone has been shown in animal models to be a possible treatment for Opioid use disorder.
References
3β-Hydroxysteroid dehydrogenase inhibitors
CYP3A4 inducers
CYP17A1 inhibitors
Drugs developed by Eli Lilly and Company
IARC Group 2A carcinogens
Phenol ethers
Pyridines
Drugs developed by Takeda Pharmaceutical Company
Thiazolidinediones
Wikipedia medicine articles ready to translate
Withdrawn drugs | Pioglitazone | [
"Chemistry"
] | 1,932 | [
"Drug safety",
"Withdrawn drugs"
] |
607,062 | https://en.wikipedia.org/wiki/Frequency%20compensation | In electronics engineering, frequency compensation is a technique used in amplifiers, and especially in amplifiers employing negative feedback. It usually has two primary goals: To avoid the unintentional creation of positive feedback, which will cause the amplifier to oscillate, and to control overshoot and ringing in the amplifier's step response. It is also used extensively to improve the bandwidth of single pole systems.
Explanation
Most amplifiers use negative feedback to trade gain for other desirable properties, such as decreased distortion, improved noise reduction or increased invariance to variation of parameters such as temperature. Ideally, the phase characteristic of an amplifier's frequency response would be linear; however, device limitations make this goal physically unattainable. More particularly, capacitances within the amplifier's gain stages cause the output signal to lag behind the input signal by up to 90° for each pole they create. If the sum of these phase lags reaches 180°, the output signal will be the negative of the input signal. Feeding back any portion of this output signal to the inverting (negative) input when the gain of the amplifier is sufficient will cause the amplifier to oscillate. This is because the feedback signal will reinforce the input signal. That is, the feedback is then positive rather than negative.
Frequency compensation is implemented to avoid this result.
Another goal of frequency compensation is to control the step response of an amplifier circuit as shown in Figure 1. For example, if a step in voltage is input to a voltage amplifier, ideally a step in output voltage would occur. However, the output is not ideal because of the frequency response of the amplifier, and ringing occurs. Several figures of merit to describe the adequacy of step response are in common use. One is the rise time of the output, which ideally would be short. A second is the time for the output to lock into its final value, which again should be short. The success in reaching this lock-in at final value is described by overshoot (how far the response exceeds final value) and settling time (how long the output swings back and forth about its final value). These various measures of the step response usually conflict with one another, requiring optimization methods.
Frequency compensation is implemented to optimize step response, one method being pole splitting.
Use in operational amplifiers
Because operational amplifiers are so ubiquitous and are designed to be used with feedback, the following discussion will be limited to frequency compensation of these devices.
It should be expected that the outputs of even the simplest operational amplifiers will have at least two poles. A consequence of this is that at some critical frequency, the phase of the amplifier's output = −180° compared to the phase of its input signal. The amplifier will oscillate if it has a gain of one or more at this critical frequency. This is because (a) the feedback is implemented through the use of an inverting input that adds an additional −180° to the output phase making the total phase shift −360° and (b) the gain is sufficient to induce oscillation.
A more precise statement of this is the following: An operational amplifier will oscillate at the frequency at which its open loop gain equals its closed loop gain if, at that frequency,
The open loop gain of the amplifier is ≥ 1 and
The difference between the phase of the open loop signal and the phase response of the network creating the closed loop output is −180°. Mathematically, the condition is that the loop gain βA(jω) reaches a phase of −180° at some frequency ω0 at which its magnitude |βA(jω0)| is still at least 1.
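The following Python sketch evaluates this criterion for a hypothetical two-pole op amp in unity-gain feedback; the DC gain and pole frequencies are illustrative assumptions, not values from any particular device.

  import numpy as np

  A0 = 1e5                      # DC open-loop gain (assumed)
  f1, f2 = 10.0, 2e6            # pole frequencies in Hz (assumed)
  beta = 1.0                    # unity-gain feedback network

  def loop_gain(f):
      s = 1j * 2 * np.pi * f
      A = A0 / ((1 + s / (2 * np.pi * f1)) * (1 + s / (2 * np.pi * f2)))
      return beta * A

  f = np.logspace(0, 8, 20000)
  T = loop_gain(f)
  # Frequency where the loop-gain magnitude falls to 1 (0 dB):
  f_unity = f[np.argmin(np.abs(np.abs(T) - 1.0))]
  phase_margin = 180.0 + np.degrees(np.angle(loop_gain(f_unity)))
  print(f"unity loop gain near {f_unity:.3g} Hz, phase margin ~ {phase_margin:.1f} degrees")

With only two widely spaced poles the loop phase never quite reaches −180° before the gain drops below 1, so this example is stable with a comfortable phase margin.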
Practice
Frequency compensation is implemented by modifying the gain and phase characteristics of the amplifier's open loop output or of its feedback network, or both, in such a way as to avoid the conditions leading to oscillation. This is usually done by the internal or external use of resistance-capacitance networks.
Dominant-pole compensation
The method most commonly used is called dominant-pole compensation, which is a form of lag compensation. It is an external compensation technique and is used for relatively low closed loop gain. A pole placed at an appropriate low frequency in the open-loop response reduces the gain of the amplifier to one (0 dB) for a frequency at or just below the location of the next highest frequency pole. The lowest frequency pole is called the dominant pole because it dominates the effect of all of the higher frequency poles. The result is that the difference between the open loop output phase and the phase response of a feedback network having no reactive elements never falls below −180° while the amplifier has a gain of one or more, ensuring stability.
Dominant-pole compensation can be implemented for general purpose operational amplifiers by adding an integrating capacitance to the stage that provides the bulk of the amplifier's gain. This capacitor creates a pole that is set at a frequency low enough to reduce the gain to one (0 dB) at or just below the frequency where the pole next highest in frequency is located. The result is a phase margin of ≈ 45°, depending on the proximity of still higher poles. This margin is sufficient to prevent oscillation in the most commonly used feedback configurations. In addition, dominant-pole compensation allows control of overshoot and ringing in the amplifier step response, which can be a more demanding requirement than the simple need for stability.
This compensation method is described below:
Let A(jf) be the uncompensated transfer function of the op amp in open-loop configuration, modeled with three poles as:
A(jf) = A0 / [(1 + jf/f1)(1 + jf/f2)(1 + jf/f3)],
where A0 is the DC open-loop gain of the op amp and f1, f2 and f3 are the corner (pole) frequencies beyond which the gain rolls off at −20 dB/decade, −40 dB/decade and −60 dB/decade respectively.
Thus, for compensation, introduce a dominant pole by adding an RC network in series with the Op-Amp as shown in the figure.
The transfer function of the compensated open-loop op-amp circuit is given by:
A'(jf) = A0 / [(1 + jf/fd)(1 + jf/f1)(1 + jf/f2)(1 + jf/f3)],
where fd < f1 < f2 < f3.
The compensation capacitance C is chosen such that fd lies far below f1; specifically, fd ≈ f1/A0, so that the gain, rolling off at −20 dB/decade from fd, reaches unity (0 dB) at or just below f1. Hence, the frequency response of a dominant-pole compensated open-loop op-amp circuit shows a uniform gain roll-off from fd and becomes 0 dB at f1, as shown in the graph.
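A minimal sketch of this design rule follows, with an assumed DC gain, first pole and compensation resistance; none of these values come from the article.

  import numpy as np

  A0 = 1e5          # DC open-loop gain (assumed)
  f1 = 1e6          # first uncompensated pole, Hz (assumed)
  R = 10e3          # resistance seen by the compensation capacitor, ohms (assumed)

  fd = f1 / A0                       # dominant pole so the gain reaches 0 dB at f1
  C = 1.0 / (2 * np.pi * R * fd)     # required compensation capacitance

  def gain(f):
      s = 1j * 2 * np.pi * f
      return abs(A0 / ((1 + s / (2 * np.pi * fd)) * (1 + s / (2 * np.pi * f1))))

  print(f"fd = {fd:.1f} Hz, C = {C * 1e6:.2f} uF, |A(f1)| = {gain(f1):.2f}")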
The advantages of dominant pole compensation are:
1. It is simple and effective.
2. Noise immunity is improved since noise frequency components outside the bandwidth are eliminated.
Though simple and effective, this kind of conservative dominant pole compensation has two drawbacks:
It reduces the bandwidth of the amplifier, thereby reducing available open loop gain at higher frequencies. This, in turn, reduces the amount of feedback available for distortion correction, etc. at higher frequencies.
It reduces the amplifier's slew rate. This reduction results from the time it takes the finite current driving the compensated stage to charge the compensating capacitor (see the sketch below). The result is the inability of the amplifier to reproduce high amplitude, rapidly changing signals accurately.
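The slew-rate limit follows directly from I = C·dV/dt; the bias current and capacitor value below are generic illustrative numbers, not taken from any specific op amp.

  I_drive = 20e-6    # available charging current, A (assumed)
  C_comp = 30e-12    # compensation capacitance, F (assumed)

  slew_rate = I_drive / C_comp          # V/s
  print(f"slew rate ~ {slew_rate / 1e6:.2f} V/us")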
Often, the implementation of dominant-pole compensation results in the phenomenon of Pole splitting. This results in the lowest frequency pole of the uncompensated amplifier "moving" to an even lower frequency to become the dominant pole, and the higher-frequency pole of the uncompensated amplifier "moving" to a higher frequency. To overcome these disadvantages, pole zero compensation is used.
Other methods
Some other compensation methods are: lead compensation, lead–lag compensation and feed-forward compensation.
Lead compensation. Whereas dominant pole compensation places or moves poles in the open loop response, lead compensation places a zero in the open loop response to cancel one of the existing poles.
Lead–lag compensation places both a zero and a pole in the open loop response, with the pole usually being at an open loop gain of less than one.
Feed-forward compensation uses a capacitor to bypass a stage in the amplifier at high frequencies, thereby eliminating the pole that stage creates.
The purpose of these three methods is to allow greater open loop bandwidth while still maintaining amplifier closed loop stability. They are often used to compensate high gain, wide bandwidth amplifiers.
Footnotes
See also
Pole splitting
Bode plot
Negative feedback amplifier
Step response
Electronic design | Frequency compensation | [
"Engineering"
] | 1,639 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
607,084 | https://en.wikipedia.org/wiki/Library%20of%20Congress%20Living%20Legend | A Library of Congress Living Legend was someone recognized by the Library of Congress for creative contributions to American life. Those honored include artists, writers, activists, film makers, physicians, entertainers, sports figures, and public servants. Librarian of Congress Carla Hayden retired the program in 2018.
List of honorees
Hank Aaron (died 2021)
Madeleine Albright (died 2022)
Muhammad Ali (died 2016)
Mario Andretti
Ernie Banks (died 2015)
Harry Belafonte (died 2023)
Tony Bennett (died 2023)
James H. Billington (died 2018)
Big Bird (original performer Caroll Spinney died 2019)
Larry Bird
Herblock (died 2001)
Judy Blume
Julian Bond (died 2015)
T. Berry Brazelton (died 2018)
Gwendolyn Brooks (died 2000)
Dave Brubeck (died 2012)
Kobe Bryant (died 2020)
William F. Buckley, Jr. (died 2008)
Carol Burnett
Laura Bush
Ben Carson
Benny Carter (died 2003)
Johnny Cash (died 2003)
Vinton Cerf
Ray Charles (died 2004)
Linda Chavez
Julia Child (died 2004)
Beverly Cleary (died 2021)
David Copperfield
Bill Cosby
Walter Cronkite (died 2009)
Merce Cunningham (died 2009)
Michael DeBakey (died 2008)
Sylvia Earle
Marian Wright Edelman
Ahmet Ertegun (died 2006)
Suzanne Farrell
John Kenneth Galbraith (died 2006)
Andrew Goodpaster (died 2005)
Stephen Jay Gould (died 2002)
Katharine Graham (died 2001)
Archie Green (died 2009)
Thomas Hampson
Herbie Hancock
Mickey Hart
Al Hirschfeld (died 2003)
Bob Hope (died 2003)
Marta Casals Istomin
Glenn R. Jones (died 2015)
Quincy Jones (died 2024)
Jenette Kahn
Max Kampelman (died 2013)
George Kennan (died 2005)
Jackie Joyner Kersee
B.B. King (died 2015)
Billie Jean King
Jeane Kirkpatrick (died 2006)
John Kluge (died 2010)
Ursula K. Le Guin (died 2018)
Annie Leibovitz
Miguel León-Portilla (died 2019)
Carl Lewis
John Lewis (died 2020)
Mario Vargas Llosa
Alan Lomax (died 2002)
Yo-Yo Ma
Robert McCloskey (died 2003)
David McCullough (died 2022)
Mark McGwire
Rita Moreno
Toni Morrison (died 2019)
Odetta (died 2008)
Gordon Parks (died 2006)
Dolly Parton
Katherine Paterson
I. M. Pei (died 2019)
Jaroslav Pelikan (died 2006)
Itzhak Perlman
Colin Powell (died 2021)
Leontyne Price
Tito Puente (died 2000)
Sally K. Ride (died 2012)
Cal Ripken
Cokie Roberts (died 2019)
Frank Robinson (died 2019)
Fred Rogers (died 2003)
Philip Roth (died 2018)
Bob Schieffer
Gunther Schuller (died 2015)
Martin Scorsese
Pete Seeger (died 2014)
Maurice Sendak (died 2012)
Bobby Short (died 2005)
Stephen Sondheim (died 2021)
Steven Spielberg
Ralph Stanley (died 2016)
Gloria Steinem
Isaac Stern (died 2001)
Barbra Streisand
William Styron (died 2006)
Harold Varmus
Gwen Verdon (died 2000)
Lew Wasserman (died 2002)
Fred L. Whipple (died 2004)
Joseph Wilson (died 2015)
Tiger Woods
Herman Wouk (died 2019)
See also
List of awards for contributions to culture
List of medicine awards
References
External links
Library of Congress to Honor "Living Legends" (press release, with initial honorees). April 14, 2000. Public Affairs Office. Library of Congress official website
Living Legends (full list). Library of Congress official website, archived from the Internet Wayback Machine. Original site no longer extant.
Living Legend
Arts awards in the United States
Medicine awards
Governance and civic leadership awards
American sports trophies and awards
2000 establishments in the United States
2018 disestablishments in the United States | Library of Congress Living Legend | [
"Technology"
] | 808 | [
"Science and technology awards",
"Medicine awards"
] |
607,226 | https://en.wikipedia.org/wiki/Preventive%20war | A preventive war is an armed conflict "initiated in the belief that military conflict, while not imminent, is inevitable, and that to delay would involve greater risk." The party which is being attacked has a latent threat capability or it has shown that it intends to attack in the future, based on its past actions and posturing. A preventive war aims to forestall a shift in the balance of power by strategically attacking before the balance of power has had a chance to shift in the favor of the targeted party. Preventive war is distinct from preemptive strike, which is the first strike when an attack is imminent. Preventive uses of force "seek to stop another state . . . from developing a military capability before it becomes threatening or to hobble or destroy it thereafter, whereas [p]reemptive uses of force come against a backdrop of tactical intelligence or warning indicating imminent military action by an adversary."
Criticism
The majority view is that a preventive war undertaken without the approval of the United Nations is illegal under the modern framework of international law. The consensus is that preventive war "goes beyond what is acceptable in international law" and lacks legal basis. The UN High-level Panel on Threats, Challenges and Change stopped short of rejecting the concept outright but suggested that there is no right to preventive war. If there are good grounds for initiating preventive war, the matter should be put to the UN Security Council, which can authorize such action, given that one of the Council's main functions under Chapter VII of the UN Charter ("Action with Respect to Threats to the Peace, Breaches of the Peace, and Acts of Aggression") is to enforce the obligation of member states under Article 2, Paragraph 4 to "refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state". The Charter's drafters assumed that the Council might need to employ preventive force to forestall aggression such as that initiated by Nazi Germany in the 1930s.
Examples
The Axis powers in World War II routinely invaded neutral countries on grounds of prevention and began the invasion of Poland in 1939 by claiming the Poles had attacked a border outpost first. In 1940, Germany invaded Denmark and Norway and argued that Britain might have used them as launching points for an attack or prevented supply of strategic materials to Germany. In the summer of 1941, Germany invaded the Soviet Union, inaugurating the bloody and brutal land war by claiming that a Judeo-Bolshevik conspiracy threatened the Reich. In late 1941, the Anglo-Soviet invasion of Iran was carried out to secure a supply corridor of petrol to the Soviet Union. Iranian Shah Rezā Shāh appealed to US President Franklin Roosevelt for help but was rebuffed on the grounds that "movements of conquest by Germany will continue and will extend beyond Europe to Asia, Africa, and even to the Americas, unless they are stopped by military force."
Pearl Harbor
Perhaps the most famous example of preventive war is the attack on Pearl Harbor by the Empire of Japan on December 7, 1941. Many in the US and Japan believed war to be inevitable. This belief, coupled with the crippling US economic embargo that was rapidly degrading Japanese military capability, led the Japanese leadership to believe it was better to have the war as soon as possible.
The sneak attack was partly motivated by a desire to destroy the US Pacific Fleet so as to allow Japan to advance with reduced opposition from the US while it secured Japanese oil supplies by fighting the British Empire and the Dutch Empire for control of the rich East Indian (Dutch East Indies, Malay Peninsula) oil fields. In 1940, American tension over Japanese military actions and expansionism in the Far East increased. For example, in May 1940, the US Pacific Fleet, which had been stationed on the West Coast, was moved forward to an "advanced" position at Pearl Harbor in Honolulu, Hawaii.
The move was opposed by some US Navy officials, including their commander, Admiral James Otto Richardson, who was relieved by Roosevelt. Even so, the Far East Fleet was not significantly reinforced. Another ineffective plan to reinforce the Pacific was a rather late relocation of fighter planes to bases located on the Pacific islands like Wake Island, Guam, and the Philippines. For a long time, Japanese leaders, especially leaders of the Imperial Japanese Navy, had known that the large US military strength and production capacity posed a long-term threat to Japan's imperialist desires, especially if hostilities broke out in the Pacific. War games on both sides had long reflected those expectations.
Iraq War (2003–2011)
The 2003 invasion of Iraq was framed primarily as a preemptive war by the George W. Bush administration, although President Bush also argued it was supported by Security Council Resolutions: "Under Resolutions 678 and 687—both still in effect—the United States and our allies are authorized to use force in ridding Iraq of weapons of mass destruction." At the time, the US public and its allies were led to believe that Ba'athist Iraq might have restarted its nuclear weapons program or been "cheating" on its obligations to dispose of its large stockpile of chemical weapons dating from the Iran–Iraq War. Supporters of the war have argued it to be justified, as Iraq both harbored Islamic terrorist groups sharing a common hatred of the United States and was suspected to be developing weapons of mass destruction (WMD). Iraq's history of noncompliance of international security matters and its history of both developing and using such weapons were factors in the public perception of Iraq's having weapons of mass destruction.
In support of an attack on Iraq, US President George W. Bush stated in an address to the UN General Assembly on September 12, 2002 that the Iraqi "regime is a grave and gathering danger." However, despite extensive searches during the several years of occupation, the suspected weapons of mass destruction or weapons program infrastructure alleged by the Bush administration were not found to be functional or even known to most Iraqi leaders. Coalition forces instead found dispersed and sometimes-buried and partially dismantled stockpiles of abandoned and functionally expired chemical weapons. Some of the caches had been dangerously stored and were leaking, and many were then disposed of hastily and in secret, leading to secondary exposure from improper handling. Allegations of mismanagement and information suppression followed.
Case for preventive nuclear war
From 1945 onward, World War III between the US and the USSR was perceived by many as inevitable and imminent. Many high officials in the US military sector and some renowned luminaries in non-military fields advocated preventive war. According to their rationale, total war was inevitable, and it was senseless to permit the Russians to develop nuclear parity with the United States. Hence, the sooner the preventive war came the better, because a first strike would almost surely be decisive and less devastating.
Dean Acheson and James Burnham adhered to the view that the war was not merely inevitable but already going on, although the American people still did not realize it.
The US military sector widely and wholeheartedly shared the idea of preventive war. Most prominent proponents included Defense Secretary Louis A. Johnson, JCS Chairman Admiral Arthur W. Radford, Navy Secretary Francis P. Matthews, Admiral Ralph A. Ofstie, Air Force Secretary W. Stuart Symington, Air Force Chiefs Curtis LeMay and Nathan F. Twining, Air Force Generals George Kenney and Orvil A. Anderson, General Leslie Groves (the wartime commander of the Manhattan Project) and CIA Director Walter Bedell Smith. NSC-100 and several studies by SAC and JCS during the Korean War advocated preventive war too.
Preventive warriors in Washington included Deputy Secretary of Defense Paul Nitze, the State Department's Soviet expert Charles E. Bohlen, Senators John L. McClellan, Paul H. Douglas, Eugene D. Millikin, Brien McMahon (chairman of the Atomic Energy Committee), William Knowland, and Congressman Henry M. Jackson. The diplomatic circle included distinguished diplomats like George Kennan, William C. Bullitt (US Ambassador to Moscow), and John Paton Davies (from the same embassy).
John von Neumann of the Manhattan Project, and later a consultant for the RAND Corporation, said: "With the Russians it is not a question of whether but of when… If you say why not bomb them tomorrow, I say why not today?" Other renowned scientists and thinkers, such as Leo Szilard, William L. Laurence, James Burnham, and Bertrand Russell, joined the preventive effort. Preventive war in the late 1940s was argued for by "some very dedicated Americans." "Realists" repeatedly proposed preventive war. "The argument—prevent before it is too late—was quite common in the early atomic age and by no way limited to "the lunatic fringe." A famous atomic scientist expressed a concern that in 1946 public discussion of international problems, in the United States at least, "has moved dangerously towards a consideration of so-called preventive war. One sees this tendency perhaps most markedly in the trend of news in American newspapers."
Bernard Brodie noted that at least prior to 1950, preventive war was a "live issue … among a very small but earnest minority of American citizens." Brodie's dating is too narrow, as the preventive-war doctrine gained increasing support once the Korean War started. The late summer of 1950 saw "a flurry of articles" in the public press dealing with preventive war. One of them, in Time magazine (September 18, 1950), called for a buildup, followed by a "showdown" with the Russians by 1953. "1950 may have marked the high tide of 'preventive war' agitation…" According to a Gallup poll of July 1950, right after the outbreak of the war, 14% of those polled favored an immediate declaration of war on the USSR, a percentage that declined only slightly by the end of the war. "So preventive war thinking was surprisingly widespread in the early nuclear age, the period from mid-1945 through late 1954."
The preventive warriors remained a minority in America's postwar political arena, and Washington's elder statesmen soundly rejected their arguments. However, during several of the East-West confrontations that marked the first decade of the Cold War, well-placed officials in both the Truman and Eisenhower administrations urged their Presidents to launch preventive strikes on the Soviet Union; an entry in Truman's secret personal journal of January 27, 1952, records the President's own private musings along these lines.
In 1953, Eisenhower wrote in a summary memorandum to his Secretary of State, John Foster Dulles, that in present circumstances "we would be forced to consider whether or not our duty to future generations did not require us to initiate war at the most propitious moment we could designate." In May 1954, the JCS's Advance Study Group proposed that Eisenhower consider "deliberately precipitating war with the USSR in the near future," before Soviet thermonuclear capability became a real menace. The same year, Eisenhower asked in a meeting of the National Security Council: "Should the United States now get ready to fight the Soviet Union?" and pointed out that "he had brought up this question more than once at prior Council meetings and he had never done so facetiously." By the fall of 1954, Eisenhower had made up his mind and approved a Basic National Security Policy paper which stated unequivocally that "the United States and its allies must reject the concept of preventive war, or acts intended to provoke war."
Winston Churchill was more resolute about preventive war. He argued repeatedly in the late 1940s that matters needed to be brought to a head with the Soviets before it was too late, while the United States still enjoyed a nuclear monopoly. In 1954 Charles de Gaulle regretted that it was by then too late. The same regret over a missed opportunity was later expressed by Curtis LeMay and Henry Kissinger.
See also
A Clean Break: A New Strategy for Securing the Realm
Command responsibility
Caroline affair
Pre-emptive nuclear strike
Imperialism
Jus ad bellum
Kellogg–Briand Pact
Legality of the Iraq War
Military science
UN Charter
References
External links
The Caroline Case : Anticipatory Self-Defence in Contemporary International Law (Miskolc Journal of International Law v.1 (2004) No. 2 pp. 104-120)
The American Strategy of Preemptive War and International Law
Counterterrorism in the United States
Aggression in international law
Law of war
Wars by type
Prevention
Warfare by type | Preventive war | [
"Biology"
] | 2,544 | [
"Behavior",
"Aggression",
"Aggression in international law"
] |
607,233 | https://en.wikipedia.org/wiki/Rossi%20X-ray%20Timing%20Explorer | The Rossi X-ray Timing Explorer (RXTE) was a NASA satellite that observed the time variation of astronomical X-ray sources, named after physicist Bruno Rossi. The RXTE had three instruments — an All-Sky Monitor, the High-Energy X-ray Timing Experiment (HEXTE) and the Proportional Counter Array. The RXTE observed X-rays from black holes, neutron stars, X-ray pulsars and X-ray bursts. It was funded as part of the Explorer program and was also called Explorer 69.
RXTE was launched from Cape Canaveral on 30 December 1995, at 13:48:00 UTC, on a Delta II launch vehicle. Its International Designator is 1995-074A.
Mission
The X-Ray Timing Explorer (XTE) mission had the primary objective of studying the temporal and broad-band spectral phenomena associated with stellar and galactic systems containing compact objects, in the energy range 2–200 keV and on time scales from microseconds to years. The scientific payload consisted of two pointed instruments, the Proportional Counter Array (PCA) and the High-Energy X-ray Timing Experiment (HEXTE), and the All Sky Monitor (ASM), which scanned over 70% of the sky each orbit. All of the XTE observing time was available to the international scientific community through a peer review of submitted proposals. XTE used a new spacecraft design that allowed flexible operations through rapid pointing, high data rates, and nearly continuous receipt of data at the Science Operations Center (SOC) at Goddard Space Flight Center via a Multiple Access link to the Tracking and Data Relay Satellite System (TDRSS). XTE was highly maneuverable, with a slew rate of greater than 6° per minute. The PCA/HEXTE could be pointed anywhere in the sky to an accuracy of less than 0.1°, with an aspect knowledge of around 1 arcminute. Rotatable solar panels enabled anti-sunward pointing to coordinate with ground-based night-time observations. Two pointable high-gain antennas maintained nearly continuous communication with the TDRSS. This, together with 1 GB (approximately four orbits) of on-board solid-state data storage, gave added flexibility in scheduling observations.
Telecommunications
Required continuous TDRSS Multiple Access (MA) return-link coverage except for the zone of exclusion: real-time and playback of engineering/housekeeping data at 16 or 32 kbit/s; playback of science data at 48 or 64 kbit/s.
Required 20 minutes of SSA contacts with alternating TDRSS per orbit: real-time and playback of engineering/housekeeping data at 32 kbit/s; playback of science data at 512 or 1024 kbit/s.
For launch and contingency, required TDRSS MA/SSA real-time engineering and housekeeping at 1 kbit/s.
The bit error rate was required to be less than 1 in 10^8 for at least 95% of the orbits.
Instruments
All-Sky Monitor (ASM)
The All-Sky Monitor (ASM) provided all-sky X-ray coverage, to a sensitivity of a few percent of the Crab Nebula intensity in one day, in order to provide both flare alarms and long-term intensity records of celestial X-ray sources. The ASM consisted of three wide-angle shadow cameras equipped with position-sensitive proportional counters. The instrumental properties were:
Energy range: 2–12-keV;
Time resolution: observes 80% of the sky every 90 minutes;
Spatial resolution: 3' × 15';
Number of shadow cameras: 3, each with 6° × 90° FoV;
Collecting area: ;
Detector: Xenon proportional counter, position-sensitive;
Sensitivity: 30 mCrab.
It was built by the CSR at Massachusetts Institute of Technology. The principal investigator was Dr. Hale Bradt.
High Energy X-ray Timing Experiment (HEXTE)
The High-Energy X-ray Timing Experiment (HEXTE) was a scintillator array for the study of temporal and temporal/spectral effects of the hard X-ray (20 to 200 keV) emission from galactic and extragalactic sources. The HEXTE consisted of two clusters, each containing four phoswich scintillation detectors. Each cluster could "rock" (beam switch) along mutually orthogonal directions to provide background measurements 1.5° or 3.0° away from the source every 16 to 128 seconds. In addition, the input was sampled at 8 microseconds so as to detect time-varying phenomena. Automatic gain control was provided by a radioactive source mounted in each detector's field of view. The HEXTE's basic properties were:
Energy range: 15–250 keV;
Energy resolution: 15% at 60 keV;
Time sampling: 8 microseconds;
Field of view: 1° FWHM;
Detectors: 2 clusters of 4 NaI/CsI scintillation counters;
Collecting area: 2 × ;
Sensitivity: 1-Crab = 360 count/second per HEXTE cluster;
Background: 50 count/second per HEXTE cluster.
The HEXTE was designed and built by the Center for Astrophysics & Space Sciences (CASS) at the University of California, San Diego. The HEXTE principal investigator was Dr. Richard E. Rothschild.
Proportional Counter Array (PCA)
The Proportional Counter Array (PCA) provided a large X-ray detector area, in the energy range 2 to 60 keV, for the study of temporal/spectral effects in the X-ray emission from galactic and extragalactic sources. The PCA was an array of five proportional counters. The instrumental properties were:
Energy range: 2–60 keV;
Energy resolution: <18% at 6 keV;
Time resolution: 1 μs
Spatial resolution: collimator with 1° (FWHM);
Detectors: 5 proportional counters;
Collecting area: ;
Layers: 1 propane veto; 3 Xenon, each split into two; 1 Xenon veto layer;
Sensitivity: 0.1-mCrab;
Background: 90-mCrab.
The PCA was built by the Laboratory for High Energy Astrophysics (LHEA) at Goddard Space Flight Center. The principal investigator was Jean Swank.
Results
Observations from the Rossi X-ray Timing Explorer have been used as evidence for the existence of the frame-dragging effect predicted by the theory of general relativity of Einstein. RXTE results have, as of late 2007, been used in more than 1400 scientific papers.
In January 2006, it was announced that Rossi had been used to locate a candidate intermediate-mass black hole named M82 X-1. In February 2006, data from RXTE was used to prove that the diffuse background X-ray glow in our galaxy comes from innumerable, previously undetected white dwarfs and from other stars' coronae. In April 2008, RXTE data was used to infer the size of the smallest known black hole.
RXTE ceased science operations on 12 January 2012.
Atmospheric entry
NASA scientists had said that the decommissioned RXTE would re-enter the Earth's atmosphere "between 2014 and 2023". Later, it became clear that the satellite would re-enter in late April or early May 2018, and the spacecraft fell out of orbit on 30 April 2018.
See also
List of X-ray space telescopes
Neutron Star Interior Composition Explorer (NICER, launched in June 2017 and attached to ISS)
References
External links
MIT's Rossi X-Ray Timing Explorer Project
NASA RXTE Mission Site
Video documentary
Variations in the X-ray Sky by RXTE (1997)
RXTE Reveals the Cloudy Cores of Active Galaxies
Spacecraft launched in 1995
Spacecraft which reentered in 2018
Explorers Program
Space telescopes
X-ray telescopes | Rossi X-ray Timing Explorer | [
"Astronomy"
] | 1,618 | [
"Space telescopes"
] |
607,253 | https://en.wikipedia.org/wiki/Living%20Machine | A Living Machine is a form of ecological sewage treatment based on fixed-film ecology.
The Living Machine system was commercialized and is marketed by Living Machine Systems, L3C, a corporation based in Charlottesville, Virginia, United States.
Examples
Examples of Living Machines are mechanical composters for industrial kitchens, effective microorganisms as fertilizer for agricultural purposes, and Integrated Biotectural systems in landscaping and architecture like Earthships or the IBTS Greenhouse.
Components like tomato plants (for additional water purification) and fish (for food) have been part of the living, ecosystem-like designs. The theory does not limit the size of the system or the number of species. One design optimum is a natural ecosystem designed for a particular purpose, such as a sewage-treating wetland suited to the local ecosystem. Another optimum is an economically viable system returning a profit for the investor. The practice of permaculture is one example of a compromise between the two optimum design points.
The scale of Living Machine systems ranges from the individual building to community-scale public works. Some of the earliest Living Machines were used to treat domestic wastewater in small, ecologically-conscious villages, such as Findhorn Community in Scotland. The latest-generation Tidal Flow Wetland Living Machines are being used in major urban office buildings, military bases, housing developments, resorts and institutional campuses.
Living Machine System Process
“Fixed film ecology” has superseded systems based on hydroponics or a fluid medium. In fixed film systems, the wetland cells are filled with a solid aggregate medium having extensive surface area for beneficial biofilm (treatment bacteria) growth. Fixed film ecology allows denser and more diverse micro-ecosystems to form than does a liquid medium. These ecosystems go well beyond bacteria to include a variety of organisms up to and including macro-vegetation.
Tidal cycles (filling and draining the wetland in accelerated tidal action, with 12 or more cycles per day) are used to passively bring oxygen into the wetland cells. This action mimics the type of biological action that occurs in natural tidal estuaries. Tidal flow wetlands replace the need to blow air into a liquid medium - they use gravity to bring atmospheric oxygen into the cell when it is drained.
See also
Bioremediation
Rain garden
Anaerobic digestion
Constructed wetlands
Biomimicry
IBTS Greenhouse
References
External links
Landscape Machine concept, Wageningen University, the Netherlands
Living Machine, L3C website
Sewerage
Environmental engineering
Aquatic ecology
Environmental soil science
Pollution control technologies
Systems ecologists
Landscape architecture
Water conservation | Living Machine | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 514 | [
"Chemical engineering",
"Landscape architecture",
"Environmental soil science",
"Pollution control technologies",
"Water pollution",
"Sewerage",
"Civil engineering",
"Ecosystems",
"Environmental engineering",
"Aquatic ecology",
"Architecture"
] |
607,286 | https://en.wikipedia.org/wiki/Hilbert%27s%20program | In mathematics, Hilbert's program, formulated by German mathematician David Hilbert in the early 1920s, was a proposed solution to the foundational crisis of mathematics, when early attempts to clarify the foundations of mathematics were found to suffer from paradoxes and inconsistencies. As a solution, Hilbert proposed to ground all existing theories to a finite, complete set of axioms, and provide a proof that these axioms were consistent. Hilbert proposed that the consistency of more complicated systems, such as real analysis, could be proven in terms of simpler systems. Ultimately, the consistency of all of mathematics could be reduced to basic arithmetic.
Gödel's incompleteness theorems, published in 1931, showed that Hilbert's program was unattainable for key areas of mathematics. In his first theorem, Gödel showed that any consistent system with a computable set of axioms which is capable of expressing arithmetic can never be complete: it is possible to construct a statement that can be shown to be true, but that cannot be derived from the formal rules of the system. In his second theorem, he showed that such a system could not prove its own consistency, so it certainly cannot be used to prove the consistency of anything stronger with certainty. This refuted Hilbert's assumption that a finitistic system could be used to prove its own consistency, and hence the consistency of the stronger theories built upon it.
Statement of Hilbert's program
The main goal of Hilbert's program was to provide secure foundations for all mathematics. In particular, this should include:
A formulation of all mathematics; in other words all mathematical statements should be written in a precise formal language, and manipulated according to well defined rules.
Completeness: a proof that all true mathematical statements can be proved in the formalism.
Consistency: a proof that no contradiction can be obtained in the formalism of mathematics. This consistency proof should preferably use only "finitistic" reasoning about finite mathematical objects.
Conservation: a proof that any result about "real objects" obtained using reasoning about "ideal objects" (such as uncountable sets) can be proved without using ideal objects.
Decidability: there should be an algorithm for deciding the truth or falsity of any mathematical statement.
Gödel's incompleteness theorems
Kurt Gödel showed that most of the goals of Hilbert's program were impossible to achieve, at least if interpreted in the most obvious way. Gödel's second incompleteness theorem shows that any consistent theory powerful enough to encode addition and multiplication of integers cannot prove its own consistency. This presents a challenge to Hilbert's program:
It is not possible to formalize all mathematical true statements within a formal system, as any attempt at such a formalism will omit some true mathematical statements. There is no complete, consistent extension of even Peano arithmetic based on a computably enumerable set of axioms.
A theory such as Peano arithmetic cannot even prove its own consistency, so a restricted "finitistic" subset of it certainly cannot prove the consistency of more powerful theories such as set theory.
There is no algorithm to decide the truth (or provability) of statements in any consistent extension of Peano arithmetic. Strictly speaking, this negative solution to the Entscheidungsproblem appeared a few years after Gödel's theorem, because at the time the notion of an algorithm had not been precisely defined.
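Stated slightly more formally (a standard textbook formulation rather than a quotation from Gödel's papers), for a consistent, computably axiomatized theory T that interprets enough arithmetic:
First incompleteness theorem: there is an arithmetical sentence GT such that T ⊬ GT and T ⊬ ¬GT, so T is incomplete (in Rosser's strengthened form, the consistency of T alone suffices for both halves).
Second incompleteness theorem: T ⊬ Con(T), where Con(T) is an arithmetical sentence formalizing the statement "T is consistent".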
Hilbert's program after Gödel
Many current lines of research in mathematical logic, such as proof theory and reverse mathematics, can be viewed as natural continuations of Hilbert's original program. Much of it can be salvaged by changing its goals slightly (Zach 2005), and with the following modifications some of it was successfully completed:
Although it is not possible to formalize all mathematics, it is possible to formalize essentially all the mathematics that anyone uses. In particular Zermelo–Fraenkel set theory, combined with first-order logic, gives a satisfactory and generally accepted formalism for almost all current mathematics.
Although it is not possible to prove completeness for systems that can express at least the Peano arithmetic (or, more generally, that have a computable set of axioms), it is possible to prove forms of completeness for many other interesting systems. An example of a non-trivial theory for which completeness has been proved is the theory of algebraically closed fields of given characteristic.
The question of whether there are finitary consistency proofs of strong theories is difficult to answer, mainly because there is no generally accepted definition of a "finitary proof". Most mathematicians in proof theory seem to regard finitary mathematics as being contained in Peano arithmetic, and in this case it is not possible to give finitary proofs of reasonably strong theories. On the other hand, Gödel himself suggested the possibility of giving finitary consistency proofs using finitary methods that cannot be formalized in Peano arithmetic, so he seems to have had a more liberal view of what finitary methods might be allowed. A few years later, Gentzen gave a consistency proof for Peano arithmetic. The only part of this proof that was not clearly finitary was a certain transfinite induction up to the ordinal ε0. If this transfinite induction is accepted as a finitary method, then one can assert that there is a finitary proof of the consistency of Peano arithmetic. More powerful subsets of second-order arithmetic have been given consistency proofs by Gaisi Takeuti and others, and one can again debate about exactly how finitary or constructive these proofs are. (The theories that have been proved consistent by these methods are quite strong, and include most "ordinary" mathematics.)
Although there is no algorithm for deciding the truth of statements in Peano arithmetic, there are many interesting and non-trivial theories for which such algorithms have been found. For example, Tarski found an algorithm that can decide the truth of any statement in analytic geometry (more precisely, he proved that the theory of real closed fields is decidable). Given the Cantor–Dedekind axiom, this algorithm can be regarded as an algorithm to decide the truth of any statement in Euclidean geometry. This is substantial as few people would consider Euclidean geometry a trivial theory.
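For reference, the ordinal ε0 appearing in Gentzen's proof (a standard definition, not specific to this article) is the least ordinal closed under base-ω exponentiation:
ε0 = sup {ω, ω^ω, ω^(ω^ω), ...}, equivalently the least ordinal α satisfying ω^α = α.
Gentzen's result can then be read as follows: Peano arithmetic proves transfinite induction up to each ordinal strictly below ε0, but not up to ε0 itself, and induction up to ε0 is the one ingredient of his consistency proof for Peano arithmetic that is not clearly finitary.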
See also
Grundlagen der Mathematik
Foundational crisis of mathematics
References
G. Gentzen, 1936/1969. Die Widerspruchfreiheit der reinen Zahlentheorie. Mathematische Annalen 112:493–565. Translated as 'The consistency of arithmetic', in The collected papers of Gerhard Gentzen, M. E. Szabo (ed.), 1969.
D. Hilbert. 'Die Grundlegung der elementaren Zahlenlehre'. Mathematische Annalen 104:485–94. Translated by W. Ewald as 'The Grounding of Elementary Number Theory', pp. 266–273 in Mancosu (ed., 1998) From Brouwer to Hilbert: The debate on the foundations of mathematics in the 1920s, Oxford University Press. New York.
S.G. Simpson, 1988. Partial realizations of Hilbert's program (pdf). Journal of Symbolic Logic 53:349–363.
R. Zach, 2006. Hilbert's Program Then and Now. Philosophy of Logic 5:411–447, arXiv:math/0508572 [math.LO].
External links
Mathematical logic
Proof theory
Program | Hilbert's program | [
"Mathematics"
] | 1,553 | [
"Hilbert's problems",
"Mathematical logic",
"Mathematical problems",
"Proof theory"
] |
607,411 | https://en.wikipedia.org/wiki/Big%20Basin%20Redwoods%20State%20Park | Big Basin Redwoods State Park is a state park in the U.S. state of California, located in Santa Cruz County, about northwest of Santa Cruz. The park contains almost all of the Waddell Creek watershed, which was formed by the seismic uplift of its rim, and the erosion of its center by the many streams in its bowl-shaped depression.
Big Basin is California's oldest state park, established in 1902, earning its designation as a California Historical Landmark. Its original have been increased over the years to over . It is part of the Northern California coastal forests ecoregion and is home to the largest continuous stand of ancient coast redwoods south of San Francisco. It contains of old-growth forest as well as recovering redwood forest, with mixed conifer, oaks, chaparral and riparian habitats. Elevations in the park vary from sea level to over 600 m (2,000 ft). The climate ranges from foggy and damp near the ocean to sunny, warm ridge tops.
The park has over of trails. Some of these trails link Big Basin to Castle Rock State Park and the eastern reaches of the Santa Cruz range. The Skyline-to-the-Sea Trail threads its way through the park along Waddell Creek to Waddell Beach, and the adjacent Theodore J. Hoover Natural Preserve, a freshwater marsh.
The park has many waterfalls, a wide variety of environments (from lush canyon bottoms to sparse chaparral-covered slopes), many species of mammals (deer, raccoons, an occasional bobcat) and abundant bird life – including Steller's jays, egrets, herons and acorn woodpeckers.
The CZU Lightning Complex fires in August 2020 burned over 97% of Big Basin and destroyed the park headquarters, closing the park for 2 years during rebuilding efforts before it reopened in Summer 2022.
History
Archaeological evidence found sporadically within the Park indicates that prehistoric people inhabited its old growth forests. Numerous resources would have been available to California Indians in the old growth forests, such as basketry material, plant foods like acorns and bulbs as well as animal prey for hunters and perhaps traditional sacred places. Ohlone tribes that lived on watercourses which begin in the park were the Quiroste, Achistaca, Cotoni and Sayante. In October 1769, the Portola expedition encountered the redwoods of southern Santa Cruz County, and camped at the mouth of Waddell Creek, in present-day Big Basin, later that month. Although many in the party had been ill with scurvy, they gorged themselves on berries and quickly recovered. This miraculous recovery, as it seemed at the time, inspired the name given to the valley: 'Cañada de la Salud' or Canyon of Health.
By the late 19th century, redwood forests were gaining international appreciation while also being decimated. Early conservationists, including notable Santa Cruzans William T. Jeter and A.A. Taylor, were joined by Santa Clara County activists Andrew P. Hill, Father Robert Kenna and Carrie Stevens Walter. Their movement to preserve the Big Basin redwood forest began at Stanford University on May 1. Soon after, Santa Cruzans led an excursion to the park, where seven men and two women formed the Sempervirens Club. The Sempervirens galvanized the state-wide effort, resulting in ground-breaking legislation being signed into law in March 1901. The official land transfer occurred in 1902: the California Redwood Park initially consisted of , most of it old growth forest.
In the following decades, visitation to Big Basin grew steadily as park amenities were developed. The Big Basin Inn offered cabins to rent, a restaurant, general store, barber shop, gas station and photographic studio. There were also a post office, a concrete swimming pool, boating areas, tennis courts and a dance floor. Campsites cost 50 cents a night in 1927 and many families stayed all summer. During the Great Depression of the 1930s, the Civilian Conservation Corps assigned a company to Big Basin. These men built the amphitheater, miles of trails, and many of the buildings still used today. The main administration building, built by the CCC in 1936, was listed on the National Register of Historic Places prior to its destruction in the 2020 fires.
Save the Redwoods League purchased a parcel known as Cascade Creek in 2020 that links Big Basin with Año Nuevo State Park.
2020 CZU Lightning Complex Wildfire
The CZU Lightning Complex fires started on August 16, 2020 and burned 86,509 acres across Santa Cruz and San Mateo counties. The fire spread quickly, and the area was evacuated on August 18. On August 20, it was reported that the park's historic headquarters building had been completely destroyed, and the campgrounds around the park were extensively damaged. After actively burning for 37 days, the fires were contained on September 22. Over 97% of Big Basin was burned and nearly every structure was destroyed.
This was the first major wildfire in Big Basin in over 100 years, which had previously burned in 1904. This led to a greater intensity of the CZU fires, causing severe damage to the majority of the old growth trees. While some of the trees fell during and after the fires, the majority of the ancient redwoods remain standing. However, studies have shown that only 24% of the forest in Big Basin is still alive and regrowing due to the intensity of the fires and drought in the following years, and the old growth forest may never fully recover.
An April 2021 backcountry tour revealed the scorched landscape and the hundred structures destroyed, and the park superintendent estimated it might be up to a year before the public would be allowed safe access to park trails. The burnt wreckage of 1,490 structures and 15,000 charred trees, mainly Douglas fir, had fallen or were in danger of falling onto the hiking trails. One year after the fire, the clean up and rebuilding process began. The park remained closed to the public until July 22, 2022. Almost two years after the fire, Big Basin partially reopened 8 hiking trails for day use.
Flora
Although redwoods dominate the landscape, many other plant species are common in Big Basin. One will certainly see coast Douglas-fir, tan oak, Pacific madrone, and Pacific wax myrtle trees in the park. Competing for sunshine are also many shrubs such as red huckleberries, western azalea, and many varieties of ferns. Spring and summer bring the wildflowers: redwood sorrel, salal, redwood violets, trillium, star lily and mountain iris. The rains of fall and winter deliver hundreds of kinds of fungi in a startling variety of shapes, sizes and colors.
Upon climbing to higher elevations, one will find the forest growing thinner, as redwoods are replaced by more drought-tolerant species. The higher, drier ridges and slopes of Big Basin are typically full of chaparral vegetation: knobcone pines, chinquapin and buckeye create the canopy, with ceanothus, manzanita, chamise, and chaparral pea growing dense and low. Adding a splash of color are wildflowers such as Indian paintbrush, monkey flower, bush poppies and yerba santa.
Near the mouth of Waddell Creek is the Theodore J. Hoover Natural Preserve, a rare relatively undisturbed freshwater marsh. This special place provides habitat for a wide variety of birds, reptiles and amphibians. The nearby Rancho Del Oso Nature and History Center interprets the cultural and natural history of the area.
Fauna
Mammals such as black-tailed deer, western gray squirrels, chipmunks and raccoons are common, but foxes, coyotes, bobcats, and opossums are also present. Cougars are known to live in the park but are rarely sighted. Grizzly bears are extinct in California, but were numerous in the past. The last known human to die in California due to a grizzly attack in the wild occurred in Big Basin when, in 1875, William Waddell, a lumber mill owner, was killed near Waddell Creek.
Bird life is abundant throughout the park. Steller's jays and acorn woodpeckers are both seen and heard, and the dark-eyed junco is widespread. Less obvious are the brown creeper, Anna's hummingbird, northern flicker, olive-sided flycatcher and sharp-shinned hawk. The first marbled murrelet nest ever sighted was located in Big Basin not far from the park headquarters. These robin-sized seabirds nest high in the oldest coast Douglas-firs and redwoods to feed their young. They can be seen or heard at dawn and dusk, high above the forest canopy.
Many reptiles are also present, but aside from the ubiquitous Coast Range subspecies of the western fence lizard (Sceloporus occidentalis bocourtii), most are rarely seen due to their shy behavior. The only dangerous reptile in the park is the Pacific rattlesnake (Crotalus oreganus), found almost exclusively in the high, dry chaparral.
The damp, shady woodland floor is home to a variety of amphibians. Commonly seen species include the California newt (Taricha torosa torosa), Pacific tree frog (Pseudacris regilla), and arboreal salamander (Aneides lugubris). Less commonly seen are the black salamander (Aneides flavipunctatus) and California giant salamander (Dicamptodon ensatus) and the threatened California red-legged frog (Rana draytonii). Particularly intriguing are banana slugs (Ariolimax spp.), which can reach 6 inches long.
The butterfly, California sisters (Adelpha bredowii), flutter high in the tree canopies.
Camping
Big Basin Redwoods State Park previously had many options for camping, including cabins, developed campsites, and trail camps. Within the park, there were 146 individual campsites, 36 cabins, and five trail camps. The 2020 CZU Lightning Complex fires destroyed many campgrounds; as of Summer 2024, none have re-opened.
Each campground at Big Basin Redwoods State Park was open on a different schedule during the year. The Huckleberry and Sequoia Campgrounds were open year round, while Blooms Creek, Sempervirens, Wastahi and Sky Meadow Campgrounds were seasonal.
Access
The park is about two hours south of San Francisco, or seven hours north of Los Angeles.
Big Basin can be approached from the east, through redwood forest and coastal mountains, or from the coast, along State Route 1. The eastern route, over State Route 9 through Saratoga and smaller towns like Boulder Creek, is more popular because of the famous trees. This route passes Castle Rock State Park on the eastern side of the Santa Cruz range.
From SR 1, Gazos Creek road offers a pleasant fire-road route for mountain bikes (road closed to motor vehicles), which can then descend into the headquarters area or turn off on Johansen fire road to join China Grade above its intersection with State Route 236.
After reopening the park after the CZU Lightning Complex fires, the Santa Cruz Metropolitan Transit District expanded its bus route 35 service to run four trips to and from the park on weekends only.
In popular culture
Big Basin plays the part of the fictional "Bolderoc National Park" in the 1942 George Marshall film, The Forest Rangers. It also stands in for Muir Woods in the 1958 Alfred Hitchcock film, Vertigo and for Redwood National Park in the 1967 Disney film, The Gnome-Mobile.
See also
List of California state parks
References
Further reading
External links
California State Parks: Big Basin Redwoods State Park website
Hikingsanfrancisco.com: Big Basin Hiking
Gallery
State parks of California
Parks in Santa Cruz County, California
Coast redwood groves
Santa Cruz Mountains
Campgrounds in California
Parks in the San Francisco Bay Area
Protected areas established in 1902
1902 establishments in California
Civilian Conservation Corps in California
Old-growth forests | Big Basin Redwoods State Park | [
"Biology"
] | 2,457 | [
"Old-growth forests",
"Ecosystems"
] |
607,464 | https://en.wikipedia.org/wiki/Primordial%20soup | Primordial soup, also known as prebiotic soup, is the hypothetical set of conditions present on the Earth around 3.7 to 4.0 billion years ago. It is an aspect of the heterotrophic theory (also known as the Oparin–Haldane hypothesis) concerning the origin of life, first proposed by Alexander Oparin in 1924, and J. B. S. Haldane in 1929.
As formulated by Oparin, in the primitive Earth's surface layers, carbon, hydrogen, water vapour, and ammonia reacted to form the first organic compounds. The concept of a primordial soup gained credence in 1953 when the "Miller–Urey experiment" used a highly reduced mixture of gases—methane, ammonia and hydrogen—to form basic organic monomers, such as amino acids.
Historical background
The notion that living beings originated from inanimate materials comes from the Ancient Greeks—the theory known as spontaneous generation. Aristotle in the 4th century BCE gave a proper explanation, writing:
Aristotle also states that it is not only that animals originate from other similar animals, but also that living things do arise and always have arisen from lifeless matter. His theory remained the dominant idea on origin of life (outside that of deity as a causal agent) from the ancient philosophers to the Renaissance thinkers in various forms. With the birth of modern science, experimental refutations emerged. Italian physician Francesco Redi demonstrated in 1668 that maggots developed from rotten meat only in a jar where flies could enter, but not in a closed-lid jar. He concluded that: omne vivum ex vivo (All life comes from life).
The experiment of French chemist Louis Pasteur in 1859 is regarded as the death blow to spontaneous generation. He experimentally showed that organisms (microbes) can not grow in sterilised water, unless it is exposed to air. The experiment won him the Alhumbert Prize in 1862 from the French Academy of Sciences, and he concluded: "Never will the doctrine of spontaneous generation recover from the mortal blow of this simple experiment."
Evolutionary biologists believed that a kind of spontaneous generation, but different from the simple Aristotelian doctrine, must have worked for the emergence of life. French biologist Jean-Baptiste de Lamarck had speculated that the first life form started from non-living materials. "Nature, by means of heat, light, electricity and moisture", he wrote in 1809 in Philosophie Zoologique (The Philosophy of Zoology), "forms direct or spontaneous generation at that extremity of each kingdom of living bodies, where the simplest of these bodies are found".
When English naturalist Charles Darwin introduced the theory of natural selection in his 1859 book On the Origin of Species, his supporters, such as the German zoologist Ernst Haeckel, criticised him for not using his theory to explain the origin of life. Haeckel wrote in 1862: "The chief defect of the Darwinian theory is that it throws no light on the origin of the primitive organism—probably a simple cell—from which all the others have descended. When Darwin assumes a special creative act for this first species, he is not consistent, and, I think, not quite sincere."
Although Darwin did not speak explicitly about the origin of life in On the Origin of Species, he did mention a "warm little pond" in a letter to Joseph Dalton Hooker dated February 1, 1871:
Heterotrophic theory
A coherent scientific argument was introduced by Soviet biochemist Alexander Oparin in 1924. According to Oparin, in the primitive Earth's surface, carbon, hydrogen, water vapour, and ammonia reacted to form the first organic compounds. Unbeknownst to Oparin, whose writing was circulated only in Russian, the English scientist J. B. S. Haldane independently arrived at a similar conclusion in 1929. It was Haldane who first used the term "soup" to describe the accumulation of organic material and water in the primitive Earth.
According to the theory, organic compounds essential for life forms were synthesized in the primitive Earth under prebiotic conditions. The mixture of inorganic and organic compounds with water on the primitive Earth became the prebiotic or primordial soup. There, life originated and the first forms of life were able to use the organic molecules to survive and reproduce. Today the theory is variously known as the heterotrophic theory, heterotrophic origin of life theory, or the Oparin-Haldane hypothesis. Biochemist Robert Shapiro has summarized the basic points of the theory in its "mature form" as follows:
Early Earth had a chemically reducing atmosphere.
This atmosphere, exposed to energy in various forms, produced simple organic compounds ("monomers").
These compounds accumulated in the prebiotic soup, which may have been concentrated at places such as shorelines and oceanic vents.
By further transformation, more complex organic polymers – and ultimately life – developed in the soup.
Oparin's theory
Alexander Oparin first postulated his theory in Russia in 1924 in a small pamphlet titled Proiskhozhdenie Zhizny (The Origin of Life). According to Oparin, the primitive Earth's surface had a thick red-hot liquid, composed of heavy elements such as carbon (in the form of iron carbide). This nucleus was surrounded by the lightest elements, i.e. gases, such as hydrogen. In the presence of water vapour, carbides reacted with hydrogen to form hydrocarbons. Such hydrocarbons were the first organic molecules. These further combined with oxygen and ammonia to produce hydroxy- and amino-derivatives, such as carbohydrates and proteins. These molecules accumulated on the ocean's surface, becoming gel-like substances and growing in size. They gave rise to primitive organisms (cells), which he called coacervates. In his original theory, Oparin considered oxygen as one of the primordial gases; thus the primordial atmosphere was an oxidising one. However, when he elaborated his theory in 1936 (in a book by the same title, and translated into English in 1938), he modified the chemical composition of the primordial environment as strictly reducing, consisting of methane, ammonia, free hydrogen and water vapour—excluding oxygen.
In his 1936 work, impregnated by a Darwinian thought that involved a slow and gradual evolution from the simple to the complex, Oparin proposed a heterotrophic origin, result of a long process of chemical and pre-biological evolution, where the first forms of life should have been microorganisms dependent on the molecules and organic substances present in their external environment. That external environment was the primordial soup.
The idea of a heterotrophic origin was based, in part, on the universality of fermentative reactions, which, according to Oparin, should have first appeared in evolution due to its simplicity. This was opposed to the idea, widely accepted at that time, that the first organisms emerged endowed with an autotrophic metabolism, which included photosynthetic pigments, enzymes and the ability to synthesize organic compounds from CO2 and H2O; for Oparin it was impossible to reconcile the original photosynthetic organisms with the ideas of Darwinian evolution.
From the detailed analysis of the geochemical and astronomical data known at that date, Oparin also proposed a primitive atmosphere devoid of O2 and composed of CH4, NH3 and H2O; under these conditions it was pointed out that the origin of life had been preceded by a period of abiotic synthesis and subsequent accumulation of various organic compounds in the seas of primitive Earth. This accumulation resulted in the formation of a primordial broth containing a wide variety of molecules.
There, according to Oparin, a particular type of colloid, the coacervates, were formed due to the conglomeration of organic molecules and other polymers with positive and negative charges. Oparin suggested that the first living beings had been preceded by pre-cellular structures similar to those coacervates, whose gradual evolution gave rise to the appearance of the first organisms.
Like the coacervates, several of Oparin's original ideas have been reformulated and replaced; this includes, for example, the reducing character of the atmosphere on primitive Earth, the coacervates as a pre-cellular model and the primitive nature of glycolysis. In the same way, we now understand that the gradual processes are not necessarily slow, and we even know, thanks to the fossil record, that the origin and early evolution of life occurred in short geologic time lapses.
However, the general approach of Oparin's theory had great implications for biology, since his work achieved the transformation of the study of the origin of life from a purely speculative field to a structured and broad research program. Thus, since the second half of the twentieth century, Oparin's theory of the origin and early evolution of life has undergone a restructuring that accommodates the experimental findings of molecular biology, as well as the theoretical contributions of evolutionary biology.
A point of convergence between these two branches of biology and that has been perfectly incorporated into the heterotrophic origin theory is found in the RNA world hypothesis.
This links to the Soda Ocean Hypothesis, characterizing the primitive ocean with a higher carbonate mineral supersaturation.
Soda lakes are considered as environments that conserve and/or mimic ancient life conditions, and as "a recreated model of late Precambrian ocean chemistry" — that is, the "soda lake" environment that prepared the great explosion of life during the Cambrian.
Haldane's theory
J.B.S. Haldane independently postulated his primordial soup theory in 1929 in an eight-page article "The origin of life" in The Rationalist Annual. According to Haldane the primitive Earth's atmosphere was essentially reducing, with little or no oxygen. Ultraviolet rays from the Sun induced reactions on a mixture of water, carbon dioxide, and ammonia. Organic substances such as sugars and protein components (amino acids) were synthesised. These molecules "accumulated till the primitive oceans reached the consistency of hot dilute soup." The first reproducing things were created from this soup.
As to the priority over the theory, Haldane accepted that Oparin came first, saying, "I have very little doubt that Professor Oparin has the priority over me."
Monomer formation
One of the most important pieces of experimental support for the "soup" theory came in 1953. A graduate student, Stanley Miller, and his professor, Harold Urey, performed an experiment that demonstrated how organic molecules could have spontaneously formed from inorganic precursors, under conditions like those posited by the Oparin–Haldane hypothesis. The now-famous "Miller–Urey experiment" used a highly reduced mixture of gases—methane, ammonia and hydrogen—to form basic organic monomers, such as amino acids. This provided direct experimental support for the second point of the "soup" theory, and it is around the remaining two points of the theory that much of the debate now centers.
Apart from the Miller–Urey experiment, the next most important step in research on prebiotic organic synthesis was the demonstration by Joan Oró that the nucleic acid purine base, adenine, was formed by heating aqueous ammonium cyanide solutions. In support of abiogenesis in eutectic ice, more recent work demonstrated the formation of s-triazines (alternative nucleobases), pyrimidines (including cytosine and uracil), and adenine from urea solutions subjected to freeze-thaw cycles under a reductive atmosphere (with spark discharges as an energy source).
The Darwinian dynamic
The evolution of living systems by natural selection that presumably emerged in the primordial soup, and certain nonliving physical order-generating systems, were proposed to obey a common fundamental principle that was termed the Darwinian dynamic. The basic conditions necessary for natural selection to operate as conceived by Darwin are variation of type, heritability and competition for limited resources. These conditions can apply to short replicating RNA molecules that were presumably present in the primordial soup, and such RNA molecules have been proposed to have preceded the emergence of more complex life (see RNA world). The basic processes of natural selection applicable to short replicating RNA molecules were shown to have the same form and content as equations that govern the emergence of macroscopic order in nonliving systems maintained far from thermodynamic equilibrium. However, currently, the extent to which Darwinian principles apply to the presumed prebiotic and protocellular phases of life, as well as to non-biological systems, remains an unresolved issue in efforts to understand the emergence of life.
See also
Common descent
Entropy and life
Primordial sandwich
Primordial sea
References
Evolutionarily significant biological phenomena
Evolutionary biology
Origin of life
Metaphors referring to food and drink | Primordial soup | [
"Biology"
] | 2,686 | [
"Biological hypotheses",
"Evolutionary biology",
"Origin of life"
] |
607,495 | https://en.wikipedia.org/wiki/Freezing-point%20depression | Freezing-point depression is a drop in the maximum temperature at which a substance freezes, caused when a smaller amount of another, non-volatile substance is added. Examples include adding salt into water (used in ice cream makers and for de-icing roads), alcohol in water, ethylene or propylene glycol in water (used in antifreeze in cars), adding copper to molten silver (used to make solder that flows at a lower temperature than the silver pieces being joined), or the mixing of two solids such as impurities into a finely powdered drug.
In all cases, the substance added/present in smaller amounts is considered the solute, while the original substance present in larger quantity is thought of as the solvent. The resulting liquid solution or solid-solid mixture has a lower freezing point than the pure solvent or solid because the chemical potential of the solvent in the mixture is lower than that of the pure solvent, the difference between the two being proportional to the natural logarithm of the mole fraction. In a similar manner, the chemical potential of the vapor above the solution is lower than that above a pure solvent, which results in boiling-point elevation. Freezing-point depression is what causes sea water (a mixture of salt and other compounds in water) to remain liquid at temperatures below , the freezing point of pure water.
Explanation
Using vapour pressure
The freezing point is the temperature at which the liquid solvent and solid solvent are at equilibrium, so that their vapor pressures are equal. When a non-volatile solute is added to a volatile liquid solvent, the solution vapour pressure will be lower than that of the pure solvent. As a result, the solid will reach equilibrium with the solution at a lower temperature than with the pure solvent. This explanation in terms of vapor pressure is equivalent to the argument based on chemical potential, since the chemical potential of a vapor is logarithmically related to pressure. All of the colligative properties result from a lowering of the chemical potential of the solvent in the presence of a solute. This lowering is an entropy effect. The greater randomness of the solution (as compared to the pure solvent) acts in opposition to freezing, so that a lower temperature must be reached, over a broader range, before equilibrium between the liquid solution and solid solution phases is achieved. Melting point determinations are commonly exploited in organic chemistry to aid in identifying substances and to ascertain their purity.
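A minimal thermodynamic sketch of this argument (the standard ideal-solution treatment; the symbols are introduced here rather than taken from the article): at the freezing point of the solution, pure solid solvent is in equilibrium with solvent in the liquid mixture, so
μ*solid(T) = μ*liquid(T) + RT ln xA,
where xA is the mole fraction of solvent and the asterisks denote pure phases. For a dilute solute (ln xA ≈ −xB) and a small depression, this rearranges to
ΔTF ≈ (R·TF²/ΔHfus)·xB,
with TF the freezing point of the pure solvent and ΔHfus its molar enthalpy of fusion. Expressing the solute amount per kilogram of solvent recovers the linear law of the Formula section below, with KF = R·TF²·MA/ΔHfus, where MA is the molar mass of the solvent.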
Due to concentration and entropy
In the liquid solution, the solvent is diluted by the addition of a solute, so that fewer molecules are available to freeze (a lower concentration of solvent exists in a solution versus pure solvent). Re-establishment of equilibrium is achieved at a lower temperature at which the rate of freezing becomes equal to the rate of liquefying. The solute is not occluding or preventing the solvent from solidifying, it is simply diluting it so there is a reduced probability of a solvent making an attempt at freezing in any given moment.
At the lower freezing point, the vapor pressure of the liquid is equal to the vapor pressure of the corresponding solid, and the chemical potentials of the two phases are equal as well.
Uses
The phenomenon of freezing-point depression has many practical uses. The radiator fluid in an automobile is a mixture of water and ethylene glycol. The freezing-point depression prevents radiators from freezing in winter. Road salting takes advantage of this effect to lower the freezing point of the ice it is placed on. Lowering the freezing point allows the street ice to melt at lower temperatures, preventing the accumulation of dangerous, slippery ice. Commonly used sodium chloride can depress the freezing point of water to about . If the road surface temperature is lower, NaCl becomes ineffective and other salts are used, such as calcium chloride, magnesium chloride or a mixture of many. These salts are somewhat aggressive to metals, especially iron, so in airports safer media such as sodium formate, potassium formate, sodium acetate, and potassium acetate are used instead.
Freezing-point depression is used by some organisms that live in extreme cold. Such creatures have evolved means through which they can produce a high concentration of various compounds such as sorbitol and glycerol. This elevated concentration of solute decreases the freezing point of the water inside them, preventing the organism from freezing solid even as the water around them freezes, or as the air around them becomes very cold. Examples of organisms that produce antifreeze compounds include some species of arctic-living fish such as the rainbow smelt, which produces glycerol and other molecules to survive in frozen-over estuaries during the winter months. In other animals, such as the spring peeper frog (Pseudacris crucifer), the molality is increased temporarily as a reaction to cold temperatures. In the case of the peeper frog, freezing temperatures trigger a large-scale breakdown of glycogen in the frog's liver and subsequent release of massive amounts of glucose into the blood.
With the formula below, freezing-point depression can be used to measure the degree of dissociation or the molar mass of the solute. This kind of measurement is called cryoscopy (Greek cryo = cold, scopos = observe; "observe the cold") and relies on exact measurement of the freezing point. The degree of dissociation is measured by determining the van 't Hoff factor i by first determining mB and then comparing it to msolute. In this case, the molar mass of the solute must be known. The molar mass of a solute is determined by comparing mB with the amount of solute dissolved. In this case, i must be known, and the procedure is primarily useful for organic compounds using a nonpolar solvent. Cryoscopy is no longer as common a measurement method as it once was, but it was included in textbooks at the turn of the 20th century. As an example, it was still taught as a useful analytic procedure in Cohen's Practical Organic Chemistry of 1910, in which the molar mass of naphthalene is determined using a Beckmann freezing apparatus.
Laboratory uses
Freezing-point depression can also be used as a purity analysis tool when analyzed by differential scanning calorimetry. The results obtained are in mol%, but the method has its place, where other methods of analysis fail.
In the laboratory, lauric acid may be used to investigate the molar mass of an unknown substance via the freezing-point depression. The choice of lauric acid is convenient because the melting point of the pure compound is relatively high (43.8 °C). Its cryoscopic constant is 3.9 °C·kg/mol. By melting lauric acid with the unknown substance, allowing it to cool, and recording the temperature at which the mixture freezes, the molar mass of the unknown compound may be determined.
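As a sketch of the cryoscopic calculation (standard working with illustrative numbers, not figures from the article): rearranging the dilute-solution law given in the Formula section below for the molar mass MB of the unknown gives
MB = i·KF·wB / (ΔTF·wA),
where wB is the mass of solute dissolved in a mass wA of solvent. For example, if 1.0 g of a non-dissociating unknown (i = 1) dissolved in 50 g of lauric acid (KF ≈ 3.9 °C·kg/mol) lowers the freezing point by 0.60 °C, then MB ≈ (3.9 K·kg·mol⁻¹ × 1.0 g) / (0.60 K × 50 g) ≈ 0.13 kg/mol, i.e. about 130 g/mol.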
This is also the same principle acting in the melting-point depression observed when the melting point of an impure solid mixture is measured with a melting-point apparatus since melting and freezing points both refer to the liquid-solid phase transition (albeit in different directions).
In principle, the boiling-point elevation and the freezing-point depression could be used interchangeably for this purpose. However, the cryoscopic constant is larger than the ebullioscopic constant, and the freezing point is often easier to measure with precision, which means measurements using the freezing-point depression are more precise.
FPD measurements are also used in the dairy industry to ensure that milk has not had extra water added. Milk with a FPD of over 0.509 °C is considered to be unadulterated.
Formula
For dilute solution
If the solution is treated as an ideal solution, the extent of freezing-point depression depends only on the solute concentration and can be estimated by a simple linear relationship with the cryoscopic constant ("Blagden's Law"):
ΔTF = KF · b · i
where:
ΔTF is the decrease in freezing point, defined as the freezing point of the pure solvent minus the freezing point of the solution; with this definition the formula above gives a positive value, since all factors are positive. From the ΔTF calculated using the formula above, the freezing point of the solution can then be calculated as the freezing point of the pure solvent minus ΔTF.
KF is the cryoscopic constant, which is dependent on the properties of the solvent, not the solute. (Note: When conducting experiments, a higher KF value makes it easier to observe larger drops in the freezing point.)
b is the molality (moles of solute per kilogram of solvent)
i is the van 't Hoff factor (number of ion particles per formula unit of solute, e.g. i = 2 for NaCl, 3 for BaCl2).
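As a worked numerical example (a minimal sketch; the water constant and the program below are illustrative assumptions, not taken from the article), the relation can be evaluated directly, here for a 0.50 mol/kg aqueous NaCl solution assuming the commonly tabulated KF ≈ 1.86 K·kg/mol for water and complete dissociation (i = 2):

#include <stdio.h>

/* Freezing-point depression of a dilute ideal solution (Blagden's law):
   delta_T = i * Kf * b */
static double freezing_point_depression(double i, double kf, double molality)
{
    return i * kf * molality;
}

int main(void)
{
    const double kf_water = 1.86; /* assumed cryoscopic constant of water, K*kg/mol */
    const double b        = 0.50; /* molality: mol NaCl per kg of water */
    const double i_nacl   = 2.0;  /* van 't Hoff factor for fully dissociated NaCl */

    double dT = freezing_point_depression(i_nacl, kf_water, b);
    printf("Depression: %.2f K, solution freezes near %.2f C\n", dT, 0.0 - dT);
    return 0;
}

In practice NaCl does not dissociate ideally at this concentration, so the measured depression is somewhat smaller than this ideal estimate.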
Some values of the cryoscopic constant Kf for selected solvents:
For concentrated solution
The simple relation above does not consider the nature of the solute, so it is only effective for a dilute solution. For a more accurate calculation at higher concentrations, for ionic solutes, Ge and Wang (2010) proposed a new equation:
In the above equation, TF is the normal freezing point of the pure solvent (273 K for water, for example); aliq is the activity of the solvent in the solution (water activity for aqueous solution); ΔHfus(TF) is the enthalpy change of fusion of the pure solvent at TF, which is 333.6 J/g for water at 273 K; ΔCpfus is the difference between the heat capacities of the liquid and solid phases at TF, which is 2.11 J/(g·K) for water.
The solvent activity can be calculated from the Pitzer model or modified TCPC model, which typically requires 3 adjustable parameters. For the TCPC model, these parameters are available for many single salts.
Ethanol example
The freezing point of ethanol water mixture is shown in the following graph.
See also
Melting-point depression
Boiling-point elevation
Colligative properties
Deicing
Eutectic point
Frigorific mixture
List of boiling and freezing information of solvents
Snow removal
References
Amount of substance
Chemical properties
Phase transitions | Freezing-point depression | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,092 | [
"Scalar physical quantities",
"Physical phenomena",
"Phase transitions",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Phases of matter",
"Critical phenomena",
"Amount of substance",
"nan",
"Statistical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"... |
607,497 | https://en.wikipedia.org/wiki/C99 | C99 (previously C9X, formally ISO/IEC 9899:1999) is a past version of the C programming language open standard. It extends the previous version (C90) with new features for the language and the standard library, and helps implementations make better use of available computer hardware, such as IEEE 754-1985 floating-point arithmetic, and compiler technology. The C11 version of the C programming language standard, published in 2011, updates C99.
History
After ANSI produced the official standard for the C programming language in 1989, which became an international standard in 1990, the C language specification remained relatively static for some time, while C++ continued to evolve, largely during its own standardization effort. Normative Amendment 1 created a new standard for C in 1995, but only to correct some details of the 1989 standard and to add more extensive support for international character sets. The standard underwent further revision in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which was adopted as an ANSI standard in May 2000. The language defined by that version of the standard is commonly referred to as "C99". The international C standard is maintained by the working group ISO/IEC JTC1/SC22/WG14.
Design
C99 is, for the most part, backward compatible with C89, but it is stricter in some ways.
In particular, a declaration that lacks a type specifier no longer has int implicitly assumed. The C standards committee decided that it was of more value for compilers to diagnose inadvertent omission of the type specifier than to silently process legacy code that relied on implicit int. In practice, compilers are likely to display a warning, then assume int and continue translating the program.
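A minimal illustration (not taken from the standard's text) of the kind of legacy declaration affected:

/* Accepted by C89 compilers via "implicit int"; under C99 each of these
   declarations requires a diagnostic because the type specifier is missing. */
static counter;        /* C89 treated this as "static int counter;" */

func(void)             /* C89 assumed an implicit "int" return type */
{
    return counter;
}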
C99 introduced several new features, many of which had already been implemented as extensions in several compilers:
inline functions
intermingled declarations and code: variable declaration is no longer restricted to file scope or the start of a compound statement (block)
several new data types, including long long int, optional extended integer types, an explicit Boolean data type, and a complex type to represent complex numbers
variable-length arrays (although subsequently relegated in C11 to a conditional feature that implementations are not required to support)
flexible array members
support for one-line comments beginning with //, as in BCPL, C++ and Java
new library functions, such as snprintf
new headers, such as <stdbool.h>, <complex.h>, <tgmath.h>, and <inttypes.h>
type-generic math (macro) functions, in <tgmath.h>, which select a math library function based upon float, double, or long double arguments, etc.
improved support for IEEE floating point
designated initializers (for example, initializing a structure by field names: struct point p = { .x = 1, .y = 2 };)
compound literals (for instance, it is possible to construct structures in function calls: function((struct x) {1, 2}))
support for variadic macros (macros with a variable number of arguments)
restrict qualification allows more aggressive code optimization, removing compile-time array access advantages previously held by FORTRAN over ANSI C
universal character names, which allows user variables to contain other characters than the standard character set: four-digit or eight-digit hexadecimal sequences
keyword static in array indices in parameter declarations
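A short sketch (not an excerpt from the standard) pulling together several of the features listed above; the names are illustrative only:

#include <stdio.h>
#include <stdbool.h>

struct point { int x, y; };

// C99 one-line comment; 'static n' promises the array has at least n elements.
static int sum(int n, const int a[static n])
{
    int total = 0;
    for (int i = 0; i < n; i++)   // declaration inside the for statement
        total += a[i];
    return total;
}

int main(void)
{
    struct point p = { .y = 2, .x = 1 };   // designated initializer
    bool ok = (p.x == 1);                  // bool from <stdbool.h>

    int n = 4;
    int values[n];                         // variable-length array
    for (int i = 0; i < n; i++)
        values[i] = i * i;

    char buf[32];
    snprintf(buf, sizeof buf, "sum=%d ok=%d", sum(n, values), (int)ok);
    puts(buf);

    // compound literal used directly as an expression
    printf("x=%d\n", ((struct point){ .x = 7, .y = 8 }).x);
    return 0;
}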
Parts of the C99 standard are included in the current version of the C++ standard, including integer types, headers, and library functions. Variable-length arrays are not among these included parts because C++'s Standard Template Library already includes similar functionality.
IEEE 754 floating-point support
A major feature of C99 is its numerics support, and in particular its support for access to the features of IEEE 754-1985 (also known as IEC 60559) floating-point hardware present in the vast majority of modern processors (defined in "Annex F IEC 60559 floating-point arithmetic"). Platforms without IEEE 754 hardware can also implement it in software.
On platforms with IEEE 754 floating point:
FLT_EVAL_METHOD == 2 tends to limit the risk of rounding errors affecting numerically unstable expressions (see IEEE 754 design rationale) and is the designed default method for x87 hardware, but yields unintuitive behavior for the unwary user; FLT_EVAL_METHOD == 1 was the default evaluation method originally used in K&R C, which promoted all floats to double in expressions; and FLT_EVAL_METHOD == 0 is also commonly used and specifies a strict "evaluate to type" of the operands. (For gcc, FLT_EVAL_METHOD == 2 is the default on 32 bit x86, and FLT_EVAL_METHOD == 0 is the default on 64 bit x86-64, but FLT_EVAL_METHOD == 2 can be specified on x86-64 with option -mfpmath=387.) Before C99, compilers could round intermediate results inconsistently, especially when using x87 floating-point hardware, leading to compiler-specific behaviour; such inconsistencies are not permitted in compilers conforming to C99 (annex F).
Example
The following annotated example C99 code for computing a continued fraction function demonstrates the main features:
#include <stdio.h>
#include <math.h>
#include <float.h>
#include <fenv.h>
#include <tgmath.h>
#include <stdbool.h>
#include <assert.h>
double compute_fn(double z) // [1]
{
#pragma STDC FENV_ACCESS ON // [2]
assert(FLT_EVAL_METHOD == 2); // [3]
if (isnan(z)) // [4]
puts("z is not a number");
if (isinf(z))
puts("z is infinite");
long double r = 7.0 - 3.0/(z - 2.0 - 1.0/(z - 7.0 + 10.0/(z - 2.0 - 2.0/(z - 3.0)))); // [5, 6]
feclearexcept(FE_DIVBYZERO); // [7]
bool raised = fetestexcept(FE_OVERFLOW); // [8]
if (raised)
puts("Unanticipated overflow.");
return r;
}
int main(void)
{
#ifndef __STDC_IEC_559__
puts("Warning: __STDC_IEC_559__ not defined. IEEE 754 floating point not fully supported."); // [9]
#endif
#pragma STDC FENV_ACCESS ON
#ifdef TEST_NUMERIC_STABILITY_UP
fesetround(FE_UPWARD); // [10]
#elif TEST_NUMERIC_STABILITY_DOWN
fesetround(FE_DOWNWARD);
#endif
printf("%.7g\n", compute_fn(3.0));
printf("%.7g\n", compute_fn(NAN));
return 0;
}
Footnotes:
Compile with:
As the IEEE 754 status flags are manipulated in this function, this #pragma is needed to avoid the compiler incorrectly rearranging such tests when optimising. (Pragmas are usually implementation-defined, but those prefixed with STDC are defined in the C standard.)
C99 defines a limited number of expression evaluation methods: the current compilation mode can be checked to ensure it meets the assumptions the code was written under.
The special values such as NaN and positive or negative infinity can be tested and set.
long double is defined as IEEE 754 double extended or quad precision if available. Using higher precision than required for intermediate computations can minimize round-off error (the typedef double_t can be used for code that is portable under all FLT_EVAL_METHODs).
The main function to be evaluated. Although it appears that some arguments to this continued fraction, e.g., 3.0, would lead to a divide-by-zero error, in fact the function is well-defined at 3.0 and division by 0 will simply return a +infinity that will then correctly lead to a finite result: IEEE 754 is defined not to trap on such exceptions by default and is designed so that they can very often be ignored, as in this case. (If FLT_EVAL_METHOD is defined as 2 then all internal computations including constants will be performed in long double precision; if FLT_EVAL_METHOD is defined as 0 then additional care is need to ensure this, including possibly additional casts and explicit specification of constants as long double.)
As the raised divide-by-zero flag is not an error in this case, it can simply be dismissed to clear the flag for use by later code.
In some cases, other exceptions may be regarded as an error, such as overflow (although it can in fact be shown that this cannot occur in this case).
__STDC_IEC_559__ is to be defined only if "Annex F IEC 60559 floating-point arithmetic" is fully implemented by the compiler and the C library (users should be aware that this macro is sometimes defined while it should not be).
The default rounding mode is round to nearest (with the even rounding rule in the halfway cases) for IEEE 754, but explicitly setting the rounding mode toward + and - infinity (by defining TEST_NUMERIC_STABILITY_UP etc. in this example, when debugging) can be used to diagnose numerical instability. This method can be used even if compute_fn() is part of a separately compiled binary library. But depending on the function, numerical instabilities cannot always be detected.
Version detection
A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. As with the macro for C90, __STDC_VERSION__ can be used to write code that will compile differently for C90 and C99 compilers, as in this example that ensures that inline is available in either case (by replacing it with static in C90 to avoid linker errors).
#if __STDC_VERSION__ >= 199901L
/* "inline" is a keyword */
#else
# define inline static
#endif
Implementations
Most C compilers provide support for at least some of the features introduced in C99.
Historically, Microsoft has been slow to implement new C features in their Visual C++ tools, instead focusing mainly on supporting developments in the C++ standards. However, with the introduction of Visual C++ 2013 Microsoft implemented a limited subset of C99, which was expanded in Visual C++ 2015.
Future work
Since ratification of the 1999 C standard, the standards working group prepared technical reports specifying improved support for embedded processing, additional character data types (Unicode support), and library functions with improved bounds checking. Work continues on technical reports addressing decimal floating point, additional mathematical special functions, and additional dynamic memory allocation functions. The C and C++ standards committees have been collaborating on specifications for threaded programming.
The next revision of the C standard, C11, was ratified in 2011. The C standards committee adopted guidelines that limited the adoption of new features that have not been tested by existing implementations. Much effort went into developing a memory model, in order to clarify sequence points and to support threaded programming.
See also
C++23, C++20, C++17, C++14, C++11, C++03, C++98, versions of the C++ programming language standard
Compatibility of C and C++
C++ Technical Report 1
Floating point, for further discussion of usage of IEEE 754 hardware
References
Further reading
N1256 (final draft of C99 standard plus TC1, TC2, TC3); WG14; 2007. (HTML and ASCII versions)
ISO/IEC 9899:1999 (official C99 standard); ISO; 1999.
Rationale for C99; WG14; 2003.
External links
C Language Working Group 14 (WG14) Documents
C9X Charter - WG14
New things in C9X
Features of C99
C (programming language)
Programming language standards
Unix programming tools
IEC standards
ISO standards | C99 | [
"Technology"
] | 2,746 | [
"Computer standards",
"Programming language standards",
"IEC standards"
] |
607,499 | https://en.wikipedia.org/wiki/Nightwear | Nightwear – also called sleepwear or nightclothes – is clothing designed to be worn while sleeping. The style of nightwear worn may vary with the seasons, with warmer styles being worn in colder conditions and vice versa. Some styles or materials are selected to be visually appealing or erotic in addition to their functional purposes.
Variants
Nightwear includes:
Adult onesie - all-in-one footed sleepsuit worn by adults, similar to an infant onesie or children's blanket sleeper and usually made from cotton.
Babydoll - a short, sometimes sleeveless, loose-fitting nightgown or negligee for women, generally designed to resemble a young girl's nightgown.
Blanket sleeper - a warm sleeping garment for infants and young children.
Chemise - a delicate, loose-fitting, sleeveless, shirt-like lingerie garment for women, typically intended to feature a provocative appearance.
Negligee - loose-fitting women's nightwear intended to have sensuous appeal, usually made of sheer or semi-translucent fabrics and trimmed with lace or other fine material and bows.
Nightcap - warm cloth cap worn with pajamas, a nightshirt or a nightgown.
Nightgown - loose hanging nightwear for women, typically made from cotton, silk, satin, or nylon.
Nightshirt - loose-fitting shirt reaching to below the knees.
Pajamas - traditionally loose-fitting, two-piece garments.
Peignoir - long outer garment for women, usually sheer and made of chiffon; frequently sold with a matching nightgown, negligee, or panties.
Other types of garment commonly, though not exclusively, worn for sleeping include gym shorts, t-shirts, tank tops and sweatpants, as well as underwear and/or socks (worn without outerwear). Sleeping in the nude is also common, especially in warmer climates.
Customs
According to a 2004 United States survey, 13% of men wear pajamas or nightgowns for sleeping, whereas 31% wear underwear and another 31% sleep nude. Among women, 55% wear pajamas or nightgowns (the survey counted these as a single option).
A 1996 survey by the BBC's The Clothes Show Magazine examined sleepwear habits in the UK; among respondents who answered 'Other', the most common response from women was outdoor clothes and from men shorts.
Children's nightwear
On 22 December 2011, the U.S. Consumer Product Safety Commission (CPSC) issued a letter to manufacturers, distributors, importers and retailers reminding the apparel industry of the enforcement policy and their obligations associated with children's sleepwear and loungewear.
The commission's regulations define the term children's sleepwear to include any product of wearing apparel (in sizes 0–14), such as nightgowns, pajamas, or similar or related items, such as robes, intended to be worn primarily for sleeping or activities related to sleeping, except: (1) diapers and underwear; (2) infant garments, sized for a child nine months of age or younger; and (3) tight-fitting garments that meet specific maximum dimensions.
All children's sleepwear and loungewear sold in the US must comply with the Flammable Fabrics Act (FFA) under the Standards for the Flammability of Children's Sleepwear, 16 C.F.R. Parts 1615 and 1616. They must also comply with the Consumer Product Safety Improvement Act of 2008 (CPSIA), including its requirements for tracking labels, a certificate of compliance, lead content and surface coatings, and phthalates.
References | Nightwear | [
"Biology"
] | 741 | [
"Behavior",
"Sleep",
"Nightwear"
] |