id: int64 (values 39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (values 3 to 71.8k)
subcategories: list (lengths 0 to 30)
14,500,122
https://en.wikipedia.org/wiki/RX%20J0822%E2%88%924300
|- style="vertical-align: top;" | Galactic coordinates | 260.3841 −03.4718 RX J0822−4300, often referred to as a "Cosmic Cannonball", is a radio-quiet neutron star currently moving away from the center of the Puppis A supernova remnant at , making it one of the fastest moving stars ever found. Earlier, it was believed to move with speed as high as 1,500 km/s. Astronomers used NASA's Chandra X-ray Observatory to observe the star over a period of 11 years to determine its speed. Although the cosmic cannonball is not the only hypervelocity star discovered, it is unique in the apparent origin of its speed. Others may have derived theirs from a gravitational slingshot around the Milky Way's suspected supermassive black hole, Sagittarius A*. Current theories fail to explain how such speeds can be attained from a supernova explosion. It could be a possible quark star. See also Puppis A or SNR 260.4−3.4 References "Cosmic Canonball: One Of The Fastest Stars Ever Seen Challenges Astronomy Theories", ScienceDaily, (2007) "Chandra Discovers a Cosmic Cannonball", Science@NASA (10.28.2007) Chandra X-Ray Observatory, "RX J0822-4300 in Puppis A: Chandra Discovers Cosmic Cannonball", 2007 November 28 https://web.archive.org/web/20071205023347/http://www.unesp.br/universofisico/semanario.php?date=2006-08-14 Radio-quiet neutron stars 06.5 Hypervelocity stars Puppis ROSAT objects
RX J0822−4300
[ "Astronomy" ]
369
[ "Puppis", "Constellations" ]
14,500,388
https://en.wikipedia.org/wiki/%CE%91-L-fucosidase
The enzyme α-L-fucosidase () catalyzes the following chemical reaction: an α-L-fucoside + H2O ⇌ L-fucose + an alcohol. This enzyme belongs to the family of hydrolases, specifically those glycosidases that hydrolyse O- and S-glycosyl compounds. The systematic name of this enzyme class is α-L-fucoside fucohydrolase. This enzyme is also called α-fucosidase. It participates in N-glycan degradation and glycan structure degradation. Deficiency of this enzyme is called fucosidosis. In CAZy, α-L-fucosidases are found in glycoside hydrolase family 29 and glycoside hydrolase family 95. Structural studies As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes , , and . Human medical studies A 2017 study by Endreffy, Bjørklund and collaborators found an association between the activity of α-L-fucosidase-1 (FUCA-1) and chronic autoimmune disorders in children. This should encourage further research on FUCA-1 as a marker of chronic inflammation and autoimmunity. See also 1,2-α-L-fucosidase 1,3-α-L-fucosidase 1,6-α-L-fucosidase FUCA1 FUCA2 References Further reading External links CAZy family GH29 CAZy family GH95 Protein families EC 3.2.1 Enzymes of known structure
Α-L-fucosidase
[ "Biology" ]
364
[ "Protein families", "Protein classification" ]
14,500,732
https://en.wikipedia.org/wiki/Salt-effect%20distillation
Salt-effect distillation is a method of extractive distillation in which a salt is dissolved in the mixture of liquids to be distilled. The salt acts as a separating agent by raising the relative volatility of the mixture and by breaking any azeotropes that may otherwise form. The technique is first attested in writings on alcohol attributed to Jabir ibn Hayyan (9th c. CE). Setup The salt is fed into the distillation column at a steady rate by adding it to the reflux stream at the top of the column. It dissolves in the liquid phase, and since it is non-volatile, flows out with the heavier bottoms stream. The bottoms are partially or completely evaporated to recover the salt for reuse. Usage Extractive distillation is more costly than ordinary fractional distillation due to costs associated with the recovery of the separating agent. One advantage of salt-effect distillation over other types of azeotropic distillation is the potential for reduced costs associated with energy usage. In addition, the salt ions have a greater effect on the volatility of the mixture to be distilled than other liquid-separating agents. Commercial usage of salt-effect distillation includes adding magnesium nitrate to an aqueous solution of nitric acid to concentrate it further. Calcium chloride is added to acetone-methanol and water-isopropanol mixtures in order to facilitate separation. References See also Distillation Extractive distillation Azeotrope Salting out Distillation
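A rough way to see the separating effect described above is through the relative volatility, which for a low-pressure binary mixture scales with the ratio of activity coefficients times vapor pressures. The sketch below is illustrative only: the vapor pressures, activity coefficients, and the size of the salting-out effect are assumed values, not data from the article.

```python
# Illustrative sketch: how a salt that preferentially "salts out" the more
# volatile component raises the relative volatility of a binary mixture.
# All numbers below are assumed for demonstration, not measured values.

def relative_volatility(p_sat_light, p_sat_heavy, gamma_light, gamma_heavy):
    """alpha = (gamma_1 * P1_sat) / (gamma_2 * P2_sat) for a low-pressure mixture."""
    return (gamma_light * p_sat_light) / (gamma_heavy * p_sat_heavy)

# Hypothetical saturation pressures (kPa) at the column temperature.
p_sat_light, p_sat_heavy = 95.0, 70.0

# Without salt: assume a nearly ideal mixture (activity coefficients close to 1).
alpha_no_salt = relative_volatility(p_sat_light, p_sat_heavy, 1.05, 1.00)

# With salt: the dissolved salt preferentially raises the activity coefficient
# of the light component (salting out), leaving the heavy component nearly unchanged.
alpha_with_salt = relative_volatility(p_sat_light, p_sat_heavy, 1.60, 1.05)

print(f"alpha without salt: {alpha_no_salt:.2f}")
print(f"alpha with salt:    {alpha_with_salt:.2f}")
```

A higher alpha means fewer theoretical stages or less reflux for the same separation, which is the advantage the salt buys at the cost of recovering it from the bottoms.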
Salt-effect distillation
[ "Chemistry" ]
322
[ "Distillation", "Separation processes" ]
14,501,355
https://en.wikipedia.org/wiki/Surface%20diffusion
Surface diffusion is a general process involving the motion of adatoms, molecules, and atomic clusters (adparticles) at solid material surfaces. The process can generally be thought of in terms of particles jumping between adjacent adsorption sites on a surface, as in figure 1. Just as in bulk diffusion, this motion is typically a thermally promoted process with rates increasing with increasing temperature. Many systems display diffusion behavior that deviates from the conventional model of nearest-neighbor jumps. Tunneling diffusion is a particularly interesting example of an unconventional mechanism wherein hydrogen has been shown to diffuse on clean metal surfaces via the quantum tunneling effect. Various analytical tools may be used to elucidate surface diffusion mechanisms and rates, the most important of which are field ion microscopy and scanning tunneling microscopy. While in principle the process can occur on a variety of materials, most experiments are performed on crystalline metal surfaces. Due to experimental constraints most studies of surface diffusion are limited to well below the melting point of the substrate, and much has yet to be discovered regarding how these processes take place at higher temperatures. Surface diffusion rates and mechanisms are affected by a variety of factors including the strength of the surface-adparticle bond, orientation of the surface lattice, attraction and repulsion between surface species, and chemical potential gradients. It is an important concept in surface phase formation, epitaxial growth, heterogeneous catalysis, and other topics in surface science. As such, the principles of surface diffusion are critical for the chemical production and semiconductor industries. Real-world applications relying heavily on these phenomena include catalytic converters, integrated circuits used in electronic devices, and silver halide salts used in photographic film. Kinetics Surface diffusion kinetics can be thought of in terms of adatoms residing at adsorption sites on a 2D lattice, moving between adjacent (nearest-neighbor) adsorption sites by a jumping process. The jump rate is characterized by an attempt frequency and a thermodynamic factor that dictates the probability of an attempt resulting in a successful jump. The attempt frequency ν is typically taken to be simply the vibrational frequency of the adatom, while the thermodynamic factor is a Boltzmann factor dependent on temperature and Ediff, the potential energy barrier to diffusion. Equation 1 describes the relationship: Γ = ν exp(−Ediff/kBT), where ν and Ediff are as described above, Γ is the jump or hopping rate, T is temperature, and kB is the Boltzmann constant. Ediff must be smaller than the energy of desorption for diffusion to occur, otherwise desorption processes would dominate. Importantly, equation 1 tells us how strongly the jump rate varies with temperature. The manner in which diffusion takes place is dependent on the relationship between Ediff and kBT as is given in the thermodynamic factor: when Ediff < kBT the thermodynamic factor approaches unity and Ediff ceases to be a meaningful barrier to diffusion. This case, known as mobile diffusion, is relatively uncommon and has only been observed in a few systems. For the phenomena described throughout this article, it is assumed that Ediff >> kBT and therefore Γ << ν. In the case of Fickian diffusion it is possible to extract both ν and Ediff from an Arrhenius plot of the logarithm of the diffusion coefficient, D, versus 1/T.
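As a concrete illustration of the Arrhenius analysis mentioned above, the short sketch below fits ln D against 1/T to recover the barrier and prefactor. The temperatures, diffusion coefficients, and the 0.45 eV barrier are synthetic values assumed purely for the example, not data for any particular system.

```python
# Minimal sketch (synthetic, assumed numbers): extracting the diffusion barrier
# E_diff and prefactor D0 from an Arrhenius plot of ln(D) versus 1/T.
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical diffusion coefficients (cm^2/s) measured at several temperatures (K).
T = np.array([250.0, 300.0, 350.0, 400.0, 450.0])
D = 1e-3 * np.exp(-0.45 / (k_B * T))  # synthetic data generated with E_diff = 0.45 eV

# Linear fit of ln(D) against 1/T: slope = -E_diff / k_B, intercept = ln(D0).
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
E_diff = -slope * k_B
D0 = np.exp(intercept)

print(f"E_diff = {E_diff:.2f} eV, prefactor D0 = {D0:.1e} cm^2/s")
```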
For cases where more than one diffusion mechanism is present (see below), there may be more than one Ediff such that the relative distribution between the different processes would change with temperature. Random walk statistics describe the mean squared displacement of diffusing species in terms of the number of jumps N and the distance per jump a. The number of successful jumps is simply Γ multiplied by the time allowed for diffusion, t. In the most basic model only nearest-neighbor jumps are considered and a corresponds to the spacing between nearest-neighbor adsorption sites. The root mean squared displacement goes as √⟨Δr²⟩ = a√N = a√(Γt). The diffusion coefficient is given as D = Γa²/(2z), where z = 1 for 1D diffusion as would be the case for in-channel diffusion, z = 2 for 2D diffusion, and z = 3 for 3D diffusion. Regimes There are four different general schemes in which diffusion may take place. Tracer diffusion and chemical diffusion differ in the level of adsorbate coverage at the surface, while intrinsic diffusion and mass transfer diffusion differ in the nature of the diffusion environment. Tracer diffusion and intrinsic diffusion both refer to systems where adparticles experience a relatively homogeneous environment, whereas in chemical and mass transfer diffusion adparticles are more strongly affected by their surroundings. Tracer diffusion describes the motion of individual adparticles on a surface at relatively low coverage levels. At these low levels (< 0.01 monolayer), particle interaction is low and each particle can be considered to move independently of the others. The single atom diffusing in figure 1 is a nice example of tracer diffusion. Chemical diffusion describes the process at a higher level of coverage where the effects of attraction or repulsion between adatoms become important. These interactions serve to alter the mobility of adatoms. In a crude way, figure 3 serves to show how adatoms may interact at higher coverage levels. The adatoms have no "choice" but to move to the right at first, and adjacent adatoms may block adsorption sites from one another. Intrinsic diffusion occurs on a uniform surface (e.g. lacking steps or vacancies) such as a single terrace, where no adatom traps or sources are present. This regime is often studied using field ion microscopy, wherein the terrace is a sharp sample tip on which an adparticle diffuses. Even in the case of a clean terrace the process may be influenced by non-uniformity near the edges of the terrace. Mass transfer diffusion takes place in the case where adparticle sources and traps such as kinks, steps, and vacancies are present. Instead of being dependent only on the jump potential barrier Ediff, diffusion in this regime is now also dependent on the formation energy of mobile adparticles. The exact nature of the diffusion environment therefore plays a role in dictating the diffusion rate, since the formation energy of an adparticle is different for each type of surface feature as is described in the Terrace Ledge Kink model. Anisotropy Orientational anisotropy takes the form of a difference in both diffusion rates and mechanisms at the various surface orientations of a given material. For a given crystalline material each Miller index plane may display unique diffusion phenomena. Close packed surfaces such as the fcc (111) tend to have higher diffusion rates than the correspondingly more "open" faces of the same material such as fcc (100).
Directional anisotropy refers to a difference in diffusion mechanism or rate in a particular direction on a given crystallographic plane. These differences may be a result of either anisotropy in the surface lattice (e.g. a rectangular lattice) or the presence of steps on a surface. One of the more dramatic examples of directional anisotropy is the diffusion of adatoms on channeled surfaces such as fcc (110), where diffusion along the channel is much faster than diffusion across the channel. Mechanisms Adatom diffusion Diffusion of adatoms may occur by a variety of mechanisms. The manner in which they diffuse is important as it may dictate the kinetics of movement, temperature dependence, and overall mobility of surface species, among other parameters. The following is a summary of the most important of these processes: Hopping or jumping is conceptually the most basic mechanism for diffusion of adatoms. In this model, the adatoms reside on adsorption sites on the surface lattice. Motion occurs through successive jumps to adjacent sites, the number of which depends on the nature of the surface lattice. Figures 1 and 3 both display adatoms undergoing diffusion via the hopping process. Studies have shown the presence of metastable transition states between adsorption sites wherein it may be possible for adatoms to temporarily reside. Atomic exchange involves exchange between an adatom and an adjacent atom within the surface lattice. As shown in figure 4, after an atomic exchange event the adatom has taken the place of a surface atom and the surface atom has been displaced and has now become an adatom. This process may take place in both heterodiffusion (e.g. Pt adatoms on Ni) and self-diffusion (e.g. Pt adatoms on Pt). It is still unclear from a theoretical point of view why the atomic exchange mechanism is more predominant in some systems than in others. Current theory points towards multiple possibilities, including tensile surface stresses, surface relaxation about the adatom, and increased stability of the intermediate due to the fact that both atoms involved maintain high levels of coordination throughout the process. Tunneling diffusion is a physical manifestation of the quantum tunneling effect involving particles tunneling across diffusion barriers. It can occur in the case of low diffusing particle mass and low Ediff, and has been observed in the case of hydrogen diffusion on tungsten and copper surfaces. The phenomenon is unique in that in the regime where the tunneling mechanism dominates, the diffusion rate is nearly temperature-independent. Vacancy diffusion can occur as the predominant method of surface diffusion at high coverage levels approaching complete coverage. This process is akin to the manner in which pieces slide around in a "sliding puzzle". It is very difficult to directly observe vacancy diffusion due to the typically high diffusion rates and low vacancy concentration. Figure 5 shows the basic theme of this mechanism in an albeit oversimplified manner. Recent theoretical work as well as experimental work performed since the late 1970s has brought to light a remarkable variety of surface diffusion phenomena both with regard to kinetics as well as to mechanisms. Following is a summary of some of the more notable phenomena: Long jumps consist of adatom displacement to a non-nearest-neighbor adsorption site. 
They may include double, triple, and longer jumps in the same direction as a nearest-neighbor jump would travel, or they may be in entirely different directions as shown in figure 6. They have been predicted by theory to exist in many different systems, and have been shown by experiment to take place at temperatures as low as 0.1 Tm (melting temperature). In some cases data indicate long jumps dominating the diffusion process over single jumps at elevated temperatures; the phenomenon of variable jump lengths is expressed in different characteristic distributions of atomic displacement over time (see figure 7). Rebound jumps have been shown by both experiment and simulations to take place in certain systems. Since the motion does not result in a net displacement of the adatom involved, experimental evidence for rebound jumps again comes from statistical interpretation of atomic distributions. A rebound jump is shown in figure 6. The figure is slightly misleading, however, as rebound jumps have only been shown experimentally to take place in the case of 1D diffusion on a channeled surface (in particular, the bcc (211) face of tungsten). Cross-channel diffusion can occur in the case of channeled surfaces. Typically in-channel diffusion dominates due to the lower energy barrier for diffusion of this process. In certain cases cross-channel diffusion has been shown to occur, taking place in a manner similar to that shown in figure 8. The intermediate "dumbbell" position may lead to a variety of final adatom and surface atom displacements. Long-range atomic exchange is a process involving an adatom inserting into the surface as in the normal atomic exchange mechanism, but instead of a nearest-neighbor atom it is an atom some distance further from the initial adatom that emerges. Shown in figure 9, this process has only been observed in molecular dynamics simulations and has yet to be confirmed experimentally. In spite of this, long-range atomic exchange, as well as a variety of other exotic diffusion mechanisms, is anticipated to contribute substantially at temperatures currently too high for direct observation. Cluster diffusion Cluster diffusion involves motion of atomic clusters ranging in size from dimers to islands containing hundreds of atoms. Motion of the cluster may occur via the displacement of individual atoms, sections of the cluster, or the entire cluster moving at once. All of these processes involve a change in the cluster’s center of mass. Individual mechanisms are those that involve movement of one atom at a time. Edge diffusion involves movement of adatoms or vacancies at edge or kink sites. As shown in figure 10, the mobile atom maintains its proximity to the cluster throughout the process. Evaporation-condensation involves atoms “evaporating” from the cluster onto a terrace accompanied by “condensation” of terrace adatoms onto the cluster leading to a change in the cluster’s center of mass. While figure 10 appears to indicate the same atom evaporating from and condensing on the cluster, it may in fact be a different atom condensing from the 2D gas. Leapfrog diffusion is similar to edge diffusion, but where the diffusing atom actually moves atop the cluster before settling in a different location from its starting position. Sequential displacement refers to the process involving motion one atom at a time, moving to free nearest-neighbor sites. Concerted mechanisms are those that involve movement of either sections of the cluster or the entire cluster all at once.
Dislocation diffusion occurs when adjacent sub-units of a cluster move in a row-by-row fashion through displacement of a dislocation. As shown in figure 11(a) the process begins with nucleation of the dislocation followed by what is essentially sequential displacement on a concerted basis. Glide diffusion refers to the concerted motion of an entire cluster all at once (see figure 11(b)). Reptation is a snake-like movement (hence the name) involving sequential motion of cluster sub-units (see figure 11(c)). Shearing is a concerted displacement of a sub-unit of atoms within a cluster (see figure 11(d)). Size-dependence: the rate of cluster diffusion has a strong dependence on the size of the cluster, with larger cluster size generally corresponding to slower diffusion. This is not, however, a universal trend and it has been shown in some systems that the diffusion rate takes on a periodic tendency wherein some larger clusters diffuse faster than those smaller than them. Surface diffusion and heterogeneous catalysis Surface diffusion is a critically important concept in heterogeneous catalysis, as reaction rates are often dictated by the ability of reactants to "find" each other at a catalyst surface. With increased temperature adsorbed molecules, molecular fragments, atoms, and clusters tend to have much greater mobility (see equation 1). However, with increased temperature the lifetime of adsorption decreases as the factor kBT becomes large enough for the adsorbed species to overcome the barrier to desorption, Q (see figure 2). Reaction thermodynamics aside, because of the interplay between increased rates of diffusion and decreased lifetime of adsorption, increased temperature may in some cases decrease the overall rate of the reaction. Experimental Surface diffusion may be studied by a variety of techniques, including both direct and indirect observations. Two experimental techniques that have proved very useful in this area of study are field ion microscopy and scanning tunneling microscopy. By visualizing the displacement of atoms or clusters over time, it is possible to extract useful information regarding the manner in which the relevant species diffuse, both mechanistic and rate-related information. In order to study surface diffusion on the atomistic scale it is unfortunately necessary to perform studies on rigorously clean surfaces and in ultra high vacuum (UHV) conditions or in the presence of small amounts of inert gas, as is the case when using He or Ne as imaging gas in field-ion microscopy experiments. See also Surface engineering Surface science False diffusion References Cited works G. Antczak, G. Ehrlich. Surface Science Reports 62 (2007), 39-61. (Review) Materials science Surface science
Surface diffusion
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,271
[ "Applied and interdisciplinary physics", "Materials science", "Surface science", "Condensed matter physics", "nan" ]
14,501,385
https://en.wikipedia.org/wiki/Spaceflight%20%28magazine%29
Spaceflight is the monthly magazine of the British Interplanetary Society (BIS), reporting on space exploration topics. It was first published in 1956. In 2008, the magazine – edited by Clive Simpson – was the winner of the Sir Arthur Clarke Award in the category of Best Space Reporting. External links BIS Publications British Interplanetary Society 1956 establishments in the United Kingdom Monthly magazines published in the United Kingdom Science and technology magazines published in the United Kingdom Magazines established in 1956 Astronomy magazines
Spaceflight (magazine)
[ "Astronomy" ]
98
[ "Astronomy magazines", "Works about astronomy", "Outer space", "Spaceflight" ]
14,501,387
https://en.wikipedia.org/wiki/Entosis
Entosis (from Greek ἐντός entos, "within" and -ωσις -osis, "development process") is the invasion of a living cell into another cell's cytoplasm. The process was discovered by Overholtzer et al. as reported in Cell. Entotic cells, also referred to as cell-in-cell structures, are triggered by loss of attachment to the extracellular matrix (ECM). This internalization of one cell by another is dependent on adherens junctions, and is driven by a Rho-dependent process, involving actin polymerization and myosin II activity in the internalized cell. Adherens junctions bind cells together by linking cadherin transmembrane protein complexes of adjacent cells to the cytoskeleton. When certain cell types are detached from the ECM and have lost adhesion, the compaction force between neighboring cells can cause them to push into their neighbors, forming the trademark cell-in-cell structures. Though cell-in-cell structures commonly refer to the interaction between two neighboring cells, entosis has been observed involving more than two cells. In the case of an entotic structure formed between three cells, the middle cell acts as both an internalizing and an outer host cell simultaneously. Aneuploidy, a condition in which a cell has an abnormal number of chromosomes, is one of the most prevalent phenotypes of human tumors. The underlying cause of aneuploidy remains highly debated; however, entosis has been shown to perturb cytokinesis (cytoplasmic division) and trigger the formation of aneuploid cells. This would be in line with past research, as cell-in-cell structures have been widely observed in the focused study of many human tumors, including lung, breast, and endometrial stromal carcinomas. A cell trapped by entosis is initially alive and can divide inside the cell that has enveloped it. On occasion, the entotic cell will be released by the host cell, but most internalized cells are eventually killed. Normal cells can kill themselves via apoptosis, which is followed by the programmed engulfment and phagocytic ingestion of the cell's remains by another. Entosis differs greatly from apoptosis in that the entotic process exhibits behavior closely resembling cellular invasion rather than cellular engulfment. Cancer cells adaptively avoid apoptosis, allowing them to live and multiply indefinitely, making it difficult to design drugs that effectively kill tumors. Therefore, entosis acts as a nonapoptotic cell death mechanism, and could possibly be a new way in which cancer cells can be killed. General mechanism The mechanism of entotic cell cannibalism is a complex cell biology process. The process is initiated when epithelial cells form adherens junctions; this is followed by the generation of actomyosin contractility. The combination of these processes drives engulfment of one cell by its neighbor. After internalization, the inner cell is usually killed and digested by the outer cell. This process involves non-canonical autophagy, formation of lysosomes and nutrient recovery. In general, entosis greatly depends on cytoskeletal structure changes and biophysical forces during the creation of cell-in-cell structures. Novel degradation and signaling pathways are employed during the inner cell killing and digestion process. Entosis in cancer Entosis has been found to be a distinct mechanism by which cancer cells form cell-in-cell structures at tumor sites. The entosis process in cancer cells is mediated via E-cadherin and P-cadherin.
Since cadherins usually create homophilic cell-to-cell junctions, it is believed that the process mainly occurs between homologous cells. After cell-cell adhesions are mediated, the engulfed cells promote their own uptake into the neighbor cell. Additionally, they promote the ingestion process through actin polymerization and myosin contraction. Actomyosin contraction in the invading cell is regulated by controllers of cell tension such as RhoA; actin and myosin accumulate at the cell cortex, generating the mechanical tension that drives the cell-in-cell invasion. The entosis mechanism can potentially have substantial energetic implications in cancer cells compared to other mechanisms of cell death and engulfment. A crucial part of the process is the active involvement of invading cells, which does not happen in other forms of cell engulfment. This allows the mechanism to selectively target living cells, excluding dead cells or non-living material such as cell debris. After internalization, engulfed cells are killed by the host cell following the maturation of the entotic vacuole that encapsulates the entotic cell. The maturation of the entotic vacuole involves modification by autophagy pathway proteins, followed by lysosome fusion and inner cell death and degradation inside the host cell. In this mechanism, autophagy pathway proteins play an important role by scavenging nutrients derived from the death of the inner cell. Internalized cells can also undergo alternative fates such as apoptosis or unharmed escape from the host cell. In clinical cancer specimens, evidence of DNA fragmentation has been found, suggesting that non-apoptotic cell death may be a common fate for entotic cells in human cancers. Entosis correlates with worse cancer prognosis in head and neck squamous cell carcinoma, anal carcinoma, lung adenocarcinoma, pancreatic ductal carcinoma, and some breast ductal carcinomas. In breast cancer, entosis correlates with two classical prognostic factors (HER2 and Ki67). In clinical case analyses, calculations of the frequency of entosis showed that entosis is most frequent during the formation of metastases, and that when the neoplastic process is very advanced the frequency of entotic structures decreases. This result suggests that entosis may be a regulated process depending on staging. Videos Entosis: a cell-in-cell invasion and death process: https://hms.harvard.edu/news-events/multimedia/entosis Entosis of prostate cancer PC-3 cells: https://www.youtube.com/watch?v=R5zNk0uXJHA When cells invade: Entosis: https://www.youtube.com/watch?v=fJxA-XoAK-A See also Apoptosis Autoschizis Necrosis Autophagy References Cell biology
Entosis
[ "Biology" ]
1,385
[ "Cell biology" ]
14,501,996
https://en.wikipedia.org/wiki/Amidase
In enzymology, an amidase (, acylamidase, acylase (misleading), amidohydrolase (ambiguous), deaminase (ambiguous), fatty acylamidase, N-acetylaminohydrolase (ambiguous)) is an enzyme that catalyzes the hydrolysis of an amide. In this way, the two substrates of this enzyme are an amide and H2O, whereas its two products are a monocarboxylate and NH3. This enzyme belongs to the family of hydrolases, those acting on carbon-nitrogen bonds other than peptide bonds, specifically in linear amides. The systematic name of this enzyme class is acylamide amidohydrolase. Other names in common use include acylamidase, acylase, amidohydrolase, deaminase, fatty acylamidase, and N-acetylaminohydrolase. This enzyme participates in 6 metabolic pathways: urea cycle and metabolism of amino groups, phenylalanine metabolism, tryptophan metabolism, cyanoamino acid metabolism, benzoate degradation via CoA ligation, and styrene degradation. Amidases contain a conserved stretch of approximately 130 amino acids known as the AS sequence. They are widespread, being found in both prokaryotes and eukaryotes. AS enzymes catalyse the hydrolysis of amide bonds (CO-NH2), although the family has diverged widely with regard to substrate specificity and function. Nonetheless, these enzymes maintain a core alpha/beta/alpha structure, where the topologies of the N- and C-terminal halves are similar. AS enzymes characteristically have a highly conserved C-terminal region rich in serine and glycine residues, but devoid of aspartic acid and histidine residues; they therefore differ from classical serine hydrolases. These enzymes possess a unique, highly conserved Ser-Ser-Lys catalytic triad used for amide hydrolysis, although the catalytic mechanism for acyl-enzyme intermediate formation can differ between enzymes. Examples of AS signature-containing enzymes include: Peptide amidase (Pam), which catalyses the hydrolysis of the C-terminal amide bond of peptides. Fatty acid amide hydrolases, which hydrolyse fatty acid amide substrates (e.g. the cannabinoid anandamide and sleep-inducing oleamide), thereby controlling the level and duration of signalling induced by this diverse class of lipid transmitters. Malonamidase E2, which catalyses the hydrolysis of malonamate into malonate and ammonia, and which is involved in the transport of fixed nitrogen from bacteroids to plant cells in symbiotic nitrogen metabolism. Subunit A of Glu-tRNA(Gln) amidotransferase, a heterotrimeric enzyme that catalyses the formation of Gln-tRNA(Gln) by the transamidation of misacylated Glu-tRNA(Gln) via amidolysis of glutamine. Structural studies As of late 2018, 162 structures have been solved for this family, which can be accessed at the Pfam . References Further reading Protein families EC 3.5.1 Enzymes of known structure
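For reference, the hydrolysis described above can be written schematically as below. The neutral-species form is shown for simplicity; presenting the products as carboxylate plus ammonium at physiological pH is an assumption about notation, not something specified in the text.

```latex
% Schematic amide hydrolysis catalysed by an amidase, neutral species shown:
\[
\mathrm{R{-}CO{-}NH_{2} \;+\; H_{2}O \;\longrightarrow\; R{-}COOH \;+\; NH_{3}}
\]
% At physiological pH the products are largely present as the monocarboxylate
% anion (R-COO^-) and the ammonium ion (NH4^+).
```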
Amidase
[ "Biology" ]
687
[ "Protein families", "Protein classification" ]
14,502,271
https://en.wikipedia.org/wiki/Weakly%20measurable%20function
In mathematics—specifically, in functional analysis—a weakly measurable function taking values in a Banach space is a function whose composition with any element of the dual space is a measurable function in the usual (strong) sense. For separable spaces, the notions of weak and strong measurability agree. Definition If (X, Σ) is a measurable space and B is a Banach space over a field 𝕂 (which is the real numbers ℝ or complex numbers ℂ), then f : X → B is said to be weakly measurable if, for every continuous linear functional g : B → 𝕂, the function g ∘ f : X → 𝕂 defined by x ↦ g(f(x)) is a measurable function with respect to Σ and the usual Borel σ-algebra on 𝕂. A measurable function on a probability space is usually referred to as a random variable (or random vector if it takes values in a vector space such as the Banach space B). Thus, as a special case of the above definition, if (Ω, Σ, P) is a probability space, then a function f : Ω → B is called a (B-valued) weak random variable (or weak random vector) if, for every continuous linear functional g : B → 𝕂, the function g ∘ f : Ω → 𝕂 is a 𝕂-valued random variable (i.e. measurable function) in the usual sense, with respect to Σ and the usual Borel σ-algebra on 𝕂. Properties The relationship between measurability and weak measurability is given by the following result, known as Pettis' theorem or Pettis measurability theorem. A function f is said to be almost surely separably valued (or essentially separably valued) if there exists a measurable set N ⊆ X of measure zero such that f(X ∖ N) ⊆ B is separable. In the case that B is separable, since any subset of a separable Banach space is itself separable, one can take N above to be empty, and it follows that the notions of weak and strong measurability agree when B is separable. See also References Functional analysis Measure theory Types of functions
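The displayed statement of Pettis' theorem appears to have been lost in extraction; a standard formulation is reproduced below for reference. The symbols (X, Σ, μ), B and f are the usual ones assumed here rather than notation taken from the surviving text.

```latex
% Pettis measurability theorem (standard formulation, for reference):
% let (X, \Sigma, \mu) be a measure space and B a Banach space.
\[
f : X \to B \ \text{is (strongly) measurable}
\iff
f \ \text{is weakly measurable and almost surely separably valued.}
\]
```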
Weakly measurable function
[ "Mathematics" ]
366
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Types of functions" ]
8,681,009
https://en.wikipedia.org/wiki/CCL22
C-C motif chemokine 22 is a protein that in humans is encoded by the CCL22 gene. The protein encoded by this gene is secreted by dendritic cells and macrophages, and elicits its effects on its target cells by interacting with cell surface chemokine receptors such as CCR4. The gene for CCL22 is located in human chromosome 16 in a cluster with other chemokines called CX3CL1 and CCL17. References Further reading External links Cytokines
CCL22
[ "Chemistry" ]
111
[ "Cytokines", "Signal transduction" ]
8,681,733
https://en.wikipedia.org/wiki/CCL17
CCL17 is a powerful chemokine produced in the thymus and by antigen-presenting cells like dendritic cells, macrophages, and monocytes. CCL17 plays a complex role in cancer. It attracts T-regulatory cells, allowing some cancers to evade an immune response. However, in other cancers, such as melanoma, an increase in CCL17 is linked to an improved outcome. CCL17 has also been linked to autoimmune and allergic diseases. Classification CCL17 (CC chemokine ligand 17) was initially named TARC (thymus- and activation-regulated chemokine) when first isolated in 1996. It was later renamed CCL17 as the naming conventions for all cytokines were updated to standardize names. Function Cytokines, like CCL17, help cells communicate with one another, and stimulate cell movement. Chemokines are a type of cytokine that attract white blood cells to sites of inflammation or disease. CCL17 as well as its partner chemokine CCL22 induce chemotaxis in T-helper cells. They do this by binding to CCR4, a chemokine receptor expressed on type 2 helper T cells, cutaneous lymphocyte skin-localizing T cells, and regulatory T cells. CCR4 is also expressed by T cells involved in adult T-cell leukemia/lymphoma and cutaneous T cell lymphomas, making its ligands (namely CCL17) an attractive target for novel therapies as described below. CCL17 is one of the few chemokines that are not stored in the body, except in the thymus; these chemokines are made when needed by dendritic cells, macrophages, and monocytes. CCL17 is expressed constitutively in the thymus, but only transiently in phytohemagglutinin-stimulated peripheral blood mononuclear cells. CCL17 can also be detected in other tissues such as the colon, small intestine, and lung. Granulocyte-macrophage colony-stimulating factor (GM-CSF) upregulates CCL17 production in monocytes and macrophages. Dendritic cells will produce large quantities of CCL17 when stimulated with IL-4 or TSLP. CCL17 was the first CC chemokine identified that interacted with T cells with high affinity. CCL17 was also found to interact with monocytes, but with less affinity. It does not interact with granulocytes. It acts as a powerful chemoattractant to T-helper cells and T-regulatory cells because both can express CCR4. Cancer Classic Hodgkin lymphoma CCL17 was found to be highly expressed by the tumor cells of classic Hodgkin lymphoma. It can be detected by immunohistochemistry in >90% of cases in a diagnostic setting and is highly specific within B cell derived cancers. CCL17 is mainly responsible for the presence of large amounts of T-helper and T-regulatory cells in the tumor microenvironment, which is considered a hallmark of Hodgkin lymphoma. Levels of CCL17 in serum are ~400 times higher in Hodgkin lymphoma patients than in healthy controls and are strongly associated with tumor volume, disease stage, and response to therapy. Its levels begin increasing several years prior to symptoms and diagnosis in many Hodgkin lymphoma patients. Solid cancers This chemokine is very important in the human body’s response to cancers. While it sometimes allows cancer to invade more rapidly, it more often helps the human body fight cancer. Some cancers that form tumors, such as breast cancer, produce CCL17 which draws T regulatory cells into the area, enhancing the cancer’s ability to invade. On the other hand, CCL17 will also activate tumor-infiltrating lymphocytes. For many cancers, the more CCL17 in the area, the better the prognosis is for cancer survival or recovery.
Inflammation Like many cytokines, CCL17 is inflammatory, so while it plays a largely helpful role in attacking cancers, it can induce inflammatory diseases, including allergic skin diseases. Because of its inflammatory effects, much of the medical research focuses on methods to mitigate CCL17. Neutralizing CCL17 with monoclonal antibodies has been shown to relieve inflammatory arthritis and osteoarthritis. Topical steroids have been found to be an effective tool in normalizing levels of CCL17. Autoimmunity CCL17 is known to help leukocytes (and especially eosinophils) target their response to skin-located pathogens. This often occurs through the CCL17-CCR4 interaction on type 2 T helper cells, which then secrete a variety of interleukins. Direct interactions between CCL17 and eosinophils have been observed but not well defined. However, overexpressed CCL17 has been linked to atopic dermatitis (eczema) and multiple sclerosis, among other autoimmune diseases. Studies have shown that children with allergies and atopic dermatitis have higher quantities of CCL17 than children without allergies. As such, therapeutic approaches involving CCL17 regulation have shown some success in several cases. This intervention often involves interfering with CCR4 through monoclonal antibody treatment (such as mogamulizumab). Another option is small-molecule interaction with CCR4, which has not yet had any clinical success. Atopic dermatitis (eczema) Researchers have found that type 2 helper-T cells in lesions of atopic dermatitis (AD) express more IL-4 and IL-13 than unaffected Th2 cells. Dendritic cells respond to IL-4 and IL-13 by secreting CCL17 (as well as CCL18 and CCL22), especially in "barrier-disrupted" skin (such as lesional skin). Because CCL17 is a key attractant for Th2, this creates a cycle of Th2 recruitment, IL-4 and IL-13 signaling, dendritic cell secretion of CCL17, and further recruitment of Th2 cells. Severity of AD is therefore correlated with concentration of CCL17 and CCL22 in both the blood serum and interstitial fluid of pediatric and adult patients with either acute or chronic AD. Because Th2 cells are present at elevated levels during pregnancy, a buildup of CCL17 in umbilical cord blood may summon more Th2 cells, causing the aforementioned positive feedback loop. This is correlated with a higher likelihood of developing AD (and other allergic diseases) in infants (including those of mothers without AD), especially during the first two years of infancy. In adult patients, other signals (such as IL-22) have been shown to correlate with the severity and chronicity of AD in addition to levels of CCL17, although the causal relationships between each of these other signals and CCL17 are not all yet known. Other signaling components, like TSLP, are induced by other lesional epidermal cells and directly upregulate CCL17 production. Clinically, CCL17 has recently shown promise as a useful biomarker for AD severity as well as efficacy of treatment. Historically, physicians have used mostly visual, qualitative evaluations of lesion progress, but using CCL17 to quantify AD has allowed for more precise and accurate records of progress (or regression) during treatment. In concert with this, proposed treatments for AD include topical regulation of CCL17. Especially for infantile AD, where prolonged AD has been linked to severe food allergies, early quantification and treatment are important.
This treatment may take the form of small-molecule inhibition of CCL17-CCR4 binding, which inhibits recruitment of Th2 cells and subsequent development of lesions. Multiple sclerosis (and EAE) Multiple sclerosis (MS), along with its animal model EAE, is an autoimmune disease characterized in part by changes in the expression and regulation of CCL17 in cerebrospinal fluid. There is also evidence to suggest that certain SNPs in the CCL17 and CCL22 genes may raise the risk of MS for an individual. While type 2 helper T (Th2) cells are a key component of AD because they are localized to the skin through the CCL17-CCR4 interaction, memory Th17 cells seem to express high levels of CCR4 in both human and murine models of MS and are therefore likely candidates for study and therapy. Treatments of MS (such as natalizumab or methylprednisolone) seem to lower overall chemokine levels (notably including either CCL17 itself or factors that are known to induce CCL17 production) in addition to their other purported primary functions. However, these findings are complicated by observations of CCR4 up- and downregulation, which have sometimes seemed counter to the CCL17 localization pathways. Experimental explorations with CCL17-deficient mice have therefore counterintuitively given different information than experiments measuring CCR4 regulation for EAE. Other disorders Several other disorders are also correlated with high levels of CCL17 or use CCL17 to localize Th2 cells. CCL17 can act as an inflammatory agent or as a symptom, and in either case, disrupting or manipulating the expression or ligand binding offers a therapeutic target. Regardless of therapeutic potential, it can also be used as a biomarker of disease. Drug rash with eosinophilia and systemic symptoms (DRESS) Bullous pemphigoid (BP) Senile erythroderma Eosinophilic pustular folliculitis Chronic spontaneous urticaria (hives) Maculopapular exanthema Stevens-Johnson syndrome/toxic epidermal necrolysis (Non-)episodic angioedema with eosinophilia Allergic asthma Allergic rhinitis/chronic rhinosinusitis with nasal polyps (CRSwNP) Eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome) Acute and chronic eosinophilic pneumonia Mycosis fungoides (MF) Sezary syndrome (SS) Lymphocytic variant HES Acute disseminated encephalomyelitis (ADEM) Neuromyelitis optica (NMO) (Devic's disease) Chromosomal location In humans the gene for CCL17 is located on chromosome 16 along with other chemokines including CCL22 and CX3CL1. References Further reading External links Cytokines
CCL17
[ "Chemistry" ]
2,283
[ "Cytokines", "Signal transduction" ]
8,681,935
https://en.wikipedia.org/wiki/Long-footed%20potoroo
The long-footed potoroo (Potorous longipes) is a small marsupial found in southeastern Australia, restricted to an area around the coastal border between New South Wales and Victoria. It was first recorded in 1967 when an adult male was caught in a dog trap in the forest southwest of Bonang, Victoria. It is classified as vulnerable. P. longipes is the largest species of Potorous, resembling the long-nosed potoroo, Potorous tridactylus. It is a solitary, nocturnal creature, feeding on fungi, vegetation, and small invertebrates. It differs from P. tridactylus in its larger feet and longer tail. Current threats to the species include predation by introduced feral cats and foxes, and loss of habitat from logging within its limited range. Taxonomy The scientific name of the animal commonly known as the long-footed potoroo is Potorous longipes. Potoroo is the common name shared by the three other species in the genus Potorous: Gilbert's potoroo, P. gilbertii, the broad-faced potoroo, P. platyops, and the long-nosed potoroo, P. tridactylus. P. longipes is the largest potoroo, and most resembles P. tridactylus. The species was first recorded in 1967 in the East Gippsland region of Victoria, Australia. The formal description was published in 1980. Remains of the long-footed potoroo were found in predator droppings in 1986. Description and anatomy The long-footed potoroo is a very rare marsupial only found in Australia. A potoroo is a small type of kangaroo-like marsupial. It is about the size of a rabbit and, as its common name suggests, it has very long hind feet. These feet have long toes with very strong claws. The species is the largest of the potoroos, with males weighing up to and females . The entire body length is . The tail can be between in length, while the hind foot is . This animal can be differentiated from other potoroos by its long back feet, which are the same length relative to its head. It has an extra footpad called the hallucal pad. The long-footed potoroo hops in a similar fashion to a kangaroo, yet can use its tail to grasp objects. It has a soft, dense coat, with grayish-brown fur that slowly fades into a lighter color on the feet and belly. Behavior and life history Habitat and distribution The long-footed potoroo lives in a range of montane forests. It has also been found in the warmer temperate rainforest. This species lives where the soil is constantly moist. It spends the daytime sleeping in a nest on the ground in a hidden, sheltered area. An essential feature of the long-footed potoroo's habitat is the dense vegetation cover that supplies protection and shelter from predators. This species was not known to science until 1967, so historically, it is inadequately understood. It has a very restricted area where it lives. The main populations can be found in Victoria, in the Barry Mountains, which is in the northeast part of the state, and in East Gippsland, located in the far east. A smaller population lives north of the Victorian border in the south-east forest of New South Wales. Population The long-footed potoroo is very difficult to find in the wild due to its shy behavior. The National Recovery Plan states that it is unlikely that as many as a few thousand individuals remain in the wild; only a few hundred long-footed potoroos may survive. Diet Long-footed potoroos' diet normally consists of up to 91% fruiting fungi found underground. They are known to consume up to 58 different species of fungi as part of their diet.
These underground fungi are also called sporocarps or truffles. If necessary, they may also eat fruits, plant material, and soil-dwelling invertebrates. Their jaws have shearing premolars and molars that are rounded at the top, indicating a varied diet is consumed. The long-footed potoroo plays a part in the symbiotic relationship between the fungi (Ectomycorrhizae) and the trees. It helps this relationship by releasing the spores of the fruiting fungi through its fecal material. In turn, this helps keep the forest healthy, benefiting both the fungi and the forest. The species of fungi that are eaten in the winter and summer are similar, but the amount of each type of fungal species varies between seasons and years. It has a sacculated fore stomach in which bacterial fermentation occurs. This aids in the breakdown of fungal cell walls. Behavior and communication The long-footed potoroo is very shy and elusive. It can produce a vocalization, a low "kiss kiss" sound, when stressed or to communicate with its offspring. Although the long-footed potoroo is a nocturnal species, it may partake in early-morning basking in the sun. The long-footed potoroo is constantly hidden from plain sight. Under normal conditions, males are not aggressive. Nevertheless, if provoked, they can become aggressive in defending their home. Mating, reproduction, and parental care Breeding can occur all year, yet most young are born in the winter, spring, and early summer. Higher rainfall and deep, moist soil full of leaf litter provide a stable food supply. In turn, these periods of good conditions allow breeding to occur easily. When a female is in estrus, nearby males fight with one another until dominance is established. The species has a monogamous mating system. The gestation period is around 38 days. In captivity, the offspring stay in the mother's pouch for 140 to 150 days. The offspring then reach sexual maturity at around 2 years old. Females can give birth to up to three young per year, though one or two young are most commonly seen. After the young leave the pouch, they can stay with their mothers for up to 20 weeks until they become independent. They stay in the mother's territory up to 12 months before leaving. The long-footed potoroo exhibits postpartum oestrus and embryonic diapause. Movement patterns The long-footed potoroo moves to different parts of its territory due to the distribution of fungi. Thus seasonally, their territory boundaries change following the distribution of truffles. Males use a larger home range area than females do. The species is territorial and the territories of mated pairs can overlap with each other, but not with those of other pairs. The home range of the long-footed potoroo is between 22 and 60 ha in East Gippsland and between 14 and 23 ha in north-eastern Victoria. Conservation issues Status As of 2006, the long-footed potoroo has been classified as endangered (EN) by the IUCN Red List. According to the IUCN Red List, the long-footed potoroo is considered endangered because its area of occurrence is less than 5,000 km2. Across the dispersed area where the animal is found, the number of individuals is most likely in decline due to predators and competition for food from introduced pigs. It is listed as an endangered species on schedule 1 of the New South Wales Threatened Species Conservation Act 1995.
It is also considered an endangered species under the Commonwealth Environmental Protection and Biodiversity Conservation Act 1999, and as endangered by the Victorian Flora and Fauna Guarantee Act 1988. Threats Their most serious predators include the red fox, feral cats, and wild dogs, all invasive species. Their habitat is greatly disturbed by road building; they appear to move along these roads and forage for food in these areas, which also puts them at risk of being hit by motor vehicles. In Victoria, the State Forest has about half of the long-footed potoroo population. Introduced pigs may be a large competitor for the long-footed potoroo's specialized diet. Conservation plans Information on this rare species is spotty. Thus, to conserve it effectively, further studies on its way of life and habitat need to be conducted. Research was performed on a small captive population that was able to breed in the 1980s and 1990s at the Healesville Sanctuary. Small steps have been taken to increase the population of the long-footed potoroo and to protect it from extinction. In the State Forest of Victoria, the long-footed potoroo is protected through special areas in which logging is monitored or prevented and burning of the forest has been reduced. Their introduced predators, such as wild dogs, the red fox and feral cats, have also been put under control. This will allow the long-footed potoroo to reclaim its habitat and allow its numbers to rise again. Conservation plans such as these will not only benefit the long-footed potoroo, but will also be beneficial to other threatened animal species in this area. 2019–2020 Australian bushfires Over 82% of its habitat was burnt during the 2019-2020 Australian bushfires. References External links Images and movies of long-footed potoroo at ARKive Foundation for National Parks & Wildlife Potoroids Endangered fauna of Australia Mammals of New South Wales Mammals of Victoria (state) EDGE species Mammals described in 1980
Long-footed potoroo
[ "Biology" ]
1,882
[ "EDGE species", "Biodiversity" ]
8,682,105
https://en.wikipedia.org/wiki/CX3CL1
Fractalkine, also known as chemokine (C-X3-C motif) ligand 1, is a protein that in humans is encoded by the CX3CL1 gene. Function Fractalkine is a large cytokine protein of 373 amino acids that contains multiple domains and is the only known member of the CX3C chemokine family. It is also commonly known under the names fractalkine (in humans) and neurotactin (in mice). The polypeptide structure of CX3CL1 differs from the typical structure of other chemokines. For example, the spacing of the characteristic N-terminal cysteines differs; there are three amino acids separating the initial pair of cysteines in CX3CL1, with none in CC chemokines and only one intervening amino acid in CXC chemokines. CX3CL1 is produced as a long protein (373 amino acids in humans) with an extended mucin-like stalk and a chemokine domain on top. The mucin-like stalk permits it to bind to the surface of certain cells. However, a soluble (90 kDa) version of this chemokine has also been observed. Soluble CX3CL1 potently chemoattracts T cells and monocytes, while the cell-bound chemokine promotes strong adhesion of leukocytes to activated endothelial cells, where it is primarily expressed. CX3CL1 elicits its adhesive and migratory functions by interacting with the chemokine receptor CX3CR1. Its gene is located on human chromosome 16 along with some CC chemokines known as CCL17 and CCL22. Fractalkine is found commonly throughout the brain, particularly in neural cells, and its receptor is known to be present on microglial cells. It has also been found to be essential for microglial cell migration. CX3CL1 is also up-regulated in the hippocampus during a brief temporal window following spatial learning, the purpose of which may be to regulate glutamate-mediated neurotransmission tone. This indicates a possible role for the chemokine in the protective plasticity process of synaptic scaling. References External links Further reading Cytokines
CX3CL1
[ "Chemistry" ]
495
[ "Cytokines", "Signal transduction" ]
8,682,297
https://en.wikipedia.org/wiki/Picocassette
Picocassette is an audio storage medium introduced by Dictaphone in collaboration with JVC in 1985. The Picocassette was introduced to compete with the Microcassette, introduced by Olympus, and the Mini-Cassette, by Philips. Size It is approximately half the size of the previous Microcassette, and was intended for highly portable dictation devices. With a tape speed of 9 mm/s, each cassette could hold up to 60 minutes of dictation, 30 minutes per side. The signal-to-noise ratio was 35 dB. The widest dimension of the picocassette was near . See also Microcassette Mini-Cassette NT (cassette) References External links Image of a Picocassette (including ruler and Compact Cassette for comparison), at the Cassette Recorder Museum Techmoan: The Picocassette - Smallest Analogue Cassette Tape ever made Audio storage Audiovisual introductions in 1985 Tape recording Products introduced in 1985
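As a quick consistency check on the figures quoted above, the tape length implied by the stated speed and capacity can be worked out directly. The snippet below uses only the 9 mm/s speed and 30 minutes per side given in the text, and assumes constant tape speed with both sides sharing the same length of tape (played in opposite directions, as on a Compact Cassette).

```python
# Quick arithmetic check using the quoted Picocassette figures:
# 9 mm/s tape speed and 30 minutes of dictation per side.
tape_speed_mm_per_s = 9.0
seconds_per_side = 30 * 60  # 30 minutes

tape_length_m = tape_speed_mm_per_s * seconds_per_side / 1000.0  # mm -> m
print(f"Implied tape length: {tape_length_m:.1f} m")  # about 16.2 m of tape
```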
Picocassette
[ "Technology" ]
199
[ "Recording devices", "Tape recording" ]
8,684,186
https://en.wikipedia.org/wiki/Lobby%20%28room%29
A lobby is a room in a building used for entry from the outside. Sometimes referred to as a foyer, entryway, reception area or entrance hall, it is often a large room or complex of rooms (in a theatre, opera house, concert hall, showroom, cinema, etc.) adjacent to the auditorium. It may be a repose area for spectators, especially used before performance and during intermissions, but also as a place of celebrations or festivities after performance. In other buildings, such as office buildings or condominiums, lobbies can function as gathering spaces between the entrance and elevators to other floors. Since the mid-1980s, there has been a growing trend to think of lobbies as more than just ways to get from the door to the elevator but instead as social spaces and places of commerce. Some research has even been done to develop scales to measure lobby atmosphere to improve hotel lobby design. Many office buildings, condominiums, hotels and skyscrapers go to great lengths to decorate their lobbies to create the right impression and convey an image. Etymology The word "lobby" comes from Medieval Latin lobia, laubia or lobium. Gallery See also Atrium (architecture) Genkan Vestibule (architecture) References Parts of a theatre Rooms
Lobby (room)
[ "Technology", "Engineering" ]
264
[ "Rooms", "Parts of a theatre", "Components", "Architecture" ]
8,684,340
https://en.wikipedia.org/wiki/Structured%20Audio%20Orchestra%20Language
Structured Audio Orchestra Language (SAOL) is an imperative, MUSIC-N programming language designed for describing virtual instruments, processing digital audio, and applying sound effects. It was published as subpart 5 of MPEG-4 Part 3 (ISO/IEC 14496-3:1999) in 1999. As part of the MPEG-4 international standard, SAOL is one of the key components of the MPEG-4 Structured Audio toolset, along with: Structured Audio Score Language (SASL) Structured Audio Sample Bank Format (SASBF) The MPEG-4 SA scheduler MIDI support See also Csound MPEG-4 Structured Audio References The MPEG-4 Structured Audio Standard External links SAOL.net - MPEG4 structured audio (mp4-sa) Audio programming languages MPEG
Structured Audio Orchestra Language
[ "Technology" ]
166
[ "Multimedia", "MPEG" ]
8,684,736
https://en.wikipedia.org/wiki/Rex%20Ronan%3A%20Experimental%20Surgeon
Rex Ronan: Experimental Surgeon is an educational action video game developed by Sculptured Software and published by Raya Systems for the Super Nintendo Entertainment System. The game teaches players about the hazards of smoking tobacco cigarettes. The initial development of the game received support from the US Agency for Healthcare Research and Quality. It is a part of educational video game series from Raya that includes Captain Novolin, Packy and Marlon and Bronkie the Bronchiasaurus. Gameplay and plot Jake Westboro is a man with it all: a beautiful wife and child, a large house in the suburbs, and the massively-paying position of a major CEO for the Blackburn Tobacco Company. As a result of smoking since he was 15, however, Jake is now dying from the effects of the cigarettes that he once sold. An experimental surgeon, Rex Ronan, volunteers to shrink himself and a craft down to near-microscopic size, so he can travel inside of Jake's body and fight his various diseases; removing tar, nicotine, precancerous cells, and other deadly health hazards. However, the tobacco company are concerned that if Jake survives, he will speak to the world about the hazards of tobacco and accordingly ruin their business; so they secretly place microbots inside him en-masse in an attempt to stop Ronan from treating him. If Ronan dies from the evil killbots sent by the company, then so does the patient, Jake. Evaluations Richard M. Satava mentioned two evaluations in which a number of children (none of whom were more than 12 years old) played a prototype version of the game. The average of the results indicated children who experienced enjoyment and who showed an interest with regard to acquiring information about tobacco's effects on people. According to Richard L. Street and Timothy R. Manning, the target audience was children and teenagers in the age range of 10 to 16 years. In addition, when it comes to persons who take up smoking, those in the age range of 10 to 16 years are at the highest risk, according to Street and Manning. References External links 1994 video games Action games Children's educational video games Fiction about nanotechnology Human body in popular culture Medical video games North America-exclusive video games Raya Systems games Sculptured Software games Single-player video games Super Nintendo Entertainment System games Super Nintendo Entertainment System-only games Video games about diseases Video games about microbes Video games about robots Video games about size change Video games developed in the United States
Rex Ronan: Experimental Surgeon
[ "Materials_science" ]
501
[ "Fiction about nanotechnology", "Nanotechnology" ]
8,685,313
https://en.wikipedia.org/wiki/Straddle%20carrier
A straddle carrier or straddle truck is a freight-carrying vehicle that carries its load underneath by "straddling" it, rather than carrying it on top like a conventional truck. The advantage of the straddle carrier is its ability to load and unload without the assistance of cranes or forklifts. The lifting apparatus under the carrier is operated by the driver without any outside assistance and without leaving the driver's seat. Lumber carriers The straddle carrier was invented by H. B. Ross in 1913 as a road-going vehicle that could easily transport lumber around mills and yards. Lumber was stacked on special pallets known as carrier blocks; the carrier would then straddle the stack, grasp and lift the carrier block, and drive off with the load. Because a straddle carrier is open at both front and rear, it can transport lumber much longer than the carrier itself, over in length. The Ross Carrier Company (now Northwest Caster & Equipment ) was founded in Seattle to manufacture and market the carrier, and similar designs were later manufactured by Gerlinger, Hyster, Yale, Caterpillar, and other companies. These "straddles" or "timber jinkers" were a common sight in seaports around the world until the 1970s, but were phased out as larger and faster conventional trucks came into use. An example of these road-going straddle carriers can be seen in the 1950 comedy film Watch the Birdie. Industrial straddle carriers Similar industrial straddle carriers are used in manufacturing and construction, both for handling oversized loads such as steel and pre-cast concrete and where transportation of special loads such as nitrogen tanks is required in restricted spaces not suitable for trucks. A key advantage of industrial straddle carriers and reach stackers over most forklifts is the ability to load or unload a semi-trailer in a single operation, which can improve efficiency. Straddle carriers are also used for handling boats onshore. These are also often called travel lifts or travelifts. Shipping container carriers The most common use of straddle carriers is in port terminals and intermodal yards, where they are used for stacking and moving ISO standard containers. The carrier straddles its load, picking it up and carrying it by connecting to the top lifting points using a container spreader. Some machines have the ability to stack containers up to four high. They travel at relatively low speeds (up to ) with a laden container. Drivers of the carrier sit sideways at the very top, and face the middle, so they can see behind and in front of the vehicle. Straddle carriers can lift up to , which equals up to two full containers. Gallery See also Crane Gantry crane References External links Combilift Straddle Carrier Liebherr straddle carrier Mobile cranes Cranes (machines) Intermodal containers Port infrastructure
Straddle carrier
[ "Engineering" ]
590
[ "Engineering vehicles", "Cranes (machines)" ]
8,685,463
https://en.wikipedia.org/wiki/Wings%20%28Chinese%20constellation%29
The Wings mansion (翼宿, pinyin: Yì Xiù) is one of the Twenty-eight mansions of the Chinese constellations. It is one of the southern mansions of the Vermilion Bird. Asterisms References Chinese constellations
Wings (Chinese constellation)
[ "Astronomy" ]
50
[ "Chinese constellations", "Constellations" ]
8,685,802
https://en.wikipedia.org/wiki/City%20of%20Hope%20National%20Medical%20Center
City of Hope is a private, non-profit clinical research center, hospital and graduate school located in Duarte, California, United States. The center's main campus resides on of land adjacent to the boundaries of Duarte and Irwindale, with a network of clinical practice locations throughout Southern California, satellite offices in Monrovia and Irwindale, and regional fundraising offices throughout the United States. City of Hope is best known as a cancer treatment center. It has been designated a Comprehensive Cancer Center by the National Cancer Institute. City of Hope has also been ranked one of the nation's Best Cancer Hospitals by U.S. News & World Report for over ten years and is a founding member of the National Comprehensive Cancer Network. City of Hope played a role in the development of synthetic human insulin in 1978. The center has performed 13,000 hematopoietic stem cell transplants as of 2016 with patient outcomes that consistently exceed national averages. History In the late 19th and early 20th centuries, the spread of tuberculosis, also known as "consumption", was a growing concern in the United States and Europe. Owing to advancements in the scientific understanding of its contagious nature, a movement to house and quarantine sufferers became prevalent. Construction of tuberculosis sanatoria, including tent cities, became common in the United States, with many sanatoriums located in the Southwestern United States, where it was believed that the more arid climate would aid sufferers. In 1913, the Jewish Consumptive Relief Association was chartered in Los Angeles, California, with the intent of raising money to establish a free, non-sectarian sanatorium for persons from throughout the United States diagnosed with tuberculosis. After raising sufficient funds, the association purchased of land in Duarte, California, a small town in the more arid San Gabriel Valley, approximately east of downtown Los Angeles, and dubbed the property the Los Angeles Sanatorium. Opening January 11, 1914, the sanatorium originally consisted of two tents, one for patients and one for caregivers. The sanatorium was nicknamed "the city of hope", and grew in size for several decades, continuing to raise funds, construct permanent facilities, hire doctors and treat increasing numbers of patients. Treating tuberculosis remained the sanatorium's focus until after World War II, when antibiotics for tuberculosis were discovered. With tuberculosis becoming less prevalent, executive sanatorium director Samuel H. Golter began an initiative in 1946 to transform the sanatorium into a full medical center, supported by a research institute and post-graduate education. The Los Angeles Sanatorium officially changed its name to City of Hope National Medical Center in 1949. City of Hope's research institute was formally established in 1952. The City of Hope Graduate School of Biological Sciences was eventually chartered in 1993, and changed its name to the Irell & Manella Graduate School of Biological Sciences in 2009. From 1953 to 1985, under executive director Ben Horowitz, City of Hope grew further in size and became best known for its cancer research and treatment programs. Horowitz raised City of Hope's annual average operating budget from $600,000 to more than $100 million during his tenure. In 1981, the National Cancer Institute designated City of Hope a "Clinical Cancer Research Center". 
In 1983, the Arnold and Mabel Beckman Foundation awarded City of Hope a $10 million grant to establish the Beckman Research Institute of City of Hope; the Beckman Research Institute of City of Hope is now City of Hope's research moniker, and is one of six institutes/centers established by the Beckman Foundation in the United States. From 1983 to the present, City of Hope continued to grow, expanding its Duarte campus with additional patient care, research and support facilities. City of Hope also operates a network of community practice clinics throughout Southern California. City of Hope acquired Cancer Treatment Centers of America in 2022 and began operating the facilities as City of Hope in 2023. Research and treatment City of Hope's institutional goals are the prevention, treatment and cure of cancer and other life-threatening diseases, including diabetes and HIV/AIDS. As such, City of Hope's programs include the fields of brain, breast, gastrointestinal, gynecologic, thoracic and urologic cancers, as well as leukemia, lymphoma, and diabetes. City of Hope has been designated a Comprehensive Cancer Center by the National Cancer Institute, a branch of the National Institutes of Health. City of Hope is a bench to bedside institution, with investments in basic, translational and clinical research. Faculty, residents and fellows conduct biomedical research, treat patients and educate medical professionals with the medical center serving as a teaching hospital. Industrial, institutional, and National Cancer Institute-sponsored external peer-reviewed clinical trials are conducted at City of Hope. Synthetic human insulin In 1978, City of Hope researchers Arthur Riggs and Keiichi Itakura, working with Herbert Boyer of San Francisco-based biotechnology corporation Genentech, Inc., became the first scientists to produce synthetic human insulin. City of Hope licensed patents based on Riggs's and Itakura's work to Genentech. On August 13, 1999, City of Hope sued Genentech for allegedly cheating it out of its fair share of the profits from products based on the Riggs-Itakura patents. On April 24, 2008, the Supreme Court of California affirmed the jury's award of $300 million in contractual damages to City of Hope but reversed the award of $200 million in punitive damages. Hematopoietic cell transplantation On January 13, 2011, City of Hope performed its 10,000th hematopoietic stem cell transplantation, which includes transplants of bone marrow, peripheral blood stem cells collected by apheresis, and umbilical cord stem cells. By 2016, this has grown to over 13,000 stem cell transplants. National Comprehensive Cancer Network City of Hope is a founding member of the National Comprehensive Cancer Network (NCCN), a non-profit alliance of 21 U.S. cancer centers. The NCCN publishes clinical practice guidelines for oncological treatment among its member institutions. Member institutions include City of Hope, The University of Texas MD Anderson Cancer Center, St. Jude Children's Research Hospital/University of Tennessee Cancer Institute, Fox Chase Cancer Center in Philadelphia, Pennsylvania, Fred Hutchinson Cancer Research Center in Seattle, Washington, and 16 others. Facilities Patient care facilities City of Hope's main campus in Duarte has several treatment facilities for inpatient and outpatient care, including the Helford Clinical Research Hospital, Michael Amini Transfusion Medicine Center, the Geri and Richard Brawerman Center for Ambulatory Care and the Women's Center. 
In addition to the Duarte facilities, City of Hope has treatment facilities across Southern California, Georgia, Illinois and Arizona. Southern California community practice clinics are located in Antelope Valley, Arcadia, Corona, Glendale, Glendora, Huntington Beach, Irvine Sand Canyon, Long Beach, Mission Hills, Newport Beach, Palmdale, Pasadena, Riverside, San Bernardino, Santa Clarita, Sherman Oaks, Simi Valley, South Bay, South Pasadena, Temecula, Thousand Oaks, Torrance, Upland, West Covina, West Hills and Wildomar. City of Hope is building a $200 million, six-story cancer hospital, which will anchor its Lennar Foundation Cancer Center in Irvine, in Orange County. The center is slated to open in 2025. City of Hope is accredited by the Joint Commission, a private body which accredits over 17,000 health care organizations and programs in the United States. Beckman Research Institute of City of Hope Beckman Research Institute of City of Hope is one of six research facilities established by funding from the Arnold and Mabel Beckman Foundation. Its primary focus is research in the areas of cancer, diabetes, and HIV/AIDS. The institute shelters the City of Hope Irell & Manella Graduate School of Biological Sciences. Research conducted at the institute has contributed to discoveries in the areas of recombinant DNA technology, gene therapy and monoclonal antibodies. Center for Biomedicine & Genetics City of Hope Center for Biomedicine & Genetics is a manufacturing facility specializing in the production of pharmaceutical-grade materials. The center also assists clinical investigators with translational research and clinical trials. Irell & Manella Graduate School of Biological Sciences The graduate school at City of Hope; the school is housed within the Arnold and Mabel Beckman Center for Cancer Immunotherapeutics & Tumor Immunology. Patient housing City of Hope has 40 temporary, on-site residential housing units for patients and their caregivers, with integrated hospice and palliative care. Fundraising City of Hope secures funding from a mixture of sources, including patient revenue, private donations, foundation support and federal research grants. Annual fundraising events include Walk for Hope (a multi-city charity fundraising walk), Concert for Hope (a fundraising concert featuring celebrity musicians), and the City of Hope Celebrity Softball Challenge, held in Nashville, Tennessee. City of Hope maintains eight regional fundraising offices in various cities throughout the United States, including Palm Desert, Phoenix, San Diego, San Francisco, Seattle, Chicago, Philadelphia, and Ft. Lauderdale, Florida. The hospital also fundraises using giving days. In 2016, Doctors' Day allowed patients to thank doctors by giving in their name. More than $9000 was raised through 60 gifts. In 2017, City of Hope was planning a Bone Marrow Transplant Reunion Day and Survivors Day. The hospital also participates in #GivingTuesday. In 2015, the first time the hospital used the fundraiser, almost $120,000 were raised from 681 gifts. In 2016, those numbers rose to almost $200,000 from more than 1500 gifts. In January 2017 City of Hope received a donation of more than $50 million to establish the Wanek Family Project for Type 1 Diabetes at City of Hope. 
Affiliations City of Hope is affiliated with the following institutions: Association of Community Cancer Centers National Cancer Institute National Comprehensive Cancer Network (founding member) National Bone Marrow Transplantation Research Network (founding member) National Gene Vector Laboratory Southern California Islet Cell Consortium (SC-IC) Islet Cell Transplant Center Juvenile Diabetes Research Foundation Recognition In 2023, City of Hope was ranked as one of the top 10 "Best Hospitals" in cancer (#7) by U.S. News & World Report. In 2009, City of Hope was listed among eight preferred cancer hospitals in the May/June issue of AARP Magazine, which published the results of a survey of doctors from throughout the United States conducted by Consumers' Checkbook, a Washington, D.C.-based non-profit health care provider rating service. Sampled doctors were asked "where they were most likely to send patients with extremely difficult cases". As of 2016, 52 City of Hope physicians are currently listed as "Top Doctors" by Castle Connolly Medical Ltd., as nominated by their peers in the medical profession. Castle Connolly is an independent company that surveys thousands of medical professionals in the United States and publishes the results in an annual consumer guide, America's Top Doctors. In December 2015, CharityWatch rates City of Hope / Beckman Research Institute charity an "A−" grade. In 2016, Charity Navigator gave City of Hope a 4 stars – its highest rating – for the 11th consecutive year. Grants National Cancer Institute grants Specialized Program of Research Excellence (SPORE) grant for translational research studies for Hodgkin and non-Hodgkin lymphoma - Five-year, $11.5 million. Grant to City of Hope's Division of Nursing Research for study of palliative care and quality-of-life concerns for lung cancer patients - Five-year, $13.4 million. Grant to City of Hope's Division of Cancer Etiology for the California Teachers Study, a survey of over 130,000 public school teachers and administrators to study the link between obesity, physical activity, hormone exposure and cancer - Three-year, $5 million. Grant to City of Hope's Department of Population Sciences to study genetic susceptibility for secondary malignancies as a result of treatment for cancer survivors - Five-year, $3.4 million. Other grants California Institute for Regenerative Medicine (CIRM) grants for AIDS-related lymphoma and brain cancer research - $32.5 million. National Library of Medicine/National Institute of Diabetes and Digestive and Kidney Diseases grant to City of Hope's Department of Information Sciences to serve as coordinating center for distribution of islet cells and intestinal stem cells - $17 million. National Institute of Environmental Health Sciences grant to study ultraviolet light damage and its effect on mutagenesis - $2 million. Contracts National Heart, Lung, and Blood Institute (NHLBI) contract to facilitate stem cell research from laboratory to clinical study; focus on development and manufacture of stem cell therapies—five-year, $8.6 million. References External links Cancer hospitals Cancer organizations based in the United States Duarte, California History of biotechnology Hospitals in Los Angeles County, California Medical research institutes in California 1913 establishments in California Religious organizations established in 1913 NCI-designated cancer centers City of Hope National Medical Center
City of Hope National Medical Center
[ "Biology" ]
2,717
[ "History of biotechnology" ]
8,686,104
https://en.wikipedia.org/wiki/Anthracotheriidae
Anthracotheriidae is a paraphyletic family of extinct, hippopotamus-like artiodactyl ungulates related to hippopotamuses and whales. The oldest genus, Elomeryx, first appeared during the middle Eocene in Asia. They thrived in Africa and Eurasia, with a few species ultimately entering North America during the Oligocene. They died out in Europe and Africa during the Miocene, possibly due to a combination of climatic changes and competition with other artiodactyls, including pigs and true hippopotamuses. The youngest genus, Merycopotamus, died out in Asia during the late Pliocene, possibly for the same reasons. The family is named after the first genus discovered, Anthracotherium, which means "coal beast", as the first fossils of it were found in Paleogene-aged coal beds in France. Fossil remains of the anthracothere genus were discovered by the Harvard University and Geological Survey of Pakistan joint research project (Y-GSP) in the well-dated middle and late Miocene deposits of the Pothohar Plateau in northern Pakistan. In life, the average anthracothere would have resembled a skinny hippopotamus with a comparatively small, narrow head and most likely pig-like in general appearance. They had four or five toes on each foot, and broad feet suited to walking on soft mud. They had full sets of about 44 teeth with five semicrescentric cusps on the upper molars, which, in some species, were adapted for digging up the roots of aquatic plants. Evolutionary relationships Some skeletal characters of anthracotheres suggest they are related to hippos. The nature of the sediments in which they are fossilized implies they were amphibious, which supports the view, based on anatomical evidence, that they were ancestors of the hippopotamuses. In many respects, especially the anatomy of the lower jaw, Anthracotherium, as with other members of the family, is allied to the hippopotamus, of which it is probably an ancestral form. However, one study suggests that instead of anthracotheres, another pig-like group of artiodactyls, the palaeochoerids, are the true stem group of Hippopotamidae. Recent evidence, gained from comparative gene sequencing, further suggests that hippos are the closest living relatives of whales, so, if anthracotheres are stem hippos, they would also be related to whales in a clade provisionally called Whippomorpha. However, the earliest known anthracotheres appear in the fossil record in the middle Eocene, well after the archaeocetes had already taken up totally aquatic lifestyles. Although phylogenetic analyses of molecular data on extant animals strongly support the notion that hippopotamids are the closest relatives of cetaceans (whales, dolphins and porpoises), the two groups are unlikely to be closely related when extant and extinct artiodactyls are analyzed. Cetaceans originated about 50 million years ago in the Tethys Sea between India and China, whereas the family Hippopotamidae is only 15 million years old, and the first Asian hippopotamids are only 6 million years old. Yet, analyses of fossil clades have not resolved the issue of cetacean relations. Another study has offered a suggestion that anthracotheres are part of a clade that also consists of entelodonts (and even Andrewsarchus) and that is a sister clade to other cetancodonts, with Siamotherium as the most basal member of the clade Cetacodontamorpha. References Ancodonta Piacenzian extinctions Eocene first appearances Prehistoric mammal families Taxa named by Joseph Leidy Paraphyletic groups
Anthracotheriidae
[ "Biology" ]
809
[ "Phylogenetics", "Paraphyletic groups" ]
8,687,376
https://en.wikipedia.org/wiki/Pantopon
Pantopon, also known as Opium Alkaloids Hydrochlorides, is a preparation of opiates made up of all of the alkaloids present in opium in their natural proportions as hydrochlorides salts. It can sometimes be tolerated by people who are allergic to morphine. Pantopon is prepared by treating standardized medicinal opium with hydrochloric acid or, more commonly, mixing 20 parts morphine HCl, 5 parts codeine, 6 parts thebaine, 8 parts noscapine, 2 parts narcotine, 6 parts miscellaneous alkaloids hydrochlorides. Pantopon is, in other words, opium with all of the tar and other insolubles removed in an injectable form which, by weight, is nearly as potent as morphine. It was invented in 1909 by the Hoffmann-La Roche pharmaceutical company. Other drugs of the same type have included in the opium alkaloid hydrobromides, sulfates, phosphates, and valerates. "Opium in a syringe " and "Injectable Whole Opium" were common advertising slogans for the product from Roche. An example of similar product to Pantopon is Omnopon, which contains morphine, codeine, and papaverine. Society and culture Pantopon gave its name to the poem "Pantopon Rose" by the American writer William Burroughs and to a song with the same name by the Northern Ireland alternative metal band Therapy? on their 1994 album Troublegum. Pantopon also gave its name to the 1996 Mexican documentary Rosa Pantopon. References Opiates
Pantopon
[ "Chemistry" ]
332
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
8,687,829
https://en.wikipedia.org/wiki/Myriocin
Myriocin, also known as antibiotic ISP-1 and thermozymocidin, is a non-proteinogenic amino acid derived from the entomopathogenic fungus Isaria sinclairii. Myriocin is a very potent inhibitor of serine palmitoyltransferase, the enzyme that catalyzes the first step in sphingosine biosynthesis. Due to this property, it is used in biochemical research as a tool for depleting cells of sphingolipids. Myriocin was shown to inhibit the proliferation of an IL-2-dependent mouse cytotoxic T cell line. Myriocin possesses immunosuppressant activity. It is reported to be 10- to 100-fold more potent than ciclosporin. The multiple sclerosis drug fingolimod was derived from myriocin by using structure–activity relationship studies to determine the parts of the molecule important to its activity. References Antibiotics Alpha-Amino acids Beta hydroxy acids Vicinal diols
Myriocin
[ "Biology" ]
213
[ "Antibiotics", "Biocides", "Biotechnology products" ]
8,687,911
https://en.wikipedia.org/wiki/Proper%20acceleration
In relativity theory, proper acceleration is the physical acceleration (i.e., measurable acceleration as by an accelerometer) experienced by an object. It is thus acceleration relative to a free-fall, or inertial, observer who is momentarily at rest relative to the object being measured. Gravitation therefore does not cause proper acceleration, because the same gravity acts equally on the inertial observer. As a consequence, all inertial observers always have a proper acceleration of zero. Proper acceleration contrasts with coordinate acceleration, which is dependent on choice of coordinate systems and thus upon choice of observers (see three-acceleration in special relativity). In the standard inertial coordinates of special relativity, for unidirectional motion, proper acceleration is the rate of change of proper velocity with respect to coordinate time. In an inertial frame in which the object is momentarily at rest, the proper acceleration 3-vector, combined with a zero time-component, yields the object's four-acceleration, which makes proper-acceleration's magnitude Lorentz-invariant. Thus the concept is useful: (i) with accelerated coordinate systems, (ii) at relativistic speeds, and (iii) in curved spacetime. In an accelerating rocket after launch, or even in a rocket standing on the launch pad, the proper acceleration is the acceleration felt by the occupants, and which is described as g-force (which is not a force but rather an acceleration; see that article for more discussion) delivered by the vehicle only. The "acceleration of gravity" (involved in the "force of gravity") never contributes to proper acceleration in any circumstances, and thus the proper acceleration felt by observers standing on the ground is due to the mechanical force from the ground, not due to the "force" or "acceleration" of gravity. If the ground is removed and the observer allowed to free-fall, the observer will experience coordinate acceleration, but no proper acceleration, and thus no g-force. Generally, objects in a state of inertial motion, also called free-fall or a ballistic path (including objects in orbit) experience no proper acceleration (neglecting small tidal accelerations for inertial paths in gravitational fields). This state is also known as "zero gravity" ("zero-g") or "free-fall," and it produces a sensation of weightlessness. Proper acceleration reduces to coordinate acceleration in an inertial coordinate system in flat spacetime (i.e. in the absence of gravity), provided the magnitude of the object's proper-velocity (momentum per unit mass) is much less than the speed of light c. Only in such situations is coordinate acceleration entirely felt as a g-force (i.e. a proper acceleration, also defined as one that produces measurable weight). In situations in which gravitation is absent but the chosen coordinate system is not inertial, but is accelerated with the observer (such as the accelerated reference frame of an accelerating rocket, or a frame fixed upon objects in a centrifuge), then g-forces and corresponding proper accelerations felt by observers in these coordinate systems are caused by the mechanical forces which resist their weight in such systems. This weight, in turn, is produced by fictitious forces or "inertial forces" which appear in all such accelerated coordinate systems, in a manner somewhat like the weight produced by the "force of gravity" in systems where objects are fixed in space with regard to the gravitating body (as on the surface of the Earth). 
The total (mechanical) force that is calculated to induce the proper acceleration on a mass at rest in a coordinate system that has a proper acceleration, via Newton's law F = ma, is called the proper force. As seen above, the proper force is equal to the opposing reaction force that is measured as an object's "operational weight" (i.e. its weight as measured by a device like a spring scale, in vacuum, in the object's coordinate system). Thus, the proper force on an object is always equal and opposite to its measured weight. Examples When holding onto a carousel that turns at constant angular velocity an observer experiences a radially inward (centripetal) proper-acceleration due to the interaction between the handhold and the observer's hand. This cancels the radially outward geometric acceleration associated with their spinning coordinate frame. This outward acceleration (from the spinning frame's perspective) will become the coordinate acceleration when they let go, causing them to fly off along a zero proper-acceleration (geodesic) path. Unaccelerated observers, of course, in their frame simply see their equal proper and coordinate accelerations vanish when they let go. Similarly, standing on a non-rotating planet (and on earth for practical purposes) observers experience an upward proper-acceleration due to the normal force exerted by the earth on the bottom of their shoes. This cancels the downward geometric acceleration due to the choice of coordinate system (a so-called shell-frame). That downward acceleration becomes a coordinate acceleration if they inadvertently step off a cliff into a zero proper-acceleration (geodesic or rain-frame) trajectory. Geometric accelerations (due to the connection term in the coordinate system's covariant derivative below) act on every gram of our being, while proper-accelerations are usually caused by an external force. Introductory physics courses often treat gravity's downward (geometric) acceleration as due to a mass-proportional force. This, along with diligent avoidance of unaccelerated frames, allows them to treat proper and coordinate acceleration as the same thing. Even then, if an object maintains a constant proper-acceleration from rest over an extended period in flat spacetime, observers in the rest frame will see the object's coordinate acceleration decrease as its coordinate velocity approaches lightspeed. The rate at which the object's proper-velocity goes up, nevertheless, remains constant. Thus the distinction between proper-acceleration and coordinate acceleration allows one to track the experience of accelerated travelers from various non-Newtonian perspectives. These perspectives include those of accelerated coordinate systems (like a carousel), of high speeds (where proper and coordinate times differ), and of curved spacetime (like that associated with gravity on Earth). Classical applications At low speeds in the inertial coordinate systems of Newtonian physics, proper acceleration simply equals the coordinate acceleration a = d²x/dt². As reviewed above, however, it differs from coordinate acceleration if one chooses (against Newton's advice) to describe the world from the perspective of an accelerated coordinate system like a motor vehicle accelerating from rest, or a stone being spun around in a slingshot. If one chooses to recognize that gravity is caused by the curvature of spacetime (see below), proper acceleration differs from coordinate acceleration in a gravitational field.
For example, an object subjected to physical or proper acceleration ao will be seen by observers in a coordinate system undergoing constant acceleration aframe to have coordinate acceleration a′ = ao − aframe. Thus if the object is accelerating with the frame, observers fixed to the frame will see no acceleration at all. Similarly, an object undergoing physical or proper acceleration ao will be seen by observers in a frame rotating with angular velocity ω to have coordinate acceleration a′ = ao − ω × (ω × r) − 2ω × vrot − (dω/dt) × r. In the equation above, there are three geometric acceleration terms on the right-hand side. The first "centrifugal acceleration" term depends only on the radial position and not the velocity of our object, the second "Coriolis acceleration" term depends only on the object's velocity in the rotating frame but not its position, and the third "Euler acceleration" term depends only on position and the rate of change of the frame's angular velocity. In each of these cases, physical or proper acceleration differs from coordinate acceleration because the latter can be affected by your choice of coordinate system as well as by physical forces acting on the object. Those components of coordinate acceleration not caused by physical forces (like direct contact or electrostatic attraction) are often attributed (as in the Newtonian example above) to forces that: (i) act on every gram of the object, (ii) cause mass-independent accelerations, and (iii) don't exist from all points of view. Such geometric (or improper) forces include Coriolis forces, Euler forces, g-forces, centrifugal forces and (as we see below) gravity forces as well. Viewed from a flat spacetime slice Proper-acceleration's relationships to coordinate acceleration in a specified slice of flat spacetime follow from Minkowski's flat-space metric equation (c dτ)² = (c dt)² − (dx)². Here a single reference frame of yardsticks and synchronized clocks defines map position x and map time t respectively, the traveling object's clocks define proper time τ, and the "d" preceding a coordinate means infinitesimal change. These relationships allow one to tackle various problems of "anyspeed engineering", albeit only from the vantage point of an observer whose extended map frame defines simultaneity. Acceleration in (1+1)D In the unidirectional case, i.e. when the object's acceleration is parallel or antiparallel to its velocity in the spacetime slice of the observer, proper acceleration α and coordinate acceleration a are related through the Lorentz factor γ by α = γ³a. Hence the change in proper-velocity w = dx/dτ is the integral of proper acceleration over map-time t, i.e. Δw = αΔt for constant α. At low speeds this reduces to the well-known relation between coordinate velocity and coordinate acceleration times map-time, i.e. Δv = aΔt. For constant unidirectional proper-acceleration, similar relationships exist between rapidity η and elapsed proper time Δτ, as well as between Lorentz factor γ and distance traveled Δx. To be specific: α = Δw/Δt = c Δη/Δτ = c² Δγ/Δx, where the various velocity parameters are related by w = c sinh η = γv, v = c tanh η, and γ = cosh η. These equations describe some consequences of accelerated travel at high speed. For example, imagine a spaceship that can accelerate its passengers at "1 gee" (10 m/s² or about 1.0 light year per year squared) halfway to their destination, and then decelerate them at "1 gee" for the remaining half so as to provide earth-like artificial gravity from point A to point B over the shortest possible time. For a map-distance of ΔxAB, the first equation above predicts a midpoint Lorentz factor (up from its unit rest value) of γmid = 1 + α(ΔxAB/2)/c².
Hence the round-trip time on traveler clocks will be Δτ = 4(c/α) cosh⁻¹(γmid), during which the time elapsed on map clocks will be Δt = 4(c/α) sinh[cosh⁻¹(γmid)]. This imagined spaceship could offer round trips to Proxima Centauri lasting about 7.1 traveler years (~12 years on Earth clocks), round trips to the Milky Way's central black hole of about 40 years (~54,000 years elapsed on earth clocks), and round trips to Andromeda Galaxy lasting around 57 years (over 5 million years on Earth clocks); these figures are checked numerically in the short sketch below. Unfortunately, sustaining 1-gee acceleration for years is easier said than done, as illustrated by the maximum payload to launch mass ratios shown in the figure at right. In curved spacetime In the language of general relativity, the components of an object's acceleration four-vector A (whose magnitude is proper acceleration) are related to elements of the four-velocity via a covariant derivative D with respect to proper time τ: A^λ = DU^λ/dτ = dU^λ/dτ + Γ^λ_μν U^μ U^ν. Here U is the object's four-velocity, and Γ represents the coordinate system's 64 connection coefficients or Christoffel symbols. Note that the Greek subscripts take on four possible values, namely 0 for the time-axis and 1–3 for spatial coordinate axes, and that repeated indices are used to indicate summation over all values of that index. Trajectories with zero proper acceleration are referred to as geodesics. The left hand side of this set of four equations (one each for the time-like and three spacelike values of index λ) is the object's proper-acceleration 3-vector combined with a null time component as seen from the vantage point of a reference or book-keeper coordinate system in which the object is at rest. The first term on the right hand side lists the rate at which the time-like (energy/mc) and space-like (momentum/m) components of the object's four-velocity U change, per unit time τ on traveler clocks. Let's solve for that first term on the right since at low speeds its spacelike components represent the coordinate acceleration. More generally, when that first term goes to zero the object's coordinate acceleration goes to zero. This yields dU^λ/dτ = A^λ − Γ^λ_μν U^μ U^ν. Thus, as exemplified with the first two animations above, coordinate acceleration goes to zero whenever proper-acceleration is exactly canceled by the connection (or geometric acceleration) term on the far right. Caution: This term may be a sum of as many as sixteen separate velocity and position dependent terms, since the repeated indices μ and ν are by convention summed over all pairs of their four allowed values. Force and equivalence The above equation also offers some perspective on forces and the equivalence principle. Consider local book-keeper coordinates for the metric (e.g. a local Lorentz tetrad like that which global positioning systems provide information on) to describe time in seconds, and space in distance units along perpendicular axes. If we multiply the above equation by the traveling object's rest mass m, and divide by Lorentz factor γ = dt/dτ, the spacelike components express the rate of momentum change for that object from the perspective of the coordinates used to describe the metric. This in turn can be broken down into parts due to proper and geometric components of acceleration and force. If we further multiply the time-like component by lightspeed c, and define coordinate velocity as v = dx/dt, we get an expression for rate of energy change as well: dE/dt = m(ao + ag)·v (timelike) and dp/dt = m(ao + ag) (spacelike). Here ao is an acceleration due to proper forces and ag is, by default, a geometric acceleration that we see applied to the object because of our coordinate system choice.
At low speeds these accelerations combine to generate a coordinate acceleration like a ≈ ao + ag, while for unidirectional motion at any speed ao's magnitude is that of proper acceleration α as in the section above where α = γ³a when ag is zero. In general expressing these accelerations and forces can be complicated. Nonetheless, if we use this breakdown to describe the connection coefficient (Γ) term above in terms of geometric forces, then the motion of objects from the point of view of any coordinate system (at least at low speeds) can be seen as locally Newtonian. This is already common practice e.g. with centrifugal force and gravity. Thus the equivalence principle extends the local usefulness of Newton's laws to accelerated coordinate systems and beyond. Surface dwellers on a planet For low speed observers being held at fixed radius from the center of a spherical planet or star, coordinate acceleration ashell is approximately related to proper acceleration ao by ao = ashell/√(1 − rs/r), where the planet or star's Schwarzschild radius rs = 2GM/c². As our shell observer's radius approaches the Schwarzschild radius, the proper acceleration ao needed to keep it from falling in becomes intolerable. On the other hand, for r ≫ rs, an upward proper force of only GMm/r² is needed to prevent one from accelerating downward. At the Earth's surface this becomes Fo = mg r̂, where g is the downward 9.8 m/s² acceleration due to gravity, and r̂ is a unit vector in the radially outward direction from the center of the gravitating body. Thus here an outward proper force of mg is needed to keep one from accelerating downward. Four-vector derivations The spacetime equations of this section allow one to address all deviations between proper and coordinate acceleration in a single calculation. For example, let's calculate the Christoffel symbols for the far-coordinate Schwarzschild metric (c dτ)² = (1 − rs/r)(c dt)² − (1 − rs/r)⁻¹dr² − r²dθ² − (r sin θ)²dφ², where rs is the Schwarzschild radius 2GM/c². The resulting array of coefficients can then be written out. From this you can obtain the shell-frame proper acceleration by setting coordinate acceleration to zero and thus requiring that proper acceleration cancel the geometric acceleration of a stationary object. This does not solve the problem yet, since Schwarzschild coordinates in curved spacetime are book-keeper coordinates but not those of a local observer. The magnitude of the above proper acceleration 4-vector, namely α = (GM/r²)/√(1 − rs/r), is however precisely what we want i.e. the upward frame-invariant proper acceleration needed to counteract the downward geometric acceleration felt by dwellers on the surface of a planet. A special case of the above Christoffel symbol set is the flat-space spherical coordinate set obtained by setting rs or M above to zero. From this we can obtain, for example, the centripetal proper acceleration needed to cancel the centrifugal geometric acceleration of an object moving at constant angular velocity ω at the equator where θ = π/2. Forming the same 4-vector sum as above for the case of dθ/dτ and dr/dτ zero yields nothing more than the classical acceleration for rotational motion given above. Coriolis effects also reside in these connection coefficients, and similarly arise from coordinate-frame geometry alone.
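The display equations in the "Four-vector derivations" paragraph above did not survive extraction. As a hedged reconstruction of the standard textbook calculation it describes (not necessarily the exact expressions used in the original), only one Christoffel symbol matters for a hovering (static) observer with coordinate time t, and the steps run, in LaTeX:
\Gamma^{r}{}_{tt} = \frac{GM}{r^{2}}\left(1-\frac{r_{s}}{r}\right), \qquad U^{t} = \frac{dt}{d\tau} = \frac{1}{\sqrt{1-r_{s}/r}},
A^{r} = \Gamma^{r}{}_{tt}\,(U^{t})^{2} = \frac{GM}{r^{2}}, \qquad \alpha = \frac{A^{r}}{\sqrt{1-r_{s}/r}} = \frac{GM/r^{2}}{\sqrt{1-r_{s}/r}} .
The last expression reduces to the Newtonian GM/r² for r ≫ rs and diverges as r approaches rs, matching the statements about surface dwellers above.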
See also Acceleration: change in velocity Proper velocity: momentum per mass in special relativity; composed of the spacelike components of the 4-velocity Proper reference frame (flat spacetime): accelerated reference frame in special relativity (Minkowski space) Fictitious force: one name for mass times geometric acceleration Four-vector: making the connection between space and time explicit Kinematics: for studying ways that position changes with time Uniform acceleration: holding coordinate acceleration fixed Footnotes External links Excerpts from the first edition of Spacetime Physics, and other resources posted by Edwin F. Taylor James Hartle's gravity book page including Mathematica programs to calculate Christoffel symbols. Andrew Hamilton's notes and programs for working with local tetrads at U. Colorado, Boulder. Minkowski spacetime Acceleration
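As a numerical check of the round-trip formulas quoted in the "Acceleration in (1+1)D" section above, the short Python sketch below reproduces the quoted trip times. The value 1.03 ly/yr² for "1 gee" and the three one-way distances are assumptions chosen for illustration, since the article itself gives only approximate figures:

import math

alpha = 1.03  # "1 gee" in light-years per year squared, with c = 1 light-year per year (assumed value)
one_way_ly = {"Proxima Centauri": 4.24, "galactic center": 27000.0, "Andromeda Galaxy": 2.54e6}

for name, dist in one_way_ly.items():
    gamma_mid = 1.0 + alpha * (dist / 2.0)   # midpoint Lorentz factor, gamma_mid = 1 + alpha*(dx/2)/c^2
    eta = math.acosh(gamma_mid)              # rapidity reached at the midpoint
    tau = 4.0 * eta / alpha                  # round-trip proper (traveler) time in years
    t = 4.0 * math.sinh(eta) / alpha         # round-trip map (Earth) time in years
    print(f"{name}: {tau:.1f} traveler years, {t:.3g} Earth years")

# Expected output is roughly 7.1 / 11.7 years, 40 / 54,000 years and 57 / 5.1 million years,
# consistent with the figures quoted in the article.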
Proper acceleration
[ "Physics", "Mathematics" ]
3,670
[ "Wikipedia categories named after physical quantities", "Quantity", "Physical quantities", "Acceleration" ]
8,688,139
https://en.wikipedia.org/wiki/Electronic%20circuit%20design
Electronic circuit design comprises the analysis and synthesis of electronic circuits. Methods To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Linear circuits, that is, circuits wherein the outputs are linearly dependent on the inputs, can be analyzed by hand using complex analysis. Simple nonlinear circuits can also be analyzed in this way. Specialized software has been created to analyze circuits that are either too complicated or too nonlinear to analyze by hand. Circuit simulation software allows engineers to design circuits more efficiently, reducing the time cost and risk of error involved in building circuit prototypes. Some of these tools make use of hardware description languages such as VHDL or Verilog. Network simulation software More complex circuits are analyzed with circuit simulation software such as SPICE and EMTP. Linearization around operating point When faced with a new circuit, the software first tries to find a steady state solution wherein all the nodes conform to Kirchhoff's Current Law and the voltages across and currents through each element of the circuit conform to the voltage/current equations governing that element. Once the steady state solution is found, the software can analyze the response to perturbations using piecewise approximation, harmonic balance or other methods. Piece-wise linear approximation Software such as the PLECS interface to Simulink uses piecewise linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time. Synthesis Simple circuits may be designed by connecting a number of elements or functional blocks such as integrated circuits. More complex digital circuits are typically designed with the aid of computer software. Logic circuits (and sometimes mixed-mode circuits) are often described in hardware description languages such as VHDL or Verilog, then synthesized using a logic synthesis engine. References Electronic design Design
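To make the "predict the voltages and currents" step concrete, here is a minimal sketch of linear (nodal) circuit analysis in Python. The three-resistor divider and its component values are hypothetical, chosen only to illustrate the method rather than any particular tool named above:

import numpy as np

# Hypothetical circuit: 10 V source -> R1 -> node 1 -> R2 -> node 2 -> R3 -> ground.
Vs, R1, R2, R3 = 10.0, 1e3, 2e3, 3e3

# Kirchhoff's current law at nodes 1 and 2 gives a linear system G @ v = i.
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2, 1/R2 + 1/R3]])
i = np.array([Vs / R1, 0.0])        # current injected into node 1 through R1

v = np.linalg.solve(G, i)           # node voltages, approximately [8.33, 5.0] volts
series_current = (Vs - v[0]) / R1   # about 1.67 mA flowing through the whole string
print(v, series_current)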
Electronic circuit design
[ "Engineering" ]
428
[ "Electronic design", "Electronic engineering", "Electronic circuits", "Design" ]
8,688,638
https://en.wikipedia.org/wiki/Washington%20Award
The Washington Award is an American engineering award. Since 1916 it has been given annually for "accomplishments which promote the happiness, comfort, and well-being of humanity". It is awarded jointly by the following engineering societies: American Institute of Mining, Metallurgical, and Petroleum Engineers, American Nuclear Society, American Society of Civil Engineers, American Society of Mechanical Engineers, Institute of Electrical and Electronics Engineers, National Society of Professional Engineers, and Western Society of Engineers (which administers the award). Honorees Source: The Washington Award Herbert C. Hoover, 1919 Robert W. Hunt, 1922 Arthur N. Talbot, 1924 Jonas Waldo Smith, 1925 John Watson Alvord, 1926 Orville Wright, 1927 Michael Idvorsky Pupin, 1928 Bion Joseph Arnold, 1929 Mortimer Elwyn Cooley, 1930 Ralph Modjeski, 1931 William David Coolidge, 1932 Ambrose Swasey, 1935 Charles Franklin Kettering, 1936 Frederick Gardner Cottrell, 1937 Frank Baldwin Jewett, 1938 Daniel Webster Mead, 1939 Daniel Cowan Jackling, 1940 Ralph Budd, 1941 William Lamont Abbott, 1942 Andrey Abraham Potter, 1943 Henry Ford, 1944 Arthur Holly Compton, 1945 Vannevar Bush, 1946 Karl Taylor Compton, 1947 Ralph Edward Flanders, 1948 John Lucian Savage, 1949 Wilfred Sykes, 1950 Edwin Howard Armstrong, 1951 Henry Townley Heald, 1952 Gustav Egloff, 1953 Lillian Moller Gilbreth, 1954 Charles Erwin Wilson, 1955 Robert E. Wilson, 1956 Walker Lee Cisler, 1957 Ben Moreell, 1958 James R. Killian, Jr., 1959 Herbert Payne Sedwick, 1960 William V. Kahler, 1961 Alexander C. Monteith, 1962 Philip Sporn, 1963 John Slezak, 1964 Glenn Theodore Seaborg, 1965 Augustus Braun Kinzel, 1966 Frederick Lawson Hovde, 1967 James B. Fisk, 1968 Nathan M. Newmark, 1969 H.G. Rickover, 1970 William L. Everitt, 1971 Thomas Otten Paine, 1972 John A. Volpe, 1973 John D. deButts, 1974 David Packard, 1975 Ralph B. Peck, 1976 Michael Tenenbaum, 1977 Dixy Lee Ray, 1978 Marvin Camras, 1979 Neil Armstrong, 1980 John E. Swearingen, 1981 Manson Benedict, 1982 John Bardeen, 1983 Robert W. Galvin, 1984 Stephen D. Bechtel, 1985 Mark Shepherd Jr., 1986 Grace Murray Hopper, 1987 James McDonald, 1988 Sherwood L. Fawcett, 1989 John H. Sununu, 1990 Frank Borman, 1991 Leon M. Lederman, 1992 William States Lee, 1993 Kenneth H. Olson, 1994 George W. Housner, 1995 Wilson Greatbatch, 1996 Frank Kreith, 1997 John R. Conrad, 1998 Jack S. Kilby, 1999 Donna Lee Shirley, 2000 Dan Bricklin, 2001 Bob Frankston, 2001 Richard J. Robbins, 2002 Eugene Cernan, 2003 Nick Holonyak, 2004 Robert S. Langer, 2005 Henry Petroski, 2006 Michael J. Birck, 2007 Dean Kamen, 2008 Clyde N. Baker, Jr., 2009 Alvy Ray Smith, 2010 Martin C. Jischke, 2011 Martin Cooper, 2012 Kristina M. Johnson, 2013 Bill Nye, 2014 Bernard Amadei, 2015 Aprille Joy Ericsson, 2016 Chuck Hull, 2017 Ivan Sutherland, 2018 Margaret Hamilton, 2019 Richard A. Berger, 2020 John B. Goodenough, 2021 John A. Rogers, 2022 Gwynne Shotwell, 2023 Robert Kahn & Vint Cerf, 2024 See also List of engineering awards References External links Engineering awards Awards established in 1916 American science and technology awards 1916 establishments in the United States
Washington Award
[ "Technology" ]
758
[ "Science and technology awards", "Engineering awards" ]
8,688,642
https://en.wikipedia.org/wiki/XtremPC
XtremPC was a computer magazine from Romania founded in 1998. XtremPC included previews and reviews on computer hardware, software, PC games and gadgets, as well as IT news. Although its major focus was on personal computers only, latter editions started including sections dedicated to game consoles as well. XtremPC was the first Romanian magazine to include a DVD in 2004, followed two years later by LeveL. The last issue of XtremPC was the May 2010 issue (No. 120), which appeared on 3 June 2010. The further issuing of the magazine temporarily ended as a result of a drop in the number of readers. Format XtremPC included four main sections: IT Express – news and articles regarding the latest innovations in the IT world; Hardware – news, previews, reviews, tests and comparison charts of computer hardware; Software & Communication – news, reviews and tests on computer software and communication and multimedia devices; Jocuri (Games) – news and reviews on PC games; later included console games as well. Editions The latter issues of the magazine were available in three editions based on the type of digital media that they included: XtremPC (key-coloured in green) – included the magazine only, priced at 5.9 lei (approx. US$2) XtremPC CD (key-coloured in orange) – included the magazine as well as a Compact Disc, priced at 7.9 lei (approx. US$2.6) XtremPC DVD (key-coloured in blue) – included the magazine as well as a DVD, priced at 12.9 lei (approx. US$4.2) Currently, all three editions of XtremPC are out of print. The website has been shut down, but the forum is still active. There is a fan site that holds the PDF versions of the magazine. References External links Revista XtremPC se inchide – 2 Mai site-ul revistei xtrempc se inchide – 1 Iulie XtremPC se inchide, raman cu Itfiles La revedere XtremPC! Defunct computer magazines Defunct magazines published in Romania Magazines established in 1998 Magazines disestablished in 2010 Science and technology in Romania 1998 establishments in Romania 2010 disestablishments in Romania Romanian-language magazines
XtremPC
[ "Technology" ]
482
[ "Computing stubs", "Computer magazine stubs" ]
8,688,742
https://en.wikipedia.org/wiki/Daniel%20Guggenheim%20Medal
The Daniel Guggenheim Medal is an American engineering award, established by Daniel and Harry Guggenheim. The medal is considered to be one of the greatest honors that can be presented for a lifetime of work in aeronautics. Its first recipient was Orville Wright. Other recipients have included American and international individuals from aeronautical corporations, governments, and academia. Since 1929 it has been given annually to persons who make notable achievements in the advancement of aeronautics. It is awarded jointly by the American Society of Mechanical Engineers, the Society of Automotive Engineers, the American Helicopter Society, and the American Institute of Aeronautics and Astronautics. The American Institute of Aeronautics and Astronautics administers the award. Physical Description Obverse: Spirit of St. Louis, a hot air balloon, and the nose of an airship over a sunburst and clouds depicted in relief; raised text on outer ring surrounding relief. Reverse: Three stylized bird wings surrounding raised letters and inscribed text. Dimensions (diameter × depth): 6.4 × 0.5 cm (2 1/2 × 3/16 in.) Recipients The winners are listed below along with their award citation and year. See also List of aviation awards List of engineering awards Wright Brothers Medal Wright Brothers Memorial Trophy References External links The Daniel Guggenheim Medal - aiaa.org/guggenheim Aviation awards Aerospace engineering awards Awards established in 1929 1929 establishments in the United States
Daniel Guggenheim Medal
[ "Engineering" ]
270
[ "Aerospace engineering awards", "Aerospace engineering" ]
8,688,901
https://en.wikipedia.org/wiki/Niels%20Bohr%20International%20Gold%20Medal
The Niels Bohr International Gold Medal is an international engineering award. It has been awarded since 1955 for "outstanding work by an engineer or physicist for the peaceful utilization of atomic energy". The medal is administered by the Danish Society of Engineers (Denmark) in collaboration with the Niels Bohr Institute and the Royal Danish Academy of Sciences. It was awarded 10 times between 1955 and 1982 and again in 2013. The first recipient was Niels Bohr himself who received the medal in connection with his 70th birthday. 2013 laureate Alain Aspect, regarded as an outstanding figure in optical and atomic physics, was awarded the medal for his experiments on the Bell's inequalities test. It was presented on 7 October 2013 by Queen Margrethe and Prince Henrik at a special event at the Honorary Residence in the Carlsberg Academy. Recipients The following scientists have been awarded the Niels Bohr Medal: Niels Bohr, 1955 John Cockcroft, 1958 George de Hevesy, 1961 Pyotr Kapitsa, 1965 Isidor Isaac Rabi, 1967 Werner Karl Heisenberg, 1970 Richard P. Feynman, 1973 Hans A. Bethe, 1976 Charles H. Townes, 1979 John Archibald Wheeler, 1982 Alain Aspect, 2013 Jens Nørskov, 2018 Ewine van Dishoeck, 2022 See also UNESCO Niels Bohr Medal List of engineering awards List of physics awards References Awards established in 1955 Danish science and technology awards Physics awards 1955 establishments in Denmark
Niels Bohr International Gold Medal
[ "Technology" ]
301
[ "Science and technology awards", "Physics awards" ]
8,688,920
https://en.wikipedia.org/wiki/Joan%20Hodges%20Queneau%20Medal
The Joan Hodges Queneau Medal is an American engineering award for the field of environmental conservation. It has been given annually since 1976 for an "outstanding contribution by an engineer in behalf of environmental conservation". The award is administered by the National Audubon Society, and made jointly with the American Association of Engineering Societies. The award includes a citation, the "Palladium Medal", and a bronze statue. Award recipients 1977 - H. Beecher Charmbury 1983 - Roy W. Hann, Jr. 1984 - Barbara-Ann Gamboa Lewis 1985 - William A. Jester 1986 - Kenneth R. Daniel 1987 - Thomas K. MacVicar 1988 - Barney L. Capehart 1989 - James L. Baker 1990 - Joseph T. Ling 1991 - M. Kent Loftin 1992 - Hsieh Wen Shen 1994 - Luna Leopold 1995 - Robert Williams 1996 - Jared Leigh Cohon 2002 - William Carrol 2003 - James W. Poirot 2004 - Donald Van Norman Roberts 2005 - George G. Wicks 2008 - Albert A. Grant 2010 - Clifford W. Randall 2011 - Raymond A. Ferrara 2012 - Rao Y. Surampalli 2013 - Perry L. McCarty 2014 - Bruce E. Rittmann 2015 - Diran Apelian 2016 - Wendi Goldsmith 2017 - Jessica E. Kogel 2018 - D. Yogi Goswami See also List of engineering awards List of environmental awards References AAES Queneau Medal past recipients Engineering awards Environmental awards Awards established in 1976
Joan Hodges Queneau Medal
[ "Technology" ]
304
[ "Science and technology awards", "Engineering awards" ]
8,688,939
https://en.wikipedia.org/wiki/Max%20Jakob%20Memorial%20Award
The Max Jakob Memorial Award recognizes an 'eminent scholarly achievement and distinguished leadership' in the field of heat transfer. Awarded annually to a scholar by the American Society of Mechanical Engineers (ASME) and the American Institute of Chemical Engineers (AIChE), it is the highest honor in the field of heat transfer these professional organizations bestow. The award was established in 1961 by the American Society of Mechanical Engineers Heat Transfer Division in honor of Max Jakob, a pioneer in the science of heat transfer, commemorating his influential contributions as a research worker, educator, and author. In 1962, the AIChE joined the ASME in presenting the award. It is administered through the Max Jakob Memorial Award Committee, a board composed of three members from each of the two major professional organizations, as well as the Past Chair of the committee. The award is presented annually, without regard to society affiliation or nationality. It consists of a bronze plaque, an engraved certificate, an honorarium, and travel expenses to accept the award. Each year the recipient also presents the Max Jakob Award Lecture as part of the annual American Society of Mechanical Engineers National Heat Transfer Conference. Recipients 2024 Walter Grassi, Italy 2022 G.P. “Bud” Peterson, United States 2021 Michael Modest, United States 2020 Peter Wayner Jr., United States 2019 Arun Majumdar, United States 2018 John W. Rose, United Kingdom 2016 Je-Chin Han, United States 2014 P. S. Ayyaswamy, United States 2013 Kenneth R. Diller, United States 2012 Wataru Nakayama, Japan 2011 Dimos Poulikakos, Switzerland 2010 Amir Faghri, United States 2009 Ivan Catton, United States 2008 Suhas Patankar, United States 2007 Wen-Jei Yang, United States 2006 Kwang-Tzu Yang, United States 2005 Ping Cheng, China 2004 Vijay K. Dhir, United States 2003 Kenneth J. Bell, United States 2002 Yogesh Jaluria, United States 2001 John C. Chen, United States 2000 Vedat Arpaci, United States 1999 Adrian Bejan, United States 1998 Alexander I. Leontiev, Russia 1997 John R. Howell, United States 1996 Robert Siegel, United States 1995 Arthur E. Bergles, United States 1994 Geoffrey F. Hewitt, United Kingdom 1993 Benjamin Gebhart, United States 1992 William M. Kays, United States 1991 Franz X. Mayinger, Germany 1990 Richard J. Goldstein, United States 1989 James P. Hartnett, United States 1988 Yasuo Mori, Japan 1987 S. George Bankoff, United States 1986 Raymond Viskanta, United States 1985 Frank Kreith, United States 1984 Alexander Louis London, United States 1983 Bei Tse Chao, United States 1982 Simon Ostrach, United States 1981 Chang-Lin Tien, United States 1980 Ralph A. Seban, United States 1979 Stuart W. Churchill, United States 1978 Niichi Nishiwaki, Japan 1977 D. Brian Spalding, United Kingdom 1976 Ephraim M. Sparrow, United States 1975 Robert G. Deissler, United States 1974 Peter Grassmann, Switzerland 1973 Ulrich Grigull, Germany 1972 Karl A. Gardner, United States 1971 James W. Westwater, United States 1970 Warren M. Rohsenow, United States 1969 Samson Kutateladze, U.S.S.R. 1968 Shiro Nukiyama, Japan 1967 Thomas B. Drew, United States 1966 Sir Owen Saunders, United Kingdom 1965 Hoyt C. Hottel, United States 1964 Ernst Schmidt, Germany 1963 William H. McAdams, United States 1962 Llewellyn M.K. Boelter, United States 1961 Ernst R. G.
Eckert, United States Notes See also List of engineering awards List of mechanical engineering awards References External links "Max Jakob Memorial Award - ASME Heat Transfer Division (HTD)" "Max Jakob Memorial Award Charter" "American Society of Mechanical Engineers" "American Institute of Chemical Engineers" Awards of the American Society of Mechanical Engineers Awards of the American Institute of Chemical Engineers
Max Jakob Memorial Award
[ "Chemistry" ]
826
[ "Awards of the American Institute of Chemical Engineers", "Chemical engineering awards" ]
8,688,979
https://en.wikipedia.org/wiki/Percy%20Nicholls%20Award
The Percy Nicholls Award is an American engineering prize. It has been given annually since 1942 for "notable scientific or industrial achievement in the field of solid fuels". The prize is given jointly by the American Institute of Mining, Metallurgical, and Petroleum Engineers and American Society of Mechanical Engineers. Recipients of this Prize 2023 - David G. Osborne 2022 - Michael A. Karmis 2021 - Not given 2019 - Not given 2018 - Not given 2017 - Not given 2016 - Not given 2015 - Yoginder Paul Chugh 2014 - Yiannis Levendis 2013 - Barbara J. Arnold 2012 - Not given 2011 - Sukumar Bandopadhyay 2010 - Ashwani K. Gupta 2009 - William Beck 2008 - George A. Richards 2007 - Peter J. Bethell 2006 - John L. Marion 2005 - Gerald H. Luttrell 2004 - Dr. Hisashi (Sho) Kobayashi 2003 - J. Brett Harvey 2002 - L. Douglas Smoot 2001 - Robert E. Murray 2000 - Klaus R. G. Hein 1999 - Peter T. Luckie 1998 - Not given 1997 - Frank F. Aplan 1996 - Adel F. Sarofim 1995 - Joseph W. Leonard, III 1994 - Robert H. Essenhigh 1993 - Robert L. Frantz 1992 - Richard W. Borio 1991 - Raja V. Ramani 1990 - Richard W. Bryers 1989 - Albert W. Duerbrouck 1988 - János M. Beér 1987 - Leonard G. Austin 1986 - Gordon H. Gronhovd 1985 - David A. Zegeer 1984 - George K. Lee 1983 - E. Minor Pace 1982 - James R. Jones 1981 - Jack A. Simon 1980 - George W. Land 1979 - William N. Poundstone 1978 - Albert F. Duzy 1977 - H. Beecher Charmbury 1976 - Richard B. Engdahl 1975 - Not given 1974 - George P. Cooper 1973 - Samuel M. Cassidy 1972 - Charles H. Sawyer 1971 - George E. Keller 1970 - Richard C. Corey 1969 - David R. Mitchell 1968 - W. T. Reid 1967 - Martin A. Elliott 1966 - C. T. Holland 1965 - L. F. Deming 1964 - Carroll F. Hardy 1963 - James R. Garvey 1962 - Charles E. Lawall 1961 - Otto de Lorenzi 1960 - Carl E. Lesher 1959 - Homer H. Lowry 1958 - Willibald Trinks 1957 - John Blizzard 1956 - Chester A. Reed 1955 - Ralph Hardgrove 1954 - John F. Barkley 1953 - Henry F. Hebley 1952 - Harry F. Yancey 1951 - Albert R. Humford 1950 - Julian E. Tobey 1949 - Lawrence A. Shipman 1948 - Ralph A. Sherman 1947 - Howard N. Eavenson 1946 - Arno C. Fieldner 1945 - Thomas A. Marsh 1944 - James B. Morrow 1943 - Henry Kreisinger 1942 - Ervin G. Bailey See also List of engineering awards List of mechanical engineering awards References Percy Nicholls Award Notes Awards of the American Society of Mechanical Engineers Awards of the American Institute of Mining, Metallurgical, and Petroleum Engineers Combustion engineering awards Awards established in 1942 1942 establishments in the United States
Percy Nicholls Award
[ "Chemistry", "Technology" ]
657
[ "Awards of the American Institute of Mining", " and Petroleum Engineers", "Combustion", "Science award stubs", "Combustion engineering awards", "Science and technology awards", "American Institute of Mining", " Metallurgical" ]
8,689,011
https://en.wikipedia.org/wiki/Elmer%20A.%20Sperry%20Award
The Elmer A. Sperry Award, named after the inventor and entrepreneur, is an American transportation engineering prize. It has been given since 1955 for "a distinguished engineering contribution which, through application, proved in actual service, has advanced the art of transportation whether by land, sea, air, or space." The prize is given jointly by the American Institute of Aeronautics and Astronautics, Institute of Electrical and Electronics Engineers, Society of Automotive Engineers, Society of Naval Architects and Marine Engineers, American Society of Civil Engineers, and the American Society of Mechanical Engineers (which administers it). The purpose of the award is to encourage progress in the engineering of transportation. Recipients Source: Elmer A. Sperry award 1955 William Francis Gibbs, for the development of the SS United States 1956 Donald W. Douglas, for the DC series of air transport planes 1957 Harold L. Hamilton, Richard M. Dilworth and Eugene W. Kettering, for developing the diesel-electric locomotive 1958 Ferdinand Porsche (in memoriam) and Heinz Nordhoff, for development of the Volkswagen automobile 1959 Sir Geoffrey De Havilland, Major Frank Halford (in memoriam) and Charles C. Walker, for the first jet-powered passenger aircraft and engines 1960 Frederick Darcy Braddon, Sperry Gyroscope Company, for the three-axis gyroscopic navigational reference 1961 Robert Gilmore LeTourneau, Firestone Tire and Rubber Company, for large capacity earth moving equipment and giant size tires 1962 Lloyd J. Hibbard, for applying the ignitron rectifier to railroad motive power 1963 Earl A. Thompson, for design and development of the first successful automatic automobile transmission 1964 Igor Sikorsky and Michael E. Gluhareff, Sikorsky Aircraft Division, United Aircraft Corporation, for developing the high-lift helicopter leading to the Skycrane 1965 Maynard Pennell, Richard L. Rouzie, John E. Steiner, William H. Cook and Richard L. Loesch, Jr., Commercial Airplane Division, Boeing, for the design and manufacture of the family of jet transports, including the 707, 720 and 727 1966 Hideo Shima, Matsutaro Fuji and Shigenari Oishi, Japanese National Railways, for developing the New Tokaido Line 1967 Edward R. Dye (in memoriam), Hugh DeHaven and Robert A. Wolf, Cornell Aeronautical Laboratory, for their contribution to automotive safety 1968 Christopher Cockerell and Richard Stanton-Jones, for the development of commercially useful hovercraft. 1969 Douglas C. MacMillan, M. Nielsen and Edward L. Teale, Jr. for the design and construction of the NS Savannah 1970 Charles Stark Draper of the Massachusetts Institute of Technology Instrumentation Laboratories, for the successful application of inertial guidance systems to commercial air navigation. 1971 Sedgwick N. Wight (in memoriam) and George W. Baughman, for development of Centralized Traffic Control on railways 1972 Leonard S. Hobbs and Perry W. Pratt of Pratt & Whitney, for the design and development of the Pratt & Whitney JT3 turbojet engine 1973–74 No award 1975 Jerome L. Goldman, Frank A. Nemec and James J. Henry, Friede and Goldman, Inc. and Alfred W. Schwendtner, for the design and development of barge carrying cargo vessels 1977 Clifford L. Eastburg and Harley J. Urbach, Railroad Engineering Department of the Timken Company, for the development of tapered roller bearings for railroad and industrial use 1978 Robert Puiseux, Michelin for the development of the radial tire. 1979 Leslie J. 
Clark, for his contributions to the conceptualization and initial development of the sea transport of liquefied natural gas 1980 William M. Allen, Malcolm T. Stamper, Joseph F. Sutter and Everette L. Webb, Boeing, for the introduction of widebody commercial jet aircraft 1981 Edward J. Wasp, for his development of long distance pipeline slurry transport of coal and other finely divided solid materials. 1982 Jörg Brenneisen, Ehrhard Futterlieb, Joachim Körber, Edmund Müller, G. Reiner Nill, Manfred Schulz, Herbert Stemmler and Werner Teich, for their development of solid state adjustable frequency induction motor transmission for diesel and electric motor locomotives 1983 Sir George Edwards; General Henri Ziegler; Sir Stanley Hooker, (in memoriam); Sir Archibald Russell; and André Turcat; commemorating their outstanding international contributions to the successful introduction of commercial supersonic aircraft such as Concorde 1984 Frederick Aronowitz, Joseph E. Killpatrick, Warren M. Macek and Theodore J. Podgorski, for the development of a ring laser gyroscopic system incorporated in a new series of commercial jetliners 1985 Richard K. Quinn, Carlton E. Tripp and George H. Plude for numerous innovative design concepts and an unusual method of construction of the first 1,000 foot self-unloading Great Lakes vessel, the M/V Stewart J. Cort 1986 George W. Jeffs, Dr. William R. Lucas, Dr. George E. Mueller, George F. Page, Robert F. Thompson and John F. Yardley, for their contributions to the concept and achievement of a reusable Space Transportation System 1987 Harry R. Wetenkamp, for his contributions toward the development of curved plate railroad wheel designs 1988 John Alvin Pierce, for his work on the OMEGA Navigation System 1989 Harold E. Froehlich, Charles B. Momsen, Jr., and Allyn C. Vine, for their development of the deep-diving submarine, DSV Alvin 1990 Claud M. Davis, Richard B. Hanrahan, John F. Keeley, and James H. Mollenauer, for their development of the Federal Aviation Administration enroute air traffic control system 1991 Malcom Purcell McLean, for his work on intermodal containerization 1992 Daniel K. Ludwig (in memoriam) for the development of the modern supertanker 1993 Heinz Leiber, WolfDieter Jonner and Hans Jürgen Gerstenmeier, Robert Bosch GmbH for the development of the Anti-lock braking system in motor vehicles 1994 Russell G. Altherr, for the development of a slackfree connector for articulated railroad freight cars 1995 No award 1996 Thomas G. Butler (in memoriam) and Richard H. MacNeal(in memoriam), for the development NASA Structural Analysis (NASTRAN) as a working tool for finite element computation 1997 No award 1998 Bradford Parkinson, for the development of the Global Positioning System (GPS) for the precise navigation of transportation vehicles 1999 No award 2000 The staff of SNCF and Alstom between 1965 and 1981 who created the initial TGV High Speed Rail System 2001 No award 2002 Raymond Pearlson, for the development of a new system for lifting ships out of the water for repair 2003 No award 2004 Josef Becker, for the development of the Rudderpropeller, a combined propulsion and steering system 2005 Victor Wouk, for his development of gasoline engine-electric motor hybrid-drive systems for automobiles and his achievements in small, lightweight electric power supplies and batteries technology 2006 Antony Jameson, for his computational fluid dynamics in aircraft design. 2007 Robert F. Cook, Peter T. Mahal, Pam L. Phillips, and James C. 
White, for their work in developing Engineered Materials Arresting Systems (EMAS) for airport runway safety areas. 2008 Thomas P. Stafford, Glynn S. Lunney, Aleksei A. Leonov, Konstantin D. Bushuyev, for their work on the Apollo-Soyuz mission and the Apollo-Soyuz docking interface design 2009 Boris Popov, for the development of the ballistic parachute system allowing the safe descent of disabled aircraft 2010 Takuma Yamaguchi, for his invention of the ARTICOUPLE to allow an articulated tug and barge (AT/B) waterborne transportation system 2012 Zigmund Bluvband, President, ALD Group and Herbert Hecht, Chief Engineer, SoHaR Incorporated 2013 C. Donald Bateman, for his development of Honeywell’s Ground Proximity Warning System (GPWS) 2014 Alden J. "Doc" Laborde, Bruce G. Collipp and Alan C. McClure, for their technological developments in offshore oil and gas exploration and production in deep waters 2015 Michael Sinnet and the Boeing 787-8 development team, for their work on the Boeing 787-8 2016 Harri Kulovaara, for introducing developments to enhance the efficiency, safety and environmental performance of cruise ships 2017 Bruno Murari, in recognition of his engineering achievements at STMicroelectronics. 2018 Panama Canal Authority, for planning and successfully managing a program to undertake and complete a massive infrastructure project, the “Expansion of the Panama Canal.” 2019 George A. (Sandy) Thomson, in recognition of leading the innovation for water-lubricated polymer propeller shaft bearings for marine transport thereby eliminating the requirement for oil lubrication. 2020 To Dominique Roddier, Christian Cermelli, and Alexia Aubault for the development of WindFloat, a floating foundation for offshore wind turbines. 2021 To Michimasa Fujino in recognition of his singular achievement of research and development of new technologies for business aviation including the Over-the-Wing Engine Mount and Natural Laminar Flow airfoil, and the introduction to the market of commercial aircraft based on these technologies through the formation of HondaJet. 2022 To Asad Madni for his work in the development of the first solid-state gyroscope and its subsequent integration into a complete automotive inertial measurement unit integrated circuit for stability control See also List of engineering awards List of mechanical engineering awards List of awards named after people References Elmer A. Sperry Award official site Elmer A. Sperry Award recipients list Elmer A. Sperry Award at ASCE Transportation engineering Awards established in 1955 Awards of the American Society of Mechanical Engineers 1955 establishments in the United States Awards of the American Society of Civil Engineers IEEE awards
Elmer A. Sperry Award
[ "Engineering" ]
2,040
[ "Civil engineering", "Transportation engineering", "Industrial engineering" ]
8,689,044
https://en.wikipedia.org/wiki/Hoover%20Medal
The Hoover Medal is an American engineering prize. It has been given since 1930 for "outstanding extra-career services by engineers to humanity". The prize is given jointly by the American Institute of Chemical Engineers, American Institute of Mining, Metallurgical, and Petroleum Engineers, American Society of Civil Engineers, Institute of Electrical and Electronics Engineers, and American Society of Mechanical Engineers (ASME), which administers it. It is named for Herbert Hoover, the first recipient, who was an engineer by profession. Past recipients Source:ASME See also List of engineering awards List of mechanical engineering awards List of awards for contributions to society List of awards named after people References Awards established in 1930 Awards of the American Society of Mechanical Engineers Awards of the American Institute of Mining, Metallurgical, and Petroleum Engineers Awards of the American Institute of Chemical Engineers Awards of the American Society of Civil Engineers IEEE awards Awards for contributions to society
Hoover Medal
[ "Chemistry" ]
186
[ "Awards of the American Institute of Mining", "Awards of the American Institute of Chemical Engineers", " and Petroleum Engineers", "Chemical engineering awards", "American Institute of Mining", " Metallurgical" ]
8,689,069
https://en.wikipedia.org/wiki/Kelvin%20Gold%20Medal
The Kelvin Gold Medal is a British engineering prize. In the annual report for 1914, it was reported that the Lord Kelvin Memorial Executive Committee decided that the balance of funds left over from providing a memorial window at Westminster Abbey should be devoted to providing a Kelvin Gold Medal to mark "a distinction in engineering work or investigation" by the Presidents of eight leading British Engineering Institutions. There was a delay in awarding the first medal due to the World War. The medal has been given triennially since 1920 for "distinguished service in the application of science to engineering". The Institution of Civil Engineers (Great Britain) administered the prize. The Committee of Presidents considers recommendations received from similar bodies from all parts of the world. The first recipient was William Unwin. Recipients See also List of engineering awards References Awards established in 1920 Awards of the Institution of Civil Engineers
Kelvin Gold Medal
[ "Technology" ]
170
[ "Science and technology awards", "Science award stubs" ]
8,689,519
https://en.wikipedia.org/wiki/Non-revenue%20water
Non-revenue water (NRW) is water that has been produced and is "lost" before it reaches the customer. Losses can be real losses (through leaks, sometimes also referred to as physical losses) or apparent losses (for example through theft or metering inaccuracies). High levels of NRW are detrimental to the financial viability of water utilities, as well as to the quality of water itself. NRW is typically measured as the volume of water "lost" as a share of net water produced. However, it is sometimes also expressed as the volume of water "lost" per km of water distribution network per day. Components and audits The International Water Association (IWA) has developed a detailed methodology to assess the various components of NRW. Accordingly, NRW has the following components: Unbilled authorized consumption Apparent losses (water theft and metering inaccuracies) Real losses (from transmission mains, storage facilities, distribution mains or service connections) In many utilities the exact breakdown of NRW components and sub-components is simply not known, making it difficult to decide about the best course of action to reduce NRW. Metering of water use at the level of production (wells, bulk water supply), at key points in the distribution network and for consumers is essential to estimate levels of NRW (see Water metering). In most developed countries, there are no or very limited apparent losses. For developing countries the World Bank has estimated that, on average, apparent losses – in particular theft through illegal connections – account for about 40% of NRW. In some cities, apparent losses can be higher than real losses. Reducing apparent losses from illegal connections is often beyond what a utility can achieve by itself, because it requires a high level of political support. Illegal connections are often in slums, which means that their regularization in some cases particularly affects the poor. A water audit is a key tool to assess the breakdown of NRW and to develop a program for NRW reduction. Often a distinction is made between unvalidated and validated water audits. Unvalidated water audits are desktop studies that include many estimates and their results can have an error range for real losses of ± 50% or more. Their main value is to identify where it is necessary to reduce the uncertainty of the water audit through validation. Validating water audits is a complex process that involves testing of production water meters, testing of a representative random sample of customer meters, eliminating systematic errors created through the billing process and validating the number of illegal connections through aerial mapping, field surveys or cross-references between various existing databases. In developing countries it is rare to find utilities that have undertaken validated water audits, and even in developed countries they are not systematically used. The American Water Works Association (AWWA) has developed Water Audit Software which allows utilities to rate the overall degree of validity of their water audit data. Guidance on loss control planning is given based upon the credibility of the data and the measure of losses displayed by the water audit. NRW is sometimes also referred to as unaccounted-for water (UFW). While the two terms are similar, they are not identical, since non-revenue water includes authorized unbilled consumption (e.g. for firefighting or, in some countries, for use by religious institutions) while unaccounted-for water excludes it.
Indicators The most commonly used indicator to measure NRW is the percentage of NRW as a share of water produced. While this indicator is easy to understand and indeed has been widely used, it has increasingly been recognized that it is not an appropriate indicator to benchmark NRW levels between utilities or even to monitor changes over time. When losses in terms of absolute volume are constant the percentage of NRW varies greatly with total water use, i.e. if water use increases and the volume of losses remains constant the percentage of NRW declines. This problem can be eliminated by measuring NRW not as a share, but in terms of absolute losses per connection per day, as recommended by the International Water Association (IWA). Nevertheless, the use of percentage figures to compare levels of NRW remains common despite its shortcomings. The International Benchmarking Network for Water and Sanitation recommends to use different indicators (percentage, losses per connection or losses per km of network) together. Losses per kilometer of network are more appropriate to benchmark real losses, while losses per connection are more appropriate to benchmark apparent losses. The concept of NRW as an indicator to compare real losses of water utilities has been criticized as flawed, particularly because real losses depend to some extent on factors largely outside the control of the utility, such as topography, age of network, length of network per connection and water use per capita. As an alternative indicator for the measurement of real losses an Infrastructure Leakage Index (ILI) has been developed. The ILI is defined as the ratio of Current Annual Real Losses (CARL) to Unavoidable Annual Real Losses (UARL). Overview of NRW levels Expressed as a share of produced water The following percentages indicate the share of NRW in total water produced: Singapore 5% (UFW) Batam Island - Indonesia 15% (2019) - (ATB Batam) Denmark 6% Netherlands 6% Germany 7% (2005) Japan 7% (2007) Eastern Manila, Philippines 11% (2011), down from 63% in 1997 Tunisia 18% (2004) England and Wales 19% (2005) MWA, Bangkok 25% (2012) France 26% (2005) Dhaka, Bangladesh 29% (2010) Italy 29% (2005) Chile 34% (2006) Eastern Jakarta, Indonesia 42% (2016), down from 59% in 1998 Amman, Jordan 34% (2010) Mexico 51% (2003) Western Jakarta, Indonesia 39% (2011), down from 57% in 1998 Kosovo 58% Bauchi state, Nigeria 70% Yerevan, Armenia 72% (1999) Lagos, Nigeria 96% (pre-2003) Expressed in cubic meters per network length The following figures are expressed in cubic meters per kilometer of distribution network per day: Netherlands 1.5 Denmark 1.6 Germany (towns) 0.7–2.4 Germany (large cities) 2.4–5 Australia 4.4 Malmö, Sweden 5 California Water Service Company 6 Portugal 7 England and Wales 10 Helsinki 18 Penn American Water 19 Russia 20 (2006) Stockholm 21 Scotland 21.3 Illinois American Water 26 Ireland 29 Brazil 42 (2006) China 52 (2006) Bucharest 350 in 2000 and 176 in 2007 These levels are given per km of network, not per connection. Benefits of NRW reduction The World Bank has estimated the total cost of NRW to utilities worldwide at US$14 billion per year. Reducing by half the current levels of losses in developing countries, where relative losses are highest, could generate an estimated US$2.9 billion in cash and serve an additional 90 million people. 
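Before turning to the individual benefits, a minimal numerical sketch may help relate the indicators defined above. The following Python fragment is purely illustrative: the utility, all of its figures, and the assumed unavoidable-loss volume are invented for the example and are not drawn from the article, the IWA, or any benchmarking dataset.
system_input_m3 = 36_500_000   # water supplied into the network per year, m3 (hypothetical)
billed_m3       = 27_000_000   # water billed to customers per year, m3 (hypothetical)
connections     = 250_000      # number of service connections (hypothetical)
network_km      = 4_000        # length of distribution mains, km (hypothetical)
carl_m3         = 8_000_000    # current annual real losses, m3 (hypothetical, part of NRW)
uarl_m3         = 1_600_000    # unavoidable annual real losses, m3 (assumed here, not computed)
nrw_m3       = system_input_m3 - billed_m3        # non-revenue water, m3 per year
nrw_share    = 100 * nrw_m3 / system_input_m3     # percentage of system input
nrw_per_conn = nrw_m3 / connections / 365 * 1000  # litres per connection per day
nrw_per_km   = nrw_m3 / network_km / 365          # m3 per km of network per day
ili          = carl_m3 / uarl_m3                  # Infrastructure Leakage Index (CARL/UARL)
print(f"NRW: {nrw_share:.1f}% | {nrw_per_conn:.0f} l/connection/day | {nrw_per_km:.1f} m3/km/day | ILI {ili:.1f}")
The same volume of losses thus yields three different headline figures (about 26% of input, roughly 104 litres per connection per day, and about 6.5 m3 per km of network per day), which is why reporting the indicators together, rather than relying on the percentage alone, is recommended above.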
Benefits of NRW reduction, in particular of leakage reduction, include financial gains from increased water sales or reduced water production, including possibly the delay of costly capacity expansion; increased knowledge about the distribution system; increased firefighting capability due to increased pressure; reduced property damage; reduced risk of contamination; and more stable water pressure throughout the system. Leakage reduction may also be an opportunity to improve relations with the public and employees. A leak detection program may be highly visible, encouraging people to think about water conservation. The reduction of commercial losses, while politically and socially challenging, can also improve relations with the public, since some consumers may be disgruntled to know that others are underbilled. In the specific context of the United States, NRW reduction can also mean reduced legal liability and reduced insurance payments. Programs to reduce NRW Reducing NRW is a complex process. While some programs have been successful, there are many pitfalls. Successful programs In the following cities high levels of non-revenue water have been substantially reduced: Dolphin Coast (iLembe), South Africa, 30% in 1999 to 16% in 2003 by the private utility Siza Water Company; Istanbul, Turkey, from more than 50% prior to 1994 to 34% in 2000 by the public utility ISKI; Jamshedpur, India, from an estimated 36% in 2005 to 10% in 2009 by the private utility Jamshedpur Utilities and Services Company; East Manila, Philippines, from 63% in 1997 to 16% in 2009 by the private utility Manila Water; Ouagadougou and other cities in Burkina Faso, by the public utility Office National de l'Eau et de l'Assainissement (ONEA) which achieved a level of 16% in 2008; Paranaguá, Brazil, from 58% in 2000 to 38% in 2006 by a private utility; Phnom Penh, Cambodia, from 72% in 1993 to 6% in 2008 by the public utility Phnom Penh Water Supply Authority (PPWSA) (see Water supply in Phnom Penh for more details); Five municipalities in Rio de Janeiro State (Prolagos), Brazil, from 60% in 2000 to 36% in 2006 by a private utility; Rabat, Morocco, from 32% in 2002 to 19% in 2008 by the private utility REDAL; Cities in Senegal, from 32% in 1996 to 20% in 2006 by the private utility Senegalaise des Eaux; Tangiers, Morocco, from 41% in 2002 to 21% in 2008 by the private utility Amendis; 8 districts in Johor State, Malaysia, from 38% in 2004 to 29% in 2011 by the private utility Ranhill Utilities; Western part of Metro Manila, Philippines, where NRW was reduced from 1,580 million liters per day in 2008 to 650 million liters per day in 2014 in cooperation with the private utility Miya. These successes were achieved by both public and private utilities, in every continent, in emerging countries as well as very poor countries, in large cities and smaller towns. All required a long-term commitment by utility management and the government – local or national – over a period of at least four years.
Both apparent and real losses have a natural tendency to increase if nothing is done: more leakage will occur, there will be more defective meters, and information on customers and networks will become more outdated. In order to sustain NRW at low levels, investments in fixing leaks and replacing meters are insufficient in the best case and ineffective in the worst case. To achieve permanent results, management procedures related to a utility's organization, procedures and human resources have to be changed. Additionally, the implementation of an intelligent pressure management system is an efficient approach to reduce the total real losses in the long term. It is one of the most basic and lucrative forms of optimizing a system and generally provides fast investment paybacks. According to a study by the World Bank, there are several reasons why NRW levels in developing countries have not been reduced significantly. Another source quotes the seven most frequent reasons for failure of NRW reduction programs as follows: Poor design Diagnoses based on preconceptions rather than experimentation Partial implementation Failure to mobilize the necessary human and financial resources Lack of coordination between the components of the program Underestimation of the difficulties Underestimation of the time factor Optimal level There is some debate as to what is an economically optimal level of leakage or, speaking more broadly, of NRW. From a financial or economic point of view it is not appropriate to try to reduce NRW to the lowest possible level, because the marginal cost of reducing NRW increases once the cheaper options have been exploited. Once the marginal cost of reducing NRW exceeds the marginal benefits or water savings, an economic optimum has been achieved (a simple numerical sketch of this trade-off is given below). Benefits should be measured through reduced production costs if reduction of NRW results in lower water production, through the avoided costs of additional supply capacity if the system is close to the limit of its capacity and demand is growing, or through the value of water sold if reduction of NRW results in additional water sales. The latter can be done by valuing water through water tariffs (financial value) or through the willingness to pay by customers (economic value). There are fewer financial incentives for a utility to reduce NRW if water production is cheap, if there is no or little metering (so that revenues are independent of actual consumption), or if volumetric tariffs are low. In the United Kingdom the assessment of economic levels of leakage has a long history. The first national study on the topic was published in 1980, setting down a methodology for the assessment of economic leakage levels. This led to the implementation of sectors (District Metered Areas) in most water companies in the UK. The findings were reported in a major national research program in 1994. As a result of a drought in 1995/96 a number of companies initiated major leakage management programmes based on economic assessments. The situation in other parts of the world is quite different from the UK. Particularly in developing countries, sectorisation is very rare and proactive leakage control is limited. The benefits of pressure management are not widely appreciated and there is generally no assessment of the economic level of leakage.
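A small sketch may make the marginal-cost argument above more concrete. The cost curve and the value of saved water below are invented purely for illustration; they do not come from the article, from any UK study, or from any utility's data.
def economic_leakage_level(levels, marginal_cost, marginal_benefit):
    """Lowest leakage level (as % of input) at which saving one more cubic metre
    still costs no more than it is worth; below that point, further reduction
    costs more than the water it recovers."""
    worthwhile = [lvl for lvl in levels if marginal_cost(lvl) <= marginal_benefit]
    return min(worthwhile) if worthwhile else max(levels)
def cost_per_m3_saved(leakage_pct):
    # Invented curve: saving an extra cubic metre gets steeply more expensive
    # as leakage approaches its unavoidable minimum ($ per m3 saved).
    return 0.10 + 8.0 / leakage_pct
value_of_water = 0.45   # invented value of one m3 of water, $ per m3
print(economic_leakage_level(range(5, 41), cost_per_m3_saved, value_of_water))   # prints 23
In this toy setting the utility would stop at roughly 23% leakage: the economically optimal level of leakage is not zero, which is exactly the point about marginal cost exceeding marginal benefit made above.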
From a public health and drinking water quality point of view it is being argued that the level of real water losses should be as low as possible, independently of economic or financial considerations, in order to minimize the risk of drinking water contamination in the distribution network. The World Bank recommends that NRW should be "less than 25%", while the Chilean water regulator SISS has determined a NRW level of 15% as optimal in its model of an efficient water company that it uses to benchmark service providers. In England and Wales NRW stands at 19% or 149 liter/property/day. In the United States the American Water Works Association's (AWWA) Water Loss Control Committee recommended in 2009 that water utilities conduct annual water audits as a standard business practice. AWWA recommends that water utilities should track volumes of apparent and real losses and the annual cost impacts of these losses. Utilities should then seek to control excessive losses to levels that are economic for the water utility. In 1999 the California Urban Water Conservation Council identified a 10 percent benchmark for non-revenue water. See also Water supply Water meter Utility References Water supply
Non-revenue water
[ "Chemistry", "Engineering", "Environmental_science" ]
3,000
[ "Hydrology", "Water supply", "Environmental engineering" ]
8,690,896
https://en.wikipedia.org/wiki/Hydroamination
In organic chemistry, hydroamination is the addition of an N−H bond of an amine across a carbon-carbon multiple bond of an alkene, alkyne, diene, or allene. In the ideal case, hydroamination is atom economical and green. Amines are common in fine-chemical, pharmaceutical, and agricultural industries. Hydroamination can be used intramolecularly to create heterocycles or intermolecularly with a separate amine and unsaturated compound. The development of catalysts for hydroamination remains an active area, especially for alkenes. Although practical hydroamination reactions can be effected for dienes and electrophilic alkenes, the term hydroamination often implies metal-catalyzed processes. History Hydroamination is well-established technology for generating fragrances from myrcene. In this conversion, diethylamine adds across the diene substituent, the reaction being catalyzed by lithium diethylamide. Intramolecular hydroaminations were reported by Tobin J. Marks in 1989 using metallocenes derived from rare-earth metals such as lanthanum, lutetium, and samarium. Catalytic rates correlated inversely with the ionic radius of the metal, perhaps as a consequence of steric interference from the ligands. In 1992, Marks developed the first chiral hydroamination catalysts by using a chiral auxiliary, which were the first hydroamination catalysts to favor only one specific stereoisomer. Chiral auxiliaries on the metallocene ligands were used to dictate the stereochemistry of the product. The first non-metallocene chiral catalysts were reported in 2003, and used bisarylamido and aminophenolate ligands to give higher enantioselectivity. Reaction scope Hydroamination has been examined with a variety of amines, unsaturated substrates, and vastly different catalysts. Amines that have been investigated span a wide scope including primary, secondary, cyclic, acyclic, and anilines with diverse steric and electronic substituents. The unsaturated substrates that have been investigated include alkenes, dienes, alkynes, and allenes. For intramolecular hydroamination, various aminoalkenes have been examined. Products Addition across the unsaturated carbon-carbon bond can be Markovnikov or anti-Markovnikov depending on the catalyst. When considering the possibility of R/S chirality, four products can be obtained: Markovnikov with R or S and anti-Markovnikov addition with R or S. Although there have been many reports of catalytic hydroamination with a wide range of metals, there are far fewer describing enantioselective catalysis to selectively make one of the four possible products. Recently, there have been reports of selectively making the thermodynamic or kinetic product, which can be related to the racemic Markovnikov or anti-Markovnikov structures (see Thermodynamic and Kinetic Product below). Catalysts and catalytic cycle Hydroamination reactions are atom-efficient processes that generally use readily available and cheap starting materials, therefore a general catalytic strategy is highly desirable. Also, direct catalytic hydroamination strategies have in principle significant benefits over more classical methods to prepare amine containing compounds, including the reduction in the number of synthetic steps required.
However, hydroamination reactions pose some tough challenges for catalysis: Strong electron repulsion of the nitrogen atom lone pair and the electron rich carbon-carbon multiple bond, coupled with hydroamination reactions being entropically disfavoured (particularly the intermolecular version), results in a large reaction barrier. Regioselectivity issues also hamper the synthetic utility of the resulting products, with Markovnikov addition of the amine being the most common outcome over the less favoured anti-Markovnikov addition (see figure). As a result, there are now numerous catalysts that can be utilised in the hydroamination of alkene, allene and alkyne substrates, including various metal based heterogeneous catalysts, early-transition metal complexes (e.g. titanium and zirconium), late-transition metal complexes (e.g. ruthenium and palladium), lanthanide and actinide complexes (e.g. samarium and lanthanum), as well as Brønsted acids and bases. Catalysts Many metal-ligand combinations have been reported to catalyze hydroamination, including main group elements including alkali metals such as lithium, group 2 metals such as calcium, as well as group 3 metals such as aluminum, indium, and bismuth. In addition to these main group examples, extensive research has been conducted on the transition metals with reports of early, mid, and late metals, as well as first, second, and third row elements. Finally, the lanthanides have been thoroughly investigated. Zeolites have also shown utility in hydroamination. Catalytic cycles The mechanism of metal-catalyzed hydroamination has been well studied. Particularly well studied is the organolanthanide catalyzed intramolecular hydroamination of alkenes. First, the catalyst is activated by amide exchange, generating the active catalyst (i). Next, the alkene inserts into the Ln-N bond (ii). Finally, protonolysis occurs, generating the cyclized product while also regenerating the active catalyst (iii). Although this mechanism depicts the use of a lanthanide catalyst, it is the basis for rare-earth, actinide, and alkali metal based catalysts. Late transition metal hydroamination catalysts have multiple models based on the regioselectivity-determining step. The four main categories are (1) nucleophilic attack on an alkene, alkyne, or allyl ligand and (2) insertion of the alkene into the metal-amide bond. Generic catalytic cycles appear below. Mechanisms are supported by rate studies, isotopic labeling, and trapping of the proposed intermediates. Thermodynamics and kinetics The hydroamination reaction is approximately thermochemically neutral. The reaction, however, suffers from a high activation barrier, perhaps owing to the repulsion of the electron-rich substrate and the amine nucleophile. The intermolecular reaction is also accompanied by a highly negative entropy change, making it unfavorable at higher temperatures. Consequently, catalysts are necessary for this reaction to proceed. As usual in chemistry, intramolecular processes occur at faster rates than intermolecular versions. Thermodynamic vs kinetic product In general, most hydroamination catalysts require elevated temperatures to function efficiently, and as such, only the thermodynamic product is observed. The isolation and characterization of the rarer and more synthetically valuable kinetic allyl amine product was reported when allenes were used as the unsaturated substrate. One system utilized temperatures of 80 °C with a rhodium catalyst and aniline derivatives as the amine.
The other reported system utilized a palladium catalyst at room temperature with a wide range of primary and secondary cyclic and acyclic amines. Both systems produced the desired allyl amines in high yield, which contain an alkene that can be further functionalized through traditional organic reactions. Base catalyzed hydroamination Strong bases catalyze hydroamination, an example being the ethylation of piperidine using ethene: C5H10NH + C2H4 → C5H10NC2H5. Such base catalyzed reactions proceed well with ethene but higher alkenes are less reactive. Hydroamination catalyzed by group (IV) complexes Certain titanium and zirconium complexes catalyze intermolecular hydroamination of alkynes and allenes. Both stoichiometric and catalytic variants were initially examined with zirconocene bis(amido) complexes. Titanocene amido and sulfonamido complexes catalyze the intramolecular hydroamination of aminoalkenes via a [2+2] cycloaddition that forms the corresponding azametallacyclobutane, as illustrated in the figure below. Subsequent protonolysis by incoming substrate gives the α-vinyl-pyrrolidine (1) or tetrahydropyridine (2) product. Experimental and theoretical evidence support the proposed imido intermediate and mechanism with neutral group IV catalysts. Formal hydroamination The addition of hydrogen and an amino group (NR2) using reagents other than the amine HNR2 is known as a "formal hydroamination" reaction. Although the advantages of atom economy and/or ready availability of the nitrogen source are diminished as a result, the greater thermodynamic driving force, as well as the ability to tune the aminating reagent, are potentially useful. In place of the amine, hydroxylamine esters and nitroarenes have been reported as nitrogen sources. Applications Hydroamination could find applications due to the valuable nature of the resulting amine, as well as the greenness of the process. Functionalized allylamines, which can be produced through hydroamination, have extensive pharmaceutical application, although presently such species are not prepared by hydroamination. Hydroamination has been utilized to synthesize the allylamine Cinnarizine in quantitative yield. Cinnarizine treats both vertigo and motion sickness related nausea. Hydroamination is also promising for the synthesis of alkaloids. An example was the hydroamination step used in the total synthesis of (-)-epimyrtine. See also Ammoxidation - reaction of ammonia with alkenes to give nitriles Hydroboration Hydrosilylation (Olefin) Hydration Hydrofunctionalization References Addition reactions Organometallic chemistry Homogeneous catalysis Catalysis
Hydroamination
[ "Chemistry" ]
2,084
[ "Catalysis", "Homogeneous catalysis", "Organometallic chemistry", "Chemical kinetics" ]
4,130,888
https://en.wikipedia.org/wiki/Darboux%27s%20theorem%20%28analysis%29
In mathematics, Darboux's theorem is a theorem in real analysis, named after Jean Gaston Darboux. It states that every function that results from the differentiation of another function has the intermediate value property: the image of an interval is also an interval. When ƒ is continuously differentiable (ƒ in C1([a,b])), this is a consequence of the intermediate value theorem. But even when ƒ′ is not continuous, Darboux's theorem places a severe restriction on what it can be. Darboux's theorem Let I be a closed interval, and let ƒ : I → R be a real-valued differentiable function. Then ƒ′ has the intermediate value property: If a and b are points in I with a < b, then for every y between ƒ′(a) and ƒ′(b), there exists an x in [a,b] such that ƒ′(x) = y. Proofs Proof 1. The first proof is based on the extreme value theorem. If y equals ƒ′(a) or ƒ′(b), then setting x equal to a or b, respectively, gives the desired result. Now assume that y is strictly between ƒ′(a) and ƒ′(b), and in particular that ƒ′(a) > y > ƒ′(b). Let φ : I → R be such that φ(t) = ƒ(t) − yt. If it is the case that ƒ′(a) < y < ƒ′(b) we adjust our below proof, instead asserting that φ has its minimum on [a,b]. Since φ is continuous on the closed interval [a,b], the maximum value of φ on [a,b] is attained at some point in [a,b], according to the extreme value theorem. Because φ′(a) = ƒ′(a) − y > 0, we know φ cannot attain its maximum value at a. (If it did, then (φ(t) − φ(a))/(t − a) ≤ 0 for all t in (a,b], which implies φ′(a) ≤ 0.) Likewise, because φ′(b) = ƒ′(b) − y < 0, we know φ cannot attain its maximum value at b. Therefore, φ must attain its maximum value at some point x in (a,b). Hence, by Fermat's theorem, φ′(x) = 0, i.e. ƒ′(x) = y. Proof 2. The second proof is based on combining the mean value theorem and the intermediate value theorem. Define c = (a + b)/2. For t in [a,c] define α(t) = a and β(t) = 2t − a. And for t in [c,b] define α(t) = 2t − b and β(t) = b. Thus, for t in (a,b) we have a ≤ α(t) < β(t) ≤ b. Now, define g(t) = (ƒ(β(t)) − ƒ(α(t))) / (β(t) − α(t)) with t in (a,b). g is continuous in (a,b). Furthermore, g(t) → ƒ′(a) when t → a and g(t) → ƒ′(b) when t → b; therefore, from the Intermediate Value Theorem, if y is in (ƒ′(a), ƒ′(b)), then there exists t0 in (a,b) such that g(t0) = y. Let us fix t0. From the Mean Value Theorem, there exists a point x in (α(t0), β(t0)) such that ƒ′(x) = (ƒ(β(t0)) − ƒ(α(t0))) / (β(t0) − α(t0)). Hence, ƒ′(x) = y. Darboux function A Darboux function is a real-valued function ƒ which has the "intermediate value property": for any two values a and b in the domain of ƒ, and any y between ƒ(a) and ƒ(b), there is some c between a and b with ƒ(c) = y. By the intermediate value theorem, every continuous function on a real interval is a Darboux function. Darboux's contribution was to show that there are discontinuous Darboux functions. Every discontinuity of a Darboux function is essential, that is, at any point of discontinuity, at least one of the left hand and right hand limits does not exist. An example of a Darboux function that is discontinuous at one point is the topologist's sine curve function: x ↦ sin(1/x) for x ≠ 0, with value 0 at x = 0. By Darboux's theorem, the derivative of any differentiable function is a Darboux function. In particular, the derivative of the function x ↦ x² sin(1/x) (for x ≠ 0, and 0 at x = 0) is a Darboux function even though it is not continuous at one point. An example of a Darboux function that is nowhere continuous is the Conway base 13 function. Darboux functions are a quite general class of functions. It turns out that any real-valued function ƒ on the real line can be written as the sum of two Darboux functions. This implies in particular that the class of Darboux functions is not closed under addition. A strongly Darboux function is one for which the image of every (non-empty) open interval is the whole real line. The Conway base 13 function is again an example. Notes External links Theorems in calculus Theory of continuous functions Theorems in real analysis Articles containing proofs
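As a concrete illustration of the example mentioned above, the short computation that follows is a standard textbook exercise, included purely for illustration and not drawn from the article's sources. Let ƒ(x) = x² sin(1/x) for x ≠ 0 and ƒ(0) = 0. For x ≠ 0 the product and chain rules give ƒ′(x) = 2x sin(1/x) − cos(1/x), while at the origin the difference quotient gives ƒ′(0) = lim(h→0) h sin(1/h) = 0, so ƒ is differentiable everywhere. At the points x_k = 1/(kπ) one has sin(1/x_k) = 0 and cos(1/x_k) = (−1)^k, so ƒ′(x_k) = −(−1)^k equals +1 for odd k and −1 for even k at points arbitrarily close to 0; hence lim(x→0) ƒ′(x) does not exist and ƒ′ is discontinuous at 0. Consistent with Darboux's theorem, ƒ′ nevertheless has the intermediate value property: on each interval (0, δ) it is continuous and attains both −1 and +1, so it attains every value in between, and the image of any interval under ƒ′ is again an interval.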
Darboux's theorem (analysis)
[ "Mathematics" ]
764
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus", "Theory of continuous functions", "Theorems in real analysis", "Topology", "Articles containing proofs" ]
4,131,200
https://en.wikipedia.org/wiki/Insulator%20%28genetics%29
An insulator is a type of cis-regulatory element known as a long-range regulatory element. Found in multicellular eukaryotes and working over distances from the promoter element of the target gene, an insulator is typically 300 bp to 2000 bp in length. Insulators contain clustered binding sites for sequence specific DNA-binding proteins and mediate intra- and inter-chromosomal interactions. Insulators function either as an enhancer-blocker or a barrier, or both. The mechanisms by which an insulator performs these two functions include loop formation and nucleosome modifications. There are many examples of insulators, including the CTCF insulator, the gypsy insulator, and the β-globin locus. The CTCF insulator is especially important in vertebrates, while the gypsy insulator is implicated in Drosophila. The β-globin locus was first studied in chicken and then in humans for its insulator activity, both of which utilize CTCF. The genetic implications of insulators lie in their involvement in a mechanism of imprinting and their ability to regulate transcription. Mutations to insulators are linked to cancer as a result of cell cycle dysregulation, tumourigenesis, and silencing of growth suppressors. Function Insulators have two main functions: Enhancer-blocking insulators prevent distal enhancers from acting on the promoter of neighbouring genes Barrier insulators prevent silencing of euchromatin by inhibiting the spread of neighbouring heterochromatin While enhancer-blocking is classified as an inter-chromosomal interaction, acting as a barrier is classified as an intra-chromosomal interaction. The need for insulators arises where two adjacent genes on a chromosome have very different transcription patterns; it is critical that the inducing or repressing mechanisms of one do not interfere with the neighbouring gene. Insulators have also been found to cluster at the boundaries of topologically associating domains (TADs) and may have a role in partitioning the genome into "chromosome neighborhoods" - genomic regions within which regulation occurs. Some insulators can act as both enhancer blockers and barriers, and some have just one of the two functions. Some examples of different insulators are: the Drosophila melanogaster insulators gypsy and scs, which are both enhancer-blocking insulators; the Gallus gallus insulators Lys 5' A, which has both enhancer-blocking and barrier activity, and HS4, which has only enhancer-blocking activity; the Saccharomyces cerevisiae insulators STAR and UASrpg, which are both barrier insulators; and the Homo sapiens HS5 insulator, which acts as an enhancer-blocker. Mechanism of action Enhancer-blocking insulators Enhancer-blocking insulators share a similar mechanism of action: chromatin loop domains are formed in the nucleus that separate the enhancer and the promoter of a target gene. Loop domains are formed through enhancer-blocking elements interacting with each other or securing chromatin fibre to structural elements within the nucleus. The action of these insulators is dependent on being positioned between the promoter of the target gene and the upstream or downstream enhancer. The specific way in which insulators block enhancers is dependent on the enhancer's mode of action. Enhancers can directly interact with their target promoters through looping (direct-contact model), in which case an insulator prevents this interaction through the formation of a loop domain that separates the enhancer and promoter sites and prevents the promoter-enhancer loop from forming.
An enhancer can also act on a promoter through a signal (tracking model of enhancer action). This signal may be blocked by an insulator through the targeting of a nucleoprotein complex at the base of the loop formation. Barrier insulators Barrier activity has been linked to the disruption of specific processes in the heterochromatin formation pathway. These types of insulators modify the nucleosomal substrate in the reaction cycle that is central to heterochromatin formation. Modifications are achieved through various mechanisms including nucleosome removal, in which nucleosome-excluding elements disrupt heterochromatin from spreading and silencing (chromatin-mediated silencing). Modification can also be done through recruitment of histone acetyltransferase(s) and ATP-dependent nucleosome remodelling complexes. CTCF insulator The CTCF insulator appears to have enhancer blocking activity via its 3D structure and have no direct connection with barrier activity. Vertebrates in particular appear to rely heavily on the CTCF insulator, however there are many different insulator sequences identified. Insulated neighborhoods formed by physical interaction between two CTCF-bound DNA loci contain the interactions between enhancers and their target genes. Regulation One mechanism of regulating CTCF is via methylation of its DNA sequence. CTCF protein is known to favourably bind to unmethylated sites, so it follows that methylation of CpG islands is a point of epigenetic regulation. An example of this is seen in the Igf2-H19 imprinted locus where methylation of the paternal imprinted control region (ICR) prevents CTCF from binding. A second mechanism of regulation is through regulating proteins that are required for fully functioning CTCF insulators. These proteins include, but are not limited to cohesin, RNA polymerase, and CP190. gypsy insulator The insulator element that is found in the gypsy retrotransposon of Drosophila is one of several sequences that have been studied in detail. The gypsy insulator can be found in the 5' untranslated region (UTR) of the retrotransposon element. Gypsy affects the expression of adjacent genes pending insertion into a new genomic location, causing mutant phenotypes that are both tissue specific and present at certain developmental stages. The insulator likely has an inhibitory effect on enhancers that control the spatial and temporal expression of the affected gene. β-globin locus The first examples of insulators in vertebrates was seen in the chicken β-globin locus, cHS4. cHS4 marks the border between the active euchromatin in the β-globin locus and the upstream heterochromatin region that is highly condensed and inactive. The cHS4 insulator acts as both a barrier to chromatin-mediated silencing via heterochromatin spreading, and blocks interactions between enhancers and promoters. A distinguishing characteristic of cHS4 is that it has a repetitive heterochromatic region on its 5' end. The human β-globin locus homologue of cHS4 is HS5. Different from the chicken β-globin locus, the human β-globin locus has an open chromatin structure and is not flanked by a 5' heterochromatic region. HS5 is thought to be a genetic insulator in vivo as it has both enhancer-blocking activity and transgene barrier activities. CTCF was first characterized for its role in regulating β-globin gene expression. At this locus, CTCF functions as an insulator-binding protein forming a chromosomal boundary. CTCF is present in both the chicken β-globin locus and human β-globin locus. 
Within cHS4 of the chicken β-globin locus, CTCF binds to a region (FII) that is responsible for enhancer blocking activity. Genetic implications Imprinting The ability of enhancers to activate imprinted genes is dependent on the presence of an insulator on the unmethylated allele between the two genes. An example of this is the Igf2-H19 imprinted locus. In this locus the CTCF protein regulates imprinted expression by binding to the unmethylated maternal imprinted control region (ICR) but not on the paternal ICR. When bound to the unmethylated maternal sequence, CTCF effectively blocks downstream enhancer elements from interacting with the Igf2 gene promoter, leaving only the H19 gene to be expressed. Transcription When insulator sequences are located in close proximity to the promoter of a gene, it has been suggested that they might serve to stabilize enhancer-promoter interactions. When they are located farther away from the promoter, insulator elements would compete with the enhancer and interfere with activation of transcription. Loop formation is common in eukaryotes to bring distal elements (enhancers, promoters, locus control regions) into closer proximity for interaction during transcription. The mechanism of enhancer-blocking insulators then, if in the correct position, could play a role in regulating transcription activation. Mutations and cancer CTCF insulators affect the expression of genes implicated in cell cycle regulation processes that are important for cell growth, cell differentiation, and programmed cell death (apoptosis). Two of these cell cycle regulation genes that are known to interact with CTCF are hTERT and C-MYC. In these cases, a loss of function mutation to the CTCF insulator gene changes the expression patterns and may affect the interplay between cell growth, differentiation and apoptosis and lead to tumourigenesis or other problems. CTCF is also required for the expression of tumour repressor retinoblastoma (Rb) gene and mutations and deletions of this gene are associated with inherited malignancies. When the CTCF binding site is removed expression of Rb is decreased and tumours are able to thrive. Other genes that encode cell cycle regulators include BRCA1, and p53, which are growth suppressors that are silenced in many cancer types, and whose expression is controlled by CTCF. Loss of function of CTCF in these genes leads to the silencing of the growth suppressor and contributes to the formation of cancer. The aberrant activation of insulators can modulate the expression of cancer-related genes, including matrix metalloproteinases involved in cancer cell invasion. References External links Gene expression
Insulator (genetics)
[ "Chemistry", "Biology" ]
2,123
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
4,131,413
https://en.wikipedia.org/wiki/Gidazepam
Gidazepam, also known as hydazepam or hidazepam, is an atypical benzodiazepine derivative developed in the Soviet Union. It is a selective anxiolytic benzodiazepine. It also has therapeutic value in the management of certain cardiovascular disorders. Pharmacology Gidazepam and several of its analogs, in contrast to other benzodiazepines, are comparatively more selective agonists of TSPO (formerly the peripheral benzodiazepine receptor) than of the benzodiazepine receptor. Gidazepam acts as a prodrug to its active metabolite 7-bromo-2,3-dihydro-5-phenyl-1H-1,4-benzodiazepin-2-one (desalkylgidazepam or bromo-nordazepam). Its anxiolytic effects can take several hours to manifest, presumably due to its slow metabolism (half-life 87 hours). The onset and intensity of anxiolytic effects correlate with blood levels of desalkylgidazepam. See also Phenazepam—another benzodiazepine widely used in Russia and other CIS countries Cinazepam Cloxazolam List of Russian drugs References Benzodiazepines Organobromides Lactams Hydrazides Russian drugs Anxiolytics Prodrugs Drugs in the Soviet Union
Gidazepam
[ "Chemistry" ]
323
[ "Chemicals in medicine", "Prodrugs" ]
4,131,678
https://en.wikipedia.org/wiki/Hammond%27s%20postulate
Hammond's postulate (or alternatively the Hammond–Leffler postulate), is a hypothesis in physical organic chemistry which describes the geometric structure of the transition state in an organic chemical reaction. First proposed by George Hammond in 1955, the postulate states that: If two states, as, for example, a transition state and an unstable intermediate, occur consecutively during a reaction process and have nearly the same energy content, their interconversion will involve only a small reorganization of the molecular structures. Therefore, the geometric structure of a state can be predicted by comparing its energy to the species neighboring it along the reaction coordinate. For example, in an exothermic reaction the transition state is closer in energy to the reactants than to the products. Therefore, the transition state will be more geometrically similar to the reactants than to the products. In contrast, however, in an endothermic reaction the transition state is closer in energy to the products than to the reactants. So, according to Hammond’s postulate the structure of the transition state would resemble the products more than the reactants. This type of comparison is especially useful because most transition states cannot be characterized experimentally. Hammond's postulate also helps to explain and rationalize the Bell–Evans–Polanyi principle. Namely, this principle describes the experimental observation that the rate of a reaction, and therefore its activation energy, is affected by the enthalpy of that reaction. Hammond's postulate explains this observation by describing how varying the enthalpy of a reaction would also change the structure of the transition state. In turn, this change in geometric structure would alter the energy of the transition state, and therefore the activation energy and reaction rate as well. The postulate has also been used to predict the shape of reaction coordinate diagrams. For example, electrophilic aromatic substitution involves a distinct intermediate and two less well defined states. By measuring the effects of aromatic substituents and applying Hammond's postulate it was concluded that the rate-determining step involves formation of a transition state that should resemble the intermediate complex. History During the 1940s and 1950s, chemists had trouble explaining why even slight changes in the reactants caused significant differences in the rate and product distributions of a reaction. In 1955 George Hammond, a young professor at Iowa State University, postulated that transition-state theory could be used to qualitatively explain the observed structure-reactivity relationships. Notably, John E. Leffler of Florida State University proposed a similar idea in 1953. However, Hammond's version has received more attention since its qualitative nature was easier to understand and employ than Leffler's complex mathematical equations. Hammond's postulate is sometimes called the Hammond–Leffler postulate to give credit to both scientists. Interpreting the postulate Effectively, the postulate states that the structure of a transition state resembles that of the species nearest to it in free energy. This can be explained with reference to potential energy diagrams: In case (a), which is an exothermic reaction, the energy of the transition state is closer in energy to that of the reactant than that of the intermediate or the product. Therefore, from the postulate, the structure of the transition state also more closely resembles that of the reactant. 
In case (b), the energy of the transition state is close to neither the reactant nor the product, making none of them a good structural model for the transition state. Further information would be needed in order to predict the structure or characteristics of the transition state. Case (c) depicts the potential diagram for an endothermic reaction, in which, according to the postulate, the transition state should more closely resemble that of the intermediate or the product. Another significance of Hammond’s postulate is that it permits us to discuss the structure of the transition state in terms of the reactants, intermediates, or products. In the case where the transition state closely resembles the reactants, the transition state is called “early” while a “late” transition state is the one that closely resembles the intermediate or the product. An example of the “early” transition state is chlorination. Chlorination favors the products because it is an exothermic reaction, which means that the products are lower in energy than the reactants. When looking at the adjacent diagram (representation of an "early" transition state), one must focus on the transition state, which is not able to be observed during an experiment. To understand what is meant by an “early” transition state, the Hammond postulate represents a curve that shows the kinetics of this reaction. Since the reactants are higher in energy, the transition state appears to be right after the reaction starts. An example of the “late” transition state is bromination. Bromination favors the reactants because it is an endothermic reaction, which means that the reactants are lower in energy than the products. Since the transition state is hard to observe, the postulate of bromination helps to picture the “late” transition state (see the representation of the "late" transition state). Since the products are higher in energy, the transition state appears to be right before the reaction is complete. One other useful interpretation of the postulate often found in textbooks of organic chemistry is the following: Assume that the transition states for reactions involving unstable intermediates can be closely approximated by the intermediates themselves. This interpretation ignores extremely exothermic and endothermic reactions which are relatively unusual and relates the transition state to the intermediates which are usually the most unstable. Structure of transition states SN1 reactions Hammond's postulate can be used to examine the structure of the transition states of a SN1 reaction. In particular, the dissociation of the leaving group is the first transition state in a SN1 reaction. The stabilities of the carbocations formed by this dissociation are known to follow the trend tertiary > secondary > primary > methyl. Therefore, since the tertiary carbocation is relatively stable and therefore close in energy to the R-X reactant, then the tertiary transition state will have a structure that is fairly similar to the R-X reactant. In terms of the graph of reaction coordinate versus energy, this is shown by the fact that the tertiary transition state is further to the left than the other transition states. In contrast, the energy of a methyl carbocation is very high, and therefore the structure of the transition state is more similar to the intermediate carbocation than to the R-X reactant. Accordingly, the methyl transition state is very far to the right. 
SN2 reactions Bimolecular nucleophilic substitution (SN2) reactions are concerted reactions in which both the nucleophile and the substrate are involved in the rate-limiting step. Since this reaction is concerted, it occurs in one step, in which bonds are broken while new bonds are formed. Therefore, to interpret this reaction, it is important to look at the transition state, which resembles the concerted rate-limiting step. In the transition state of an SN2 reaction, the nucleophile forms a new bond to the carbon while the carbon–halide (L) bond is broken. E1 reactions An E1 reaction consists of a unimolecular elimination, where the rate-determining step of the mechanism depends on the removal of a single molecular species. This is a two-step mechanism. The more stable the carbocation intermediate is, the faster the reaction will proceed, favoring the products. Stabilization of the carbocation intermediate lowers the activation energy. The reactivity order is (CH3)3C- > (CH3)2CH- > CH3CH2- > CH3-. Furthermore, studies describe a typical kinetic resolution process that starts out with two enantiomers that are energetically equivalent and, in the end, forms two energy-inequivalent intermediates, referred to as diastereomers. According to Hammond's postulate, the more stable diastereomer is formed faster. E2 reactions Bimolecular elimination (E2) reactions are one-step, concerted reactions in which both the base and the substrate participate in the rate-limiting step. In an E2 mechanism, a base takes a proton near the leaving group, forcing the electrons down to make a double bond and forcing off the leaving group, all in one concerted step. The rate law depends on the first-order concentrations of two reactants, making it a second-order (bimolecular) elimination reaction. Factors that affect the rate-determining step are stereochemistry, leaving groups, and base strength. A theory for the E2 reaction by Joseph Bunnett suggests that the lowest pass through the energy barrier between reactants and products is achieved by an adjustment between the degrees of Cβ-H and Cα-X rupture at the transition state. The adjustment involves extensive breaking of the bond that is more easily broken and only slight breaking of the bond that requires more energy. This conclusion by Bunnett appears to contradict the Hammond postulate, which predicts the opposite: in the transition state of a bond-breaking step, there is little breaking when the bond is easily broken and much breaking when it is difficult to break. Despite these differences, the two postulates are not in conflict, since they are concerned with different sorts of processes. Hammond's postulate concerns reaction steps in which one bond is made or broken, or in which the breaking of two or more bonds occurs simultaneously. Bunnett's E2 transition-state theory, by contrast, concerns a process in which bond formation and bond breaking are not simultaneous. Kinetics and the Bell–Evans–Polanyi principle Technically, Hammond's postulate only describes the geometric structure of the transition state of a chemical reaction. However, it indirectly gives information about the rate, kinetics, and activation energy of reactions. Hence, it gives a theoretical basis for understanding the Bell–Evans–Polanyi principle, which describes the experimental observation that the enthalpies and rates of similar reactions are usually correlated. The relationship between Hammond's postulate and the BEP principle can be understood by considering an SN1 reaction.
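To make the connection concrete, the BEP principle is often stated as a linear free-energy relationship between the activation energy and the reaction enthalpy; the following is a standard textbook form, added here as an illustration rather than taken from the article's cited sources:
\[ E_a = E_0 + \alpha\,\Delta H, \qquad 0 \le \alpha \le 1 \]
where E_0 and \alpha are approximately constant within a family of similar reactions. The parameter \alpha can be read as the position of the transition state along the reaction coordinate, which is exactly what Hammond's postulate addresses: for a strongly exothermic step (\Delta H < 0) the transition state is early and reactant-like, so \alpha is small and the activation energy is only weakly sensitive to changes in enthalpy; for an endothermic step the transition state is late and product-like, so \alpha approaches 1.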
Although two transition states occur during a SN1 reaction (dissociation of the leaving group and then attack by the nucleophile), the dissociation of the leaving group is almost always the rate-determining step. Hence, the activation energy and therefore rate of the reaction will depend only upon the dissociation step. First, consider the reaction at secondary and tertiary carbons. As the BEP principle notes, experimentally SN1 reactions at tertiary carbons are faster than at secondary carbons. Therefore, by definition, the transition state for tertiary reactions will be at a lower energy than for secondary reactions. However, the BEP principle cannot justify why the energy is lower. Using Hammond's postulate, the lower energy of the tertiary transition state means that its structure is relatively closer to its reactants R(tertiary)-X than to the carbocation product when compared to the secondary case. Thus, the tertiary transition state will be more geometrically similar to the R(tertiary)-X reactants than the secondary transition state is to its R(secondary)-X reactants. Hence, if the tertiary transition state is close in structure to the (low energy) reactants, then it will also be lower in energy because structure determines energy. Likewise, if the secondary transition state is more similar to the (high energy) carbocation product, then it will be higher in energy. Applying the postulate Hammond's postulate is useful for understanding the relationship between the rate of a reaction and the stability of the products. While the rate of a reaction depends just on the activation energy (often represented in organic chemistry as ΔG‡ “delta G double dagger”), the final ratios of products in chemical equilibrium depends only on the standard free-energy change ΔG (“delta G”). The ratio of the final products at equilibrium corresponds directly with the stability of those products. Hammond's postulate connects the rate of a reaction process with the structural features of those states that form part of it, by saying that the molecular reorganizations have to be small in those steps that involve two states that are very close in energy. This gave birth to the structural comparison between the starting materials, products, and the possible "stable intermediates" that led to the understanding that the most stable product is not always the one that is favored in a reaction process. Explaining seemingly contradictory results Hammond's postulate is especially important when looking at the rate-limiting step of a reaction. However, one must be cautious when examining a multistep reaction or one with the possibility of rearrangements during an intermediate stage. In some cases, the final products appear in skewed ratios in favor of a more unstable product (called the kinetic product) rather than the more stable product (the thermodynamic product). In this case one must examine the rate-limiting step and the intermediates. Often, the rate-limiting step is the initial formation of an unstable species such as a carbocation. Then, once the carbocation is formed, subsequent rearrangements can occur. In these kinds of reactions, especially when run at lower temperatures, the reactants simply react before the rearrangements necessary to form a more stable intermediate have time to occur. At higher temperatures when microscopic reversal is easier, the more stable thermodynamic product is favored because these intermediates have time to rearrange. 
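The two regimes can be summarized quantitatively; the following is a standard sketch of kinetic versus thermodynamic control, added here for illustration and not drawn from the article's sources. If a common starting material or intermediate can form two products P1 and P2, then under kinetic control the product ratio is governed by the difference in activation free energies of the two pathways, while under thermodynamic control it is governed by the difference in the products' standard free energies:
\[ \left(\frac{[P_1]}{[P_2]}\right)_{\text{kinetic}} \approx e^{-\Delta\Delta G^{\ddagger}/RT}, \qquad \left(\frac{[P_1]}{[P_2]}\right)_{\text{thermodynamic}} = e^{-\Delta\Delta G^{\circ}/RT} \]
where \Delta\Delta G^{\ddagger} and \Delta\Delta G^{\circ} denote the differences between the two pathways in activation free energy and in standard free-energy change, respectively, R is the gas constant, and T is the absolute temperature.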
Whether run at high or low temperatures, the mixture of the kinetic and thermodynamic products eventually reaches the same ratio, one in favor of the more stable thermodynamic product, when given time to equilibrate through microscopic reversal. See also Bema Hapothle Curtin–Hammett principle Microscopic reversibility Bell–Evans–Polanyi principle References Further reading Chemical kinetics Physical organic chemistry
Hammond's postulate
[ "Chemistry" ]
2,904
[ "Chemical kinetics", "Chemical reaction engineering", "Physical organic chemistry" ]
4,131,773
https://en.wikipedia.org/wiki/Poisons%20Act%201972
The Poisons Act 1972 (c. 66) is an act of the Parliament of the United Kingdom making provisions for the sale of non-medicinal poisons, and the involvement of local authorities and the Royal Pharmaceutical Society of Great Britain in their regulation. The act refers to the Pharmacy and Poisons Act 1933, and the Poisons List. Non-medical poisons are divided into two separate lists. List one substances may only be sold by a registered pharmacist, and list two substances may be sold by a registered pharmacist or a licensed retailer. Further provisions are made, to enable the Royal Pharmaceutical Society to enforce the compliance with the act by pharmacists, and impose fines for breaches. Local authorities are responsible for vetting applications for list two substances, for law enforcement and control of licensed premises. Section 7 The Poison Rules 1982 (SI 1982/218) were made under this section. References Clifford Walsh and Peter Allsop (eds). "Poisons Act 1972". Current Law Statutes Annotated 1972. Sweet & Maxwell. Stevens & Sons. London. W Green & Son. Edinburgh. 1972. Chapter 66. Google "The Poisons Act 1972". Halsbury's Statutes of England. Third Edition. Butterworths. London. 1973. Volume 42: Continuation Volume 1972: . Page 1315. Halsbury's Statutes. Fourth Edition. Volume 28. Title "Medicine and Pharmacy". Page 548. Halsbury's Laws of England. Fourth Edition Reissue. 2006. Volume 30(2). Paragraphs 285, 286, 288 and 294 and passim. Pages 315 to 319, 321, 324, 327 to 329 and 331. Gradwohl's Legal Medicine. Third Edition. John Wright & Sons Ltd. 1976. Pages 440, 447 and 448. Joe Jacob (ed). Speller's Law relating to Hospitals and Kindred Institutions. Sixth Edition. H. K. Lewis & Co. Ltd. 1978. Pages 142, 152, 155 and 156. Dale and Appelbe's Pharmacy and Medicines Law. Tenth Edition. Pharmaceutical Press. 2013. Pages xxii, xxxviii, 140, 245 to 249, 256, 268, 304, 305, 330, 331, 465 and 479. J R Dale and G E Appelbe. "The Poisons Act, List and Rules". Pharmacy Law and Ethics. Second Edition. 1979. Chapter 17 at page 168 et seq. Pharmacy Law and Practice. Third Edition. 2001. Chapter 17. p 160. Fourth Edition. 2006. Chapter 18. p 188. Fifth Edition. 2013. Chapter 18. p 275. The Laws of Scotland: Stair Memorial Encyclopaedia. Title "Medicines, Poisons and Drugs". United Kingdom Acts of Parliament 1972 Poisons Toxicology in the United Kingdom
Poisons Act 1972
[ "Environmental_science" ]
587
[ "Poisons", "Toxicology in the United Kingdom", "Toxicology" ]
4,131,939
https://en.wikipedia.org/wiki/Saltation%20%28biology%29
In biology, saltation () is a sudden and large mutational change from one generation to the next, potentially causing single-step speciation. This was historically offered as an alternative to Darwinism. Some forms of mutationism were effectively saltationist, implying large discontinuous jumps. Speciation, such as by polyploidy in plants, can sometimes be achieved in a single and in evolutionary terms sudden step. Evidence exists for various forms of saltation in a variety of organisms. History Prior to Charles Darwin most evolutionary scientists had been saltationists. Jean-Baptiste Lamarck was a gradualist but similar to other scientists of the period had written that saltational evolution was possible. Étienne Geoffroy Saint-Hilaire endorsed a theory of saltational evolution that "monstrosities could become the founding fathers (or mothers) of new species by instantaneous transition from one form to the next." Geoffroy wrote that environmental pressures could produce sudden transformations to establish new species instantaneously. In 1864 Albert von Kölliker revived Geoffroy's theory that evolution proceeds by large steps, under the name of heterogenesis. With the publication of On the Origin of Species in 1859 Charles Darwin wrote that most evolutionary changes proceeded gradually. From 1860 to 1880 saltation had a minority interest but by 1890 had become a major interest to scientists. In their paper on evolutionary theories in the 20th century Levit et al wrote: The advocates of saltationism deny the Darwinian idea of slowly and gradually growing divergence of character as the only source of evolutionary progress. They would not necessarily completely deny gradual variation, but claim that cardinally new ‘body plans’ come into being as a result of saltations (sudden, discontinuous and crucial changes, for example, the series of macromutations). The latter are responsible for the sudden appearance of new higher taxa including classes and orders, while small variation is supposed to be responsible for the fine adaptations below the species level. In the early 20th century a mechanism of saltation was proposed as large mutations. It was seen as a much faster alternative to the Darwinian concept of a gradual process of small random variations being acted on by natural selection. It was popular with early geneticists such as Hugo de Vries, who along with Carl Correns helped rediscover Gregor Mendel's laws of inheritance in 1900, William Bateson, a British zoologist who switched to genetics, and early in his career Thomas Hunt Morgan. Some of these geneticists developed it into the mutation theory of evolution. There was also a debate over accounts of the evolution of mimicry and if they could be explained by gradualism or saltation. The geneticist Reginald Punnett supported a saltational theory in his book Mimicry in Butterflies (1915). The mutation theory of evolution held that species went through periods of rapid mutation, possibly as a result of environmental stress, that could produce multiple mutations, and in some cases completely new species, in a single generation. This mutationist view of evolution was later replaced by the reconciliation of Mendelian genetics with natural selection into a gradualistic framework for the neo-Darwinian synthesis. It was the emergence of population thinking in evolution which forced many scientists to adopt gradualism in the early 20th century. 
According to Ernst Mayr, it was not until the development of population genetics in the neo-Darwinian synthesis of the 1940s, which demonstrated the explanatory power of natural selection, that saltational views of evolution were largely abandoned. Saltation was originally denied by the "modern synthesis" school of neo-Darwinism, which favoured gradual evolution, but has since been accepted due to recent evidence in evolutionary biology (see the current status section). In recent years there have been some prominent proponents of saltation, including Carl Woese. Woese and colleagues suggested that the absence of an RNA signature continuum between the domains of bacteria, archaea, and eukarya constitutes a primary indication that the three primary organismal lineages materialized via one or more major evolutionary saltations from some universal ancestral state, involving a dramatic change in cellular organization that was significant early in the evolution of life but in complex organisms gave way to the generally accepted Darwinian mechanisms. The geneticist Barbara McClintock introduced the idea of "jumping genes", chromosome transpositions that can produce rapid changes in the genome. Saltational speciation, also known as abrupt speciation, is the discontinuity in a lineage that occurs through genetic mutations, chromosomal aberrations or other evolutionary mechanisms that cause reproductively isolated individuals to establish a new species population. Polyploidy, karyotypic fission, symbiogenesis and lateral gene transfer are possible mechanisms for saltational speciation. Macromutation theory The botanist John Christopher Willis proposed an early saltationist theory of evolution. He held that species were formed by large mutations, not by gradual evolution through natural selection. The German geneticist Richard Goldschmidt was the first scientist to use the term "hopeful monster". Goldschmidt thought that small gradual changes could not bridge the hypothetical divide between microevolution and macroevolution. In his book The Material Basis of Evolution (1940) he wrote "the change from species to species is not a change involving more and more additional atomistic changes, but a complete change of the primary pattern or reaction system into a new one, which afterwards may again produce intraspecific variation by micromutation." Goldschmidt believed the large changes in evolution were caused by macromutations (large mutations). His ideas about macromutations became known as the hopeful monster hypothesis, which is considered a type of saltational evolution. Goldschmidt's thesis, however, was universally rejected and widely ridiculed within the biological community, which favored the neo-Darwinian explanations of R. A. Fisher, J. B. S. Haldane and Sewall Wright. However, there has been recent interest in the ideas of Goldschmidt in the field of evolutionary developmental biology, as some scientists are convinced he was not entirely wrong. Otto Schindewolf, a German paleontologist, also supported macromutations as part of his evolutionary theory. He was known for presenting an alternative interpretation of the fossil record based on his ideas of orthogenesis, saltational evolution and extraterrestrial impacts, as opposed to gradualism, but abandoned the view of macromutations in later publications. Søren Løvtrup, a biochemist and embryologist from Denmark, advocated a macromutation hypothesis similar to Goldschmidt's in 1974.
Lovtrup believed that macromutations interfered with various epigenetic processes, that is, those which affect the causal processes in biological development. This is in contrast to the gradualistic theory of micromutations of Neo-Darwinism, which claims that evolutionary innovations are generally the result of the accumulation of numerous very slight modifications. Lovtrup also rejected the punctuated equilibria of Stephen Gould and Niles Eldredge, claiming it was a form of gradualism and not a macromutation theory. Lovtrup defended many of Darwin's critics, including Schindewolf, Mivart, Goldschmidt, and Himmelfarb. Mae Wan Ho described Lovtrup's theory as similar to the hopeful monster theory of Richard Goldschmidt. Goldschmidt presented two mechanisms for how hopeful monsters might work. One mechanism, involving "systemic mutations", rejected the classical gene concept and is no longer considered by modern science; however, his second mechanism involved "developmental macromutations" in "rate genes" or "controlling genes" that change early development and thus cause large effects in the adult phenotype. These kinds of mutations are similar to the ones considered in contemporary evolutionary developmental biology. On the subject of Goldschmidt, Donald Prothero, in his book Evolution: What the Fossils Say and Why It Matters (2007), wrote: The past twenty years have vindicated Goldschmidt to some degree. With the discovery of the importance of regulatory genes, we realize that he was ahead of his time in focusing on the importance of a few genes controlling big changes in the organisms, not small-scale changes in the entire genome as neo-Darwinians thought. In addition, the hopeful monster problem is not so insurmountable after all. Embryology has shown that if you affect an entire population of developing embryos with a stress (such as a heat shock) it can cause many embryos to go through the same new pathway of embryonic development, and then they all become hopeful monsters when they reach reproductive age. In 2008 the evolutionary biologist Olivia Judson, in her article The Monster Is Back, and It's Hopeful, listed some examples which may support the hopeful monster hypothesis. An article published in the journal Nature in 2010, titled Evolution: Revenge of the Hopeful Monster, reported that studies of stickleback populations in a British Columbia lake and bacteria populations in a Michigan lab have shown that large individual genetic changes can have vast effects on an organism "without dooming it to the evolutionary rubbish heap". According to the article, "Single-gene changes that confer a large adaptive value do happen: they are not rare, they are not doomed and, when competing with small-effect mutations, they tend to win. But small-effect mutations still matter — a lot. They provide essential fine-tuning and sometimes pave the way for explosive evolution to follow." Page et al. (2010) wrote that the Mexican axolotl (Ambystoma mexicanum) could be classified as a hopeful monster, as it exhibits an adaptive and derived mode of development that has evolved rapidly and independently among tiger salamanders. According to the paper, there has been interest in aspects of the hopeful monster hypothesis in recent years: Goldschmidt proposed that mutations occasionally yield individuals within populations that deviate radically from the norm and referred to such individuals as "hopeful monsters".
If the novel phenotypes of hopeful monsters arise under the right environmental circumstances, they may become fixed, and the population will found a new species. While this idea was discounted during the Modern synthesis, aspects of the hopeful monster hypothesis have been substantiated in recent years. For example, it is clear that dramatic changes in phenotype can occur from few mutations of key developmental genes and phenotypic differences among species often map to relatively few genetic factors. These findings are motivating renewed interest in the study of hopeful monsters and the perspectives they can provide about the evolution of development. In contrast to mutants that are created in the lab, hopeful monsters have been shaped by natural selection and are therefore more likely to reveal mechanisms of adaptive evolution. Günter Theissen, a German professor of genetics, has classified homeotic mutants as "hopeful monsters" and has documented many examples of animal and plant lineages that may have originated in that way. American biologist Michael Freeling has proposed "balanced gene drive" as a saltational mechanism in the mutationist tradition, which could explain trends involving morphological complexity in plant and animal eukaryotic lineages. Current status Known mechanisms Examples of saltational evolution include cases of stabilized hybrids that can reproduce without crossing (such as allotetraploids) and cases of symbiogenesis. Both gene duplication and lateral gene transfer have the capacity to bring about relatively large changes that are saltational. Polyploidy (most common in plants but not unknown in animals) is saltational: a significant change (in gene numbers) can result in speciation in a single generation. Claimed instances Evidence of phenotypic saltation has been found in the centipede and some scientists have suggested there is evidence for independent instances of saltational evolution in sphinx moths. Saltational changes have occurred in the buccal cavity of the roundworm Caenorhabditis elegans. Some processes of epigenetic inheritance can also produce changes that are saltational. There has been a controversy over whether mimicry in butterflies and other insects can be explained by gradual or saltational evolution. According to Norrström (2006) there is evidence for saltation in some cases of mimicry. The endosymbiotic theory is considered to be a type of saltational evolution. Symonds and Elgar, 2004 have suggested that pheromone evolution in bark beetles is characterized by large saltational shifts. The mode of evolution of sex pheromones in Bactrocera has occurred by rapid saltational changes associated with speciation followed by gradual divergence thereafter. Saltational speciation has been recognized in the genus Clarkia (Lewis, 1966). It has been suggested (Carr, 1980, 2000) that the Calycadenia pauciflora could have originated directly from an ancestral race through a single saltational event involving multiple chromosome breaks. Specific cases of homeosis in flowers can be caused by saltational evolution. In a study of divergent orchid flowers (Bateman and DiMichele, 2002) wrote how simple homeotic morphs in a population can lead to newly established forms that become fixed and ultimately lead to new species. They described the transformation as a saltational evolutionary process, where a mutation of key developmental genes leads to a profound phenotypic change, producing a new evolutionary lineage within a species. 
Explanations Reviewing the history of macroevolutionary theories, the American evolutionary biologist Douglas J. Futuyma notes that since 1970, two very different alternatives to Darwinian gradualism have been proposed, both by Stephen Jay Gould: mutationism, and punctuated equilibria. Gould's macromutation theory gave a nod to his predecessor with an envisaged "Goldschmidt break" between evolution within a species and speciation. His advocacy of Goldschmidt was attacked with "highly unflattering comments" by B. Charlesworth and Templeton. Futuyma concludes, following other biologists reviewing the field such as K.Sterelny and A. Minelli, that essentially all the claims of evolution driven by large mutations could be explained within the Darwinian evolutionary synthesis. See also Catastrophism Phyletic gradualism Rapid modes of evolution Leo S. Berg History of evolutionary thought Eclipse of Darwinism Footnotes Sources Baker, Thomas C. (2002). Mechanism for saltational shifts in pheromone communication systems. Proceedings of the National Academy of Sciences. USA 99. 13368-13370. Bateman, Richard M.; DiMichele, William A. (2002). Generating and filtering major phenotypic novelties: neoGoldschmidtian saltation revisited. In: Cronk, Q. C. B.; Bateman R. M.; Hawkins, J. A. eds. Developmental genetics and plant evolution. London: Taylor & Francis. pp. 109–159. Hall, Brian K.; Pearson, Roy D. Müller, Gerd B. (2004). Environment, Development, and Evolution: Toward a Synthesis. MIT Press. Kutschera, Ulrich; Niklas, Karl J. (2008). Macroevolution via secondary endosymbiosis: a Neo-Goldschmidtian view of unicellular hopeful monsters and Darwin's primordial intermediate form. Theory in Biosciences 127: 277-289. Merrell, David J. (1994). The Adaptive Seascape: The Mechanism of Evolution. University of Minnesota Press. Schwartz, Jeffrey H. (2006). Sudden origins: a general mechanism of evolution based on stress protein concentration and rapid environmental change. The Anatomical Record. 289: 38–46. Gamberale-Stille, G.; Balogh, A. C.; Tullberg, B. S.; Leimar, O. (2012). Feature saltation and the evolution of mimicry. Evolution 66: 807-17. Theissen, Guenter. (2009). Saltational evolution: hopeful monsters are here to stay. Theory in Bioscience. 128, 43-51. External links New species evolve in bursts by Kerri Smith Non-Darwinian evolution Evolutionary biology Biology theories Rate of evolution Speciation
Saltation (biology)
[ "Biology" ]
3,343
[ "Evolutionary biology", "Evolutionary processes", "Speciation", "Non-Darwinian evolution", "Biology theories" ]
4,131,940
https://en.wikipedia.org/wiki/Nuclear%20power%20in%20Japan
Nuclear power generated 5.55% of Japan's electricity in 2023. The country's nuclear power industry was heavily influenced by the Fukushima accident, caused by the 2011 Tōhoku earthquake and tsunami. Before 2011, Japan was generating up to 30% of its electrical power from nuclear reactors. After the Fukushima accident, all reactors were shut down temporarily. Of the 54 nuclear reactors present in Japan before 2011, 33 remained operable, but only 13 reactors in 6 power plants were actually operating. A total of 24 reactors are scheduled for decommissioning or are in the process of being decommissioned. Others are in the process of being reactivated, or are undergoing modifications aimed at improving resilience against natural disasters; Japan's 2030 energy goals posit that at least 33 will be reactivated by a later date. The Fukushima accident hardened attitudes toward nuclear power. In June 2011, immediately after the accident, more than 80% of Japanese said they were anti-nuclear and distrusted government information on radiation, but ten years later, in March 2021, only 11 percent of Japanese said they wanted nuclear energy generation to be discontinued immediately. Another 49 percent called for a gradual exit from nuclear energy. In February 2023, a survey by Asahi Shimbun showed that 51% of participants in Japan favored the restart of nuclear plant operations, with 42% opposed. History Early years Overcoming popular resistance In 1954, the Operations Coordinating Board of the United States National Security Council proposed that the U.S. government undertake a "vigorous offensive" urging nuclear energy for Japan in order to overcome the widespread reluctance of the Japanese population to build nuclear reactors in the country. Thirty-two million Japanese people, a third of the Japanese population, signed a petition calling for a ban on hydrogen bombs. The journalist and author Foster Hailey wrote an op-ed piece published in The Washington Post in which he called for adopting a proposal to build nuclear reactors in Japan, stating his opinion that: "Many Americans are now aware...that the dropping of the atomic bombs on Japan was not necessary. How better to make a contribution to amends than by offering Japan...atomic energy." For several years starting in 1954, the United States Central Intelligence Agency and other U.S. government agencies ran a propaganda war targeting the Japanese population to vanquish the Japanese people's opposition to nuclear power. In 1954, Japan budgeted 230 million yen for nuclear energy, marking the beginning of Japan's nuclear program. The Atomic Energy Basic Law limited activities to only peaceful purposes. The first nuclear power plant in Japan, the Tōkai Nuclear Power Plant, was built by the UK's GEC and was commissioned in 1966. Light water reactors In the 1970s, the first light water reactors were built in cooperation with American companies. These plants were bought from U.S. vendors such as General Electric and Westinghouse, with contractual work done by Japanese companies, who would later obtain licenses themselves to build similar plant designs. Developments in nuclear power since that time have seen contributions from Japanese companies and research institutes on the same level as the other big users of nuclear power. From the early 1970s to the present, the Japanese government promoted the siting of nuclear power plants through a variety of policy instruments involving soft social control and financial incentives.
By offering large subsidies and public works projects to rural communities, and by using educational trips, junkets for local government officials, and op-eds written as news by pro-nuclear supporters, the central government won the support of depopulating, hard-on-their-luck coastal towns and villages. Later years Japan's nuclear industry was not hit as hard by the effects of the Three Mile Island accident (TMI) or the Chernobyl disaster as some other countries. Construction of new plants continued to be strong through the 1980s, 1990s, and up to the present day. While many new plants had been proposed, all were subsequently canceled or never brought past initial planning. Cancelled plant orders include: the Hōhoku Nuclear Power Plant at Hōhoku, Yamaguchi (1994); the Kushima Nuclear Power Plant at Kushima, Miyazaki (1997); the Ashihama Nuclear Power Plant at Ashihama, Mie (2000; the first project at the site in the 1970s was completed at Hamaoka as Units 1 and 2); the Maki Nuclear Power Plant at Maki, Niigata (Kambara), cancelled in 2003; and the Suzu Nuclear Power Plant at Suzu, Ishikawa (2003). However, starting in the mid-1990s there were several nuclear-related accidents and cover-ups in Japan that eroded public perception of the industry, resulting in protests and resistance to new plants. These accidents included the Tokaimura nuclear accident, the Mihama steam explosion, and cover-ups after an accident at the Monju reactor, among others, and more recently the aftermath of the Chūetsu offshore earthquake. While exact details may be in dispute, it is clear that the safety culture in Japan's nuclear industry has come under greater scrutiny. 2000s On 18 April 2007, Japan and the United States signed the United States-Japan Joint Nuclear Energy Action Plan, aimed at putting in place a framework for the joint research and development of nuclear energy technology. Each country will conduct research into fast reactor technology, fuel cycle technology, advanced computer simulation and modeling, small and medium reactors, safeguards and physical protection, and nuclear waste management. In March 2008, Tokyo Electric Power Company announced that the start of operation of four new nuclear power reactors would be postponed by one year due to the incorporation of new earthquake resistance assessments. Units 7 and 8 of the Fukushima Daiichi plant would now enter commercial operation in October 2014 and October 2015, respectively. Unit 1 of the Higashidori plant is now scheduled to begin operating in December 2015, while unit 2 will start up in 2018 at the earliest. As of September 2008, Japanese ministries and agencies were seeking a 6% increase in the 2009 budget. The total requested comes to 491.4 billion Japanese yen (US$4.6 billion), and the focuses of research are the development of the fast breeder reactor cycle, next-generation light water reactors, the ITER project, and seismic safety. Fukushima disaster and aftermath A 2011 independent investigation in Japan has "revealed a long history of nuclear power companies conspiring with governments to manipulate public opinion in favour of nuclear energy". One nuclear company "even stacked public meetings with its own employees who posed as ordinary citizens to speak in support of nuclear power plants". An energy white paper, approved by the Japanese Cabinet in October 2011, says "public confidence in the safety of nuclear power was greatly damaged" by the Fukushima disaster, and calls for a reduction in the nation's reliance on nuclear power.
It also omits a section on nuclear power expansion that was in the previous year's policy review. Nuclear Safety Commission Chairman Haruki Madarame told a parliamentary inquiry in February 2012 that "Japan's atomic safety rules are inferior to global standards and left the country unprepared for the Fukushima nuclear disaster last March". There were flaws in, and lax enforcement of, the safety rules governing Japanese nuclear power companies, and this included insufficient protection against tsunamis. On 6 May 2011, Prime Minister Naoto Kan ordered the Hamaoka Nuclear Power Plant to be shut down, as an earthquake of magnitude 8.0 or higher was considered likely to hit the area within the following thirty years. As of 27 March 2012, Japan had only one of its 54 nuclear reactors operating, the Tomari-3, after the Kashiwazaki-Kariwa 6 was shut down. The Tomari-3 was shut down for maintenance on 5 May, leaving Japan with no nuclear-derived electricity for the first time since 1970, when the country's then-only two reactors were taken offline for five days for maintenance. On 15 June 2012, approval was given to restart Ōi Units 3 and 4, which could take six weeks to bring to full operation. On 1 July 2012, unit 3 of the Ōi Nuclear Power Plant was restarted. This reactor can provide 1,180 MW of electricity. On 21 July 2012 unit 4 was restarted, also 1,180 MW. The reactor was shut down again on 14 September 2013, again leaving Japan with no operating power reactors. Government figures in the 2014 Annual Report on Energy show that Japan depended on imported fossil fuels for 88% of its electricity in fiscal year 2013, compared with 62% in fiscal 2010. Without significant nuclear power, the country was self-sufficient for just 6% of its energy demand in 2012, compared with 20% in 2010. The additional fuel costs to compensate for its nuclear reactors being idled were ¥3.6 trillion. In parallel, domestic energy users have seen a 19.4% increase in their energy bills between 2010 and 2013, while industrial users have seen their costs rise 28.4% over the same period. In 2018 the Japanese government revised its energy plan to update the 2030 target for nuclear energy to 20%-22% of power generation by restarting reactors, compared to LNG 27%, coal 25%, renewables 23% and oil 3%. This would reduce Japan's carbon dioxide emissions by 26% compared to 2013, and increase self-sufficiency to about 24% by 2030, compared to 8% in 2016. Since the Fukushima Daiichi nuclear disaster, Japan has restarted twelve reactors and fifteen more have applied to restart, including two that are under construction. Amid the Russian invasion of Ukraine, Japan's Prime Minister announced the restart of nine units by winter 2022 and seven more by summer 2023. Investigations on the Fukushima disaster The National Diet of Japan Fukushima Nuclear Accident Independent Investigation Commission (NAIIC) was the first independent investigation commission by the National Diet in the 66-year history of Japan's constitutional government. The NAIIC was established on 8 December 2011 with the mission to investigate the direct and indirect causes of the Fukushima nuclear accident. The NAIIC submitted its inquiry report to both houses on 5 July 2012. The 10-member commission compiled its report based on more than 1,167 interviews and 900 hours of hearings.
It was a six-month independent investigation, the first of its kind with wide-ranging subpoena powers in Japan's constitutional history, which held public hearings with former Prime Minister Naoto Kan and Tokyo Electric Power Co's former president Masataka Shimizu, who gave conflicting accounts of the disaster response. The commission chairman, Kiyoshi Kurokawa, declared with respect to the Fukushima nuclear incident: "It was a profoundly man-made disaster that could and should have been foreseen and prevented." He added that the "fundamental causes" of the disaster were rooted in "the ingrained conventions of Japanese culture." The report outlines errors and willful negligence at the plant before the Tōhoku earthquake and tsunami of 11 March 2011 and a flawed response in the hours, days, and weeks that followed. It also offers recommendations and encourages Japan's parliament to "thoroughly debate and deliberate" the suggestions. Post-Fukushima nuclear policy Japan's new energy plan, approved by the Liberal Democratic Party cabinet in April 2014, calls nuclear power "the country's most important power source". Reversing a decision by the previous Democratic Party, the government will re-open nuclear plants, aiming for "a realistic and balanced energy structure". In May 2014 the Fukui District Court blocked the restart of the Oi reactors. In April 2015 courts blocked the restarting of two reactors at the Takahama Nuclear Power Plant but permitted the restart of two reactors at the Sendai Nuclear Power Plant. The government hopes that nuclear power will produce 20% of Japan's electricity by 2030. As of June 2015, approval was being sought from the new Nuclear Regulation Authority for 24 of the 54 pre-Fukushima units to restart. The units also have to be approved by the local prefecture authorities before restarting. In July 2015 fuel loading was completed at the Sendai-1 nuclear plant; it restarted on 11 August 2015 and was followed by unit 2 on 1 November 2015. Japan's Nuclear Regulation Authority approved the restart of Ikata-3, which took place on 19 April 2016; this reactor was the fifth to receive approval to restart. The Takahama Nuclear Power Plant unit 4 restarted in May 2017 and unit 3 in June 2017. By 2023, Units 1 and 2 of Takahama had also restarted. In November 2016 Japan signed a nuclear cooperation agreement with India. Japanese nuclear plant builders saw this as a potential lifeline given that domestic orders had ended following the Fukushima disaster, and India is proposing to build about 20 new reactors over the next decade. However, there is Japanese domestic opposition to the agreement, as India has not agreed to the Treaty on the Non-Proliferation of Nuclear Weapons. In 2014, following the failure of the prototype Monju sodium-cooled fast reactor, Japan agreed to cooperate in developing the French ASTRID demonstration sodium-cooled fast breeder reactor. As of 2016, France was seeking the full involvement of Japan in the ASTRID development. In 2015, the Agency for Natural Resources and Energy changed the accounting provisions of the Electricity Business Act, so companies can account for decommissioning costs in ten yearly installments rather than a one-time charge. This will encourage the decommissioning of older and smaller nuclear units, most of which have not restarted since 2011.
In 2022, during the global energy crisis which greatly increased the cost of imported fossil fuels, Japan's prime minister announced that the building of safer next-generation nuclear reactors and the restarting of idle existing plants would be considered. In 2022, ten reactors were operational, producing about 5% of Japan's electricity. In December 2022, Japan's Nuclear Regulation Authority (NRA) approved a draft rule allowing nuclear reactors to operate beyond 60 years by excluding inspection downtimes. This was part of a policy aimed at enhancing nuclear reactor use, including restarting many reactors, extending the lives of older units, and developing new reactor technologies. In February 2023, the cabinet approved this policy and the construction of new reactors. By May 2023, a law was enacted to officially omit shutdown periods from the 60-year limit, subject to the economy minister's approval. The law also required the NRA to perform inspections every 10 years for reactors over 30 years of operation. Seismicity Japan has had a long history of earthquakes and seismic activity, and destructive earthquakes, often resulting in tsunamis, occur several times a century. Due to this, concern has been expressed about the particular risks of constructing and operating nuclear power plants in Japan. Amory Lovins has said: "An earthquake-and-tsunami zone crowded with 127 million people is an unwise place for 54 reactors". To date, the most serious seismic-related accident has been the Fukushima Daiichi nuclear disaster, following the 2011 Tōhoku earthquake and tsunami. Professor Katsuhiko Ishibashi, one of the seismologists who have taken an active interest in the topic, coined the term genpatsu-shinsai (原発震災), from the Japanese words for "nuclear power" and "quake disaster", to express the potential worst-case catastrophe that could ensue. Dr Kiyoo Mogi, former chair of the Japanese Coordinating Committee for Earthquake Prediction, has expressed similar concerns, stating in 2004 that the issue 'is a critical problem which can bring a catastrophe to Japan through a man-made disaster'. Warnings from Kunihiko Shimazaki, a professor of seismology at the University of Tokyo, were also ignored. In 2004, as a member of an influential cabinet office committee on offshore earthquakes, Mr. Shimazaki "warned that Fukushima's coast was vulnerable to tsunamis more than twice as tall as the forecasts of as much as five meters put forth by regulators and Tokyo Electric". Minutes of the meeting on 19 February 2004 show that the government bureaucrats running the committee moved quickly to exclude his views from the committee's final report. He said the committee did not want to force Tokyo Electric to make expensive upgrades at the plant. Hidekatsu Yoshii, a member of the House of Representatives for the Japanese Communist Party and an anti-nuclear campaigner, warned in March and October 2006 about the possibility of severe damage that might be caused by a tsunami or earthquake. During a parliamentary committee in May 2010 he made similar claims, warning that the cooling systems of a Japanese nuclear plant could be destroyed by a landslide or earthquake. In response, Yoshinobu Terasaka, head of the Nuclear and Industrial Safety Agency, replied that the plants were so well designed that "such a situation is practically impossible".
Following damage at the Kashiwazaki-Kariwa Nuclear Power Plant due to the 2007 Chūetsu offshore earthquake, Kiyoo Mogi called for the immediate closure of the Hamaoka Nuclear Power Plant, which was knowingly built close to the centre of the expected Tōkai earthquake. Katsuhiko Ishibashi previously claimed, in 2004, that Hamaoka was "considered to be the most dangerous nuclear power plant in Japan". The International Atomic Energy Agency (IAEA) has also expressed concern. At a meeting of the G8's Nuclear Safety and Security Group, held in Tokyo in 2008, an IAEA expert warned that a strong earthquake with a magnitude above could pose a 'serious problem' for Japan's nuclear power stations. Before Fukushima, "14 lawsuits charging that risks had been ignored or hidden were filed in Japan, revealing a disturbing pattern in which operators underestimated or hid seismic dangers to avoid costly upgrades and keep operating. But all the lawsuits were unsuccessful". Underscoring the risks facing Japan, a 2012 research institute investigation has "determined there is a 70% chance of a magnitude-7 earthquake striking the Tokyo metropolitan area within the next four years, and 98% over 30 years". The March 2011 earthquake was a magnitude 9. Design standards Between 2005 and 2007, three Japanese nuclear power plants were shaken by earthquakes that far exceeded the maximum peak ground acceleration used in their design. The tsunami that followed the 2011 Tōhoku earthquake, inundating the Fukushima I Nuclear Power Plant, was more than twice the design height, while the ground acceleration also slightly exceeded the design parameters. In 2006 a Japanese government subcommittee was charged with revising the national guidelines on the earthquake-resistance of nuclear power plants, which had last been partially revised in 2001, resulting in the publication of a new seismic guide – the 2006 Regulatory Guide for Reviewing Seismic Design of Nuclear Power Reactor Facilities. The subcommittee membership included Professor Ishibashi, however his proposal that the standards for surveying active faults should be reviewed was rejected and he resigned at the final meeting, claiming that the review process was 'unscientific' and the outcome rigged to suit the interests of the Japan Electric Association, which had 11 of its committee members on the 19-member government subcommittee. Ishibashi has subsequently claimed that, although the new guide brought in the most far-reaching changes since 1978, it was 'seriously flawed' because it underestimated the design basis of earthquake ground motion. He has also claimed that the enforcement system is 'a shambles' and questioned the independence of the Nuclear Safety Commission after a senior Nuclear and Industrial Safety Agency official appeared to rule out a new review of the NSC's seismic design guide in 2007. Following the publication of the new 2006 Seismic Guide, the Nuclear and Industrial Safety Agency, at the request of the Nuclear Safety Commission, required the design of all existing nuclear power plants to be re-evaluated. Geological surveys The standard of geological survey work in Japan is another area causing concern. In 2008 Taku Komatsubara, a geologist at the National Institute of Advanced Industrial Science and Technology alleged that the presence of active faults was deliberately ignored when surveys of potential new power plant sites were undertaken, a view supported by a former topographer. 
Takashi Nakata, a seismologist from the Hiroshima Institute of Technology has made similar allegations and suggests that conflicts of interest between the Japanese nuclear industry and the regulators contribute to the problem. A 2011 Natural Resources Defense Council report that evaluated the seismic hazard to reactors worldwide, as determined by the Global Seismic Hazard Assessment Program data, placed 35 of Japan's reactors in the group of 48 reactors worldwide in very high and high seismic hazard areas. Nuclear power plants As of January 2022 there are 33 operable reactors in Japan, of which 12 reactors are currently operating. Additionally, 5 reactors have been approved for restart and further 8 have restart applications under review. On 6 May 2011, then Prime Minister Naoto Kan requested the Hamaoka Nuclear Power Plant be shut down as an earthquake of magnitude 8.0 or higher is estimated 87% likely to hit the area within the next 30 years. Kan wanted to avoid a possible repeat of the Fukushima nuclear disaster. On 9 May 2011, Chubu Electric decided to comply with the government's request. In July 2011, a mayor in Shizuoka Prefecture and a group of residents filed a lawsuit seeking the decommissioning of the reactors at the Hamaoka nuclear power plant permanently. In April 2014, Reuters reported that Prime Minister Shinzo Abe favours restarting nuclear plants, but that its analysis suggests that only about one-third to two-thirds of reactors will be in a technical and economic position to restart. In April 2017 the Nuclear Regulation Authority approved plans to decommission the Genkai 1, Mihama 1 and 2, Shimane 1, and Tsuruga 1 reactors. Nuclear accidents In terms of consequences of radioactivity releases and core damage, the Fukushima I nuclear accidents in 2011 were the worst experienced by the Japanese nuclear industry, in addition to ranking among the worst civilian nuclear accidents, though no fatalities were caused and no serious exposure of radiation to workers occurred. The Tokaimura reprocessing plant fire in 1999 had 2 worker deaths, one more was exposed to radiation levels above legal limits, and over 660 others received detectable radiation doses but within permissible levels, well below the threshold to affect human health. The Mihama Nuclear Power Plant experienced a steam explosion in one of the turbine buildings in 2004 where five workers were killed and six injured. 2011 accidents There have been many nuclear shutdowns, failures, and three partial meltdowns which were triggered by the 2011 Tōhoku earthquake and tsunami. Fukushima Daiichi nuclear disaster According to the Federation of Electric Power Companies of Japan, "by April 27 approximately 55 percent of the fuel in reactor unit 1 had melted, along with 35 percent of the fuel in unit 2, and 30 percent of the fuel in unit 3; and overheated spent fuels in the storage pools of units 3 and 4 probably were also damaged". The accident exceeds the 1979 Three Mile Island accident in seriousness, and is comparable to the 1986 Chernobyl disaster. The Economist reports that the Fukushima disaster is "a bit like three Three Mile Islands in a row, with added damage in the spent-fuel stores", and that there will be ongoing impacts: Years of clean-up will drag into decades. A permanent exclusion zone could end up stretching beyond the plant’s perimeter. Seriously exposed workers may be at increased risk of cancers for the rest of their lives... 
On 24 March 2011, Japanese officials announced that "radioactive iodine-131 exceeding safety limits for infants had been detected at 18 water-purification plants in Tokyo and five other prefectures". Officials also said that the fallout from the Dai-ichi plant is "hindering search efforts for victims from the March 11 earthquake and tsunami". Problems in stabilizing the Fukushima Daiichi nuclear power plant have hardened attitudes to nuclear power. As of June 2011, "more than 80 percent of Japanese now say they are anti-nuclear and distrust government information on radiation". The ongoing Fukushima crisis may spell the end of nuclear power in Japan, as "citizen opposition grows and local authorities refuse permission to restart reactors that have undergone safety checks". Local authorities are skeptical that sufficient safety measures have been taken and are reticent to give their permission – now required by law – to bring suspended nuclear reactors back online. Two government advisers have said that "Japan's safety review of nuclear reactors after the Fukushima disaster is based on faulty criteria and many people involved have conflicts of interest". Hiromitsu Ino, Professor Emeritus at the University of Tokyo, says "The whole process being undertaken is exactly the same as that used previous to the Fukushima Dai-Ichi accident, even though the accident showed all these guidelines and categories to be insufficient". In 2012, former prime minister Naoto Kan was interviewed about the Fukushima nuclear disaster, and said that at one point Japan faced a situation where there was a chance that people might not be able to live in the capital zone including Tokyo and would have to evacuate. He says he is haunted by the specter of an even bigger nuclear crisis forcing tens of millions of people to flee Tokyo and threatening the nation's existence. "If things had reached that level, not only would the public have had to face hardships but Japan's very existence would have been in peril". That convinced Kan to "declare the need for Japan to end its reliance on atomic power and promote renewable sources of energy such as solar that have long taken a back seat in the resource-poor country's energy mix". Other accidents Other accidents of note include: 1981: Almost 300 workers were exposed to excessive levels of radiation after a fuel rod ruptured during repairs at the Tsuruga Nuclear Power Plant. December 1995: The fast breeder Monju Nuclear Power Plant sodium leak. State-run operator Donen was found to have concealed videotape footage that showed extensive damage to the reactor. March 1997: The Tokaimura nuclear reprocessing plant fire and explosion, northeast of Tokyo. 37 workers were exposed to low doses of radiation. Donen later acknowledged it had initially suppressed information about the fire. 1999: A fuel loading system malfunctioned at a nuclear plant in the Fukui Prefecture and set off an uncontrolled nuclear reaction and explosion. September 1999: The criticality accident at the Tokai fuel fabrication facility. Hundreds of people were exposed to radiation; three workers received doses above legal limits, of whom two later died. 2000: Three TEPCO executives were forced to quit after the company in 1989 ordered an employee to edit out footage showing cracks in nuclear plant steam pipes in video being submitted to regulators.
August 2002: a widespread falsification scandal led to the shutdown of all 17 of Tokyo Electric Power Company's nuclear reactors; Tokyo Electric's officials had falsified inspection records and attempted to hide cracks in reactor vessel shrouds in 13 of its 17 units. 2002: Two workers were exposed to a small amount of radiation and suffered minor burns during a fire at Onagawa Nuclear Power Station in northern Japan. 2006: A small amount of radioactive steam was released at the Fukushima Dai-ichi plant and it escaped the compound. 16 July 2007: A severe earthquake (measuring 6.6 on the moment magnitude scale) hit the region where Tokyo Electric's Kashiwazaki-Kariwa Nuclear Power Plant is located and radioactive water spilled into the Sea of Japan; as of March 2009, all of the reactors remain shut down for damage verification and repairs; the plant, with seven units, was the largest single nuclear power station in the world. Nuclear waste disposal Japanese policy is to reprocess its spent nuclear fuel. Originally spent fuel was reprocessed under contract in England and France, but then the Rokkasho Reprocessing Plant was built, with operations originally expected to commence in 2007. The policy to use recovered plutonium as mixed oxide (MOX) reactor fuel was questioned on economic grounds, and in 2004 it was revealed the Ministry of Economy, Trade and Industry had covered up a 1994 report indicating reprocessing spent fuel would cost four times as much as burying it. In 2000, a Specified Radioactive Waste Final Disposal Act called for creation of a new organization to manage high level radioactive waste, and later that year the Nuclear Waste Management Organization of Japan (NUMO) was established under the jurisdiction of the Ministry of Economy, Trade and Industry. NUMO is responsible for selecting a permanent deep geological repository site, construction, operation and closure of the facility for waste emplacement by 2040. Site selection began in 2002 and application information was sent to 3,239 municipalities, but by 2006, no local government had volunteered to host the facility. Kōchi Prefecture showed interest in 2007, but its mayor resigned due to local opposition. In December 2013 the government decided to identify suitable candidate areas before approaching municipalities. In 2014 the head of the Science Council of Japan's expert panel said that Japan's seismic conditions make it difficult to predict ground conditions over the necessary 100,000 years, so it will be impossible to convince the public of the safety of deep geological disposal. The cost of MOX fuel had roughly quadrupled from 1999 to 2017, creating doubts about the economics of nuclear fuel reprocessing. In 2018 the Japanese Atomic Energy Commission updated plutonium guidelines to try to reduce plutonium stockpiles, stipulating that the Rokkasho Reprocessing Plant should only produce the amount of plutonium required for MOX fuel for Japan's nuclear power plants. Nuclear regulatory bodies in Japan Nuclear Regulation Authority – A nuclear safety agency under the environment ministry, created on 19 September 2012. It replaced the Nuclear and Industrial Safety Agency and the Nuclear Safety Commission. Japanese Atomic Energy Commission (AEC) 原子力委員会 – Now operating as a commission of inquiry to the Japanese cabinet, this organization coordinates the entire nation's plans in the area of nuclear energy. Nuclear Safety Commission 原子力安全委員会 – The former Japanese regulatory body for the nuclear industry.
Nuclear and Industrial Safety Agency (NISA) 原子力安全・保安院 – A former agency that performed regulatory activities and was formed on 6 January 2001, after a reorganization of governmental agencies. Nuclear power companies Electric utilities running nuclear plants Japan is divided into a number of regions that each get electric service from their respective regional provider; each utility holds a regional monopoly and is strictly regulated by the Japanese government. For more background information, see Energy in Japan. All regional utilities in Japan currently operate nuclear plants with the exception of the Okinawa Electric Power Company. They are also all members of the Federation of Electric Power Companies (FEPCO) industry organization. The companies are listed below. Regional electric providers Hokkaidō Electric Power Company (HEPCO) - 北海道電力 Tōhoku Electric Power Company (Tōhoku Electric) - 東北電力 Tokyo Electric Power Company (TEPCO) - 東京電力 Chūbu Electric Power Company (CHUDEN) - 中部電力 Hokuriku Electric Power Company (RIKUDEN) - 北陸電力 Kansai Electric Power Company (KEPCO) - 関西電力 Chūgoku Electric Power Company (Energia) - 中国電力 Shikoku Electric Power Company (YONDEN) - 四国電力 Kyūshū Electric Power Company (Kyūshū Electric) - 九州電力 Other companies with a stake in nuclear power Japan Atomic Energy Agency (JAEA) - 日本原子力研究開発機構 Japan Atomic Power Company (JAPC) - 日本原子力発電 JAPC, jointly owned by several of Japan's major electric utilities, was created by special provisions from the Japanese government to be the first company in Japan to run a nuclear plant. Today it still operates two separate sites. Electric Power Development Company (EPDC, J-POWER) - 電源開発 This company was created by a special law after the end of World War II; it operates a number of coal-fired, hydroelectric, and wind power plants, and the Ohma nuclear plant that is under construction will mark its entrance to the industry upon completion. Nuclear vendors and fuel cycle companies Nuclear vendors provide fuel in its fabricated form, ready to be loaded into the reactor, provide nuclear services, and/or manage construction of new nuclear plants. The following is an incomplete list of companies based in Japan that provide such services. The companies listed here provide fuel or services for commercial light water plants, and in addition to this, JAEA has a small MOX fuel fabrication plant. Japan operates a robust nuclear fuel cycle. Nuclear Fuel Industries (NFI) - 原子燃料工業 NFI operates nuclear fuel fabrication plants in both Kumatori, Osaka and in Tōkai, Ibaraki, fabricating 284 and 200 metric tons of uranium per year, respectively. The Tōkai site produces BWR, HTR, and ATR fuel while the Kumatori site produces only PWR fuel. Japan Nuclear Fuel Limited (JNFL, JNF) - 日本原燃 The shareholders of JNFL are the Japanese utilities. JNFL plans to open a full scale enrichment facility in Rokkasho, Aomori, with a capacity of 1.5 million SWU/yr along with a MOX fuel fabrication facility. JNFL has also operated a nuclear fuel fabrication facility called Kurihama Nuclear Fuel Plant in Yokosuka, Kanagawa as GNF, producing BWR fuel. Mitsubishi Heavy Industries / Atmea - 三菱重工業 原子力事業本部 MHI operates a fuel manufacturing plant in Tōkai, Ibaraki, and contributes many heavy industry components to construction of new nuclear plants, and has recently designed its own APWR plant type; its fuel fabrication has been exclusively PWR fuel, though MHI sells components to BWRs as well. It was selected by the Japanese government to develop fast breeder reactor technology and formed Mitsubishi FBR Systems.
MHI has also announced an alliance with Areva to form a new company called Atmea. Global Nuclear Fuel (GNF). GNF was formed as a joint venture with GE Nuclear Energy (GENE), Hitachi, and Toshiba on 1 January 2000. GENE has since strengthened its relationship with Hitachi, forming a global nuclear alliance: GE Hitachi Nuclear Energy (GEH) - 日立GEニュークリア・エナジー This company was formed on 1 July 2007. Its next-generation reactor, the ESBWR, has made significant progress with US regulators. Its predecessor design, the ABWR, has been approved by the UK regulator for construction in the UK, following successful completion of the generic design assessment (GDA) process in 2017. Toshiba - 東芝 電力システム社 原子力事業部 Toshiba has maintained a large nuclear business focused mostly on Boiling Water Reactors. With its US$5.4 billion purchase in 2006 of the American firm Westinghouse, which is focused mainly on Pressurized Water Reactor technology, it roughly doubled the size of its nuclear business. On 29 March 2017 Toshiba placed Westinghouse in Chapter 11 bankruptcy because of $9 billion of losses from its nuclear reactor construction projects, mostly the construction of four AP1000 reactors in the U.S. Toshiba still has a profitable maintenance and nuclear fuel supply business in Japan, and is a significant contractor in the Fukushima clean-up. Recyclable-Fuel Storage Co. A company formed by TEPCO and Japan Atomic Power Co. to build a spent nuclear fuel storage facility in Aomori Prefecture. There have been discussions between Hitachi, Mitsubishi Heavy Industries and Toshiba about possibly consolidating some of their nuclear activities. Nuclear research and professional organizations in Japan Research organizations These organizations are government-funded research organizations, though many of them have special status to give them power of administration separate from the Japanese government. Their origins date back to the Atomic Energy Basic Law, but they have been reorganized several times since their inception. Japan Atomic Energy Research Institute (JAERI) - 日本原子力研究所 The original nuclear energy research organization established by the Japanese government in cooperation with U.S. partners. Atomic Fuel Corporation - 原子燃料公社 This organization was formed along with JAERI under the Atomic Energy Basic Law and was later reorganized as the PNC. Power Reactor and Nuclear Fuel Development Corporation (PNC) - Succeeded the AFC in 1967 in order to perform more direct construction of experimental nuclear plants, and was renamed JNC in 1998. Japan Nuclear Cycle Development Institute (JNC) - 核燃料サイクル開発機構 (semi-governmental agency) Was formed in 1998 as the direct successor to the PNC. This organization operated the Jōyō and Monju experimental and demonstration reactors. Japan Atomic Energy Agency (JAEA) - 日本原子力研究開発機構 This is the modern, currently operating primary nuclear research organization in Japan. It was formed by a merger of JAERI and JNC in 2005. Academic/professional organizations Japan Atomic Industrial Forum (JAIF) 日本原子力産業協会 is a non-profit organization, established in 1956 to promote the peaceful use of atomic energy. The Atomic Energy Society of Japan (AESJ) 日本原子力学会 is a major academic organization in Japan focusing on all forms of nuclear power. The Journal of Nuclear Science and Technology is the academic journal run by the AESJ. It publishes English and Japanese articles, though most submissions are from Japanese research institutes, universities, and companies.
Japan Nuclear Technology Institute (JANTI) 日本原子力技術協会 was established by the nuclear power industry to support and lead that industry. Japan Electric Association (JEA) 日本電気協会 develops and publishes codes and guides for the Japanese nuclear power industry and is active in promoting nuclear power. Other proprietary organizations JCO. Established in 1978 by Sumitomo Metal Mining Co., this company did work on uranium conversion and set up factories at the Tokai-mura site. Later, it was held solely responsible for the Tokaimura nuclear accident. Anti-nuclear movement Long one of the world's most committed promoters of civilian nuclear power, Japan's nuclear industry was not hit as hard by the effects of the 1979 Three Mile Island accident (USA) or the 1986 Chernobyl disaster (USSR) as some other countries. Construction of new plants continued to be strong through the 1980s and into the 1990s. However, starting in the mid-1990s there were several nuclear-related accidents and cover-ups in Japan that eroded public perception of the industry, resulting in protests and resistance to new plants. These included the Tokaimura nuclear accident, the Mihama steam explosion, and cover-ups after accidents at the Monju reactor; more recently, the Kashiwazaki-Kariwa Nuclear Power Plant was completely shut down for 21 months following an earthquake in 2007. While exact details may be in dispute, it is clear that the safety culture in Japan's nuclear industry has come under greater scrutiny. The negative impact of the 2011 Fukushima nuclear disaster has changed attitudes in Japan. Political and energy experts describe "nothing short of a nationwide loss of faith, not only in Japan's once-vaunted nuclear technology but also in the government, which many blame for allowing the accident to happen". Sixty thousand people marched in central Tokyo on 19 September 2011, chanting "Sayonara nuclear power" and waving banners, to call on Japan's government to abandon nuclear power, following the Fukushima disaster. The Bishop of Osaka, Michael Goro Matsuura, has called for solidarity from Christians worldwide in support of this anti-nuclear campaign. In July 2012, 75,000 people gathered in Tokyo for the capital's largest anti-nuclear event. Organizers and participants said such demonstrations signal a fundamental change in attitudes in a nation where relatively few have been willing to engage in political protests since the 1960s. Anti-nuclear groups include the Citizens' Nuclear Information Center, Stop Rokkasho, Hidankyo, Sayonara Nuclear Power Plants, Women from Fukushima Against Nukes, and the Article 9 group. People associated with the anti-nuclear movement include: Jinzaburo Takagi, Haruki Murakami, Kenzaburō Ōe, Nobuto Hosaka, Mizuho Fukushima, Ryuichi Sakamoto and Tetsunari Iida. See also Energy in Japan Environmental issues in Japan Nuclear Regulation Authority Japan's non-nuclear weapons policy Japanese nuclear weapon program United States-Japan Joint Nuclear Energy Action Plan Notes References Further reading Bacon, Paul, and Christopher Hobson. Human Security and Japan's Triple Disaster: Responding to the 2011 earthquake, tsunami and Fukushima nuclear crisis (2014) Dreiling, Michael. "An Energy Industrial Complex in Post-Fukushima Japan: A Network Analysis of the Nuclear Power Industry, the State and the Media." XVIII ISA World Congress of Sociology (13–19 July 2014). Isaconf, 2014. Fam, Shun Deng, et al. "Post-Fukushima Japan: The continuing nuclear controversy."
Energy Policy 68 (2014): 199–205. Jackson, Keith. "Natural Disaster and Nuclear Crisis in Japan: Response and recovery after Japan's 3/11 and After the Great East Japan Earthquake: Political and Policy Change in post-Fukushima Japan." Asia Pacific Business Review (2014): 1–9. Kelly, Dominic. "US Hegemony and the Origins of Japanese Nuclear Power: The Politics of Consent." New Political Economy 19.6 (2014): 819–846. Kinefuchi, Etsuko. "Nuclear Power for Good: Articulations in Japan's Nuclear Power Hegemony." Communication, Culture & Critique (2015). Kingston, Jeff. "Abe's Nuclear Renaissance: Energy Politics in Post–3.11 Japan." Critical Asian Studies 46.3 (2014): 461–484. Len, Christopher, and Victor Nian. "Nuclear versus Natural Gas: An Assessment on the Drivers Influencing Japan's Energy Future." Energy Procedia 61 (2014): 194–197. Nian, Victor, and S. K. Chou. "The state of nuclear power two years after Fukushima - The ASEAN perspective." Applied Energy 136 (2014): 838–848. Zhang, Qi, and Benjamin C. Mclellan. "Review of Japan's power generation scenarios in light of the Fukushima nuclear accident." International Journal of Energy Research 38.5 (2014): 539–550. External links Nuclear power in Japan on the World Nuclear Association website Nuclear accidents and incidents
Nuclear power in Japan
[ "Chemistry" ]
8,764
[ "Nuclear accidents and incidents", "Radioactivity" ]
4,132,074
https://en.wikipedia.org/wiki/Bismuthine
Bismuthine (IUPAC name: bismuthane) is the chemical compound with the formula BiH3. As the heaviest analogue of ammonia (a pnictogen hydride), BiH3 is unstable, decomposing to bismuth metal well below 0 °C. This compound adopts the expected pyramidal structure with H–Bi–H angles of around 90°. The term bismuthine may also refer to a member of the family of organobismuth(III) species having the general formula BiR3, where R is an organic substituent. For example, Bi(CH3)3 is trimethylbismuthine. Preparation and properties BiH3 is prepared by the redistribution of methylbismuthine (BiH2Me): 3 BiH2Me → 2 BiH3 + BiMe3 The required BiH2Me, which is also thermally unstable, is generated by reduction of methylbismuth dichloride, BiCl2Me, with LiAlH4. As suggested by the behavior of SbH3, BiH3 is unstable and decomposes to its constituent elements according to the following equation: 2 BiH3 → 3 H2 + 2 Bi (ΔH(gas) = −278 kJ/mol) The methodology used for detection of arsenic ("Marsh test") can also be used to detect BiH3. This test relies on the thermal decomposition of these trihydrides to the metallic mirrors of reduced As, Sb, and Bi. These deposits can be further distinguished by their distinctive solubility characteristics: arsenic dissolves in NaOCl, antimony dissolves in ammonium polysulfide, and bismuth resists both reagents. Uses and safety considerations The low stability of BiH3 precludes significant health effects, as it decomposes rapidly well below room temperature. References Bismuth compounds Metal hydrides
Bismuthine
[ "Chemistry" ]
400
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
4,132,085
https://en.wikipedia.org/wiki/Event%20Viewer
Event Viewer is a component of Microsoft's Windows NT operating system that lets administrators and users view the event logs, typically with file extensions .evt and .evtx, on a local or remote machine. Applications and operating-system components can use this centralized log service to report events that have taken place, such as a failure to start a component or to complete an action. In Windows Vista, Microsoft overhauled the event system. Due to the Event Viewer's routine reporting of minor start-up and processing errors (which do not, in fact, harm or damage the computer), the software is frequently used by technical support scammers to trick the victim into thinking that their computer contains critical errors requiring immediate technical support. An example is the "Administrative Events" field under "Custom Views", which can have over a thousand errors or warnings logged over a month's time. Overview Windows NT has featured event logs since its release in 1993. The Event Viewer uses event IDs to define the uniquely identifiable events that a Windows computer can encounter. For example, when a user's authentication fails, the system may generate Event ID 672. Windows NT 4.0 added support for defining "event sources" (i.e. the application which created the event) and performing backups of logs. Windows 2000 added the capability for applications to create their own log sources in addition to the three system-defined "System", "Application", and "Security" log-files. Windows 2000 also replaced NT4's Event Viewer with a Microsoft Management Console (MMC) snap-in. Windows Server 2003 added the AuthzInstallSecurityEventSource() API calls so that applications could register with the security-event logs, and write security-audit entries. Versions of Windows based on the Windows NT 6.0 kernel (Windows Vista and Windows Server 2008) no longer impose a 300-megabyte limit on the total size of the event logs. Prior to NT 6.0, the system opened on-disk files as memory-mapped files in kernel memory space, which used the same memory pools as other kernel components. Event Viewer log-files with filename extension evtx typically appear in a directory such as C:\Windows\System32\winevt\Logs\. Command-line interface Windows XP introduced a set of three command-line interface tools, useful for task automation: eventquery.vbs – Official script to query, filter and output results based on the event logs. Discontinued after XP. eventcreate – a command (continued in Vista and 7) to put custom events in the logs. eventtriggers – a command to create event-driven tasks. Discontinued after XP, replaced by the "Attach task to this event" feature, which is reached from within the list of events by selecting a single event and choosing the option from its pop-up menu. Windows Vista Event Viewer in Windows Vista is built on a rewritten event tracing and logging architecture. It has been rewritten around a structured XML log-format and a designated log type to allow applications to more precisely log events and to help make it easier for support technicians and developers to interpret the events. The XML representation of the event can be viewed on the Details tab in an event's properties. It is also possible to view all potential events, their structures, registered event publishers and their configuration using the wevtutil utility, even before the events are fired. There are a large number of different types of event logs including Administrative, Operational, Analytic, and Debug log types.
Selecting the Application Logs node in the Scope pane reveals numerous new subcategorized event logs, including many labeled as diagnostic logs. Analytic and Debug events, which are high-frequency, are saved directly into a trace file, while Admin and Operational events are infrequent enough to allow additional processing without affecting system performance, so they are delivered to the Event Log service. Events are published asynchronously to reduce the performance impact on the event publishing application. Event attributes are also much more detailed and show EventID, Level, Task, Opcode, and Keywords properties. Filtering using XPath 1.0 Users can filter event logs by one or more criteria or by a limited XPath 1.0 expression, and custom views can be created for one or more events. Using XPath as the query language allows viewing logs related only to a certain subsystem or an issue with only a certain component, archiving select events and sending traces on the fly to support technicians. Custom filters for the new Windows Event Log are written as such XPath expressions (an illustrative example is sketched at the end of this article). Caveats: there are limitations to Microsoft's implementation of XPath, and queries using XPath string functions will result in an error. Event subscribers Major event subscribers include the Event Collector service and Task Scheduler 2.0. The Event Collector service can automatically forward event logs to other remote systems running Windows Vista, Windows Server 2008 or Windows Server 2003 R2, on a configurable schedule. Event logs can also be remotely viewed from other computers or multiple event logs can be centrally logged and monitored without an agent and managed from a single computer. Events can also be directly associated with tasks, which run in the redesigned Task Scheduler and trigger automated actions when particular events take place. See also Common Log File System (CLFS) List of Microsoft Windows components Microsoft Management Console Technical support scam References External links Official sources: Event Viewer - Inside Show on Microsoft Learn Events and Errors (Windows Server 2008) on Microsoft Learn Windows components Computer logging Windows administration
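The following is an editorial illustration of the kind of XPath-based custom filter described above; it is not taken from Microsoft documentation, and the choice of log name, severity levels and event counts is arbitrary. The same XPath string can be pasted into the XML tab of a Custom View in Event Viewer; here it is shown with the PowerShell Get-WinEvent cmdlet and with the wevtutil utility, both of which accept such filters.

    # PowerShell: list the 20 most recent critical (Level=1) and error (Level=2)
    # events from the System log, using an XPath 1.0 filter.
    $xpath = '*[System[(Level=1 or Level=2)]]'
    Get-WinEvent -LogName 'System' -FilterXPath $xpath -MaxEvents 20

    # Classic command line: the same query through wevtutil, formatted as text
    # and limited to 5 events.
    wevtutil qe System /q:"*[System[(Level=1 or Level=2)]]" /f:text /c:5

Consistent with the caveat above, the XPath dialect accepted by the Event Log is limited, and expressions that rely on XPath string functions will be rejected.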
Event Viewer
[ "Technology" ]
1,129
[ "Windows commands", "Computing commands", "Computer logging" ]
4,132,316
https://en.wikipedia.org/wiki/Peter%20B.%20Andrews
Peter Bruce Andrews (born 1937) is an American mathematician and Professor of Mathematics, Emeritus, at Carnegie Mellon University in Pittsburgh, Pennsylvania, and the creator of the mathematical logic Q0. He received his Ph.D. from Princeton University in 1964 under the tutelage of Alonzo Church. He received the Herbrand Award in 2003. His research group designed the TPS automated theorem prover. A subsystem of TPS, ETPS (Educational Theorem Proving System), is used to help students learn logic by interactively constructing natural deduction proofs. Publications Andrews, Peter B. (1965). A Transfinite Type Theory with Type Variables. North Holland Publishing Company, Amsterdam. Andrews, Peter B. (1971). "Resolution in type theory". Journal of Symbolic Logic 36, 414–432. Andrews, Peter B. (1981). "Theorem proving via general matings". J. Assoc. Comput. Mach. 28, no. 2, 193–214. Andrews, Peter B. (1986). An introduction to mathematical logic and type theory: to truth through proof. Computer Science and Applied Mathematics. Academic Press, Inc., Orlando, FL. Andrews, Peter B. (1989). "On connections and higher-order logic". J. Automat. Reason. 5, no. 3, 257–291. Andrews, Peter B.; Bishop, Matthew; Issar, Sunil; Nesmith, Dan; Pfenning, Frank; Xi, Hongwei (1996). "TPS: a theorem-proving system for classical type theory". J. Automat. Reason. 16, no. 3, 321–353. Andrews, Peter B. (2002). An introduction to mathematical logic and type theory: to truth through proof. Second edition. Applied Logic Series, 27. Kluwer Academic Publishers, Dordrecht. References External links Peter B. Andrews 1937 births Living people 20th-century American mathematicians 21st-century American mathematicians American logicians Mathematical logicians Carnegie Mellon University faculty Princeton University alumni
Peter B. Andrews
[ "Mathematics" ]
428
[ "Mathematical logic", "Mathematical logicians" ]
4,132,583
https://en.wikipedia.org/wiki/International%20Society%20of%20Biometeorology
The International Society of Biometeorology (ISB) is a professional society for scientists interested in biometeorology, specifically environmental and ecological aspects of the interaction of the atmosphere and biosphere. The organization's stated purpose is: "to provide one international organization for the promotion of interdisciplinary collaboration of meteorologists, physicians, physicists, biologists, climatologists, ecologists and other scientists and to promote the development of Biometeorology". The International Society of Biometeorology was founded in 1956 at UNESCO headquarters in Paris, France, by S. W. Tromp, a Dutch geologist, H. Ungeheuer, a German meteorologist, and several human physiologists of which F. Sargent II of the United States became the first President of the society. ISB affiliated organizations include: the International Association for Urban Climate, the International Society for Agricultural Meteorology, the International Union of Biological Sciences, the World Health Organization, and the World Meteorological Organization. ISB affiliate members include: the American Meteorological Society, the Centre for Renewable Energy Sources, the German Meteorological Society, the Society for the Promotion of Medicine-Meteorological Research e.V., International Society of Medical Hydrology and Climatology, and the UK Met Office. Publications ISB publishes the following journals: Bulletin of the Society of Biometeorology International Journal of Biometeorology References External links Biometeorology International scientific organizations Meteorological societies Climatological research organizations Biology organizations International medical associations
International Society of Biometeorology
[ "Environmental_science" ]
314
[ "Biometeorology" ]
4,132,781
https://en.wikipedia.org/wiki/Greg%20Fahy
Gregory Michael Fahy is a California-based cryobiologist, biogerontologist, and businessman. He is Vice President and Chief Scientific Officer at 21st Century Medicine, Inc., and has co-founded Intervene Immune, a company developing clinical methods to reverse immune system aging. He was the 2022–2023 president of the Society for Cryobiology. Education A native of California, Fahy holds a Bachelor of Science degree in biology from the University of California, Irvine and a PhD in pharmacology and cryobiology from the Medical College of Georgia in Augusta. He currently serves on the board of directors of two organizations and as a referee for numerous scientific journals and funding agencies, and holds 35 patents on cryopreservation methods, aging interventions, transplantation, and other topics. Career Fahy is the world's foremost expert in organ cryopreservation by vitrification. Fahy introduced the modern successful approach to vitrification for cryopreservation in cryobiology, and he is widely credited, along with William F. Rall, with introducing vitrification into the field of reproductive biology. In 2005, when he was a keynote speaker at the annual Society for Cryobiology meeting, Fahy announced that 21st Century Medicine had successfully cryopreserved a rabbit kidney at −130 °C by vitrification and transplanted it into a rabbit after rewarming, with subsequent long-term life support by the vitrified-rewarmed kidney as the sole kidney. This research breakthrough was later published in the peer-reviewed journal Organogenesis. Fahy is also a biogerontologist and is the originator and Editor-in-Chief of The Future of Aging: Pathways to Human Life Extension, a multi-authored book on the future of biogerontology. He currently serves on the editorial boards of Rejuvenation Research and the Open Geriatric Medicine Journal and served for 16 years as a Director of the American Aging Association and for 6 years as the editor of AGE News, the organization's newsletter. Research As a scientist with the American Red Cross, Fahy was the originator of the first practical method of cryopreservation by vitrification and the inventor of computer-based systems to apply this technology to whole organs. Before joining Twenty-First Century Medicine, he was the chief scientist of Organ, Inc. and of LRT, Inc. He was also Head of the Tissue Cryopreservation Section of the Transfusion and Cryopreservation Research Program of the U.S. Naval Medical Research Institute in Bethesda, Maryland, where he spearheaded the original concept of ice blocking agents. In 2014, he was named a Fellow of the Society for Cryobiology in recognition of the impact of his work in low temperature biology. In 2015–2017, Fahy led the TRIIM (Thymus Regeneration, Immunorestoration, and Insulin Mitigation) human clinical trial, designed to reverse aspects of human aging. The purpose of the TRIIM trial was to investigate the possibility of using recombinant human growth hormone (rhGH) to prevent or reverse signs of immunosenescence in ten 51- to 65-year-old putatively healthy men. The study observed protective immunological changes, improved risk indices for many age-related diseases, and a mean epigenetic age approximately 1.5 years less than baseline after 1 year of treatment (a −2.5-year change compared to no treatment at the end of the study).
Awards Fahy was named as a Fellow of the Society for Cryobiology in 2014, and in 2010 he received the Distinguished Scientist Award for Reproductive Biology from the Reproductive Biology Professional Group of the American Society of Reproductive Medicine. He received the Cryopreservation Award from the International Longevity and Cryopreservation Summit held in Madrid, Spain in 2017 in recognition of his career in and dedication to the field of cryobiology. Fahy also received the Grand Prize for Medicine from INPEX in 1995 for his invention of computerized organ cryoprotectant perfusion technology. In 2005, he was recognized as a Fellow of the American Aging Association. References External links 21st Century Medicine Intervene Immune Living people University of California, Irvine alumni 21st-century American biologists Medical College of Georgia alumni Biogerontologists Cryobiology Year of birth missing (living people)
Greg Fahy
[ "Physics", "Chemistry", "Biology" ]
903
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
4,132,805
https://en.wikipedia.org/wiki/BitLocker
BitLocker is a full volume encryption feature included with Microsoft Windows versions starting with Windows Vista. It is designed to protect data by providing encryption for entire volumes. By default, it uses the Advanced Encryption Standard (AES) algorithm in cipher block chaining (CBC) or "xor–encrypt–xor (XEX)-based Tweaked codebook mode with ciphertext Stealing" (XTS) mode with a 128-bit or 256-bit key. CBC is not used over the whole disk; it is applied to each individual sector. History BitLocker originated as a part of Microsoft's Next-Generation Secure Computing Base architecture in 2004 as a feature tentatively codenamed "Cornerstone" and was designed to protect information on devices, particularly if a device was lost or stolen. Another feature, titled "Code Integrity Rooting", was designed to validate the integrity of Microsoft Windows boot and system files. When used in conjunction with a compatible Trusted Platform Module (TPM), BitLocker can validate the integrity of boot and system files before decrypting a protected volume; an unsuccessful validation will prohibit access to a protected system. BitLocker was briefly called Secure Startup before Windows Vista's release to manufacturing. BitLocker is available on: Enterprise and Ultimate editions of Windows Vista and Windows 7 Pro and Enterprise editions of Windows 8 and 8.1 Windows Embedded Standard 7 and Windows Thin PC Windows Server 2008 and later Pro, Enterprise, and Education editions of Windows 10 Pro, Enterprise, and Education editions of Windows 11 Features Initially, the graphical BitLocker interface in Windows Vista could only encrypt the operating system volume. Starting with Windows Vista with Service Pack 1 and Windows Server 2008, volumes other than the operating system volume could be encrypted using the graphical tool. Still, some aspects of BitLocker (such as turning autolocking on or off) had to be managed through a command-line tool called manage-bde.wsf. The version of BitLocker included in Windows 7 and Windows Server 2008 R2 adds the ability to encrypt removable drives. On Windows XP or Windows Vista, read-only access to these drives can be achieved through a program called BitLocker To Go Reader, if FAT16, FAT32 or exFAT filesystems are used. In addition, a new command-line tool called manage-bde replaced the old manage-bde.wsf. Starting with Windows Server 2012 and Windows 8, Microsoft has complemented BitLocker with the Microsoft Encrypted Hard Drive specification, which allows the cryptographic operations of BitLocker encryption to be offloaded to the storage device's hardware, for example, self-encrypting drives. In addition, BitLocker can now be managed through Windows PowerShell (an illustrative sketch of PowerShell-based management appears at the end of this article). Finally, Windows 8 introduced Windows To Go in its Enterprise edition, which BitLocker can protect. Device encryption Windows Mobile 6.5, Windows RT and core editions of Windows 8.1 include device encryption, a feature-limited version of BitLocker that encrypts the whole system. Logging in with a Microsoft account with administrative privileges automatically begins the encryption process. The recovery key is stored to either the Microsoft account or Active Directory (Active Directory requires Pro editions of Windows), allowing it to be retrieved from any computer. While device encryption is offered on all editions of Windows 8.1, unlike BitLocker, device encryption requires that the device meet the InstantGo (formerly Connected Standby) specifications, which require solid-state drives and a TPM 2.0 chip.
Starting with Windows 10 version 1703, the requirements for device encryption have changed, requiring a TPM 1.2 or 2.0 module with PCR 7 support, UEFI Secure Boot, and that the device meet Modern Standby requirements or pass HSTI validation. Device encryption requirements were relaxed in Windows 11 24H2, with the Modern Standby, HSTI and Secure Boot compliance no longer required and the DMA interfaces blocklist removed. Device encryption is also enabled by default on clean installations of Windows 11 24H2, a behaviour referred to as auto device encryption. In September 2019 a new update was released (KB4516071) changing the default setting for BitLocker when encrypting a self-encrypting drive. Now, the default is to use software encryption for newly encrypted drives. This is due to hardware encryption flaws and security concerns related to those issues. Encryption modes Three authentication mechanisms can be used as building blocks to implement BitLocker encryption: Transparent operation mode: This mode uses the capabilities of TPM 1.2 hardware to provide a transparent user experience: the user powers up and logs into Windows as usual. The key used for disk encryption is sealed (encrypted) by the TPM chip and will only be released to the OS loader code if the early boot files appear to be unmodified. The pre-OS components of BitLocker achieve this by implementing a Static Root of Trust Measurement, a methodology specified by the Trusted Computing Group (TCG). This mode is vulnerable to a cold boot attack, as it allows a powered-down machine to be booted by an attacker. It is also vulnerable to a sniffing attack, as the volume encryption key is transferred in plain text from the TPM to the CPU during a successful boot. User authentication mode: This mode requires that the user provide some authentication to the pre-boot environment in the form of a pre-boot PIN or password. USB Key Mode: The user must insert a USB device that contains a startup key into the computer to be able to boot the protected OS. Note that this mode requires that the BIOS on the protected machine supports the reading of USB devices in the pre-OS environment. BitLocker does not support smart cards for pre-boot authentication. The following combinations of the above authentication mechanisms are supported, all with an optional escrow recovery key: TPM only TPM + PIN TPM + PIN + USB Key TPM + USB Key USB Key Password only Operation BitLocker is a logical volume encryption system. (A volume spans part of a hard disk drive, the whole drive or more than one drive.) When enabled, TPM and BitLocker can ensure the integrity of the trusted boot path (e.g. BIOS and boot sector), in order to prevent most offline physical attacks and boot sector malware. In order for BitLocker to encrypt the volume holding the operating system, at least two NTFS-formatted volumes are required: one for the operating system (usually C:) and another with a minimum size of 100 MB, which remains unencrypted and boots the operating system. (In the case of Windows Vista and Windows Server 2008, however, the volume's minimum size is 1.5 GB and it must have a drive letter.) Unlike previous versions of Windows, Vista's "diskpart" command-line tool includes the ability to shrink the size of an NTFS volume so that this volume may be created from already allocated space.
A tool called the BitLocker Drive Preparation Tool is also available from Microsoft that allows an existing volume on Windows Vista to be shrunk to make room for a new boot volume and for the necessary bootstrapping files to be transferred to it. Once an alternate boot partition has been created, the TPM module needs to be initialized (assuming that this feature is being used), after which the required disk-encryption key protection mechanisms such as TPM, PIN or USB key are configured. The volume is then encrypted as a background task, something that may take a considerable amount of time with a large disk as every logical sector is read, encrypted and rewritten back to disk. The keys are only protected after the whole volume has been encrypted when the volume is considered secure. BitLocker uses a low-level device driver to encrypt and decrypt all file operations, making interaction with the encrypted volume transparent to applications running on the platform. Encrypting File System (EFS) may be used in conjunction with BitLocker to provide protection once the operating system is running. Protection of the files from processes and users within the operating system can only be performed using encryption software that operates within Windows, such as EFS. BitLocker and EFS, therefore, offer protection against different classes of attacks. In Active Directory environments, BitLocker supports optional key escrow to Active Directory, although a schema update may be required for this to work (i.e. if the Active Directory Services are hosted on a Windows version previous to Windows Server 2008). BitLocker and other full disk encryption systems can be attacked by a rogue boot manager. Once the malicious bootloader captures the secret, it can decrypt the Volume Master Key (VMK), which would then allow access to decrypt or modify any information on an encrypted hard disk. By configuring a TPM to protect the trusted boot pathway, including the BIOS and boot sector, BitLocker can mitigate this threat. (Note that some non-malicious changes to the boot path may cause a Platform Configuration Register check to fail, and thereby generate a false warning.) Security concerns TPM alone is not enough The "Transparent operation mode" and "User authentication mode" of BitLocker use TPM hardware to detect whether there are unauthorized changes to the pre-boot environment, including the BIOS and MBR. If any unauthorized changes are detected, BitLocker requests a recovery key on a USB device. This cryptographic secret is used to decrypt the Volume Master Key (VMK) and allow the bootup process to continue. However, TPM alone is not enough: In February 2008, a group of security researchers published details of a so-called "cold boot attack" that allows full disk encryption systems such as BitLocker to be compromised by booting the machine from removable media, such as a USB drive, into another operating system, then dumping the contents of pre-boot memory. The attack relies on the fact that DRAM retains information for up to several minutes (or even longer, if cooled) after the power has been removed. The Bress/Menz device, described in US Patent 9,514,789, can accomplish this type of attack. Similar full disk encryption mechanisms of other vendors and other operating systems, including Linux and Mac OS X, are vulnerable to the same attack. 
The authors recommend that computers be powered down when not in physical control of the owner (rather than be left in a sleep mode) and that the encryption software be configured to require a password to boot the machine. On 10 November 2015, Microsoft released a security update to mitigate a security vulnerability in BitLocker that allowed authentication to be bypassed by employing a malicious Kerberos key distribution center, if the attacker had physical access to the machine, the machine was part of a domain and had no PIN or USB flash drive protection. BitLocker still does not properly support TPM 2.0 security features which, as a result, can lead to a complete bypass of privacy protection when keys are transmitted over Serial Peripheral Interface in a motherboard. All these attacks require physical access to the system and are thwarted by a secondary protector such as a USB flash drive or PIN code. Upholding Kerckhoffs's principle Although the AES encryption algorithm used in BitLocker is in the public domain, its implementation in BitLocker, as well as other components of the software, are proprietary; however, the code is available for scrutiny by Microsoft partners and enterprises, subject to a non-disclosure agreement. According to Microsoft sources, BitLocker does not contain an intentionally built-in backdoor, so there is no Microsoft-provided way for law enforcement to have guaranteed access to the data on a user's drive. In 2006, the UK Home Office expressed concern over the lack of a backdoor and tried entering into talks with Microsoft to get one introduced. Microsoft developer and cryptographer Niels Ferguson denied the backdoor request and said, "over my dead body". Microsoft engineers have said that United States Federal Bureau of Investigation agents also put pressure on them in numerous meetings to add a backdoor, although no formal, written request was ever made; Microsoft engineers eventually suggested that agents should look for the hard copy of the encryption key that the BitLocker program suggests that its users make. Niels Ferguson's position that "back doors are simply not acceptable" is in accordance with Kerckhoffs's principle. Stated by Netherlands-born cryptographer Auguste Kerckhoffs in the 19th century, the principle holds that a cryptosystem should be secure, even if everything about the system, except the encryption key, is public knowledge. Since 2020, BitLocker's method and data structure is public knowledge due to reverse engineering; the Linux cryptsetup program is capable of reading and writing BitLocker-protected drives given the key. Other concerns Starting with Windows 8 and Windows Server 2012, Microsoft removed the Elephant Diffuser from the BitLocker scheme for no declared reason. Dan Rosendorf's research shows that removing the Elephant Diffuser had an "undeniably negative impact" on the security of BitLocker encryption against a targeted attack. Microsoft later cited performance concerns, and noncompliance with the Federal Information Processing Standards (FIPS), to justify the diffuser's removal. Starting with Windows 10 version 1511, however, Microsoft added a new FIPS-compliant XTS-AES encryption algorithm to BitLocker. Starting with Windows 10 version 1803, Microsoft added a new feature called "Kernel Direct Memory access (DMA) Protection" to BitLocker, to protect against DMA attacks via Thunderbolt 3 ports. "Kernel Direct Memory access (DMA) Protection" only protects against attacks through Thunderbolt. 
Direct Memory Access is also possible through PCI Express. In this type of attack, an attacker would connect a malicious PCI Express device, which can in turn write directly to memory and bypass the Windows login. To protect against this type of attack, Microsoft introduced "Virtualization-based Security". In October 2017, it was reported that a flaw enabled private keys to be inferred from public keys, which could allow an attacker to bypass BitLocker encryption when an affected TPM chip is used. The flaw is the Return of Coppersmith's Attack or ROCA vulnerability, which is in a code library developed by Infineon that had been in widespread use in security products such as smartcards and TPMs. Microsoft released an updated version of the firmware for Infineon TPM chips that fixes the flaw via Windows Update. See also Features new to Windows Vista List of Microsoft Windows components Windows Vista I/O technologies Next-Generation Secure Computing Base FileVault References External links BitLocker Drive Encryption Technical Overview System Integrity Team Blog Windows Server 2008 Windows 11 Windows 10 Windows 8 Windows 7 Windows Vista Cryptographic software Microsoft Windows security technology Disk encryption
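As a concrete illustration of the management interfaces mentioned in this article (manage-bde and the PowerShell cmdlets), the sketch below shows one way to turn on BitLocker for the operating-system volume using the "TPM + PIN" combination described under "Encryption modes". It is an assumption-laden sketch rather than a prescribed procedure: the drive letter, the choice of XTS-AES-256 (available from Windows 10 version 1511), and the use of used-space-only encryption are all illustrative, and the commands must be run from an elevated session on a machine with a ready TPM.

    # Check the current protection status of the C: volume.
    Get-BitLockerVolume -MountPoint 'C:'

    # Enable BitLocker on C: with XTS-AES-256, protected by the TPM plus a PIN,
    # encrypting only the used space for speed.
    $pin = Read-Host -AsSecureString -Prompt 'Enter a numeric PIN'
    Enable-BitLocker -MountPoint 'C:' -EncryptionMethod XtsAes256 -TpmAndPinProtector -Pin $pin -UsedSpaceOnly

    # Add a recovery password protector so a recovery key can be escrowed or printed.
    Add-BitLockerKeyProtector -MountPoint 'C:' -RecoveryPasswordProtector

    # The classic command-line equivalent for checking status:
    # manage-bde -status C:

The recovery password created in the last step plays the role of the optional escrow recovery key listed with the authentication combinations above.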
BitLocker
[ "Mathematics", "Technology" ]
3,119
[ "Windows commands", "Cryptographic software", "Computing commands", "Mathematical software" ]
4,132,882
https://en.wikipedia.org/wiki/HD%20211415
HD 211415 is a double star in the constellation Grus. With an apparent visual magnitude of 5.33, it is visible to the naked eye. The annual parallax shift is 72.54 mas, which yields a distance estimate of 45 light years. It has a relatively high proper motion, traversing the celestial sphere at the rate of 93.4 mas per year, and is moving closer to the Sun with a radial velocity of −13 km/s. As of 1994, the two members of this system have an angular separation of 2.884″ along a position angle of 34.935°. Their projected separation is 39.8 AU. The pair are most likely gravitationally bound, with an orbit that is probably viewed nearly edge-on and a semimajor axis of around 100 AU. HD 211415 was identified in September 2003 by astrobiologist Margaret Turnbull from the University of Arizona in Tucson as one of the most promising nearby candidates for hosting life, based on her analysis of the HabCat list of stars. It is a G-type main-sequence star with a stellar classification of G0 V. References External links Spectra HD 211415 Binary stars G-type main-sequence stars M-type main-sequence stars HD, 211415 Grus (constellation) Durchmusterung objects 0853 211415 110109
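A worked conversion from the quoted parallax to the quoted distance, using the standard relation between parallax and distance in parsecs:

    d = \frac{1}{p}\ \text{pc} \quad (p\ \text{in arcseconds})
      = \frac{1000}{72.54}\ \text{pc} \approx 13.8\ \text{pc} \approx 13.8 \times 3.262\ \text{ly/pc} \approx 45\ \text{ly}

which is consistent with the 45 light-year estimate stated above.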
HD 211415
[ "Astronomy" ]
283
[ "Grus (constellation)", "Constellations" ]
4,133,069
https://en.wikipedia.org/wiki/Fludiazepam
Fludiazepam, marketed under the brand name Erispan (エリスパン), is a potent benzodiazepine and 2ʹ-fluoro derivative of diazepam, originally developed by Hoffmann-La Roche in the 1960s. It is marketed in Japan and Taiwan. It exerts its pharmacological properties via enhancement of GABAergic inhibition. Fludiazepam has four times greater binding affinity for benzodiazepine receptors than diazepam. It possesses anxiolytic, anticonvulsant, sedative, hypnotic and skeletal muscle relaxant properties. Fludiazepam has been used recreationally. See also Diazepam Diclazepam (the 2ʹ-chloro analog) Difludiazepam (the 2',6'-difluoro derivative) Flunitrazepam (the 7-nitro analog) Flualprazolam (the triazolo derivative) Ro20-8552 References External links Official Dainippon Sumitomo Pharma Website Benzodiazepines Sedatives Hypnotics Anticonvulsants Anxiolytics Lactams Chloroarenes 2-Fluorophenyl compounds
Fludiazepam
[ "Biology" ]
263
[ "Hypnotics", "Behavior", "Sleep" ]
4,133,196
https://en.wikipedia.org/wiki/Adipocyte%20protein%202
aP2 (adipocyte Protein 2) is a carrier protein for fatty acids that is primarily expressed in adipocytes and macrophages. aP2 is also called fatty acid binding protein 4 (FABP4). Blocking this protein, either through genetic engineering or with drugs, may offer a way to treat heart disease and the metabolic syndrome. See also Fatty acid-binding protein References External links PDBe-KB provides an overview of all the structure information available in the PDB for Human Fatty acid-binding protein, adipocyte PDBe-KB provides an overview of all the structure information available in the PDB for Mouse Fatty acid-binding protein, adipocyte Proteins
Adipocyte protein 2
[ "Chemistry" ]
139
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
4,133,201
https://en.wikipedia.org/wiki/Whitehead%27s%20theory%20of%20gravitation
In theoretical physics, Whitehead's theory of gravitation was introduced by the mathematician and philosopher Alfred North Whitehead in 1922. While never broadly accepted, at one time it was a scientifically plausible alternative to general relativity. However, after further experimental and theoretical consideration, the theory is now generally regarded as obsolete. Principal features Whitehead developed his theory of gravitation by considering how the world line of a particle is affected by those of nearby particles. He arrived at an expression for what he called the "potential impetus" of one particle due to another, which modified Newton's law of universal gravitation by including a time delay for the propagation of gravitational influences. Whitehead's formula for the potential impetus involves the Minkowski metric, which is used to determine which events are causally related and to calculate how gravitational influences are delayed by distance. The potential impetus calculated by means of the Minkowski metric is then used to compute a physical spacetime metric g_{μν}, and the motion of a test particle is given by a geodesic with respect to the metric g_{μν}. Unlike the Einstein field equations, Whitehead's theory is linear, in that the superposition of two solutions is again a solution. This implies that Einstein's and Whitehead's theories will generally make different predictions when more than two massive bodies are involved. Following the notation of Chiang and Hamity, introduce a Minkowski spacetime with metric tensor η_{μν}, where the indices run from 0 through 3, and let the masses of a set of gravitating particles be m_a. The Minkowski arc length of particle a is denoted by σ_a. Consider an event with co-ordinates x^μ. A retarded event with co-ordinates x_a^μ on the world-line of particle a is defined by the relations that the separation x^μ − x_a^μ is null with respect to η_{μν} and that x^0 > x_a^0. The unit tangent vector at x_a^μ is u_a^μ = dx_a^μ/dσ_a. We also need invariants formed from the separation vector and the tangent vector. Then, a gravitational tensor potential g_{μν} is defined in terms of these quantities (a commonly quoted form is sketched at the end of this article). It is the metric g_{μν} that appears in the geodesic equation. Experimental tests Whitehead's theory is equivalent to the Schwarzschild metric and makes the same predictions as general relativity regarding the four classical solar system tests (gravitational red shift, light bending, perihelion shift, Shapiro time delay), and was regarded as a viable competitor of general relativity for several decades. In 1971, Will argued that Whitehead's theory predicts a periodic variation in local gravitational acceleration 200 times larger than the bound established by experiment. Misner, Thorne and Wheeler's textbook Gravitation states that Will demonstrated "Whitehead's theory predicts a time-dependence for the ebb and flow of ocean tides that is completely contradicted by everyday experience". Fowler argued that different tidal predictions can be obtained by a more realistic model of the galaxy. Reinhardt and Rosenblum claimed that the disproof of Whitehead's theory by tidal effects was "unsubstantiated". Chiang and Hamity argued that Reinhardt and Rosenblum's approach "does not provide a unique space-time geometry for a general gravitation system", and they confirmed Will's calculations by a different method. In 1989, a modification of Whitehead's theory was proposed that eliminated the unobserved sidereal tide effects. However, the modified theory did not allow the existence of black holes. Subrahmanyan Chandrasekhar wrote, "Whitehead's philosophical acumen has not served him well in his criticisms of Einstein." Philosophical disputes Clifford M. 
Will argued that Whitehead's theory features a prior geometry. Under Will's presentation (which was inspired by John Lighton Synge's interpretation of the theory), Whitehead's theory has the curious feature that electromagnetic waves propagate along null geodesics of the physical spacetime (as defined by the metric determined from geometrical measurements and timing experiments), while gravitational waves propagate along null geodesics of a flat background represented by the metric tensor of Minkowski spacetime. The gravitational potential can be expressed entirely in terms of waves retarded along the background metric, like the Liénard–Wiechert potential in electromagnetic theory. A cosmological constant can be introduced by changing the background metric to a de Sitter or anti-de Sitter metric. This was first suggested by G. Temple in 1923. Temple's suggestions on how to do this were criticized by C. B. Rayner in 1955. Will's work was disputed by Dean R. Fowler, who argued that Will's presentation of Whitehead's theory contradicts Whitehead's philosophy of nature. For Whitehead, the geometric structure of nature grows out of the relations among what he termed "actual occasions". Fowler claimed that a philosophically consistent interpretation of Whitehead's theory makes it an alternate, mathematically equivalent, presentation of general relativity. In turn, Jonathan Bain argued that Fowler's criticism of Will was in error. See also Classical theories of gravitation Eddington–Finkelstein coordinates References Further reading Alfred North Whitehead Obsolete theories in physics Theories of gravity
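The displayed formulas referred to in the "Principal features" section above did not survive in this text. The following reconstruction is a sketch of the form commonly quoted in the secondary literature (for example in Will's presentation), written in geometrized units (G = c = 1) with metric signature (+, −, −, −); the exact notation and conventions of Chiang and Hamity are assumptions here, not a quotation of their paper:

    y_a^{\mu} = x^{\mu} - x_a^{\mu}, \qquad
    \eta_{\mu\nu}\, y_a^{\mu} y_a^{\nu} = 0, \quad x^{0} > x_a^{0}
    \qquad \text{(retarded, null separation)}

    u_a^{\mu} = \frac{dx_a^{\mu}}{d\sigma_a}, \qquad
    w_a = \eta_{\mu\nu}\, y_a^{\mu} u_a^{\nu}

    g_{\mu\nu} = \eta_{\mu\nu} - \sum_a \frac{2 m_a}{w_a^{3}}\, (y_a)_{\mu} (y_a)_{\nu}

For a single static mass this expression reduces to the Schwarzschild geometry in Eddington-Finkelstein-like coordinates, which is consistent with the equivalence to the Schwarzschild metric noted under "Experimental tests".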
Whitehead's theory of gravitation
[ "Physics" ]
1,012
[ "Theories of gravity", "Theoretical physics", "Obsolete theories in physics" ]
4,133,205
https://en.wikipedia.org/wiki/Phil%20Farrand
Phil Farrand (born November 5, 1958) is an American computer programmer and consultant, webmaster and author. He is known for his Nitpicker's Guides, in which he nitpicks plot holes and continuity errors in the various Star Trek television programs and movies, and for the creation of Nitcentral, a website devoted to the same activity. Subsequent to his Nitpicker's Guides, he has ventured into fiction as a novelist. Early life Farrand was born in Broken Arrow, Oklahoma, and grew up in the Philippines, where his parents were missionaries for Assemblies of God. He first became interested in the original Star Trek as a child. After returning to the United States, Farrand earned bachelor's degrees in piano performance and music composition. Career Music Farrand worked as a music editor, but became frustrated with working with music printed on paper, and worked for two years on a notation package for the Apple II, which later became Polywriter. Later, working with Coda Music Technology, Farrand created an award-winning, high-end desktop publishing software package for music notation called Finale. Now owned by MakeMusic, Finale won Best Book/Video/Software at the 2015 Music & Sound Awards and has been used to score films such as Million Dollar Baby, The Aviator, Spider-Man 2, Sideways, Harry Potter and the Prisoner of Azkaban, The Passion of the Christ, Ratatouille, and Michael Clayton. As a nitpicker Farrand first became a Star Trek nitpicker when watching a scene in the 1990 Star Trek: The Next Generation episode The Offspring. In the scene, the character Wesley Crusher speaks to his mother, Dr. Beverly Crusher using his communicator badge. After responding to Dr. Crusher's reminder to get a haircut, Wesley utters a sarcastic remark, but without tapping his comm badge to terminate the connection, leading Farrand to wonder if Dr. Crusher heard the remark. This sparked a spirited discussion between Farrand and his Trekker friend as to how the communicators worked, and the inconsistencies in their depicted usage in the series. In 1990, Farrand decided to try writing fiction, but could not find anyone to read his work. Because the only agent willing to represent him dealt only with nonfiction works, Farrand decided to attempt writing nonfiction in order to develop a reputation on which a career writing fiction could be based. A book producer liked Farrand's idea for a Next Generation nitpicker's guide, and so Farrand spent two years conducting careful analysis of the first six seasons of that series, spending eight to nine hours a day for months watching each episode multiple times, composing a tongue-in-cheek analysis of the plot holes, continuity errors and other trivia in the series. In 1993 Dell Publishing published the first guide, The Nitpicker's Guide for Next Generation Trekkers. By 1994 nearly 800,000 copies had been sold, and four printings published. From 1994 to 1997, similar guides followed annually, including Guides for Star Trek: The Original Series, Star Trek: Deep Space Nine and The X-Files, along with a second Next Generation volume. Watching the episodes and movies of each series in order to compile each Guide took about seven months, leaving Farrand five months out of the year to learn how to write fiction. 
Although exhaustive in their attention to detail, the Guides were not intended as critiques of the series' episodes or movies, but lighthearted musings that Farrand explained with the philosophy, "All nitpickers shall perform their duties with lightheartedness and good cheer," explaining that nitpicking should be about having fun with one's favorite television shows, not pointing fingers and assigning blame. Farrand solicited submissions from readers, who then became members of the "Nitpicker's Guild." He began sending out newsletters in 1994 in order to keep in touch with the Guild, beginning with the April 1994 edition. The Guild numbered 7,450 members from 32 countries as of May 28, 1999. Farrand decided to create an online version of the newsletter called Nitpicker Central, or Nitcentral; this took the form of an HTML feature called "This Week at Nitcentral", and debuted in November 1997. The hardcopy version of the newsletter also continued, with a total of 17 issues published intermittently, ceasing with the issue dated October 1998, which coincided with the creation of Nitcentral's message boards, using free Discus software. Farrand was initially Nitcentral's sole moderator, with the site covering only four topics, the live action Star Trek television programs that had been produced up to then: Star Trek: The Original Series, Star Trek: The Next Generation, Star Trek: Deep Space Nine, and Star Trek: Voyager. By June 2009, the topics listed on the main Topics page numbered 89. Farrand planned to release a Nitpicker's Guide for Star Wars in April 1999, one month before Star Wars: Episode I – The Phantom Menace, but publishers became wary of publishing media tie-in products as a result of copyright infringement lawsuits brought against similar products. Although the lawsuits did not name Farrand's Guides as an example — and in fact, even cited the Guides as an example of what was legal — Del Rey ceased publishing Farrand's Guides, leaving Nitcentral as the sole ongoing outlet for the Guild. As the site expanded, Farrand assigned dozens of moderators to oversee the site's various topics. Although Farrand has since stepped down as a moderator of day-to-day activities, he remains the ultimate authority on the site and will step in occasionally to resolve matters of severe conflict among visitors and moderators, who refer to him as "The Chief". Church work Following the cancellation of the Guides, Farrand returned to the computer consulting industry, hoping to begin writing his first novel in his free time. Those plans changed when his wife Lynette, who had served as music minister at their church for 16 years, decided to take a two-year break. Farrand, a devout Christian who mentions Jesus Christ in the acknowledgments of all his books, agreed to serve as interim music minister; combined with his consultation job, this consumed all of his time, and he worked seven days a week. He eventually stepped down as music minister on September 28, 2003. As a novelist Farrand's initial attempts to publish through a small publisher in August 2003 were not fruitful, and he ultimately decided to self-publish through on-demand publisher Xlibris. His novel The Son, the Wind and the Reign was published in 2004. It depicts a world in which Jesus Christ and his followers have returned to Earth to rule with an iron rod for a thousand years.
Twenty years into the new rule, a resistance fighter named Avery Foster decides to confront the new rulers, including Judge Thomas Stone, whose brutal interpretations of the new law have oppressed anyone daring to rebel. Farrand wrote the novel in part to explore the question of how one can distinguish between the divine and extraterrestrials, and added a topic to Nitcentral for discussion of the novel. In 2007, Farrand published Grumpy Old Prophets: A Christmas Fable for Adults. He also began a new Internet provider venture called Zarks, providing high-speed Internet access to the rural areas in and around Greene County, Missouri. Personal life Farrand lives with his wife Lynette and his daughter Elizabeth in Springfield, Missouri. Books Nitpicker's Guides The Nitpicker's Guide for Next Generation Trekkers (1993) The Nitpicker's Guide for Classic Trekkers (1994) The Nitpicker's Guide for Next Generation Trekkers, Volume II (1995) Nitpicker's Fun & Games for Next Generation Trekkers (1995) The Nitpicker's Guide for Deep Space Nine Trekkers (1996) The Nitpicker's Guide for X-Philes (1997) On audio cassette The Nitpicker's Guide for Next Generation Trekkers Part 3 Fiction The Son, the Wind and the Reign Grumpy Old Prophets: A Christmas Fable for Adults Windfall: The 99 and 1: The Conviction Opus, Part One (2014) Windfall: Broadcast: The Conviction Opus, Part Two (2015) Windfall: The Strait Gate: The Conviction Opus, Part Three (2015) Non-fiction Still Whispers: Meditations To Help You Calm The Atmosphere Of Your Life And Find Abundance (2008) References External links Nitpicker Central 1958 births Living people 21st-century American male writers 21st-century American non-fiction writers 21st-century American novelists American children's writers American Christian writers American expatriates in the Philippines American fiction writers American male non-fiction writers American male novelists Christian novelists People in information technology
Phil Farrand
[ "Technology" ]
1,858
[ "People in information technology", "Information technology" ]
4,133,407
https://en.wikipedia.org/wiki/Uranium%20Information%20Centre
The Uranium Information Centre (UIC) was an Australian organisation primarily concerned with increasing the public understanding of uranium mining and nuclear electricity generation. Founded in 1978, the Centre worked for many years to provide information about the development of the Australian uranium industry, the contribution it can make to world energy supplies and the benefits it can bring Australia. It was a broker of information on all aspects of the mining and processing of uranium, the nuclear fuel cycle, and the role of nuclear energy in helping to meet world electricity demand. The Centre was funded by companies involved in uranium exploration, mining and export in Australia. In 1995 Ian Hore-Lacy assumed the role of General Manager of the UIC, a position he held until 2001. The UIC's website was established in the year of his appointment. After leaving the UIC, Ian Hore-Lacy went on to work for the World Nuclear Association (WNA) as Director of Public Information for 12 years and as of 2015 he continues to work there as a Senior Research Analyst. In the late 2000s, the UIC's main information-providing function was assumed by the WNA and World Nuclear News (WNN), based in London, UK. In 2008 the UIC's purely domestic function was taken over by the Australian Uranium Association, and was subsequently absorbed by the Minerals Council of Australia's uranium portfolio in 2013. See also List of uranium mines World Uranium Hearing Uranium mining debate External links World Nuclear Association Homepage World Nuclear News Homepage Australian Uranium Association Homepage Australian educational websites Organizations established in 1978 1978 establishments in Australia Nuclear organizations Uranium mining in Australia
Uranium Information Centre
[ "Engineering" ]
326
[ "Nuclear organizations", "Energy organizations" ]
4,133,427
https://en.wikipedia.org/wiki/Specific%20strength
The specific strength is a material's (or muscle's) strength (force per unit area at failure) divided by its density. It is also known as the strength-to-weight ratio or strength/weight ratio or strength-to-mass ratio. In fiber or textile applications, tenacity is the usual measure of specific strength. The SI unit for specific strength is Pa⋅m³/kg, or N⋅m/kg, which is dimensionally equivalent to m²/s², though the latter form is rarely used. Specific strength has the same units as specific energy, and is related to the maximum specific energy of rotation that an object can have without flying apart due to centrifugal force. Another way to describe specific strength is breaking length, also known as self support length: the maximum length of a vertical column of the material (assuming a fixed cross-section) that could suspend its own weight when supported only at the top. For this measurement, the definition of weight is the force of gravity at the Earth's surface (standard gravity, 9.80665 m/s²) applying to the entire length of the material, not diminishing with height. This usage is more common with certain specialty fiber or textile applications. The materials with the highest specific strengths are typically fibers such as carbon fiber, glass fiber and various polymers, and these are frequently used to make composite materials (e.g. carbon fiber-epoxy). These materials and others such as titanium, aluminium, magnesium and high strength steel alloys are widely used in aerospace and other applications where weight savings are worth the higher material cost. Note that strength and stiffness are distinct. Both are important in design of efficient and safe structures. Calculations of breaking length The breaking length is given by L = σ / (ρ g), where L is the length, σ is the tensile strength, ρ is the density and g is the acceleration due to gravity (≈ 9.8 m/s²). Examples The data in this table represent best cases, and are intended only to give a rough figure. Note: Multiwalled carbon nanotubes have the highest tensile strength of any material yet measured, with labs producing them at a tensile strength of 63 GPa, still well below their theoretical limit of 300 GPa. The first nanotube ropes (20 mm long) whose tensile strength was published (in 2000) had a strength of 3.6 GPa, still well below their theoretical limit. The density is different depending on the manufacturing method, and the lowest value is 0.037 or 0.55 (solid). The 'Yuri' and space tethers The International Space Elevator Consortium uses the "Yuri" as a name for the SI units describing specific strength. Specific strength is of fundamental importance in the description of space elevator cable materials. One Yuri is conceived to be the SI unit for yield stress (or breaking stress) per unit of density of a material under tension. One Yuri equals 1 Pa⋅m³/kg or 1 N⋅m/kg, which is the breaking/yielding force per linear density of the cable under tension. A functional Earth space elevator would require a tether of 30–80 megaYuri (corresponding to 3100–8200 km of breaking length). Fundamental limit on specific strength The null energy condition places a fundamental limit on the specific strength of any material. The specific strength is bounded to be no greater than c² ≈ 9×10^16 N⋅m/kg, where c is the speed of light. This limit is achieved by electric and magnetic field lines, QCD flux tubes, and the fundamental strings hypothesized by string theory. Tenacity (textile strength) Tenacity is the customary measure of strength of a fiber or yarn.
It is usually defined as the ultimate (breaking) force of the fiber (in gram-force units) divided by the denier. Because denier is a measure of the linear density, the tenacity works out to be not a measure of force per unit area, but rather a quasi-dimensionless measure analogous to specific strength. A tenacity of 1 gram-force per denier corresponds to a breaking length of 9 km (since one denier is the mass in grams of 9,000 metres of fiber), or a specific strength of roughly 88 kN⋅m/kg. Tenacity is mostly reported as cN/tex. See also Specific modulus Space elevator Space tether References External links Specific stiffness - Specific strength chart, University of Cambridge, Department of Engineering Engineering ratios Materials science Solid mechanics
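A short numerical illustration of the breaking-length formula given above. The material values are rough, assumed figures chosen only to show the calculation; they are not authoritative data for any particular grade of steel or carbon fibre.

```python
# Hypothetical illustration: breaking length and specific strength.
STANDARD_GRAVITY = 9.80665  # m/s^2

def specific_strength(tensile_strength_pa, density_kg_m3):
    """Specific strength in N*m/kg (equivalently Pa*m^3/kg, or 'Yuri')."""
    return tensile_strength_pa / density_kg_m3

def breaking_length_m(tensile_strength_pa, density_kg_m3):
    """Self-support (breaking) length L = sigma / (rho * g), in metres."""
    return tensile_strength_pa / (density_kg_m3 * STANDARD_GRAVITY)

# Assumed, approximate values for a generic steel and a carbon-fibre composite.
materials = {
    "steel (assumed 1.0 GPa, 7850 kg/m^3)": (1.0e9, 7850.0),
    "carbon fibre (assumed 4.0 GPa, 1750 kg/m^3)": (4.0e9, 1750.0),
}

for name, (sigma, rho) in materials.items():
    print(f"{name}: "
          f"specific strength = {specific_strength(sigma, rho) / 1e3:.0f} kN*m/kg, "
          f"breaking length = {breaking_length_m(sigma, rho) / 1e3:.1f} km")
```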
Specific strength
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
869
[ "Solid mechanics", "Applied and interdisciplinary physics", "Metrics", "Engineering ratios", "Quantity", "Materials science", "Mechanics", "nan" ]
4,133,969
https://en.wikipedia.org/wiki/Ethyl%20loflazepate
Ethyl loflazepate (marketed under the brand names Meilax, Ronlax and Victan) is a drug which is a benzodiazepine derivative. It possesses anxiolytic, anticonvulsant, sedative and skeletal muscle relaxant properties. In animal studies it was found to have low toxicity, although in rats evidence of pulmonary phospholipidosis occurred, with pulmonary foam cells developing with long-term use of very high doses. Its elimination half-life is 51–103 hours. Its mechanism of action is similar to that of other benzodiazepines. Ethyl loflazepate also produces an active metabolite which is stronger than the parent compound. Ethyl loflazepate was designed to be a prodrug for descarboxyloflazepate, its active metabolite. It is the active metabolite which is responsible for most of the pharmacological effects rather than ethyl loflazepate. The main metabolites of ethyl loflazepate are descarbethoxyloflazepate, loflazepate and 3-hydroxydescarbethoxyloflazepate. Accumulation of the active metabolites of ethyl loflazepate is not affected in those with kidney failure or impairment. The symptoms of an overdose of ethyl loflazepate include sleepiness, agitation and ataxia. Hypotonia may also occur in severe cases. These symptoms occur much more frequently and severely in children. Death from therapeutic maintenance doses of ethyl loflazepate taken for 2 – 3 weeks has been reported in 3 elderly patients. The cause of death was asphyxia due to benzodiazepine toxicity. High doses of the antidepressant fluvoxamine may potentiate the adverse effects of ethyl loflazepate. Ethyl loflazepate is commercialized in Mexico, under the trade name Victan. It is officially approved for the following conditions: Anxiety Post-trauma anxiety Anxiety associated with severe neuropathic pain Generalized anxiety disorder (GAD) Obsessive–compulsive disorder Panic attack Delirium tremens See also Benzodiazepine References External links Benzodiazepines Chloroarenes Ethyl esters Hypnotics Lactams 2-Fluorophenyl compounds
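A rough numerical sketch of why an elimination half-life in the range quoted above leads to accumulation under repeated dosing. It assumes linear kinetics, a once-daily dosing interval and the textbook accumulation-ratio formula; these assumptions are not drawn from the article and the output is illustrative, not clinical guidance.

```python
# Hypothetical sketch: accumulation of a long half-life drug under repeated
# once-daily dosing, using the accumulation ratio R = 1 / (1 - 2**(-tau/t_half)).
dosing_interval_h = 24.0  # assumed dosing interval (hours)

for t_half_h in (51.0, 103.0):  # half-life range quoted in the article
    ratio = 1.0 / (1.0 - 2.0 ** (-dosing_interval_h / t_half_h))
    time_to_steady_state_d = 5 * t_half_h / 24.0  # ~5 half-lives, a common rule of thumb
    print(f"t1/2 = {t_half_h:5.0f} h: accumulation ratio ~ {ratio:.1f}, "
          f"steady state after roughly {time_to_steady_state_d:.0f} days")
```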
Ethyl loflazepate
[ "Biology" ]
499
[ "Hypnotics", "Behavior", "Sleep" ]
4,133,988
https://en.wikipedia.org/wiki/Conductor%20gallop
Conductor gallop is the high-amplitude, low-frequency oscillation of overhead power lines due to wind. The movement of the wires occurs most commonly in the vertical plane, although horizontal or rotational motion is also possible. The natural frequency mode tends to be around 1 Hz, leading the often graceful periodic motion to also be known as conductor dancing. The oscillations can exhibit amplitudes in excess of a metre, and the displacement is sometimes sufficient for the phase conductors to infringe operating clearances (coming too close to other objects) and cause flashover. The forceful motion also adds significantly to the loading stress on insulators and electricity pylons, raising the risk of mechanical failure of either. The mechanisms that initiate gallop are not always clear, though it is thought to be often caused by asymmetric conductor aerodynamics due to ice build-up on one side of a wire. The crescent of encrusted ice approximates an aerofoil, altering the normally round profile of the wire and increasing the tendency to oscillate. Gallop can be a significant problem for transmission system operators, particularly where lines cross open, windswept country and are at risk of ice loading. If gallop is likely to be a concern, designers can employ smooth-faced conductors, whose improved icing and aerodynamic characteristics reduce the motion. Additionally, anti-gallop devices may be mounted to the line to convert the lateral motion to a less damaging twisting one. Increasing the tension in the line and adopting more rigid insulator attachments have the effect of reducing galloping motion. These measures can be costly, are often impractical after the line has been constructed, and can increase the tendency for the line to exhibit high frequency oscillations. If ice loading is suspected, it may be possible to increase power transfer on the line, and so raise its temperature by Joule heating, melting the ice. The sudden loss of ice from a line can result in a phenomenon called "jump", in which the catenary dramatically rebounds upwards in response to the change in weight. If the risk of trip is high, the operator may elect to pre-emptively switch out the line in a controlled manner rather than face an unexpected fault. The risk of mechanical failure of the line remains. Theoretical analysis The earliest studies of long wires embedded in a moving fluid date to the late 19th century, when Vincenc Strouhal explained "singing" wires in terms of vortex shedding. Gallop is now known to arise from a different physical phenomenon: aerodynamic lift. Ice accumulated on the wire destroys the circular symmetry of the wire, and the natural up-and-down "singing" motion of a wire changes the angle of attack of the iced wire in the wind. For certain shapes, the variation in lift across the different angles is so large that it excites large-scale oscillations. Mathematically, an unloaded extended wire in dead air can be approximated as a mass m suspended at height y by a spring with constant k. If the wind moves horizontally with velocity U while the wire moves vertically with velocity dy/dt, the relative flow makes an angle of attack α with the wire, where tan α = (dy/dt)/U. At large wind velocities, the lift and drag induced on the wire are proportional to the square of the wind velocity, but the proportionality coefficients C_L(α) and C_D(α) (for a noncircular wire) depend on α; here ρ denotes the fluid density and ℓ the length of the wire, which also enter the lift and drag as overall factors. In principle, the excited oscillation can take three forms: rotation of the wire, horizontal sway, or vertical plunge.
Most gallops combine rotation with at least one of the other two forms. For algebraic simplicity, this article will analyze a conductor only experiencing plunge (and not rotation); a similar treatment can address other dynamics. From geometrical considerations, the vertical component of the aerodynamic force can be expanded keeping only terms first-order in (dy/dt)/U, valid in the regime where the wire's vertical velocity is much smaller than the wind speed. Gallop occurs whenever the driving coefficient of this linearized force exceeds the natural damping of the wire; in particular, a necessary-but-not-sufficient condition is that dC_L/dα + C_D < 0. This is known as the Den Hartog gallop condition, after the engineer who first discovered it. At low wind velocities, the above analysis begins to fail, because the gallop oscillation couples to the vortex shedding. Flutter A similar aeolian phenomenon is flutter, caused by vortices on the leeward side of the wire, which is distinguished from gallop by its high-frequency (10 Hz), low-amplitude motion. To control flutter, transmission lines may be fitted with tuned mass dampers (known as Stockbridge dampers) clamped to the wires close to the towers. The use of bundle conductor spacers can also be of benefit. See also Aeolian vibration References Aerodynamics Electric power transmission Mechanical vibrations
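A small numerical sketch of the Den Hartog condition stated above. The lift and drag coefficient curves are invented for illustration (real curves depend on the ice profile and must be measured or computed), so the stability verdicts printed below are purely hypothetical.

```python
import numpy as np

# Hypothetical check of the Den Hartog gallop criterion: a conductor section is
# prone to gallop only where dCL/dalpha + CD < 0.
alpha_deg = np.array([-8, -4, 0, 4, 8, 12, 16])                 # angle of attack
cl = np.array([0.45, 0.30, 0.00, -0.35, -0.60, -0.55, -0.30])   # assumed lift coefficients
cd = np.full_like(cl, 0.15)                                     # assumed drag coefficient

dcl_dalpha = np.gradient(cl, np.deg2rad(alpha_deg))  # lift-curve slope per radian
den_hartog = dcl_dalpha + cd

for a, d in zip(alpha_deg, den_hartog):
    status = "unstable (gallop possible)" if d < 0 else "stable"
    print(f"alpha = {a:+3d} deg: dCL/dalpha + CD = {d:+.2f} -> {status}")
```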
Conductor gallop
[ "Physics", "Chemistry", "Engineering" ]
965
[ "Structural engineering", "Aerodynamics", "Mechanics", "Mechanical vibrations", "Aerospace engineering", "Fluid dynamics" ]
4,134,885
https://en.wikipedia.org/wiki/Contact%20immunity
Contact immunity is the property of some vaccines, where a vaccinated individual can confer immunity upon unimmunized individuals through contact with bodily fluids or excrement. In other words, if person "A" has been vaccinated for virus X and person "B" has not, person "B" can receive immunity to virus X just by coming into contact with person "A". The term was coined by Romanian physician Ioan Cantacuzino. The potential for contact immunity exists primarily in "live" or attenuated vaccines. Vaccination with a live, but attenuated, virus can produce immunity to more dangerous forms of the virus. These attenuated viruses produce little or no illness in most people. However, the live virus multiplies briefly, may be shed in body fluids or excrement, and can be contracted by another person. If this contact produces immunity and carries no notable risk, it benefits an additional person, and further increases the immunity of the group. The most prominent example of contact immunity was the oral polio vaccine (OPV). This live, attenuated polio vaccine was widely used in the US between 1960 and 1990; it continues to be used in polio eradication programs in developing countries because of its low cost and ease of administration. It is popular, in part, because it is capable of contact immunity. Recently immunized children "shed" live virus in their feces for a few days after immunization. About 25 percent of people coming into contact with someone immunized with OPV gained protection from polio through this form of contact immunity. Although contact immunity is an advantage of OPV, the risk of vaccine-associated paralytic poliomyelitis—affecting 1 child per 2.4 million OPV doses administered—led the Centers for Disease Control and Prevention (CDC) to cease recommending its use in the US as of January 1, 2010, in favor of inactivated poliovirus vaccine (IPV). The CDC continues to recommend OPV over IPV for global polio eradication activities. The main drawback of live virus–based vaccines is that a few people who are vaccinated or exposed to those who have been vaccinated may develop severe disease. Those with defective immune function are the most vulnerable. In the case of OPV, an average of eight to nine adults contracted paralytic polio from contact with a recently immunized child each year. As the risk of catching polio in the Western Hemisphere diminished, the risk of contact infection with the attenuated polio virus outweighed the advantages of OPV, leading the CDC to recommend its discontinuation. Contact immunity differs from herd immunity, a different type of group protection, in which risk for unimmunized individuals is reduced if they are surrounded by immunized individuals who are unlikely to contract, harbor, or transmit the disease. References Epidemiology Polio Vaccination
Contact immunity
[ "Biology", "Environmental_science" ]
612
[ "Epidemiology", "Vaccination", "Environmental social science" ]
4,135,000
https://en.wikipedia.org/wiki/Laurel%20wreath
A laurel wreath is a symbol of triumph, a wreath made of connected branches and leaves of the bay laurel (), an aromatic broadleaf evergreen. It was also later made from spineless butcher's broom (Ruscus hypoglossum) or cherry laurel (Prunus laurocerasus). It is worn as a chaplet around the head, or as a garland around the neck. Wreaths and crowns in antiquity, including the laurel wreath, trace back to Ancient Greece. In Greek mythology, the god Apollo, who is patron of lyrical poetry, musical performance and skill-based athletics, is conventionally depicted wearing a laurel wreath on his head in all three roles. Wreaths were awarded to victors in athletic competitions, including the ancient Olympics; for victors in athletics they were made of wild olive tree known as "kotinos" (), (sc. at Olympia) – and the same for winners of musical and poetic competitions. In Rome they were symbols of martial victory, crowning a successful commander during his triumph. Whereas ancient laurel wreaths are most often depicted as a horseshoe shape, modern versions are usually complete rings. In common modern idiomatic usage, a laurel wreath or "crown" refers to a victory. The expression "resting on one's laurels" refers to someone relying entirely on long-past successes for continued fame or recognition, whereas to "look to one's laurels" means to be careful of losing rank to competition. Background Apollo, the patron of sport, is associated with the wearing of a laurel wreath. This association arose from the ancient Greek mythology story of Apollo and Daphne. Apollo mocked the god of love, Eros (Cupid), for his use of bow and arrow, since Apollo is also patron of archery. The insulted Eros then prepared two arrows—one of gold and one of lead. He shot Apollo with the gold arrow, instilling in the god a passionate love for the river nymph Daphne. He shot Daphne with the lead arrow, instilling in her a hatred of Apollo. Apollo pursued Daphne until she begged to be free of him and was turned into a laurel tree. Apollo vowed to honor Daphne forever and used his powers of eternal youth and immortality to render the laurel tree evergreen. Apollo then crafted himself a wreath out of the laurel branches and turned Daphne into a cultural symbol for him and other poets and musicians. Academic use In some countries, the laurel wreath is used as a symbol of the master's degree. The wreath is given to young masters at the university graduation ceremony. The word "laureate" in 'poet laureate' refers to the laurel wreath. For example, the greatly admired medieval Florentine poet and philosopher Dante Alighieri is often represented in paintings and sculpture wearing a laurel wreath. In Italy, the term laureato is used in academia to refer to any student who has graduated. Right after the graduation ceremony, or laurea in Italian, the student receives a laurel wreath to wear for the rest of the day. This tradition originated at the University of Padua and has spread in the last two centuries to all Italian universities. At Connecticut College in the United States, members of the junior class carry a laurel chain, which the seniors pass through during commencement. It represents nature and the continuation of life from year to year. Immediately following commencement, the junior girls write out with the laurels their class year, symbolizing they have officially become seniors and the period will repeat itself the following spring. 
At Mount Holyoke College in South Hadley, Massachusetts, USA, laurel has been a fixture of commencement traditions since 1900, when graduating students carried or wore laurel wreaths. In 1902, the chain of mountain laurel was introduced; since then, tradition has been for seniors to parade around the campus, carrying and linked by the chain. The mountain laurel represents the bay laurel used by the Romans in wreaths and crowns of honor. At Reed College in Portland, Oregon, United States, members of the senior class receive laurel wreaths upon submitting their senior thesis in May. The tradition stems from the use of laurel wreaths in athletic competitions; the seniors have "crossed the finish line", so to speak. At St. Mark's School in Southborough, Massachusetts, students who successfully complete three years of one classical language and two of the other earn the distinction of the Classics Diploma and the honor of wearing a laurel wreath on Prize Day. In Sweden, those receiving a doctorate or an honorary doctorate in subjects traditionally falling within the Faculty of Philosophy (meaning philosophy, languages, arts, history and social sciences, as well as the natural sciences), receive a laurel wreath during the ceremony of conferral of the degree. In Finland, in University of Helsinki a laurel wreath is given during the ceremony of conferral for master's degree. Architectural and decorative arts motif The laurel wreath is a common motif in architecture, furniture, and textiles. The laurel wreath is seen carved in the stone and decorative plaster works of Robert Adam, and in Federal, Regency, Directoire, and Beaux-Arts periods of architecture. In decorative arts, especially during the Empire period, the laurel wreath is seen woven in textiles, inlaid in marquetry, and applied to furniture in the form of gilded brass mounts. Alfa Romeo added a laurel wreath to their logo after they won the inaugural Automobile World Championship in 1925 with the P2 racing car. As used in heraldry Laurel wreaths are commonly used in heraldry. They may be used as a charge in the shield, around the shield, or on top of it like an annular form. Wreaths are a form of headgear akin to circlets. In heraldry, a twisted band of cloth holds a mantling onto a helmet. This type of charge is called a "torse". A wreath is a circlet of foliage, usually with leaves, but sometimes with flowers. Wreaths may also be made from oak leaves, flowers, holly and rosemary; and are different from chaplets. While usually annular, they may also be penannular like a brooch. In the Society for Creative Anachronism, laurel wreaths are reserved for use in the arms of a territorial branch, which are required to include one or more. Wreath of service The "wreath of service" is located on all commissioner position patches in the Boy Scouts of America. This is a symbol for the service rendered to units and the continued partnership between volunteers and professional Scouter. The wreath of service represents commitment to program and unit service. Further reading See also Footnotes References External links Wreaths (attire) Visual motifs Architectural elements Headgear in heraldry Roman-era clothing Plants in culture
Laurel wreath
[ "Mathematics", "Technology", "Engineering" ]
1,368
[ "Visual motifs", "Building engineering", "Symbols", "Architectural elements", "Components", "Architecture" ]
4,135,133
https://en.wikipedia.org/wiki/Geophysical%20global%20cooling
Before the concept of plate tectonics, global cooling was a geophysical theory by James Dwight Dana, also referred to as the contracting earth theory. It suggested that the Earth had been in a molten state, and features such as mountains formed as it cooled and shrank. As the interior of the Earth cooled and shrank, the rigid crust would have to shrink and crumple. The crumpling could produce features such as mountain ranges. Application The Earth was compared to a cooling ball of iron, or a steam boiler with shifting boiler plates. By the early 1900s, it was known that temperature increased with increasing depth. With the thickness of the crust, the "boiler plates", being estimated at ten to fifty miles, the downward pressure would be hundreds of thousands of pounds per square inch. Although groundwater was expected to turn to steam at a great depth, usually the downward pressure would contain any steam. Steam's effect upon molten rock was suspected of being a cause of volcanoes and earthquakes, as it had been noticed that most volcanoes are near water. It was not clear whether the molten rock from volcanoes had its origin in the molten rock under the crust, or if increased heat due to pressure under mountains caused the rock to melt. One of the reasons for volcanoes was as a way in which "the contracting earth disposes of the matter it can no longer contain." A relationship between earthquakes and volcanoes had been noted, although the causes were not known. Fault lines and earthquakes tended to happen along the boundaries of the shifting "boiler plates", but the folding of mountains indicated that sometimes the plates buckled. In the early 1900s, Professor Eduard Suess used the theory to explain the 1908 Messina earthquake, being of the opinion that the Earth's crust was gradually shrinking everywhere. He also predicted that eruptions would follow the earthquake and tsunami in Southern Italy. He attributed the earthquake to the sinking of the Earth's crust, in the zone of which the Aeolian Islands are the center. He declared that as the process of sinking went on, the Calabrian and Sicilian highlands on either side of the Straits of Messina would be submerged, only the highest peaks remaining above the sea. The strait, he said, would thereby be greatly widened. Similarly, Professor Robert T. Hill explained at that time that "the rocks are being folded, fractured and otherwise broken or deformed by the great shrinking and settling of the earth's crust as a whole. The contraction of the earth's sphere is the physical shrinkage of age that is measured in aeons instead of years. The prehistoric convulsions of the earth before man inhabited this planet were terrific, almost inconceivable." There "was no doubt that earthquakes are diminishing." The displacement of the 1906 San Francisco earthquake was only a few feet, while prehistoric earthquakes made fissures and slides of 20,000 feet. The Pacific Ring of Fire had been noticed, as well as a second earthquake belt which went through: the Philippines Panama the Caribbean Spain the Alps the Himalayas Asia to Japan A contracting Earth served as framework for Leopold Kober and Hans Stille who worked on geosyncline theory in the first half of the 20th century. Objections Some of the objections include: Some large-scale features of the Earth are the result of extension rather than shortening. After radioactive decay was discovered, it was realized it would release heat inside the planet. 
This undermines the cooling effect upon which the shrinking planet theory is based. Identical fossils have been found thousands of kilometres apart, showing that the continents were once joined in a single landmass which broke apart because of plate tectonics. Current status This theory is now disproven and considered obsolete. In contrast to Earth, however, global cooling remains the dominant explanation for scarp (cliff) features on the planet Mercury. After the resumption of lunar exploration in the 1990s, it was discovered that there are scarps across the Moon's surface which are caused by contraction due to cooling. See also Expanding Earth Timeline of the development of tectonophysics References Bibliography Geophysics Obsolete geology theories Geodynamics
Geophysical global cooling
[ "Physics" ]
835
[ "Applied and interdisciplinary physics", "Geophysics" ]
4,135,156
https://en.wikipedia.org/wiki/Crashworthiness
Crashworthiness is the ability of a structure to protect its occupants during an impact. This is commonly tested when investigating the safety of aircraft and vehicles. Different criteria are used to assess how safe a structure is in a crash, depending on the type of impact and the vehicle involved. Crashworthiness may be assessed either prospectively, using computer models (e.g., RADIOSS, LS-DYNA, PAM-CRASH, MSC Dytran, MADYMO) or experiments, or retrospectively, by analyzing crash outcomes. Several criteria are used to assess crashworthiness prospectively, including the deformation patterns of the vehicle structure, the acceleration experienced by the vehicle during an impact, and the probability of injury predicted by human body models. Injury probability is defined using criteria, which are mechanical parameters (e.g., force, acceleration, or deformation) that correlate with injury risk. A common injury criterion is the head injury criterion (HIC). Crashworthiness is measured after the fact by looking at injury risk in real-world crashes. Often, regression or other statistical methods are used to account for the many other factors that can affect the outcome of a crash. History Aviation The history of human tolerance to deceleration can likely be traced to the studies by John Stapp to investigate the limits of human tolerance in the 1940s and 1950s. In the 1950s and 1960s, the US Army began serious accident analysis into crashworthiness as a result of fixed-wing and rotary-wing accidents. As the US Army's doctrine changed, helicopters became the primary mode of transportation in Vietnam. Due to fires and the forces of deceleration on the spine, pilots were sustaining spinal injuries in crashes that they would have survived otherwise. Work began to develop energy-absorbing seats to reduce the chance of spinal injuries during training and combat in Vietnam. Considerable research was conducted into human tolerance limits, energy absorption, and structural design to keep people safe in military helicopters. The primary reason is that ejecting from or exiting a helicopter is impractical given the rotor system and typical altitude at which Army helicopters fly. In the late 1960s, the Army published the Aircraft Crash Survival Design Guide. The guide was revised several times and expanded into a set of books with different volumes for different aircraft systems. The goal of this guide is to show engineers what they need to consider when designing military aircraft that can survive a crash. Consequently, the Army established a military standard (MIL-STD-1290A) for light fixed- and rotary-wing aircraft. The standard sets minimum requirements for the safety of human occupants in a crash. These requirements are based on the need to maintain a survivable occupant space or volume and the need to reduce the deceleration loads on the occupant. Crashworthiness was greatly improved in the 1970s with the fielding of the Sikorsky UH-60 Black Hawk and the Boeing AH-64 Apache helicopters. Primary crash injuries were reduced, but secondary injuries within the cockpit continued to occur. This led to the consideration of additional protective devices such as airbags. Airbags were considered a viable solution for reducing the incidence of head strikes in the cockpits of Army helicopters.
Regulatory agencies The National Highway Traffic Safety Administration, the Federal Aviation Administration, the National Aeronautics and Space Administration, and the Department of Defense have been the leading proponents of crash safety in the United States. Each has issued its own official safety regulations and conducted substantial research and development in the field. See also Airbag Airworthiness Anticlimber Automobile safety Buff strength of rail vehicles Bumper (car) Compressive strength Container compression test Crash test Crash test dummy Hugh DeHaven Jerome F. Lederer Railworthiness Roadworthiness Seakeeping Seat belt Seaworthiness Self-sealing fuel tank Spaceworthiness Telescoping (rail cars) References Further reading RDECOM TR 12-D-12, Full Spectrum Crashworthiness Criteria for Rotorcraft, Dec 2011. USAAVSCOM TR 89-D-22A, Aircraft Crash Survival Design Guide, Volume I - Design Criteria and Checklists, Dec 1989. USAAVSCOM TR 89-D-22B, Aircraft Crash Survival Design Guide, Volume II - Aircraft Design Crash Impact Conditions and Human Tolerance, Dec 1989. USAAVSCOM TR 89-D-22C, Aircraft Crash Survival Design Guide, Volume III - Aircraft Structural Crash Resistance, Dec 1989. USAAVSCOM TR 89-D-22D, Aircraft Crash Survival Design Guide, Volume IV - Aircraft Seats, Restraints, Litters, and Cockpit/Cabin Delethalization, Dec 1989. USAAVSCOM TR 89-D-22E, Aircraft Crash Survival Design Guide, Volume V - Aircraft Postcrash Survival, Dec 1989. External links Army Helicopter Crashworthiness at DTIC Basic Principle of Helicopter Crashworthiness at US Army Aeromedical Laboratory National Crash Analysis Center NHTSA Crashworthiness Rulemaking Activities History of Energy Absorption Systems for Crashworthy Helicopter Seats at FAA MIT Impact and Crashworthiness Lab School Bus Crashworthiness Research Rail Equipment Crashworthiness Transport safety Aviation accidents and incidents
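A minimal numerical sketch of the head injury criterion mentioned above, assuming the common HIC15 formulation (averaging window capped at 15 ms, acceleration expressed in g). The acceleration pulse is synthetic and the brute-force search is illustrative, not an optimized or certified crash-analysis tool.

```python
import numpy as np

# HIC = max over (t1, t2) of (t2 - t1) * [ (1/(t2-t1)) * integral(a dt) ]^2.5
def hic(time_s, accel_g, max_window_s=0.015):
    best = 0.0
    for i in range(len(time_s)):
        for j in range(i + 1, len(time_s)):
            dt = time_s[j] - time_s[i]
            if dt > max_window_s:
                break  # window longer than the 15 ms cap
            avg_a = np.trapz(accel_g[i:j + 1], time_s[i:j + 1]) / dt
            best = max(best, dt * avg_a ** 2.5)
    return best

t = np.linspace(0.0, 0.05, 501)                        # 50 ms trace, 0.1 ms steps
a = 60.0 * np.exp(-((t - 0.02) / 0.005) ** 2)          # synthetic 60 g pulse
print(f"HIC15 (synthetic pulse) = {hic(t, a):.0f}")
```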
Crashworthiness
[ "Physics" ]
1,056
[ "Physical systems", "Transport", "Transport safety" ]
4,135,179
https://en.wikipedia.org/wiki/Growler%20%28electrical%20device%29
A growler is an electrical device primarily used for testing a motor for shorted coils. A growler consists of a coil of wire wrapped around an iron core and connected to a source of alternating current. When placed on the armature or stator core of a motor, the growler acts as the primary of a transformer and the armature coils act as the secondary. A "feeler", a thin strip of steel (such as a hacksaw blade), can be used as the short detector. Motor testing The alternating magnetic flux set up by the growler passes through the windings of the armature coil, generating an alternating voltage in the coil. A short in the coil creates a closed circuit that will act like the secondary coil of a transformer, with the growler acting like the primary coil. This will induce an alternating current in the shorted coil that will in turn cause an alternating magnetic field to encircle the shorted armature coil. A flat, broad, flexible piece of metal containing iron is used to detect the magnetic field generated by a shorted coil. A hacksaw blade is commonly used as a feeler. The alternating magnetic field induced by a shorted coil is strong at the surface of the armature, and when the feeler is lightly touched to the iron core of an armature winding, small currents are induced in the feeler that generate a third alternating magnetic field surrounding the feeler. With the growler energized, the feeler is moved from slot to slot. When the feeler is moved over a slot containing the shorted coil, the alternating magnetic field will alternately attract and release the feeler, causing it to vibrate in sync with the alternating current. A strong vibration of the feeler accompanied by a growling noise indicates that the coil is shorted. Other uses Along with the standard application, the growler can be used: to test series and interpole (commutating) fields of a DC motor to determine phasing and polarity in multiwinding armatures to test rotors in rotating frequency changers, as well as in wound rotors to test shorts between turns in taped coils before installation into an armature or a stator as a low voltage isolation transformer as a high voltage autotransformer bucking or boosting for numerous tests on various types of equipment for preheating or baking armatures and rotors. References Electrical test equipment Electric transformers Tools Electric motors
Growler (electrical device)
[ "Technology", "Engineering" ]
519
[ "Engines", "Electrical test equipment", "Electric motors", "Measuring instruments", "Electrical engineering" ]
4,135,185
https://en.wikipedia.org/wiki/Li%C3%A9nard%20equation
In mathematics, more specifically in the study of dynamical systems and differential equations, a Liénard equation is a type of second-order ordinary differential equation named after the French physicist Alfred-Marie Liénard. During the development of radio and vacuum tube technology, Liénard equations were intensely studied as they can be used to model oscillating circuits. Under certain additional assumptions Liénard's theorem guarantees the uniqueness and existence of a limit cycle for such a system. A Liénard system with piecewise-linear functions can also contain homoclinic orbits. Definition Let f and g be two continuously differentiable functions on the real line, with f an even function and g an odd function. Then the second order ordinary differential equation of the form d²x/dt² + f(x)·dx/dt + g(x) = 0 is called a Liénard equation. Liénard system The equation can be transformed into an equivalent two-dimensional system of ordinary differential equations. We define F(x) = ∫₀ˣ f(s) ds, x₁ = x and x₂ = dx/dt + F(x); then the system dx₁/dt = x₂ − F(x₁), dx₂/dt = −g(x₁) is called a Liénard system. Alternatively, since the Liénard equation itself is also an autonomous differential equation, the substitution v = dx/dt leads the Liénard equation to become a first order differential equation: v·dv/dx + f(x)·v + g(x) = 0, which is an Abel equation of the second kind. Example The Van der Pol oscillator d²x/dt² − μ(1 − x²)·dx/dt + x = 0 is a Liénard equation, with f(x) = μ(x² − 1) and g(x) = x. The solution of a Van der Pol oscillator has a limit cycle. Such a cycle is a solution of a Liénard equation with f negative at small |x| and positive otherwise. The Van der Pol equation has no exact, analytic solution. Such a solution for a limit cycle exists if f(x) is a piecewise constant function. Liénard's theorem A Liénard system has a unique and stable limit cycle surrounding the origin if it satisfies the following additional properties: g(x) > 0 for all x > 0; F(x) has exactly one positive root at some value p, where F(x) < 0 for 0 < x < p and F(x) > 0 and monotonic for x > p. See also Biryukov equation Footnotes External links Dynamical systems Ordinary differential equations Theorems in dynamical systems
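A small numerical sketch of the example above: the Van der Pol equation written as a Liénard system and integrated with a basic fixed-step RK4 scheme. The parameter value, initial condition and step size are arbitrary choices for illustration; the late-time amplitude should settle near the Van der Pol limit cycle (|x| close to 2 for this parameter).

```python
import numpy as np

# Van der Pol as a Lienard system: x' = y - F(x), y' = -g(x),
# with f(x) = mu*(x**2 - 1), F(x) = mu*(x**3/3 - x), g(x) = x.
mu = 1.0
F = lambda x: mu * (x ** 3 / 3.0 - x)
g = lambda x: x

def rhs(state):
    x, y = state
    return np.array([y - F(x), -g(x)])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state, h = np.array([0.1, 0.0]), 0.01    # small initial displacement
trajectory = [state]
for _ in range(10000):                   # 100 time units: long enough to settle
    state = rk4_step(state, h)
    trajectory.append(state)

amplitude = max(abs(p[0]) for p in trajectory[-2000:])
print(f"late-time |x| amplitude ~ {amplitude:.2f} (approach to the limit cycle)")
```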
Liénard equation
[ "Physics", "Mathematics" ]
404
[ "Theorems in dynamical systems", "Mechanics", "Mathematical problems", "Mathematical theorems", "Dynamical systems" ]
4,135,205
https://en.wikipedia.org/wiki/Software%20safety
Software safety (sometimes called software system safety) is an engineering discipline that aims to ensure that software, which is used in safety-related systems (i.e. safety-related software), does not contribute to any hazards such a system might pose. There are numerous standards that govern how safety-related software should be developed and assured in various domains. Most of them classify software according to its criticality and propose techniques and measures that should be employed during the development and assurance: Software for generic electronic safety-related systems: IEC 61508 (part 3 of the standard) Automotive software: ISO 26262 (part 6 of the standard) Railway software: EN 50716 Airborne software: DO-178C/ED-12C Air traffic management software: DO-278A/ED-109A Medical devices: IEC 62304 Nuclear power plants: IEC 60880 Terminology System Safety is the overarching discipline that aims to achieve safety by reducing risks in technical systems to an acceptable level. According to the widely adopted system safety standard IEC 61508, safety is "freedom from unacceptable risk of harm". As software alone – which can be considered as pure information – cannot cause any harm by itself, the term software safety is sometimes dismissed and replaced by "software system safety" (e.g. the Joint Software Systems Safety Engineering Handbook and MIL-STD-882E use this terminology). This stresses that software can only cause harm in the context of a technical system (see NASA Software Safety Guidebook, chapter 2.1.2) that has some effect on its environment. The goal of software safety is to make sure that software does not cause or contribute to any hazards in the system where it is used and that it can be assured and demonstrated that this is the case. This is typically achieved by the assignment of a "safety level" to the software and the selection of appropriate processes for the development and assurance of the software. Assignment of safety levels One of the first steps when creating safety-related software is to classify software according to its safety-criticality. Various standards suggest different levels, e.g. Software Levels A-E in DO-178C, SIL (Safety Integrity Level) 1-4 in IEC 61508, ASIL (Automotive Safety Integrity Level) A-D in ISO 26262. The assignment is typically done in the context of an overarching system, where the worst case consequences of software failures are investigated. For example, automotive standard ISO 26262 requires the performance of a Hazard and Risk Assessment ("HARA") on vehicle level to derive the ASIL of the software executed on a component. Process adherence and assurance It is essential to use an adequate development and assurance process, with appropriate methods and techniques, commensurate with the safety criticality of the software. Software safety standards recommend, and sometimes forbid, the use of particular methods and techniques, depending on the safety level. Most standards suggest a lifecycle model (e.g. EN 50716 suggests, among others, a V-model) and prescribe required activities to be executed during the various phases of the software lifecycle. For example, IEC 61508 requires that software is specified adequately (e.g.
by using formal or semi-formal methods), that the software design should be modular and testable, that adequate programming languages are used, documented code reviews are performed and that testing should be performed on several layers to achieve an adequately high test coverage. The focus on the software development and assurance process stems from the fact that software quality (and hence safety) is heavily influenced by the software process, as suggested by IEC 25010. It is claimed that the process influences the internal software quality attributes (e.g. code quality) and these in turn influence external software quality attributes (e.g. functionality and reliability). The following activities and topics addressed in the development process contribute to safe software. Documentation Comprehensive documentation of the complete development and assurance process is required by virtually all software safety standards. Typically, this documentation is reviewed and endorsed by third parties and is therefore a prerequisite for the approval of safety-related software. The documentation includes various planning documents, requirements specifications, software architecture and design documentation, test cases on various abstraction levels, tool qualification reports, review evidence, verification and validation results, etc. Fig C.2 in EN 50716 lists 32 documents that need to be created over the course of the development lifecycle. Traceability Traceability is the practice of establishing relationships between different types of requirements and between requirements and design, implementation and testing artefacts. According to EN 50716, the objective "is to ensure that all requirements can be shown to have been properly met and that no untraceable material has been introduced". By documenting and maintaining traceability, it becomes possible to follow e.g. a safety requirement into the design of a system (to verify if it is considered adequately), further on into the software source code (to verify if the code fulfils the requirement), and to an appropriate test case and test execution (to verify if the safety requirement has been tested adequately). Software implementation Safety standards can have requirements directly affecting the implementation of the software in source code, such as the selection of an appropriate programming language, the size and complexity of functions, the use of certain programming constructs and the need for coding standards. Part 3 of IEC 61508 contains the following requirements and recommendations: Use of a strongly typed programming language. Some languages are better suited than others for safety-related systems. Languages that support strong typing can detect more faults during the compilation process that would otherwise only be detected at runtime. Therefore, assembler is typically discouraged, whereas high level languages especially geared towards the safety-related market are recommended (e.g. Ada). Use of an appropriate coding standard defining a "safe" language subset, e.g. MISRA C. MISRA-C is a coding standard for the C programming language that aims to improve code quality and safety by disallowing error-prone constructs, or features that are compiler dependent (and whose behavior is therefore undefined). Limiting the use of recursion, pointers and interrupts (as they are error-prone). Disallowing "unstructured control flow in programs", i.e. avoiding jumping in an unstructured way, e.g. by using "goto"-like statements.
Test coverage Appropriate test coverage needs to be demonstrated, i.e. depending on the safety level more rigorous testing schemes have to be applied. A well known requirement regarding test coverage depending on the software level is given in DO-178C: Level C: Statement coverage is required - i.e. "every statement in the program has been invoked at least once" during testing. Level B: Branch coverage is required - i.e. "every point of entry and exit in the program has been invoked at least once and every decision in the program has taken on all possible outcomes at least once." Level A: Modified condition/decision coverage - an extension of branch coverage, with the requirement that "each condition in a decision has been shown to independently affect that decision's outcome." Independence Software safety standards typically require some activities to be executed with independence, i.e. by a different person, by a person with different reporting lines, or even by an independent organization. This ensures that conflicts of interest are avoided and increases the chances that faults (e.g. in the software design) are identified. For example, EN 50716 Figure 2 requires the roles “implementer”, “tester” and “verifier” to be held by different people, the role “validator” to be held by a person with different reporting line and the role “assessor” to be held by a person from a different organizational unit. DO-178C and DO-278A require several activities (e.g. test coverage verification, assurance activities) to be executed “with independence”, with independence being defined as “separation of responsibilities which ensures the accomplishment of objective evaluation”. Open questions and issues Software failure rates In system safety engineering, it is common to allocate upper bounds for failure rates of subsystems or components. It must then be shown that these subsystems or components do not exceed their allocated failure rates, or otherwise redundancy or other fault tolerance mechanisms must be employed. This approach is not practicable for software. Software failure rates cannot be predicted with any confidence. Although significant research in the field of software reliability has been conducted (see for example Lyu (1996), current software safety standards do not require any of these methods to be used or even discourage their usage, e.g. DO178C (p. 73) states: “Many methods for predicting software reliability based on developmental metrics have been published, for example, software structure, defect detection rate, etc. This document does not provide guidance for those types of methods, because at the time of writing, currently available methods did not provide results in which confidence can be placed.” ARP 4761 clause 4.1.2 states that software design errors “are not the same as hardware failures. Unlike hardware failures, probabilities of such errors cannot be quantified.” Safety and security Software safety and security may have differing interests in some cases. On the one hand safety-related software that is not secure can pose a safety risk, on the other hand, some security practices (e.g. frequent and timely patching) contradict established safety practices (rigorous testing and verification before anything is changed in an operational system). Artificial intelligence Software that employs artificial intelligence techniques such as machine learning follows a radically different lifecycle. In addition, the behavior is harder to predict than for a traditionally developed system. 
Artificial intelligence Software that employs artificial intelligence techniques such as machine learning follows a radically different lifecycle. In addition, its behavior is harder to predict than that of a traditionally developed system. Hence, the question of whether and how these technologies can be used is under investigation. Currently, standards generally do not endorse their use. For example, EN 50716 (Table A.3) states that artificial intelligence and machine learning are not recommended for any safety integrity level. Agile development methods Agile software development, which typically features many iterations, is sometimes still stigmatized as being too chaotic for safety-related software development. This might be partially caused by statements such as "working software over comprehensive documentation", which is found in the manifesto for agile development. Although most software safety standards present the software lifecycle in the traditional waterfall-like sequence, some do contain statements that allow for more flexible lifecycles. DO-178C states that "The processes of a software life cycle may be iterative, that is, entered and reentered." EN 50716 contains Annex C that shows how iterative development lifecycles can be used in line with the requirements of the standard. Goals Functional safety is achieved through engineering development to ensure correct execution and behavior of software functions as intended. Safety, consistent with mission requirements, is designed into the software in a timely, cost-effective manner. On complex systems involving many interactions, safety-critical functionality should be identified and thoroughly analyzed before deriving hazards and design safeguards for mitigations. Safety-critical functions lists and preliminary hazards lists should be determined proactively and influence the requirements that will be implemented in software. Contributing factors and root causes of faults and resultant hazards associated with the system and its software are identified, evaluated and eliminated or the risk reduced to an acceptable level, throughout the lifecycle. Reliance on administrative procedures for hazard control is minimized. The number and complexity of safety-critical interfaces are minimized. The number and complexity of safety-critical computer software components are minimized. Sound human engineering principles are applied to the design of the software-user interface to minimize the probability of human error. Failure modes, including hardware, software, human and system failure modes, are addressed in the design of the software. Sound software engineering practices and documentation are used in the development of the software. Safety issues and safety attributes are addressed as part of the software testing effort at all levels. Software is designed for human machine interface, ease of maintenance and modification or enhancement. Software with safety-critical functionality must be thoroughly verified with objective analysis and preferably test evidence that all safety requirements have been met per established criteria. See also Software assurance IEC 61508 - Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems ISO 26262 - Road vehicles – Functional safety Functional Safety Software quality System accident Notes References Software quality
Software safety
[ "Engineering" ]
2,567
[ "Safety engineering", "Systems engineering" ]
4,135,795
https://en.wikipedia.org/wiki/Mitochondrial%20permeability%20transition%20pore
The mitochondrial permeability transition pore (mPTP or MPTP; also referred to as PTP, mTP or MTP) is a protein that is formed in the inner membrane of the mitochondria under certain pathological conditions such as traumatic brain injury and stroke. Its opening allows an increase in the permeability of the mitochondrial membranes to molecules of less than 1500 daltons in molecular weight. Induction of the permeability transition pore, mitochondrial membrane permeability transition (mPT or MPT), can lead to mitochondrial swelling and cell death through apoptosis or necrosis depending on the particular biological setting. Roles in pathology The MPTP was originally discovered by Haworth and Hunter in 1979 and has been found to be involved in neurodegeneration, hepatotoxicity from Reye-related agents, cardiac necrosis and nervous and muscular dystrophies, among other deleterious events inducing cell damage and death. MPT is one of the major causes of cell death in a variety of conditions. For example, it is key in neuronal cell death in excitotoxicity, in which overactivation of glutamate receptors causes excessive calcium entry into the cell. MPT also appears to play a key role in damage caused by ischemia, as occurs in a heart attack and stroke. However, research has shown that the MPT pore remains closed during ischemia, but opens once the tissues are reperfused with blood after the ischemic period, playing a role in reperfusion injury. MPT is also thought to underlie the cell death induced by Reye's syndrome, since chemicals that can cause the syndrome, like salicylate and valproate, cause MPT. MPT may also play a role in mitochondrial autophagy. Cells exposed to toxic amounts of Ca2+ ionophores also undergo MPT and death by necrosis. Structure While MPT modulation has been widely studied, little is known about its structure. Initial experiments by Szabó and Zoratti proposed the MPT may comprise Voltage Dependent Anion Channel (VDAC) molecules. Nevertheless, this hypothesis was shown to be incorrect, as VDAC−/− mitochondria were still capable of undergoing MPT. A further hypothesis by Halestrap's group convincingly suggested that the MPT was formed by the inner membrane Adenine Nucleotide Translocase (ANT), but genetic ablation of this protein still led to MPT onset. Thus, the only MPTP components identified so far are the TSPO (previously known as the peripheral benzodiazepine receptor) located in the mitochondrial outer membrane and cyclophilin-D in the mitochondrial matrix. Mice lacking the gene for cyclophilin-D develop normally, but their cells do not undergo Cyclosporin A-sensitive MPT, and they are resistant to necrotic death from ischemia or overload of Ca2+ or free radicals. However, these cells do die in response to stimuli that kill cells through apoptosis, suggesting that MPT does not control cell death by apoptosis. MPTP blockers Agents that transiently block MPT include the immune suppressant cyclosporin A (CsA); N-methyl-Val-4-cyclosporin A (MeValCsA), a non-immunosuppressant derivative of CsA; another non-immunosuppressive agent, NIM811, 2-aminoethoxydiphenyl borate (2-APB), bongkrekic acid and alisporivir (also known as Debio-025). TRO40303 is a newly synthesised MPT blocker developed by the company Trophos and is currently in a Phase I clinical trial. Factors in MPT induction Various factors enhance the likelihood of MPTP opening. In some mitochondria, such as those in the central nervous system, high levels of Ca2+ within mitochondria can cause the MPT pore to open. 
This is possibly because Ca2+ binds to and activates Ca2+ binding sites on the matrix side of the MPTP. MPT induction is also due to the dissipation of the difference in voltage across the inner mitochondrial membrane (known as transmembrane potential, or Δψ). In neurons and astrocytes, the contribution of membrane potential to MPT induction is complex. The presence of free radicals, another result of excessive intracellular calcium concentrations, can also cause the MPT pore to open. Other factors that increase the likelihood that the MPTP will be induced include the presence of certain fatty acids and inorganic phosphate. However, these factors cannot open the pore without Ca2+, though at high enough concentrations, Ca2+ alone can induce MPT. Stress in the endoplasmic reticulum can be a factor in triggering MPT. Conditions that cause the pore to close or remain closed include acidic conditions, high concentrations of ADP, high concentrations of ATP, and high concentrations of NADH. Divalent cations like Mg2+ also inhibit MPT, because they can compete with Ca2+ for the Ca2+ binding sites on the matrix and/or cytoplasmic side of the MPTP. Effects Multiple studies have found the MPT to be a key factor in the damage to neurons caused by excitotoxicity. The induction of MPT, which increases mitochondrial membrane permeability, causes mitochondria to become further depolarized, meaning that Δψ is abolished. When Δψ is lost, protons and some molecules are able to flow across the outer mitochondrial membrane uninhibited. Loss of Δψ interferes with the production of adenosine triphosphate (ATP), the cell's main source of energy, because mitochondria must have an electrochemical gradient to provide the driving force for ATP production. In cell damage resulting from conditions such as neurodegenerative diseases and head injury, opening of the mitochondrial permeability transition pore can greatly reduce ATP production, and can cause ATP synthase to begin hydrolysing, rather than producing, ATP. This produces an energy deficit in the cell, just when it most needs ATP to fuel activity of ion pumps. MPT also allows Ca2+ to leave the mitochondrion, which can place further stress on nearby mitochondria, and which can activate harmful calcium-dependent proteases such as calpain. Reactive oxygen species (ROS) are also produced as a result of opening the MPT pore. MPT can allow antioxidant molecules such as glutathione to exit mitochondria, reducing the organelles' ability to neutralize ROS. In addition, the electron transport chain (ETC) may produce more free radicals due to loss of components of the ETC, such as cytochrome c, through the MPTP. Loss of ETC components can lead to escape of electrons from the chain, which can then reduce molecules and form free radicals. MPT causes mitochondria to become permeable to molecules smaller than 1.5 kDa, which, once inside, draw water in by increasing the organelle's osmolar load. This event may lead mitochondria to swell and may cause the outer membrane to rupture, releasing cytochrome c. Cytochrome c can in turn cause the cell to go through apoptosis ("commit suicide") by activating pro-apoptotic factors. Other researchers contend that it is not mitochondrial membrane rupture that leads to cytochrome c release, but rather another mechanism, such as translocation of the molecule through channels in the outer membrane, which does not involve the MPTP. Much research has found that the fate of the cell after an insult depends on the extent of MPT. 
If MPT occurs to only a slight extent, the cell may recover, whereas if it occurs to a greater extent it may undergo apoptosis. If it occurs to an even larger degree the cell is likely to undergo necrotic cell death. Possible evolutionary purpose Although the MPTP has been studied mainly in mitochondria from mammalian sources, mitochondria from diverse species also undergo a similar transition. While its occurrence can be easily detected, its purpose still remains elusive. Some have speculated that the regulated opening of the MPT pore may minimize cell injury by causing ROS-producing mitochondria to undergo selective lysosome-dependent mitophagy during nutrient starvation conditions. Under severe stress/pathologic conditions, MPTP opening would trigger injured cell death mainly through necrosis. There is controversy about the question of whether the MPTP is able to exist in a harmless, "low-conductance" state. This low-conductance state would not induce MPT and would allow certain molecules and ions to cross the mitochondrial membranes. The low-conductance state may allow small ions like Ca2+ to leave mitochondria quickly, in order to aid in the cycling of Ca2+ in healthy cells. If this is the case, MPT may be a harmful side effect of abnormal activity of a usually beneficial MPTP. MPTP has been detected in mitochondria from plants, yeasts such as Saccharomyces cerevisiae, birds such as guinea fowl, and primitive vertebrates such as the Baltic lamprey. While the permeability transition is evident in mitochondria from these sources, its sensitivity to its classic modulators may differ when compared with mammalian mitochondria. Nevertheless, CsA-insensitive MPTP can be triggered in mammalian mitochondria given appropriate experimental conditions, strongly suggesting this event may be a conserved characteristic throughout the eukaryotic domain. See also Crista NMDA receptor NMDA receptor antagonist References External links Mitochondrial permeability transition pore: an enigmatic gatekeeper (2012) NHS&T, Vol 1(3):47-51 Mitochondrial Permeability Transition (PT) from Celldeath.de. Accessed January 1, 2007. Cellular respiration Neurotrauma Mitochondria
Mitochondrial permeability transition pore
[ "Chemistry", "Biology" ]
2,123
[ "Biochemistry", "Mitochondria", "Cellular respiration", "Metabolism" ]
4,135,861
https://en.wikipedia.org/wiki/ADF/Cofilin%20family
ADF/cofilin is a family of actin-binding proteins associated with the rapid depolymerization of actin microfilaments that give actin its characteristic dynamic instability. This dynamic instability is central to actin's role in muscle contraction, cell motility and transcription regulation. Three highly conserved and highly (70%-82%) identical genes belonging to this family have been described in humans and mice: CFL1, coding for cofilin 1 (non-muscle, or n-cofilin) CFL2, coding for cofilin 2 (found in muscle: m-cofilin) DSTN, coding for destrin, also known as ADF or actin depolymerizing factor Actin-binding proteins regulate assembly and disassembly of actin filaments. Cofilin, a member of the ADF/cofilin family, is actually a protein with 70% sequence identity to destrin, making it part of the ADF/cofilin family of small ADP-binding proteins. The protein binds to actin monomers and filaments, G actin and F actin, respectively. Cofilin causes depolymerization at the minus end of filaments, thereby preventing their reassembly. The protein is known to sever actin filaments by creating more positive ends on filament fragments. Cofilin/ADF (destrin) is likely to sever F-actin without capping and prefers ADP-actin. These monomers can be recycled by profilin, activating monomers to go back into filament form again by an ADP-to-ATP exchange. ATP-actin is then available for assembly. Structure The structure of actin depolymerizing factors is highly conserved across many organisms due to actin's importance in many cellular processes. Proteins of the actin depolymerizing factor family characteristically consist of five beta sheets, four antiparallel and one parallel, and four alpha helices with a central alpha helix providing the structure and stability of the proteins. The actin depolymerizing factor homology domain (ADF-H domain) allows for binding to actin subunits and includes the central alpha helix, the N-terminus extension, and the C terminus helix. The N-terminus extension consists of a tilted loop that facilitates binding to G-actin but not F-actin due to steric hindrance present in F-actin. The C-terminus can form hydrogen bonds to F actin through its amide backbone and a serine at position S274. This serine is especially highly evolutionarily conserved due to its importance in actin binding. The central alpha helix is inserted into the hydrophobic cleft in between the first and third subunits of actin during actin binding. Cofilin binds monomeric (G-actin) and filamentous actin (F-actin). Its binding affinities are higher for ADP-actin over ADP-Pi and ATP-actin. Its binding changes the twist of F-actin. The structure of ADF was first characterized in 1980 by James Bamburg. Four actin histidines near the cofilin binding site may be needed for cofilin/actin interaction, but pH sensitivity alone may not be enough of an explanation for the levels of interaction encountered. Cofilin is accommodated in ADP-F actin because of increased flexibility in this form of actin. Binding by both cofilin and ADF (destrin) causes the crossover length of the filament to be reduced. Therefore, strains increase filament dynamics and the level of filament fragmentation observed. Function Cofilin is a ubiquitous actin-binding factor required for the reorganization of actin filaments. ADF/Cofilin family members bind G-actin monomers and depolymerize actin filaments through two mechanisms: severing and increasing the off-rate for actin monomers from the pointed end. 
"Older" ADP/ADP-Pi actin filaments that are free of tropomyosin, together with a suitable pH, are required for cofilin to function effectively. In the presence of readily available ATP-G-actin, cofilin speeds up actin polymerization via its actin-severing activity (providing free barbed ends for further polymerization and nucleation by the Arp2/3 complex). As a long-lasting in vivo effect, cofilin recycles older ADP-F-actin, helping the cell to maintain its ATP-G-actin pool for sustained motility. pH, phosphorylation and phosphoinositides regulate cofilin's binding to and association with actin. The Arp2/3 complex and cofilin work together to reorganize the actin filaments in the cytoskeleton. Arp 2/3, an actin-binding protein complex, binds to the side of ATP-F-actin near the growing barbed end of the filament, causing nucleation of a new F-actin branch, while cofilin-driven depolymerization takes place after dissociating from the Arp2/3 complex. They also work together to reorganize actin filaments in order to traffic more proteins by vesicle and so continue the growth of filaments. Cofilin also binds with other proteins such as myosin, tropomyosin, α-actinin, gelsolin and scruin. These proteins compete with cofilin for actin binding. Cofilin also plays a role in the innate immune response. In a Model Organism ADF/cofilin is found in ruffling membranes and at the leading edge of mobile cells. In particular, ADF/cofilin promotes disassembly of the filament at the rear of the brush in Xenopus laevis lamellipodia, a protrusion from fibroblast cells characterized by actin networks. Subunits are added to barbed ends and lost from rear-facing pointed ends. Increasing the rate constant, k, for actin dissociation from the pointed ends was found to sever actin filaments. Through this experimentation, it was found that ATP or ADP-Pi are probably involved in binding to actin filaments. Mechanism of Action F-actin (filamentous actin) is stabilized when it is bound to ATP due to the presence of a serine on the second subunit of actin that is able to form hydrogen bonds to the last phosphate group in ATP and a nearby histidine attached to the main loop. This interaction stabilizes the structure internally due to the interactions between the main loop and the second subunit. When ATP is hydrolyzed to ADP, the serine can no longer form a hydrogen bond to ADP due to the loss of the inorganic phosphate, which causes the serine side chain to twist, causing a conformational change in the second subunit. This conformational change also causes the serine to no longer be able to form a hydrogen bond with the histidine attached to the main loop, and this weakens the linkage between subunits one and three, causing the entire molecule to twist. This twisting puts strain on the molecule and destabilizes it. Actin depolymerizing factor is able to bind to the destabilized F-actin by inserting the central helix into the cleft between the first and third subunits of actin. Actin depolymerizing factor binds F-actin cooperatively and induces a conformational change in F-actin that causes it to twist further and become more destabilized. This twisting causes severing of the bond between actin monomers, depolymerizing the filament. Regulation Phosphorylation Actin depolymerizing factor is regulated by the phosphorylation of a serine on the C terminus by LIM kinases. Actin depolymerizing factor is activated when it is dephosphorylated and inhibited when it is phosphorylated. 
pH An alkaline environment stabilizes the inorganic phosphate released when ATP is hydrolyzed to ADP, so a higher pH makes it more favorable for the ATP bound to F-actin to be hydrolyzed to ADP, resulting in the destabilization of actin. Tropomyosin binding F-actin binds the protein tropomyosin and actin depolymerizing factor competitively and mutually exclusively. F-actin binds tropomyosin uncooperatively, so the binding of tropomyosin does not induce a conformational change in F-actin and does not cause it to become destabilized. However, because F-actin cannot bind both tropomyosin and actin depolymerizing factor at the same time, due to tropomyosin blocking the binding site of actin depolymerizing factor when it is bound to actin, tropomyosin acts as a protector of actin against depolymerization. References External links MBInfo - Cofilin in Actin Filament Depolymerization See also Cofilin 1 Protein families
ADF/Cofilin family
[ "Biology" ]
1,990
[ "Protein families", "Protein classification" ]
4,135,937
https://en.wikipedia.org/wiki/In-vessel%20composting
In-vessel composting generally describes a group of methods that confine the composting materials within a building, container, or vessel. In-vessel composting systems can consist of metal or plastic tanks or concrete bunkers in which air flow and temperature can be controlled, using the principles of a "bioreactor". Generally the air circulation is metered in via buried tubes that allow fresh air to be injected under pressure, with the exhaust being extracted through a biofilter, with temperature and moisture conditions monitored using probes in the mass to allow maintenance of optimum aerobic decomposition conditions. This technique is generally used for municipal scale organic waste processing, including final treatment of sewage biosolids, to a stable state with safe pathogen levels, for reclamation as a soil amendment. In-vessel composting can also refer to aerated static pile composting with the addition of removable covers that enclose the piles, as with the system in extensive use by farmer groups in Thailand, supported by the National Science and Technology Development Agency there. In recent years, smaller scale in-vessel composting has been advanced. These can even use common roll-off waste dumpsters as the vessel. The advantages of using roll-off waste dumpsters are their relatively low cost and wide availability; they are highly mobile, often do not need building permits, and can be rented or bought. Evaluation is ongoing with regard to the health risks associated with compost derived from sewage biosolids, including identifying safe levels of contaminants such as PFASs ("forever chemicals"). Offensive odors are caused by putrefaction (anaerobic decomposition) of nitrogenous animal and vegetable matter gassing off as ammonia. This is controlled with a higher carbon-to-nitrogen ratio, or increased aeration by ventilation, and use of a coarser grade of carbon material to allow better air circulation. The objective of the biofilter is to prevent and capture the gases (volatile organic compounds) that naturally occur during the hot aerobic composting involved; as the filtering material saturates over time, it can be used in the composting process and replaced with fresh material. A more advanced system design is able to limit the odor issues considerably, and it is also able to raise the total energy and resource output by integrating in-vessel composting with anaerobic digestion. Through anaerobic decomposition it is also possible to reduce pathogen levels to a degree similar to that of traditional aerated composting, when the anaerobic bioreactors operate at thermophilic temperatures. See also Aerated static pile composting Anaerobic digestion Compost List of solid waste treatment technologies Mechanical biological treatment Waste management Windrow composting References Industrial composting Waste treatment technology
In-vessel composting
[ "Chemistry", "Engineering" ]
575
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
4,136,050
https://en.wikipedia.org/wiki/Norton%20Zinder
Norton David Zinder (November 7, 1928 – February 3, 2012) was an American biologist famous for his discovery of genetic transduction. Zinder was born in New York City, received his A.B. from Columbia University in 1947, Ph.D. from the University of Wisconsin–Madison in 1952, and became a member of the National Academy of Sciences in 1969. He led a lab at Rockefeller University until shortly before his death. In 1966 he was awarded the NAS Award in Molecular Biology from the National Academy of Sciences. Genetic transduction and RNA bacteriophage Working as a graduate student with Joshua Lederberg, Zinder discovered that a bacteriophage can carry genes from one bacterium to another. Initial experiments were carried out using Salmonella. Zinder and Lederberg named this process of genetic exchange transduction. Later, Zinder discovered the first bacteriophage that contained RNA as its genetic material. At that time, Harvey Lodish (now of the Massachusetts Institute of Technology and Whitehead Institute for Biomedical Research) worked in his lab. Norton Zinder died in 2012 of pneumonia after a long illness. References Further reading Papers authored by Norton Zinder Laboratory of Genetics at Rockefeller University Historical plaque at UW–Madison noting Zinder's contribution to molecular genetics Biography of Norton Zinder 1928 births 2012 deaths American microbiologists Rockefeller University faculty The Bronx High School of Science alumni University of Wisconsin–Madison alumni Phage workers Members of the United States National Academy of Sciences Human Genome Project scientists Scientists from New York City Columbia College (New York) alumni
Norton Zinder
[ "Engineering" ]
331
[ "Human Genome Project scientists" ]
4,136,114
https://en.wikipedia.org/wiki/Flat%20rated
When an engine is flat rated it means that an engine with a high horsepower rating is constrained to a lower horsepower rating. The rated engine output in this case always remains available, but when atmospheric conditions such as high temperatures and high altitude ("hot and high") reduce the power the engine can produce, it has more headroom before it falls below the limited maximum output. In some cases the total power output of an engine needs to be constrained because the airframe can only handle a certain force. This is the case with gas turbine engines. Flat rating allows airplanes to operate under more demanding conditions, without the need for extra structural strengthening to cope with the higher peak power output of the engine. For example, the Garrett AiResearch TPE-331-5 engine was originally fitted on the Dornier 228. If the outside air temperature is above 20°C, the airplane's maximum speed is reduced by approximately 10 knots (19 km/h), because hotter air is less dense and thus produces less pressure inside the turbine. The Dornier 228 can also be fitted with the Garrett AiResearch TPE-331-10 conversion of the -5 engine, which produces more power but is limited (flat rated) to only 715. In this case the airplane will be able to maintain its top speed at temperatures above 30°C without the risk of exceeding the airplane's structural limits. External links Honeywell Aerospace TPE 331 Engine Conversions Dornier 228 Information Center Aircraft engines
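As a rough illustrative sketch of the idea described above (not a statement about the TPE-331 or any other real engine), the power delivered by a flat-rated engine can be thought of as the lesser of the flat rating and the power the core can actually produce under the ambient conditions. All function names, loss factors and power figures below are hypothetical.

```c
#include <stdio.h>

/* Hypothetical model: the thermodynamically available power drops as
 * outside air temperature (OAT) and altitude rise. */
static double available_power_hp(double max_power_hp,
                                 double oat_c, double altitude_ft)
{
    /* Invented loss factors, purely for illustration. */
    double temp_loss = (oat_c > 15.0) ? (oat_c - 15.0) * 0.004 : 0.0;
    double alt_loss  = altitude_ft * 0.00002;
    double factor    = 1.0 - temp_loss - alt_loss;
    return (factor > 0.0) ? max_power_hp * factor : 0.0;
}

/* Flat rating: never deliver more than the (lower) certified rating. */
static double delivered_power_hp(double max_power_hp, double flat_rating_hp,
                                 double oat_c, double altitude_ft)
{
    double avail = available_power_hp(max_power_hp, oat_c, altitude_ft);
    return (avail < flat_rating_hp) ? avail : flat_rating_hp;
}

int main(void)
{
    /* A higher-powered core flat rated to a lower figure still delivers
     * the full rating on a hot day, where a core sized exactly to that
     * rating has already lost power. */
    printf("hot day, flat rated core: %.0f hp\n",
           delivered_power_hp(1000.0, 700.0, 35.0, 2000.0));
    printf("hot day, exactly sized:   %.0f hp\n",
           delivered_power_hp(700.0, 700.0, 35.0, 2000.0));
    return 0;
}
```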
Flat rated
[ "Technology" ]
295
[ "Engines", "Aircraft engines" ]
4,136,136
https://en.wikipedia.org/wiki/Electronic%20Cultural%20Atlas%20Initiative
The Electronic Cultural Atlas Initiative (ECAI) is a digital humanities initiative involving numerous academic professors and institutions around the world with the stated goal of creating a networked digital atlas by creating tools and setting standards for dynamic, digital maps. ECAI was established in 1997 by Emeritus Prof. Lewis Lancaster of the University of California, Berkeley, and has held two meetings per year most years from 1998 - 2009 (ongoing), one of which is often in conjunction with the Pacific Neighbourhood Consortium. The initiative is based at UC Berkeley. The ECAI 'clearinghouse' of distributed digital datasets was developed from 1998 by the Archaeological Computing Laboratory at the University of Sydney, and uses the ACL's TimeMap software. See also GIS Wikimaps External links http://www.ecai.org/ Historical Geographic Information Systems Online Forum on Google Cartography organizations Geographic information systems organizations Digital humanities Historical geographic information systems University of California, Berkeley Research institutes in the San Francisco Bay Area Digital humanities projects 1997 establishments in California
Electronic Cultural Atlas Initiative
[ "Technology" ]
208
[ "Digital humanities", "Computing and society" ]
4,136,144
https://en.wikipedia.org/wiki/BioMarin%20Pharmaceutical
BioMarin Pharmaceutical Inc. is an American biotechnology company headquartered in San Rafael, California. It has offices and facilities in the United States, South America, Asia, and Europe. BioMarin's core business and research is in enzyme replacement therapies (ERTs). BioMarin was the first company to provide therapeutics for mucopolysaccharidosis type I (MPS I), by manufacturing laronidase (Aldurazyme, commercialized by Genzyme Corporation). BioMarin was also the first company to provide therapeutics for phenylketonuria (PKU). Over the years, BioMarin has been criticised for drug pricing and for specific instances of denying access to drugs in clinical trials. History BioMarin was founded in 1997 by Christopher Starr Ph.D. and Grant W. Denison Jr. with an investment of $1.5 million from Glyko Biomedical and went public in 1999. Seed investors included, among others, MPM Bioventures, Grosvenor Fund and Florian Schönharting. Business development In 2002, BioMarin acquired Glyko Biomedical. In 2009, BioMarin acquired Huxley Pharmaceuticals, Inc. (Huxley), which had rights to a proprietary form of 3,4-diaminopyridine (3,4-DAP), amifampridine phosphate. In 2010, BioMarin was granted marketing approval by the European Commission for 3,4-diaminopyridine (3,4-DAP), amifampridine phosphate for the treatment of the rare autoimmune disease Lambert–Eaton myasthenic syndrome (LEMS). BioMarin launched the product under the name Firdapse. In 2010, BioMarin acquired LEAD Therapeutics, Inc. (LEAD), a small private drug discovery and early stage development company with key compound LT-673, an orally available poly (ADP-ribose) polymerase (PARP) inhibitor studied for the treatment of patients with rare, genetically defined cancers. This acquisition was followed by the purchase of ZyStor Therapeutics, Inc. (ZyStor), a privately held biotechnology company developing ERTs for the treatment of lysosomal storage disorders and its lead product candidate, ZC-701, a fusion of insulin-like growth factor 2 and alpha glucosidase (IGF2-GAA) in development for Pompe disease. At its R&D day in October 2010, BioMarin also announced a new program for a peptide therapeutic, vosoritide (BMN-111), for the treatment of achondroplasia. In 2012, BioMarin acquired Zacharon Pharmaceuticals, a private biotechnology company based in San Diego focused on developing small molecules targeting pathways of glycan metabolism. In 2014, BioMarin acquired a histone deacetylase inhibitor chemical library from Repligen for $2 million with the intention of advancing work toward therapies for Friedreich's ataxia and other neurological disorders. In November 2014, the company agreed to the acquisition of Prosensa for up to $840 million; however, the range of treatments for Duchenne muscular dystrophy failed to attain FDA approval, and development ceased in May 2016. In October 2019 it was revealed that the group would open an office in Dublin to support further growth through Europe, the Middle East and Asia. Products As of 2022, BioMarin has six products on the market, each of which is an orphan drug. 
Tetrahydrobiopterin (branded as Kuvan) (sapropterin dihydrochloride), a small molecule drug for phenylketonuria, introduced in 2007 as the first medication-based intervention to treat phenylketonuria Arylsulfatase B (branded as Naglazyme) (galsulfase), a recombinant protein therapeutic for Maroteaux–Lamy syndrome (also called mucopolysaccharidosis type VI) Iduronidase (branded as Aldurazyme), a recombinant protein therapeutic for mucopolysaccharidosis I Amifampridine (branded as Firdapse), a small molecule drug for Lambert–Eaton myasthenic syndrome (as of 2013 approved in the EU only) Elosulfase alfa (branded as Vimizim), is the only enzyme replacement therapy to address the cause of Morquio A Syndrome (MPS IVA), which affects an estimated 3,000 patients in the developed world. The disease occurs as a result of a deficiency of activity in an enzyme involved in glycosaminoglycan (GAG) metabolism. Cerliponase alfa (branded as Brineura), is an enzyme replacement treatment for Batten disease, which is a form of neuronal ceroid lipofuscinosis. It was approved in 2017. Valoctocogene roxaparvovec (branded as Roctavian) is an adeno-associated viral vector for treatment of hemophilia A that aims to transfer a working copy of the Factor VIII gene into patients who lack one. It was approved in the EU in August 2022. Biomarin is working to develop several new drugs. Controversies In 2010, BioMarin became involved in controversy surrounding 3,4-diaminopyridine (3,4-DAP). BioMarin markets a phosphate salt of 3,4-DAP under the name Firdapse. In 2010, BioMarin was granted exclusive licensing rights to Firdapse for 10 years. As a result, the price of a prescribed National Health Service treatment course has increased from $1,987 for the unlicensed drug to $69,970 for Firdapse. The company states that prior to its licensing, there was no guaranteed quality control of the product and no way of formally monitoring for uncommon side effects through the regulatory process. In 2013, BioMarin Pharmaceuticals was at the center of a high profile debate regarding expanded access of cancer patients to experimental drugs. On the advice of her doctor, Andrea Sloan, a patient with advanced ovarian cancer, requested that the company provide her with access to BMN 673, an unapproved PARP inhibitor drug candidate that had exhibited promising activity in a small Phase 1 clinical trial. The company declined, citing safety concerns. Ms. Sloan eventually received a similar drug candidate from a different company. In 2015, there was another controversy over expanded access, concerning the supply of a drug on clinical trial to a German child who was suffering from a brain disorder but who was not part of the trial. In April 2019, the BBC reported that patients who took part in a trial treatment for the drug Kuvan (sapropterin hydrochloride) were later denied access to it. The company was criticised by the NHS and Stephen Hammond MP for patient profiteering. The company commented the following in response: "BioMarin is disappointed that the NHS England has not recognised the value of treating PKU patients with Kuvan, despite more than a decade of positive patient outcomes across 26 countries in Europe, Russia and Turkey" In June 2019, a Belgian court ordered BioMarin to continue supplying Vimizim to a young girl suffering from Morquio syndrome free of charge. 
BioMarin stopped providing free Vimizim at the beginning of the year after negotiations with Belgian health authorities regarding reimbursement of the product repeatedly failed. This caused the parents to start legal proceedings to force the company to keep providing the medicine free of charge. BioMarin was ordered in a preliminary injunction to keep doing so until a definitive judgment would be rendered, or until the medicine would be available on the Belgian market at a reasonable price. References External links Biotechnology companies of the United States Pharmaceutical companies of the United States Companies listed on the Nasdaq Technology companies based in the San Francisco Bay Area Companies based in San Rafael, California American companies established in 1997 Pharmaceutical companies established in 1997 Life sciences industry Biotechnology companies established in 1997 1997 establishments in California 1999 initial public offerings Virotherapy Health care companies based in California Companies in the S&P 400
BioMarin Pharmaceutical
[ "Biology" ]
1,737
[ "Life sciences industry" ]
4,136,175
https://en.wikipedia.org/wiki/Fire%20protection
Fire protection is the study and practice of mitigating the unwanted effects of potentially destructive fires. It involves the study of the behaviour, compartmentalisation, suppression and investigation of fire and its related emergencies, as well as the research and development, production, testing and application of mitigating systems. In structures, be they land-based, offshore or even ships, the owners and operators are responsible to maintain their facilities in accordance with a design-basis that is rooted in laws, including the local building code and fire code, which are enforced by the authority having jurisdiction. Buildings must be maintained in accordance with the current fire code, which is enforced by the fire prevention officers of a local fire department. In the event of fire emergencies, firefighters, fire investigators, and other fire prevention personnel are called to mitigate, investigate and learn from the damage of a fire. Classifying fires When deciding on what fire protection is appropriate for any given situation, it is important to assess the types of fire hazards that may be faced. Some jurisdictions operate systems of classifying fires using code letters. Whilst these may agree on some classifications, they also vary. Below is a comparison of the standard operated in Europe and Australia against the system used in the United States. Class A covers ordinary combustible solids such as wood, paper and textiles in both systems. Class B covers flammable liquids in Europe and Australia, and flammable liquids and gases in the United States. Class C covers flammable gases in Europe and Australia, and energised electrical equipment in the United States. Class D covers combustible metals in both systems. Class E1 is used informally in Europe and Australia for fires involving electrical equipment. Class F in Europe and Australia, and Class K in the United States, cover cooking oils and fats. 1 Technically there is no such thing as a "Class E" fire, as electricity itself does not burn. However, it is considered a dangerous and very deadly complication to a fire, therefore using the incorrect extinguishing method can result in serious injury or death. Class E, however, generally refers to fires involving electricity, therefore a bracketed E, "(E)", is denoted on various types of extinguishers. Fires are sometimes categorized as "one alarm", "two alarm", "three alarm" (or higher) fires. There is no standard definition for what this means quantifiably, though it always refers to the level of response by the local authorities. In some cities, the numeric rating refers to the number of fire stations that have been summoned to the fire. In others, it counts the number of "dispatches" for additional personnel and equipment. Components Fire protection in land-based buildings, offshore construction or on board ships is typically achieved via all of the following: Passive fire protection - the installation of firewalls and fire rated floor assemblies to form fire compartments intended to limit the spread of fire, high temperatures, and smoke. Active fire protection - manual and automatic detection and suppression of fires, such as fire sprinkler systems and fire alarm systems. Education - the provision of information regarding passive and active fire protection systems to building owners, operators, occupants, and emergency personnel so that they have a working understanding of the intent of these systems and how they perform in the fire safety plan. Balanced approach Passive fire protection (PFP) in the form of compartmentalisation was developed prior to the invention or widespread use of active fire protection (AFP), mainly in the form of automatic fire sprinkler systems. During this time, PFP was the dominant mode of protection provided in facility designs. With the widespread installation of fire sprinklers in the past 50 years, the reliance on PFP as the only approach was reduced. Building operation in conformance with design Fire protection within a structure relies on all of its components. 
The building is designed in compliance with the local building code and fire code by the architect and other consultants. A building permit is issued after review by the Authority Having Jurisdiction (AHJ). Deviations from that original plan should be made known to the AHJ to make sure that the change is still in compliance with the law to prevent any unsafe conditions that may violate the law and put people at risk. For example, if the firestop systems in a structure were inoperable, a significant part of the fire safety plan might be compromised in the event of a fire because the walls and floors that contain the firestops are intended to have a fire-resistance rating. Likewise, if the sprinkler system or fire alarm system is inoperable for lack of proper maintenance, the likelihood of damage or personal injury is increased. Government Guidelines of Fire Protection and Fire Safety INDIA USA UAE EUROPE UK See also Fire prevention Automatic fire suppression Occupancy Building code Firefighting Fire test Listing and approval use and compliance Passive fire protection Compartmentalization Firestop Intumescent Endothermic Firestop pillow Fire door Fireproofing Fire-resistance rating Active fire protection External water spray system Fire sprinkler Fire alarm Fire alarm system Fire alarm control panel Fire detection Manual call point Fire sprinkler system Smoke detector Hypoxic air fire prevention system Gaseous fire suppression Condensed aerosol fire suppression Fire protection engineering Flame detector Fire Equipment Manufacturers' Association Notes Further reading Huang, Kai. 2009. Population and Building Factors That Impact Residential Fire Rates in Large U.S. Cities. Applied Research Project. Texas State University. http://ecommons.txstate.edu/arp/287/ . External links National Fire Protection Association (US) National Fire Sprinkler Association (US) Fire Equipment Manufacturers' Association (US) Building engineering
Fire protection
[ "Engineering" ]
1,052
[ "Building engineering", "Fire protection", "Civil engineering", "Architecture" ]
4,136,287
https://en.wikipedia.org/wiki/Acidophobe
An acidophobe is an organism that is intolerant of acidic environments. The terms acidophobia, acidophoby and acidophobic are also used. The term acidophobe is variously applied to plants, bacteria, protozoa, animals, chemical compounds, etc. The antonymous term is acidophile. Plants are known to have well-defined pH tolerances, and only a small number of species thrive well under a broad range of acidity. Therefore the categorization acidophile/acidophobe is well-defined. Sometimes a complementary classification is used (calcicole/calcifuge, with calcicoles being "lime-loving" plants). In gardening, soil pH is a measure of the acidity or alkalinity of soil, with pH = 7 indicating neutral soil. Therefore acidophobes prefer a pH above 7. Acid intolerance of plants may be mitigated by lime addition and by calcium and nitrogen fertilizers. Acidophobic species are used as a natural instrument for monitoring the degree of acidifying contamination of soil and watercourses. For example, when monitoring vegetation, a decrease of acidophobic species would be indicative of an increase in acid rain in the area. A similar approach is used with aquatic species. Acidophobes Whiteworms (Enchytraeus albidus), a popular live food for aquarists, are acidophobes. Acidophobic compounds are those which are unstable in acidic media. Acidophobic crops: alfalfa, clover References Physiology
Acidophobe
[ "Biology" ]
339
[ "Physiology" ]
4,136,457
https://en.wikipedia.org/wiki/Public%20Readiness%20and%20Emergency%20Preparedness%20Act
The Public Readiness and Emergency Preparedness Act (PREPA), passed by the United States Congress and signed into law by President of the United States George W. Bush in December 2005 (as part of ), is a controversial tort liability shield intended to protect pharmaceutical manufacturers from financial risk in the event of a declared public health emergency. The part of PREPA that actually affords such protection is now codified at . The act specifically affords to drug makers immunity from actions related to the manufacture, testing, development, distribution, administration and use of medical countermeasures against chemical, biological, radiological and nuclear agents of terrorism, epidemics, and pandemics. PREPA strengthens and consolidates the oversight of litigation against pharmaceutical companies under the purview of the secretary of Health and Human Services (HHS). PREPA provides $3.8 billion for pandemic influenza preparedness to protect public health in the case of a pandemic disease outbreak. Vaccine manufacturers lobbied for the legislation, which would effectively preempt state vaccine safety laws in the case of an emergency declaration by HHS, by making clear they would not produce new vaccines unless the legislation was enacted. Injured parties are compensated by the Countermeasures Injury Compensation Program. During and in the aftermath of the 2020–21 COVID-19 pandemic in the United States, PREPA is being invoked in a variety of lawsuits, many involving skilled nursing or assisted living facilities where COVID-19 countermeasures including the administration or non-administration of vaccines is said to have resulted in or contributed to resident deaths. Although PREPA was around for more than 15 years, prior to COVID-19, the act's defensive application in litigation was not widespread, but now the application of the act is being included more frequently in a variety of COVID-19 related lawsuits, including Shareholder Derivative Litigation. Legislative process Legislative leaders Senator Bill Frist and Congressman Dennis Hastert were among the backers of PREPA legislation. Rep. Nathan Deal spoke on the House floor in support of the bill, calling it "absolutely critical legislation". It was added to the final version of a Department of Defense-appropriations bill (H.R. 2863) while the bill was negotiated between the Senate and the House of Representatives. On December 19, 2005, the appropriations bill with the PREPA legislation was approved by the House of Representatives in a vote of 308–106, with 2 voting Present and 18 not voting. On December 22, it was approved by the Senate in a vote of 93–0, with 7 not voting. President Bush signed the bill into law on December 30. Funding Of the $3.8 billion earmarked for pandemic preparedness, $350 million is slated for improvement of state and local preparedness. HHS will use most of the balance on "core preparedness activities", such as developing vaccines and stockpiling antiviral drugs. Under PREPA, an HHS emergency declaration will trigger establishment of a fund for "timely, uniform, and adequate compensation" program for vaccine injuries, but no funding provisions for such purposes were included in its language. Liability protection and consolidation of oversight PREPA was designed specifically to encourage rapid production of vaccines to protect American citizens in case of a potential public health threat. 
However, the primary effects of the legislation hinge on liability protections for drug companies, under provisions intended to remove financial risk barriers for any new vaccines that need to be rushed to market in case of an emergency. Under PREPA, the HHS secretary will have primary responsibility for making decisions on whether or not to declare an emergency that would justify removing financial risk barriers, which otherwise would cause a prudent manufacturer to exercise caution. Pursuant to such an emergency declaration, liability protection would extend to doctors and other individuals and organizations involved with countermeasures, which may include any medical product to prevent, treat, mitigate, or diagnose an epidemic. The act does not list any criteria for determining the existence of an emergency, but it does specify that any such declaration would have to list the diseases, populations, and geographic areas covered and when the emergency would end. PREPA removes the right to a jury trial for persons injured by a covered vaccine, unless a plaintiff can provide clear evidence of willful misconduct that resulted in death or serious physical injury. The act instructs the HHS secretary to write regulations "that further restrict the scope of actions or omissions by a covered person" that constitute willful misconduct. A plaintiff whose claim is subject to PREPA can sue the defendant only in the United States District Court for the District of Columbia. For such a civil action, PREPA requires the complaint to be pleaded with particularity, verified under oath by the plaintiff, and accompanied by an affidavit from a non-treating physician to explain how the covered countermeasure injured the plaintiff, as well as relevant medical records. In the event of an emergency declared by HHS, Federal law would preempt all state provisions related to pandemic emergency preparedness, and would supersede any state provision governing vaccines. PREPA applies to any drug, vaccine, or biological product that the HHS secretary deems a "covered countermeasure," or that the secretary decides is a public health situation that could become an emergency at some point in the future, whether or not there is a specific relationship to a dangerous pandemic or bioterrorism. By invoking provisions of PREPA, the HHS secretary can wield broad authority to declare an emergency, which in turn would trigger drug company immunity from liability at any time, thereby conferring upon drug companies legal immunity for harm caused by their misconduct. The immunity that could be conferred on drug and vaccine manufacturers can be applied regardless of wrongdoing by affected drug companies. Definitions The PREPA defines terms such as covered countermeasure and qualified pandemic or epidemic product in terms related to the Federal Food, Drug, and Cosmetic Act of 1938, specifically section 201(g) drugs and section 201(h) medical devices. The definitions of security countermeasure and biological product are related only internally. PREPA covers many kinds of loss, including death; physical, mental, or emotional injury, illness, disability, or condition; fear of physical, mental, or emotional injury, illness, disability, or condition, including any need for medical monitoring; and loss of or damage to property, including business interruption loss. 
Opposition Numerous consumer organizations vigorously opposed the legislation, including A-CHAMP, Eagle Forum, and Public Citizen, as well as first responder organizations representing nurses, firemen and veterans. A-CHAMP ran a series of full page advertisements in various publications in opposition to PREPA. Because the legislation delegates broad legislative power to the executive branch of government, opponents view it as a violation of fundamental principles of the U.S. Constitution. In 2005, Senator Edward Kennedy issued a statement demanding repeal of the PREPA legislation, while condemning the liability provisions as a giveaway to the drug industry. Kennedy said the bill makes it "essentially impossible" for injured parties to sue for damages, and that the measure allows common diseases to be used as a reason to activate the liability shield. Kennedy also asserted that one of the drug companies that lobbied for PREPA is Sanofi Pasteur, which was under Food and Drug Administration (FDA) investigation for being connected to at least five cases of Guillain–Barré syndrome asserted to have been caused by its meningococcal vaccine. When the PREPA legislation was presented, its broad liability shields, its potential for undermining state vaccine laws, and its consolidation of responsibility within the executive branch were misrepresented in Congress and media, according to critics, who note that it was portrayed instead as primarily concerned with preparations to combat the avian flu. Opponents also contended that PREPA would contribute to the potential for abuse of discretion by the George W. Bush Administration, which was generally perceived as friendly to the drug industry. In particular, critics were concerned about the possibility that state laws banning thimerosal containing vaccines (TCVs) may be preempted. If the HHS secretary designates that a vaccine is a covered countermeasure, thimerosal (a mercury containing preservative) can be used in the vaccine, even in states that have enacted such bans. See also Vaccines for the New Millennium Act References External links GallatinNewsExaminer.com - 'Hastert, Frist said to rig bill for drug firms: Frist denies protection was added in secret', Bill Theobald, Gannett News Service (February 9, 2006) Pitt.edu - 'Vaccine liability law changes proposed by Democrats', Chris Buell, Jurist Legal News & Research, University of Pittsburgh School of Law (February 15, 2006) Senate.gov - 'Harkin Calls on Frist and Hastert to Repeal "Dead of Night" Vaccine Liability Provision and Enact Real Protections (February 15, 2006) SLWeekly.com - 'Side Effects: Leavitt’s new power to limit suits against pharmaceutical companies has some critics feeling a bit ill', Louis Godfrey, Salt Lake City Weekly (February 9, 2006) SMMirror.com - 'Allowing the Drug Companies to Poison Our Children' (editorial), Lewis Seiler and Dan Hamburg, Santa Monica Mirror (March 30, 2006) UMN.edu - 'Pandemic funding, liability shield clear Congress' (December 28, 2005) Vaccination law United States federal health legislation Acts of the 109th United States Congress Disaster preparedness in the United States Vaccination in the United States Drug policy of the United States
Public Readiness and Emergency Preparedness Act
[ "Biology" ]
1,991
[ "Biotechnology law", "Vaccination law", "Vaccination" ]
4,137,230
https://en.wikipedia.org/wiki/Quadruple%20bond
A quadruple bond is a type of chemical bond between two atoms involving eight electrons. This bond is an extension of the more familiar types of covalent bonds: double bonds and triple bonds. Stable quadruple bonds are most common among the transition metals in the middle of the d block, such as rhenium, tungsten, technetium, molybdenum and chromium. Typically the ligands that support quadruple bonds are π-donors, not π-acceptors. Quadruple bonds are rare as compared to double bonds and triple bonds, but hundreds of compounds with such bonds have been prepared. History Chromium(II) acetate, Cr2(μ-O2CCH3)4(H2O)2, was the first chemical compound containing a quadruple bond to be synthesized. It was described in 1844 by E. Peligot, although its distinctive bonding was not recognized for more than a century. The first crystallographic study of a compound with a quadruple bond was provided by Soviet chemists for salts of the [Re2Cl8]2− anion. The very short Re–Re distance was noted. This short distance (and the salt's diamagnetism) indicated Re–Re bonding. These researchers, however, misformulated the anion as a derivative of Re(II). Soon thereafter, F. Albert Cotton and C. B. Harris reported the crystal structure of potassium octachlorodirhenate or K2[Re2Cl8]·2H2O. This structural analysis indicated that the previous characterization was mistaken. Cotton and Harris formulated a molecular orbital rationale for the bonding that explicitly indicated a quadruple bond. The rhenium–rhenium bond length in this compound is only 224 pm. In molecular orbital theory, the bonding is described as σ2π4δ2 with one sigma bond, two pi bonds and one delta bond. Structure and bonding The [Re2Cl8]2− ion adopts an eclipsed conformation. The delta bonding orbital is then formed by overlap of the d orbitals on each rhenium atom, which are perpendicular to the Re–Re axis and lie in between the Re–Cl bonds. The d orbitals directed along the Re–Cl bonds are stabilized by interaction with chloride ligand orbitals and do not contribute to Re–Re bonding. In contrast, the [Os2Cl8]2− ion with two more electrons (σ2π4δ2δ*2) has an Os–Os triple bond and a staggered geometry. Many other compounds with quadruple bonds between transition metal atoms have been described, often by Cotton and his coworkers. Isoelectronic with the dirhenium compound is the salt K4[Mo2Cl8] (potassium octachlorodimolybdate). An example of a ditungsten compound with a quadruple bond is ditungsten tetra(hpp). Quadruple bonds between atoms of main-group elements are unknown. For the dicarbon (C2) molecule as an example, molecular orbital theory shows that there are two sets of paired electrons in the sigma system (one bonding, one antibonding), and two sets of paired electrons in a degenerate π-bonding set of orbitals. This adds up to a bond order of 2, meaning that there exists a double bond between the two carbon atoms. The molecular orbital diagram of diatomic carbon would show that there are two pi bonds and no sigma bonds. A 2012 paper by S. Shaik et al. suggests that a quadruple bond exists in dicarbon, but this is disputed. See also Bond order References Further reading Chemical bonding
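As a short worked illustration of the electron counting used above (standard molecular orbital bookkeeping, not taken from a specific source cited in the article), the formal bond order follows from the numbers of bonding and antibonding electrons:

```latex
% Formal bond order = (bonding electrons - antibonding electrons) / 2
\[
  \text{bond order} \;=\; \tfrac{1}{2}\left(n_{\text{bonding}} - n_{\text{antibonding}}\right)
\]
% [Re2Cl8]^{2-}: sigma^2 pi^4 delta^2 gives (8 - 0)/2 = 4, a quadruple bond
\[
  \sigma^{2}\pi^{4}\delta^{2}: \quad \tfrac{1}{2}(8 - 0) = 4
\]
% [Os2Cl8]^{2-}: two extra electrons occupy delta*, giving (8 - 2)/2 = 3, a triple bond
\[
  \sigma^{2}\pi^{4}\delta^{2}\delta^{*2}: \quad \tfrac{1}{2}(8 - 2) = 3
\]
% C2 valence configuration sigma_2s^2 sigma_2s*^2 pi_2p^4 gives (6 - 2)/2 = 2, a double bond
\[
  \sigma_{2s}^{2}\,\sigma_{2s}^{*2}\,\pi_{2p}^{4}: \quad \tfrac{1}{2}(6 - 2) = 2
\]
```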
Quadruple bond
[ "Physics", "Chemistry", "Materials_science" ]
772
[ "Chemical bonding", "Condensed matter physics", "nan" ]
4,137,502
https://en.wikipedia.org/wiki/Bartoli%20indole%20synthesis
The Bartoli indole synthesis (also called the Bartoli reaction) is the chemical reaction of ortho-substituted nitroarenes and nitrosoarenes with vinyl Grignard reagents to form substituted indoles. The reaction is often unsuccessful without substitution ortho to the nitro group, with bulkier ortho substituents usually resulting in higher yields for the reaction. The steric bulk of the ortho group assists in the [3,3]-sigmatropic rearrangement required for product formation. Three equivalents of the vinyl Grignard reagent are necessary for the reaction to achieve full conversion when performed on nitroarenes, and only two equivalents when performed on nitrosoarenes. This method has become one of the shortest and most flexible routes to 7-substituted indoles. The Leimgruber-Batcho indole synthesis gives similar flexibility and regiospecificity to indole derivatives. One advantage of the Bartoli indole synthesis is the ability to produce indoles substituted on both the carbocyclic ring and the pyrrole ring, which is difficult to do with the Leimgruber-Batcho indole synthesis. Reaction mechanism The reaction mechanism of the Bartoli indole synthesis is illustrated below using o-nitrotoluene (1) and propenyl Grignard (2) to form 3,7-dimethylindole (13). The mechanism begins by the addition of the Grignard reagent (2) onto the nitroarene (1) to form intermediate 3. Intermediate 3 spontaneously decomposes to form a nitrosoarene (4) and a magnesium salt (5). (Upon reaction workup, the magnesium salt will liberate a carbonyl compound (6).) Reaction of the nitrosoarene (4) with a second equivalent of the Grignard reagent (2) forms intermediate 7. The steric bulk of the ortho group causes a [3,3]-sigmatropic rearrangement forming the intermediate 8. Cyclization and tautomerization give intermediate 10, which will react with a third equivalent of the Grignard reagent (2) to give a dimagnesium indole salt (12). Reaction workup eliminates water and gives the final desired indole (13). Therefore, three equivalents of the Grignard reagent are necessary, as one equivalent becomes carbonyl compound 6, one equivalent deprotonates 10 forming an alkene (11), and one equivalent gets incorporated into the indole ring. The nitroso intermediate (4) has been isolated from the reaction. Additionally, reaction of the nitroso intermediate (4) with two equivalents of the Grignard reagent produces the expected indole. The scope of the reaction includes substituted pyridines which can be used to make 4-azaindoles(left) and 6-azaindoles(right). Variations Dobbs modification Adrian Dobbs greatly enhanced the scope of the Bartoli indole synthesis by using an ortho-bromine as a directing group, which is subsequently removed by AIBN and tributyltin hydride. The synthesis of 4-methylindole (3) highlights the ability of this technique to produce highly substituted indoles. See also Fischer indole synthesis References Indole forming reactions Carbon-heteroatom bond forming reactions Name reactions
Bartoli indole synthesis
[ "Chemistry" ]
731
[ "Name reactions", "Carbon-heteroatom bond forming reactions", "Ring forming reactions", "Organic reactions" ]
4,137,557
https://en.wikipedia.org/wiki/Pyeong
A pyeong (abbreviation: py) is a Korean unit of area and floorspace, equal to a square kan or 36 square Korean feet. The ping and tsubo are its equivalent Taiwanese and Japanese units, similarly based on a square bu (ja:步) or ken, equivalent to 36 square Chinese or Japanese feet. Current use Korea In Korea, the period of Japanese occupation produced a pyeong of 400/121 m2, or about 3.3058 m2. It is the standard traditional measure for real estate floorspace, with an average house reckoned as about 25 pyeong, a studio apartment as 8–12 py, and a garret as 1½ py. In South Korea, the unit has been officially banned since 1961 but with little effect prior to the criminalization of its commercial use effective 1 July 2007. Informal use continues, however, including real estate use of unusual fractions of square meters that are equivalent to round amounts of pyeong. Real estate listings on major websites such as Daum show measurements in square meters with the pyeong equivalent. Taiwan In Taiwan, the Taiwanese ping was introduced during the period of Taiwan under Japanese rule; it remains in fairly common use and is about 3.305 m2. Japan In Japan, the usual measure of real estate floorspace is the tatami and the tsubo is reckoned as two tatami. The tatami varies by region but the modern standard is usually taken to be the Nagoya tatami of about 1.653 m2, producing a tsubo of 3.306 m2. It is sometimes reckoned as comprising 10 gō. China In China, the metrication of traditional units would produce a ping of 4 m2, but it is almost unknown, with most real estate floorspace simply reckoned in square meters. The longer length of the Hong Kong foot produces a larger ping of almost 5 m2, but it is similarly uncommon. See also Japanese units of measurement Korean units of measurement Taiwanese units of measurement Chinese units of measurement References Systems of units Units of area Culture of Korea
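Because the Korean, Taiwanese and Japanese figures above differ only in the third decimal place, a small conversion helper makes the arithmetic explicit. The Python sketch below simply encodes the values quoted in this article (and the exact 400/121 m2 ratio behind the 3.3058 m2 figure); the constant and function names are illustrative choices, not part of any standard library.

```python
# Unit sizes in square metres, as quoted in the article.
PYEONG_M2 = 400 / 121    # Korean pyeong / Japanese tsubo, ~3.3058 m^2
TAIWAN_PING_M2 = 3.305   # Taiwanese ping (rounded figure)

def pyeong_to_m2(pyeong: float) -> float:
    """Convert an area in pyeong to square metres."""
    return pyeong * PYEONG_M2

def m2_to_pyeong(m2: float) -> float:
    """Convert an area in square metres to pyeong."""
    return m2 / PYEONG_M2

print(round(pyeong_to_m2(25), 1))   # an "average house" of 25 pyeong -> 82.6 m^2
print(round(m2_to_pyeong(100), 1))  # a 100 m^2 flat -> about 30.2 pyeong
```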
Pyeong
[ "Mathematics" ]
423
[ "Quantity", "Systems of units", "Units of area", "Units of measurement" ]
4,137,589
https://en.wikipedia.org/wiki/Interlingual%20machine%20translation
Interlingual machine translation is one of the classic approaches to machine translation. In this approach, the source language, i.e. the text to be translated is transformed into an interlingua, i.e., an abstract language-independent representation. The target language is then generated from the interlingua. Within the rule-based machine translation paradigm, the interlingual approach is an alternative to the direct approach and the transfer approach. In the direct approach, words are translated directly without passing through an additional representation. In the transfer approach the source language is transformed into an abstract, less language-specific representation. Linguistic rules which are specific to the language pair then transform the source language representation into an abstract target language representation and from this the target sentence is generated. The interlingual approach to machine translation has advantages and disadvantages. The advantages are that it requires fewer components in order to relate each source language to each target language, it takes fewer components to add a new language, it supports paraphrases of the input in the original language, it allows both the analysers and generators to be written by monolingual system developers, and it handles languages that are very different from each other (e.g. English and Arabic). The obvious disadvantage is that the definition of an interlingua is difficult and maybe even impossible for a wider domain. The ideal context for interlingual machine translation is thus multilingual machine translation in a very specific domain. For example, Interlingua has been used as a pivot language in international conferences and has been proposed as a pivot language for the European Union. History The first ideas about interlingual machine translation appeared in the 17th century with Descartes and Leibniz, who came up with theories of how to create dictionaries using universal numerical codes, not unlike numerical tokens used by large language models nowadays. Others, such as Cave Beck, Athanasius Kircher and Johann Joachim Becher worked on developing an unambiguous universal language based on the principles of logic and iconographs. In 1668, John Wilkins described his interlingua in his "Essay towards a Real Character and a Philosophical Language". In the 18th and 19th centuries many proposals for "universal" international languages were developed, the most well known being Esperanto. That said, applying the idea of a universal language to machine translation did not appear in any of the first significant approaches. Instead, work started on pairs of languages. However, during the 1950s and 60s, researchers in Cambridge headed by Margaret Masterman, in Leningrad headed by Nikolai Andreev and in Milan by Silvio Ceccato started work in this area. The idea was discussed extensively by the Israeli philosopher Yehoshua Bar-Hillel in 1969. During the 1970s, noteworthy research was done in Grenoble by researchers attempting to translate physics and mathematical texts from Russian to French, and in Texas a similar project (METAL) was ongoing for Russian to English. Early interlingual MT systems were also built at Stanford in the 1970s by Roger Schank and Yorick Wilks; the former became the basis of a commercial system for the transfer of funds, and the latter's code is preserved at The Computer Museum at Boston as the first interlingual machine translation system. 
In the 1980s, renewed relevance was given to interlingua-based, and knowledge-based approaches to machine translation in general, with much research going on in the field. The uniting factor in this research was that high-quality translation required abandoning the idea of requiring total comprehension of the text. Instead, the translation should be based on linguistic knowledge and the specific domain in which the system would be used. The most important research of this era was done in distributed language translation (DLT) in Utrecht, which worked with a modified version of Esperanto, and the Fujitsu system in Japan. In 2016, Google Neural Machine Translation achieved "zero-shot translation", that is it directly translates one language into another. For example, it might be trained just for Japanese-English and Korean-English translation, but can perform Japanese-Korean translation. The system appears to have learned to produce a language-independent intermediate representation of language (an "interlingua"), which allows it to perform zero-shot translation by converting from and to the interlingua. Outline In this method of translation, the interlingua can be thought of as a way of describing the analysis of a text written in a source language such that it is possible to convert its morphological, syntactic, semantic (and even pragmatic) characteristics, that is "meaning" into a target language. This interlingua is able to describe all of the characteristics of all of the languages which are to be translated, instead of simply translating from one language to another. Sometimes two interlinguas are used in translation. It is possible that one of the two covers more of the characteristics of the source language, and the other possess more of the characteristics of the target language. The translation then proceeds by converting sentences from the first language into sentences closer to the target language through two stages. The system may also be set up such that the second interlingua uses a more specific vocabulary that is closer, or more aligned with the target language, and this could improve the translation quality. The above-mentioned system is based on the idea of using linguistic proximity to improve the translation quality from a text in one original language to many other structurally similar languages from only one original analysis. This principle is also used in pivot machine translation, where a natural language is used as a "bridge" between two more distant languages. For example, in the case of translating to English from Ukrainian using Russian as an intermediate language. Translation process In interlingual machine translation systems, there are two monolingual components: the analysis of the source language and the interlingual, and the generation of the interlingua and the target language. It is however necessary to distinguish between interlingual systems using only syntactic methods (for example the systems developed in the 1970s at the universities of Grenoble and Texas) and those based on artificial intelligence (from 1987 in Japan and the research at the universities of Southern California and Carnegie Mellon). The first type of system corresponds to that outlined in Figure 1. while the other types would be approximated by the diagram in Figure 4. The following resources are necessary to an interlingual machine translation system: Dictionaries (or lexicons) for analysis and generation (specific to the domain and the languages involved). 
A conceptual lexicon (specific to the domain), which is the knowledge base about events and entities known in the domain. A set of projection rules (specific to the domain and the languages). Grammars for the analysis and generation of the languages involved. One of the problems of knowledge-based machine translation systems is that it becomes impossible to create databases for domains larger than very specific areas. Another is that processing these databases is very computationally expensive. Efficacy One of the main advantages of this strategy is that it provides an economical way to make multilingual translation systems. With an interlingua it becomes unnecessary to make a translation pair between each pair of languages in the system. So instead of creating n(n − 1) language pairs, where n is the number of languages in the system, it is only necessary to make 2n pairs between the n languages and the interlingua. The main disadvantage of this strategy is the difficulty of creating an adequate interlingua. It should be both abstract and independent of the source and target languages. The more languages added to the translation system, and the more different they are, the more potent the interlingua must be to express all possible translation directions. Another problem is that it is difficult to extract meaning from texts in the original languages to create the intermediate representation. Existing interlingual machine translation systems Calliope-Aero Carabao Linguistic Virtual Machine Grammatical Framework Number Translator Google Translate uses English internally as a pivot language for some language pairs such as Chinese and Japanese, and more generally for those with "higher quality" neural-network translators with English but not between each other. See also Intermediate representation Pivot language Universal Networking Language Knowledge representation and reasoning Notes External links Interlingua Methods Slides Paper Machine translation Computational linguistics
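The economy argument in the Efficacy section can be checked with a couple of lines of arithmetic: direct translation between every ordered pair of n languages needs n(n − 1) translators, whereas routing through an interlingua needs only 2n (one analyser and one generator per language). The Python snippet below merely evaluates those two expressions; it is an illustration of the counting argument, not part of any MT system.

```python
def direct_pairs(n: int) -> int:
    """Directed translators needed to translate directly between n languages."""
    return n * (n - 1)

def interlingua_pairs(n: int) -> int:
    """Translators needed when every language is mapped to and from one interlingua."""
    return 2 * n

for n in (3, 10, 24):  # 24 is, for example, the number of official EU languages
    print(n, direct_pairs(n), interlingua_pairs(n))
# 3 6 6
# 10 90 20
# 24 552 48
```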
Interlingual machine translation
[ "Technology" ]
1,686
[ "Machine translation", "Natural language and computing", "Computational linguistics" ]
4,137,670
https://en.wikipedia.org/wiki/Urban%20flight
Urban flight, sometimes referred to as suburban colonization, is the movement of people from an urban area to its suburbs. The phenomenon is often studied for the effects that it has on the city, especially the reduction of political power and the reduction of tax revenue which occurs as a result of the depopulation. Services and taxes favor suburbs As hinterlands acquire more population and more power then, according to the one man one vote principle, they get more votes in representative bodies, notably metropolitan regions or greater urban areas such as the Greater Toronto Area Greater Montreal, Greater Paris or Greater London. Suburban votes then come to outweigh inner city votes, just as, a century earlier, urbanization or urban colonialization diminished the power of rural voters. Decisions of these bodies accordingly begin to favor people who live in suburbs, providing more car-oriented and commuter services and more favorable property tax rates for single family homes as tenants in downtown apartment buildings pay higher rates. In urban areas that are growing rapidly, services may be developed that favor urban sprawl, such as large trunk sewers, express highways or shopping malls, as other services such as youth recreation disappear from downtown areas. This increases population drain to the suburbs as quality of life drops, but the increased population may then drive more people further out to the hinterlands which increases the political rewards (especially political donations from real estate developers building greenfield developments) for sprawl. Urban bankruptcy requires outside aid In very extreme cases, where cities are unable to recover costs of serving a vast suburban hinterland and are politically controlled by a larger jurisdiction, such as Manhattan within New York State, cities may go bankrupt as New York City in fact did in the 1970s. This had been predicted by urbanists including Jane Jacobs who had fought Robert Moses and his plan for the Cross-Manhattan Expressway system which was eventually defeated. The City only recovered with federal aid and urban autonomy rights including the right to levy its own income tax which it still has. Suburban flight polarizes communities Cities with impoverished downtown services can suffer riots or major unrest, as Los Angeles and Detroit did in the 1960s to 1980s. Such incidents speed the flight of middle class residents to the suburbs and sometimes to gated community developments where they are insulated from urban problems, and consume a very different range of services than downtown residents, which again are favored strongly by political representatives. Forced mergers further reduce downtown power In some cases, notably Toronto and Montreal in the 1990s, a larger political unit will force smaller urban units to merge against the will of residents, and this further increases the hold of the outer suburban regions as they hold a majority of seats in the new aggregated city council. Where a strong mayor system applies, the larger number of suburban residents will likely also control that post, and the need to campaign over a larger urban area will tend to exclude grassroots candidates or anti-poverty activist candidates not funded nor supported by wealthier suburban voters or real estate developers. Those who speak for the city may live on its outer edges. 
Mayors may be former mayors of former suburban cities such as Mel Lastman, former mayor of North York who became Mayor of Toronto once those cities (and three others) were merged in 1998. The political consequences of both mergers were severe. In Quebec, the Parti Québécois government was defeated by Jean Charest who permitted Montreal to hold a referendum in which it was permitted to de-amalgamate politically and regain the separate pre-merger urban identities. In Toronto no such relief occurred but a Province of Toronto movement emerged under Jane Jacobs (who had moved to Toronto in the 1960s and again fought expressways penetrating the downtown there, notably the Spadina Expressway and Front Street Extension), 2000 Lastman opponent Tooker Gomberg and Mayor in 2003 (after Lastman) David Miller. Theoretical analyses Joel Garreau in Edge City described the growth of cities on the edge of major urban areas, which became population and power centres in themselves. Dale Johnston in Lost in the Suburbs described a cultural and political gap that occurred in New Jersey and Ontario in the early 1990s when suburban voters began to outnumber urban or rural voters, and began to perceive that they were paying taxes to provide urban areas with services that were not duplicated in their community. Meanwhile, suburban communities would export problems to the cities, typically in the form of drug addicts, homelessness, smog, prostitution and other crimes serving suburban residents, and the need to accommodate a large number of commuters and their sewage and parking requirements. As downtown residents and suburban voters became estranged, each perceived themselves subsidizing the other, and accordingly a common solution, called in both New Jersey and Ontario the Common Sense Revolution, transferred funds from urban needs to suburban sprawl, triggering a decline in urban quality of life in both places, as population further spread out and downtowns became more hostile to suburban visitors. See also Core-periphery Internal colonialism Rural flight Further reading Urban planning Internal migration
Urban flight
[ "Engineering" ]
999
[ "Urban planning", "Architecture" ]
4,137,889
https://en.wikipedia.org/wiki/Tauopathy
Tauopathies are a class of neurodegenerative diseases characterized by the aggregation of abnormal tau protein. Hyperphosphorylation of tau proteins causes them to dissociate from microtubules and form insoluble aggregates called neurofibrillary tangles. Various neuropathologic phenotypes have been described based on the anatomical regions and cell types involved as well as the unique tau isoforms making up these deposits. The designation 'primary tauopathy' is assigned to disorders where the predominant feature is the deposition of tau protein. Alternatively, diseases exhibiting tau pathologies attributed to different and varied underlying causes are termed 'secondary tauopathies'. Some neuropathologic phenotypes involving tau protein are Alzheimer's disease, frontotemporal dementia, progressive supranuclear palsy, and corticobasal degeneration. Tau protein Tau protein, also called tubulin associated unit or microtubule-associated protein tau (MAPT), is a microtubule-associated protein that promotes polymerization and stabilization into microtubules by binding to tubulin. Variants of Tau isoforms, spanning from 352 to 441 amino acids, arise through the alternative splicing of exons 2,3 and 10 within the MAPT gene. The six isoforms are differentiated by the inclusion and exclusion of inserts of either 29 or 58 amino acids in the N-terminus domain. Furthermore, the isoforms are categorized based on the presence of either three (3R tau isoforms) or four (4R tau isoforms) tandem repeat sequences each consisting of 31 or 32 amino acids. Biomarkers Neuroimaging Positron emission tomography (PET) is one type of biomarker capable of identifying elevated levels of tau in patients with Alzheimer's disease. PET is a great tool that can supplement information such as regions with higher neuropathologic burden than others. But it needs to be eligible, and have more positive outcomes than negative, such as exposure to radioactivity. Biofluid The analysis of cerebrospinal fluid (CSF) represents a potential avenue for the development of biomarkers in tauopathies. Substantial data on CSF biomarkers is available for Alzheimer's disease (AD), focusing on measures related to total and phosphorylated forms of tau and amyloid-beta (Aβ) protein. Elevated CSF tau and decreased Aβ levels constitute the characteristic CSF signature of AD, allowing differentiation from controls. This signature may also assist in distinguishing atypical forms of AD pathology associated with clinical frontotemporal dementia (FTD) from those with underlying frontotemporal lobar degeneration (FTLD)-Tau pathology. Alzheimer's disease Alzheimer's disease (AD) is clinically characterized by a progressive decline in memory and cognitive functions, leading to severe dementia. Microscopically, AD is identified by the presence of two types of insoluble fibrous materials: (1) extracellular amyloid (Aβ) protein forming senile plaques and (2) intracellular neurofibrillary lesions (NFL) composed of abnormally and hyperphosphorylated tau protein. While AD is not strictly considered a prototypical tauopathy, as tau pathology coexists with Aβ protein deposition, the 'amyloid cascade hypothesis' posits that Aβ accumulation is the primary factor driving AD pathogenesis. Nevertheless, AD neurofibrillary lesions were the first to undergo ultrastructural and biochemical analysis, thus laying the foundation for in-depth studies on tau protein deposition in various tauopathies. 
Neuropathologic phenotypes Frontotemporal dementia Frontotemporal dementia is a part of a diverse spectrum of disorders clinically marked by dysfunction in the frontal and temporal lobes, collectively referred to as frontotemporal lobar degeneration (FTLD). The primary histological characteristics include profound neuronal loss, enlarged neurons, and distinctive spherical argyrophilic inclusions known as Pick bodies (PBs). These PBs primarily consist of hyperphosphorylated tau protein, with tau protein presenting as two major bands at 60 and 64 kDa and a variable, minor band at 69 kDa. Filamentous tau deposits in nerve cells are predominantly composed of 3R tau isoforms. Progressive supranuclear palsy Progressive supranuclear palsy (PSP) is a type of tauopathy, but its cause has not yet been discovered. In PSP, abnormal phosphorylation of tau protein causes vital protein filaments in the nerve cells to break down, a phenomenon called "neurofibrillary" degeneration. Typical symptoms of PSP include abnormal speech, balance impairment, and cognitive and memory impairment. Like CBD, PSP is classified as a 4R tauopathy, and for that reason PSP is often selected for trials of anti-tau therapeutics. Corticobasal degeneration Corticobasal degeneration (CBD) is an increasingly acknowledged neurodegenerative disorder characterized by both motor and cognitive dysfunction. In affected regions, histological examination reveals pronounced neuronal loss accompanied by spongiosis and gliosis, cortical ballooned cells, and notable intracytoplasmic filamentous tau pathology in both glial and neuronal cells. Biochemically, the distinctive tau profile in CBD cases manifests as a prominent tau doublet at 64 and 68 kDa, which is variably identified. These bands predominantly consist of hyperphosphorylated 4R tau isoforms, leading to the classification of CBD as a 4R tauopathy. Tau therapeutics Currently, there are no specific treatments for tauopathies. To date, attempts have been made to target neurotransmitter disturbances to relieve disease symptoms. For AD, developing a specific treatment is difficult because the pathological changes begin early, well before symptoms appear. Even though there is no current treatment for tauopathies, there are treatments that can relieve symptoms. Speech therapy can be beneficial for aphasia symptoms, while symptoms such as depression and apathy are frequently managed with pharmaceuticals. For physical challenges, physical therapy has proven helpful in extending motor function for patients. Other diseases Primary age-related tauopathy (PART) dementia, with NFTs similar to AD, but without amyloid plaques. Chronic traumatic encephalopathy (CTE) Progressive supranuclear palsy (PSP) Corticobasal degeneration (CBD) Frontotemporal dementia and parkinsonism linked to chromosome 17 (FTDP-17) Vacuolar tauopathy Lytico-bodig disease (Parkinson-dementia complex of Guam) Ganglioglioma and gangliocytoma Meningioangiomatosis Subacute sclerosing panencephalitis (SSPE) As well as lead encephalopathy, tuberous sclerosis, pantothenate kinase-associated neurodegeneration, and lipofuscinosis See also Proteopathy References External links Dementia Medical signs Histopathology Cytoskeletal defects
Tauopathy
[ "Chemistry" ]
1,522
[ "Histopathology", "Microscopy" ]
4,138,124
https://en.wikipedia.org/wiki/Southern%20Hemisphere%20Auroral%20Radar%20Experiment
The Southern Hemisphere Auroral Radar Experiment, or SHARE, which started in 1988, is an Antarctic research project designed to observe velocities and irregularities of electric fields in the ionosphere and magnetosphere. It is operated jointly by the University of Natal, Potchefstroom University, the British Antarctic Survey and Johns Hopkins University and operates out of British Halley Station, South African SANAE IV Station and Japanese Showa Station. Using a total of 16 antennas, each mounted on a 12 m tower and radiating on fixed frequencies in the 8–20 MHz range, SHARE transmits a radio-frequency pulse into the upper atmosphere every two minutes. The three stations' ranges overlap to cover most of the Antarctic continent. SHARE is part of the international Super Dual Auroral Radar Network (SuperDARN). It supplies valuable data to track space weather. Meteorology research and field projects Radio frequency propagation Plasma physics facilities Ground radars Astronomical experiments in the Antarctic 1988 establishments in Antarctica
Southern Hemisphere Auroral Radar Experiment
[ "Physics" ]
195
[ "Physical phenomena", "Spectrum (physical sciences)", "Plasma physics", "Radio frequency propagation", "Electromagnetic spectrum", "Waves", "Plasma physics stubs", "Plasma physics facilities" ]
4,138,132
https://en.wikipedia.org/wiki/Kinesin%2013
The kinesin-13 family is a subfamily of motor proteins known as kinesins. Most kinesins transport materials or cargo around the cell while traversing along microtubule polymer tracks with the help of ATP-hydrolysis-created energy. Structure They are easily identified by their three typical structural components including a highly conserved structural domain, catalytic core, and microtubule binding sites. The kinesin-13 family, unlike other kinesins, has an internally positioned motor domain. They were initially named KIF-M because of the unique location of their catalytic core in the middle of the polypeptide between the N-terminal globular domain and the C-terminal stalk, but they are truly special due to their versatile nature. The kinesin-13 family's molecular mechanism is less understood than other classes of kinesins, which have their motor domains at one end of the molecule or the other. They are capable of traveling to both the minus and plus ends of microtubules whereas most motors are unidirectional. Thus they can catalytically depolymerize a microtubule from both ends, making it a very efficient process. The exact mechanism of kinesin-13-activated microtubule depolymerization remains unclear; however, recent biochemical and structural studies have revealed more detailed class-specific features, enabling researchers to formulate a model. The protein first contacts the side wall of a microtubule. This is not a stable interaction because the convex surface of the catalytic core does not fit the flat surface of the straight microtubule protofilament. Steric hindrance between the molecule's neck and the adjacent protofilament further inhibits full contact between the protein and the microtubule and only facilitates one-dimensional diffusion along the microtubule. At this time, the protein's nucleotide binding pocket is trapped in an open state so that the structure is not hydrolyzing ATP. Once the motor reaches the end of the microtubule, the protofilament spontaneously curves itself, allowing the motor to make full contact with the tubulin subunit. More MCAK molecules collectively bind to the curved region, supporting the theory that they do not actively peel away the microtubule but wait for it to adopt this curved conformation. They stabilize the curved conformation by binding to the end of the microtubule and then catalyze depolymerization. Functions during mitosis The major function of mitosis is to separate replicated sister chromatids, and this is accomplished in part during anaphase A when "kinetochore microtubules (or kMTs)" that link the sister chromatids to opposite spindle poles shorten by depolymerization, exerting forces on the chromatids that pull them to the poles. In Drosophila there is evidence that sister chromatids are moved to opposite spindle poles by a "kinesin-13 dependent pacman-flux mechanism" in which one kinesin-13 isoform, KLP59c, localized to kinetochores, facilitates the depolymerization of the end of the kMTs facing the chromatid (pacman), whereas a second kinesin-13 isoform, KLP10A, localized on the spindle poles, facilitates the depolymerization of the opposite end of the kMTs facing the poles (flux). See also KIF13A References External links Video Illustrations Motor proteins
Kinesin 13
[ "Chemistry" ]
713
[ "Molecular machines", "Motor proteins" ]
4,138,226
https://en.wikipedia.org/wiki/Brian%20Wowk
Brian G. Wowk is a Canadian medical physicist and cryobiologist known for the discovery and development of synthetic molecules that mimic the activity of natural antifreeze proteins in cryopreservation applications, sometimes called "ice blockers". As a senior scientist at 21st Century Medicine, Inc., he was a co-developer with Greg Fahy of key technologies enabling cryopreservation of large and complex tissues, including the first successful vitrification and transplantation of a mammalian organ (kidney). Wowk is also known for early theoretical work on future applications of molecular nanotechnology, especially cryonics, nanomedicine, and optics. In the early 1990s he wrote that nanotechnology would revolutionize optics, making possible virtual reality display systems optically indistinguishable from real scenery as in the fictitious Holodeck of Star Trek. These systems were described by Wowk in the chapter "Phased Array Optics" in the 1996 anthology Nanotechnology: Molecular Speculations on Global Abundance , and highlighted in the September 1998 Technology Watch section of Popular Mechanics magazine. Early life and education He obtained his undergraduate and graduate degrees from the University of Manitoba in Winnipeg, Canada. Dr. Wowk obtained his PhD in physics in 1997. His graduate studies included work in online portal imaging for radiotherapy at the Manitoba Cancer Treatment and Research Foundation (now Cancer Care Manitoba), and work on artifact reduction for functional magnetic resonance imaging at the National Research Council of Canada. His work in the latter field is cited by several text books, including Functional MRI which includes an image he obtained of magnetic field changes inside the human body caused by respiration. References Notes 1.Nanotechnology: Molecular Speculations on Global Abundance 2.Functional MRI External links 21st Century Medicine Cell Repair Technology Medical Time Travel Living people Cryobiology Cryonicists University of Manitoba alumni Medical physicists Year of birth missing (living people)
Brian Wowk
[ "Physics", "Chemistry", "Biology" ]
390
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
4,138,437
https://en.wikipedia.org/wiki/Nujol
Nujol is a brand of mineral oil by Plough Inc., CAS number 8012-95-1 and density 0.838 g/mL at 25 °C, used in infrared spectroscopy. It is a heavy paraffin oil, so it is chemically inert and has a relatively uncomplicated IR spectrum, with major peaks between 2950–2800, 1465–1450, and 1380–1300 cm−1. The empirical formula of Nujol is hard to determine exactly because it is a mixture, but it is essentially the general alkane formula CnH2n+2 where n is very large. To obtain an IR spectrum of a solid, a sample is combined with Nujol in a mortar and pestle or some other device to make a mull (a very thick suspension), and is usually sandwiched between potassium chloride or sodium chloride plates before being placed in the spectrometer. For very reactive samples, the layer of Nujol can provide a protective coating, preventing sample decomposition during acquisition of the IR spectrum. When preparing the sample, it is important to keep it from being saturated with Nujol, as this will result in erroneous spectra in which the Nujol peaks dominate and mask the actual sample's peaks. References External links MSDS data sheet Nujol's historic use as an alternative medicine CAS Number for Nujol Hydrocarbon solvents Infrared spectroscopy Alkanes
Nujol
[ "Physics", "Chemistry", "Astronomy" ]
291
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Astronomy stubs", "Organic compounds", "Alkanes", "Infrared spectroscopy", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs", "Organic chemistry stubs" ]
4,138,548
https://en.wikipedia.org/wiki/Law%20of%20triviality
The law of triviality is C. Northcote Parkinson's 1957 argument that people within an organization commonly give disproportionate weight to trivial issues. Parkinson provides the example of a fictional committee whose job was to approve the plans for a nuclear power plant spending the majority of its time on discussions about relatively minor but easy-to-grasp issues, such as what materials to use for the staff bicycle shed, while neglecting the proposed design of the plant itself, which is far more important and a far more difficult and complex task. The law has been applied to software development and other activities. The terms bicycle-shed effect, bike-shed effect, and bike-shedding were coined based on Parkinson's example; it was popularized in the Berkeley Software Distribution community by the Danish software developer Poul-Henning Kamp in 1999 and, due to that, has since become popular within the field of software development generally. Argument The concept was first presented as a corollary of his broader "Parkinson's law" spoof of management. He dramatizes this "law of triviality" with the example of a committee's deliberations on an atomic reactor, contrasting it to deliberations on a bicycle shed. As he put it: "The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved." A reactor is so vastly expensive and complicated that an average person cannot understand it (see ambiguity aversion), so one assumes that those who work on it understand it. However, everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to implement their own proposal and demonstrate personal contribution. After a suggestion of building something new for the community, like a bike shed, problems arise when everyone involved argues about the details. This is a metaphor indicating that it is not necessary to argue about every little feature based simply on having the knowledge to do so. Some people have commented that the amount of noise generated by a change is inversely proportional to the complexity of the change. Behavioral research has produced evidence which confirms theories proposed by the law of triviality. People tend to spend more time on small decisions than they should, and less time on big decisions than they should. A simple explanation is that during the process of making a decision, one has to assess whether enough information has been collected to make the decision. If people make mistakes about whether they have enough information, then they will tend to feel overwhelmed by large and complex matters and stop collecting information too early to adequately inform their big decisions. The reason is that big decisions require collecting information for a long time and working hard to understand its complex ramifications. This leaves more of an opportunity to make a mistake (and stop) before getting enough information. Conversely, for small decisions, where people should devote little attention and act without hesitation, they may inefficiently continue to ponder for too long, partly because they are better able to understand the subject. Related principles and formulations There are several other principles, well known in specific problem domains, which express a similar sentiment. 
Sayre's law is a more general principle, which holds (among other formulations) that "In any dispute, the intensity of feeling is inversely proportional to the value of the issues at stake"; many formulations of the principle focus on academia. See also Analysis paralysis Attention inequality Busy work Dunning–Kruger effect Fredkin's paradox Hofstadter's law How many angels can dance on the head of a pin? Jevons paradox List of eponymous laws Omission bias Peter principle Procrastination Narcissism of small differences Sayre's Law Scope neglect Snackwell effect Student syndrome Time management Tyranny of small decisions Zero-risk bias References Further reading Karl Fogel, Producing Open Source Software: How to Run a Successful Free Software Project, O'Reilly, 2005, , "Bikeshed Effect" pp. 135, 261–268 (also online) Grace Budrys, Planning for the nation's health: a study of twentieth-century developments in the United States, Greenwood Press, 1986, , p. 81 (see extract at Internet Archive) Bob Burton et al., Nuclear Power, Pollution and Politics, Routledge, 1990, , p. ix (see extract at Google Books) Darren Chamberlain et al., Perl Template Toolkit, O'Reilly, 2004, , p. 412 (see extract at Google Books) Donelson R. Forsyth, Group Dynamics, Brooks/Cole, 1990, , p. 289 (see extract at Internet Archive) Henry Bosch, The Director at Risk: Accountability in the Boardroom, Allen & Unwin, 1995, , p. 92 (see extract at Google Books) Brian Clegg, Crash Course in Personal Development, Kogan Page, 2002, , p. 3 (see extract at Google Books) Richard M. Hodgetts, Management: Theory, Process, and Practice, Saunders, 1979, , p. 115 (see extract at Google Books) Journal, v. 37–38 1975–1980, Chartered Institute of Transport, p. 187 (see extract at Google Books) Russell D. Archibald, Managing High-Technology Programs and Projects, John Wiley and Sons, 2003, , p. 37 (see extract at Google Books) Kishor Bhagwati, Managing Safety: A Guide for Executives, Wiley-VCH, 2007, , p. 54 (see extract at Google Books) Jan Pen, Harmony and Conflict in Modern Society, (Trans. Trevor S. Preston) McGraw–Hill, 1966 p. 195 (see extract at Internet Archive) Derek Salman Pugh et al., Great Writers on Organizations, Dartmouth, 1993, , p. 116 (see extract at Google Books) The Federal Accountant v. 13 (September 1963 – June 1964), Association of Government Accountants, Federal Government Accountants Association, Cornell University Graduate School of Business and Public Administration, p. 16 (see extract at Google Books) Al Kelly, How to Make Your Life Easier at Work, McGraw–Hill, 1988, , p. 127 (see extract at Google Books) Henry Mintzberg, Power in and Around Organizations: Dynamic Techniques of Winning, Prentice–Hall, 1983, , p. 75 (see extract at Google Books) The Building Services Engineer v.40 1972–1973, Institution of Heating and Ventilating Engineers (Great Britain), Chartered Institution of Building Services (see extract at Google Books) Charles Hampden-Turner, Gentlemen and Tradesmen: The Values of Economic Catastrophe, Routledge, 1983, , p. 151 (see extract at Google Books) External links "Why Should I Care What Color the Bikeshed Is?" (FreeBSD FAQ) Adages 1950s neologisms Triviality Organizational behavior 1957 introductions
Law of triviality
[ "Biology" ]
1,400
[ "Behavior", "Organizational behavior", "Human behavior" ]
4,138,713
https://en.wikipedia.org/wiki/CBOSS%20Corporation
CBOSS Corporation (Convergent Business Operation Support System) is a telecom company primarily based in Russia, with offices in Finland, the UAE and Vietnam. CBOSS Corporation, also known as CBOSS Group, develops IT solutions for the automation of telecommunications enterprises. MTS, one of the three biggest Russian mobile operators, used the CBOSS billing solution from 1998 until 2004, when it switched to FORIS OSS-IN from the STROM Telecom company (Mikhail Severov, "Пробил час большого Билла. Петербургские операторы меняют счетные системы" ["Big Bill's hour has struck. St. Petersburg operators are changing their billing systems"] // SpbIT.su, 2005-04-05: "Mobile TeleSystems. Until recently, its corporate standard was the billing system developed by the Moscow-based CBOSS, which it has operated since 1998. Last year, however, the situation changed: in particular, the Foris system from the Czech company STROM Telecom was launched in Moscow."). In 2004, CBOSS was rated as the 11th biggest IT company in Russia by CNews.ru. In 2006, CBOSS Corporation was recognized as the #1 IT provider of integrated solutions for telecommunications in EMEA by Informa Telecoms Group. In February 2004, CBOSS acquired the online billing solutions subsidiary of Fujitsu Services Oy and its product, the rtBilling (CBOSSrtb) prepaid billing system. This system was used by several mobile operators: British O2, Australian Optus, Canadian Rogers, Austrian One GmbH and Colombian Colombia Movil. In 2008, CBOSS was selected by the German MVNECO GmbH to provide IT infrastructure and IT solutions to implement mobile virtual network activities. References External links Company website Telecommunications companies of Russia Software companies of Russia Telecommunications companies established in 1996 Russian brands Telecommunications billing systems Business software companies
CBOSS Corporation
[ "Technology" ]
584
[ "Telecommunications systems", "Telecommunications billing systems" ]
1,531,820
https://en.wikipedia.org/wiki/Component-based%20Scalable%20Logical%20Architecture
CSLA .NET is a software framework created by Rockford Lhotka that provides a standard way to create robust object oriented programs using business objects. Business objects are objects that abstract business entities in an object oriented program. Some examples of business entities include sales orders, employees, or invoices. Although CSLA itself is free to download, the only documentation the creator provides are his books and videos, which are not free. CSLA (Component-based Scalable Logical Architecture) was originally targeted toward Visual Basic 6 in the book Visual Basic 6.0 Business Objects by Lhotka. With the advent of Microsoft .NET, CSLA was completely rewritten from the ground up, with no code carried forward, and called CSLA .NET. This revision took advantage of Web Services and the object oriented languages that came with Microsoft .NET (in particular, Visual Basic.NET and C#). CSLA .NET was expounded in Expert C# Business Objects and Expert One-on-One Visual Basic .NET Business Objects , both written by Lhotka. Although CSLA and CSLA .NET were originally targeted toward Microsoft programming languages, most of the framework can be applied to most object oriented languages. Current information about CSLA .NET is available through Lhotka's self-published Using CSLA 4 ebook series. Features of CSLA Smart data A business object encapsulates all the data and behavior (business logic and rules) associated with the object it represents. For example, an OrderEdit object will contain the data and business rule implementations necessary for the application to correctly allow the user to edit order information. Rules engine The CSLA .NET framework provides a rules engine that supports validation rules, business rules, and authorization rules. These rules are attached to object instances or properties, and are automatically invoked by CSLA .NET when necessary. Validation rules may be implemented using the CSLA .NET rule engine, or through the use of the DataAnnotations feature of Microsoft .NET. Object persistence Data creation, retrieval, updates, and deletes (CRUD) are performed by clearly defined methods of the business object associated with the data testing. Data access logic is clearly separated from business logic, typically using a repository pattern or other mainstream object-oriented programming techniques. Metastate maintenance CSLA .NET manages the metastate about each business object. For example, each business object tracks information about when it is new (it represents data that hasn't been saved yet) and when it is dirty (it needs to be saved to the database either because it is new or because its member data has been changed since it was last loaded). Business objects can also be marked for deletion so they can later be deleted (for example when a user has pressed a button confirming his or her intention to delete the rows.) n-Level undo This feature makes it possible for an object or collection of objects to maintain a collection of states. This allows the object to easily revert to previous states. This can be useful when a user wants to undo previous edits multiple times in an application. The feature can also allow a user to redo multiple edits that were previously undone. This feature can provide rich functionality for desktop application and web applications. One note of caution would be to consider the overhead for high-transaction web-based applications. 
n-Level undo capability will require storing the previous state of an application generally accessed by reflection. This is common practice in desktop applications where changes must be "Applied". In web based designs, the added storage may pose unnecessary overhead as changes are generally submitted in batch and do not require the same level of "undo" capability. Business rule tracking Allows objects to maintain collections of "broken rule" objects. Broken rules will exist for an object until it is in a valid state, meaning it is ready to be persisted to the database. BrokenRule objects are usually associated with validation logic such as ensuring that no alphabetic characters are entered into a phone number field. For example, if an Account object has a PhoneNumber property, and that property is assigned a phone number with alphabetic characters, the Account object's IsValid property will become false (making it impossible to save to the database) and then a new BrokenRule object will be created and assigned to the Account's Broken Rules collection. The rule will disappear when the invalid phone number is corrected making the Account object capable of saving itself to the database. Extended features of CSLA Simple UI creation Business objects created using CSLA .NET fully support data binding for all Microsoft .NET UI technologies, including Windows Runtime (WinRT), WPF, Web Forms, ASP.NET MVC, Windows Phone, Silverlight, and Windows Forms. Data-bound controls like DataGrids and ListBoxes can be bound to business objects instead of more generalized database objects like ADO.NET DataSets and DataTables. Distributed data access The CSLA .NET framework implements a concept called mobile objects or mobile agents to allow objects to move across network boundaries using WCF, Web Services, or other technologies. As a result, the data access enjoys location transparency, meaning that the logic may run on the client workstation or server depending on the application's configuration. It can also be configured to use manual database transactions or distributed two-phase commit transactions. Data access logic is cleanly separated from business logic, and can be implemented using any data access technology available on the Microsoft .NET platform. Examples include ADO.NET Entity Framework, raw ADO.NET, nHibernate, etc. Web Services support Business logic created with the CSLA .NET framework can easily be exposed as a web service to remote consumers. This can be done using server-side Microsoft .NET technologies such as Web API, WCF, and asmx web services. References Training CSLA.NET Training Books Using CSLA 4 ebook series Expert C# 2008 Business Objects Expert VB 2008 Business Objects Using CSLA .NET 3.0 CSLA .NET Version 2.1 Handbook Expert C# 2005 Business Objects Expert VB 2005 Business Objects Expert C# Business Objects Expert VB Business Objects Visual Basic 6 Distributed Objects Visual Basic 6 Business Objects Visual Basic 5 Business Objects Web sites CSLA .NET Training CSLA .NET home page CSLA .NET on GitHub CSLA .NET community forum External links Rockford Lhotka's website Application programming interfaces Component-based software engineering C Sharp libraries
Component-based Scalable Logical Architecture
[ "Technology" ]
1,325
[ "Component-based software engineering", "Components" ]
1,531,987
https://en.wikipedia.org/wiki/Comparison%20of%20documentation%20generators
The following tables compare general and technical information for a number of documentation generators. Please see the individual products' articles for further information. Unless otherwise specified in footnotes, comparisons are based on the stable versions without any add-ons, extensions or external programs. Note that many of the generators listed are no longer maintained. General information Basic general information about the generators, including: creator or company, license, and price. Supported formats The output formats the generators can write. Other features See also Code readability Documentation generator Literate programming Self-documenting code Notes References Documentation generators
Comparison of documentation generators
[ "Technology" ]
116
[ "Software comparisons", "Computing comparisons" ]
1,532,037
https://en.wikipedia.org/wiki/Materials%20recovery%20facility
A materials recovery facility, materials reclamation facility, materials recycling facility or multi re-use facility (MRF, pronounced "murf") is a specialized waste sorting and recycling system that receives, separates and prepares recyclable materials for marketing to end-user manufacturers. Generally, the main recyclable materials include ferrous metal, non-ferrous metal, plastics, paper, glass. Organic food waste is used to assist anaerobic digestion or composting. Inorganic inert waste is used to make building materials. Non-recyclable high calorific value waste is used to making RDF (Refuse Derived Fuel) and SRF (Solid Recovered Fuel.) Industry and locations In the United States, there are over 300 materials recovery facilities. The total market size is estimated at $6.6B as of 2019. As of 2016, the top 75 were headed by Sims Municipal Recycling out of Brooklyn, New York. Waste Management operated 95 MRF facilities total, with 26 in the top 75. ReCommunity operated 6 in the top 75. Republic Services operated 6 in the top 75. Waste Connections operated 4 in the top 75. Business economics In 2018, a survey in the Northeast United States found that the processing cost per ton was $82, versus a value of around $45 per ton. Composition of the ton included 28% mixed paper and 24% old corrugated containers (OCC). Prices for OCC declined into 2019. Three paper mill companies have announced initiatives to use more recycled fiber. Glass recycling is expensive for these facilities, but a study estimated that costs could be cut significantly by investments in improved glass processing. In Texas, Austin and Houston have facilities which have invested glass recycling, built and operated by Balcones Recycling and FCC Environment, respectively. Robots have spread across the industry, helping with sorting. Process Waste enters a MRF when it is dumped onto the tipping floor by the collection trucks. The materials are then scooped up and placed onto conveyor belts, which transports it to the pre-sorting area. Here, human workers remove some items that are not recyclable, which will either be sent to a landfill or an incinerator. Between 5 and 45% of "dirty" MRF material is recovered. Potential hazards are also removed, such as lithium batteries, propane tanks, and aerosol cans, which can create fires. Materials like plastic bags and hoses, which can entangle the recycling equipment, are also removed. From there, materials are transported via another conveyer belt to the disk screen, which separates wide and flat materials like flattened cardboard boxes from items like cans, jars, paper, and bottles. Flattened boxes ride across the disk screen to the other side, while all other materials fall below, where paper is separated from the waste stream with a blower. The stream of cardboard and paper is overseen by more human workers, who ensure no plastic, metal, or glass is present. Newer MRFs or retrofitted ones may use industrial robots instead of humans for pre-sorting and for quality control. However, complete removal of human labor from the sortation process is unlikely for the foreseeable future, as one needs to replicate the dexterity of the human hand and nervous system for removing every type of contaminant within a material stream. 
The technical limitations of this involve advanced concepts in mechatronics and computer science, where a robot hand would need to be designed, and a highly flexible algorithm that creates another precise movement algorithm within the time constraints of the system (say, the highly approximate estimate of 30,000 lines of code to do this on a modern processor would trigger too long of a delay to be effective on a sortation line). In other words, one would need to search an encyclopedia of said robotic hand motions for every configuration of waste for every pick, and this may be computationally insurmountable, even with quantum computing, as every conditional would need to be checked every iteration. Metal is separated from plastics and glass first with electromagnets, which removes ferrous metals. Non-ferrous metals like aluminum are then removed with eddy current separators. The glass and plastic streams are separated by further disk screens. The glass is crushed into cullet for ease of transportation. The plastics are then separated by polymer type, often using infrared technology (optical sorting). Infrared light reflects differently off different polymer types; once identified, a jet of air shoots the plastic into the appropriate bin. MRFs might only collect and recycle a few polymers of plastic, sending the rest to landfills or incinerators. The separated materials are baled and sent to the shipping dock of the facility. Types Clean A clean MRF accepts recyclable materials that have already been separated at the source from municipal solid waste generated by either residential or commercial sources. There are a variety of clean MRFs. The most common are single stream where all recyclable material is mixed, or dual stream MRFs, where source-separated recyclables are delivered in a mixed container stream (typically glass, ferrous metal, aluminum and other non-ferrous metals, PET [No.1] and HDPE [No.2] plastics) and a mixed paper stream including corrugated cardboard boxes, newspapers, magazines, office paper and junk mail. Material is sorted to specifications, then baled, shredded, crushed, compacted, or otherwise prepared for shipment to market. Mixed-waste processing facility (MWPF) / Dirty MRF A mixed-waste processing system, sometimes referred to as a dirty MRF, accepts a mixed solid waste stream and then proceeds to separate out designated recyclable materials through a combination of manual and mechanical sorting. The sorted recyclable materials may undergo further processing required to meet technical specifications established by end-markets while the balance of the mixed waste stream is sent to a disposal facility such as a landfill. Today, MWPFs are attracting renewed interest as a way to address low participation rates for source-separated recycling collection systems and prepare fuel products and/or feedstocks for conversion technologies. MWPFs can give communities the opportunity to recycle at much higher rates than has been demonstrated by curbside or other waste collection systems. Advances in technology make today’s MWPF different and, in many respects better, than older versions. Wet MRF Around 2004, new mechanical biological treatment technologies were beginning to utilise wet MRFs. These combine a dirty MRF with water, which acts to densify, separate and clean the output streams. It also hydrocrushes and dissolves biodegradable organics in solution to make them suitable for anaerobic digestion. History In the United States, modern MRFs began in the 1970s. 
Peter Karter established Resource Recovery Systems, Inc. in Branford, Connecticut, the "first materials recovery facility (MRF)" in the US. See also Cradle-to-cradle design Curbside collection List of waste treatment technologies List of waste types Mechanical biological treatment Resource recovery Transfer station (waste management) Waste characterization Waste sorting References External links "Coming soon! van der Linde's amazing recycling machine" "Materials Recovery Facility Solutions" The Role of MRFS in Modern Day Waste Management Environmental engineering Recycling Waste treatment technology Articles containing video clips
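The order of separation stages described in the Process section above can be summarised in a small, illustrative model. The sketch below is only a schematic simplification; the material names, categories, and routing rules are assumptions made for illustration and do not describe any particular facility:

```python
# Schematic model of a single-stream MRF sorting line (illustrative only).
HAZARDS = {"lithium battery", "propane tank", "aerosol can", "plastic bag"}
FIBRE = {"cardboard box", "office paper", "newspaper"}          # disk screens + blower
FERROUS = {"steel can", "tin can"}                              # electromagnet
NON_FERROUS = {"aluminum can"}                                  # eddy current separator
OPTICAL = {"PET bottle": "PET", "HDPE jug": "HDPE"}             # near-infrared optical sorter

def sort_stream(items):
    bins = {"pre-sort removal": [], "fibre": [], "ferrous": [], "non-ferrous": [],
            "PET": [], "HDPE": [], "glass": [], "residue": []}
    for item in items:
        if item in HAZARDS:
            bins["pre-sort removal"].append(item)   # manual pre-sort of hazards and tanglers
        elif item in FIBRE:
            bins["fibre"].append(item)
        elif item in FERROUS:
            bins["ferrous"].append(item)
        elif item in NON_FERROUS:
            bins["non-ferrous"].append(item)
        elif item in OPTICAL:
            bins[OPTICAL[item]].append(item)
        elif item == "glass jar":
            bins["glass"].append(item)              # crushed into cullet
        else:
            bins["residue"].append(item)            # landfill or incinerator
    return bins

print(sort_stream(["cardboard box", "aluminum can", "PET bottle", "plastic bag", "glass jar"]))
```

In a real facility each stage is a physical separator acting on a moving stream rather than an item-by-item classifier, but the sequence of decisions mirrors the one described above.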
Materials recovery facility
[ "Chemistry", "Engineering" ]
1,501
[ "Water treatment", "Chemical engineering", "Civil engineering", "Environmental engineering", "Waste treatment technology" ]
1,532,080
https://en.wikipedia.org/wiki/Recycling%20symbol
The universal recycling symbol (♲ or ♻ in Unicode) is a symbol consisting of three chasing arrows folded in a Möbius strip. It is an internationally recognized symbol for recycling. The symbol originated on the first Earth Day in 1970, created by Gary Anderson, then a 23-year-old student, for the Container Corporation of America. The symbol is not trademarked and is in the public domain. Many variations on the logo have since been created. The Unicode U+2672 glyph is: ♲ History Worldwide attention to environmental issues led to the first Earth Day in 1970. Container Corporation of America, a large producer of recycled paperboard, sponsored a contest for art and design students at high schools and colleges across the country to raise awareness of environmental issues. The contest, which drew more than 500 submissions, was won by Gary Anderson, whose entry was the image now known as the universal recycling symbol. Anderson, then a 23-year-old college student at the University of Southern California, was awarded a $2,500 scholarship. The public-domain status of the symbol has been challenged, but this challenge was unsuccessful owing to the wide use of the symbol. However, the universal recycling symbol may have been inspired by similar existing symbols at the time, such as one featuring two arrows chasing each other in a circle that Volkswagen stamped in the early 1960s into some automobile parts it remanufactured. Variants The recycling symbol is in the public domain and is not a trademark. The Container Corporation of America originally applied for a trademark on the design, but the application was challenged, and the corporation decided to abandon the claim. As such, anyone may use or modify the recycling symbol, royalty-free. Though use of the symbol is regulated by law in some countries, countless variants of it exist worldwide. Anderson's original proposal had the arrows form a triangle standing on its tip—upside down compared with the versions most commonly seen today—but the CCA, in adopting Anderson's design, rotated it 60° to stand on its base instead. Both Anderson's proposal and CCA's designs form a Möbius strip with one half-twist by having two of the arrows fold over each other and one fold under, thereby canceling out one of the other folds. However, most variants of the symbol used today have all the arrows folding over themselves, producing a Möbius strip with three half-twists. Existing single half-twist variants of the logo do not generally agree on which of the arrows is the one to fold underneath. The logo is usually displayed with the arrows circulating clockwise, but the underlying Möbius strip exists in two topologically distinct mirror-image forms of opposite handedness. The American Paper Institute originally promoted four different variants of the recycling symbol for different purposes. The plain Möbius loop, either white with an outline or solid black, was to be used to indicate that a product was recyclable. The other two variants had the Möbius loop inside a circle—either white on black or black on white—and were meant for products made of recycled materials, with the white-on-black version to be used for 100% recycled fiber, and the black-on-white version for products containing both recycled and unrecycled fiber. For example, a paper envelope might have both the first and last of these four symbols to indicate that it was recyclable and made from both recycled and unrecycled fibers. 
In addition to the resin identification codes 1–7 in the triangular recycling symbol, Unicode lists the following recycling symbols: ♼ (U+267C, indicates product contains recycled paper), ♽ (U+267D, indicates product contains partially recycled paper) and ♾ (U+267E, e.g. for acid-free paper). An ISO/IEC working group has researched and documented some of the variations of the recycling logo in use during 2001 and has made recommendations for adding some more of them to the Unicode standard. With the rapid expansion of materials converted to printer filament for 3D printing using recyclebot technology, a large expansion of resin identification codes has been proposed. Resin identification code In 1988, the American Society of the Plastics Industry (SPI) developed the resin identification code that is used to indicate the predominant plastic material used in the manufacture of the product or packaging. The purpose of the codes is to assist recyclers with sorting the collected materials, but they do not necessarily mean that the product/packaging can be recycled either through domestic curbside collection or industrial collections. The SPI symbols are loosely based on the Möbius loop symbol, but feature simpler bent (rather than folded over) arrows that can be embossed on plastic surfaces without loss of detail. The arrows are formed into a flat, two-dimensional triangle rather than the pseudo-three-dimensional triangle used in the original recycling logo. The resin identification codes can be represented by the Unicode icons U+2673 through U+2679 (♳–♹). Recycling codes extend these numbers above 7 to include various non-plastic materials, including metals, glass, paper and cardboard, and batteries of various types. Other variants ♾, an infinity sign (∞) inside a circle, represents the permanent paper symbol, used in packaging and publishing to signify the use of durable acid-free paper. In some ways, this logo expresses the opposite intention from the recycle logo, in that the acid-free paper is intended to last indefinitely, rather than being recycled. Nevertheless, acid-free paper does not usually contain toxic materials (although certain inks do), so it is easily recycled or composted. A satirical version of the classic recycling logo also exists, in which the three arrows are twisted from a circular pattern to pointing radially outward, thus symbolizing wasteful one-time usage rather than environmentally friendly recycling. This message is reinforced by the circular inscription, "THIS PROJECT WAS ENVIRONMENTALLY UNFRIENDLY", surrounding the modified logo. The satirical logo appears in the 1998 catalog of an installation art work in Bayonne, New Jersey, in which the artist Steven Pippin modified a row of glass-doored washing machines in a laundromat to operate as giant cameras. The cameras were used to take sequential photographs in the manner of pioneering stop motion photographer Eadweard Muybridge. The front-loading washing machines were then used to develop and process the 24 inch (61 cm) diameter circular film negatives. See also Green Dot symbol List of international common standards Japanese recycling symbols References Further reading Jones, Penny; Powell, Jerry. "Gary Anderson has been found!". Resource Recycling: North America's Recycling and Composting Journal, May 1999. Everson, Michael; Freytag, Asmus (2001-04-02). "Background information on Recycling Symbols" (PDF), ISO/IEC Working Group Document N2342 44 Recycle Logos and Symbols External links Certification marks Consumer symbols Recycling Symbols introduced in 1970
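The code point assignments mentioned above can be listed directly from the Unicode character database. The short sketch below simply prints the characters from U+2672 to U+267E together with their official names; the character names come from the Unicode standard, and the chosen range is an assumption covering the symbols discussed in this article:

```python
import unicodedata

# U+2672 is the universal recycling symbol, U+2673..U+2679 are the resin
# identification codes 1-7, and U+267A..U+267E are further recycling and
# paper symbols mentioned above.
for codepoint in range(0x2672, 0x267F):
    char = chr(codepoint)
    print(f"U+{codepoint:04X}  {char}  {unicodedata.name(char)}")
```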
Recycling symbol
[ "Mathematics" ]
1,380
[ "Symbols", "Certification marks" ]
1,532,134
https://en.wikipedia.org/wiki/Aperture%20masking%20interferometry
Aperture masking interferometry (or Sparse aperture masking) is a form of speckle interferometry that allows diffraction-limited imaging from ground-based telescopes (like the Keck Telescope and the Very Large Telescope), and is a high contrast imaging mode on the James Webb Space Telescope. The technique allows ground-based telescopes to reach their maximum possible resolution, so that large-diameter ground-based telescopes can produce far greater resolution than the Hubble Space Telescope. A mask that only allows light through a small number of holes is placed over the telescope. This array of holes acts as a miniature astronomical interferometer. The principal limitation of the technique is that it is applicable only to relatively bright astronomical objects, since the mask discards most of the light received from the astronomical source. The method was developed by John E. Baldwin and collaborators in the Cavendish Astrophysics Group at the University of Cambridge in the late 1980s. Description In the aperture masking technique, the bispectral analysis (speckle masking) method is typically applied to image data taken through masked apertures, where most of the aperture is blocked off and light can only pass through a series of small holes (subapertures). The aperture mask removes atmospheric noise from these measurements through the use of closure quantities, allowing the bispectrum to be measured more quickly than for an unmasked aperture. For simplicity the aperture masks are usually either placed in front of the secondary mirror (e.g. Tuthill et al. 2000) or placed in a re-imaged aperture plane (e.g. Haniff et al. 1987; Young et al. 2000; Baldwin et al. 1986), as shown in Figure 1.a). The masks are usually categorised either as non-redundant or partially redundant. Non-redundant masks consist of arrays of small holes where no two pairs of holes have the same separation vector (the same baseline – see aperture synthesis). Each pair of holes provides a set of fringes at a unique spatial frequency in the image plane. Partially redundant masks are usually designed to provide a compromise between minimizing the redundancy of spacings and maximizing both the throughput and the range of spatial frequencies investigated (Haniff & Buscher 1992; Haniff et al. 1989). Figures 1.b) and 1.c) show examples of aperture masks used in front of the secondary at the Keck telescope by Peter Tuthill and collaborators; Figure 1.b) is a non-redundant mask while Figure 1.c) is partially redundant. Although the signal-to-noise of speckle masking observations at high light levels can be improved with aperture masks, the faintest limiting magnitude cannot be significantly improved for photon-noise limited detectors (see Buscher & Haniff 1993). Interferometry with the James Webb Space Telescope Aperture Masking Interferometry is available on the James Webb Space Telescope, which is the first execution of this technique (or any form of interferometry) in space. This is enabled by a non-redundant mask with seven holes (sub-apertures), which is embedded as a mode of the NIRISS instrument. See also List of astronomical interferometers at visible and infrared wavelengths References Peter Tuthill's PhD thesis on aperture masking (PostScript) (PDF) Baldwin et al. (1986) Buscher & Haniff (1993) Haniff et al. (1987) Haniff et al., 1989 Buscher et al. 1990 Haniff & Buscher, 1992 Tuthill et al. (2000) Young et al. 
(2000) Further reading NIRISS Aperture Masking Interferometry External links Old method brings life to new stars – ABC Science Online Examples of high-resolution time-lapse movies produced with aperture masking Peter Tuthill awarded Eureka award for aperture masking work Astronomical interferometers Astronomical imaging Speckle imaging
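The defining property of a non-redundant mask described above, namely that no two pairs of holes share the same separation vector, is straightforward to check numerically. In the sketch below the hole coordinates are made-up values for illustration and do not correspond to any actual Keck or JWST/NIRISS mask geometry:

```python
from itertools import combinations

def redundant_baselines(holes, tol=1e-6):
    """Count pairs of hole-pairs whose separation vectors (baselines) coincide.

    A mask is non-redundant when this returns 0. Coordinates are in arbitrary
    units (e.g. metres projected onto the primary mirror).
    """
    baselines = []
    for (x1, y1), (x2, y2) in combinations(holes, 2):
        baselines.append((x2 - x1, y2 - y1))
    repeats = 0
    for (u1, v1), (u2, v2) in combinations(baselines, 2):
        # Baselines (u, v) and (-u, -v) sample the same spatial frequency.
        same = abs(u1 - u2) < tol and abs(v1 - v2) < tol
        mirrored = abs(u1 + u2) < tol and abs(v1 + v2) < tol
        repeats += same or mirrored
    return repeats

# Hypothetical 4-hole geometries: the first is non-redundant, the second is not.
print(redundant_baselines([(0.0, 0.0), (1.0, 0.0), (3.0, 0.5), (0.5, 2.0)]))  # 0
print(redundant_baselines([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]))  # > 0
```

Each distinct baseline contributes fringes at its own spatial frequency, so a count of zero repeated baselines means every fringe pattern in the image plane can be attributed to a unique hole pair.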
Aperture masking interferometry
[ "Astronomy" ]
816
[ "Astronomical interferometers", "Astronomical instruments" ]
1,532,268
https://en.wikipedia.org/wiki/Epsilon%20Bo%C3%B6tis
Epsilon Boötis (ε Boötis, abbreviated Epsilon Boo, ε Boo), officially named Izar, is a binary star in the northern constellation of Boötes. The star system can be viewed with the unaided eye at night, but resolving the pair with a small telescope is challenging; a sufficiently large aperture is required. Nomenclature ε Boötis (Latinised to Epsilon Boötis) is the star's Bayer designation. It bore the traditional names Izar, Mirak and Mizar, as well as Pulcherrima, a name given by Friedrich Georg Wilhelm von Struve. Izar and Mizar are from the Arabic مئزر Mi'zar ('kilt-like undergarment'), Mirak from an Arabic term for 'the loins'; Pulcherrima is Latin for 'loveliest'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Izar for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names. In the catalogue of stars in the Calendarium of Al Achsasi Al Mouakket, this star was given a designation that was translated into Latin with the meaning 'belt of the barker'. In Chinese astronomy, the asterism known as the 'Celestial Lance' consists of Epsilon Boötis, Sigma Boötis and Rho Boötis. Consequently, the Chinese name for Epsilon Boötis itself is 'the First Star of Celestial Lance'. Properties Epsilon Boötis consists of a pair of stars with an angular separation of only a few arcseconds. The brighter component (A) has an apparent visual magnitude of 2.37, making it readily visible to the naked eye at night. The fainter component (B) is at magnitude 5.12, which by itself would also be visible to the naked eye. Parallax measurements from the Hipparcos astrometry satellite have been used to determine the system's distance from the Earth. At that distance the pair has a projected separation of 185 Astronomical Units, and they orbit each other with a period of at least 1,000 years. The brighter member has a stellar classification of K0 II-III, which means it is a fairly late-stage star well into its stellar evolution, having already exhausted its supply of hydrogen fuel at the core. With more than four times the mass of the Sun, it has expanded to about 38 times the Sun's radius and is emitting 650 times the luminosity of the Sun. This energy is being radiated from its outer envelope at an effective temperature of 4,755 K, giving it the orange hue of a K-type star. The companion star has a classification of A2 V, so it is a main sequence star that is generating energy through the thermonuclear fusion of hydrogen at its core. This star is rotating rapidly, with a high projected rotational velocity. It has a hotter surface than the primary and a radius nearly three times that of the Sun, leading to a bolometric luminosity 45 times that of the Sun. By the time the smaller main sequence star reaches the current point of the primary in its evolution, the larger star will have lost much of its mass in a planetary nebula and will have evolved into a white dwarf. The pair will have essentially changed roles: the brighter star becoming the dim dwarf, while the lesser companion will shine as a giant star. In culture In 1973, the Scottish astronomer and science fiction writer Duncan Lunan claimed to have managed to interpret a message caught in the 1920s by two Norwegian physicists that, according to his theory, came from a 13,000-year-old satellite in a polar orbit around the Earth, known as the Black Knight and sent there by the inhabitants of a planet orbiting Epsilon Boötis. The story was even reported in Time magazine. 
Lunan later withdrew his Epsilon Boötis theory, presenting evidence against it and explaining why he had been led to formulate it in the first place, but subsequently revoked his withdrawal. References External links Information page for HR 5506 (Izar) on VizieR Information page for HR 5505 (ε Boötes B) on VizieR Information page for CCDM J14449+2704 (all component stars) on VizieR Image of Epsilon Boötis List of constellations and named stars Izar star chart with viewing information on in-the-sky.org ε Boötes B star chart with viewing information on in-the-sky.org Binary stars Bootis, 36 129988 9 072105 Bootis, Epsilon Boötes K-type bright giants K-type giants A-type main-sequence stars Izar 5505 6 BD+27 2417
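The quoted lower bound of about 1,000 years for the orbital period follows from Kepler's third law. Taking the 185 AU projected separation as a lower limit on the semi-major axis and assuming, purely for illustration, a combined system mass of roughly seven solar masses (the text gives only the primary's mass), the period in years satisfies

$$P \;\gtrsim\; \sqrt{\frac{a^{3}}{M_{\mathrm{tot}}}} \;=\; \sqrt{\frac{185^{3}}{7}}\ \text{yr} \;\approx\; 950\ \text{yr},$$

with the semi-major axis a in astronomical units and the total mass in solar masses; since the true separation can only exceed the projected one, a period of at least a millennium is the natural estimate.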
Epsilon Boötis
[ "Astronomy" ]
965
[ "Boötes", "Constellations" ]
1,532,288
https://en.wikipedia.org/wiki/Parvise
A parvis or parvise is the open space in front of and around a cathedral or church, especially when surrounded by either colonnades or porticoes, as at St. Peter's Basilica in Rome. It is thus a church-specific type of forecourt, front yard or apron. Etymology The term derives via Old French from the Latin paradisus meaning "paradise". This in turn came via Ancient Greek from the Indo-Iranian (Aryan) languages of ancient Iran, where it meant a walled enclosure or garden precinct with heavenly flowers planted by clerics. Parvis of St Paul's Cathedral In London in the Middle Ages the Serjeants-at-law practised at the parvis of St Paul's Cathedral, where clients could seek their counsel. In the 14th century Geoffrey Chaucer referred to "A sergeant of the laws ware and wise/ That often hadde yben at the paruis...". Later, ecclesiastical courts developed at Doctors' Commons on the same site. Late English use In England the term was much later used to mean a room over the porch of a church. The architectural historians John Fleming, Hugh Honour and Nikolaus Pevsner, and the theologians Frank Cross and Elizabeth Livingstone all say this usage is wrong. The Oxford English Dictionary records this use as being "historical", and current in the middle of the 19th century. It may stem from an earlier misuse in F. Blomefield's book Norfolk, published in 1744. Examples of English parvises See also Church of the Holy Sepulchre References Sources Further reading Architectural elements
Parvise
[ "Technology", "Engineering" ]
333
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
1,532,526
https://en.wikipedia.org/wiki/Mark%20Fletcher%20%28businessman%29
Mark Fletcher is an American entrepreneur. He was the founder and CEO of the news aggregator website Bloglines, and the Vice President of Ask.com until June 2006. Ask Jeeves acquired Bloglines on 8 February 2005. On September 23, 2014, Fletcher launched Groups.io in beta. In February 2005, Fletcher won one of the annual Rave Awards, presented by Wired magazine, for the success of Bloglines. Fellow nominees in the Tech Innovator category were Internet entrepreneur Jimmy Wales, Adam Curry, Scott Maccabe, and Zhang Zuoyi. Previously, Fletcher started the free mailing list service ONElist. ONElist merged with eGroups, which was later acquired by Yahoo! in June 2000. Yahoo! Groups closed down on December 15, 2020. Many groups migrated to Groups.io. Fletcher was also a software engineer at internet appliance maker Diba, Inc., now owned by Sun Microsystems, and at Pixel, Inc. Fletcher has invested in One True Media, Plaxo, Techdirt and Wesabe. Fletcher obtained a Bachelor of Science degree in Computer Science from the University of California, San Diego. Footnotes External links wingedpig.com (Fletcher's weblog) American computer businesspeople American technology chief executives American technology company founders Angel investors American venture capitalists IAC Inc. people Living people University of California, San Diego alumni Yahoo! people Year of birth missing (living people)
Mark Fletcher (businessman)
[ "Technology" ]
292
[ "Computing stubs", "Computer specialist stubs" ]
1,532,579
https://en.wikipedia.org/wiki/Equidistant
A point is said to be equidistant from a set of objects if the distances between that point and each object in the set are equal. In two-dimensional Euclidean geometry, the locus of points equidistant from two given (different) points is their perpendicular bisector. In three dimensions, the locus of points equidistant from two given points is a plane, and generalising further, in n-dimensional space the locus of points equidistant from two points in n-space is an (n−1)-space. For a triangle the circumcentre is a point equidistant from each of the three vertices. Every non-degenerate triangle has such a point. This result can be generalised to cyclic polygons: the circumcentre is equidistant from each of the vertices. Likewise, the incentre of a triangle or any other tangential polygon is equidistant from the points of tangency of the polygon's sides with the circle. Every point on a perpendicular bisector of the side of a triangle or other polygon is equidistant from the two vertices at the ends of that side. Every point on the bisector of an angle of any polygon is equidistant from the two sides that emanate from that angle. The center of a rectangle is equidistant from all four vertices, and it is equidistant from two opposite sides and also equidistant from the other two opposite sides. A point on the axis of symmetry of a kite is equidistant between two sides. The center of a circle is equidistant from every point on the circle. Likewise the center of a sphere is equidistant from every point on the sphere. A parabola is the set of points in a plane equidistant from a fixed point (the focus) and a fixed line (the directrix), where distance from the directrix is measured along a line perpendicular to the directrix. In shape analysis, the topological skeleton or medial axis of a shape is a thin version of that shape that is equidistant from its boundaries. In Euclidean geometry, parallel lines (lines that never intersect) are equidistant in the sense that the distance of any point on one line from the nearest point on the other line is the same for all points. In hyperbolic geometry the set of points that are equidistant from and on one side of a given line form a hypercycle (which is a curve not a line). See also Equidistant set References Elementary geometry
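As a small numerical illustration of the circumcentre property stated above, the sketch below finds the point equidistant from three given vertices by intersecting two perpendicular bisectors; the triangle coordinates are arbitrary example values:

```python
import numpy as np

def circumcentre(a, b, c):
    """Return the point equidistant from vertices a, b, c of a non-degenerate triangle.

    The centre lies on the perpendicular bisector of each side, so it satisfies
    |x - a|^2 = |x - b|^2 and |x - a|^2 = |x - c|^2, which reduce to two
    linear equations in the coordinates of x.
    """
    a, b, c = map(np.asarray, (a, b, c))
    A = 2 * np.array([b - a, c - a], dtype=float)
    rhs = np.array([b @ b - a @ a, c @ c - a @ a], dtype=float)
    return np.linalg.solve(A, rhs)

centre = circumcentre((0, 0), (4, 0), (1, 3))
print(centre, [np.hypot(*(centre - np.array(p))) for p in [(0, 0), (4, 0), (1, 3)]])
```

For the example triangle the routine returns the point (2, 1), whose distance to each of the three vertices is the same (the square root of 5), as the definition requires.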
Equidistant
[ "Mathematics" ]
567
[ "Elementary mathematics", "Elementary geometry" ]
1,532,591
https://en.wikipedia.org/wiki/Zud
A zud, dzud (), dzhut, zhut, djut, or jut (, , ) is a periodic disaster in steppe, semi-desert and desert regions in Mongolia and Central Asia (including Kazakhstan, Uzbekistan, Turkmenistan, Tajikistan, and Kyrgyzstan) in which large numbers of livestock die, primarily due to starvation, being unable to graze due to particular severe climatic conditions. Various kinds of zud are recognized, depending on the particular type of climatic conditions. In winter it may be caused by an impenetrable ice crust, and in summer it may happen due to drought. The literal translation of the Kazakh word 'жұт' is "devourer". One-third of Mongolia's population depends entirely on pastoral farming for its livelihood, which contributes to 80% of its agricultural output and 11% of the country's GDP. Harsh zuds can cause economic crises and food security issues in the country. Description In Mongolia, the following types of zud are recognized: tsagaan (white) zud results from high snowfall that prevents livestock from reaching the grass. It is a frequent and serious disaster that has caused a great number of deaths. khar (black) zud results from a lack of snowfall in grazing areas, leading to both livestock and humans lacking water. This type of zud does not occur every year, nor does it affect large areas. It mostly happens in the Gobi Desert region. tumur (iron) zud results from a short wintertime warming, followed by a return to sub-freezing temperatures. The snow melts and then freezes again, producing an impenetrable ice-cover that prevents livestock from grazing. huiten (cold) zud occurs when the temperature drops to very low levels for several days. The cold temperature and the strong winds prevent livestock from grazing; the animals have to use most of their energy to keep warm. havsarsan (combined) zud is a combination of at least two of the above types of zud. tuuvaryin zud is when any of the above are geographically widespread, and may include complications such as overgrazing. In Kazakhstan there is a proverb that "Djut has seven relatives" (жұт жеты агайынды). When interpreted, seven severe natural conditions are mentioned (not always the same), e.g., summer drought, grass drying out, early winter, deep snow, winter rains, ice crust, blizzard. Man-made factors Human factors worsen the situation caused by the harsh winters. Under the communist regime, the state regulated the size of the herds to prevent overgrazing. The 1990s saw a deregulation of Mongolia's economy and a simultaneous growth in worldwide demand for cashmere wool, which is made from goat hair. As a result, the number of goats in Mongolia has increased significantly. Unlike sheep, goats tend to damage the grass by nibbling at its roots; their sharp hooves also damage the upper layer of the pasture, which is subsequently swept away by the wind. This leads to desertification. Additionally, climate change has resulted in snowier winters and stronger droughts, both of which contribute to harsher and more frequent zuds. Mitigation Some traditional methods to protect the livestock from such inclement weather conditions include drying and storing cut grass during the summer months, and collecting sheep and goat dung to build dried flammable blocks called kizyak in Central Asia and аргал (аргал түлш) in Mongolia. Dried grass can be fed to animals to prevent death from starvation when zud occurs. 
The kizyak, or blocks of sheep and goat dung, are stacked to make a wall that protects the animals from wind chill and keeps them warm enough to withstand the harsh conditions. These blocks can also be burnt as fuel during the winter. These methods are still practiced today in the westernmost parts of Mongolia, and areas formerly part of the Zuun Gar nation. Because of the semi-permanent structure of the winter shelter for their livestock and the cold, most, if not all, nomads engage in transhumance (seasonal migration). Their winter locations are typically in a valley protected from the wind by mountains on most sides, while in the summer they move to more open space. Extent and history It is not uncommon for zuds to kill over one million head of livestock in a given winter. The 1944 record of almost seven million head of livestock lost was surpassed in the 21st century. The arctic oscillation in both 1944–45 and in 2010 was pushed much deeper into Central Asia, bringing prolonged extreme cold weather. In 1999–2000, 2000–2001, and 2001–2002, Mongolia was hit by three zuds in a row, in which a combined number of 11 million animals were lost. During the winter of 2009–2010, 80% of the country's territory was covered with a snow blanket of 200–600 mm (7–24 inches). In the Uvs aimag, extreme cold (night temperature of −48 °C / −54 °F) remained for almost 50 days. 9,000 families lost their entire herds while a further 33,000 suffered 50% loss. The Ministry of Food, Agriculture and Light Industry reported 2,127,393 head of livestock were lost as of 9 February 2010 (188,270 horse, cattle, camel and 1,939,123 goat and sheep). The agriculture ministry predicted that livestock losses might reach four million before the end of winter; however, by May 2010, the United Nations reported that eight million, or about 17% of the country's entire livestock, had died. In the winter of 2015–2016, extreme temperatures were again recorded and the previous summer's drought led to insufficient hay fodder reserves for many herders, which caused another ongoing loss of livestock. The zud of winter 2023–2024 was particularly severe, with 2 million animals dead by late February, which had increased to 5 million by late March, and to a total of 7.1 million animals by early June, representing over 10% of the country's livestock population. Social consequences Some herders who lose all of their animals to zud have to seek a new life in the cities. Mongolia's capital, Ulaanbaatar, is surrounded by clusters of wooden houses without roads, water or sewage systems. Lacking in education and skills to survive in an urban environment, many displaced herders cannot find work and become extremely poor, may become addicted to alcohol, and may commit crime. Others risk their lives in dangerous illegal mining jobs. Notes References Zud Natural Disaster, Prevention and Recovery by Tsakhiagiin Elbegdorj, President of Mongolia Environment of Mongolia Weather hazards Environment of Kazakhstan Environment of Kyrgyzstan Natural disasters in Asia
Zud
[ "Physics" ]
1,409
[ "Weather", "Physical phenomena", "Weather hazards" ]
1,532,606
https://en.wikipedia.org/wiki/Grothendieck%E2%80%93Riemann%E2%80%93Roch%20theorem
In mathematics, specifically in algebraic geometry, the Grothendieck–Riemann–Roch theorem is a far-reaching result on coherent cohomology. It is a generalisation of the Hirzebruch–Riemann–Roch theorem, about complex manifolds, which is itself a generalisation of the classical Riemann–Roch theorem for line bundles on compact Riemann surfaces. Riemann–Roch type theorems relate Euler characteristics of the cohomology of a vector bundle with their topological degrees, or more generally their characteristic classes in (co)homology or algebraic analogues thereof. The classical Riemann–Roch theorem does this for curves and line bundles, whereas the Hirzebruch–Riemann–Roch theorem generalises this to vector bundles over manifolds. The Grothendieck–Riemann–Roch theorem sets both theorems in a relative situation of a morphism between two manifolds (or more general schemes) and changes the theorem from a statement about a single bundle to one applying to chain complexes of sheaves. The theorem has been very influential, not least for the development of the Atiyah–Singer index theorem. Conversely, complex analytic analogues of the Grothendieck–Riemann–Roch theorem can be proved using the index theorem for families. Alexander Grothendieck gave a first proof in a 1957 manuscript, later published. Armand Borel and Jean-Pierre Serre wrote up and published Grothendieck's proof in 1958. Later, Grothendieck and his collaborators simplified and generalized the proof. Formulation Let X be a smooth quasi-projective scheme over a field. Under these assumptions, the Grothendieck group of bounded complexes of coherent sheaves is canonically isomorphic to the Grothendieck group of bounded complexes of finite-rank vector bundles. Using this isomorphism, consider the Chern character (a rational combination of Chern classes) as a functorial transformation $\mathrm{ch}\colon K_0(X)\to A(X,\mathbb{Q})$, where $A_d(X,\mathbb{Q})$ is the Chow group of cycles on X of dimension d modulo rational equivalence, tensored with the rational numbers, and $A(X,\mathbb{Q})$ is the direct sum of these groups. In case X is defined over the complex numbers, the latter group maps to the topological cohomology group $H^{2\dim X-2d}(X,\mathbb{Q})$. Now consider a proper morphism $f\colon X\to Y$ between smooth quasi-projective schemes and a bounded complex of sheaves $\mathcal{F}^{\bullet}$ on $X$. The Grothendieck–Riemann–Roch theorem relates the pushforward map $f_{!}=\sum(-1)^{i}R^{i}f_{*}\colon K_0(X)\to K_0(Y)$ (alternating sum of higher direct images) and the pushforward $f_{*}\colon A(X,\mathbb{Q})\to A(Y,\mathbb{Q})$ by the formula $$\mathrm{ch}(f_{!}\mathcal{F}^{\bullet})\,\mathrm{td}(Y)=f_{*}\big(\mathrm{ch}(\mathcal{F}^{\bullet})\,\mathrm{td}(X)\big).$$ Here $\mathrm{td}(X)$ is the Todd genus of (the tangent bundle of) X. Thus the theorem gives a precise measure for the lack of commutativity of taking the push forwards in the above senses and the Chern character and shows that the needed correction factors depend on X and Y only. In fact, since the Todd genus is functorial and multiplicative in exact sequences, we can rewrite the Grothendieck–Riemann–Roch formula as $$\mathrm{ch}(f_{!}\mathcal{F}^{\bullet})=f_{*}\big(\mathrm{ch}(\mathcal{F}^{\bullet})\,\mathrm{td}(T_{f})\big),$$ where $T_{f}$ is the relative tangent sheaf of f, defined as the element $[T_X]-[f^{*}T_Y]$ in $K_0(X)$. For example, when f is a smooth morphism, $T_{f}$ is simply a vector bundle, known as the tangent bundle along the fibers of f. Using A1-homotopy theory, the Grothendieck–Riemann–Roch theorem has been extended to the situation where f is a proper map between two smooth schemes. Generalising and specialising Generalisations of the theorem can be made to the non-smooth case by considering an appropriate generalisation of the combination $\mathrm{ch}(\mathcal{F}^{\bullet})\,\mathrm{td}(X)$ and to the non-proper case by considering cohomology with compact support. The arithmetic Riemann–Roch theorem extends the Grothendieck–Riemann–Roch theorem to arithmetic schemes. 
The Hirzebruch–Riemann–Roch theorem is (essentially) the special case where Y is a point and the field is the field of complex numbers. A version of Riemann–Roch theorem for oriented cohomology theories was proven by Ivan Panin and Alexander Smirnov. It is concerned with multiplicative operations between algebraic oriented cohomology theories (such as algebraic cobordism). The Grothendieck-Riemann-Roch is a particular case of this result, and the Chern character comes up naturally in this setting. Examples Vector bundles on a curve A vector bundle of rank and degree (defined as the degree of its determinant; or equivalently the degree of its first Chern class) on a smooth projective curve over a field has a formula similar to Riemann–Roch for line bundles. If we take and a point, then the Grothendieck–Riemann–Roch formula can be read as hence, This formula also holds for coherent sheaves of rank and degree . Smooth proper maps One of the advantages of the Grothendieck–Riemann–Roch formula is it can be interpreted as a relative version of the Hirzebruch–Riemann–Roch formula. For example, a smooth morphism has fibers which are all equi-dimensional (and isomorphic as topological spaces when base changing to ). This fact is useful in moduli-theory when considering a moduli space parameterizing smooth proper spaces. For example, David Mumford used this formula to deduce relationships of the Chow ring on the moduli space of algebraic curves. Moduli of curves For the moduli stack of genus curves (and no marked points) there is a universal curve where is the moduli stack of curves of genus and one marked point. Then, he defines the tautological classes where and is the relative dualizing sheaf. Note the fiber of over a point this is the dualizing sheaf . He was able to find relations between the and describing the in terms of a sum of (corollary 6.2) on the chow ring of the smooth locus using Grothendieck–Riemann–Roch. Because is a smooth Deligne–Mumford stack, he considered a covering by a scheme which presents for some finite group . He uses Grothendieck–Riemann–Roch on to get Because this gives the formula The computation of can then be reduced even further. In even dimensions , Also, on dimension 1, where is a class on the boundary. In the case and on the smooth locus there are the relations which can be deduced by analyzing the Chern character of . Closed embedding Closed embeddings have a description using the Grothendieck–Riemann–Roch formula as well, showing another non-trivial case where the formula holds. For a smooth variety of dimension and a subvariety of codimension , there is the formula Using the short exact sequence , there is the formula for the ideal sheaf since . Applications Quasi-projectivity of moduli spaces Grothendieck–Riemann–Roch can be used in proving that a coarse moduli space , such as the moduli space of pointed algebraic curves , admits an embedding into a projective space, hence is a quasi-projective variety. This can be accomplished by looking at canonically associated sheaves on and studying the degree of associated line bundles. For instance, has the family of curves with sections corresponding to the marked points. Since each fiber has the canonical bundle , there are the associated line bundles and It turns out that is an ample line bundlepg 209, hence the coarse moduli space is quasi-projective. History Alexander Grothendieck's version of the Riemann–Roch theorem was originally conveyed in a letter to Jean-Pierre Serre around 1956–1957. 
It was made public at the initial Bonn Arbeitstagung, in 1957. Serre and Armand Borel subsequently organized a seminar at Princeton University to understand it. The final published paper was in effect the Borel–Serre exposition. The significance of Grothendieck's approach rests on several points. First, Grothendieck changed the statement itself: the theorem was, at the time, understood to be a theorem about a variety, whereas Grothendieck saw it as a theorem about a morphism between varieties. By finding the right generalization, the proof became simpler while the conclusion became more general. In short, Grothendieck applied a strong categorical approach to a hard piece of analysis. Moreover, Grothendieck introduced K-groups, as discussed above, which paved the way for algebraic K-theory. See also Kawasaki's Riemann–Roch formula Notes References External links The Grothendieck-Riemann-Roch Theorem The thread "Applications of Grothendieck-Riemann-Roch?" on MathOverflow. The thread "how does one understand GRR? (Grothendieck Riemann Roch)" on MathOverflow. The thread "Chern class of ideal sheaf" on Stack Exchange. Topological methods of algebraic geometry Theorems in algebraic geometry Bernhard Riemann
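To see how the relative statement recovers the classical theorem, one can specialise the formula as in the curve example above; the following is a standard textbook computation, sketched here for illustration. Let Y be a point, X a smooth projective curve of genus g, and L a line bundle of degree d on X, so that $\mathrm{ch}(L)=1+c_1(L)$ and $\mathrm{td}(X)=1-\tfrac{K}{2}$ with K the canonical class. Pushing forward to the point amounts to taking the degree of the dimension-zero component, giving

$$\chi(X,L) \;=\; \deg\Big[\big(1+c_1(L)\big)\big(1-\tfrac{K}{2}\big)\Big]_{0} \;=\; d-\tfrac{1}{2}\deg K \;=\; d+1-g,$$

since $\deg K = 2g-2$; replacing L by a vector bundle of rank r and degree d gives $\chi(X,F)=d+r(1-g)$, in agreement with the formula quoted in the Examples section.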
Grothendieck–Riemann–Roch theorem
[ "Mathematics" ]
1,859
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
1,532,648
https://en.wikipedia.org/wiki/Supplee%27s%20paradox
In relativistic physics, Supplee's paradox (also called the submarine paradox) is a physical paradox that arises when considering the buoyant force exerted on a relativistic bullet (or on a submarine) immersed in a fluid subject to an ambient gravitational field. If a bullet has neutral buoyancy when it is at rest in a perfect fluid and then it is launched with a relativistic speed, observers at rest within the fluid would conclude that the bullet should sink, since its density will increase due to the length contraction effect. On the other hand, in the bullet's proper frame it is the moving fluid that becomes denser and hence the bullet would float. But the bullet cannot sink in one frame and float in another, so there is an apparent paradox. The paradox was first formulated by James M. Supplee (1989), who presented a non-rigorous explanation. George Matsas has analysed this paradox in the scope of general relativity and also pointed out that these relativistic buoyancy effects could be important in some questions regarding the thermodynamics of black holes. A comprehensive explanation of Supplee's paradox through both the special and the general theory of relativity was presented by Ricardo Soares Vieira. Hrvoje Nikolic noticed that rigidity of the submarine is not essential and presented a general relativistic analysis revealing that the paradox is resolved by the fact that the relevant velocity of the submarine is relative to Earth (which is the source of the gravitational field), not relative to the observer. Buoyancy To simplify the analysis, it is customary to neglect drag and viscosity, and even to assume that the fluid has constant density. A small object immersed in a container of fluid subjected to a uniform gravitational field will be subject to a net downward gravitational force, which can be compared with the net downward gravitational force on an equal volume of the fluid. If the object is less dense than the fluid, the difference between these two vectors is an upward pointing vector, the buoyant force, and the object will rise. If things are the other way around, it will sink. If the object and the fluid have equal density, the object is said to have neutral buoyancy and it will neither rise nor sink. Resolution The resolution comes down to observing that the usual Archimedes principle cannot be applied in the relativistic case. If the theory of relativity is correctly employed to analyse the forces involved, there will be no true paradox. Supplee himself concluded that the paradox can be resolved with a more careful analysis of the gravitational buoyancy forces acting on the bullet. Considering the reasonable (but not justified) assumption that the gravitational force depends on the kinetic energy content of the bodies, Supplee showed that the bullet sinks in the frame at rest with respect to the fluid, with an acceleration that depends on the gravitational acceleration g and the Lorentz factor γ. In the proper reference frame of the bullet, the same result is obtained by noting that this frame is not inertial, which implies that the sea floor is no longer flat but curves upwards, so that the bullet moves farther from the sea surface, i.e., it effectively sinks relative to the fluid. 
The unjustified assumption considered by Supplee that the gravitational force on the bullet should depend on its energy content was eliminated by George Matsas, who used the full mathematical methods of general relativity in order to explain the Supplee paradox and agreed with Supplee's results. In particular, he modelled the situation using a Rindler chart, where a submarine is accelerated from rest to a given velocity v. Matsas concluded that the paradox can be resolved by noting that in the frame of the fluid, the shape of the bullet is altered, and derived the same result that had been obtained by Supplee. Matsas has applied a similar analysis to shed light on certain questions involving the thermodynamics of black holes. Finally, Vieira has recently analysed the submarine paradox through both special and general relativity. In the first case, he showed that gravitomagnetic effects should be taken into account in order to describe the forces acting on a moving submarine underwater. When these effects are considered, a relativistic Archimedes principle can be formulated, from which he showed that the submarine must sink in both frames. Vieira also considered the case of a curved space-time in the proximity of the Earth. In this case he assumed that the space-time can be approximately regarded as consisting of a flat space but a curved time. He showed that in this case the gravitational force between the Earth at rest and a moving body increases with the speed of the body in the same way as considered by Supplee, thereby providing a justification for his assumption. When the paradox is analysed again with this speed-dependent gravitational force, it is resolved and the results agree with those obtained by Supplee and Matsas. See also Bell's spaceship paradox Ehrenfest paradox Ladder paradox Twin paradox References External links Light Speed Submarine - article about the paradox in Physical Review Focus Physical paradoxes Theory of relativity Relativistic paradoxes
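The naive density bookkeeping that sets up the paradox can be written out explicitly; this only restates the apparent contradiction described in the lead, not its resolution. In the fluid's rest frame the bullet's volume is Lorentz-contracted while its rest mass is unchanged, so its rest-mass density rises; symmetrically, in the bullet's frame it is the fluid that appears denser by the same factor:

$$\rho' \;=\; \frac{m}{V/\gamma} \;=\; \gamma\rho, \qquad \gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}},$$

so each frame's naive application of the Archimedes principle predicts the opposite behaviour, which is precisely why a careful relativistic treatment of gravity and buoyancy, as in the analyses above, is needed.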
Supplee's paradox
[ "Physics" ]
1,078
[ "Theory of relativity" ]
1,532,710
https://en.wikipedia.org/wiki/Hans%20Snook
Hans Roger Snook (born 26 May 1948) is a British businessman, best known for his time as the founder (with Graham Howe) and chief executive of British mobile phone company Orange. Early life He was born to a German mother and a British father, and grew up in Vancouver, British Columbia, Canada, where he went to the University of British Columbia. Career He then began a career in hotel management, which led him to Calgary for six years. In 1983, he set off on a round-the-world trip, which was cut short when he arrived in Hong Kong and became chief executive of a wireless paging business (which subsequently became part of the Hutchison Whampoa Group). In 1992 Snook was despatched to the UK where he closed Hutchison's Rabbit CT2 phone network and directed efforts to developing the UK's fourth mobile phone network. On 28 April 1994, Orange was launched. Within five years the company had developed an enviable reputation as well as a growing international presence. In October 1999, Mannesmann of Germany purchased Orange plc, in a failed attempt to challenge Vodafone as the world's leading mobile phone company. This set off a chain of events which resulted in France Télécom taking ownership of Orange, and in 2001 Snook stepped down as a special advisor to Orange. His public involvement since then in the UK telecoms industry was as chairman of Carphone Warehouse between 2002 and 2005. On stepping down from this post he was appointed non-executive chairman of Monstermob Group plc, the ringtone company. From 2002, Snook was a director of The Diagnostic Clinic Ltd, providing health screening linked to alternative medicine, and its parent company The Integrated Health Consultancy Limited. Both companies entered liquidation in 2012 with aggregate debts of £8.6 million, of which £7.2m was owed to Snook. Personal life Hans is divorced from his first wife Etta Lai Yee Lau. He is now married to Helen Seward. They live in Marbella. See also Peter Erskine (businessman), founder of O2 Chris Gent, founder of Vodafone References External links Guardian interview, Terry Macalister, 23 August 2003 1948 births Living people British technology chief executives British technology company founders British telecommunications industry businesspeople History of mobile telecommunications in the United Kingdom Orange S.A.
Hans Snook
[ "Technology" ]
487
[ "Mobile telecommunications", "History of mobile telecommunications in the United Kingdom" ]