id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
1,252,991 | https://en.wikipedia.org/wiki/Orbital%20hybridisation | In chemistry, orbital hybridisation (or hybridization) is the concept of mixing atomic orbitals to form new hybrid orbitals (with different energies, shapes, etc., than the component atomic orbitals) suitable for the pairing of electrons to form chemical bonds in valence bond theory. For example, in a carbon atom which forms four single bonds, the valence-shell s orbital combines with three valence-shell p orbitals to form four equivalent sp3 mixtures in a tetrahedral arrangement around the carbon to bond to four different atoms. Hybrid orbitals are useful in the explanation of molecular geometry and atomic bonding properties and are symmetrically disposed in space. Usually hybrid orbitals are formed by mixing atomic orbitals of comparable energies.
History and uses
Chemist Linus Pauling first developed the hybridisation theory in 1931 to explain the structure of simple molecules such as methane (CH4) using atomic orbitals. Pauling pointed out that a carbon atom forms four bonds by using one s and three p orbitals, so that "it might be inferred" that a carbon atom would form three bonds at right angles (using p orbitals) and a fourth weaker bond using the s orbital in some arbitrary direction. In reality, methane has four C–H bonds of equivalent strength. The angle between any two bonds is the tetrahedral bond angle of 109°28' (around 109.5°). Pauling supposed that in the presence of four hydrogen atoms, the s and p orbitals form four equivalent combinations which he called hybrid orbitals. Each hybrid is denoted sp3 to indicate its composition, and is directed along one of the four C–H bonds. This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalizing the structures of organic compounds. It gives a simple orbital picture equivalent to Lewis structures.
Hybridisation theory is an integral part of organic chemistry, one of the most compelling examples being Baldwin's rules. For drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons. Hybridisation theory explains bonding in alkenes and methane. The amount of p character or s character, which is decided mainly by orbital hybridisation, can be used to reliably predict molecular properties such as acidity or basicity.
Overview
Orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridization, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen.
Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each carbon–hydrogen bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form N(s + √3 pσ), where N is a normalisation constant (here 1/2) and pσ is a p orbital directed along the C–H axis to form a sigma bond. The ratio of coefficients (denoted λ in general) is √3 in this example. Since the electron density associated with an orbital is proportional to the square of the wavefunction, the ratio of p-character to s-character is λ2 = 3. The p character or the weight of the p component is N2λ2 = 3/4.
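The weights quoted here follow directly from the coefficients. The short Python sketch below (Python is an arbitrary choice, not part of the article) simply reproduces that arithmetic for the sp3 case, with λ = √3 taken from the wavefunction above.

```python
import math

# Hybrid orbital written as psi = N * (s + lam * p_sigma); lam = sqrt(3) for an ideal sp3 hybrid.
lam = math.sqrt(3)

N = 1 / math.sqrt(1 + lam**2)   # normalisation constant; evaluates to 1/2 here
s_character = N**2              # weight of the s component: 1/4
p_character = N**2 * lam**2     # weight of the p component: 3/4

print(N, s_character, p_character)   # 0.5 0.25 0.75
print(p_character / s_character)     # lam**2 = 3, the p-to-s ratio
```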
Types of hybridisation
sp3
Hybridisation describes the bonding of atoms from an atom's point of view. For a tetrahedrally coordinated carbon (e.g., methane CH4), the carbon should have 4 orbitals directed towards the 4 hydrogen atoms.
Carbon's ground state configuration is 1s2 2s2 2p2, with the two 2p electrons occupying separate orbitals.
This configuration suggests that the carbon atom could use its two singly occupied p-type orbitals to form two covalent bonds with two hydrogen atoms in a methylene (CH2) molecule, with a hypothetical bond angle of 90° corresponding to the angle between two p orbitals on the same atom. However, the true H–C–H angle in singlet methylene is about 102°, which implies the presence of some orbital hybridisation.
The carbon atom can also bond to four hydrogen atoms in methane by an excitation (or promotion) of an electron from the doubly occupied 2s orbital to the empty 2p orbital, producing four singly occupied orbitals.
The energy released by the formation of two additional bonds more than compensates for the excitation energy required, energetically favoring the formation of four C-H bonds.
According to quantum mechanics the lowest energy is obtained if the four bonds are equivalent, which requires that they are formed from equivalent orbitals on the carbon. A set of four equivalent orbitals can be obtained that are linear combinations of the valence-shell (core orbitals are almost never involved in bonding) s and p wave functions, which are the four sp3 hybrids.
In CH4, four sp3 hybrid orbitals are overlapped by hydrogen 1s orbitals, yielding four σ (sigma) bonds (that is, four single covalent bonds) of equal length and strength.
In orbital diagram terms, the singly occupied 2s and 2p orbitals of the excited configuration combine into four equivalent, singly occupied sp3 hybrid orbitals.
sp2
Other carbon compounds and other molecules may be explained in a similar way. For example, ethene (C2H4) has a double bond between the carbons.
For this molecule, carbon sp2 hybridises, because one π (pi) bond is required for the double bond between the carbons and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals, usually denoted 2px and 2py. The third 2p orbital (2pz) remains unhybridised.
This forms a total of three sp2 orbitals with one remaining p orbital. In ethene, the two carbon atoms form a σ bond by overlapping one sp2 orbital from each carbon atom. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. Each carbon atom forms covalent C–H bonds with two hydrogens by s–sp2 overlap, all with 120° bond angles. The hydrogen–carbon bonds are all of equal strength and length, in agreement with experimental data.
sp
The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridisation. In this model, the 2s orbital is mixed with only one of the three p orbitals, resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a σ s–sp overlap at 180° angles.
Hybridisation and molecule shape
Hybridisation helps to explain molecule shape, since the angles between bonds are approximately equal to the angles between hybrid orbitals. This is in contrast to valence shell electron-pair repulsion (VSEPR) theory, which can be used to predict molecular geometry based on empirical rules rather than on valence-bond or orbital theories.
spx hybridisation
As the valence orbitals of main group elements are the one s and three p orbitals with the corresponding octet rule, spx hybridization is used to model the shape of these molecules.
spxdy hybridisation
As the valence orbitals of transition metals are the five d, one s and three p orbitals with the corresponding 18-electron rule, spxdy hybridisation is used to model the shape of these molecules. These molecules tend to have multiple shapes corresponding to the same hybridization due to the different d-orbitals involved. A square planar complex has one unoccupied p-orbital and hence has 16 valence electrons.
sdx hybridisation
In certain transition metal complexes with a low d electron count, the p-orbitals are unoccupied and sdx hybridisation is used to model the shape of these molecules.
Hybridisation of hypervalent molecules
Octet expansion
In some general chemistry textbooks, hybridization is presented for main group coordination number 5 and above using an "expanded octet" scheme with d-orbitals first proposed by Pauling. However, such a scheme is now considered to be incorrect in light of computational chemistry calculations.
In 1990, Eric Alfred Magnusson of the University of New South Wales published a paper definitively excluding the role of d-orbital hybridisation in bonding in hypervalent compounds of second-row (period 3) elements, ending a point of contention and confusion. Part of the confusion originates from the fact that d-functions are essential in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result). Also, the contribution of the d-function to the molecular wavefunction is large. These facts were incorrectly interpreted to mean that d-orbitals must be involved in bonding.
Resonance
In light of computational chemistry, a better treatment would be to invoke sigma bond resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. All resonance structures must obey the octet rule.
Hybridisation in computational VB theory
While the simple model of orbital hybridisation is commonly used to explain molecular shape, hybridisation is used differently when computed in modern valence bond programs. Specifically, hybridisation is not determined a priori but is instead variationally optimized to find the lowest energy solution and then reported. This means that two artificial constraints on orbital hybridisation are lifted:
that hybridisation is restricted to integer values (isovalent hybridisation)
that hybrid orbitals are orthogonal to one another (hybridisation defects)
This means that in practice, hybrid orbitals do not conform to the simple ideas commonly taught and thus in scientific computational papers are simply referred to as spx, spxdy or sdx hybrids to express their nature instead of more specific integer values.
Isovalent hybridisation
Although ideal hybrid orbitals can be useful, in reality, most bonds require orbitals of intermediate character. This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of the bond formation when the molecular geometry deviates from ideal bond angles. The amount of p-character is not restricted to integer values; i.e., hybridizations like sp2.5 are also readily described.
The hybridization of bond orbitals is determined by Bent's rule: "Atomic s character concentrates in orbitals directed towards electropositive substituents".
For molecules with lone pairs, the bonding orbitals are isovalent spx hybrids. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4.0 to give the interorbital angle of 104.5°. This means that they have 20% s character and 80% p character and does not imply that a hybrid orbital is formed from one s and four p orbitals on oxygen since the 2p subshell of oxygen only contains three p orbitals.
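For two equivalent, mutually orthogonal hybrids the interorbital angle θ and the hybridisation index λ2 are linked by the orthogonality condition 1 + λ2 cos θ = 0, which is consistent with the sp4.0 figure quoted above. The Python sketch below only illustrates that relation; it assumes equivalent, orthogonal hybrids and is not a general treatment.

```python
import math

def hybridisation_index(angle_deg):
    """lambda^2 for two equivalent, orthogonal sp^(lambda^2) hybrids separated by angle_deg,
    using the orthogonality condition 1 + lambda^2 * cos(theta) = 0."""
    return -1.0 / math.cos(math.radians(angle_deg))

print(round(hybridisation_index(109.47), 2))  # ~3.0 -> sp3 (methane)
print(round(hybridisation_index(120.0), 2))   # 2.0  -> sp2
print(round(hybridisation_index(104.5), 2))   # ~4.0 -> sp4, the water bonding hybrids quoted above
```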
Hybridisation defects
Hybridisation of s and p orbitals to form effective spx hybrids requires that they have comparable radial extent. While 2p orbitals are on average less than 10% larger than 2s, in part attributable to the lack of a radial node in 2p orbitals, 3p orbitals, which have one radial node, exceed the 3s orbitals by 20–33%. The difference in extent of s and p orbitals increases further down a group. The hybridisation of atoms in chemical bonds can be analysed by considering localised molecular orbitals, for example using natural localised molecular orbitals in a natural bond orbital (NBO) scheme. In methane, CH4, the calculated p/s ratio is approximately 3, consistent with "ideal" sp3 hybridisation, whereas for silane, SiH4, the p/s ratio is closer to 2. A similar trend is seen for the other 3p elements. Substitution of fluorine for hydrogen further decreases the p/s ratio. The 2p elements exhibit near ideal hybridisation with orthogonal hybrid orbitals. For heavier p block elements this assumption of orthogonality cannot be justified. These deviations from the ideal hybridisation were termed hybridisation defects by Kutzelnigg.
However, computational VB groups such as Gerratt, Cooper and Raimondi (SCVB) as well as Shaik and Hiberty (VBSCF) go a step further to argue that even for model molecules such as methane, ethylene and acetylene, the hybrid orbitals are already defective and nonorthogonal, with hybridisations such as sp1.76 instead of sp3 for methane.
Photoelectron spectra
One misconception concerning orbital hybridization is that it incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule which implies resonance in valence bond theory. For example, in methane, the ionised states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and an A1 state. The difference in energy between each ionized state and the ground state would be ionization energy, which yields two values in agreement with experimental results.
Localized vs canonical molecular orbitals
Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is therefore equivalent to the delocalized orbital description for ground state total energy and electron density, as well as the molecular geometry that corresponds to the minimum total energy value.
Two localized representations
Molecules with multiple bonds or multiple lone pairs can have orbitals represented in terms of sigma and pi symmetry or equivalent orbitals. Different valence bond methods use either of the two representations, which have mathematically equivalent total many-electron wave functions and are related by a unitary transformation of the set of occupied molecular orbitals.
For multiple bonds, the sigma-pi representation is the predominant one compared to the equivalent orbital (bent bond) representation. In contrast, for multiple lone pairs, most textbooks use the equivalent orbital representation. However, the sigma-pi representation is also used, such as by Weinhold and Landis within the context of natural bond orbitals, a localized orbital theory containing modernized analogs of classical (valence bond/Lewis structure) bonding pairs and lone pairs. For the hydrogen fluoride molecule, for example, two F lone pairs are essentially unhybridized p orbitals, while the other is an spx hybrid orbital. An analogous consideration applies to water (one O lone pair is in a pure p orbital, another is in an spx hybrid orbital).
See also
Crystal field theory
Isovalent hybridisation
Ligand field theory
Linear combination of atomic orbitals
MO diagrams
VALBOND
References
External links
Covalent Bonds and Molecular Structure
Hybridisation flash movie
Hybrid orbital 3D preview program in OpenGL
Understanding Concepts: Molecular Orbitals
General Chemistry tutorial on orbital hybridization
Chemical bonding
Molecular geometry
Stereochemistry
Quantum chemistry | Orbital hybridisation | Physics,Chemistry,Materials_science | 3,303 |
19,700,214 | https://en.wikipedia.org/wiki/Wine%20lactone | Wine lactone is a pleasant smelling compound found naturally in apples, orange juice, grapefruit juice, orange essential oil, clementine peel oil and various grape wines. It was first discovered as an essential oil metabolite in koala urine by Southwell in 1975. It was discovered several years later by Guth in white wines and was named "wine lactone". This monoterpene imparts "coconut, woody and sweet" odors to a wine. There are 8 possible isomers of wine lactone, with the (3S,3aS,7aR) isomer being the only one that has been found in wine. This isomer is also the most potent of all eight, with an odor detection threshold of 10 ng/L in model wine.
The odor threshold of the (3S,3aS,7aR)-wine lactone stereoisomer is 0.00001-0.00004 ng/L in air.
References
External links
Leffingwell website lists a number of wine lactone variants, notably the 0.00001-0.00004 ng/L version with animated structures
Food additives
Gamma-lactones
Wine chemistry
Monoterpenes
Sweet-smelling chemicals | Wine lactone | Chemistry | 250 |
1,349,294 | https://en.wikipedia.org/wiki/Bell%20series | In mathematics, the Bell series is a formal power series used to study properties of arithmetical functions. Bell series were introduced and developed by Eric Temple Bell.
Given an arithmetic function f and a prime p, define the formal power series f_p(x), called the Bell series of f modulo p, as: f_p(x) = Σ_{n≥0} f(p^n) x^n.
Two multiplicative functions can be shown to be identical if all of their Bell series are equal; this is sometimes called the uniqueness theorem: given multiplicative functions f and g, one has f = g if and only if:
f_p(x) = g_p(x) for all primes p.
Two series may be multiplied (sometimes called the multiplication theorem): for any two arithmetic functions f and g, let h = f * g be their Dirichlet convolution. Then for every prime p, one has: h_p(x) = f_p(x) g_p(x).
In particular, this makes it trivial to find the Bell series of a Dirichlet inverse.
If f is completely multiplicative, then formally: f_p(x) = 1/(1 − f(p)x).
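These statements are easy to check numerically. The Python sketch below (sympy's divisors helper is used; the functions and the prime p = 3 are arbitrary illustrative choices) compares the Bell series coefficients of the divisor-sum function σ = 1 * Id with the product of the series of its factors.

```python
from sympy import divisors

def bell_coefficients(f, p, terms=6):
    """First coefficients of the Bell series of f modulo p: f(1), f(p), f(p^2), ..."""
    return [f(p**n) for n in range(terms)]

def dirichlet_convolution(f, g):
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

one = lambda n: 1                             # constant function 1
identity = lambda n: n                        # Id(n) = n
sigma = dirichlet_convolution(one, identity)  # divisor-sum function, sigma = 1 * Id

p = 3
a = bell_coefficients(one, p)
b = bell_coefficients(identity, p)
# Multiplication theorem: the Bell series of a Dirichlet convolution is the product of the
# Bell series, so its coefficients are the Cauchy product of the factors' coefficients.
cauchy_product = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(len(a))]

print(bell_coefficients(sigma, p))  # [1, 4, 13, 40, 121, 364]
print(cauchy_product)               # the same list
```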
Examples
The following is a table of the Bell series of well-known arithmetic functions.
The Möbius function μ has μ_p(x) = 1 − x.
The Möbius function squared μ^2 has (μ^2)_p(x) = 1 + x.
Euler's totient φ has φ_p(x) = (1 − x)/(1 − px).
The multiplicative identity of the Dirichlet convolution δ has δ_p(x) = 1.
The Liouville function λ has λ_p(x) = 1/(1 + x).
The power function Id_k has (Id_k)_p(x) = 1/(1 − p^k x). Here, Id_k is the completely multiplicative function Id_k(n) = n^k.
The divisor function σ_k has (σ_k)_p(x) = 1/((1 − p^k x)(1 − x)).
The constant function, with value 1, satisfies 1_p(x) = 1/(1 − x), i.e., its Bell series is the geometric series.
If f(n) = 2^ω(n) is the power of the prime omega function, then f_p(x) = (1 + x)/(1 − x).
Suppose that f is multiplicative and g is any arithmetic function satisfying f(p^(n+1)) = f(p)f(p^n) − g(p)f(p^(n−1)) for all primes p and n ≥ 1. Then f_p(x) = 1/(1 − f(p)x + g(p)x^2).
If μ_k denotes the Möbius function of order k, then (μ_k)_p(x) = (1 − 2x^k + x^(k+1))/(1 − x).
See also
Bell numbers
References
Arithmetic functions
Mathematical series | Bell series | Mathematics | 328 |
3,404,894 | https://en.wikipedia.org/wiki/Difference%20in%20differences | Difference in differences (DID or DD) is a statistical technique used in econometrics and quantitative research in the social sciences that attempts to mimic an experimental research design using observational study data, by studying the differential effect of a treatment on a 'treatment group' versus a 'control group' in a natural experiment. It calculates the effect of a treatment (i.e., an explanatory variable or an independent variable) on an outcome (i.e., a response variable or dependent variable) by comparing the average change over time in the outcome variable for the treatment group to the average change over time for the control group. Although it is intended to mitigate the effects of extraneous factors and selection bias, depending on how the treatment group is chosen, this method may still be subject to certain biases (e.g., mean regression, reverse causality and omitted variable bias).
In contrast to a time-series estimate of the treatment effect on subjects (which analyzes differences over time) or a cross-section estimate of the treatment effect (which measures the difference between treatment and control groups), difference in differences uses panel data to measure the differences, between the treatment and control group, of the changes in the outcome variable that occur over time.
General definition
Difference in differences requires data measured from a treatment group and a control group at two or more different time periods, specifically at least one time period before "treatment" and at least one time period after "treatment." In the example pictured, the outcome in the treatment group is represented by the line P and the outcome in the control group is represented by the line S. The outcome (dependent) variable in both groups is measured at time 1, before either group has received the treatment (i.e., the independent or explanatory variable), represented by the points P1 and S1. The treatment group then receives or experiences the treatment and both groups are again measured at time 2. Not all of the difference between the treatment and control groups at time 2 (that is, the difference between P2 and S2) can be explained as being an effect of the treatment, because the treatment group and control group did not start out at the same point at time 1. DID, therefore, calculates the "normal" difference in the outcome variable between the two groups (the difference that would still exist if neither group experienced the treatment), represented by the dotted line Q. (Notice that the slope from P1 to Q is the same as the slope from S1 to S2.) The treatment effect is the difference between the observed outcome (P2) and the "normal" outcome (the difference between P2 and Q).
Formal definition
Consider the model y_it = γ_s(i) + λ_t + δ I[i is treated at time t] + ε_it,
where y_it is the dependent variable for individual i and time t, s(i) is the group to which i belongs (i.e. the treatment or the control group), and I[...] is short-hand for the dummy variable equal to 1 when the event described in the brackets is true, and 0 otherwise. In the plot of time versus y by group, γ_s is the vertical intercept for the graph for group s, and λ_t is the time trend shared by both groups according to the parallel trend assumption (see Assumptions below). δ is the treatment effect, and ε_it is the residual term.
Consider the average of the dependent variable and dummy indicators by group and time: let ȳ_st denote the average of y_it over the individuals i of group s at time t, and let D_st = I[group s is treated at time t],
and suppose for simplicity that s ∈ {1, 2} and t ∈ {1, 2}. Note that D_st is not random; it just encodes how the groups and the periods are labeled. Then ȳ_st = γ_s + λ_t + δ D_st + ε̄_st.
The strict exogeneity assumption then implies that E[ȳ_st] = γ_s + λ_t + δ D_st.
Without loss of generality, assume that s = 2 is the treatment group and t = 2 is the after period; then D_22 = 1 and D_st = 0 otherwise, giving the DID estimator (ȳ_22 − ȳ_21) − (ȳ_12 − ȳ_11),
which can be interpreted as the treatment effect of the treatment indicated by D_st. Below it is shown how this estimator can be read as a coefficient in an ordinary least squares regression. The model described in this section is over-parametrized; to remedy that, one of the coefficients for the dummy variables can be set to 0.
Assumptions
All the assumptions of the OLS model apply equally to DID. In addition, DID requires a parallel trend assumption. The parallel trend assumption says that the time trends λ_2 − λ_1 are the same in both groups s = 1 and s = 2. Given that the formal definition above accurately represents reality, this assumption automatically holds. However, a model with group-specific time trends λ_st may well be more realistic. In order to increase the likelihood of the parallel trend assumption holding, a difference-in-differences approach is often combined with matching. This involves 'matching' known 'treatment' units with simulated counterfactual 'control' units: characteristically equivalent units which did not receive treatment. By defining the Outcome Variable as a temporal difference (change in observed outcome between pre- and post-treatment periods), and matching multiple units in a large sample on the basis of similar pre-treatment histories, the resulting ATE (i.e. the ATT: Average Treatment Effect for the Treated) provides a robust difference-in-differences estimate of treatment effects. This serves two statistical purposes: firstly, conditional on pre-treatment covariates, the parallel trends assumption is likely to hold; and secondly, this approach reduces dependence on associated ignorability assumptions necessary for valid inference.
As illustrated to the right, the treatment effect is the difference between the observed value of y and what the value of y would have been with parallel trends, had there been no treatment. The Achilles' heel of DID is when something other than the treatment changes in one group but not the other at the same time as the treatment, implying a violation of the parallel trend assumption.
To guarantee the accuracy of the DID estimate, the composition of individuals of the two groups is assumed to remain unchanged over time. When using a DID model, various issues that may compromise the results, such as autocorrelation and Ashenfelter dips, must be considered and dealt with.
Implementation
The DID method can be implemented according to the table below, where the lower right cell is the DID estimator:

| | Treatment group (s = 2) | Control group (s = 1) | Difference |
|---|---|---|---|
| After (t = 2) | ȳ_22 | ȳ_12 | ȳ_22 − ȳ_12 |
| Before (t = 1) | ȳ_21 | ȳ_11 | ȳ_21 − ȳ_11 |
| Change | ȳ_22 − ȳ_21 | ȳ_12 − ȳ_11 | (ȳ_22 − ȳ_21) − (ȳ_12 − ȳ_11) |
Running a regression analysis gives the same result. Consider the OLS model y = β_0 + β_1 T + β_2 S + β_3 (T · S) + ε,
where T is a dummy variable for the period, equal to 1 when t = 2, and S is a dummy variable for group membership, equal to 1 when s = 2. The composite variable T · S is a dummy variable indicating when S = T = 1. Although it is not shown rigorously here, this is a proper parametrization of the model in the formal definition; furthermore, it turns out that the group and period averages in that section relate to the model parameter estimates as follows: the estimate of β_0 is E(y | T = 0, S = 0); the estimate of β_1 is E(y | T = 1, S = 0) − E(y | T = 0, S = 0); the estimate of β_2 is E(y | T = 0, S = 1) − E(y | T = 0, S = 0); and the estimate of β_3 is [E(y | T = 1, S = 1) − E(y | T = 0, S = 1)] − [E(y | T = 1, S = 0) − E(y | T = 0, S = 0)],
where E(y | ·, ·) stands for conditional averages computed on the sample; for example, T = 1 is the indicator for the after period, and S = 0 is an indicator for the control group. Note that the estimate of β_1 is an estimate of the counterfactual rather than the impact of the control group. The control group is often used as a proxy for the counterfactual (see Synthetic control method for a deeper understanding of this point). Thereby, β_1 can be interpreted as the impact of both the control group and the intervention's (treatment's) counterfactual. Similarly, β_2, due to the parallel trend assumption, is also the same differential between the treatment and control group in T = 1. The above descriptions should not be construed to imply the (average) effect of only the control group, for β_1, or only the difference of the treatment and control groups in the pre-period, for β_2. As in Card and Krueger, below, a first (time) difference of the outcome variable eliminates the need for a time-trend term (i.e., β_1) to form an unbiased estimate of β_3, implying that β_1 is not actually conditional on the treatment or control group. Consistently, a difference among the treatment and control groups would eliminate the need for treatment differentials (i.e., β_2) to form an unbiased estimate of β_3. This nuance is important to understand when the user believes (weak) violations of parallel pre-trend exist or in the case of violations of the appropriate counterfactual approximation assumptions given the existence of non-common shocks or confounding events. To see the relation between this notation and the previous section, consider as above only one observation per time period for each group; then the estimate of β_0 is ȳ_11, the estimate of β_1 is ȳ_12 − ȳ_11, the estimate of β_2 is ȳ_21 − ȳ_11,
and so on for the remaining combinations, which is equivalent to the estimate of β_3 being (ȳ_22 − ȳ_21) − (ȳ_12 − ȳ_11).
But this is the expression for the treatment effect that was given in the formal definition and in the above table.
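The agreement between the "difference of the differences of means" and the interaction coefficient of the regression can be illustrated with a toy calculation; the numbers below are made up, and plain least squares stands in for any particular statistics package.

```python
import numpy as np

# Toy cell averages: columns are (treated, post, y); the y values are hypothetical.
data = np.array([
    [0, 0, 10.0], [0, 1, 12.0],   # control group: before, after
    [1, 0,  8.0], [1, 1, 13.0],   # treatment group: before, after
])
treated, post, y = data[:, 0], data[:, 1], data[:, 2]

# "Four means" DID estimator: (treated after - treated before) - (control after - control before)
did_means = (y[(treated == 1) & (post == 1)].mean() - y[(treated == 1) & (post == 0)].mean()) \
          - (y[(treated == 0) & (post == 1)].mean() - y[(treated == 0) & (post == 0)].mean())

# Equivalent OLS regression y = b0 + b1*T + b2*S + b3*(T*S) + error;
# the interaction coefficient b3 is the DID estimate.
X = np.column_stack([np.ones_like(y), post, treated, post * treated])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(did_means)  # 3.0
print(beta[3])    # 3.0, the same value
```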
Example
The Card and Krueger article on minimum wage in New Jersey, published in 1994, is considered one of the most famous DID studies; Card was later awarded the 2021 Nobel Memorial Prize in Economic Sciences in part for this and related work. Card and Krueger compared employment in the fast food sector in New Jersey and in Pennsylvania, in February 1992 and in November 1992, after New Jersey's minimum wage rose from $4.25 to $5.05 in April 1992. Observing a change in employment in New Jersey only, before and after the treatment, would fail to control for omitted variables such as weather and macroeconomic conditions of the region. By including Pennsylvania as a control in a difference-in-differences model, any bias caused by variables common to New Jersey and Pennsylvania is implicitly controlled for, even when these variables are unobserved. Assuming that New Jersey and Pennsylvania have parallel trends over time, Pennsylvania's change in employment can be interpreted as the change New Jersey would have experienced, had they not increased the minimum wage, and vice versa. The evidence suggested that the increased minimum wage did not induce a decrease in employment in New Jersey, contrary to what some economic theory would suggest. Measuring employment as FTEs (full-time equivalents), Card and Krueger estimated that the $0.80 minimum wage increase in New Jersey led to an average 2.75 FTE increase in employment per store.
A software implementation of this estimator is available in Stata's user-written command -diff-, authored by Juan Miguel Villa.
See also
Design of experiments
Average treatment effect
Synthetic control method
References
Further reading
External links
Difference in Difference Estimation, Healthcare Economist website
Econometric modeling
Regression analysis
Design of experiments
Observational study
Causal inference
Subtraction | Difference in differences | Mathematics | 2,063 |
40,716,202 | https://en.wikipedia.org/wiki/Smith%20space | In functional analysis and related areas of mathematics, a Smith space is a complete compactly generated locally convex topological vector space having a universal compact set, i.e. a compact set K which absorbs every other compact set T (i.e. T ⊆ λ·K for some λ > 0).
Smith spaces are named after Marianne Ruth Freundlich Smith, who introduced them as duals to Banach spaces in some versions of duality theory for topological vector spaces. All Smith spaces are stereotype and are in the stereotype duality relations with Banach spaces:
for any Banach space its stereotype dual space is a Smith space,
and vice versa, for any Smith space its stereotype dual space is a Banach space.
Smith spaces are special cases of Brauner spaces.
Examples
As follows from the duality theorems, for any Banach space X its stereotype dual space X⋆ is a Smith space. The polar of the unit ball in X is the universal compact set in X⋆. If X′ denotes the normed dual space of X, and X′_σ the space X′ endowed with the X-weak topology, then the topology of X⋆ lies between the topology of X′ and the topology of X′_σ, so there are natural (linear continuous) bijections X′ → X⋆ → X′_σ.
If X is infinite-dimensional, then no two of these topologies coincide. At the same time, for infinite dimensional X the space X⋆ is not barreled (and is not even a Mackey space if X is reflexive as a Banach space).
If K is a convex balanced compact set in a locally convex space Y, then its linear span possesses a unique structure of a Smith space with K as the universal compact set (and with the same topology on K).
If M is a (Hausdorff) compact topological space, and C(M) the Banach space of continuous functions on M (with the usual sup-norm), then the stereotype dual space C⋆(M) (of Radon measures on M with the topology of uniform convergence on compact sets in C(M)) is a Smith space. In the special case when M is endowed with a structure of a topological group the space C⋆(M) becomes a natural example of a stereotype group algebra.
A Banach space X is a Smith space if and only if X is finite-dimensional.
See also
Stereotype space
Brauner space
Notes
References
Functional analysis
Topological vector spaces | Smith space | Mathematics | 443 |
26,834,387 | https://en.wikipedia.org/wiki/Maternal%20to%20zygotic%20transition | Maternal to zygotic transition (MZT), also known as embryonic genome activation, is the stage in embryonic development during which development comes under the exclusive control of the zygotic genome rather than the maternal (egg) genome. The egg contains stored maternal genetic material, in the form of mRNA, which controls embryo development until the onset of MZT. After MZT the diploid embryo takes over genetic control.
This requires both zygotic genome activation (ZGA) and degradation of maternal products. This process is important because it is the first time that the new embryonic genome is utilized and the paternal and maternal genomes are used in combination (i.e., different alleles will be expressed). The zygotic genome now drives embryo development.
MZT is often thought to be synonymous with midblastula transition (MBT), but these processes are, in fact, distinct. However, the MBT roughly coincides with ZGA in many metazoans, and thus may share some common regulatory features. For example, both processes are proposed to be regulated by the nucleocytoplasmic ratio. MBT strictly refers to changes in the cell cycle and cell motility that occur just prior to gastrulation. In the early cleavage stages of embryogenesis, rapid divisions occur synchronously and there are no "gap" stages in the cell cycle. During these stages, there is also little to no transcription of mRNA from the zygotic genome, but zygotic transcription is not required for MBT to occur. Cellular functions during early cleavage are carried out primarily by maternal products – proteins and mRNAs contributed to the egg during oogenesis.
Zygotic genome activation
To begin transcription of zygotic genes, the embryo must first overcome the silencing that has been established. The cause of this silencing could be due to several factors: chromatin modifications leading to repression, lack of adequate transcription machinery, or lack of time in which significant transcription can occur due to the shortened cell cycles. Evidence for the first method was provided by Newport and Kirschner's experiments showing that nucleocytoplasmic ratio plays a role in activating zygotic transcription. They suggest that a defined amount of repressor is packaged into the egg, and that the exponential amplification of DNA at each cell cycle results in titration of the repressor at the appropriate time. Indeed, in Xenopus embryos in which excess DNA is introduced, transcription begins earlier. More recently, evidence has been shown that transcription of a subset of genes in Drosophila is delayed by one cell cycle in haploid embryos. The second mechanism of repression has also been addressed experimentally. Prioleau et al. show that by introducing TATA binding protein (TBP) into Xenopus oocytes, the block in transcription can be partially overcome. The hypothesis that shortened cell cycles can cause repression of transcription is supported by the observation that mitosis causes transcription to cease.
The generally accepted mechanism for the initiation of embryonic gene regulatory networks in mammals is that there are multiple waves of MZT. In mouse, the first of these occurs in the zygote, where expression of a few pioneering transcription factors gradually increases the expression of target genes downstream. This induction of genes then leads to a second major wave of zygotic genome activation.
Clearing of maternal transcripts
To eliminate the contribution of maternal gene products to development, maternally-supplied mRNAs must be degraded in the embryo. Studies in Drosophila have shown that sequences in the 3' UTR of maternal transcripts mediate their degradation. These sequences are recognized by regulatory proteins that cause destabilization or degradation of the transcripts. Recent studies in both zebrafish and Xenopus have found evidence of a role for microRNAs in degradation of maternal transcripts. In zebrafish, the microRNA miR-430 is expressed at the onset of zygotic transcription and targets several hundred mRNAs for deadenylation and degradation. Many of these targets are genes that are expressed maternally. Similarly, in Xenopus, the miR-430 ortholog miR-427 has been shown to target maternal mRNAs for deadenylation. Specifically, miR-427 targets include cell cycle regulators such as Cyclin A1 and Cyclin B2.
References
Developmental biology | Maternal to zygotic transition | Biology | 902 |
18,716,923 | https://en.wikipedia.org/wiki/Algebra | Algebra is the branch of mathematics that studies certain abstract systems, known as algebraic structures, and the manipulation of expressions within those systems. It is a generalization of arithmetic that introduces variables and algebraic operations other than the standard arithmetic operations, such as addition and multiplication.
Elementary algebra is the main form of algebra taught in schools. It examines mathematical statements using variables for unspecified values and seeks to determine for which values the statements are true. To do so, it uses different methods of transforming equations to isolate variables. Linear algebra is a closely related field that investigates linear equations and combinations of them called systems of linear equations. It provides methods to find the values that solve all equations in the system at the same time, and to study the set of these solutions.
Abstract algebra studies algebraic structures, which consist of a set of mathematical objects together with one or several operations defined on that set. It is a generalization of elementary and linear algebra, since it allows mathematical objects other than numbers and non-arithmetic operations. It distinguishes between different types of algebraic structures, such as groups, rings, and fields, based on the number of operations they use and the laws they follow, called axioms. Universal algebra and category theory provide general frameworks to investigate abstract patterns that characterize different classes of algebraic structures.
Algebraic methods were first studied in the ancient period to solve specific problems in fields like geometry. Subsequent mathematicians examined general techniques to solve equations independent of their specific applications. They described equations and their solutions using words and abbreviations until the 16th and 17th centuries, when a rigorous symbolic formalism was developed. In the mid-19th century, the scope of algebra broadened beyond a theory of equations to cover diverse types of algebraic operations and structures. Algebra is relevant to many branches of mathematics, such as geometry, topology, number theory, and calculus, and other fields of inquiry, like logic and the empirical sciences.
Definition and etymology
Algebra is the branch of mathematics that studies algebraic structures and the operations they use. An algebraic structure is a non-empty set of mathematical objects, such as the integers, together with algebraic operations defined on that set, like addition and multiplication. Algebra explores the laws, general characteristics, and types of algebraic structures. Within certain algebraic structures, it examines the use of variables in equations and how to manipulate these equations.
Algebra is often understood as a generalization of arithmetic. Arithmetic studies operations like addition, subtraction, multiplication, and division, in a particular domain of numbers, such as the real numbers. Elementary algebra constitutes the first level of abstraction. Like arithmetic, it restricts itself to specific types of numbers and operations. It generalizes these operations by allowing indefinite quantities in the form of variables in addition to numbers. A higher level of abstraction is found in abstract algebra, which is not limited to a particular domain and examines algebraic structures such as groups and rings. It extends beyond typical arithmetic operations by also covering other types of operations. Universal algebra is still more abstract in that it is not interested in specific algebraic structures but investigates the characteristics of algebraic structures in general.
The term "algebra" is sometimes used in a more narrow sense to refer only to elementary algebra or only to abstract algebra. When used as a countable noun, an algebra is a specific type of algebraic structure that involves a vector space equipped with a certain type of binary operation. Depending on the context, "algebra" can also refer to other algebraic structures, like a Lie algebra or an associative algebra.
The word algebra comes from the Arabic term al-jabr, which originally referred to the surgical treatment of bonesetting. In the 9th century, the term received a mathematical meaning when the Persian mathematician Muhammad ibn Musa al-Khwarizmi employed it to describe a method of solving equations and used it in the title of a treatise on algebra, al-Kitāb al-Mukhtaṣar fī Ḥisāb al-Jabr wal-Muqābala [The Compendious Book on Calculation by Completion and Balancing], which was translated into Latin as Liber algebrae et almucabala. The word entered the English language in the 16th century from Italian, Spanish, and medieval Latin. Initially, its meaning was restricted to the theory of equations, that is, to the art of manipulating polynomial equations in view of solving them. This changed in the 19th century when the scope of algebra broadened to cover the study of diverse types of algebraic operations and structures together with their underlying axioms, the laws they follow.
Major branches
Elementary algebra
Elementary algebra, also called school algebra, college algebra, and classical algebra, is the oldest and most basic form of algebra. It is a generalization of arithmetic that relies on variables and examines how mathematical statements may be transformed.
Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithm. For example, the operation of addition combines two numbers, called the addends, into a third number, called the sum, as in 2 + 3 = 5.
Elementary algebra relies on the same operations while allowing variables in addition to regular numbers. Variables are symbols for unspecified or unknown quantities. They make it possible to state relationships for which one does not know the exact values and to express general laws that are true, independent of which numbers are used. For example, the equation 3 × 4 = 4 × 3 belongs to arithmetic and expresses an equality only for these specific numbers. By replacing the numbers with variables, it is possible to express a general law that applies to any possible combination of numbers, like the commutative property of multiplication, which is expressed in the equation a × b = b × a.
Algebraic expressions are formed by using arithmetic operations to combine variables and numbers. By convention, the lowercase letters x, y, and z represent variables. In some cases, subscripts are added to distinguish variables, as in x1, x2, and x3. The lowercase letters a, b, and c are usually used for constants and coefficients. The expression 5x + 3 is an algebraic expression created by multiplying the number 5 with the variable x and adding the number 3 to the result. Other examples of algebraic expressions are x − y and ax^2 + bx + c.
Some algebraic expressions take the form of statements that relate two expressions to one another. An equation is a statement formed by comparing two expressions, saying that they are equal. This can be expressed using the equals sign (=), as in 5x + 3 = 18. Inequations involve a different type of comparison, saying that the two sides are different. This can be expressed using symbols such as the less-than sign (<), the greater-than sign (>), and the inequality sign (≠). Unlike other expressions, statements can be true or false, and their truth value usually depends on the values of the variables. For example, the statement x^2 = 4 is true if x is either 2 or −2 and false otherwise. Equations with variables can be divided into identity equations and conditional equations. Identity equations are true for all values that can be assigned to the variables, such as the equation x + x = 2x. Conditional equations are only true for some values. For example, the equation x + 4 = 9 is only true if x is 5.
The main goal of elementary algebra is to determine the values for which a statement is true. This can be achieved by transforming and manipulating statements according to certain rules. A key principle guiding this process is that whatever operation is applied to one side of an equation also needs to be done to the other side. For example, if one subtracts 5 from the left side of an equation one also needs to subtract 5 from the right side to balance both sides. The goal of these steps is usually to isolate the variable one is interested in on one side, a process known as solving the equation for that variable. For example, the equation x − 7 = 4 can be solved for x by adding 7 to both sides, which isolates x on the left side and results in the equation x = 11.
There are many other techniques used to solve equations. Simplification is employed to replace a complicated expression with an equivalent simpler one. For example, the expression 7x − 3x can be replaced with the expression 4x, since 7x − 3x = (7 − 3)x = 4x by the distributive property. For statements with several variables, substitution is a common technique to replace one variable with an equivalent expression that does not use this variable. For example, if one knows that y = 3x then one can simplify the expression 7xy to arrive at 21x^2. In a similar way, if one knows the value of one variable one may be able to use it to determine the value of other variables.
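The same manipulations can be carried out with a computer algebra system. The sketch below uses Python with the sympy library (an implementation choice, not something prescribed by the text) on the example equations and expressions above.

```python
from sympy import symbols, Eq, solve, simplify

x, y = symbols("x y")

# Solving a conditional equation for x
print(solve(Eq(x - 7, 4), x))    # [11]

# Simplification: replace an expression with an equivalent simpler one
print(simplify(7*x - 3*x))       # 4*x

# Substitution: replace a variable with an equivalent expression
print((7*x*y).subs(y, 3*x))      # 21*x**2
```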
Algebraic equations can be interpreted geometrically to describe spatial figures in the form of a graph. To do so, the different variables in the equation are understood as coordinates and the values that solve the equation are interpreted as points of a graph. For example, if x is set to zero in the equation y = x − 1, then y must be −1 for the equation to be true. This means that the (x, y)-pair (0, −1) is part of the graph of the equation. The pair (0, 7), by contrast, does not solve the equation and is therefore not part of the graph. The graph encompasses the totality of (x, y)-pairs that solve the equation.
Polynomials
A polynomial is an expression consisting of one or more terms that are added or subtracted from each other, like x^4 + 3xy − 5. Each term is either a constant, a variable, or a product of a constant and variables. Each variable can be raised to a positive-integer power. A monomial is a polynomial with one term while two- and three-term polynomials are called binomials and trinomials. The degree of a polynomial is the maximal value (among its terms) of the sum of the exponents of the variables (4 in the above example). Polynomials of degree one are called linear polynomials. Linear algebra studies systems of linear polynomials. A polynomial is said to be univariate or multivariate, depending on whether it uses one or more variables.
Factorization is a method used to simplify polynomials, making it easier to analyze them and determine the values for which they evaluate to zero. Factorization consists in rewriting a polynomial as a product of several factors. For example, the polynomial x^2 − 3x − 10 can be factorized as (x + 2)(x − 5). The polynomial as a whole is zero if and only if one of its factors is zero, i.e., if x is either −2 or 5. Before the 19th century, much of algebra was devoted to polynomial equations, that is equations obtained by equating a polynomial to zero. The first attempts for solving polynomial equations were to express the solutions in terms of nth roots. The solution of a second-degree polynomial equation of the form ax^2 + bx + c = 0 is given by the quadratic formula x = (−b ± √(b^2 − 4ac)) / (2a).
Solutions for the degrees 3 and 4 are given by the cubic and quartic formulas. There are no general solutions for higher degrees, as proven in the 19th century by the Abel–Ruffini theorem. Even when general solutions do not exist, approximate solutions can be found by numerical tools like the Newton–Raphson method.
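As a concrete illustration of such a numerical tool, the following Python sketch implements a basic Newton–Raphson iteration and applies it to the factorised polynomial from the example above; the starting points and tolerance are arbitrary choices.

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    """Approximate a root of f, starting from x0, by Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# x^2 - 3x - 10 = (x + 2)(x - 5), so the roots are -2 and 5.
f = lambda x: x**2 - 3*x - 10
df = lambda x: 2*x - 3

print(newton_raphson(f, df, x0=4.0))    # converges to 5.0
print(newton_raphson(f, df, x0=-4.0))   # converges to -2.0
```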
The fundamental theorem of algebra asserts that every univariate polynomial equation of positive degree with real or complex coefficients has at least one complex solution. Consequently, every polynomial of a positive degree can be factorized into linear polynomials. This theorem was proved at the beginning of the 19th century, but this does not close the problem since the theorem does not provide any way for computing the solutions.
Linear algebra
Linear algebra starts with the study of systems of linear equations. An equation is linear if it can be expressed in the form a1x1 + a2x2 + ... + anxn = b, where a1, ..., an and b are constants. Examples are 3x + 2y = 7 and x − 4y + z = 0. A system of linear equations is a set of linear equations for which one is interested in common solutions.
Matrices are rectangular arrays of values that have been originally introduced for having a compact and synthetic notation for systems of linear equations. For example, the system of equations 2x + y = 5 and x + 3y = 10
can be written as AX = B,
where A, X and B are the matrices A = [[2, 1], [1, 3]], X = [[x], [y]], and B = [[5], [10]], written row by row.
Under some conditions on the number of rows and columns, matrices can be added, multiplied, and sometimes inverted. All methods for solving linear systems may be expressed as matrix manipulations using these operations. For example, solving the above system consists of computing an inverted matrix A⁻¹ such that A⁻¹A = I, where I is the identity matrix. Then, multiplying on the left both members of the above matrix equation by A⁻¹, one gets the solution of the system of linear equations as X = A⁻¹B.
Methods of solving systems of linear equations range from the introductory, like substitution and elimination, to more advanced techniques using matrices, such as Cramer's rule, the Gaussian elimination, and LU decomposition. Some systems of equations are inconsistent, meaning that no solutions exist because the equations contradict each other. Consistent systems have either one unique solution or an infinite number of solutions.
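A small numerical sketch of the matrix route, using the system written out above (Python and NumPy are implementation choices, not part of the text):

```python
import numpy as np

# The system 2x + y = 5, x + 3y = 10 in matrix form A X = B.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([5.0, 10.0])

X = np.linalg.solve(A, B)              # solves A X = B directly
X_via_inverse = np.linalg.inv(A) @ B   # the "multiply by the inverse" route described above

print(X)               # [1. 3.]
print(X_via_inverse)   # [1. 3.]
```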
The study of vector spaces and linear maps form a large part of linear algebra. A vector space is an algebraic structure formed by a set with an addition that makes it an abelian group and a scalar multiplication that is compatible with addition (see vector space for details). A linear map is a function between vector spaces that is compatible with addition and scalar multiplication. In the case of finite-dimensional vector spaces, vectors and linear maps can be represented by matrices. It follows that the theories of matrices and finite-dimensional vector spaces are essentially the same. In particular, vector spaces provide a third way for expressing and manipulating systems of linear equations. From this perspective, a matrix is a representation of a linear map: if one chooses a particular basis to describe the vectors being transformed, then the entries in the matrix give the results of applying the linear map to the basis vectors.
Systems of equations can be interpreted as geometric figures. For systems with two variables, each equation represents a line in two-dimensional space. The point where the two lines intersect is the solution of the full system because this is the only point that solves both the first and the second equation. For inconsistent systems, the two lines run parallel, meaning that there is no solution since they never intersect. If two equations are not independent then they describe the same line, meaning that every solution of one equation is also a solution of the other equation. These relations make it possible to seek solutions graphically by plotting the equations and determining where they intersect. The same principles also apply to systems of equations with more variables, with the difference being that the equations do not describe lines but higher dimensional figures. For instance, equations with three variables correspond to planes in three-dimensional space, and the points where all planes intersect solve the system of equations.
Abstract algebra
Abstract algebra, also called modern algebra, is the study of algebraic structures. An algebraic structure is a framework for understanding operations on mathematical objects, like the addition of numbers. While elementary algebra and linear algebra work within the confines of particular algebraic structures, abstract algebra takes a more general approach that compares how algebraic structures differ from each other and what types of algebraic structures there are, such as groups, rings, and fields. The key difference between these types of algebraic structures lies in the number of operations they use and the laws they obey. In mathematics education, abstract algebra refers to an advanced undergraduate course that mathematics majors take after completing courses in linear algebra.
On a formal level, an algebraic structure is a set of mathematical objects, called the underlying set, together with one or several operations. Abstract algebra is primarily interested in binary operations, which take any two objects from the underlying set as inputs and map them to another object from this set as output. For example, the algebraic structure (N, +) has the natural numbers (N) as the underlying set and addition (+) as its binary operation. The underlying set can contain mathematical objects other than numbers, and the operations are not restricted to regular arithmetic operations. For instance, the underlying set of the symmetry group of a geometric object is made up of geometric transformations, such as rotations, under which the object remains unchanged. Its binary operation is function composition, which takes two transformations as input and has the transformation resulting from applying the first transformation followed by the second as its output.
Group theory
Abstract algebra classifies algebraic structures based on the laws or axioms that its operations obey and the number of operations it uses. One of the most basic types is a group, which has one operation and requires that this operation is associative and has an identity element and inverse elements. An operation is associative if the order of several applications does not matter, i.e., if (a ∘ b) ∘ c is the same as a ∘ (b ∘ c) for all elements. An operation has an identity element or a neutral element if one element e exists that does not change the value of any other element, i.e., if a ∘ e = e ∘ a = a. An operation has inverse elements if for any element a there exists a reciprocal element a⁻¹ that undoes a. If an element operates on its inverse then the result is the neutral element e, expressed formally as a ∘ a⁻¹ = a⁻¹ ∘ a = e. Every algebraic structure that fulfills these requirements is a group. For example, (Z, +) is a group formed by the set of integers together with the operation of addition. The neutral element is 0 and the inverse element of any number a is −a. The natural numbers with addition, by contrast, do not form a group since they contain only positive integers and therefore lack inverse elements.
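For a finite underlying set the group axioms can be checked exhaustively. The following Python sketch is a brute-force illustration; the choice of the integers modulo 6 is arbitrary.

```python
from itertools import product

def is_group(elements, op):
    """Brute-force check of closure, associativity, identity and inverses for a finite set."""
    elements = set(elements)
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False  # not closed
    if any(op(op(a, b), c) != op(a, op(b, c)) for a, b, c in product(elements, repeat=3)):
        return False  # not associative
    identities = [e for e in elements if all(op(e, a) == a == op(a, e) for a in elements)]
    if not identities:
        return False  # no neutral element
    e = identities[0]
    return all(any(op(a, b) == e for b in elements) for a in elements)  # every element has an inverse

Z6 = range(6)
print(is_group(Z6, lambda a, b: (a + b) % 6))  # True: integers modulo 6 under addition
print(is_group(Z6, lambda a, b: (a * b) % 6))  # False: e.g. 0 has no multiplicative inverse
```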
Group theory examines the nature of groups, with basic theorems such as the fundamental theorem of finite abelian groups and the Feit–Thompson theorem. The latter was a key early step in one of the most important mathematical achievements of the 20th century: the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
Ring theory and field theory
A ring is an algebraic structure with two operations that work similarly to addition and multiplication of numbers and are named and generally denoted similarly. A ring is a commutative group under addition: the addition of the ring is associative, commutative, and has an identity element and inverse elements. The multiplication is associative and distributive with respect to addition; that is, a(b + c) = ab + ac and (b + c)a = ba + ca. Moreover, multiplication has an identity element generally denoted as 1. Multiplication need not be commutative; if it is commutative, one has a commutative ring. The ring of integers (Z) is one of the simplest commutative rings.
A field is a commutative ring such that 1 ≠ 0 and each nonzero element has a multiplicative inverse. The ring of integers does not form a field because it lacks multiplicative inverses. For example, the multiplicative inverse of 7 is 1/7, which is not an integer. The rational numbers, the real numbers, and the complex numbers each form a field with the operations of addition and multiplication.
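Whether every nonzero element has a multiplicative inverse can likewise be checked directly in the finite rings of integers modulo n. In the Python sketch below, n = 5 and n = 6 are arbitrary examples; the first gives a field, the second does not.

```python
def nonzero_elements_invertible(n):
    """True if every nonzero residue modulo n has a multiplicative inverse modulo n."""
    return all(any((a * b) % n == 1 for b in range(1, n)) for a in range(1, n))

print(nonzero_elements_invertible(5))   # True: the integers modulo 5 form a field
print(nonzero_elements_invertible(6))   # False: 2, 3 and 4 have no inverse modulo 6
```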
Ring theory is the study of rings, exploring concepts such as subrings, quotient rings, polynomial rings, and ideals as well as theorems such as Hilbert's basis theorem. Field theory is concerned with fields, examining field extensions, algebraic closures, and finite fields. Galois theory explores the relation between field theory and group theory, relying on the fundamental theorem of Galois theory.
Theories of interrelations among structures
Besides groups, rings, and fields, there are many other algebraic structures studied by algebra. They include magmas, semigroups, monoids, abelian groups, commutative rings, modules, lattices, vector spaces, algebras over a field, and associative and non-associative algebras. They differ from each other in regard to the types of objects they describe and the requirements that their operations fulfill. Many are related to each other in that a basic structure can be turned into a more advanced structure by adding additional requirements. For example, a magma becomes a semigroup if its operation is associative.
Homomorphisms are tools to examine structural features by comparing two algebraic structures. A homomorphism is a function from the underlying set of one algebraic structure to the underlying set of another algebraic structure that preserves certain structural characteristics. If the two algebraic structures use binary operations ∘ and ⋆ and have the form ⟨A, ∘⟩ and ⟨B, ⋆⟩, then the function h: A → B is a homomorphism if it fulfills the following requirement: h(x ∘ y) = h(x) ⋆ h(y). The existence of a homomorphism reveals that the operation ⋆ in the second algebraic structure plays the same role as the operation ∘ does in the first algebraic structure. Isomorphisms are a special type of homomorphism that indicates a high degree of similarity between two algebraic structures. An isomorphism is a bijective homomorphism, meaning that it establishes a one-to-one relationship between the elements of the two algebraic structures. This implies that every element of the first algebraic structure is mapped to one unique element in the second structure without any unmapped elements in the second structure.
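As a concrete illustration of the requirement h(x ∘ y) = h(x) ⋆ h(y), the short sketch below (in Python, with the particular map chosen only for the example) checks that reduction modulo 2 is a homomorphism from the integers with addition to the two-element structure with addition modulo 2, but not an isomorphism, since it is not one-to-one.

    # A minimal sketch of a homomorphism between two algebraic structures.
    # The map h(n) = n mod 2 and the two structures are illustrative choices.
    h = lambda n: n % 2                 # candidate homomorphism
    star = lambda a, b: (a + b) % 2     # operation of the second structure

    sample = range(-10, 11)
    # h(x + y) equals h(x) * h(y) for every pair in the sample.
    assert all(h(x + y) == star(h(x), h(y)) for x in sample for y in sample)

    # h is not injective (h(0) == h(2)), so it is a homomorphism
    # but not an isomorphism.
    assert h(0) == h(2)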
Another tool of comparison is the relation between an algebraic structure and its subalgebra. The algebraic structure and its subalgebra use the same operations, which follow the same axioms. The only difference is that the underlying set of the subalgebra is a subset of the underlying set of the algebraic structure. All operations in the subalgebra are required to be closed in its underlying set, meaning that they only produce elements that belong to this set. For example, the set of even integers together with addition is a subalgebra of the full set of integers together with addition. This is the case because the sum of two even numbers is again an even number. But the set of odd integers together with addition is not a subalgebra because it is not closed: adding two odd numbers produces an even number, which is not part of the chosen subset.
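The closure test described above can also be stated programmatically; the following sketch (an illustrative Python check, not taken from the text) confirms that the even integers are closed under addition while the odd integers are not.

    # A minimal sketch of the closure requirement for a subalgebra.
    evens = range(-10, 11, 2)
    odds = range(-9, 10, 2)

    # Every sum of two even numbers is even: the subset is closed.
    assert all((a + b) % 2 == 0 for a in evens for b in evens)

    # Some (in fact every) sum of two odd numbers is even,
    # so the odd integers are not closed under addition.
    assert any((a + b) % 2 == 0 for a in odds for b in odds)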
Universal algebra is the study of algebraic structures in general. As part of its general perspective, it is not concerned with the specific elements that make up the underlying sets and considers operations with more than two inputs, such as ternary operations. It provides a framework for investigating what structural features different algebraic structures have in common. One of those structural features concerns the identities that are true in different algebraic structures. In this context, an identity is a universal equation or an equation that is true for all elements of the underlying set. For example, commutativity is a universal equation that states that a ∘ b is identical to b ∘ a for all elements. A variety is a class of all algebraic structures that satisfy certain identities. For example, if two algebraic structures satisfy commutativity then they are both part of the corresponding variety.
Category theory examines how mathematical objects are related to each other using the concept of categories. A category is a collection of objects together with a collection of morphisms or "arrows" between those objects. These two collections must satisfy certain conditions. For example, morphisms can be joined, or composed: if there exists a morphism from object a to object b, and another morphism from object b to object c, then there must also exist one from object a to object c. Composition of morphisms is required to be associative, and there must be an "identity morphism" for every object. Categories are widely used in contemporary mathematics since they provide a unifying framework to describe and analyze many fundamental mathematical concepts. For example, sets can be described with the category of sets, and any group can be regarded as the morphisms of a category with just one object.
History
The origin of algebra lies in attempts to solve mathematical problems involving arithmetic calculations and unknown quantities. These developments happened in the ancient period in Babylonia, Egypt, Greece, China, and India. One of the earliest documents on algebraic problems is the Rhind Mathematical Papyrus from ancient Egypt, which was written around 1650 BCE. It discusses solutions to linear equations, as expressed in problems like "A quantity; its fourth is added to it. It becomes fifteen. What is the quantity?" Babylonian clay tablets from around the same time explain methods to solve linear and quadratic polynomial equations, such as the method of completing the square.
Many of these insights found their way to the ancient Greeks. Starting in the 6th century BCE, their main interest was geometry rather than algebra, but they employed algebraic methods to solve geometric problems. For example, they studied geometric figures while taking their lengths and areas as unknown quantities to be determined, as exemplified in Pythagoras' formulation of the difference of two squares method and later in Euclid's Elements. In the 3rd century CE, Diophantus provided a detailed treatment of how to solve algebraic equations in a series of books called Arithmetica. He was the first to experiment with symbolic notation to express polynomials. Diophantus's work influenced Arab development of algebra with many of his methods reflected in the concepts and techniques used in medieval Arabic algebra. In ancient China, The Nine Chapters on the Mathematical Art, a book composed over the period spanning from the 10th century BCE to the 2nd century CE, explored various techniques for solving algebraic equations, including the use of matrix-like constructs.
There is no unanimity as to whether these early developments are part of algebra or only precursors. They offered solutions to algebraic problems but did not conceive them in an abstract and general manner, focusing instead on specific cases and applications. This changed with the Persian mathematician al-Khwarizmi, who published his The Compendious Book on Calculation by Completion and Balancing in 825 CE. It presents the first detailed treatment of general methods that can be used to manipulate linear and quadratic equations by "reducing" and "balancing" both sides. Other influential contributions to algebra came from the Arab mathematician Thābit ibn Qurra also in the 9th century and the Persian mathematician Omar Khayyam in the 11th and 12th centuries.
In India, Brahmagupta investigated how to solve quadratic equations and systems of equations with several variables in the 7th century CE. Among his innovations were the use of zero and negative numbers in algebraic equations. The Indian mathematicians Mahāvīra in the 9th century and Bhāskara II in the 12th century further refined Brahmagupta's methods and concepts. In 1247, the Chinese mathematician Qin Jiushao wrote the Mathematical Treatise in Nine Sections, which includes an algorithm for the numerical evaluation of polynomials, including polynomials of higher degrees.
The Italian mathematician Fibonacci brought al-Khwarizmi's ideas and techniques to Europe in books including his Liber Abaci. In 1545, the Italian polymath Gerolamo Cardano published his book Ars Magna, which covered many topics in algebra, discussed imaginary numbers, and was the first to present general methods for solving cubic and quartic equations. In the 16th and 17th centuries, the French mathematicians François Viète and René Descartes introduced letters and symbols to denote variables and operations, making it possible to express equations in an abstract and concise manner. Their predecessors had relied on verbal descriptions of problems and solutions. Some historians see this development as a key turning point in the history of algebra and consider what came before it as the prehistory of algebra because it lacked the abstract nature based on symbolic manipulation.
In the 17th and 18th centuries, many attempts were made to find general solutions to polynomials of degree five and higher. All of them failed. At the end of the 18th century, the German mathematician Carl Friedrich Gauss proved the fundamental theorem of algebra, which describes the existence of zeros of polynomials of any degree without providing a general solution. At the beginning of the 19th century, the Italian mathematician Paolo Ruffini and the Norwegian mathematician Niels Henrik Abel were able to show that no general solution exists for polynomials of degree five and higher. In response to and shortly after their findings, the French mathematician Évariste Galois developed what came later to be known as Galois theory, which offered a more in-depth analysis of the solutions of polynomials while also laying the foundation of group theory. Mathematicians soon realized the relevance of group theory to other fields and applied it to disciplines like geometry and number theory.
Starting in the mid-19th century, interest in algebra shifted from the study of polynomials associated with elementary algebra towards a more general inquiry into algebraic structures, marking the emergence of abstract algebra. This approach explored the axiomatic basis of arbitrary algebraic operations. The invention of new algebraic systems based on different operations and elements accompanied this development, such as Boolean algebra, vector algebra, and matrix algebra. Influential early developments in abstract algebra were made by the German mathematicians David Hilbert, Ernst Steinitz, and Emmy Noether as well as the Austrian mathematician Emil Artin. They researched different forms of algebraic structures and categorized them based on their underlying axioms into types, like groups, rings, and fields.
The idea of the even more general approach associated with universal algebra was conceived by the English mathematician Alfred North Whitehead in his 1898 book A Treatise on Universal Algebra. Starting in the 1930s, the American mathematician Garrett Birkhoff expanded these ideas and developed many of the foundational concepts of this field. The invention of universal algebra led to the emergence of various new areas focused on the algebraization of mathematics, that is, the application of algebraic methods to other branches of mathematics. Topological algebra arose in the early 20th century, studying algebraic structures such as topological groups and Lie groups. In the 1940s and 50s, homological algebra emerged, employing algebraic techniques to study homology. Around the same time, category theory was developed and has since played a key role in the foundations of mathematics. Other developments were the formulation of model theory and the study of free algebras.
Applications
The influence of algebra is wide-reaching, both within mathematics and in its applications to other fields. The algebraization of mathematics is the process of applying algebraic methods and principles to other branches of mathematics, such as geometry, topology, number theory, and calculus. It happens by employing symbols in the form of variables to express mathematical insights on a more general level, allowing mathematicians to develop formal models describing how objects interact and relate to each other.
One application, found in geometry, is the use of algebraic statements to describe geometric figures. For example, a linear equation such as y = 3x + 2 describes a line in two-dimensional space, while an equation such as x² + y² + z² = 1 corresponds to a sphere in three-dimensional space. Of special interest to algebraic geometry are algebraic varieties, which are solutions to systems of polynomial equations that can be used to describe more complex geometric figures. Algebraic reasoning can also solve geometric problems. For example, one can determine whether and where a given line intersects with a given circle by solving the system made up of the two equations that describe them. Topology studies the properties of geometric figures or topological spaces that are preserved under operations of continuous deformation. Algebraic topology relies on algebraic theories such as group theory to classify topological spaces. For example, homotopy groups classify topological spaces based on the existence of loops or holes in them.
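As a worked example of this kind of algebraic reasoning, the sketch below uses the SymPy library to intersect a line with a circle; the particular equations y = x + 1 and x² + y² = 25 are assumptions chosen only for the illustration.

    # A minimal sketch: solving a geometric problem by solving a system
    # of equations. The line and the circle are illustrative choices.
    from sympy import symbols, Eq, solve

    x, y = symbols("x y")
    line = Eq(y, x + 1)
    circle = Eq(x**2 + y**2, 25)

    # The intersection points are the common solutions of both equations.
    print(solve([line, circle], [x, y]))
    # the two intersection points: (-4, -3) and (3, 4)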
Number theory is concerned with the properties of and relations between integers. Algebraic number theory applies algebraic methods and principles to this field of inquiry. Examples are the use of algebraic expressions to describe general laws, like Fermat's Last Theorem, and of algebraic structures to analyze the behavior of numbers, such as the ring of integers. The related field of combinatorics uses algebraic techniques to solve problems related to counting, arrangement, and combination of discrete objects. An example in algebraic combinatorics is the application of group theory to analyze graphs and symmetries. The insights of algebra are also relevant to calculus, which uses mathematical expressions to examine rates of change and accumulation. It relies on algebra, for instance, to understand how these expressions can be transformed and what role variables play in them. Algebraic logic employs the methods of algebra to describe and analyze the structures and patterns that underlie logical reasoning, exploring both the relevant mathematical structures themselves and their application to concrete problems of logic. It includes the study of Boolean algebra to describe propositional logic as well as the formulation and analysis of algebraic structures corresponding to more complex systems of logic.
Algebraic methods are also commonly employed in other areas, like the natural sciences. For example, they are used to express scientific laws and solve equations in physics, chemistry, and biology. Similar applications are found in fields like economics, geography, engineering (including electronics and robotics), and computer science to express relationships, solve problems, and model systems. Linear algebra plays a central role in artificial intelligence and machine learning, for instance, by enabling the efficient processing and analysis of large datasets. Various fields rely on algebraic structures investigated by abstract algebra. For example, physical sciences like crystallography and quantum mechanics make extensive use of group theory, which is also employed to study puzzles such as Sudoku and Rubik's cubes, and origami. Both coding theory and cryptology rely on abstract algebra to solve problems associated with data transmission, like avoiding the effects of noise and ensuring data security.
Education
Algebra education mostly focuses on elementary algebra, which is one of the reasons why elementary algebra is also called school algebra. It is usually not introduced until secondary education since it requires mastery of the fundamentals of arithmetic while posing new cognitive challenges associated with abstract reasoning and generalization. It aims to familiarize students with the formal side of mathematics by helping them understand mathematical symbolism, for example, how variables can be used to represent unknown quantities. An additional difficulty for students lies in the fact that, unlike arithmetic calculations, algebraic expressions are often difficult to solve directly. Instead, students need to learn how to transform them according to certain laws, often with the goal of determining an unknown quantity.
Some tools to introduce students to the abstract side of algebra rely on concrete models and visualizations of equations, including geometric analogies, manipulatives such as sticks or cups, and "function machines" representing equations as flow diagrams. One method uses balance scales as a pictorial approach to help students grasp basic problems of algebra. The mass of some objects on the scale is unknown and represents variables. Solving an equation corresponds to adding and removing objects on both sides in such a way that the sides stay in balance until the only object remaining on one side is the object of unknown mass. Word problems are another tool to show how algebra is applied to real-life situations. For example, students may be presented with a situation in which Naomi's brother has twice as many apples as Naomi. Given that both together have twelve apples, students are then asked to find an algebraic equation that describes this situation (x + 2x = 12, where x is the number of Naomi's apples) and to determine how many apples Naomi has (four).
At the university level, mathematics students encounter advanced algebra topics from linear and abstract algebra. Initial undergraduate courses in linear algebra focus on matrices, vector spaces, and linear maps. Upon completing them, students are usually introduced to abstract algebra, where they learn about algebraic structures like groups, rings, and fields, as well as the relations between them. The curriculum typically also covers specific instances of algebraic structures, such as the systems of the rational numbers, the real numbers, and the polynomials.
See also
References
Notes
Citations
Sources
External links | Algebra | Mathematics | 6,957 |
18,605,046 | https://en.wikipedia.org/wiki/List%20of%20antibiotics | The following is a list of antibiotics. The highest division between antibiotics is bactericidal and bacteriostatic. Bactericidals kill bacteria directly, whereas bacteriostatics prevent them from dividing. However, these classifications are based on laboratory behavior; in practice, both treat bacterial infections. The development of antibiotics has had a profound effect on human health for many years, and antibiotics are used to treat infections and diseases in both people and animals.
By coverage
The following are lists of antibiotics for specific microbial coverage (not an exhaustive list):
MRSA
Antibiotics that cover methicillin-resistant Staphylococcus aureus (MRSA):
Vancomycin
Teicoplanin
Linezolid
Daptomycin
Trimethoprim/sulfamethoxazole
Doxycycline
Ceftobiprole (5th generation)
Ceftaroline (5th generation)
Clindamycin
Dalbavancin
Delafloxacin
Fusidic acid
Mupirocin (topical)
Omadacycline
Oritavancin
Tedizolid
Telavancin
Tigecycline (also covers gram negatives)
Pseudomonas aeruginosa
Antibiotics that cover Pseudomonas aeruginosa:
Certain cephalosporins, cephalosporin-beta-lactamase-inhibitor combinations, and new siderophore cephalosporins:
Ceftazidime (3rd generation)
Cefepime (4th generation)
Ceftobiprole (5th generation)
Ceftolozane/tazobactam
Ceftazidime/avibactam
Cefiderocol (siderophore cephalosporin)
Certain penicillins:
Piperacillin and Piperacillin/tazobactam
Ticarcillin/clavulanic acid
Certain carbapenems and carbapenem-beta-lactamase-inhibitors combinations:
Carbapenems: (meropenem, imipenem/cilastatin, doripenem - NOT ertapenem)
Meropenem/vaborbactam
Imipenem/cilastatin/relebactam
Others:
Fluoroquinolones: particularly levofloxacin, ciprofloxacin
Polymyxins: Colistin, Polymyxin B
Aztreonam (monobactam)
Aminoglycosides - particularly tobramycin and amikacin
VRE
Antibiotics that usually have activity against vancomycin-resistant Enterococcus (VRE):
Linezolid and Tedizolid
Streptogramins such as quinupristin-dalfopristin
Advanced generation tetracyclines: Tigecycline, Omadacycline, Eravacycline
Daptomycin
Oritavancin
Antibiotics with less reliable but occasional (depending on isolate and subspecies) activity:
occasionally penicillins including penicillin, ampicillin and ampicillin-sulbactam, amoxicillin and amoxicillin-clavulanate, and piperacillin-tazobactam (not all vancomycin-resistant Enterococcus isolates are resistant to penicillin and ampicillin)
occasionally doxycycline and minocycline
occasionally fluoroquinolones such as moxifloxacin, levofloxacin, and ciprofloxacin
By class
See also pathogenic bacteria for a list of antibiotics sorted by target bacteria.
Note: (Bs): Bacteriostatic
Antibiotic candidates
These are antibiotic candidates, and known antibiotics that are not yet mass-produced.
See also
Timeline of antibiotics, listed by year of introduction
Pathogenic bacteria
Notes
References
Antibiotics | List of antibiotics | Biology | 797 |
44,764,475 | https://en.wikipedia.org/wiki/Aldehyde%20tag | An aldehyde tag is a short peptide tag that can be further modified to add fluorophores, glycans, PEG (polyethylene glycol) chains, or reactive groups for further synthesis. A short, genetically-encoded peptide with a consensus sequence LCxPxR is introduced into fusion proteins, and by subsequent treatment with the formylglycine-generating enzyme (FGE), the cysteine of the tag is converted to a reactive aldehyde group. This electrophilic group can be targeted by an array of aldehyde-specific reagents, such as aminooxy- or hydrazide-functionalized compounds.
Development
The aldehyde tag is an artificial peptide tag recognized by the formylglycine-generating enzyme (FGE). Formylglycine is a glycine with a formyl group (-CHO) at the α-carbon. The sulfatase motif is the basis for the sequence of the peptide which results in the site-specific conversion of a cysteine to a formylglycine residue. The peptide tag was engineered after studies on FGE recognizable sequences in sulfatases from different organisms revealed a high homology in the sulfatase motif in bacteria, archaea as well as eukaryotes.
Aldehydes and ketones are used as chemical reporters due to their electrophilic properties. These properties enable a reaction under mild conditions when using a strong nucleophilic coupling partner. Typically, hydrazides and aminooxy probes are used in bioconjugation, forming stabilized addition products with carbonyl groups that are favored under the physiological reaction conditions. At neutral pH, the equilibrium of Schiff base formation lies far to the reactant side. To form stable hydrazones and oximes, compound derivatives are used to yield more product. Since the pH optimum of 4 to 6 cannot be achieved in live cells and adding a catalyst is limited by its associated toxicity, the reaction is slow in live cells. A typical reaction constant is 10−4 to 10−3 M−1 s−1.
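To illustrate why these rate constants translate into slow labeling, the following back-of-the-envelope estimate assumes pseudo-first-order conditions with the probe in large excess; the probe concentration of 1 mM is an assumed value chosen only for the example.

    # A minimal sketch estimating the labeling half-life for an uncatalysed
    # oxime/hydrazone ligation. The probe concentration is an assumption.
    import math

    k = 1e-3        # second-order rate constant in M^-1 s^-1 (upper end of the range)
    probe = 1e-3    # probe concentration in M (assumed, 1 mM)

    k_obs = k * probe                                # pseudo-first-order rate constant, s^-1
    half_life_hours = math.log(2) / k_obs / 3600
    print(f"half-life of labeling: about {half_life_hours:.0f} hours")   # roughly 190 hours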
A carbonyl group is introduced into proteins as a chemical reporter using various techniques, including methods like stop codon suppression and aldehyde tagging. Limiting the use of aldehydes and ketones is their restricted bioorthogonality in certain cellular environments. Limitations of aldehydes and ketones as chemical reporters include:
Competition with endogenous aldehydes or ketones in metabolites and cofactors, resulting in low yields and impaired specificity.
Side reactions, such as oxidation or unwanted addition of endogenous nucleophiles.
Restrained set of probes that form sufficiently stable products.
Aldehydes and ketones are therefore best used in compartments where such unwanted side reactions are reduced. For experiments with live cells, cell surfaces and the extracellular space are typical areas of application. Nevertheless, a feature of carbonyl groups is the vast number of organic reactions that involve them as electrophiles. Some of these reactions are readily convertible to ligations for probing aldehydes. A reaction recently employed for bioconjugation by Agarwal et al. is the adaptation of the Pictet-Spengler reaction as a ligation. The reaction is known from natural product biosynthetic pathways and has the major advantage of forming a new carbon-carbon bond. This guarantees long-term stability compared to carbon-heteroatom bonds with similar reaction kinetics.
The modification of cysteine or, more rarely, serine by FGE is an uncommon posttranslational modification that was discovered in the late 1990s. The deficiency of FGE leads to an overall deficiency of functional sulfatases due to a lack of the α-formylglycine formation vital for the sulfatases to perform their function. FGE is essential for protein modification, and the high specificity and conversion rate required in the native setting make this reaction applicable in chemical and synthetic biology.
Aldehyde tags were first inserted into the modified sulfatase motif peptide for proteins of interest in 2007. Since then, similar usage of aldehydes and ketones as chemical reporters in bioorthogonal applications has been demonstrated in self-assembly of cell-lysing drugs, the targeting of proteins, as well as glycans and the preparation of heterobifunctional fusion proteins.
Genetically encoding the aldehyde tag
The formylglycine tag or aldehyde tag is a convenient 6- or 13-amino-acid-long tag fused to a protein of interest. The 6-mer tag represents the small core consensus sequence and the 13-mer tag the longer full motif. Experiments on the genetically encoded aldehyde tag clearly showed high conversion efficiency with only the core consensus sequence present. Four proteins were produced recombinantly in E. coli, with 86% conversion efficiency for the full-length motif and >90% efficiency for the 6-mer, as determined by mass spectrometry.
The size of the sequence is analogous to the commonly used 6x His-tag and has the advantage that it can also be genetically encoded. The sequence is recognized in the ER solely on the basis of the primary sequence and is subsequently targeted by FGE. Notably, in the setup of recombinant protein expression in E. coli, coexpression of exogenous FGE aids full conversion, although E. coli has endogenous FGE activity.
The introduction of an aldehyde tag has a workflow that consists of three segments: (A) the expression of the fusion protein that carries the peptide tag derived from the sulfatase motif, (B) the enzymatic conversion of Cys to fGly, and (C) the bioorthogonal probing with hydrazides or alkoxyamines (Fig. 1).
As seen in Fig. 1, the engineered aldehyde tag consists of six amino acids. A set of organisms from all domains of life was chosen and the sequence homology of the sulfatase motif was determined. The sequence used is the best consensus for sequences found in bacteria, archaea, worms and higher vertebrates.
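Because the tag is recognized purely from the primary sequence, a fusion construct can be checked for the consensus motif with a simple pattern search. The sketch below scans a protein sequence for the LCxPxR consensus mentioned earlier; the example sequence is invented for illustration, and the regular-expression approach is not part of the original work.

    # A minimal sketch: locating the aldehyde-tag consensus motif LCxPxR
    # (x = any amino acid) in a protein sequence. The sequence is hypothetical.
    import re

    motif = re.compile(r"LC.P.R")
    sequence = "MKTAYIAKQRLCTPSRGSLEMVD"   # invented fusion-protein sequence

    for match in motif.finditer(sequence):
        # The cysteine converted to formylglycine by FGE is the second
        # residue of the matched motif.
        print(f"motif {match.group()} found at position {match.start() + 1}")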
FGE-mechanism of cysteine-formylglycine conversion
The catalytic mechanism of FGE is well studied. A multistep redox reaction with a covalent enzyme-substrate intermediate is proposed. The role of the cysteine residue in the conversion was studied by mutating the cysteine to alanine. No conversion was found using mass spectrometry when the mutated peptide tag was used. The mechanism shows the important role of the redox-active thiol group of cysteine in the formation of fGly, as seen in Fig. 2.
The key step of the catalytic cycle is the monooxidation of the cysteine residue of the enzyme, forming a reactive sulfenic acid intermediate. Subsequently, the hydroxyl group is transferred to the cysteine of the substrate and, after hetero-analogous β-elimination of H2O, a thioaldehyde is formed. This compound is very reactive and easily hydrolyzed, releasing the aldehyde and a molecule of H2S.
Applications
The aldehyde tag is a technique which has recently found increased application because of the introduction of bioorthogonal chemical reporters. Bioorthogonal agents contain functional groups for coupling, such as azides or cyclooctynes, which are not naturally found in the cell. Due to their foreignness, they are effectively inert and do not disrupt the native metabolism.
Fig. 3 gives an overview of possible labeling methods for formylglycine. For example, it can be coupled to probes such as biotin or a protein tag like Flag that are useful for purification and detection. Furthermore, fluorophores can be directly conjugated for live cell imaging. The conjugation of polyethylene glycol (PEG) chains to potential drug candidates extends their stability against proteases in body fluids and at the same time reduces renal clearance and immunogenicity. The first application described here deals with the formation of protein-protein conjugates through bioorthogonal probes. Since the aldehyde tag is, strictly speaking, not a true bioorthogonal agent, as aldehydes can be found in various metabolites, it can cause cross-reactions during protein labeling. However, coupling bioorthogonal probes such as azides or cyclooctynes can be applied to overcome this obstacle. As a second application, the coupling of glycan moieties to proteins is presented here. It can be utilised in the strategy of chemically introduced glycosylation patterns.
Forming protein-protein conjugates via Cu-free click chemistry
Studies have explored the strategy of producing protein-protein conjugates with the help of the aldehyde tag. Their aim was to connect full length human IgG (hIgG) to the human growth hormone (hGH). These protein-protein conjugates can be superior to monomeric proteins in terms of serum half life in protein therapeutics and, additionally, have appealing dual binding properties.
In order to achieve protein fusion, the five-residue aldehyde tag (CxPxR) was incorporated into hIgG and hGH. In hIgG, the aldehyde tag was introduced at the C termini of the two heavy chains, resulting in two possible conjugation sites. FGE then oxidizes the cysteine residue to formylglycine (fGly) during protein expression. For the subsequent conjugation steps, the strategy of copper-free click chemistry was selected. A strain-promoted 1,3-dipolar cycloaddition of a cyclooctyne and an azide was carried out, forming a covalent linkage (also termed the Cu-free azide-alkyne cycloaddition). Thus, the aldehyde-bearing proteins react under oxime formation with different heterobifunctional linkers which carry an aminooxy residue on one end and either an azide or a cyclooctyne on the other. This results in the attachment of hIgG to a linker containing a cyclooctyne (here dibenzoazacyclooctyne (DIBAC)) and hGH to a linker holding an azide function (Fig. 2A and B). The proteins hGH and hIgG were also treated with DIBAC-488 and azide Alexa Fluor 647 and analysed by SDS-PAGE and Western blot to validate oxime formation. Next, the DIBAC-hIgG and azide-hGH derivatives are joined by Cu-free click chemistry (Fig. 2C). The resulting fusion proteins were purified and analyzed by immunoblot (see Hudak et al. 2012).
The Western blots were first stained with Ponceau, then incubated with IgG antibodies against hGH, and subsequently treated with α-mIgG HRP and α-hIgG 647 for visualisation. In the hIgG-hGH conjugate Western blot (nonreducing conditions), two separate bands with different molecular weights are visible after immunodetection. These can be attributed to the formation of mono- and bi-conjugated hGH to hIgG.
Chemical glycosylation of the IgG Fc fragment
Nature has perfected glycosylation of proteins through a complex interaction of enzymes and carbohydrates over thousands of years. However, chemical glycosylation is still an obstacle due to the difficult synthesis of glycan in general. The synthesis of carbohydrate derivatives can be slow and tedious. Nonetheless, the interest in technologies to structurally mimic protein glycosylation is an appealing application as some protein functions solely depend on the pattern of the attached glycan. The Fc fragment of the IgG antibody, for example, is a homodimer with a highly conserved N-glycosylation site. The attached sugar moieties modulate the binding to specific immunoreceptors, thereby modifying the whole antibody function.
Smith et al. demonstrate the application of the aldehyde tag as a chemical conjugation site for glycans. The aldehyde tag sequence was incorporated into the Fc construct and introduced into CHO (Chinese hamster ovary) cells. As controls, gene constructs were used in which the cysteine residue was mutated to an alanine. After expression, the Fc proteins were purified using a protein A/G agarose column. The conversion of cysteine to formylglycine in CHO cells was examined using aminooxy AlexaFluor 488 and subsequent SDS-PAGE. However, fluorescence scanning displayed no fluorescence labeling, i.e. no formylglycine formation by endogenous FGE in CHO cells. The unaltered proteins were then treated with recombinant FGE from Mycobacterium tuberculosis in vitro, whereby the aldehyde group was successfully installed at the glycosylation site of Fc (Fig. 3A).
Next, the introduction of N-acetylglucosamine (GlcNAc) to the aldehyde-tagged proteins via oxime formation was carried out through treatment with aminooxy GlcNAc (AO-GlcNAc) (Fig. 3B). The conjugation was confirmed by liquid chromatography-electrospray ionisation-mass spectrometry (LC-ESI-MS) and lectin blot with the GlcNAc-binding wheat germ agglutinin attached to AlexaFluor 647. Having successfully introduced GlcNAc, the monomer was extended with a glycan structure containing GlcNAc, mannose (Man) and galactose (Gal) (Fig. 3C). A mutant endoglycosidase EndoS (EndoS-D233Q) was utilised as it is highly specific for IgG Fc N-linked GlcNAc residues and does not elongate Asn-GlcNAc sites on other proteins or on denatured IgGs. Product formation was again monitored by LC-ESI-MS and lectin blot probing, with the sialic acid-binding Sambucus nigra agglutinin attached to fluorescein isothiocyanate.
A successful chemical glycosylation of the Fc IgG fragment was achieved which resembles the naturally occurring glycosylation pattern. The study discussed above focused on the IgG antibody; however, the application of the aldehyde tag for glycan conjugation could potentially be extended to other proteins.
References
External links
Biochemical separation processes
Peptides | Aldehyde tag | Chemistry,Biology | 3,089 |
2,283,477 | https://en.wikipedia.org/wiki/Sextans%20Dwarf%20Spheroidal | The Sextans Dwarf Spheroidal is a dwarf spheroidal galaxy that was discovered in 1990 by Mike Irwin as the 8th satellite of the Milky Way, located in the constellation of Sextans. It is also an elliptical galaxy, and displays a redshift because it is receding from the Sun at 224 km/s (72 km/s from the Galaxy). The distance to the galaxy is 320,000 light-years and the diameter is 8,400 light-years along its major axis.
Like other dwarf spheroidal galaxies, the Sextans Dwarf's population consists of old, metal-poor stars: one study found that the majority of stars have a metallicity between [Fe/H] = −3.2 and −1.4. An analysis of several stars found them to also be deficient in barium, except for one star.
References
Dwarf spheroidal galaxies
Local Group
Milky Way Subgroup
Sextans
? | Sextans Dwarf Spheroidal | Astronomy | 200 |
37,472,617 | https://en.wikipedia.org/wiki/Aspergillus%20foetidus | Aspergillus foetidus is a species of fungus in the genus Aspergillus.
References
foetidus
Fungi described in 1945
Taxa named by Charles Thom
Fungus species | Aspergillus foetidus | Biology | 37 |
8,938,988 | https://en.wikipedia.org/wiki/Hyfrecator | A hyfrecator is a low-powered medical apparatus used in electrosurgery on conscious patients, usually in an office setting. It is used to destroy tissue directly, and to stop bleeding during minor surgery. It works by emitting low-power high-frequency high-voltage AC electrical pulses, via an electrode mounted on a handpiece, directly to the affected area of the body. A continuous electric spark discharge may be drawn between probe and tissue, especially at the highest settings of power, although this is not necessary for the device to function. The amount of output power is adjustable, and the device is equipped with different tips, electrodes and forceps, depending on the electrosurgical requirement. Unlike other types of electrosurgery, the hyfrecator does not employ a dispersive electrode pad that is attached to the patient in an area not being treated, and that leads back to the apparatus (sometimes loosely but not quite correctly called a "ground pad"). It is designed to work with non-grounded (insulated) patients.
The word hyfrecator is a portmanteau derived from “high-frequency eradicator.” It originated as a brand name for a device introduced in 1940 by the Birtcher Corporation of Los Angeles. Birtcher also registered Hyfrecator as a trademark in 1939, and rights to the registered trademark were acquired by CONMED Corporation when it acquired Birtcher in 1995. Today, machines with the name Hyfrecator are sold only by ConMed Corporation. However, the word "hyfrecator" is sometimes used as a genericized trademark to refer to any dedicated non-ground-return electrosurgical apparatus, and a number of manufacturers now produce such machines, although not by this name.
Differentiation from other types of electrosurgical equipment
The hyfrecator primarily differs from other electrosurgical devices in that it is low-powered and not intended for cutting tissue, thus enabling its use with conscious patients. The hyfrecator does not require a dispersive return pad, referred to in the electrosurgery field as a "ground pad" or "patient plate", because the hyfrecator can pass a very low-powered current between forceps tips via bipolar output, or pass an A.C. current between one pointed metal electrode probe and the patient, with the patient's self-capacitance alone providing a current sink; this is equivalent to considering displacement current to be the return current.
In the latter mode, the patient must sit or lie on an insulated table, much as in the case with objects to be charged electrostatically with high-voltage D.C. (as from a Van de Graaff generator, for example). Stray ground paths between the patient and foreign conductors (such as a metal table leading somewhere to earth-ground) can offer another capacitive reservoir besides the patient, and burns outside the area of treatment may thus result from current passing between the patient and earth-ground. For this reason, hyfrecation and all non-ground-pad electrosurgery is performed only on conscious patients, who would be aware of the burn and discomfort from an unwanted earth-ground path. (In types of electrosurgery which do employ a ground pad, the ground-pad path serves as such a low-resistance return to the machine that extraneous ground paths become unimportant, and thus with proper precautions these methods can be, and often are, used on anesthetized patients.)
Because hyfrecation is always a relatively low-power modality, it can be used in some situations (such as very small nevus removal or skin tag removal) without local anaesthesia. In many other uses to destroy larger lesions, a local anesthetic injection or regional nerve block is used. The pain from hyfrecation is due to the burning of tissue, and the pain of electric current is absent, due to the high (radio) frequency which does not directly cause discharge of nerves.
Although the hyfrecator is not used primarily to cut tissue, it may be used in a secondary capacity to control bleeding, after tissue is cut by a standard surgical scalpel, or else it may be used to partly destroy superficial tissue, that is then removed by the scraping action of a curette. These are done under local anesthesia. An example of such a combination procedure is the standard method of electrodesiccation and curettage used by dermatologists to destroy skin cancers.
Modes of use
Hyfrecators are used in two principal modes:
Desiccation, in which electrical energy kills tissue near the probe tip by heating it past the temperature at which cells can survive. The method is called desiccation because it removes water from tissue as steam, leaving the tissue white and dead, without obviously being burned. This mode is usually employed with the probe in physical contact with the skin or lesion to be destroyed. This method is notable for causing relatively little actual destruction at the point of skin contact, but a large zone of destruction beneath the skin, as the current from the probe fans out into the tissue below the point of contact. Such effects may be deliberately employed in destruction of subcutaneous nodules, where minimal damage to the intact and normal skin surface is desired, at the same time as destruction and degeneration of a larger mass immediately beneath the skin, such as a subcutaneous wart or sebaceous gland.
Fulguration, in which a deliberate spark is generated by touching or nearly touching the sharp probe to the lesion or skin. This results in far higher temperatures at the point of contact of the spark to skin, causing very high temperatures and carbonization (eschar) of the tissue immediately at the spark-contact point, and just below it. Thus, it results in the highest effect at the point of spark contact. This is most useful for completely destroying very superficial structures, such as nevi and skin tags, which protrude above the skin surface.
Targets of use
The hyfrecator has a large number of uses, such as removal of warts (especially recalcitrant warts), pearly penile papules, desiccation of sebaceous gland disorders, electrocautery of bleeding, epilation, destruction of small cosmetically unwanted superficial veins, in certain types of plastic surgery, and many other dermatological tasks. It may also be instrumental in the destruction of skin cancers such as basal cell carcinoma. For larger amounts of tissue destruction, the hyfrecator may be used in multiple sessions in the same area or point, as for example to gradually reduce the size of a large subcutaneous structure, such as a plantar wart.
The hyfrecator is useful to control bleeding in dermatological office surgery in conscious patients, after tissue-cutting, tissue removal, or biopsy is first done mechanically, with a scalpel. See electrodesiccation and curettage.
The hyfrecator can be used in almost all fields of medicine, e.g. podiatry, dentistry, ophthalmology, gynecology, and veterinary medicine.
More recently, the hyfrecator is being used by those performing body modification services as a more precise way to brand the skin for aesthetic purposes. It allows more intricate and elaborate designs to be burned into the skin.
References
External links
Hyfrecator on ConMed site
"The hyfrecator: a treatment for radiation induced telangiectasia in breast cancer patients" -- British Journal of Radiology
"Comparison of potassium titanyl phosphate vascular laser and hyfrecator in the treatment of vascular spiders and cherry angiomas." (Abstract) - Clinical and experimental dermatology
Medical equipment | Hyfrecator | Biology | 1,636 |
242,084 | https://en.wikipedia.org/wiki/Traceability%20matrix | In software development, a traceability matrix (TM) is a document, usually in the form of a table, used to assist in determining the completeness of a relationship by correlating any two baselined documents using a many-to-many relationship comparison. It is often used with high-level requirements (these often consist of marketing requirements) and detailed requirements of the product to the matching parts of high-level design, detailed design, test plan, and test cases.
A requirements traceability matrix may be used to check if the current project requirements are being met, and to help in the creation of a request for proposal, software requirements specification, various deliverable documents, and project plan tasks.
Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is then added up for each row and each column; these totals indicate the degree of mapping between the two sets of items. Zero values indicate that no relationship exists, and it must be determined whether one should be made. Large values imply that the relationship is too complex and should be simplified.
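A traceability matrix of this kind is easy to build from a list of requirement-to-test-case links. The sketch below, with invented identifiers and relationships, prints the matrix together with its row and column totals; a zero total flags an uncovered item.

    # A minimal sketch of a requirements traceability matrix.
    # All identifiers and links are invented for illustration.
    requirements = ["REQ-1", "REQ-2", "REQ-3"]
    test_cases = ["TC-1", "TC-2", "TC-3", "TC-4"]
    links = {("REQ-1", "TC-1"), ("REQ-1", "TC-2"),
             ("REQ-2", "TC-3"), ("REQ-2", "TC-4")}

    print("\t".join(["", *test_cases, "total"]))
    for req in requirements:
        row = ["X" if (req, tc) in links else "" for tc in test_cases]
        print("\t".join([req, *row, str(row.count("X"))]))
    col_totals = [str(sum((r, tc) in links for r in requirements)) for tc in test_cases]
    print("\t".join(["total", *col_totals, str(len(links))]))
    # REQ-3 has a row total of zero: it is not covered by any test case.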
To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents for both backward and forward traceability. That way, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other.
Sample traceability matrix
See also
Requirements traceability
Software engineering
List of requirements engineering tools
References
External links
Bidirectional Requirements Traceability by Linda Westfall
StickyMinds article: Traceability Matrix by Karthikeyan V
Why Software Requirements Traceability Remains a Challenge by Andrew Kannenberg and Dr. Hossein Saiedian
Software testing
Software requirements | Traceability matrix | Engineering | 394 |
7,777,747 | https://en.wikipedia.org/wiki/Web%20science | Web science is an emerging interdisciplinary field concerned with the study of large-scale socio-technical systems, particularly the World Wide Web. It considers the relationship between people and technology, the ways that society and technology co-constitute one another and the impact of this co-constitution on broader society. Web Science combines research from disciplines as diverse as sociology, computer science, economics, and mathematics.
An earlier definition was given by American computer scientist Ben Shneiderman: "Web Science" is processing the information available on the web in similar terms to those applied to natural environment.
The Web Science Institute describes Web Science as focusing "the analytical power of researchers from disciplines as diverse as mathematics, sociology, economics, psychology, law and computer science to understand and explain the Web. It is necessarily interdisciplinary – as much about social and organizational behaviour as about the underpinning technology." A central pillar of Web Science development is artificial intelligence (AI). The artificial intelligence currently in development is human-centered, with goals of furthering professional development courses as well as influencing public policy. Artificial intelligence developers are focused on the most impactful uses of this technology, while also hoping to expedite the growth and development of the human race.
Areas of activity
Emergent properties
Philip Tetlow, an IBM-based scientist influential in the emergence of web science as an independent discipline, argued for the concept of web life, which considers the Web not as a connected network of computers, as in common interpretations of the Internet, but rather as a sociotechnical machine capable of fusing together individuals and organisations into larger coordinated groups. The concept holds that, unlike the technologies that came before it, the Web's phenomenal growth and complexity are starting to outstrip our capability to control it directly, making it impossible for us to grasp it in its entirety at once. Tetlow made use of Fritjof Capra's concept of the 'web of life' as a metaphor.
Research groups
There are numerous academic research groups engaged in Web Science research, many of which are members of WSTNet, the Web Science Trust Network of research labs. Health Web Science emerged as a sub-discipline of Web Science that studies the Web's impact on human health outcomes and how the Web can be further utilized to improve those outcomes. These groups focus on the developmental possibilities provided through Web Science in areas such as health care and social welfare. Web science has been widely discussed as a means by which the internet can have a real-world impact in the field of medicine, an approach currently termed Medicine 2.0. The World Wide Web acts as a medium for the spread and circulation of knowledge, though these various research groups consider themselves responsible for maintaining verifiable and testable knowledge. Using their knowledge of the healthcare system as well as of web science, researchers focus on formatting and structuring their knowledge in a way that is easily accessible throughout the internet. The World Wide Web is quickly evolving, meaning that the information provided and its formatting must evolve as well. Recognizing the overlap between these two aspects, the spread of knowledge and the development of the internet, allows knowledge to be presented in a manner that evolves as quickly as the internet and everyday medical research. The accessibility of the internet and the rapid development of knowledge must be accompanied by efficient formatting to allow successful dissemination of information, as described by these various research groups.
Related major conferences
Association for Computing Machinery (ACM), Hypertext Conference (HT) sponsored by SIGWEB
ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)
International AAAI Conference on Weblogs and Social Media (ICWSM)
The Web Conference (WWW)
Association for Computing Machinery (ACM) Web Science Conference (WebSci)
See also
Digital anthropology
Digital sociology
Health Web Science
Sociology of the Internet
Technology and society
Web Science Trust
References
External links
A Framework for Web Science
Talk on web science by W3C
MSc on Web Science at Institute WeST, University of Koblenz-Landau, Germany
MSc on Web Sciences divided into different branches of study at Johannes Kepler University Linz, Austria
The Web Science Education Workshop
The Web Science Education Map
Master's Programme WebScience at Cologne University of Applied Sciences
The Web Science Institute at the University of Southampton
Cyberspace
Digital media | Web science | Technology | 885 |
30,277,300 | https://en.wikipedia.org/wiki/Errored%20second | In telecommunications and data communication systems, an errored second is an interval of a second during which any error whatsoever has occurred, regardless of whether that error was a single bit error or a complete loss of communication for that entire second. The type of error is not important for the purpose of counting errored seconds.
In communication systems with very low uncorrected bit error rates, such as modern fiber-optic transmission systems, or systems with higher low-level error rates that are corrected using large amounts of forward error correction, errored seconds are often a better measure of the effective user-visible error rate than the raw bit error rate.
For many modern packet-switched communication systems, even a single uncorrected bit error is enough to cause the loss of a data packet by causing its CRC check to fail; whether that packet loss was caused by a single bit error or a hundred-bit-long error burst is irrelevant.
For systems using large amounts of forward error correction, the reverse applies; a single low-level bit error will almost never occur, since any small errors will almost always be corrected, but any error sufficiently large to cause the forward error correction to fail will almost always result in a large burst error.
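The distinction can be made concrete with a small calculation: given the timestamps of individual errors, the number of errored seconds is simply the number of distinct one-second intervals in which at least one error fell. The timestamps in the sketch below are invented for illustration.

    # A minimal sketch contrasting a raw error count with an errored-second count.
    # Timestamps are seconds since the start of the measurement interval (invented).
    error_times = [0.12, 0.13, 0.14, 0.15, 7.80, 42.00, 42.99]

    errored_seconds = len({int(t) for t in error_times})
    print(f"{len(error_times)} errors in {errored_seconds} errored seconds")
    # A burst of several errors within one second counts as a single errored second.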
More specialist and precise definitions of errored seconds exist in standards such as the T1 and DS1 transport systems.
External links
Cisco DS1, T1 and E1 Glossary
Data transmission
Network performance
Error measures
Telecommunications | Errored second | Technology | 292 |
36,038,781 | https://en.wikipedia.org/wiki/Eta2%20Coronae%20Australis |
Eta2 Coronae Australis (Eta2 CrA), Latinized from η2 Coronae Australis, is a solitary star located in the southern constellation of Corona Australis. It is visible to the naked eye as a dim, blue-white hued star with an apparent visual magnitude of 5.59. Gaia DR3 parallax measurements imply a distance of 770 light years from the Solar System, but it is drifting closer with a radial velocity of . At its current distance Eta2 CrA's brightness is diminished by 0.27 magnitudes due to stellar extinction from interstellar dust and it has an absolute magnitude of −0.24.
This object has a stellar classification of B9 IV, indicating that it is a slightly evolved B-type subgiant star. However, Zorec & Royer (2012) model it to be a dwarf star that has completed 80.4% of its main sequence lifetime. It is estimated to be 213 million years old and it has a mass that is 3.23 times that of the Sun. The star is radiating 171 times the luminosity of the Sun from a photosphere 5.82 times the radius of the Sun at an effective temperature of . Eta2 CrA has a near-solar metallicity at [Fe/H] = +0.06 and spins modestly with a projected rotational velocity of . Some earlier catalogues listed the object as a chemically peculiar star but that status is now considered to be doubtful.
Sources
B-type subgiants
Corona Australis
Corona Australis, Eta2
CD-43 12854
173861
092382
7068
Coronae Australis, 26 | Eta2 Coronae Australis | Astronomy | 365 |
5,855,366 | https://en.wikipedia.org/wiki/Cumulus%20%28software%29 | Cumulus is digital asset management software for client/server systems, developed by Canto Software. The product makes use of metadata for indexing, organizing, and searching.
History
Cumulus was first released as a Macintosh application in 1992, and was named by Apple Computer as the "Most Innovative Product of 1992". Cumulus introduced search capabilities beyond those available in the Macintosh at the time, particularly relating to thumbnails.
Cumulus 1.0 was a single-user product with no client/server network capabilities. Among the main features of Cumulus 1.0, the search function automatically generated previews, and the product included support for peer-to-peer networking over AppleTalk.
Cumulus 2.5 was available in five different languages and received the 1993 MacUser magazine Eddy award for "Best Publishing & Graphics Utility". In 1995, Canto introduced the scanner software "Cirrus" to focus on the development of Cumulus.
Cumulus 3, released in 1996, introduced a server version for the first time and contained the possibility to spread files over the Internet via the "Web Publisher". Since Apple offered Cumulus 3 with its "Workgroup Server" as a bundle, Cumulus became one of the leading digital asset management systems.
Cumulus 4, released in 1998, was the first version that was network-ready and was available for the Macintosh, Windows and UNIX operating systems, allowing for cross-platform file sharing. Support for Solaris was later discontinued.
Cumulus 5 modified the software core to use an open architecture providing an API to external systems and databases. The open architecture of Cumulus 5 also enabled a more functional bridge between Cumulus and the Internet.
Cumulus 6 introduced Embedded Java Plugin (EJP) which allowed system integrators to build custom Java plug-ins in order to extend the functionality of the Cumulus client.
Cumulus 6.5 marked the end of the Cumulus Single User Edition product, which was licensed to MediaDex for further development and distribution.
Cumulus 7 was introduced summer of 2006.
Cumulus 8 was released in June 2009, with new indexing capabilities taking advantage of multicore/multiprocessor systems, and ability to manage a wider variety of file formats.
Cumulus 8.5 was released in May 2011. Support was added for multilingual metadata, sometimes referred to as "World Metadata." Cumulus Sites was updated to support metadata editing and file uploads.
Cumulus 8.6 was released in July 2012, and contains an updated user interface for the administration of Cumulus Sites and additional features for web-based administration of Cumulus. Other additions include features for collaboration links, multi-language support and automated version control.
Cumulus 9 was released in September 2013 and introduced a new Web Client User Interface and the Cumulus Video Cloud. The Cumulus Web Client UI was redesigned to provide users with a modern, easy-to-use interface to support and guide the user while addressing modern business needs. The Cumulus Video Cloud extends the Cumulus video handling capabilities to add conversion and global streaming. Cumulus 9 also saw the addition of upload collection links which allow external collaborators to drag and drop files directly into Cumulus without needing a Cumulus account.
Cumulus 9.1 was released in May 2014 and introduced the Adobe Drive Adapter for Cumulus which allows users to browse and search digital assets in Cumulus directly from Adobe work environments such as Photoshop, InDesign, Illustrator, Premier and other Adobe applications.
Cumulus 10 (Cumulus X) was released July 2015 and introduced two mobile-friendly products: the Cumulus app and Portals. The Cumulus app on iOS was designed to allow users to collaborate either on an iPhone or iPad. Portals is the read-only version of the Cumulus Web Client where users can work with assets that admins allow.
Cumulus 10.1 was introduced in January 2016 and included the InDesign Client integration where users can work with Adobe InDesign while accessing their assets from Cumulus.
Cumulus 10.2 was introduced in September 2016 and brought the Media Delivery Cloud using Amazon Web Services (AWS). It allows users to manage their media rendition in a single source and distribute media files globally across different channels and devices.
Cumulus 10.2.3 was released in February 2017 and came with a "crop and customize photos" feature for Portals and the Web Client.
Product overview
The cataloging of a file via upload into the archive is the point at which Cumulus extracts the maximum information about the file from its metadata. For image or photo files, this is typically Exif and IPTC data. The metadata is mainly used to search the archive. The use of embargo data supports license management for copyrighted material.
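The kind of metadata harvesting described here can be illustrated with a short, generic example; the snippet below uses the Pillow library to read Exif tags from an image and is not Cumulus's own API, and the file name and the chosen tags are assumptions made for the example.

    # A minimal, generic sketch of extracting image metadata for indexing.
    # This is an illustration with Pillow, not the Cumulus API; "photo.jpg"
    # is a hypothetical file.
    from PIL import Image
    from PIL.ExifTags import TAGS

    with Image.open("photo.jpg") as img:
        exif = img.getexif()
        metadata = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    # A few fields that a DAM system might index for searching.
    for key in ("DateTime", "Artist", "Copyright"):
        print(key, "->", metadata.get(key, "<not present>"))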
The managed files can be cataloged and their usage can be restricted. The indexing is based on a predefined taxonomy, which is governed by the internal rules of the organization or by industry standards. It can be specified whether files may only be used for specific purposes or only by certain groups of people. The production management system includes version management for files. Via the publication function, the files can be distributed directly via links or e-mails. External access is also possible via the Cumulus Portals web interface, which allows read access to released content from the catalog.
There are different variants, starting with the "Workgroup archive server" up to the "Enterprise Business Server" for large companies. Both server and client are extensible through a Java-based plug-in architecture. Since version 7.0, there is a web application based on Ajax with a separate user interface. For access to the Cumulus catalog on mobile, there has been an application for Apple devices based on iOS since 2010.
Miscellaneous
In 2015, Cumulus developer Canto established the first Canto digital asset management (DAM) event. The event is held annually in Berlin.
The Henry Stewart team has been hosting DAM conferences since 2006.
See also
Comparison of image viewers
References
External links
Graphics software
Information technology management | Cumulus (software) | Technology | 1,236 |
49,213,295 | https://en.wikipedia.org/wiki/Self-assembly%20based%20manufacturing | Self-assembly based manufacturing refers to a controlled process of using self-assembly and programmable matter to manufacture a product on an industrial scale. In traditional manufacturing and fabrication, there are physical and precision limitations on a workpiece; namely, the lower limit on the minimal dimensions of a workpiece has been a major challenge in modern manufacturing. Engineered self-assembly methods have significant potential to overcome the dimensional limitations of a workpiece. In general, most self-assembly applications rely on three key ingredients: geometry (order), interaction, and energy. To improve efficiency or to take shape, self-assembly based manufacturing must exploit one or more of these three ingredients. This is an emerging market with few examples to date. However, this field shows a strong potential to revolutionize many industrial markets from nanoelectronics to bio-engineering.
Successful processes
Many processes have been successfully developed at laboratory scale and show promise for future expansion into large-scale industrial manufacturing.
Sequence-specific molecular lithography on single DNA molecules
Direct molecular assembly on metal surfaces
Amyloid fibers and selective metal deposition: NM protein fibers have been demonstrated to create nanowires useful in connecting electrodes in laboratory testing.
Surface-tension-directed-self-assembly of electronic components.
One example is the automated reel-to-reel fluidic self-assembly machine demonstrated by University of Minnesota researchers. The machine was designed to produce lighting panels using light-emitting diodes. Assembly was performed at nearly twice the hourly rate of commercially available pick-and-place machines for SMT placement, 15,000 chips per hour compared to 8,000 chips per hour. At the same time, the self-assembly exceeded the accuracy rate of the pick-and-place machine as well.
Potential future applications
Fabrication of materials used in extreme environments, such as space, high altitude, free-fall scenarios, or the deep sea. Such environments offer conditions that favour increased self-assembly interaction with little or minimal energy consumption. Applications in these environments often require high precision and pose greater difficulties; however, they place fewer constraints on existing construction.
References
Manufacturing | Self-assembly based manufacturing | Engineering | 421 |
22,051,600 | https://en.wikipedia.org/wiki/Hercules%20InColor%20Card | The Hercules InColor Card (GB222) is an IBM PC compatible 8-bit ISA graphics controller card released in April 1987 by Hercules Computer Technology, Inc. It supported a fixed hardware palette of 64 colours, with the ability to display 16 colours on an EGA monitor, and software-redefinable fonts.
After the success of the monochrome Hercules Graphics Card (HGC) and Hercules Graphics Card Plus (HGC+) which gained wide developer support, the market was changing with the release of new colour cards which were becoming increasingly affordable. So Hercules released the InColor to compete primarily with IBM's new high-end VGA card, and also with many existing EGA compatible cards on the market.
The card came with drivers for popular programs like Lotus 1-2-3, AutoCAD, WordPerfect 5.0 and Microsoft Windows. Games compatible with the card included Karateka, Microsoft Flight Simulator 3.0 and 4, 3-D Helicopter Simulator, RAPCON: Military Air Traffic Control Simulator and Eco Adventures in the Oceans.
The InColor did not bring the success that Hercules had hoped for, and revenue slowly declined until Hercules was eventually acquired by Guillemot Corporation in October 1999 for $8.5m USD.
See also
Hercules Graphics Card
Hercules Graphics Card Plus
Hercules Network Card Plus
References
Graphics cards
Products introduced in 1987 | Hercules InColor Card | Technology | 280 |
27,197,866 | https://en.wikipedia.org/wiki/Ciclindole | Ciclindole (; developmental code name WIN-27,147-2), also known as cyclindole (), is an antipsychotic with a tricyclic and tryptamine-like structure that was never marketed.
It displaces spiperone binding in vitro and elevates dopamine levels in the striatum, indicating that it acts as a dopamine D2 receptor antagonist. It also shows apparent affinity for the α1-adrenergic receptor, the serotonin S1 receptor, and the serotonin S2 receptor. However, its affinities for all of the preceding targets are weak, in the low micromolar range.
The related drug flucindole is about 5 to 10 times more potent than ciclindole both in vitro and in vivo.
See also
Frovatriptan
α,N,N-Trimethyltryptamine
Iprindole
References
Abandoned drugs
Antipsychotics
Carbazoles
Dimethylamino compounds
Tricyclic antidepressants
Tryptamines | Ciclindole | Chemistry | 227 |
53,843,914 | https://en.wikipedia.org/wiki/Ruqian%20Wu | Ruqian Wu is a professor of physics and astronomy at the University of California, Irvine (UCI). His primary research area is condensed matter physics.
He gained a Ph.D. at the Institute of Physics, Academia Sinica.
He was awarded the status of Fellow in the American Physical Society, after he was nominated by their Division of Computational Physics in 2001, for contributions to the understanding of magnetic, electronic, mechanical, chemical and optical properties of compounds, alloys, interfaces, thin films and surfaces using first-principles calculations and for development of the methods and codes for such components.
References
Year of birth missing (living people)
Living people
University of California, Irvine faculty
Condensed matter physicists
Fellows of the American Physical Society | Ruqian Wu | Physics,Materials_science | 146 |
2,459,800 | https://en.wikipedia.org/wiki/Atmospheric%20Reentry%20Demonstrator | The Atmospheric Reentry Demonstrator (ARD) was a European Space Agency (ESA) suborbital reentry vehicle. It was developed and operated for experimental purposes, specifically to validate the multiple reentry technologies integrated upon it and the vehicle's overall design, as well as to gain greater insight into the various phenomena encountered during reentry.
The ARD only performed a single spaceflight. On 21 October 1998, the vehicle was launched upon the third flight of the Ariane 5 expendable launch system. Reaching a recorded altitude of 830 km, the ARD performed a guided reentry back to Earth before splashing down relatively close to its intended target point in the Pacific Ocean after one hour and 41 minutes of flight. Following its recovery and subsequent analysis, the vehicle was found to have performed well, the nose cone and heat shield thermal protection having remained in an ideal state, completely airtight and intact.
The ARD was the first guided sub-orbital reentry vehicle to be manufactured, launched and recovered by Europe. One of the core purposes of the mission was the gathering of knowledge that could be subsequently used during the development of future re-entry vehicles and precise landing capabilities. In the aftermath of the programme, the ESA decided to embark on a follow-up reentry demonstrator, known as the Intermediate eXperimental Vehicle (IXV). The first IXV vehicle underwent its first successful test flight during February 2015. The ARD and IXV demonstrators are intended to serve as developmental stepping stones towards a vehicle called Space Rider, meant to be the first of a series of production-standard spaceplanes.
Development
From the 1980s onwards, there was growing international interest in the development of reusable spacecraft; at this time, only the superpowers of the era, the Soviet Union and the United States, had developed this capability. European nations such as Britain and France embarked on their own national programmes to produce spaceplanes, such as HOTOL and Hermes, while attempting to attract the backing of the multinational European Space Agency (ESA). While these programmes ultimately did not garner enough support to continue development, there was still demand within a number of the ESA's member states to pursue the development of reusable space vehicles. Accordingly, shortly after the abandonment of the Hermes programme, it was decided to conduct a technology demonstrator programme with the aim of producing a vehicle which would support the development of subsequent reusable spacecraft. The ESA later referred to this programme, which became known as the Atmospheric Reentry Demonstrator (ARD), as being "a major step towards developing and operating space transportation vehicles that can return to Earth... For the first time, Europe will fly a complete space mission – launching a vehicle into space and recovering it safely."
The ARD was developed and operated as a cooperative civilian space programme under the oversight of the ESA; it fell within the agency's Manned Space Transportation Program (MSTP) framework. Under this framework, the programme was pursued with a pair of expressed principal objectives. First, the ESA was keen to perform a demonstration of the ability of the European space industry to design and produce low-cost reentry vehicles, as well as its ability to handle the critical mission phases involved in their operation, such as sub-orbital flight, reentry and vehicle recovery. In addition, the ARD was equipped with a comprehensive suite of sensors and recording equipment so that detailed measurements were obtained during testing; it was recognised that exploration of various phenomena across the successive phases of flight would be of high value. The data gained would be subsequently catalogued and harnessed during further programmes, especially future reentry vehicles and reusable launch systems.
The prime contractor selected to perform the ARD's development and construction was French aerospace company Aérospatiale (which later merged into the multinational EADS – SPACE Transportation group). During 1995 and 1996, multiple development studies exploring concepts for the shape of such a vehicle were conducted; ultimately, it was decided to adopt a configuration that resembled the classical manned Apollo capsule which had been previously operated by NASA. The use of an existing shape was a deliberate measure to avoid a lengthy exploration of the craft's aerodynamic properties; both the dimensions and mass of the craft were also defined by the capabilities of the Ariane 5 expendable launch system used to deploy the vehicle.
It has been claimed that even early on, the programme schedule was relatively tight and funding was limited. According to the ESA, the restrictive financing of the programme was an intentional effort, to prove that such a vehicle could be demonstrated with a smaller budget than previous efforts had been.
The experience and data obtained through the ARD and IXV demonstrators are serving as developmental stepping stones towards a vehicle called Space Rider.
Design
The ARD is an unmanned 3-axis stabilised automated capsule which served as an experimental reentry vehicle primarily for technology-proving and data-gathering purposes. In terms of its shape, the vehicle bears an external resemblance to a 70 per cent-scale version of the American Apollo capsule, and is considered by the ESA to be a 50 per cent-scale version of a prospective operational transportation vehicle; as such, it is 2.8 meters in diameter and weighs 2.8 tons at the atmospheric interface point. The ARD possesses an air- and water-tight pressurised structure primarily composed of an aluminium alloy, which is protected by a layer of Norcoat 62250 FI cork composite tiles across the exterior of the nosecone and by an arrangement of aleastrasil silicon dioxide-phenol formaldehyde resin tiles over the heat shield. The vehicle itself can be divided into three distinct sections: the frontshield section, the rear-cone section and the backcover section.
The ARD possesses manoeuvrability capabilities during re-entry; a favourable lift-to-drag ratio is achieved via an off-set center of gravity. The guidance law is akin to that of Apollo and the Space Shuttle, being based on a drag-velocity profile control and bank angles manoeuvres in order to conform with heating, load factor, rebound, and other required conditions; according to the ESA, this provided acceptable final guidance accuracy (within 5 km) with limited real-time calculation complexity. In operation, the guidance system becomes active once the aerodynamic forces become efficient and as long as the reaction control system remains efficient. Instead of using flight control surfaces, non-linear control is instead ensured by an assortment of seven hydrazine thrusters which, according to the manufacturer, were derived from the Ariane 5 expendable launch system. These rocket thrusters, each typically generating 400-N of thrust, were arranged in a blow-down configuration and positioned so that three units provide pitch control, two for roll and two for yaw.
During its reentry into the atmosphere, the ARD's heat shield is exposed to temperatures reaching as high as 2000 °C and a heat flux peaking at 1000 kW/m2, resulting from the ionisation of the atmosphere, which is in turn caused by the vehicle travelling at hypersonic speeds, in excess of 27,000 km/h during parts of its reentry descent. While the conical area of the vehicle may reach 1000 °C, with a heat flux of 90–125 kW/m2, the interior temperature will not rise above 40 °C. The thermal protection measures used were a combination of pre-existing materials that Aerospatiale had already developed under French military programmes along with multiple new-generation materials, the latter of which had been principally included for testing purposes. During reentry, the ARD's heat shield loses only 0.5 mm of its thickness, keeping its aerodynamic shape relatively constant, which in turn simplifies the flight control algorithms.
The vehicle is equipped with a Descent Recovery System (DRS), deployed prior to splashdown in order to limit the impact loads and to ensure its flotation for up to 36 hours. This system involving the deployment of multiple parachutes, stored within the internal space of the tip of the nose cone; in total, one flat ribbon pilot chute, one conical ribbon drogue chute with a single reefing stage, and three slotted ribbon main parachutes with two reefing stages are typically deployed. For buoyancy purposes, a pair of inflatable balloons are also present in the DRS, helping to keep the vehicle upright. To aid in its recovery, the ARD is furnished with both a satellite-based search and rescue radio beacon and flashing light.
The internal space of the ARD was packed with equipment to test and qualify new technologies and flight control capabilities for atmospheric reentry and landing. The avionics of the vehicle were primarily sourced from existing equipment used upon the Ariane 5 launcher. The guidance and navigation systems used a computerised inertial navigation system which, via a databus, would be automatically corrected by GPS during the ballistic phase of flight. However, the ARD was designed to be tolerant to instances of GPS failure; this is achieved via a series of control loop algorithms that verify the GPS-derived data to be within a pre-established ‘credibility window’, defined by the inertial navigation readings. During the vehicle's sole mission, it continuously recorded and transmitted to the ground in excess of 200 critical parameters that were used to analyse the ARD's flight performance as well as the behaviour of the equipment on board.
Operational history
The ARD only performed a single spaceflight. On 21 October 1998, the ARD was launched upon the third flight of the Ariane 5 expendable launch system. It was released shortly after separation of the launcher's cryogenic main stage (at an altitude of about 216 km) 12 minutes after lift-off from the Guiana Space Centre, Europe's spaceport in Kourou, French Guiana. The ARD attained a recorded altitude of 830 km, after which a guided reentry into the atmosphere was conducted. It splashed down to within 4.9 km of its target point in the Pacific Ocean between the Marquesas Islands and Hawaii after one hour and 41 minutes of flight.
The ARD was recovered roughly five hours following splash down. Following recovery, the vehicle was transported back to Europe and subject to detailed technical analysis in order to acquire more information on its performance. Engineers analysing data from its sub-orbital flight reported that all the capsule's systems had performed well and according to expectations; analysis of the craft's real-time telemetry broadcast during the flight had also reported that all electrical equipment and propulsion systems functioned nominally. The onboard telemetry systems and reception stations had all performed well, and the onboard GPS receiver worked satisfactorily during the entire flight except, as expected, during black-out in reentry.
Following post-mission analysis of the ARD's performance, it was announced that all of the demonstration and system requirements of the programme had been successfully achieved. The test flight itself was described as having been "nearly nominal", particularly the trajectory and flight control aspects; additionally, many of the onboard systems, such as the navigation (primary and backup), propulsion, thermal protection, communication, and DRS were found to have performed either as predicted or to have been outside these predictions by only a small margin. During reentry, the heat shield temperature reached a recorded peak temperature of 900 °C; nevertheless, both the vehicle's cone and heat shield thermal protection were found in a perfect state following its retrieval.
Issues highlighted during analysis included design uncertainties that led to difficulties in observing some physical phenomena such as real gas effects; characterization of the aerothermal environment was also hindered by the premature failure of some thermocouples. Overall, the flight was stated to have brought back a great amount of high quality aerodynamic information which, amongst other benefits, served to confirm and enhance the capabilities of ground-based prediction tools. Since its retrieval and the conclusion of post-mission examination, the sole ARD vehicle has been preserved and has become a publicly accessible exhibit at the European Space Research and Technology Centre in Noordwijk, Netherlands.
See also
IXV, follow-up ESA reentry demonstrator, tested in February 2015.
OREX, equivalent Japanese demonstrator from 1994, developed and flown by NASDA
CARE, experimental test vehicle for the ISRO Orbital Vehicle launched on 18 December 2014 atop GSLV Mk III LVM 3X
References
External links
EADS ARD web page
ESA ACRV review
ARD drop test from a stratospheric balloon performed in Southern Italy in 1996
Atmospheric entry
European Space Agency
Noordwijk
Suborbital spaceflight
Spacecraft which reentered in 1998
Space hardware returned to Earth intact
Technology demonstrations | Atmospheric Reentry Demonstrator | Engineering | 2,621 |
33,320,313 | https://en.wikipedia.org/wiki/Oligocrystalline%20material | An oligocrystalline material has a microstructure consisting of a few coarse grains, often columnar and parallel to the longitudinal ingot axis. This microstructure can be found in ingots produced by electron beam melting (EBM).
References
Crystallography | Oligocrystalline material | Physics,Chemistry,Materials_science,Engineering | 60 |
62,763,880 | https://en.wikipedia.org/wiki/Norman%20Laurence%20Gilbreath | Norman Laurence Gilbreath (born 1936) is an American magician and author known for originating the Gilbreath shuffle. He is also known for Gilbreath's conjecture concerning prime numbers.
Life and career
Gilbreath received a BS in mathematics at the University of California, Los Angeles (UCLA). Following graduate work in applied mathematics, which saw him work under C. C. Chang, he spent his entire career at the Rand Corporation as an expert on compilers and optimization tasks. His book Magic for an Audience was published in 1989 as a series of three articles in Genii Magazine. He lives in Los Angeles and performed regularly in the 2000s at Hollywood's Magic Castle.
The Gilbreath shuffle is a method of shuffling a deck of cards, by riffling two packs of cards after reversing one of them. Unlike a standard riffle, it preserves certain properties of the sequence of cards, leading to its use in magic tricks.
Gilbreath introduced the Gilbreath principle, the basis of card tricks using the Gilbreath shuffle, in a 1958 article in Genii magazine. In 1966 he published a generalization that is now called the Second Gilbreath principle. But the principle only became well known after Martin Gardner wrote about it in his July 1972 "Mathematical Games" column in Scientific American.
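The first Gilbreath principle can be illustrated with a short simulation (a sketch in Python; the colour encoding, random cut point and random riffle below are illustrative assumptions rather than Gilbreath's own presentation):

import random

def riffle(packet_a, packet_b):
    # Interleave the two packets at random while preserving each packet's
    # internal order, which is what a riffle shuffle does.
    merged, a, b = [], list(packet_a), list(packet_b)
    while a and b:
        merged.append(a.pop(0) if random.random() < 0.5 else b.pop(0))
    return merged + a + b

deck = ['R', 'B'] * 26                # a deck whose colours alternate from top to bottom
for trial in range(1000):
    cut = random.randint(1, 51)
    dealt = deck[:cut][::-1]          # dealing cards off the top reverses that packet
    shuffled = riffle(dealt, deck[cut:])
    # First Gilbreath principle: every consecutive pair contains one red and one black card.
    assert all(shuffled[i] != shuffled[i + 1] for i in range(0, 52, 2))

Every trial passes the assertion: no matter where the deck is cut, dealing (and thus reversing) one packet before the riffle guarantees that each successive pair contains one card of each colour.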
In number theory, he is known for Gilbreath's conjecture, an unproven pattern in the difference sequences of prime numbers. Gilbreath found this as a student in 1958 at UCLA. Two other students, R. B. Killgrove and K. E. Ralston, took advantage of the state-of-the-art SWAC computer installed at UCLA and confirmed it for the first 63419 primes. Unknown to them, the same pattern had been observed many years earlier by François Proth, but it is by Gilbreath's name that the conjecture is now known.
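The pattern is easy to test numerically (a sketch in Python with a simple sieve; the bound of 100,000 is an arbitrary choice, and a True result is evidence for the conjecture, not a proof):

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def gilbreath_conjecture_holds(bound):
    # Start from the primes and repeatedly take absolute differences of neighbours;
    # the conjecture asserts that every difference row begins with 1.
    row = primes_up_to(bound)
    while len(row) > 1:
        row = [abs(b - a) for a, b in zip(row, row[1:])]
        if row[0] != 1:
            return False
    return True

print(gilbreath_conjecture_holds(100000))   # prints True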
Books
Magic for an Audience, Genii Magazine, Vol. 52, No. 09-10-11, Spring 1989.
Beyond Imagination, Publisher: H&R Magic Books (2014).
References
External links
Norman Gilbreath Live
American magicians
Recreational mathematicians
American applied mathematicians
1936 births
Living people | Norman Laurence Gilbreath | Mathematics | 449 |
6,166,215 | https://en.wikipedia.org/wiki/Greater%20palatine%20nerve | The greater palatine nerve is a branch of the pterygopalatine ganglion. This nerve is also referred to as the anterior palatine nerve, due to its location anterior to the lesser palatine nerve. It carries both general sensory fibres from the maxillary nerve, and parasympathetic fibers from the nerve of the pterygoid canal. It may be anaesthetised for procedures of the mouth and maxillary (upper) teeth.
Structure
The greater palatine nerve is a branch of the pterygopalatine ganglion. It descends through the greater palatine canal, moving anteriorly and inferiorly. Here, it is accompanied by the descending palatine artery. It emerges upon the hard palate through the greater palatine foramen. It then passes forward in a groove in the hard palate, nearly as far as the incisor teeth.
While in the pterygopalatine canal, it gives off lateral posterior inferior nasal branches, which enter the nasal cavity through openings in the palatine bone, and ramify over the inferior nasal concha and middle and inferior meatuses. At its exit from the canal, a palatine branch is distributed to both surfaces of the soft palate.
Function
The greater palatine nerve carries both general sensory fibres from the maxillary nerve, and parasympathetic fibers from the nerve of the pterygoid canal. It supplies the gums, the mucous membrane and glands of the hard palate, and communicates in front with the terminal filaments of the nasopalatine nerve.
Clinical significance
The greater palatine nerve may be anaesthetised to perform dental procedures on the maxillary (upper) teeth, and sometimes for cleft lip and cleft palate surgery.
References
External links
Diagram at adi-visuals.com
Trigeminal nerve
Facial nerve
Nervous system
Nerves of the head and neck
Otorhinolaryngology | Greater palatine nerve | Biology | 386 |
48,597,951 | https://en.wikipedia.org/wiki/K-way%20merge%20algorithm | In computer science, k-way merge algorithms or multiway merges are a specific type of sequence merge algorithms that specialize in taking in k sorted lists and merging them into a single sorted list. These merge algorithms generally refer to merge algorithms that take in a number of sorted lists greater than two. Two-way merges are also referred to as binary merges. The k-way merge is also an external sorting algorithm.
Two-way merge
A 2-way merge, or a binary merge, has been studied extensively due to its key role in merge sort. An example of such is the classic merge that appears frequently in merge sort examples. The classic merge outputs the data item with the lowest key at each step; given some sorted lists, it produces a sorted list containing all the elements in any of the input lists, and it does so in time proportional to the sum of the lengths of the input lists.
Denote by A[1..p] and B[1..q] two arrays sorted in increasing order.
Further, denote by C[1..n] the output array.
The canonical 2-way merge algorithm stores indices i, j, and k into A, B, and C respectively.
Initially, these indices refer to the first element, i.e., are 1.
If A[i] < B[j], then the algorithm copies A[i] into C[k] and increases i and k.
Otherwise, the algorithm copies B[j] into C[k] and increases j and k.
A special case arises if either i or j have reached the end of A or B.
In this case the algorithm copies the remaining elements of B or A into C and terminates.
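A direct transcription of this procedure (a minimal sketch in Python, using 0-based indices rather than the 1-based description above):

def two_way_merge(A, B):
    # Merge two sorted lists into one sorted list in time proportional to len(A) + len(B).
    C = []
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] < B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1
    # One of the lists is exhausted; copy the remaining elements of the other.
    C.extend(A[i:])
    C.extend(B[j:])
    return C

print(two_way_merge([1, 4, 6], [2, 3, 5, 7]))   # [1, 2, 3, 4, 5, 6, 7]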
k-way merge
The k-way merge problem consists of merging k sorted arrays to produce a single sorted array with the same elements.
Denote by n the total number of elements.
n is equal to the size of the output array and the sum of the sizes of the k input arrays.
For simplicity, we assume that none of the input arrays is empty.
As a consequence k ≤ n, which simplifies the reported running times.
The problem can be solved in O(n log k) running time with O(k) space.
Several algorithms that achieve this running time exist.
Iterative 2-way merge
The problem can be solved by iteratively merging two of the k arrays using a 2-way merge until only a single array is left. If the arrays are merged in arbitrary order, then the resulting running time is only O(kn). This is suboptimal.
The running time can be improved by iteratively merging the first with the second, the third with the fourth, and so on. As the number of arrays is halved in each iteration, there are only Θ(log k) iterations. In each iteration every element is moved exactly once. The running time per iteration is therefore in Θ(n) as n is the number of elements. The total running time is therefore in Θ(n log k).
We can further improve upon this algorithm, by iteratively merging the two shortest arrays. It is clear that this minimizes the running time and can therefore not be worse than the strategy described in the previous paragraph. The running time is therefore in O(n log k). Fortunately, in border cases the running time can be better. Consider for example the degenerate case, where all but one array contain only one element. The strategy explained in the previous paragraph needs Θ(n log k) running time, while the improved one only needs Θ(n + k log k) running time.
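A sketch of the shortest-arrays-first strategy in Python (the tie-breaking counter and the use of heapq.merge as the 2-way merge primitive are implementation choices, not part of the analysis above):

import heapq

def merge_shortest_first(lists):
    # Keep the arrays in a min-heap keyed by length, so the two shortest are always popped first.
    heap = [(len(lst), idx, lst) for idx, lst in enumerate(lists)]
    heapq.heapify(heap)
    counter = len(lists)
    while len(heap) > 1:
        _, _, a = heapq.heappop(heap)
        _, _, b = heapq.heappop(heap)
        merged = list(heapq.merge(a, b))            # standard 2-way merge of two sorted lists
        heapq.heappush(heap, (len(merged), counter, merged))
        counter += 1
    return heap[0][2] if heap else []

print(merge_shortest_first([[2, 7, 16], [5, 10, 20], [3, 6, 21], [4, 8, 9]]))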
Direct k-way merge
In this case, we would simultaneously merge k-runs together.
A straightforward implementation would scan all k arrays to determine the minimum.
This straightforward implementation results in a running time of Θ(kn).
Note that this is mentioned only as a possibility, for the sake of discussion. Although it would work, it is not efficient.
We can improve upon this by computing the smallest element faster.
By using either heaps, tournament trees, or splay trees, the smallest element can be determined in O(log k) time.
The resulting running times are therefore in O(n log k).
The heap is more commonly used, although a tournament tree is faster in practice. A heap uses approximately 2*log(k) comparisons in each step because it handles the tree from the root down to the bottom and needs to compare both children of each node. Meanwhile, a tournament tree only needs log(k) comparisons because it starts on the bottom of the tree and works up to the root, only making a single comparison in each layer. The tournament tree should therefore be the preferred implementation.
Heap
The idea is to maintain a min-heap of the k lists, each keyed by their smallest current element. A simple algorithm builds an output buffer with nodes from the heap. Start by building a min-heap of nodes, where each node consists of a head element of the list, and the rest (or tail) of the list. Because the lists are sorted initially, the head is the smallest element of each list; the heap property guarantees that the root contains the minimum element over all lists. Extract the root node from the heap, add the head element to the output buffer, create a new node out of the tail, and insert it into the heap. Repeat until there is only one node left in the heap, at which point just append that remaining list (head and tail) to the output buffer.
Using pointers, an in-place heap algorithm
allocates a min-heap of pointers into the input arrays.
Initially these pointers point to the smallest elements of the input array.
The pointers are sorted by the value that they point to.
In an O(k) preprocessing step the heap is created using the standard heapify procedure.
Afterwards, the algorithm iteratively transfers the element that the root pointer points to, increases this pointer and executes the standard increase key (sift-down) procedure upon the root element.
The running time of the increase key procedure is bounded by O(log k).
As there are n elements, the total running time is O(n log k).
Note that replacing the key of the root and iteratively sifting it down (an increase-key) is not supported by many priority queue libraries such as the C++ STL and Java's; performing an extract-min followed by an insert instead is less efficient.
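A minimal sketch of the heap-based merge in Python; heapq.heapreplace plays the role of the replace-the-root-and-sift-down step described above, and the (value, list index, position) tuples are an implementation choice:

import heapq

def k_way_merge(lists):
    # Build a min-heap over (current value, list index, position within that list).
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)                       # O(k) preprocessing
    out = []
    while heap:
        value, i, pos = heap[0]               # the root holds the overall minimum
        out.append(value)
        pos += 1
        if pos < len(lists[i]):
            # Replace the root with the next element of the same list and sift down: O(log k).
            heapq.heapreplace(heap, (lists[i][pos], i, pos))
        else:
            heapq.heappop(heap)               # this list is exhausted
    return out

print(k_way_merge([[2, 7, 16], [5, 10, 20], [3, 6, 21], [4, 8, 9]]))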
Tournament Tree
The Tournament Tree is based on an elimination tournament, as in sports competitions. In each game, two of the input elements compete. The winner is promoted to the next round. Therefore, we get a binary tree of games. The list is sorted in ascending order, so the winner of a game is the smaller one of both elements.
For k-way merging, it is more efficient to only store the loser of each game (see image). The data structure is therefore called a loser tree. When building the tree or replacing an element with the next one from its list, we still promote the winner of the game to the top. The tree is filled like in a sports match but the nodes only store the loser. Usually, an additional node above the root is added that represents the overall winner. Every leaf stores a pointer to one of the input arrays. Every inner node stores a value and an index. The index of an inner node indicates which input array the value comes from. The value contains a copy of the first element of the corresponding input array.
The algorithm iteratively appends the minimum element to the result and then removes the element from the corresponding input list. It updates the nodes on the path from the updated leaf to the root (replacement selection). The removed element is the overall winner. Therefore, it has won each game on the path from the input array to the root. When selecting a new element from the input array, the element needs to compete against the previous losers on the path to the root. When using a loser tree, the partner for replaying the games is already stored in the nodes. The loser of each replayed game is written to the node and the winner is iteratively promoted to the top. When the root is reached, the new overall winner was found and can be used in the next round of merging.
The images of the tournament tree and the loser tree in this section use the same data and can be compared to understand the way a loser tree works.
Algorithm
A tournament tree can be represented as a balanced binary tree by adding sentinels to the input lists (i.e. adding a member to the end of each list with a value of infinity) and by adding null lists (comprising only a sentinel) until the number of lists is a power of two. The balanced tree can be stored in a single array. The parent element can be reached by dividing the current index by two.
When one of the leaves is updated, all games from the leaf to the root are replayed. In the following pseudocode, an object oriented tree is used instead of an array because it is easier to understand. Additionally, the number of lists to merge is assumed to be a power of two.
function merge(L1, ..., Ln)
    buildTree(heads of L1, ..., Ln)
    while tree has elements
        winner := tree.winner
        output winner.value
        new := winner.index.next
        replayGames(winner, new) // Replacement selection

function replayGames(node, new)
    loser, winner := playGame(node, new)
    node.value := loser.value
    node.index := loser.index
    if node != root
        replayGames(node.parent, winner)

function buildTree(elements)
    nextLayer := new Array()
    while elements not empty
        el1 := elements.take()
        el2 := elements.take()
        loser, winner := playGame(el1, el2)
        parent := new Node(el1, el2, loser)
        nextLayer.add(parent)
    if nextLayer.size == 1
        return nextLayer // only root
    else
        return buildTree(nextLayer)
Running time
In the beginning, the tree is first created in time Θ(k). In each step of merging, only the games on the path from the new element to the root need to be replayed. In each layer, only one comparison is needed. As the tree is balanced, the path from one of the input arrays to the root contains only Θ(log k) elements. In total, there are n elements that need to be transferred. The resulting total running time is therefore in Θ(n log k).
Example
The following section contains a detailed example for the replacement selection step and one example for a complete merge containing multiple replacement selections.
Replacement selection
Games are replayed from the bottom to the top. In each layer of the tree, the currently stored element of the node and the element that was provided from the layer below compete. The winner is promoted to the top until we found the new overall winner. The loser is stored in the node of the tree.
Merge
To execute the merge itself, the overall smallest element is repeatedly replaced with the next input element. After that, the games to the top are replayed.
This example uses four sorted arrays as input.
{2, 7, 16}
{5, 10, 20}
{3, 6, 21}
{4, 8, 9}
The algorithm is initiated with the heads of each input list. Using these elements, a binary tree of losers is built. For merging, the lowest list element 2 is determined by looking at the overall minimum element at the top of the tree. That value is then popped off, and its leaf is refilled with 7, the next value in the input list. The games on the way to the top are replayed like in the previous section about replacement selection. The next element that is removed is 3. Starting from the next value in the list, 6, the games are replayed up until the root. This is being repeated until the minimum of the tree equals infinity.
Lower bound on running time
One can show that no comparison-based k-way merge algorithm exists with a running time in O(n f(k)) where f grows asymptotically slower than a logarithm, and n being the total number of elements. (Excluding data with desirable distributions such as disjoint ranges.) The proof is a straightforward reduction from comparison-based sorting. Suppose that such an algorithm existed, then we could construct a comparison-based sorting algorithm with running time O(n f(n)) as follows: Chop the input array into n arrays of size 1. Merge these n arrays with the k-way merge algorithm. The resulting array is sorted and the algorithm has a running time in O(n f(n)). This is a contradiction to the well-known result that no comparison-based sorting algorithm with a worst case running time below O(n log n) exists.
External sorting
k-way merges are used in external sorting procedures. External sorting algorithms are a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory (usually a hard drive). k-way merge algorithms usually take place in the second stage of external sorting algorithms, much like they do for merge sort.
A multiway merge allows for the files outside of memory to be merged in fewer passes than in a binary merge. If there are 6 runs that need to be merged then a binary merge would need 3 merge passes, as opposed to a 6-way merge's single merge pass. This reduction of merge passes is especially important considering the large amount of information that is usually being sorted in the first place, allowing for greater speed-ups while also reducing the amount of accesses to slower storage.
References
Sorting algorithms | K-way merge algorithm | Mathematics | 2,873 |
29,665,262 | https://en.wikipedia.org/wiki/Biodosimetry | Biodosimetry is a measurement of biological response as a surrogate for radiation dose. The International Commission on Radiation Units and Measurements and International Atomic Energy Agency have issued guidance on performing biodosimetry and interpreting data.
Notes and references
Radiation health effects
Medical signs | Biodosimetry | Chemistry,Materials_science | 54 |
23,962,040 | https://en.wikipedia.org/wiki/Internal%20tide | Internal tides are generated as the surface tides move stratified water up and down sloping topography, which produces a wave in the ocean interior. So internal tides are internal waves at a tidal frequency. The other major source of internal waves is the wind which produces internal waves near the inertial frequency. When a small water parcel is displaced from its equilibrium position, it will return either downwards due to gravity or upwards due to buoyancy. The water parcel will overshoot its original equilibrium position and this disturbance will set off an internal gravity wave. Munk (1981) notes, "Gravity waves in the ocean's interior are as common as waves at the sea surface-perhaps even more so, for no one has ever reported an interior calm."
Simple explanation
The surface tide propagates as a wave in which water parcels in the whole water column oscillate in the same direction at a given phase (i.e., in the trough or at the crest, Fig. 1, top). This means that while the form of the surface wave itself may propagate across the surface of the water, the fluid particles themselves are restricted to a relatively small neighborhood. Fluid moves upwards as the crest of the surface wave is passing and downwards as the trough passes. Lateral motion only serves to make up for the height difference in the water column between the crest and trough of the wave: as the surface rises at the top of the water column, water moves laterally inward from adjacent downwards-moving water columns to make up for the change in volume of the water column. While this explanation focuses on the motion of the ocean water, the phenomenon being described is in nature an interfacial wave, with mirroring processes happening on either side of the interface between two fluids: ocean water and air. At the simplest level, an internal wave can be thought of as an interfacial wave (Fig. 1, bottom) at the interface of two layers of the oceans differentiated by a change in the water's properties, such as a warm surface layer and cold deep layer separated by a thermocline. As the surface tide propagates between these two fluid layers at the ocean surface, a homologous internal wave mimics it below, forming the internal tide. The interfacial movement between two layers of ocean is large compared to surface movement because although as with surface waves, the restoring force for internal waves and tides is still gravity, its effect is reduced because the densities of the two layers are relatively similar compared to the large density difference at the air-sea interface. Thus larger displacements are possible inside the ocean than are possible at the sea surface.
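The weakness of the restoring force can be made concrete with a reduced-gravity estimate (a sketch with assumed, typical layer densities; the numbers are illustrative only):

# Reduced gravity g' = g * (rho_lower - rho_upper) / rho_lower for a two-layer ocean.
g = 9.81                                  # m/s^2, gravity acting at the air-sea interface
rho_upper, rho_lower = 1025.0, 1027.0     # kg/m^3, warm surface layer over a colder deep layer (assumed values)
g_reduced = g * (rho_lower - rho_upper) / rho_lower
print(g_reduced)                          # ~0.019 m/s^2, roughly 500 times weaker than at the surface

With the restoring force weakened by this factor, interfacial displacements inside the ocean can be far larger than those of the sea surface.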
Tides occur mainly at diurnal and semidiurnal periods. The principal lunar semidiurnal constituent is known as M2 and generally has the largest amplitudes. (See external links for more information.)
Location
The largest internal tides are generated at steep, midocean topography such as the Hawaiian Ridge, Tahiti, the Macquarie Ridge, and submarine ridges in the Luzon Strait.
Continental slopes such as the Australian North West Shelf also generate large internal tides.
These internal tides may propagate onshore and dissipate much like surface waves, or they may propagate away from the topography into the open ocean. For tall, steep, midocean topography, such as the Hawaiian Ridge, it is estimated that about 85% of the energy in the internal tide propagates away into the deep ocean with about 15% of its energy being lost within about 50 km of the generation site. The lost energy contributes to turbulence and mixing near the generation sites.
It is not clear where the energy that leaves the generation site is dissipated, but there are 3 possible processes: 1) the internal tides scatter and/or break at distant midocean topography, 2) interactions with other internal waves remove energy from the internal tide, or 3) the internal tides shoal and break on continental shelves.
Propagation and dissipation
Briscoe (1975) noted that “We cannot yet answer satisfactorily the questions: ‘where does the internal wave energy come from, where does it go, and what happens to it along the way?’”
Although technological advances in instrumentation and modeling have produced greater knowledge of internal tide and near-inertial wave generation, Garrett and Kunze (2007) observed 33 years later that “The fate of the radiated [large-scale internal tides] is still uncertain. They may scatter into [smaller scale waves] on further encounter with islands or the rough seafloor, or transfer their energy to smaller-scale internal waves in the ocean interior” or “break on distant continental slopes”.
It is now known that most of the internal tide energy generated at tall, steep midocean topography radiates away as large-scale internal waves. This radiated internal tide energy is one of the main sources of energy into the deep ocean, roughly half of the wind energy input. Broader interest in internal tides is spurred by their impact on the magnitude and spatial inhomogeneity of mixing, which in turn has a first order effect on the meridional overturning circulation.
The internal tidal energy in one tidal period going through an area perpendicular to the direction of propagation is called the energy flux and is measured in Watts/m2. The energy flux at one point can be summed over depth – this is the depth-integrated energy flux and is measured in Watts/m. The Hawaiian Ridge produces depth-integrated energy fluxes as large as 10 kW/m. The longest wavelength waves are the fastest and thus carry most of the energy flux. Near Hawaii, the typical wavelength of the longest internal tide is about 150 km while the next longest is about 75 km. These waves are called mode 1 and mode 2, respectively. Although Fig. 1 shows there is no sea surface expression of the internal tide, there actually is a displacement of a few centimeters. These sea surface expressions of the internal tide at different wavelengths can be detected with the Topex/Poseidon or Jason-1 satellites (Fig. 2).
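The quoted wavelengths imply the phase speeds directly (a back-of-the-envelope sketch; the 12.42-hour M2 period is a standard value that is assumed here rather than taken from the text):

wavelength_m = 150e3            # mode-1 wavelength near Hawaii, from the text above
period_s = 12.42 * 3600         # M2 semidiurnal period (standard value, assumed)
phase_speed = wavelength_m / period_s
print(phase_speed)              # ~3.4 m/s; the 75 km mode-2 wave travels at roughly half this speed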
Near 15 N, 175 W on the Line Islands Ridge, the mode-1 internal tides scatter off the topography, possibly creating turbulence and mixing, and producing smaller wavelength mode 2 internal tides.
The inescapable conclusion is that energy is lost from the surface tide to the internal tide at midocean topography and continental shelves, but the energy in the internal tide is not necessarily lost in the same place. Internal tides may propagate thousands of kilometers or more before breaking and mixing the abyssal ocean.
Abyssal mixing and meridional overturning circulation
The importance of internal tides and internal waves in general relates to their breaking, energy dissipation, and mixing of the deep ocean. If there were no mixing in the ocean, the deep ocean would be a cold stagnant pool with a thin warm surface layer.
While the meridional overturning circulation (also referred to as the thermohaline circulation) redistributes about 2 PW of heat from the tropics to polar regions, the energy source for this flow is the interior mixing, which is comparatively much smaller – about 2 TW.
Sandstrom (1908) showed a fluid which is both heated and cooled at its surface cannot develop a deep overturning circulation.
Most global models have incorporated uniform mixing throughout the ocean because they do not include or resolve internal tidal flows.
However, models are now beginning to include spatially variable mixing related to internal tides and the rough topography where they are generated and distant topography where they may break. Wunsch and Ferrari (2004) describe the global impact of spatially inhomogeneous mixing near midocean topography: “A number of lines of evidence, none complete, suggest that the oceanic general circulation, far from being a heat engine, is almost wholly governed by the forcing of the wind field and secondarily by deep water tides... The now inescapable conclusion that over most of the ocean significant ‘vertical’ mixing is confined to topographically complex boundary areas implies a potentially radically different interior circulation than is possible with uniform mixing. Whether ocean circulation models... neither explicitly accounting for the energy input into the system nor providing for spatial variability in the mixing, have any physical relevance under changed climate conditions is at issue.” There is a limited understanding of “the sources controlling the internal wave energy in the ocean and the rate at which it is dissipated” and are only now developing some “parameterizations of the mixing generated by the interaction of internal waves, mesoscale eddies, high-frequency barotropic fluctuations, and other motions over sloping topography.”
Internal tides at the beach
Internal tides may also dissipate on continental slopes and shelves or even reach within 100 m of the beach (Fig. 3). Internal tides bring pulses of cold water shoreward and produce large vertical temperature differences. When surface waves break, the cold water is mixed upwards, making the water cold for surfers, swimmers, and other beachgoers. Surface waters in the surf zone can change by about 10 °C in about an hour.
Internal tides, internal mixing, and biological enhancement
Internal tides generated by tidal semidiurnal currents impinging on steep submarine ridges in island passages (e.g. the Mona Passage) or near the shelf edge can enhance turbulent dissipation and internal mixing near the generation site. The development of Kelvin-Helmholtz instability during the breaking of the internal tide can explain the formation of high diffusivity patches that generate a vertical flux of nitrate (NO3−) into the photic zone and can sustain new production locally.
Another mechanism for higher nitrate flux at spring tides results from pulses of strong turbulent dissipation associated with high frequency internal soliton packets.
Some internal soliton packets are the result of the nonlinear evolution of the internal tide.
See also
Tide
Internal wave
Physical oceanography
References
External links
Scripps Institution of Oceanography
Southern California Coastal Ocean Observing System
Internal Tides of the Oceans, Harper Simmons, by Jenn Wagaman of Arctic Region Supercomputing Center
Principal tidal constituents in Physical oceanography textbook, Bob Stewart of Texas A&M University
Eric Kunze's work on internal waves, internal tides, mixing, and more
Tides
Physical oceanography | Internal tide | Physics | 2,105 |
62,162,863 | https://en.wikipedia.org/wiki/Serpentine%20Art%20and%20Nature%20Commons | The Serpentine Art and Nature Commons ("SANC" or "Serpentine Commons") is a not-for-profit organization founded in 1978. SANC is dedicated to preserving and maintaining the woodlands and serpentine ridge on the east shore of Staten Island and more specifically within the neighborhoods of Grymes Hill and Silver Lake.
The Serpentine Commons is a community-based group that provides open space, hikes and other educational opportunities to the North Shore of Staten Island on more than 10 of the approximately 40 acres of land in the Serpentine Ridge Nature Preserve of the Special Hillsides Preservation District.
SANC owns the four lots comprising the over 10 acres of land thanks to a grant by the Trust for Public Land.
The steep slope park is open to everyone without charge. The hiking trails start at the bottom of the hill at 599 Van Duzer Street. There is also an entrance from the top of the hill at 255 Howard Avenue as well as a gated entrance by 30 Howard Circle.
The members meet monthly on the second Monday of the month at 7:30 pm in the Kairos House at Wagner College. Anyone is invited to participate.
References
North Shore, Staten Island
Wildlife conservation
Grymes Hill, Staten Island | Serpentine Art and Nature Commons | Biology | 251 |
14,566,980 | https://en.wikipedia.org/wiki/Water%20sky | Water sky is a phenomenon that is closely related to ice blink. It forms in regions with large areas of ice and low-lying clouds and so is limited mostly to the extreme northern and southern sections of earth, in Antarctica and in the Arctic.
When light hits the blue oceans or seas, some of it bounces back and enables the observer to physically see the water. However, some of the light also is reflected back up on to the bottoms of low-lying clouds and causes a dark spot to appear underneath some clouds. These clouds may be visible when the seas are not and can show alert and knowledgeable travelers the general direction of water. The dark clouds over open water have long been used by polar explorers and scientists to navigate in sea ice. For example, Arctic explorer Fridtjof Nansen and his assistant Hjalmar Johansen used the phenomenon to find lanes of water in their failed expedition to the North Pole, as did Louis Bernacchi and Douglas Mawson in Antarctica.
Sources
Mawson, Sir Douglas; The Home of the Blizzard
Reed, William; The Phantom of the Poles 1906
External links
Water Sky and Ice Blink
Water Sky
Atmospheric optical phenomena | Water sky | Physics | 238 |
63,772,144 | https://en.wikipedia.org/wiki/Pisuviricota | Pisuviricota is a phylum of RNA viruses that includes all positive-strand and double-stranded RNA viruses that infect eukaryotes and are not members of the phylum Kitrinoviricota, Lenarviricota or Duplornaviricota. The name of the group is a syllabic abbreviation of “picornavirus supergroup” with the suffix -viricota, indicating a virus phylum. Phylogenetic analyses suggest that Birnaviridae and Permutotetraviridae, both currently unassigned to a phylum in Orthornavirae, also belong to this phylum and that both are sister groups. Another proposed family of the phylum is unassigned Polymycoviridae in Riboviria.
Classes
The following classes are recognized:
Duplopiviricetes
Pisoniviricetes
Stelpaviricetes
References
Viruses | Pisuviricota | Biology | 194 |
1,434,057 | https://en.wikipedia.org/wiki/Phase%20vocoder | A phase vocoder is a type of vocoder-purposed algorithm which can interpolate information present in the frequency and time domains of audio signals by using phase information extracted from a frequency transform. The computer algorithm allows frequency-domain modifications to a digital sound file (typically time expansion/compression and pitch shifting).
At the heart of the phase vocoder is the short-time Fourier transform (STFT), typically coded using fast Fourier transforms. The STFT converts a time domain representation of sound into a time-frequency representation (the "analysis" phase), allowing modifications to the amplitudes or phases of specific frequency components of the sound, before resynthesis of the time-frequency domain representation into the time domain by the inverse STFT. The time evolution of the resynthesized sound can be changed by means of modifying the time position of the STFT frames prior to the resynthesis operation, allowing for time-scale modification of the original sound file.
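A minimal time-stretching phase vocoder along these lines can be sketched with NumPy (an illustrative sketch only: the frame size, hop and plain horizontal phase propagation are assumptions, and it omits the vertical-coherence refinements discussed below):

import numpy as np

def phase_vocoder_stretch(x, rate, n_fft=2048, hop=512):
    # Analysis: short-time Fourier transform with a Hann window.
    window = np.hanning(n_fft)
    frames = [np.fft.rfft(window * x[s:s + n_fft])
              for s in range(0, len(x) - n_fft, hop)]
    stft = np.array(frames)
    # Expected phase advance of each frequency bin over one hop.
    omega = 2 * np.pi * np.arange(stft.shape[1]) * hop / n_fft
    phase = np.angle(stft[0])
    steps = np.arange(0, stft.shape[0] - 1, rate)     # read the analysis frames at a new pace
    out = np.zeros(len(steps) * hop + n_fft)
    for i, t in enumerate(steps):
        k = int(t)
        # Measured phase increment, unwrapped around the bin frequency (horizontal coherence).
        dphi = np.angle(stft[k + 1]) - np.angle(stft[k]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        frame = np.fft.irfft(np.abs(stft[k]) * np.exp(1j * phase))
        out[i * hop:i * hop + n_fft] += window * frame  # overlap-add resynthesis
        phase += omega + dphi                           # accumulate phase at the synthesis hop
    return out

# rate < 1 stretches the sound, rate > 1 compresses it; e.g. phase_vocoder_stretch(x, 0.5) roughly doubles the duration.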
Phase coherence problem
The main problem that has to be solved for all cases of manipulation of the STFT is the fact that individual signal components (sinusoids, impulses) will be spread over multiple frames and multiple STFT frequency locations (bins). This is because the STFT analysis is done using overlapping analysis windows. The windowing results in spectral leakage such that the information of individual sinusoidal components is spread over adjacent STFT bins. To avoid border effects of tapering of the analysis windows, STFT analysis windows overlap in time. This time overlap results in the fact that adjacent STFT analyses are strongly correlated (a sinusoid present in analysis frame at time "t" will be present in the subsequent frames as well). The problem of signal transformation with the phase vocoder is related to the problem that all modifications that are done in the STFT representation need to preserve the appropriate correlation between adjacent frequency bins (vertical coherence) and time frames (horizontal coherence). Except in the case of extremely simple synthetic sounds, these appropriate correlations can be preserved only approximately, and since the invention of the phase vocoder research has been mainly concerned with finding algorithms that would preserve the vertical and horizontal coherence of the STFT representation after the modification. The phase coherence problem was investigated for quite a while before appropriate solutions emerged.
History
The phase vocoder was introduced in 1966 by Flanagan as an algorithm that would preserve horizontal coherence between the phases of bins that represent sinusoidal components. This original phase vocoder did not take into account the vertical coherence between adjacent frequency bins, and therefore, time stretching with this system produced sound signals that were missing clarity.
The optimal reconstruction of the sound signal from STFT after amplitude modifications has been proposed by Griffin and Lim in 1984. This algorithm does not consider the problem of producing a coherent STFT, but it does allow finding the sound signal that has an STFT that is as close as possible to the modified STFT even if the modified STFT is not coherent (does not represent any signal).
The problem of the vertical coherence remained a major issue for the quality of time scaling operations until 1999 when Laroche and Dolson proposed a means to preserve phase consistency across spectral bins. The proposition of Laroche and Dolson has to be seen as a turning point in phase vocoder history. It has been shown that by means of ensuring vertical phase consistency very high quality time scaling transformations can be obtained.
The algorithm proposed by Laroche did not allow preservation of vertical phase coherence for sound onsets (note onsets). A solution for this problem has been proposed by Roebel.
An example of software implementation of phase vocoder based signal transformation using means similar to those described here to achieve high quality signal transformation is Ircam's SuperVP.
Use in music
British composer Trevor Wishart used phase vocoder analyses and transformations of a human voice as the basis for his composition Vox 5 (part of his larger Vox Cycle). Transfigured Wind by American composer Roger Reynolds uses the phase vocoder to perform time-stretching of flute sounds. The music of JoAnn Kuchera-Morin makes some of the earliest and most extensive use of phase vocoder transformations, such as in Dreampaths (1989).
See also
Audio time stretching and pitch scaling
References
External links
The Phase Vocoder: A Tutorial - A good description of the phase vocoder
New Phase-Vocoder Techniques for Pitch-Shifting, Harmonizing and Other Exotic Effects
A new Approach to Transient Processing in the Phase Vocoder
Phase Vocoder - Phase vocoder description with figures and equations
Signal processing
Speech synthesis | Phase vocoder | Technology,Engineering | 972 |
3,482,748 | https://en.wikipedia.org/wiki/Scratch%20space | Scratch space is space on a hard disk drive that is dedicated to the storage of temporary user data, by analogy with "scratch paper." It is intentionally unreliable and has no backup. Scratch disks may occasionally be set to erase all data at regular intervals so that the disk space is left free for future use. The management of scratch disk space is typically dynamic, occurring when needed. Its advantage is that it is typically faster than, for example, network filesystems.
Scratch space is commonly used in scientific computing workstations, and in graphic design programs such as Adobe Photoshop. It is used when programs need to use more data than can be stored in system RAM. A common error in that program is "scratch disks full", which occurs when one has the scratch disks configured to be on the boot drive. Many computer users gradually fill up their primary hard drive with permanent data, slowly reducing the amount of space the scratch disk may take up.
Partitioning off a significant fraction of the boot hard drive and leaving that space empty will ensure a reliable scratch disk. Hard drive space, on a per-gigabyte basis, is far cheaper than RAM, though it performs far slower. Although dedicating a physical drive separate from the main operating system and software can improve performance, a scratch disk will not match RAM for speed.
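As a small illustration of how a program can be pointed at a scratch area instead of the boot drive, the sketch below writes temporary data to a directory taken from the TMPDIR environment variable; the fallback path is hypothetical and depends on the local setup.

```python
import os
import tempfile

# Use a dedicated scratch area if one is configured; "/scratch" is an
# illustrative fallback, not a standard location.
scratch_dir = os.environ.get("TMPDIR", "/scratch")

with tempfile.NamedTemporaryFile(dir=scratch_dir) as tmp:
    tmp.write(b"intermediate results that need no backup")
    tmp.flush()
    print("working file:", tmp.name)   # removed automatically on close
```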
See also
Scratch tape
Swap partition
Temporary folder
TMPDIR
References
Computing terminology
Computer files
Rotating disc computer storage media | Scratch space | Technology | 291 |
32,176,298 | https://en.wikipedia.org/wiki/Translation%20functor | In mathematical representation theory, a translation functor is a functor taking representations of a Lie algebra to representations with a possibly different central character. Translation functors were introduced independently by Zuckerman and Jantzen. Roughly speaking, the functor is given by taking a tensor product with a finite-dimensional representation, and then taking a subspace with some central character.
Definition
By the Harish-Chandra isomorphism, the characters of the center Z of the universal enveloping algebra of a complex reductive Lie algebra can be identified with the points of L⊗C/W, where L is the weight lattice and W is the Weyl group. If λ is a point of L⊗C/W then write χλ for the corresponding character of Z.
A representation of the Lie algebra is said to have central character χλ if every vector v is a generalized eigenvector of the center Z with eigenvalue χλ; in other words if z∈Z and v∈V then (z − χλ(z))^n v = 0 for some n.
The translation functor ψ takes representations V with central character χλ to representations with central character χμ. It is constructed in two steps:
First take the tensor product of V with an irreducible finite dimensional representation with extremal weight λ−μ (if one exists).
Then take the generalized eigenspace of this with eigenvalue χμ.
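Written compactly (the notation ψ, E and P below is chosen here for illustration and is not taken from the source), the two steps above amount to
ψ(V) = P_χμ(V ⊗ E_λ−μ),
where E_λ−μ denotes an irreducible finite-dimensional representation with extremal weight λ−μ (if one exists) and P_χμ is the projection onto the generalized eigenspace on which the center Z acts through the character χμ.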
References
Representation theory
Functors | Translation functor | Mathematics | 297 |
59,328,051 | https://en.wikipedia.org/wiki/Feeding%20behavior%20of%20spotted%20hyenas | The spotted hyena is the most carnivorous member of the Hyaenidae. Unlike its brown and striped cousins, the spotted hyena is primarily a predator rather than a scavenger. One of the earliest studies to demonstrate its hunting abilities was done by Hans Kruuk, a Dutch wildlife ecologist who showed, through a 7-year study of hyena populations in Ngorongoro and Serengeti National Park during the 1960s, that spotted hyenas hunt as much as lions, and later studies have shown this to be typical across Africa. However, spotted hyenas remain mislabeled as scavengers, often even by ecologists and wildlife documentary channels.
Prey
Blue wildebeest are the most commonly taken medium-sized ungulate prey item in both Ngorongoro and the Serengeti, with zebra and Thomson's gazelles coming close behind. Cape buffalo are rarely attacked due to differences in habitat preference, though adult bulls have been recorded to be taken on occasion. In Kruger National Park, blue wildebeest, cape buffalo, Burchell's zebra, greater kudu and impala are the spotted hyena's most important prey, while giraffe, impala, wildebeest and zebra are its major food sources in the nearby Timbavati area. Springbok and kudu are the main prey in Namibia's Etosha National Park, and springbok in the Namib. In the southern Kalahari, gemsbok, wildebeest and springbok are the principal prey. In Chobe, the spotted hyena's primary prey consists of migratory zebra and resident impala. In Kenya's Masai Mara, 80% of the spotted hyena's prey consists of topi and Thomson's gazelle, save for during the four-month period when zebra and wildebeest herds migrate to the area. Bushbuck, suni and buffalo are the dominant prey items in the Aberdare Mountains, while Grant's gazelle, gerenuk, sheep, goats and cattle are likely preyed upon in northern Kenya.
In West Africa, the spotted hyena is primarily a scavenger that will occasionally attack domestic stock and medium-sized antelopes in some areas. In Cameroon, it is common for spotted hyenas to feed on small antelopes like kob, but they may also scavenge on reedbuck, kongoni, buffalo, giraffe, African elephant, topi and roan antelope carcasses. Records indicate that spotted hyenas in Malawi feed on medium to large-sized ungulates such as waterbuck and impala. In Tanzania's Selous Game Reserve, spotted hyenas primarily prey on wildebeest, followed by buffalo, zebra, impala, giraffe, reedbuck and kongoni. In Uganda, it is thought that the species primarily preys on birds and reptiles, while in Zambia it is considered a scavenger.
Spotted hyenas have also been found to catch fish, tortoises, humans, black rhino, hippo calves, young African elephants, pangolins and pythons. There is at least one record of four hyenas killing an adult or subadult hippopotamus in Kruger National Park. Spotted hyenas may consume leather articles such as boots and belts around campsites. Jane Goodall recorded spotted hyenas attacking or savagely playing with the exterior and interior fittings of cars, and the species is thought to be responsible for eating car tyres.
The fossil record indicates that the now extinct European spotted hyenas primarily fed on Przewalski's horses, Irish elk, reindeer, red deer, roe deer, fallow deer, wild boar, ibex, steppe wisent, aurochs, and woolly rhinoceros. Spotted hyenas are thought to be responsible for the dis-articulation and destruction of some cave bear skeletons. Such large carcasses were an optimal food resource for hyenas, especially at the end of winter, when food was scarce.
Hunting behaviour
Unlike other large African carnivores, spotted hyenas do not preferentially prey on any species, and only African buffalo and giraffe are significantly avoided. Spotted hyenas prefer prey with a body mass range of , with a mode of . When hunting medium to large sized prey, spotted hyenas tend to select certain categories of animal; young animals are frequently targeted, as are old ones, though the latter category is not so significant when hunting zebras, due to their aggressive anti-predator behaviours. Small prey is killed by being shaken in the mouth, while large prey is eaten alive.
The spotted hyena tracks live prey by sight, hearing and smell. Carrion is detected by smell and the sound of other predators feeding. During daylight hours, they watch vultures descending upon carcasses. Their auditory perception is powerful enough to detect sounds of predators killing prey or feeding on carcasses over distances of up to . Unlike the grey wolf, the spotted hyena relies more on sight than smell when hunting, and does not follow its prey's prints or travel in single file.
Spotted hyenas usually hunt wildebeest either singly, or in groups of two or three. They catch adult wildebeest usually after chases at speeds of up to 60 km/h (37 mi/h). Chases are usually initiated by one hyena and, with the exception of cows with calves, there is little active defence from the wildebeest herd. Wildebeest will sometimes attempt to escape hyenas by taking to water although, in such cases, the hyenas almost invariably catch them.
Zebras require different hunting methods to those used for wildebeest, due to their habit of running in tight groups and aggressive defence from stallions. Typical zebra hunting groups consist of 10–25 hyenas, though there is one record of a hyena killing an adult zebra unaided. During a chase, zebras typically move in tight bunches, with the hyenas pursuing behind in a crescent formation. Chases are usually relatively slow, with an average speed of 15–30 km/h. A stallion will attempt to place himself between the hyenas and the herd, though once a zebra falls behind the protective formation it is immediately set upon, usually after a chase of . Though hyenas may harass the stallion, they usually only concentrate on the herd and attempt to dodge the stallion's assaults. Unlike stallions, mares typically only react aggressively to hyenas when their foals are threatened. Unlike wildebeest, zebras rarely take to water when escaping hyenas.
When hunting Thomson's gazelles, spotted hyenas usually operate alone, and prey primarily on young fawns. Chases against both adult and young gazelles can cover distances of with speeds of 60 km/h (37 mi/h). Female gazelles do not defend their fawns, though they may attempt to distract hyenas by feigning weakness.
Feeding habits
A single spotted hyena can eat at least 14.5 kg of meat per meal, and although they act aggressively toward each other when feeding, they compete with each other mostly through speed of eating, rather than by fighting as lions do. Spotted hyenas can take less than two minutes to eat a gazelle fawn, while a group of 35 hyenas can completely consume an adult zebra in 36 minutes. Spotted hyenas do not require much water, and typically only spend 30 seconds drinking.
When feeding on an intact carcass, spotted hyenas will first consume the meat around the loins and anal region, then open the abdominal cavity and pull out the soft organs. Once the stomach, its wall and contents are consumed, the hyenas will eat the lungs and abdominal and leg muscles. Once the muscles have been eaten, the carcass is disassembled and the hyenas carry off pieces to eat in peace. Spotted hyenas are adept at eating their prey in water: they have been observed to dive under floating carcasses to take bites, then resurface to swallow.
The spotted hyena is very efficient at eating its prey; not only is it able to splinter and eat the largest ungulate bones, it is also able to digest them completely. Spotted hyenas can digest all organic components in bones, not just the marrow. Any inorganic material is excreted with the faeces, which consist almost entirely of a white powder with few hairs. They react to alighting vultures more readily than other African carnivores, and are more likely to stay in the vicinity of lion kills or human settlements.
References
Bibliography
Hyenas
Behavioral ecology
Predation | Feeding behavior of spotted hyenas | Biology | 1,826 |
35,934,051 | https://en.wikipedia.org/wiki/PVC%20decking | PVC decking is composed entirely of polyvinyl chloride (PVC) and contains no wood. PVC decking is a more expensive option in the alternative decking industry, but it provides significant fade and stain resistance and lower maintenance requirements compared to other products, including real teak wood.
History
The alternative yacht deck covering was developed by the Dunlop company, known in the automotive industry, in the second half of the 20th century. Its popularity began to rise in the early 2000s due to the relatively high price of wooden decking and the maintenance-free nature of synthetic teak. Moreover, manufacturers have developed increasingly realistic alternatives over the decades, which could be installed more quickly and cheaply, even retrofitted on a yacht deck.
Production
The production of PVC decking was challenging in the past and resulted in a significant scrap rate, which could reach as high as 12%. Today, however, many manufacturers can achieve scrap rates below 2%. This improvement is partly due to enhanced manufacturing technologies and partly to the reuse of materials.
Producing PVC decking is a relatively difficult process called co-extrusion. During production, various stabilizers and colorants are added to the PVC granules, which are melted by the extruder and shaped into teak planks through a specialized mold. In the process, the thermoplastic material is formed at high temperatures and then solidified in a cooling medium. The deck board core is coated and bound with an outer plastic shell, but the materials can be temperamental and hard to work with. Commercial production is challenging, not only for this reason, but also because about one eighth of the deck boards produced are considered unsellable and therefore scrapped. The fragile nature of this production process requires careful ingredient mixing and precise execution.
The extruded strips need to be manually processed, glued together, caulked, and then adhered to the deck. The high labor requirement makes the process relatively expensive. Due to the desire for realism, it can only be automated to a limited extent. However, this method provides the installer with great flexibility, as these coverings have properties that make them highly shapeable, formable, and bendable.
With the colors of the caulking, a great variety can be achieved. Some manufacturers offer caulking colors that can be selected from the RAL color scale.
Advantages
PVC decking offers the most significant fade, stain, and mold resistance among decking products. The products are marketed as low-maintenance and are typically easy to clean and maintain. PVC decking typically doesn't require staining, sanding, or painting. It is sometimes partially composed of recycled plastic material, making it an environmentally friendly, efficient use of resources. The product is significantly lighter compared to wood composite products and can be bent and shaped to create custom configurations.
Additional benefits:
recyclable material
does not absorb red wine and other liquids
easy to clean
durable, retains its original beauty for 8-10 years
Disadvantages
Compared to other synthetic decking products, PVC is the most expensive. The 100% PVC makeup of the board makes it costlier than the wood powder/plastic mix used in composite decking. This cost means that PVC will be a more expensive investment up front, although manufacturers claim that the long life and low maintenance requirements of the deck make it an economical decision in the long run. PVC lacks the realistic feel of wood. Although manufacturers form the product with a realistic wood grain or brushstroke, some contractors and homeowners simply do not like the artificial sheen of the product. PVC is also formulated to resist scratches, stains, and mold, but some wear will still show over the life of the product.
Additional disadvantages:
In the heat of summer, the products of certain manufacturers get very hot
Installation requires expertise
Heavier than teak wood
See also
Composite lumber
Wood-plastic composite
References
Plastics applications
Floors | PVC decking | Engineering | 802 |
25,025,571 | https://en.wikipedia.org/wiki/List%20of%20Adobe%20Flash%20software | The following is a list of notable software for creating, modifying and deploying Adobe Flash and Adobe Shockwave format.
Playback
Adobe Flash Player
Adobe Flash Lite
Adobe AIR
Gameswf
Gnash
Lightspark
Ruffle
Shumway
Scaleform GFx
Swfdec
Authoring
"Authoring" in computing, is the act of creating a document, especially a multimedia document, hypertext or hypermedia.
Adobe Flash Professional
Adobe Flash Builder
Adobe Flash Catalyst
Adobe Flash Media Live Encoder
Ajax Animator
FlashDevelop
Haxe
Powerflasher FDT
MTASC
OpenFL
OpenLaszlo
Print2Flash
Qflash
SWFTools
swfmill
SWiSH Max
Stencyl
Compilers
Adobe Flash Professional
Apache Flex
CrossBridge
Google Swiffy
SWFTools
swfmill
Debuggers and profilers
Adobe Scout
FlashFirebug
Libraries
Ming
SWFAddress
SWFObject
SWFFit
Papervision3D
Stage3D
Away3D
Flare3D
Starling
Server software
Adobe Flash Media Server
See also
Flash for Linux
References
External links
ScriptSWF at SourceForge
OSFlash
Adobe Flash
Adobe Flash software | List of Adobe Flash software | Technology | 231 |
2,571,276 | https://en.wikipedia.org/wiki/Computational%20genomics | Computational genomics refers to the use of computational and statistical analysis to decipher biology from genome sequences and related data, including both DNA and RNA sequence as well as other "post-genomic" data (i.e., experimental data obtained with technologies that require the genome sequence, such as genomic DNA microarrays). Because these data are analyzed in combination with computational and statistical approaches to understanding the function of the genes and with statistical association analysis, this field is also often referred to as Computational and Statistical Genetics/genomics. As such, computational genomics may be regarded as a subset of bioinformatics and computational biology, but with a focus on using whole genomes (rather than individual genes) to understand the principles of how the DNA of a species controls its biology at the molecular level and beyond. With the current abundance of massive biological datasets, computational studies have become one of the most important means to biological discovery.
History
The roots of computational genomics are shared with those of bioinformatics. During the 1960s, Margaret Dayhoff and others at the National Biomedical Research Foundation assembled databases of homologous protein sequences for evolutionary study. Their research developed a phylogenetic tree that determined the evolutionary changes that were required for a particular protein to change into another protein based on the underlying amino acid sequences. This led them to create a scoring matrix that assessed the likelihood of one protein being related to another.
Beginning in the 1980s, databases of genome sequences began to be recorded, but this presented new challenges in the form of searching and comparing the databases of gene information. Unlike text-searching algorithms that are used on websites such as Google or Wikipedia, searching for sections of genetic similarity requires one to find strings that are not simply identical, but similar. This led to the development of the Needleman-Wunsch algorithm, which is a dynamic programming algorithm for comparing sets of amino acid sequences with each other by using scoring matrices derived from the earlier research by Dayhoff. Later, the BLAST algorithm was developed for performing fast, optimized searches of gene sequence databases. BLAST and its derivatives are probably the most widely used algorithms for this purpose.
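The core of the Needleman-Wunsch algorithm is a simple dynamic-programming recurrence. The sketch below uses toy scores (match +1, mismatch −1, gap −1) rather than a Dayhoff-style substitution matrix, so it illustrates the recurrence only, not a production aligner.

```python
import numpy as np

def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of strings a and b by dynamic programming."""
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)      # align a prefix of a to nothing
    F[0, :] = gap * np.arange(m + 1)      # align a prefix of b to nothing
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,    # substitution / match
                          F[i - 1, j] + gap,      # gap in b
                          F[i, j - 1] + gap)      # gap in a
    return F[n, m]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))   # 0.0 with these toy scores
```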
The emergence of the phrase "computational genomics" coincides with the availability of complete sequenced genomes in the mid-to-late 1990s. The first meeting of the Annual Conference on Computational Genomics was organized by scientists from The Institute for Genomic Research (TIGR) in 1998, providing a forum for this speciality and effectively distinguishing this area of science from the more general fields of Genomics or Computational Biology. The first use of this term in scientific literature, according to MEDLINE abstracts, was just one year earlier in Nucleic Acids Research. The final Computational Genomics conference was held in 2006, featuring a keynote talk by Nobel Laureate Barry Marshall, co-discoverer of the link between Helicobacter pylori and stomach ulcers. As of 2014, the leading conferences in the field include Intelligent Systems for Molecular Biology (ISMB) and Research in Computational Molecular Biology (RECOMB).
The development of computer-assisted mathematics (using products such as Mathematica or Matlab) has helped engineers, mathematicians and computer scientists to start operating in this domain, and a public collection of case studies and demonstrations is growing, ranging from whole genome comparisons to gene expression analysis. This has increased the introduction of different ideas, including concepts from systems and control, information theory, string analysis and data mining. It is anticipated that computational approaches will become and remain a standard topic for research and teaching, while students fluent in both topics are increasingly trained in the many courses created in the past few years.
Contributions of computational genomics research to biology
Contributions of computational genomics research to biology include:
proposing cellular signalling networks
proposing mechanisms of genome evolution
predicting the precise locations of all human genes using comparative genomics techniques with several mammalian and vertebrate species
predicting conserved genomic regions that are related to early embryonic development
discovering potential links between repeated sequence motifs and tissue-specific gene expression
measuring regions of genomes that have undergone unusually rapid evolution
Genome comparison
Computational tools have been developed to assess the similarity of genomic sequences. Some of them are alignment-based distances such as Average Nucleotide Identity. These methods are highly specific, while being computationally slow.
Other, alignment-free methods include statistical and probabilistic approaches. One example is Mash, a probabilistic approach using MinHash. In this method, given a number k, a genomic sequence is transformed into a shorter sketch through a random hash function on the possible k-mers. For example, with k = 2, sketches of size 4 are constructed: the sketch of a sequence is the set of the four smallest hash values of its 2-mers under the chosen hash function, for example {0, 1, 1, 2}. These sketches are then compared to estimate the fraction of shared k-mers (Jaccard index) of the corresponding sequences.
It is worth noting that the hash values are binary numbers. In a real genomic setting, a useful k-mer size ranges from 14 to 21, and the size of the sketches would be around 1000.
By reducing the size of the sequences, even hundreds of times, and comparing them in an alignment-free way, this method reduces significantly the time of estimation of the similarity of sequences.
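The following is a minimal sketch of the bottom-sketch MinHash idea described above, assuming a generic hash function; Mash itself uses MurmurHash and further converts the estimated Jaccard index into a "Mash distance". The k-mer size and sketch size are kept tiny purely for illustration.

```python
import hashlib

def kmer_hashes(seq, k):
    """Hash every k-mer of seq to an integer (generic hash; illustrative only)."""
    return {int(hashlib.sha1(seq[i:i + k].encode()).hexdigest(), 16)
            for i in range(len(seq) - k + 1)}

def bottom_sketch(seq, k=2, size=4):
    """Keep the `size` smallest k-mer hash values as the sketch of seq."""
    return set(sorted(kmer_hashes(seq, k))[:size])

def jaccard_estimate(s1, s2, size=4):
    """Estimate the Jaccard index of two k-mer sets from their sketches."""
    merged = set(sorted(s1 | s2)[:size])          # bottom `size` of the union
    return len(merged & s1 & s2) / len(merged)

a = bottom_sketch("ACGTACGTTT")
b = bottom_sketch("ACGTACGAAA")
print(jaccard_estimate(a, b))
```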
Clusterization of genomic data
Clustering data is a tool used to simplify statistical analysis of a genomic sample. For example, one set of authors developed a tool (BiG-SCAPE) to analyze sequence similarity networks of biosynthetic gene clusters (BGCs). In later work, successive layers of clusterization of biosynthetic gene clusters are used in the automated tool BiG-MAP, both to filter redundant data and to identify gene cluster families. This tool profiles the abundance and expression levels of BGCs in microbiome samples.
Biosynthetic gene clusters
Bioinformatic tools have been developed to predict, and to determine the abundance and expression of, this kind of gene cluster in microbiome samples from metagenomic data. Since the size of metagenomic data is considerable, filtering and clusterization are important parts of these tools. These processes can consist of dimensionality-reduction techniques, such as MinHash, and clusterization algorithms such as k-medoids and affinity propagation. Several metrics and similarity measures have also been developed to compare gene clusters.
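As an illustration of the clustering step, the sketch below groups items with affinity propagation applied to a precomputed similarity matrix, as one might obtain from shared k-mer or protein-domain content between BGCs; the matrix values are invented for illustration and are not real data.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical pairwise similarities between four gene clusters.
S = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.8],
              [0.1, 0.2, 0.8, 1.0]])

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print(ap.labels_)   # e.g. [0 0 1 1]: two putative gene cluster families
```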
Genome mining for biosynthetic gene clusters (BGCs) has become an integral part of natural product discovery. The >200,000 microbial genomes now publicly available hold information on abundant novel chemistry. One way to navigate this vast genomic diversity is through comparative analysis of homologous BGCs, which allows identification of cross-species patterns that can be matched to the presence of metabolites or biological activities. However, current tools are hindered by a bottleneck caused by the expensive network-based approach used to group these BGCs into gene cluster families (GCFs).
BiG-SLiCE (Biosynthetic Genes Super-Linear Clustering Engine) is a tool designed to cluster massive numbers of BGCs. By representing them in Euclidean space, BiG-SLiCE can group BGCs into GCFs in a non-pairwise, near-linear fashion.
Satria et al. (2021) used BiG-SLiCE to demonstrate the utility of such analyses by reconstructing a global map of secondary metabolic diversity across taxonomy to identify uncharted biosynthetic potential. This opens up new possibilities to accelerate natural product discovery and offers a first step towards constructing a global and searchable interconnected network of BGCs. As more genomes are sequenced from understudied taxa, more information can be mined to highlight their potentially novel chemistry.
Compression algorithms
See also
Bioinformatics
Computational biology
Earth BioGenome Project
Genomics
Microarray
BLAST
Computational epigenetics
Nvidia Parabricks - suite of free software for genome analysis developed by Nvidia
References
External links
Harvard Extension School Biophysics 101, Genomics and Computational Biology, http://www.courses.fas.harvard.edu/~bphys101/info/syllabus.html
University of Bristol course in Computational Genomics, http://www.computational-genomics.net/
Bioinformatics
Genomics
Computational fields of study | Computational genomics | Technology,Engineering,Biology | 1,706 |
46,266,831 | https://en.wikipedia.org/wiki/Acid%20grassland | Acid grassland is a nutrient-poor habitat characterised by grassy tussocks and bare ground.
Habitat
The vegetation is dominated by grasses and herbaceous plants, growing on soils deficient in lime (calcium). These may be found on acid sedimentary rock such as sandstone; acid igneous rock such as granite; and fluvial or glacial deposits such as sand and gravel. Typical plants of lowland acid grassland in Britain include common bent grass, Agrostis capillaris, wavy hair-grass, Deschampsia flexuosa, bristle bent grass, Agrostis curtisii, tormentil, Potentilla erecta, and flowers such as sheep's sorrel, Rumex acetosella and heath bedstraw, Galium saxatile.
In Britain
In Britain, under 30,000 hectares of lowland acid grassland remain, often on common land and nature reserves. It is considered a nationally important habitat; areas are found in London on freely-draining sandy and gravelly soils. 271 Sites of Special Scientific Interest have been notified with acid grassland as a principal reason for the designation. Greater London's Richmond Park, Epping Forest and Wimbledon Common are all Special Areas of Conservation with considerable areas of acid grassland.
References
Ecosystems
Grasslands | Acid grassland | Biology | 250 |
31,349,351 | https://en.wikipedia.org/wiki/Quantum%20spin%20liquid | In condensed matter physics, a quantum spin liquid is a phase of matter that can be formed by interacting quantum spins in certain magnetic materials. Quantum spin liquids (QSL) are generally characterized by their long-range quantum entanglement, fractionalized excitations, and absence of ordinary magnetic order.
The quantum spin liquid state was first proposed by physicist Phil Anderson in 1973 as the ground state for a system of spins on a triangular lattice that interact antiferromagnetically with their nearest neighbors, i.e. neighboring spins seek to be aligned in opposite directions. Quantum spin liquids generated further interest when in 1987 Anderson proposed a theory that described high-temperature superconductivity in terms of a disordered spin-liquid state.
Basic properties
The simplest kind of magnetic phase is a paramagnet, where each individual spin behaves independently of the rest, just like atoms in an ideal gas. This highly disordered phase is the generic state of magnets at high temperatures, where thermal fluctuations dominate. Upon cooling, the spins will often enter a ferromagnet (or antiferromagnet) phase. In this phase, interactions between the spins cause them to align into large-scale patterns, such as domains, stripes, or checkerboards. These long-range patterns are referred to as "magnetic order," and are analogous to the regular crystal structure formed by many solids.
Quantum spin liquids offer a dramatic alternative to this typical behavior. One intuitive description of this state is as a "liquid" of disordered spins, in comparison to a ferromagnetic spin state, much in the way liquid water is in a disordered state compared to crystalline ice. However, unlike other disordered states, a quantum spin liquid state preserves its disorder to very low temperatures. A more modern characterization of quantum spin liquids involves their topological order, long-range quantum entanglement properties, and anyon excitations.
Examples
Several physical models have a disordered ground state that can be described as a quantum spin liquid.
Frustrated magnetic moments
Localized spins are frustrated if there exist competing exchange interactions that cannot all be satisfied at the same time, leading to a large degeneracy of the system's ground state. A triangle of Ising spins (meaning that the only possible orientations of the spins are either "up" or "down"), which interact antiferromagnetically, is a simple example of frustration. In the ground state, two of the spins can be antiparallel but the third one cannot. This leads to an increased number of possible spin configurations in the ground state (six in this case), enhancing fluctuations and thus suppressing magnetic ordering.
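The six-fold degeneracy mentioned above can be checked by brute force. The sketch below enumerates all 2^3 configurations of an antiferromagnetic Ising triangle with energy J(s1·s2 + s2·s3 + s3·s1), J > 0; the coupling value is illustrative.

```python
from itertools import product

J = 1  # antiferromagnetic coupling (J > 0 penalises aligned neighbours)
energy = lambda s: J * (s[0] * s[1] + s[1] * s[2] + s[2] * s[0])

states = list(product((+1, -1), repeat=3))
e_min = min(energy(s) for s in states)
ground = [s for s in states if energy(s) == e_min]
print(len(ground))   # 6: in each ground state one bond is necessarily frustrated
```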
A recent study applied this concept to the analysis of brain networks and, surprisingly, indicated frustrated interactions in the brain corresponding to flexible neural interactions. This observation highlights the generality of the frustration phenomenon and suggests its investigation in biological systems.
Resonating valence bonds (RVB)
To build a ground state without magnetic moment, valence bond states can be used, where two electron spins form a spin 0 singlet due to the antiferromagnetic interaction. If every spin in the system is bound like this, the state of the system as a whole has spin 0 too and is non-magnetic. The two spins forming the bond are maximally entangled, while not being entangled with the other spins. If all spins are distributed to certain localized static bonds, this is called a valence bond solid (VBS).
There are two things that still distinguish a VBS from a spin liquid: First, by ordering the bonds in a certain way, the lattice symmetry is usually broken, which is not the case for a spin liquid. Second, this ground state lacks long-range entanglement. To achieve this, quantum mechanical fluctuations of the valence bonds must be allowed, leading to a ground state consisting of a superposition of many different partitionings of spins into valence bonds. If the partitionings are equally distributed (with the same quantum amplitude), there is no preference for any specific partitioning ("valence bond liquid"). This kind of ground state wavefunction was proposed by P. W. Anderson in 1973 as the ground state of spin liquids and is called a resonating valence bond (RVB) state. These states are of great theoretical interest as they are proposed to play a key role in high-temperature superconductor physics.
Excitations
The valence bonds do not have to be formed by nearest neighbors only and their distributions may vary in different materials. Ground states with large contributions of long range valence bonds have more low-energy spin excitations, as those valence bonds are easier to break up. On breaking, they form two free spins. Other excitations rearrange the valence bonds, leading to low-energy excitations even for short-range bonds. Something very special about spin liquids is that they support exotic excitations, meaning excitations with fractional quantum numbers. A prominent example is the excitation of spinons, which are neutral in charge and carry spin 1/2. In spin liquids, a spinon is created if one spin is not paired in a valence bond. It can move by rearranging nearby valence bonds at low energy cost.
Realizations of (stable) RVB states
The first discussion of the RVB state on the square lattice using the RVB picture considered only nearest-neighbour bonds that connect different sub-lattices. The constructed RVB state is an equal-amplitude superposition of all the nearest-neighbour bond configurations. Such an RVB state is believed to contain an emergent gapless gauge field which may confine the spinons, etc. So the equal-amplitude nearest-neighbour RVB state on the square lattice is unstable and does not correspond to a quantum spin phase. It may describe a critical phase transition point between two stable phases. A version of the RVB state which is stable and contains deconfined spinons is the chiral spin state. Later, another version of a stable RVB state with deconfined spinons, the Z2 spin liquid, was proposed, which realizes the simplest topological order – Z2 topological order. Both the chiral spin state and the Z2 spin liquid state have long RVB bonds that connect the same sub-lattice. In the chiral spin state, different bond configurations can have complex amplitudes, while in the Z2 spin liquid state, different bond configurations only have real amplitudes. The RVB state on the triangular lattice also realizes the Z2 spin liquid, where different bond configurations only have real amplitudes. The toric code model is yet another realization of the Z2 spin liquid (and Z2 topological order) that explicitly breaks the spin rotation symmetry and is exactly solvable.
Experimental signatures and probes
Since there is no single experimental feature which identifies a material as a spin liquid, several experiments have to be conducted to gain information on different properties which characterize a spin liquid.
Magnetic susceptibility
In a high-temperature, classical paramagnet phase, the magnetic susceptibility is given by the Curie–Weiss law
χ(T) = C / (T − Θ_CW).
Fitting experimental data to this equation determines a phenomenological Curie–Weiss temperature, Θ_CW. There is a second temperature, T_c, where magnetic order in the material begins to develop, as evidenced by a non-analytic feature in χ(T). The ratio of these is called the frustration parameter
f = |Θ_CW| / T_c.
In a classic antiferromagnet, the two temperatures should coincide and give f = 1. An ideal quantum spin liquid would not develop magnetic order at any temperature (T_c = 0) and so would have a diverging frustration parameter f → ∞. A large value of f is therefore a good indication of a possible spin liquid phase. Some frustrated materials with different lattice structures and their Curie–Weiss temperatures are listed in the table below. All of them are proposed spin liquid candidates.
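In practice Θ_CW is obtained by fitting high-temperature susceptibility data to the Curie–Weiss form. The sketch below does this with synthetic, invented numbers purely to show the fit and the resulting frustration parameter; the ordering temperature is likewise hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta):
    return C / (T - theta)                 # chi(T) = C / (T - Theta_CW)

T = np.linspace(150, 300, 40)              # high-temperature range (K)
chi = curie_weiss(T, 0.45, -120.0)         # synthetic data, Theta_CW = -120 K

(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(1.0, -50.0))
T_order = 1.2                              # hypothetical ordering temperature (K)
f = abs(theta_fit) / T_order               # frustration parameter
print(f"Theta_CW = {theta_fit:.0f} K, f = {f:.0f}")   # large f hints at frustration
```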
Other
Some of the most direct evidence for the absence of magnetic ordering is given by NMR or μSR experiments. If a local magnetic field is present, the nuclear or muon spin is affected, which can be measured. 1H-NMR measurements on κ-(BEDT-TTF)2Cu2(CN)3 have shown no sign of magnetic ordering down to 32 mK, which is four orders of magnitude smaller than the coupling constant J≈250 K between neighboring spins in this compound. Further investigations include:
Specific heat measurements give information about the low-energy density of states, which can be compared to theoretical models.
Thermal transport measurements can determine if excitations are localized or itinerant.
Neutron scattering gives information about the nature of excitations and correlations (e.g. spinons).
Reflectance measurements can uncover spinons, which couple via emergent gauge fields to the electromagnetic field, giving rise to a power-law optical conductivity.
Candidate materials
RVB type
Neutron scattering measurements of cesium chlorocuprate Cs2CuCl4, a spin-1/2 antiferromagnet on a triangular lattice, displayed diffuse scattering. This was attributed to spinons arising from a 2D RVB state. Later theoretical work challenged this picture, arguing that all experimental results were instead consequences of 1D spinons confined to individual chains.
Afterwards, a spin-liquid state was observed in an organic Mott insulator (κ-(BEDT-TTF)2Cu2(CN)3) by Kanoda's group in 2003. It may correspond to a gapless spin liquid with a spinon Fermi surface (the so-called uniform RVB state). The peculiar phase diagram of this organic quantum spin liquid compound was first thoroughly mapped using muon spin spectroscopy.
Herbertsmithite
Herbertsmithite is one of the most extensively studied QSL candidate materials. It is a mineral with chemical composition ZnCu3(OH)6Cl2 and a rhombohedral crystal structure. Notably, the copper ions within this structure form stacked two-dimensional layers of kagome lattices. Additionally, superexchange over the oxygen bonds creates a strong antiferromagnetic interaction between the copper spins within a single layer, whereas coupling between layers is negligible. Therefore, it is a good realization of the antiferromagnetic spin-1/2 Heisenberg model on the kagome lattice, which is a prototypical theoretical example of a quantum spin liquid.
Synthetic, polycrystalline herbertsmithite powder was first reported in 2005, and initial magnetic susceptibility studies showed no signs of magnetic order down to 2K. In a subsequent study, the absence of magnetic order was verified down to 50 mK, inelastic neutron scattering measurements revealed a broad spectrum of low energy spin excitations, and low-temperature specific heat measurements had power law scaling. This gave compelling evidence for a spin liquid state with gapless spinon excitations. A broad array of additional experiments, including 17O NMR, and neutron spectroscopy of the dynamic magnetic structure factor, reinforced the identification of herbertsmithite as a gapless spin liquid material, although the exact characterization remained unclear as of 2010.
Large (millimeter size) single crystals of herbertsmithite were grown and characterized in 2011. These enabled more precise measurements of possible spin liquid properties. In particular, momentum-resolved inelastic neutron scattering experiments showed a broad continuum of excitations. This was interpreted as evidence for gapless, fractionalized spinons. Follow-up experiments (using 17O NMR and high-resolution, low-energy neutron scattering) refined this picture and determined there was actually a small spinon excitation gap of 0.07–0.09 meV.
Some measurements were suggestive of quantum critical behavior. Magnetic response of this material displays scaling relation in both the bulk ac susceptibility and the low energy dynamic susceptibility, with the low temperature heat capacity strongly depending on magnetic field. This scaling is seen in certain quantum antiferromagnets, heavy-fermion metals, and two-dimensional 3He as a signature of proximity to a quantum critical point.
In 2020, monodisperse single-crystal nanoparticles of herbertsmithite (~10 nm) were synthesized at room temperature, using gas-diffusion electrocrystallization, showing that their spin liquid nature persists at such small dimensions.
It may realize a U(1)-Dirac spin liquid.
Kitaev spin liquids
Another evidence of quantum spin liquid was observed in a 2-dimensional material in August 2015. The researchers of Oak Ridge National Laboratory, collaborating with physicists from the University of Cambridge, and the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, measured the first signatures of these fractional particles, known as Majorana fermions, in a two-dimensional material with a structure similar to graphene. Their experimental results successfully matched with one of the main theoretical models for a quantum spin liquid, known as a Kitaev honeycomb model.
Strongly correlated quantum spin liquid
The strongly correlated quantum spin liquid (SCQSL) is a specific realization of a possible quantum spin liquid (QSL) representing a new type of strongly correlated electrical insulator (SCI) that possesses the properties of heavy fermion metals with one exception: it resists the flow of electric charge. At low temperatures T, the specific heat of this type of insulator is proportional to T^n, with n less than or equal to 1, rather than n = 3 as in a conventional insulator, whose heat capacity is proportional to T^3. When a magnetic field B is applied to an SCI, the specific heat depends strongly on B, contrary to conventional insulators. There are a few candidates for SCI; the most promising among them is herbertsmithite, a mineral with chemical structure ZnCu3(OH)6Cl2.
Kagome type
Ca10Cr7O28 is a frustrated kagome bilayer magnet, which does not develop long-range order even below 1 K, and has a diffuse spectrum of gapless excitations.
Toric code type
In December 2021, the first direct measurement of a quantum spin liquid of the toric code type was reported; it was achieved by two teams: one exploring the ground state and anyonic excitations on a quantum processor, and the other implementing a theoretical blueprint of atoms on a ruby lattice held with optical tweezers on a quantum simulator.
Specific properties: topological fermion condensation quantum phase transition
The experimental facts collected on heavy fermion (HF) metals and two-dimensional helium-3 demonstrate that the quasiparticle effective mass M* is very large, or even diverges. The topological fermion condensation quantum phase transition (FCQPT) preserves quasiparticles and forms a flat energy band at the Fermi level. The emergence of the FCQPT is directly related to the unlimited growth of the effective mass M*. Near the FCQPT, M* starts to depend on temperature T, number density x, magnetic field B and other external parameters such as pressure P. In contrast to the Landau paradigm, which is based on the assumption that the effective mass is approximately constant, in the FCQPT theory the effective mass of the new quasiparticles strongly depends on T, x, B, etc. Therefore, to agree with and explain the numerous experimental facts, an extended quasiparticle paradigm based on the FCQPT has to be introduced. The main point here is that well-defined quasiparticles determine the thermodynamic, relaxation, scaling and transport properties of strongly correlated Fermi systems, and M* becomes a function of T, x, B, P, etc.
The data collected for very different strongly correlated Fermi systems demonstrate universal scaling behavior; in other words distinct materials with strongly correlated fermions unexpectedly turn out to be uniform, thus forming a new state of matter that consists of HF metals, quasicrystals, quantum spin liquid, two-dimensional Helium-3, and compounds exhibiting high-temperature superconductivity.
Applications
Materials supporting quantum spin liquid states may have applications in data storage and memory. In particular, it is possible to realize topological quantum computation by means of spin-liquid states. Developments in quantum spin liquids may also help in the understanding of high temperature superconductivity.
References
Correlated electrons
Liquids
Phases of matter
Condensed matter physics
Quasiparticles | Quantum spin liquid | Physics,Chemistry,Materials_science,Engineering | 3,331 |
31,518,594 | https://en.wikipedia.org/wiki/Matrixx%20Initiatives%2C%20Inc.%20v.%20Siracusano | Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27 (2011), is a decision by the Supreme Court of the United States regarding whether a plaintiff can state a claim for securities fraud under §10(b) of the Securities Exchange Act of 1934, as amended, 15 U.S.C. §78j(b), and Securities and Exchange Commission Rule 10b-5, 17 CFR §240.10b-5 (2010), based on a pharmaceutical company's failure to disclose reports of adverse events associated with a product if the reports do not find statistically significant evidence that the adverse effects may be caused by the use of the product. In a 9–0 opinion delivered by Justice Sonia Sotomayor, the Court affirmed the Court of Appeals for the Ninth Circuit's ruling that the respondents, plaintiffs in a securities fraud class action against Matrixx Initiatives, Inc., and three Matrixx executives, had stated a claim under §10(b) and Rule 10b-5.
Parties
Petitioners: Matrixx Initiatives, Inc., Carl Johnson, William Hemelt, and Timothy Clarot (collectively "Matrixx")
Respondents: James Siracusano and NECA-IBEW Pension Fund, on behalf of themselves and all others similarly situated who purchased Matrixx securities between October 22, 2003, and February 6, 2004
Background
Petitioner Matrixx Initiatives, Inc., is a pharmaceutical company that sells cold remedy products through its wholly owned subsidiary Zicam, LLC. One of Zicam's main products is Zicam Cold Remedy (Zicam), which is produced in the form of a nasal spray or gel containing the active ingredient zinc gluconate. On April 27, 2004, respondents brought a class action suit against petitioners, alleging that petitioners violated §10(b) of the Securities Exchange Act and SEC Rule 10b-5 by failing to disclose reports that Zicam could cause anosmia, or loss of the sense of smell. Petitioners filed a motion to dismiss respondents' complaint for failure to state a claim. The District Court for the District of Arizona granted the motion without prejudice, reasoning that the alleged user complaints were neither material nor statistically significant, and that respondents had failed to allege scienter. Respondents appealed to the Court of Appeals for the Ninth Circuit, which issued a decision on October 28, 2009, to reverse and remand the judgment of the District Court. On March 23, 2010, petitioners filed their petition for a writ of certiorari to the Ninth Circuit with the United States Supreme Court.
Decision
On March 22, 2011, Justice Sotomayor delivered the 9–0 opinion that held "[r]espondents have stated a claim under §10(b) and Rule 10b-5", affirming 585 F.3d 1167.
Reactions to the decision
An article by Carl Bialik appearing in The Wall Street Journal on April 2, 2011, reported:
In [the] opinion, the justices said companies can't only rely on statistical significance when deciding what they need to disclose to investors.
Amen, say several statisticians who have long argued that the concept of statistical significance has unjustly overtaken other barometers used to determine which experimental results are valid and warrant public distribution. "Statistical significance doesn't tell you everything about the truth of the hypothesis you're exploring," says Steven Goodman, an epidemiologist and biostatistician at the Johns Hopkins Bloomberg School of Public Health.
Erik Olson, a partner at the Morrison & Foerster law firm in San Francisco which filed an amicus brief on behalf of BayBio, said that the court's ruling risks leaving companies without a clear guideline for deciding when they need to disclose adverse events. Olson, Stephen Thau, and Stefan Szpajda wrote a press release stating:
Life sciences companies and other public companies can learn at least two lessons from the decision. First and foremost, be careful what you say. As the Court emphasized, the securities laws focus on false or misleading speech. "[C]ompanies can control what they have to disclose under these provisions by controlling what they say to the market." (Slip Op. at 16). Rash or categorical comments are far more likely to form the basis for a lawsuit than measured, careful statements about the facts.
Second, life sciences companies should consult carefully with lawyers regarding specific disclosures and policies and practices for disclosing adverse events.
See also
List of United States Supreme Court cases, volume 563
References
External links
Opinion Below
United States Supreme Court cases
2011 in United States case law
International Brotherhood of Electrical Workers
United States securities case law
Drug safety
United States Supreme Court cases of the Roberts Court | Matrixx Initiatives, Inc. v. Siracusano | Chemistry | 989 |
3,665,897 | https://en.wikipedia.org/wiki/Lateralization%20of%20brain%20function | The lateralization of brain function (or hemispheric dominance/ lateralization) is the tendency for some neural functions or cognitive processes to be specialized to one side of the brain or the other. The median longitudinal fissure separates the human brain into two distinct cerebral hemispheres, connected by the corpus callosum. Although the macrostructure of the two hemispheres appears to be almost identical, different composition of neuronal networks allows for specialized function that is different in each hemisphere.
Lateralization of brain structures is based on general trends expressed in healthy patients; however, there are numerous counterexamples to each generalization. Each human's brain develops differently, leading to unique lateralization in individuals. This is different from specialization, as lateralization refers only to the function of one structure divided between two hemispheres. Specialization is much easier to observe as a trend, since it has a stronger anthropological history.
The best example of an established lateralization is that of Broca's and Wernicke's areas, where both are often found exclusively on the left hemisphere. Function lateralization, such as semantics, intonation, accentuation, and prosody, has since been called into question and largely been found to have a neuronal basis in both hemispheres. Another example is that each hemisphere in the brain tends to represent one side of the body. In the cerebellum, this is the same body side, but in the forebrain this is predominantly the contralateral side.
Lateralized functions
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals. While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers. This is particularly important when it comes to writing, a form of language that involves hand use. Studies attempting to isolate the linguistic component of written language in terms of brain lateralization could not provide enough evidence of a difference in the relative activation of the brain hemispheres between left-handed and right-handed adults.
Broca's area and Wernicke's area, associated with the production of speech and comprehension of speech, respectively, are located in the left cerebral hemisphere for about 95% of right-handers but about 70% of left-handers. Social interactions, demonstrating fierce emotions, and mathematical information are all provided by the right hemisphere.
Sensory processing
The processing of basic sensory information is lateralized by being divided into left and right sides of the body or the space around the body.
In vision, about half the neurons of the optic nerve from each eye cross to project to the opposite hemisphere, and about half do not cross to project to the hemisphere on the same side. This organizes visual information so that the left side of the visual field is processed largely by the visual cortex of the right hemisphere and vice versa for the right side of the visual field.
In hearing, about 90% of the neurons of the auditory nerve from one ear cross to project to the auditory cortex of the opposite hemisphere.
In the sense of touch, most of the neurons from the skin cross to project to the somatosensory cortex of the opposite hemisphere.
Because of this functional division of the left and right sides of the body and of the space that surrounds it, the processing of information in the sensory cortices is essentially identical. That is, the processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally. Numerical estimation, comparison and online calculation depend on bilateral parietal regions while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.
Value systems
Rather than just being a series of places where different brain modules occur, there are running similarities in the kind of function seen in each side, for instance how right-side impairment of drawing ability making patients draw the parts of the subject matter with wholly incoherent relationships, or where the kind of left-side damage seen in language impairment not damaging the patient's ability to catch the significance of intonation in speech. This has led British psychiatrist Iain McGilchrist to view the two hemispheres as having different value systems, where the left hemisphere tends to reduce complex matters such as ethics to rules and measures, and the right hemisphere is disposed to the holistic and metaphorical.
Clinical significance
Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes". Additionally, "left hemisphere lesions result in an omissive response bias or error pattern whereas right hemisphere lesions result in a commissive response bias or error pattern." The delusional misidentification syndromes, reduplicative paramnesia and Capgras delusion are also often the result of right hemisphere lesions.
Hemisphere damage
Damage to either the right or left hemisphere, and its resulting deficits provide insight into the function of the damaged area. There is truth to the idea that some brain functions reside more on one side of the brain than the other. We know this in part from what is lost when a stroke affects a particular part of the brain. Left hemisphere damage has many effects on language production and perception. Damage or lesions to the right hemisphere can result in a lack of emotional prosody or intonation when speaking. The left hemisphere is often involved with dealing of detail-oriented perception while the right hemisphere deals mostly with wholeness or an overall concept of things.
Right hemisphere damage also has grave effects on understanding discourse. People with damage to the right hemisphere have a reduced ability to generate inferences, comprehend and produce main concepts, and a reduced ability to manage alternative meanings. Furthermore, people with right hemisphere damage often exhibit discourse that is abrupt and perfunctory or verbose and excessive. They can also have pragmatic deficits in situations of turn taking, topic maintenance and shared knowledge. Although the two hemispheres have different responsibilities and tasks, they complement each other and together create a bigger picture.
Lateral brain damage can also affect visual perceptual spatial resolution. People with left hemisphere damage may have impaired perception of high resolution, or detailed, aspects of an image. People with right hemisphere damage may have impaired perception of low resolution, or big picture, aspects of an image.
Plasticity
If a specific region of the brain, or even an entire hemisphere, is injured or destroyed, its functions can sometimes be assumed by a neighboring region in the same hemisphere or the corresponding region in the other hemisphere, depending upon the area damaged and the patient's age. When injury interferes with pathways from one area to another, alternative (indirect) connections may develop to communicate information with detached areas, despite the inefficiencies.
Broca's aphasia
Broca's aphasia is a specific type of expressive aphasia and is so named due to the aphasia that results from damage or lesions to the Broca's area of the brain, that exists most commonly in the left inferior frontal hemisphere. Thus, the aphasia that develops from the lack of functioning of the Broca's area is an expressive and non-fluent aphasia. It is called 'non-fluent' due to the issues that arise because Broca's area is critical for language pronunciation and production. The area controls some motor aspects of speech production and articulation of thoughts to words and as such lesions to the area result in specific non-fluent aphasia.
Wernicke's aphasia
Wernicke's aphasia is the result of damage to the area of the brain that is commonly in the left hemisphere above the Sylvian fissure. Damage to this area causes primarily a deficit in language comprehension. While the ability to speak fluently with normal melodic intonation is spared, the language produced by a person with Wernicke's aphasia is riddled with semantic errors and may sound nonsensical to the listener. Wernicke's aphasia is characterized by phonemic paraphasias, neologism or jargon. Another characteristic of a person with Wernicke's aphasia is that they are unconcerned by the mistakes that they are making.
Society and culture
Possible misapplication
The concept of "right-brained" or "left-brained" individuals is considered a widespread myth which oversimplifies the true nature of the brain's cerebral hemispheres (for a recent counter position, though, see below). Evidence against the left-/right-brained concept continues to accumulate as more studies are brought to light. Harvard Health Publishing cites a 2013 University of Utah study in which brain scans revealed similar activity on both sides of the brain, irrespective of personality and environmental factors. Although certain functions show a degree of lateralization in the brain—with language predominantly processed in the left hemisphere, and spatial and nonverbal reasoning in the right—these functions are not exclusively tied to one hemisphere.
Terence Hines states that the research on brain lateralization is valid as a research program, though commercial promoters have applied it to promote subjects and products far outside the implications of the research. For example, the implications of the research have no bearing on psychological interventions such as eye movement desensitization and reprocessing (EMDR) and neurolinguistic programming, brain-training equipment, or management training.
Popular psychology
Some popularizations oversimplify the science of lateralization by presenting the functional differences between hemispheres as more absolute than is actually the case. Some research has even suggested a picture partly opposite to the popular one, with the right hemisphere linking concepts in a loose, associative way and the left hemisphere tending to adhere to specifics such as date and time, although the general pattern of the left hemisphere handling linguistic interpretation and the right handling spatio-temporal processing still holds.
Sex differences
In the 19th century and to a lesser extent the 20th, it was thought that each side of the brain was associated with a specific gender: the left corresponding with masculinity and the right with femininity, and that each half could function independently. The right side of the brain was seen as inferior and thought to be prominent in women, savages, children, criminals, and the insane. A prime example of this in fictional literature can be seen in Robert Louis Stevenson's Strange Case of Dr. Jekyll and Mr. Hyde.
History
Broca
One of the first indications of brain function lateralization resulted from the research of French physician Pierre Paul Broca, in 1861. His research involved the male patient nicknamed "Tan", who had a speech deficit (aphasia); "tan" was one of the few words he could articulate, hence his nickname. In Tan's autopsy, Broca determined he had a syphilitic lesion in the left cerebral hemisphere. This left frontal lobe brain area (Broca's area) is an important speech production region. The motor aspects of speech production deficits caused by damage to Broca's area are known as expressive aphasia. In clinical assessment of this type of aphasia, patients have difficulty producing speech.
Wernicke
German physician Karl Wernicke continued in the vein of Broca's research by studying language deficits other than expressive aphasia. Wernicke noted that not every deficit involved speech production; some were deficits of comprehension. He found that damage to the left posterior, superior temporal gyrus (Wernicke's area) caused language comprehension deficits rather than speech production deficits, a syndrome known as receptive aphasia.
Imaging
These seminal works on hemispheric specialization were done on patients or postmortem brains, raising questions about the potential impact of pathology on the research findings. New methods permit the in vivo comparison of the hemispheres in healthy subjects. Particularly, magnetic resonance imaging (MRI) and positron emission tomography (PET) are important because of their high spatial resolution and ability to image subcortical brain structures.
Movement and sensation
In the 1940s, neurosurgeon Wilder Penfield and his neurologist colleague Herbert Jasper developed a technique of brain mapping to help reduce side effects caused by surgery to treat epilepsy. They stimulated motor and somatosensory cortices of the brain with small electrical currents to activate discrete brain regions. They found that stimulation of one hemisphere's motor cortex produces muscle contraction on the opposite side of the body. Furthermore, the functional map of the motor and sensory cortices is fairly consistent from person to person; Penfield and Jasper's famous pictures of the motor and sensory homunculi were the result.
Split-brain patients
Research by Michael Gazzaniga and Roger Wolcott Sperry in the 1960s on split-brain patients led to an even greater understanding of functional laterality. Split-brain patients are patients who have undergone corpus callosotomy (usually as a treatment for severe epilepsy), a severing of a large part of the corpus callosum. The corpus callosum connects the two hemispheres of the brain and allows them to communicate. When these connections are cut, the two halves of the brain have a reduced capacity to communicate with each other. This led to many interesting behavioral phenomena that allowed Gazzaniga and Sperry to study the contributions of each hemisphere to various cognitive and perceptual processes. One of their main findings was that the right hemisphere was capable of rudimentary language processing but often had no lexical or grammatical abilities. Eran Zaidel also studied such patients and found some evidence for the right hemisphere having at least some syntactic ability.
Language is primarily localized in the left hemisphere. While the left hemisphere has proven to be more optimized for language, the right hemisphere contributes emotional aspects of communication, such as sarcasm and the prosody that conveys emotion in speech. According to Sheppard and Hillis, "The right hemisphere is critical for perceiving sarcasm (Davis et al., 2016), integrating context required for understanding metaphor, inference, and humour, as well as recognizing and expressing affective or emotional prosody—changes in pitch, rhythm, rate, and loudness that convey emotions". In one of Gazzaniga's experiments, a split-brain male patient sat in front of a computer screen while words and images were presented on either side of the screen; the visual stimuli went to either the right or left visual field, and thus to the left or right hemisphere, respectively. When the patient was presented with an image in his left visual field (right hemisphere), he reported not seeing anything, yet when allowed to feel around for certain objects he could accurately pick out the correct object, despite not having the ability to verbalize what he saw.
Additional images
See also
Functional specialization (brain)
Alien hand syndrome
Ambidexterity
Bicameral mentality
Brain asymmetry
Chirality
Contralateral brain
Cross-dominance
Divided consciousness
Dual consciousness
Emotional lateralization
Handedness
Hemispherectomy
Laterality
Left brain interpreter
The Master and His Emissary
Parallel computing
Psychoneuroimmunology
Right hemisphere brain damage
Of Two Minds (book)
Wada test
Yakovlevian torque
References
External links
Left Brain, Right Brain? Wrong
Bibliography
Further resources
Neuropsychology
Cerebrum
Brain
Brain asymmetry | Lateralization of brain function | Physics | 3,247 |
55,642,234 | https://en.wikipedia.org/wiki/W%20Orionis | W Orionis is a carbon star in the constellation Orion, approximately away. It varies regularly in brightness between extremes of magnitude 4.4 and 6.9 roughly every 7 months. When it is near its maximum brightness, it is faintly visible to the naked eye of an observer with good observing conditions.
Variability
Evelyn Leland discovered that the star is a variable star based on observations done in the last decades of the 19th century, when it was known as BD +00°939. The discovery was announced in 1895. It was listed with its variable star designation, W Orionis, in Annie Jump Cannon's 1907 work Second Catalog of Variable Stars.
W Orionis is a semiregular variable with an approximately 212‑day cycle. A long secondary period of 2,450 days has also been reported.
Properties
The angular diameter of W Orionis has been measured using interferometry, yielding a value of 9.7 mas. Although it is known to be a pulsating variable star, no changes in the diameter were seen.
Technetium has not been detected in W Orionis, an unexpected result since this s-process element should be dredged up in all thermally-pulsating AGB stars and especially in carbon stars.
References
External links
W Orionis Kaler's Stars
Astronomy Picture of the Day
Orion (constellation)
Orionis, W
032736
023680
Durchmusterung objects
Semiregular variable stars
Carbon stars | W Orionis | Astronomy | 301 |
4,408,335 | https://en.wikipedia.org/wiki/Optical%20lens%20design | Optical lens design is the process of designing a lens to meet a set of performance requirements and constraints, including cost and manufacturing limitations. Parameters include surface profile types (spherical, aspheric, holographic, diffractive, etc.), as well as radius of curvature, distance to the next surface, material type and optionally tilt and decenter. The process is computationally intensive, using ray tracing or other techniques to model how the lens affects light that passes through it.
Design requirements
Performance requirements can include:
Optical performance (image quality): This is quantified by various metrics, including encircled energy, modulation transfer function, Strehl ratio, ghost reflection control, and pupil performance (size, location and aberration control); the choice of the image quality metric is application specific.
Physical requirements such as weight, static volume, dynamic volume, center of gravity and overall configuration requirements.
Environmental requirements: ranges for temperature, pressure, vibration and electromagnetic shielding.
Design constraints can include realistic lens element center and edge thicknesses, minimum and maximum air-spaces between lenses, maximum constraints on entrance and exit angles, physically realizable glass index of refraction and dispersion properties.
Manufacturing costs and delivery schedules are also a major part of optical design. The price of an optical glass blank of given dimensions can vary by a factor of fifty or more, depending on the size, glass type, index homogeneity quality, and availability, with BK7 usually being the cheapest. Costs for larger and/or thicker optical blanks of a given material, above 100–150 mm, usually increase faster than the physical volume due to increased blank annealing time required to achieve acceptable index homogeneity and internal stress birefringence levels throughout the blank volume. Availability of glass blanks is driven by how frequently a particular glass type is made by a given manufacturer, and can seriously affect manufacturing cost and schedule.
Process
Lenses can first be designed using paraxial theory to position images and pupils, then real surfaces inserted and optimized. Paraxial theory can be skipped in simpler cases and the lens directly optimized using real surfaces. Lenses are first designed using average index of refraction and dispersion (see Abbe number) properties published in the glass manufacturer's catalog and through glass model calculations. However, the properties of the real glass blanks will vary from this ideal; index of refraction values can vary by as much as 0.0003 or more from catalog values, and dispersion can vary slightly. These changes in index and dispersion can sometimes be enough to affect the lens focus location and imaging performance in highly corrected systems.
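As a rough illustration of why such small index deviations matter, the following sketch uses the thin-lens lensmaker's equation to estimate the focus shift produced by an index error of 0.0003; the radii and the BK7-like catalog index are assumed values chosen only for illustration, not data from any real design.

```python
# Sketch: focal-length sensitivity of a thin biconvex singlet to a small
# refractive-index deviation from the catalog value (assumed values throughout).

def thin_lens_focal_length(n, r1_mm, r2_mm):
    """Focal length (mm) of a thin lens in air: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    power = (n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm)
    return 1.0 / power

n_catalog = 1.5168            # nominal BK7-like index (assumption)
n_melt = n_catalog + 0.0003   # per-melt deviation of the order quoted above

f_catalog = thin_lens_focal_length(n_catalog, r1_mm=51.68, r2_mm=-51.68)
f_melt = thin_lens_focal_length(n_melt, r1_mm=51.68, r2_mm=-51.68)

print(f"catalog focal length: {f_catalog:.3f} mm")
print(f"melt focal length:    {f_melt:.3f} mm")
print(f"focus shift:          {f_melt - f_catalog:+.4f} mm")
```

For this roughly 50 mm singlet the shift is only a few hundredths of a millimetre, negligible for a simple camera lens but potentially significant in the highly corrected systems mentioned above.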
The lens blank manufacturing process is as follows:
The glass batch ingredients for a desired glass type are mixed in a powder state,
the powder mixture is melted in a furnace,
the fluid is further mixed while molten to maximize batch homogeneity,
poured into lens blanks and
annealed according to empirically determined time-temperature schedules.
The glass blank pedigree, or "melt data", can be determined for a given glass batch by making small precision prisms from various locations in the batch and measuring their index of refraction on a spectrometer, typically at five or more wavelengths. Lens design programs have curve fitting routines that can fit the melt data to a selected dispersion curve, from which the index of refraction at any wavelength within the fitted wavelength range can be calculated. A re-optimization, or "melt re-comp", can then be performed on the lens design using measured index of refraction data where available. When manufactured, the resulting lens performance will more closely match the desired requirements than if average glass catalog values for index of refraction were assumed.
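A minimal sketch of such a melt re-computation step is shown below: it fits index measurements at a few wavelengths to a simple two-term Cauchy model, from which the index can be interpolated anywhere in the fitted range. Production lens design programs use catalog dispersion formulas such as Sellmeier instead, and the measurement values here are only illustrative numbers resembling a BK7-like glass, not real melt data.

```python
# Sketch: fitting "melt data" (measured index at a few wavelengths) to a
# dispersion model so the index can be evaluated at any wavelength in range.
import numpy as np

# wavelengths in micrometres and indices from the prism measurements (illustrative)
wavelengths = np.array([0.4358, 0.4861, 0.5461, 0.5876, 0.6563])
n_measured = np.array([1.52668, 1.52238, 1.51872, 1.51680, 1.51432])

# Two-term Cauchy model n = A + B / lambda^2 is linear in 1/lambda^2,
# so an ordinary least-squares line fit is enough.
x = 1.0 / wavelengths**2
B, A = np.polyfit(x, n_measured, 1)

def n_of_lambda(lam_um):
    """Interpolated index of refraction at wavelength lam_um (micrometres)."""
    return A + B / lam_um**2

print(f"A = {A:.6f}, B = {B:.6e} um^2")
print(f"n at 0.50 um ~ {n_of_lambda(0.50):.5f}")
```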
Delivery schedules are impacted by glass and mirror blank availability and lead times to acquire, the amount of tooling a shop must fabricate prior to starting on a project, the manufacturing tolerances on the parts (tighter tolerances mean longer fab times), the complexity of any optical coatings that must be applied to the finished parts, further complexities in mounting or bonding lens elements into cells and in the overall lens system assembly, and any post-assembly alignment and quality control testing and tooling required. Tooling costs and delivery schedules can be reduced by using existing tooling at any given shop wherever possible, and by maximizing manufacturing tolerances to the extent possible.
Lens optimization
A simple two-element air-spaced lens has nine variables (four radii of curvature, two thicknesses, one airspace thickness, and two glass types). A multi-configuration lens corrected over a wide spectral band and field of view over a range of focal lengths and over a realistic temperature range can have a complex design volume having over one hundred dimensions.
Lens optimization techniques that can navigate this multi-dimensional space and proceed to local minima have been studied since the 1940s, beginning with early work by James G. Baker, and later by Feder, Wynne, Glatzel, Grey and others. Prior to the development of digital computers, lens optimization was a hand-calculation task using trigonometric and logarithmic tables to plot 2-D cuts through the multi-dimensional space. Computerized ray tracing allows the performance of a lens to be modelled quickly, so that the design space can be searched rapidly. This allows design concepts to be rapidly refined. Popular optical design software includes Zemax's OpticStudio, Synopsys's Code V, and Lambda Research's OSLO. In most cases the designer must first choose a viable design for the optical system, and then numerical modelling is used to refine it. The designer ensures that designs optimized by the computer meet all requirements, and makes adjustments or restarts the process when they do not.
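The toy example below gives a flavor of this process: a paraxial (y–u) ray trace through a single thick lens is wrapped in a numerical least-squares optimizer that adjusts the two surface curvatures to hit target first-order properties. The glass index, thickness and targets are illustrative assumptions; real design codes trace exact rays, use far more variables, and optimize image-quality merit functions rather than first-order targets.

```python
# Sketch: a minimal "design optimization" loop, a paraxial ray trace plus a
# numerical optimizer adjusting surface curvatures (all values are assumptions).
from scipy.optimize import least_squares

n_glass, thickness = 1.5168, 5.0   # assumed BK7-like index, 5 mm centre thickness

def paraxial_trace(c1, c2):
    """Trace a ray from infinity (y=1, u=0); return (EFL, BFD) in mm."""
    y, u, n = 1.0, 0.0, 1.0
    for c, n_next, t_after in ((c1, n_glass, thickness), (c2, 1.0, 0.0)):
        u = (n * u - y * c * (n_next - n)) / n_next   # paraxial refraction
        n = n_next
        y = y + t_after * u                           # transfer to next surface
    efl = -1.0 / u      # incident ray height was 1.0
    bfd = -y / u        # distance from last surface to the paraxial focus
    return efl, bfd

def residuals(curvatures, efl_target=100.0, bfd_target=96.0):
    efl, bfd = paraxial_trace(*curvatures)
    return [efl - efl_target, bfd - bfd_target]

solution = least_squares(residuals, x0=[0.01, -0.01])
c1, c2 = solution.x
print("radii of curvature (mm):", 1.0 / c1, 1.0 / c2)
print("achieved (EFL, BFD):", paraxial_trace(c1, c2))
```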
See also
Optical engineering
Fabrication and testing (optical components)
Ray transfer matrix analysis
Photographic lens design
Surface imperfections (optics)
Stray light
References
Notes
Bibliography
Smith, Warren J., Modern Lens Design, McGraw-Hill, Inc., 1992.
Kingslake, Rudolph, Lens Design Fundamentals, Academic Press, 1978.
Shannon, Robert R., The Art and Science of Optical Design, Cambridge University Press, 1997.
External links
The GNU Optical design and simulation library
Geometrical optics
Glass chemistry
Glass engineering and science
Lenses
Physical optics | Optical lens design | Chemistry,Materials_science,Engineering | 1,287 |
71,731,023 | https://en.wikipedia.org/wiki/F%C3%A9lix%20I | Félix I (officially "F-360-BD") was a Brazilian Army Technical School (today's Military Institute of Engineering) project led by Lieutenant Colonel Manoel dos Santos Lage which aimed, in 1959, to launch the Flamengo cat into space. But the project was canceled due to pressure from animal advocacy groups, and the launch never took place.
History
Origins
The project, also known as "Operation Meow", with limited financial resources, was part of the graduation class of 1958 of the Army Technical School that aimed to create a sounding rocket, something unheard of in Brazil at the time. The official name was "Rocket Sonda 360-BD", unrelated to the later .
The rocket had an outer diameter of 400 mm, a length of 4.3 meters, and a total mass of 350 kg with the payload; it used only a single stage and was propelled by gunpowder, reaching a maximum speed of 1,950 m/s. The ultimate goal of Lieutenant-Colonel Manoel dos Santos Lage, head of the Rocket Program and leader of the project, though not shared by the institution, was to develop a satellite launch vehicle. The project also had the collaboration of scientists Carlos Chagas Filho and César Lattes. Carlos Chagas Filho was responsible for the idea of choosing a cat, because he was interested in observing how these animals reacted under laboratory conditions. Most of the material used to build the rocket was obtained from the War Arsenal.
The project, which aimed to test a guided missile costing Cr$600,000, was nicknamed "Felix I" by the Rio de Janeiro press after they discovered their intention to launch a cat, Flamengo, into space. Originally they planned for the rocket to reach the 300 km mark, but this was abandoned due to difficulties in the calculations. The final decision was that the class of 1958 would develop a rocket that reached an apogee of 120 km and the class of 1950 would work on one that reached 300 km, with the ultimate goal of developing a Thor-type rocket that would reach orbits greater than 500 km by June 1960.
Initially the rocket was to be launched in 1957, but it was delayed twice and by December 1958 they hoped to launch in early January 1959.
Flight plan
The rocket would be launched from a base in Cabo Frio. Its accelerometer would be connected to a transmitter at a frequency of 73 Mc/s. César Lattes was responsible for building three transmitters and the instruments aimed at cosmic ray detection; Lieutenant-Colonel Carlos Alberto Braga Coelho built the electronics of the rocket; Carlos Chagas Filho (IBCCF) developed the instruments for monitoring the cat's health; and astronomer Mário Ferreira Dias, from the Valongo Observatory, developed the calculations related to the flight.
The combustion chamber was built by the Army War Arsenal in company with the students of the Armaments Course, with the carbon steel plate produced by the Companhia Siderúrgica Nacional. The rocket was painted silver with red stripes in a spiral, to help the visibility of the rocket in flight, as the process would be monitored by the National Observatory.
The rocket thrust was predicted to be 1,920 kgf with 6G of acceleration, 19.3s of combustion, and a final velocity of 1,960 m/s. The propellant, developed by the Army Technical School, was called "BD 1000C Gunpowder". The rocket would carry a 180-kilogram payload of gunpowder to reach the ionosphere.
The payload fairing, with a final mass of 30 kg, would contain an acrylic chamber for the cat, as well as the other instruments for the mission. The chamber, with the return speed estimated at 1,800 m/s, would initially be rescued by two air braking devices, and would be followed by a 68 kg parachute developed by the Army Air Ground Division Core, open at an altitude of 5,000 meters, all in an automatic way. The cat would have four hours of oxygen and would be placed face up on a nylon mattress. The flight would last 40 minutes, falling into the sea 30 kilometers from the launch pad, off Angra dos Reis, and would be rescued by the Brazilian Navy. Rescuing the cat alive was considered the greatest challenge of the project. The rocket stages would be rescued by two parachutes. Finally, the flight date would be analyzed by César Lattes.
If the mission was successful, the future rockets would be made available to the National Nuclear Energy Council and the for scientific research.
Flamengo
Flamengo, popularly known as "Meow", the tomcat of Lieutenant-Colonel Lage's daughters, was one of the twelve candidates for the flight. He was the leading candidate and would only be cleared for the flight if he was in good health on the day of launch; his presence on the flight had already been confirmed in December 1958. But in October 1958, the Diário do Paraná announced that Carlos Chagas Filho would replace the animal with an amoeba, arguing that a microscopic animal would be of greater scientific use in the study of cosmic rays. Despite this, Colonel Lage kept the cat in the project and when asked in 1959 about the reason for launching the cat, he replied: "... the recovery of this cat, alive, will be an extraordinary achievement". On 19 December 1958 the cat posed for the media inside the Technical School. If the launch had taken place, it would have been Latin America's first living being in space.
Controversy
Carlos Chagas Filho, when the experiment began to gain visibility in the media, disavowed any continued interest in sending a cat on the mission and any prospect of scientific learning from it, and also noted that the acrylic capsule would face difficulties with drastic temperature changes.
In addition to the disagreement with Carlos Chagas Filho, the project team received protests from the "North American Feline Society," something that the project manager disregarded, believing in the safety of the vehicle. The also opposed the use of the cat.
Members of the Faculty of Veterinary Medicine and other experts were also skeptical of Flamengo's chances of survival, and Leo Rosen, vice president of SUIPA, also reiterated the group's position against the experiment. SUIPA also sent an appeal and a petition, signed by, among others, Rachel de Queiroz and Carlos Drummond de Andrade, to the Commander of the Army Technical School and to the Minister of War, General Teixeira Lott, against launching the cat in the rocket. SUIPA accepted animal experiments only when strictly necessary, and was skeptical of the need for the cat experiment. The Brazilian government received thousands of letters protesting against the experiment, but the Army ignored them. And despite all the protests, including from Europe, the project leader continued with his plans.
In November 1958 it was announced that the launch would be held in secret to "avoid sensationalism" and in the same month Colonel João Luís Vieira Maldonado, director of the Meteorology Service, said that the rocket would only carry sounding devices, and no longer the cat. However, in January 1959, Colonel Lage still hoped to make the launch with the cat and in February of the same year they planned to launch in March. However, in May 1959, the launch had not yet occurred and freshmen from the National Engineering School held a parade where, among other things, they criticized and satirized the project. In December 1958 the Army announced that it would test a prototype of the rocket before the official launch.
End of project
In January 1959 the rocket was on display at the Armament Museum of the Army Technical School. By 1961 it was already clear that the launch had not taken place. It was the last rocket project that Colonel Lage participated in and was terminated without flying. Finally, on 18 October 1963, the cat Félicette made a suborbital flight as part of the French space program, returned alive, and was sacrificed after two months for an autopsy and study of her brain. Colonel Lage was transferred from the Army Technical School in 1960 and all the equipment related to the rocket was disassembled. Manoel Lage, already a General, born on 4 June 1910, died on 5 August 1977. The Army Technical School was abolished in favor of the Military Institute of Engineering.
Because of the project, at that time Brazil was considered one of the three countries with space technology, alongside the United States and the Soviet Union. In terms of satellite launch capabilities, years later Brazil developed the unsuccessful VLS project, terminated in 2016. The country is currently working on the VLM project.
See also
Félicette
Animals in space
Notes
References
Bibliography
(Chronological order)
1958 in Brazil
1958 in spaceflight
Animals in space
Astronomical controversies
Individual cats
Cancelled space missions
Brazilian Army
Space program of Brazil
Sounding rockets of Brazil
Suborbital spaceflight | Félix I | Chemistry,Astronomy,Biology | 1,814 |
12,699,391 | https://en.wikipedia.org/wiki/Insight%20Seminars | Insight Seminars is an international non-profit organization headquartered in Santa Monica, California. The first seminar was led in 1978 by founders John-Roger and Russell Bishop under the name Insight Training Seminars. Insight has held seminars in 34 countries for adults, teens, and children, in addition to Business Insight corporate trainings. Seminars are primarily presented in English and Spanish, and have also been translated into Russian, Portuguese, Bulgarian, and Greek.
Arianna Huffington has spoken about her experience with Insight Seminars and how it transformed her life.
Insight Seminar Series
The Insight Seminar series includes five core seminars, taken in sequence, and a variety of graduate and public events. In each seminar, facilitators lead groups of 40-200+ participants through group exercises, partner discussions, lectures, and guided visualization processes.
Insight I: The Awakening Heart Seminar is the introductory 4-day seminar. Topics include personal responsibility, choice, the power of commitment, and intention.
Insight II: The Opening Heart Seminar is a 5-day seminar that provides personal attention to each participant. The focus is personal expansion, taking risks and liberation from self-imposed limitations.
Insight III: The Centering in The Heart Seminar is held over 5 days in retreat, focusing on developing the capacity to observe and respond to life's challenges with balance and compassion.
Insight IV: Knowing the Purpose of your Heart Seminar is a 28-day professional seminar designed to develop self-facilitation and personal presentation style while identifying and manifesting goals.
Insight Masters Class is a 3-month program with a curriculum focus of Living Loving.
Insight also offers the Wisdom of the Heart program with Peace Theological Seminary & College of Philosophy, with education focusing on bringing the body, heart, mind, and soul into harmony. The Wisdom of the Heart program is open to all and has no prerequisites.
Guidelines
Participants are introduced and asked to comply with the following concepts, for the duration of the seminar.
Take care of yourself so you can help take care of others.
Don't hurt yourself and don't hurt others.
Use everything for your upliftment, growth and learning.
References
External links
Insight Seminars Worldwide
Movement of Spiritual Inner Awareness
Group processes
Human Potential Movement
Large-group awareness training
New Age organizations
Personal development
Self religions | Insight Seminars | Biology | 458 |
39,198,217 | https://en.wikipedia.org/wiki/Timir%20Datta | Timir Datta is an Indian-American physicist specializing in high transition temperature superconductors and a professor of physics in the department of Physics and Astronomy at the University of South Carolina, in Columbia, South Carolina.
Early life and education
Datta grew up in India along with his elder brother Jyotirmoy Datta, a noted journalist. His father, B. N. Dutt, a scion of two land-owning families from Khulna and Jessore in south-central Bengal (British India), was an eminent sugar-refining engineer, and on his mother's side he is a relative of Michael Madhusudan Dutt, the famed poet. Datta is of Bengali origin. He received a master's degree in theoretical plasma physics from Boston College in 1974 under the direction of Gabor Kalman. Datta also worked at the Jet Propulsion Laboratory (JPL) in Pasadena, California, as a pre-doctoral NASA research associate of Robert Somoano. He also collaborated with Carl H. Brans at Loyola University New Orleans on a gravitational problem of frame dragging and worked with John Perdew on the behavior of charge density waves in jellium.
Work and research history
Datta was an NSF post-doctoral fellow with Marvin Silver and studied charge propagation in non-crystalline systems at the University of North Carolina in Chapel Hill. At UNC-CH he continued his theoretical interests and worked on the retarded van der Waals potential with L. H. Ford. Since 1982, he has been on the faculty of the University of South Carolina in Columbia.
He collaborated with several laboratories involved with the early discoveries of high temperature superconductivity, especially the team at NRL led by Donald U. Gubser and Stuart Wolf. This research group at USC was also the first to observe (i) the bulk Meissner effect in Tl-copper oxides, thus confirming the discovery by Allen Hermann's team at the University of Arkansas of high temperature superconductivity in these compounds. He coined the term "triple digit superconductivity", and his group was the first to observe (ii) the fractional quantum Hall effect in 3-dimensional carbon.
In a paper with Raphael Tsu he derived the first quantum mechanical wave impedance formula for Schrödinger wave functions. He was also the first to show that Bragg's law of X-ray scattering from crystals is a direct consequence of Euclidean length invariance of the incident wave vector; in fact Max von Laue's three diffraction equations are not independent but related by length conservation.
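A sketch of the standard argument (the source does not reproduce Datta's own derivation) runs as follows: elastic scattering leaves the length of the incident wave vector unchanged, and combining this with the Laue condition recovers Bragg's law.

$$
\mathbf{k}' = \mathbf{k} + \mathbf{G}, \qquad |\mathbf{k}'| = |\mathbf{k}|
\;\Longrightarrow\; 2\,\mathbf{k}\cdot\mathbf{G} + |\mathbf{G}|^{2} = 0 .
$$

Taking $|\mathbf{k}| = 2\pi/\lambda$, $|\mathbf{G}| = 2\pi n/d$ and $\mathbf{k}\cdot\mathbf{G} = -|\mathbf{k}|\,|\mathbf{G}|\sin\theta$, with $\theta$ measured from the lattice planes, gives $2d\sin\theta = n\lambda$, which is Bragg's law.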
Datta is an active researcher, with over 100 papers listed in the SAO/NASA Astrophysics Data System (ADS) as of 2014.
Patents
Datta was issued one US patent in 1995: "Flux-trapped superconducting magnets and method of manufacture", with two co-inventors.
Anti-gravity work
Datta was involved in the university-funded development of a "Gravity Generator" in 1996 and 1997, with then-fellow university researcher Douglas Torr. According to a leaked document from the Office of Technology Transfer at the University of South Carolina and confirmed to Wired reporter Charles Platt in 1998, the device would create a "force beam" in any desired direction and the university planned to patent and license this device. Neither information about this university research project nor any "Gravity Generator" device was ever made public.
Despite the apparent less than successful outcome of the "Gravity Generator" development effort with Torr, Datta became interested in the effects of electric fields on gravitation, expanding on Torr's theoretical work on the subject.
Selected publications
See also
Eugene Podkletnov
Ning Li (physicist)
References
External links
Department of Physics and Astronomy at the University of South Carolina
Timir Datta's page at the University of South Carolina
University of South Carolina faculty
Morrissey College of Arts & Sciences alumni
American people of Indian descent
20th-century American physicists
Superconductivity
Anti-gravity
Year of birth missing (living people)
Living people | Timir Datta | Physics,Materials_science,Astronomy,Engineering | 822 |
13,776,049 | https://en.wikipedia.org/wiki/Donnan%20potential | Donnan potential is the difference in the Galvani potentials which appears as a result of Donnan equilibrium, named after Frederick G. Donnan, which refers to the distribution of ion species between two ionic solutions separated by a semipermeable membrane or boundary. The boundary layer maintains an unequal distribution of ionic solute concentration by acting as a selective barrier to ionic diffusion. Some species of ions may pass through the barrier while others may not. The solutions may be gels or colloids as well as ionic liquids, and as such the phase boundary between gels or a gel and a liquid can also act as a selective barrier. The Electric potential that arises between two solutions is called Donnan potential.
Donnan equilibrium is prominent in the triphasic model for articular cartilage proposed by Mow and Ratcliffe, as well as in electrochemical fuel cells and dialysis.
The Donnan effect is extra osmotic pressure attributable to cations (Na+ and K+) attached to dissolved plasma proteins.
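A minimal numerical sketch of the resulting potential follows from equating the electrochemical potential of a permeant ion on the two sides of the boundary; it assumes ideal (activity-free) behavior, a monovalent ion, and illustrative concentrations.

```python
# Sketch: Donnan potential of the inside phase relative to the outside phase,
# E = (R*T / (z*F)) * ln(c_out / c_in), for one permeant ionic species.
# Concentrations, temperature and the example ion are illustrative assumptions.
import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 298.15      # assumed temperature, K

def donnan_potential(c_out, c_in, z):
    """Potential of the inside phase relative to the outside phase, in volts."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

# e.g. chloride partially excluded by a negatively charged gel:
# 150 mM outside, 100 mM inside  ->  inside is negative relative to outside
print(f"{donnan_potential(0.150, 0.100, z=-1) * 1000:.1f} mV")
```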
See also
Chemical equilibrium
Nernst equation
Double Layer (biospecific)
References
Van C. Mow and Anthony Ratcliffe Basic Orthopedic Biomechanics, 2nd Ed. Lippincott-Raven Publishers, Philadelphia, 1997
Physical chemistry
Colloidal chemistry
Electrochemical potentials | Donnan potential | Physics,Chemistry | 281 |
13,088 | https://en.wikipedia.org/wiki/Granite | Granite ( ) is a coarse-grained (phaneritic) intrusive igneous rock composed mostly of quartz, alkali feldspar, and plagioclase. It forms from magma with a high content of silica and alkali metal oxides that slowly cools and solidifies underground. It is common in the continental crust of Earth, where it is found in igneous intrusions. These range in size from dikes only a few centimeters across to batholiths exposed over hundreds of square kilometers.
Granite is typical of a larger family of granitic rocks, or granitoids, that are composed mostly of coarse-grained quartz and feldspars in varying proportions. These rocks are classified by the relative percentages of quartz, alkali feldspar, and plagioclase (the QAPF classification), with true granite representing granitic rocks rich in quartz and alkali feldspar. Most granitic rocks also contain mica or amphibole minerals, though a few (known as leucogranites) contain almost no dark minerals.
Granite is nearly always massive (lacking any internal structures), hard (falling between 6 and 7 on the Mohs hardness scale), and tough. These properties have made granite a widespread construction stone throughout human history.
Description
The word "granite" comes from the Latin granum, a grain, in reference to the coarse-grained structure of such a completely crystalline rock. Granites can be predominantly white, pink, or gray in color, depending on their mineralogy. Granitic rocks mainly consist of feldspar, quartz, mica, and amphibole minerals, which form an interlocking, somewhat equigranular matrix of feldspar and quartz with scattered darker biotite mica and amphibole (often hornblende) peppering the lighter color minerals. Occasionally some individual crystals (phenocrysts) are larger than the groundmass, in which case the texture is known as porphyritic. A granitic rock with a porphyritic texture is known as a granite porphyry. Granitoid is a general, descriptive field term for lighter-colored, coarse-grained igneous rocks. Petrographic examination is required for identification of specific types of granitoids. The alkali feldspar in granites is typically orthoclase or microcline and is often perthitic. The plagioclase is typically sodium-rich oligoclase. Phenocrysts are usually alkali feldspar.
Granitic rocks are classified according to the QAPF diagram for coarse grained plutonic rocks and are named according to the percentage of quartz, alkali feldspar (orthoclase, sanidine, or microcline) and plagioclase feldspar on the A-Q-P half of the diagram. True granite (according to modern petrologic convention) contains between 20% and 60% quartz by volume, with 35% to 90% of the total feldspar consisting of alkali feldspar. Granitic rocks poorer in quartz are classified as syenites or monzonites, while granitic rocks dominated by plagioclase are classified as granodiorites or tonalites. Granitic rocks with over 90% alkali feldspar are classified as alkali feldspar granites. Granitic rock with more than 60% quartz, which is uncommon, is classified simply as quartz-rich granitoid or, if composed almost entirely of quartz, as quartzolite.
True granites are further classified by the percentage of their total feldspar that is alkali feldspar. Granites whose feldspar is 65% to 90% alkali feldspar are syenogranites, while the feldspar in monzogranite is 35% to 65% alkali feldspar. A granite containing both muscovite and biotite micas is called a binary or two-mica granite. Two-mica granites are typically high in potassium and low in plagioclase, and are usually S-type granites or A-type granites, as described below.
Another aspect of granite classification is the ratios of metals that potentially form feldspars. Most granites have a composition such that almost all their aluminum and alkali metals (sodium and potassium) are combined as feldspar. This is the case when K2O + Na2O + CaO > Al2O3 > K2O + Na2O. Such granites are described as normal or metaluminous. Granites in which there is not enough aluminum to combine with all the alkali oxides as feldspar (Al2O3 < K2O + Na2O) are described as peralkaline, and they contain unusual sodium amphiboles such as riebeckite. Granites in which there is an excess of aluminum beyond what can be taken up in feldspars (Al2O3 > CaO + K2O + Na2O) are described as peraluminous, and they contain aluminum-rich minerals such as muscovite.
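The sketch below illustrates these comparisons in code: it converts oxide weight percentages to molar proportions (the basis on which the inequalities above are conventionally evaluated) and returns the alumina-saturation class. The sample analysis is an invented illustration, not a real rock.

```python
# Sketch: classifying an analysis as peraluminous, metaluminous or peralkaline
# from oxide weight percentages, using the molar comparisons described above.

MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}  # g/mol

def alumina_saturation(wt_percent):
    """Return the saturation class from a dict of oxide weight percentages."""
    mol = {ox: wt_percent[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    alkalis = mol["Na2O"] + mol["K2O"]
    if mol["Al2O3"] < alkalis:
        return "peralkaline"        # Al2O3 < Na2O + K2O
    if mol["Al2O3"] > alkalis + mol["CaO"]:
        return "peraluminous"       # Al2O3 > CaO + Na2O + K2O
    return "metaluminous"           # CaO + Na2O + K2O > Al2O3 > Na2O + K2O

sample = {"Al2O3": 14.4, "CaO": 1.8, "Na2O": 3.7, "K2O": 4.1}  # illustrative values
print(alumina_saturation(sample))
```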
Physical properties
The average density of granite is between 2.65 and 2.75 g/cm3, its compressive strength usually lies above 200 MPa (29,000 psi), and its viscosity near STP is 3–6·10^20 Pa·s.
The melting temperature of dry granite at ambient pressure is about 1215–1260 °C; it is strongly reduced in the presence of water, down to 650 °C at a few hundred megapascals of pressure.
Granite has poor primary permeability overall, but strong secondary permeability through cracks and fractures if they are present.
Chemical composition
A worldwide average of the chemical composition of granite, by mass percent, based on 2485 analyses:
The medium-grained equivalent of granite is microgranite. The extrusive igneous rock equivalent of granite is rhyolite.
Occurrence
Granitic rock is widely distributed throughout the continental crust. Much of it was intruded during the Precambrian age; it is the most abundant basement rock that underlies the relatively thin sedimentary veneer of the continents. Outcrops of granite tend to form tors, domes or bornhardts, and rounded massifs. Granites sometimes occur in circular depressions surrounded by a range of hills, formed by the metamorphic aureole or hornfels. Granite often occurs as relatively small, less than 100 km2 stock masses (stocks) and in batholiths that are often associated with orogenic mountain ranges. Small dikes of granitic composition called aplites are often associated with the margins of granitic intrusions. In some locations, very coarse-grained pegmatite masses occur with granite.
Origin
Granite forms from silica-rich (felsic) magmas. Felsic magmas are thought to form by addition of heat or water vapor to rock of the lower crust, rather than by decompression of mantle rock, as is the case with basaltic magmas. It has also been suggested that some granites found at convergent boundaries between tectonic plates, where oceanic crust subducts below continental crust, were formed from sediments subducted with the oceanic plate. The melted sediments would have produced magma intermediate in its silica content, which became further enriched in silica as it rose through the overlying crust.
Early fractional crystallisation serves to reduce a melt in magnesium and chromium, and enrich the melt in iron, sodium, potassium, aluminum, and silicon. Further fractionation reduces the content of iron, calcium, and titanium. This is reflected in the high content of alkali feldspar and quartz in granite.
The presence of granitic rock in island arcs shows that fractional crystallization alone can convert a basaltic magma to a granitic magma, but the quantities produced are small. For example, granitic rock makes up just 4% of the exposures in the South Sandwich Islands. In continental arc settings, granitic rocks are the most common plutonic rocks, and batholiths composed of these rock types extend the entire length of the arc. There is no indication of magma chambers where basaltic magmas differentiate into granites, or of cumulates produced by mafic crystals settling out of the magma. Other processes must produce these great volumes of felsic magma. One such process is injection of basaltic magma into the lower crust, followed by differentiation, which leaves any cumulates in the mantle. Another is heating of the lower crust by underplating basaltic magma, which produces felsic magma directly from crustal rock. The two processes produce different kinds of granites, which may be reflected in the division between S-type (produced by underplating) and I-type (produced by injection and differentiation) granites, discussed below.
Alphabet classification system
The composition and origin of any magma that differentiates into granite leave certain petrological evidence as to what the granite's parental rock was. The final texture and composition of a granite are generally distinctive as to its parental rock. For instance, a granite that is derived from partial melting of metasedimentary rocks may have more alkali feldspar, whereas a granite derived from partial melting of metaigneous rocks may be richer in plagioclase. It is on this basis that the modern "alphabet" classification schemes are based.
The letter-based Chappell & White classification system was proposed initially to divide granites into I-type (igneous source) granite and S-type (sedimentary sources). Both types are produced by partial melting of crustal rocks, either metaigneous rocks or metasedimentary rocks.
I-type granites are characterized by a high content of sodium and calcium, and by a strontium isotope ratio, 87Sr/86Sr, of less than 0.708. 87Sr is produced by radioactive decay of 87Rb, and since rubidium is concentrated in the crust relative to the mantle, a low ratio suggests origin in the mantle. The elevated sodium and calcium favor crystallization of hornblende rather than biotite. I-type granites are known for their porphyry copper deposits. I-type granites are orogenic (associated with mountain building) and usually metaluminous.
S-type granites are sodium-poor and aluminum-rich. As a result, they contain micas such as biotite and muscovite instead of hornblende. Their strontium isotope ratio is typically greater than 0.708, suggesting a crustal origin. They also commonly contain xenoliths of metamorphosed sedimentary rock, and host tin ores. Their magmas are water-rich, and they readily solidify as the water outgasses from the magma at lower pressure, so they less commonly make it to the surface than magmas of I-type granites, which are thus more common as volcanic rock (rhyolite). They are also orogenic but range from metaluminous to strongly peraluminous.
Although both I- and S-type granites are orogenic, I-type granites are more common close to the convergent boundary than S-type. This is attributed to thicker crust further from the boundary, which results in more crustal melting.
A-type granites show a peculiar mineralogy and geochemistry, with particularly high silicon and potassium at the expense of calcium and magnesium and a high content of high field strength cations (cations with a small radius and high electrical charge, such as zirconium, niobium, tantalum, and rare earth elements). They are not orogenic, forming instead over hot spots and continental rifting, and are metaluminous to mildly peralkaline and iron-rich. These granites are produced by partial melting of refractory lithology such as granulites in the lower continental crust at high thermal gradients. This leads to significant extraction of hydrous felsic melts from granulite-facies restites. A-type granites occur in the Koettlitz Glacier Alkaline Province in the Royal Society Range, Antarctica. The rhyolites of the Yellowstone Caldera are examples of volcanic equivalents of A-type granite.
M-type granite was later proposed to cover those granites that were clearly sourced from crystallized mafic magmas, generally sourced from the mantle. Although the fractional crystallisation of basaltic melts can yield small amounts of granites, which are sometimes found in island arcs, such granites must occur together with large amounts of basaltic rocks.
H-type granites were suggested for hybrid granites, which were hypothesized to form by mixing of mafic and felsic magmas from different sources, such as M-type and S-type. However, the large difference in rheology between mafic and felsic magmas makes this process problematic in nature.
Granitization
Granitization is an old, and largely discounted, hypothesis that granite is formed in place through extreme metasomatism. The idea behind granitization was that fluids would supposedly bring in elements such as potassium, and remove others, such as calcium, to transform a metamorphic rock into granite. This was supposed to occur across a migrating front. However, experimental work had established by the 1960s that granites were of igneous origin. The mineralogical and chemical features of granite can be explained only by crystal-liquid phase relations, showing that there must have been at least enough melting to mobilize the magma.
However, at sufficiently deep crustal levels, the distinction between metamorphism and crustal melting itself becomes vague. Conditions for crystallization of liquid magma are close enough to those of high-grade metamorphism that the rocks often bear a close resemblance. Under these conditions, granitic melts can be produced in place through the partial melting of metamorphic rocks by extracting melt-mobile elements such as potassium and silicon into the melts but leaving others such as calcium and iron in granulite residues. This may be the origin of migmatites. A migmatite consists of dark, refractory rock (the melanosome) that is permeated by sheets and channels of light granitic rock (the leucosome). The leucosome is interpreted as partial melt of a parent rock that has begun to separate from the remaining solid residue (the melanosome). If enough partial melt is produced, it will separate from the source rock, become more highly evolved through fractional crystallization during its ascent toward the surface, and become the magmatic parent of granitic rock. The residue of the source rock becomes a granulite.
The partial melting of solid rocks requires high temperatures and the addition of water or other volatiles which lower the solidus temperature (temperature at which partial melting commences) of these rocks. It was long debated whether crustal thickening in orogens (mountain belts along convergent boundaries) was sufficient to produce granite melts by radiogenic heating, but recent work suggests that this is not a viable mechanism. In-situ granitization requires heating by the asthenospheric mantle or by underplating with mantle-derived magmas.
Ascent and emplacement
Granite magmas have a density of 2.4 Mg/m3, much less than the 2.8 Mg/m3 of high-grade metamorphic rock. This gives them tremendous buoyancy, so that ascent of the magma is inevitable once enough magma has accumulated. However, the question of precisely how such large quantities of magma are able to shove aside country rock to make room for themselves (the room problem) is still a matter of research.
Two main mechanisms are thought to be important:
Stokes diapir
Fracture propagation
Of these two mechanisms, Stokes diapirism has been favoured for many years in the absence of a reasonable alternative. The basic idea is that magma will rise through the crust as a single mass through buoyancy. As it rises, it heats the wall rocks, causing them to behave as a power-law fluid and thus flow around the intrusion allowing it to pass without major heat loss. This is entirely feasible in the warm, ductile lower crust where rocks are easily deformed, but runs into problems in the upper crust which is far colder and more brittle. Rocks there do not deform so easily: for magma to rise as a diapir it would expend far too much energy in heating wall rocks, thus cooling and solidifying before reaching higher levels within the crust.
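For a feel of the magnitudes involved, the sketch below applies the classical Stokes rise formula to a buoyant sphere with the density contrast quoted above (2.4 versus 2.8 Mg/m3); the diapir radius and country-rock viscosity are assumed values, and the Newtonian Stokes law is only a crude stand-in for the power-law wall-rock behavior described here.

```python
# Sketch: order-of-magnitude Stokes ascent velocity for a granite diapir.
# The radius and wall-rock viscosity below are assumed illustrative values.

g = 9.81                        # gravitational acceleration, m/s^2
delta_rho = (2.8 - 2.4) * 1e3   # density contrast from the text, kg/m^3
radius = 5e3                    # assumed diapir radius, m
viscosity = 1e19                # assumed country-rock viscosity, Pa*s

# Stokes rise velocity of a sphere in a viscous fluid: v = 2*drho*g*r^2 / (9*mu)
velocity = 2.0 * delta_rho * g * radius**2 / (9.0 * viscosity)

seconds_per_year = 3.156e7
print(f"ascent velocity ~ {velocity:.2e} m/s ~ {velocity * seconds_per_year:.1f} m/yr")
```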
Fracture propagation is the mechanism preferred by many geologists as it largely eliminates the major problems of moving a huge mass of magma through cold brittle crust. Magma rises instead in small channels along self-propagating dykes which form along new or pre-existing fracture or fault systems and networks of active shear zones. As these narrow conduits open, the first magma to enter solidifies and provides a form of insulation for later magma.
These mechanisms can operate in tandem. For example, diapirs may continue to rise through the brittle upper crust through stoping, where the granite cracks the roof rocks, removing blocks of the overlying crust which then sink to the bottom of the diapir while the magma rises to take their place. This can occur as piecemeal stoping (stoping of small blocks of chamber roof), as cauldron subsidence (collapse of large blocks of chamber roof), or as roof foundering (complete collapse of the roof of a shallow magma chamber accompanied by a caldera eruption). There is evidence for cauldron subsidence at the Mt. Ascutney intrusion in eastern Vermont. Evidence for piecemeal stoping is found in intrusions that are rimmed with igneous breccia containing fragments of country rock.
Assimilation is another mechanism of ascent, where the granite melts its way up into the crust and removes overlying material in this way. This is limited by the amount of thermal energy available, which must be replenished by crystallization of higher-melting minerals in the magma. Thus, the magma is melting crustal rock at its roof while simultaneously crystallizing at its base. This results in steady contamination with crustal material as the magma rises. This may not be evident in the major and minor element chemistry, since the minerals most likely to crystallize at the base of the chamber are the same ones that would crystallize anyway, but crustal assimilation is detectable in isotope ratios. Heat loss to the country rock means that ascent by assimilation is limited to a distance similar to the height of the magma chamber.
Weathering
Physical weathering occurs on a large scale in the form of exfoliation joints, which are the result of granite's expanding and fracturing as pressure is relieved when overlying material is removed by erosion or other processes.
Chemical weathering of granite occurs when dilute carbonic acid, and other acids present in rain and soil waters, alter feldspar in a process called hydrolysis. As demonstrated in the following reaction, this causes potassium feldspar to form kaolinite, with potassium ions, bicarbonate, and silica in solution as byproducts:

2 KAlSi3O8 + 2 H2CO3 + 9 H2O → Al2Si2O5(OH)4 + 4 H4SiO4 + 2 K+ + 2 HCO3−

An end product of granite weathering is grus, which is often made up of coarse-grained fragments of disintegrated granite.
Climatic variations also influence the weathering rate of granites. For about two thousand years, the relief engravings on Cleopatra's Needle obelisk had survived the arid conditions of its origin before its transfer to London. Within two hundred years, the red granite has drastically deteriorated in the damp and polluted air there.
Soil development on granite reflects the rock's high quartz content and dearth of available bases, with the base-poor status predisposing the soil to acidification and podzolization in cool humid climates as the weather-resistant quartz yields much sand. Feldspars also weather slowly in cool climes, allowing sand to dominate the fine-earth fraction. In warm humid regions, the weathering of feldspar as described above is accelerated so as to allow a much higher proportion of clay with the Cecil soil series a prime example of the consequent Ultisol great soil group.
Natural radiation
Granite is a natural source of radiation, like most natural stones. Potassium-40 is a radioactive isotope of weak emission, and a constituent of alkali feldspar, which in turn is a common component of granitic rocks, more abundant in alkali feldspar granite and syenites. Some granites contain around 10 to 20 parts per million (ppm) of uranium. By contrast, more mafic rocks, such as tonalite, gabbro and diorite, have 1 to 5 ppm uranium, and limestones and sedimentary rocks usually have equally low amounts.
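As a rough illustration of what such concentrations mean in terms of radioactivity, the sketch below converts a uranium content in parts per million by mass into an approximate specific activity from the parent U-238 alone; daughter nuclides in secular equilibrium would add further activity, so this is not a dosimetric calculation.

```python
# Sketch: approximate U-238 activity per kilogram of rock for a given uranium
# content in ppm (by mass). Natural uranium is ~99.3% U-238, and only the
# parent U-238 decay is counted here.
import math

AVOGADRO = 6.022e23
HALF_LIFE_U238_S = 4.468e9 * 3.156e7      # 4.468 billion years, in seconds
DECAY_CONST = math.log(2) / HALF_LIFE_U238_S

def activity_bq_per_kg(uranium_ppm):
    grams_u_per_kg_rock = uranium_ppm * 1e-3        # 1 ppm = 1 mg per kg
    atoms = grams_u_per_kg_rock / 238.0 * AVOGADRO  # U-238 atoms per kg of rock
    return DECAY_CONST * atoms                      # activity A = lambda * N

for ppm in (1, 5, 10, 20):
    print(f"{ppm:>2} ppm U  ->  ~{activity_bq_per_kg(ppm):.0f} Bq/kg from U-238")
```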
Many large granite plutons are sources for palaeochannel-hosted or roll front uranium ore deposits, where the uranium washes into the sediments from the granite uplands and associated, often highly radioactive pegmatites.
Cellars and basements built into soils over granite can become a trap for radon gas, which is formed by the decay of uranium. Radon gas poses significant health concerns and is the number two cause of lung cancer in the US behind smoking.
Thorium occurs in all granites. Conway granite has been noted for its relatively high thorium concentration of 56±6 ppm.
There is some concern that some granite sold as countertops or building material may be hazardous to health. Dan Steck of St. John's University has stated that approximately 5% of all granite is of concern, with the caveat that only a tiny percentage of the tens of thousands of granite slab types have been tested. Resources from national geological survey organizations are accessible online to assist in assessing the risk factors in granite country and design rules relating, in particular, to preventing accumulation of radon gas in enclosed basements and dwellings.
A study of granite countertops was done (initiated and paid for by the Marble Institute of America) in November 2008 by National Health and Engineering Inc. of USA. In this test, all of the 39 full-size granite slabs that were measured for the study showed radiation levels well below the European Union safety standards (section 4.1.1.1 of the National Health and Engineering study) and radon emission levels well below the average outdoor radon concentrations in the US.
Industry and uses
Granite and related marble industries are considered one of the oldest industries in the world, existing as far back as Ancient Egypt. Major modern exporters of granite include China, India, Italy, Brazil, Canada, Germany, Sweden, Spain and the United States.
Antiquity
The Red Pyramid of Egypt (), named for the light crimson hue of its exposed limestone surfaces, is the third largest of Egyptian pyramids. Pyramid of Menkaure, likely dating 2510 BC, was constructed of limestone and granite blocks. The Great Pyramid of Giza (c. 2580 BC) contains a granite sarcophagus fashioned of "Red Aswan Granite". The mostly ruined Black Pyramid dating from the reign of Amenemhat III once had a polished granite pyramidion or capstone, which is now on display in the main hall of the Egyptian Museum in Cairo (see Dahshur). Other uses in Ancient Egypt include columns, door lintels, sills, jambs, and wall and floor veneer. How the Egyptians worked the solid granite is still a matter of debate. Tool marks described by the Egyptologist Anna Serotta indicate the use of flint tools on finer work with harder stones, e.g. when producing the hieroglyphic inscriptions. Patrick Hunt has postulated that the Egyptians used emery, which has greater hardness.
The Seokguram Grotto in Korea is a Buddhist shrine and part of the Bulguksa temple complex. Completed in 774 AD, it is an artificial grotto constructed entirely of granite. The main Buddha of the grotto is a highly regarded piece of Buddhist art, and along with the temple complex to which it belongs, Seokguram was added to the UNESCO World Heritage List in 1995.
Rajaraja Chola I of the Chola Dynasty in South India built the world's first temple entirely of granite in the 11th century AD in Tanjore, India. The Brihadeeswarar Temple dedicated to Lord Shiva was built in 1010. The massive Gopuram (ornate, upper section of shrine) is believed to have a mass of around 81 tonnes. It was the tallest temple in south India.
Imperial Roman granite was quarried mainly in Egypt, and also in Turkey, and on the islands of Elba and Giglio. Granite became "an integral part of the Roman language of monumental architecture". The quarrying ceased around the third century AD. Beginning in Late Antiquity the granite was reused, which since at least the early 16th century became known as spolia. Through the process of case-hardening, granite becomes harder with age. The technology required to make tempered metal chisels was largely forgotten during the Middle Ages. As a result, Medieval stoneworkers were forced to use saws or emery to shorten ancient columns or hack them into discs. Giorgio Vasari noted in the 16th century that granite in quarries was "far softer and easier to work than after it has lain exposed" while ancient columns, because of their "hardness and solidity have nothing to fear from fire or sword, and time itself, that drives everything to ruin, not only has not destroyed them but has not even altered their colour."
Modern
Sculpture and memorials
In some areas, granite is used for gravestones and memorials. Granite is a hard stone and requires skill to carve by hand. Until the early 18th century, in the Western world, granite could be carved only by hand tools with generally poor results.
A key breakthrough was the invention of steam-powered cutting and dressing tools by Alexander MacDonald of Aberdeen, inspired by seeing ancient Egyptian granite carvings. In 1832, the first polished tombstone of Aberdeen granite to be erected in an English cemetery was installed at Kensal Green Cemetery. It caused a sensation in the London monumental trade and for some years all polished granite ordered came from MacDonald's. As a result of the work of sculptor William Leslie, and later Sidney Field, granite memorials became a major status symbol in Victorian Britain. The royal sarcophagus at Frogmore was probably the pinnacle of its work, and at 30 tons one of the largest. It was not until the 1880s that rival machinery and works could compete with the MacDonald works.
Modern methods of carving include using computer-controlled rotary bits and sandblasting over a rubber stencil. Leaving the letters, numbers, and emblems exposed and the remainder of the stone covered with rubber, the blaster can create virtually any kind of artwork or epitaph.
The stone known as "black granite" is usually gabbro, which has a completely different chemical composition.
Buildings
Granite has been extensively used as a dimension stone and as flooring tiles in public and commercial buildings and monuments. Aberdeen in Scotland, which is constructed principally from local granite, is known as "The Granite City". Because of its abundance in New England, granite was commonly used to build foundations for homes there. The Granite Railway, America's first railroad, was built to haul granite from the quarries in Quincy, Massachusetts, to the Neponset River in the 1820s.
Engineering
Engineers have traditionally used polished granite surface plates to establish a plane of reference, since they are relatively impervious, inflexible, and maintain good dimensional stability. Sandblasted concrete with a heavy aggregate content has an appearance similar to rough granite, and is often used as a substitute when use of real granite is impractical. Granite tables are used extensively as bases or even as the entire structural body of optical instruments, CMMs, and very high precision CNC machines because of granite's rigidity, high dimensional stability, and excellent vibration characteristics. A most unusual use of granite was as the material of the tracks of the Haytor Granite Tramway, Devon, England, in 1820. Granite block is usually processed into slabs, which can be cut and shaped by a cutting center. In military engineering, Finland planted granite boulders along its Mannerheim Line to block invasion by Russian tanks in the Winter War of 1939–40.
Paving
Granite is used as a pavement material. This is because it is extremely durable, permeable and requires little maintenance. For example, in Sydney, Australia black granite stone is used for the paving and kerbs throughout the Central Business District.
Curling stones
Curling stones are traditionally fashioned of Ailsa Craig granite. The first stones were made in the 1750s, the original source being Ailsa Craig in Scotland. Because of the rarity of this granite, the best stones can cost as much as US$1,500. Between 60 and 70 percent of the stones used today are made from Ailsa Craig granite. Although the island is now a wildlife reserve, it is still quarried under license for Ailsa granite by Kays of Scotland for curling stones.
Rock climbing
Granite is one of the rocks most prized by climbers, for its steepness, soundness, crack systems, and friction. Well-known venues for granite climbing include the Yosemite Valley, the Bugaboos, the Mont Blanc massif (and peaks such as the Aiguille du Dru, the Mourne Mountains, the Adamello-Presanella Alps, the Aiguille du Midi and the Grandes Jorasses), the Bregaglia, Corsica, parts of the Karakoram (especially the Trango Towers), the Fitzroy Massif and the Paine Massif in Patagonia, Baffin Island, Ogawayama, the Cornish coast, the Cairngorms, Sugarloaf Mountain in Rio de Janeiro, Brazil, and the Stawamus Chief, British Columbia, Canada.
Gallery
See also
References
Citations
Further reading
External links
Felsic rocks
National symbols of Finland
Plutonic rocks
Sculpture materials
Symbols of Wisconsin
Industrial minerals | Granite | Chemistry | 6,215 |
47,372,974 | https://en.wikipedia.org/wiki/C5H8N2O4 |
The molecular formula C5H8N2O4 (molar mass: 160.13 g/mol, exact mass: 160.0484 u) may refer to:
Thymine glycol (5,6-dihydroxy-5,6-dihydrothymine)
Tricholomic acid | C5H8N2O4 | Chemistry | 86 |
62,417,498 | https://en.wikipedia.org/wiki/Truthful%20resource%20allocation | Truthful resource allocation is the problem of allocating resources among agents with different valuations over the resources, such that agents are incentivized to reveal their true valuations over the resources.
Model
There are m resources that are assumed to be homogeneous and divisible. Examples are:
Materials, such as wood or metal;
Virtual resources, such as CPU time or computer memory;
Financial resources, such as shares in firms.
There are n agents. Each agent has a function that attributes a numeric value to each "bundle" (combination of resources).
It is often assumed that the agents' value functions are linear, so that if the agent receives a fraction rj of each resource j, then his/her value is the sum, over all resources j, of rj · vj, where vj is the agent's value for receiving all of resource j.
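For example, an agent with values v = (8, 2) for two resources who receives the fractions r = (0.5, 0.25) has utility 0.5·8 + 0.25·2 = 4.5.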
Design goals
The goal is to design a truthful mechanism, that will induce the agents to reveal their true value functions, and then calculate an allocation that satisfies some fairness and efficiency objectives. The common efficiency objectives are:
Pareto efficiency (PE);
Utilitarian social welfare defined as the sum of agents' utilities. An allocation maximizing this sum is called utilitarian or max-sum; it is always PE.
Nash social welfare defined as the product of agents' utilities. An allocation maximizing this product is called Nash-optimal or max-product or proportionally-fair; it is always PE. When agents have additive utilities, it is equivalent to the competitive equilibrium from equal incomes.
The most common fairness objectives are:
Equal treatment of equals (ETE) if two agents have exactly the same utility function, then they should get exactly the same utility.
Envy-freeness no agent should envy another agent. It implies ETE.
Egalitarian welfare the utility of the poorest agent; an allocation maximizing this minimum utility is called egalitarian or max-min.
Trivial algorithms
Two trivial truthful algorithms are:
The equal split algorithm which gives each agent exactly 1/n of each resource. This allocation is envy-free (and obviously ETE), but usually it is very inefficient.
The serial dictatorship algorithm which orders the agents arbitrarily, and lets each agent in turn take all resources that he wants, from among the remaining ones. This allocation is PE, but usually it is unfair.
It is possible to mix these two mechanisms, and get a truthful mechanism that is partly-fair and partly-efficient. But the ideal mechanism would satisfy all three properties simultaneously: truthfulness, efficiency and fairness.
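The following minimal Python sketch illustrates the two trivial mechanisms under the linear-valuation model described above; the function names, the example numbers, and the tie-breaking details are illustrative assumptions rather than constructions taken from the literature.

```python
from typing import List

def utility(fractions: List[float], values: List[float]) -> float:
    """Linear utility: sum over resources of (fraction received) * (value of the whole resource)."""
    return sum(r * v for r, v in zip(fractions, values))

def equal_split(values: List[List[float]]) -> List[List[float]]:
    """Give every agent exactly 1/n of every resource (truthful and envy-free, usually inefficient)."""
    n, m = len(values), len(values[0])
    return [[1.0 / n] * m for _ in range(n)]

def serial_dictatorship(values: List[List[float]]) -> List[List[float]]:
    """Agents, in a fixed order, take all remaining resources they want (truthful and PE, usually unfair)."""
    n, m = len(values), len(values[0])
    remaining = [1.0] * m
    allocation = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if values[i][j] > 0 and remaining[j] > 0:
                allocation[i][j], remaining[j] = remaining[j], 0.0
    return allocation

values = [[8.0, 2.0], [3.0, 7.0]]  # two agents, two divisible resources
for mechanism in (equal_split, serial_dictatorship):
    allocation = mechanism(values)
    utilities = [utility(allocation[i], values[i]) for i in range(len(values))]
    print(mechanism.__name__, allocation, "utilities:", utilities)
# equal_split gives utilities (5, 5): envy-free, but far below the max-sum welfare of 15.
# serial_dictatorship gives (10, 0): Pareto-efficient, but clearly unfair.
```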
At most one object per agent
In a variant of the resource allocation problem, sometimes called one-sided matching or assignment, the total amount of objects allocated to each agent must be at most 1.
When there are 2 agents and 2 objects, the following mechanism satisfies all three properties: if each agent prefers a different object, give each agent his preferred object; if both agents prefer the same object, give each agent 1/2 of each object (It is PE due to the capacity constraints). However, when there are 3 or more agents, it may be impossible to attain all three properties.
Zhou proved that, when there are 3 or more agents, and each agent must get at most 1 object and each object must be given to at most 1 agent, no truthful mechanism satisfies both PE and ETE.
When there are multiple units of each object (but each agent must still get at most 1 object), there is a weaker impossibility result: no PE and ETE mechanism satisfies Group strategyproofness.
He leaves open the more general resource allocation setting, in which each agent may get more than one object.
There are analogous impossibility results for agents with ordinal utilities:
For agents with strict ordinal utilities, Bogomolnaia and Moulin prove that no mechanism satisfies possible-PE, necessary-truthfulness, and ETE.
For agents with weak ordinal utilities, Katta and Sethuraman prove that no mechanism satisfies possible-PE, possible-truthfulness, and necessary-envy-freeness.
See also: Truthful one-sided matching.
Approximation Algorithms
There are several truthful algorithms that find a constant-factor approximation of the maximum utilitarian or Nash welfare.
Guo and Conitzer studied the special case of n=2 agents. For the case of m=2 resources, they showed a truthful mechanism attaining 0.828 of the maximum utilitarian welfare, and showed an upper bound of 0.841. For the case of many resources, they showed that all truthful mechanisms of the same kind approach 0.5 of the maximum utilitarian welfare. Their mechanisms are complete - they allocate all the resources.
Cole, Gkatzelis and Goel studied mechanisms of a different kind - based on the max-product allocation. For many agents, with valuations that are homogeneous functions, they show a truthful mechanism called Partial Allocation that guarantees to each agent at least 1/e ≈ 0.368 of his/her utility in the max-product allocation. Their mechanism is envy-free when the valuations are additive linear functions. They show that no truthful mechanism can guarantee to all agents more than 0.5 of their max-product utility.
For the special case of n=2 agents, they show a truthful mechanism that attains at least 0.622 of the utilitarian welfare. They also show that the mechanism that runs both the equal-split mechanism and the partial-allocation mechanism, and chooses the outcome with the higher social welfare, is still truthful, since both agents always prefer the same outcome. Moreover, it attains at least 2/3 of the optimal welfare. They also show an algorithm for computing the max-product allocation, and show that the Nash-optimal allocation itself attains at least 0.933 of the utilitarian welfare.
They also show a mechanism called Strong Demand Matching, which is tailored for a setting with many agents and few resources (such as the privatization auction in the Czech Republic). The mechanism guarantees to each agent at least p/(p+1) of the max-product utility, where p is the smallest equilibrium price of a resource when each agent has a unit budget. When there are many more agents than resources, the price of each resource is usually high, so the approximation factor approaches 1. In particular, when there are two resources, this fraction is at least n/(n+1). This mechanism assigns to each agent a fraction of a single resource.
Cheung improved the competitive ratios of previous works:
The ratio for two agents and two resources improved from 0.828 to 5/6 ≈ 0.833 with a complete-allocation mechanism, and strictly more than 5/6 with a partial-allocation mechanism. The upper bound improved from 0.841 to 5/6 + ε for a complete-allocation mechanism, and to 0.8644 for a partial mechanism.
The ratio for two agents and many resources improved from 2/3 to 0.67776, by using a weighted average of two mechanisms: partial-allocation, and max (partial-allocation, equal-split).
Related problems
Truthful cake-cutting - a variant of the problem in which there is a single heterogeneous resource ("cake"), and each agent has a personal value-measure over the resource.
Strategic fair division - the study of equilibria of fair division games when the agents act strategically rather than sincerely.
Truthful allocation of two kinds of resources - plentiful and scarce.
Truthful fair division of indivisible items.
Relation between truthful fair division and wagering strategies.
References
Mechanism design
Fair division protocols | Truthful resource allocation | Mathematics | 1,616 |
68,834,254 | https://en.wikipedia.org/wiki/Silicification | In geology, silicification is a petrification process in which silica-rich fluids seep into the voids of Earth materials, e.g., rocks, wood, bones, shells, and replace the original materials with silica (SiO2). Silica is a naturally existing and abundant compound found in organic and inorganic materials, including Earth's crust and mantle. There are a variety of silicification mechanisms. In silicification of wood, silica permeates into and occupies cracks and voids in wood such as vessels and cell walls. The original organic matter is retained throughout the process and will gradually decay through time. In the silicification of carbonates, silica replaces carbonates by the same volume. Replacement is accomplished through the dissolution of original rock minerals and the precipitation of silica. This leads to a removal of original materials out of the system. Depending on the structures and composition of the original rock, silica might replace only specific mineral components of the rock. Silicic acid (H4SiO4) in the silica-enriched fluids forms lenticular, nodular, fibrous, or aggregated quartz, opal, or chalcedony that grows within the rock. Silicification happens when rocks or organic materials are in contact with silica-rich surface water, buried under sediments and susceptible to groundwater flow, or buried under volcanic ashes. Silicification is often associated with hydrothermal processes. The temperature of silicification varies with conditions: in burial or surface water conditions, it can be around 25–50 °C, whereas temperatures for siliceous fluid inclusions can be up to 150–190 °C. Silicification could occur during a syn-depositional or a post-depositional stage, commonly along layers marking changes in sedimentation such as unconformities or bedding planes.
Sources of silica
The sources of silica can be divided into two categories: silica in organic and inorganic materials. The former category is also known as biogenic silica, which is a ubiquitous material in animals and plants. The latter category is abundant because silicon is the second most abundant element in Earth's crust. Silicate minerals are the major components of 95% of presently identified rocks.
Biology
Biogenic silica is the major source of silica for diagenesis. One of the prominent examples is the presence of silica in phytoliths in the leaves of plants, i.e. grasses, and Equisetaceae. Some suggested that silica present in phytoliths can serve as a defense mechanism against the herbivores, where the presence of silica in leaves increases the difficulty in digestion, harming the fitness of herbivores. However, evidence on the effects of silica on the wellbeing of animals and plants is still insufficient.
Besides, sponges are another biogenic source of naturally occurring silica in animals. They belong to the phylum Porifera in the classification system. Silicious sponges are commonly found with silicified sedimentary layers, for example in the Yanjiahe Formation in South China. Some of them occur as sponge spicules and are associated with microcrystalline quartz or other carbonates after silicification. It could also be the main source of precipitative beds such as cherts beds or cherts in petrified woods.
Diatoms, an important group of microalgae living in marine environments, contribute significantly to the source of diagenetic silica. They have cell walls made of silica, also known as diatom frustules. In some silicified sedimentary rocks, fossils of diatoms are unearthed. This suggests that diatoms frustules were sources of silica for silicification. Some examples are silicified limestones of Miocene Astoria Formation in Washington, silicified ignimbrite in El Tatio Geyser Field in Chile, and Tertiary siliceous sedimentary rocks in western pacific deep sea drills. The presence of biogenic silica in various species creates a large-scale marine silica cycle that circulates silica through the ocean. Silica content is therefore high in active silica upwelling areas in the deep-marine sediments. Besides, carbonate shells that deposited in shallow marine environments enrich silica contents at continental shelf areas.
Geology
The major component of the Earth's upper mantle is silica (SiO2), which makes it the primary source of silica in hydrothermal fluids. SiO2 is a stable component. It often appears as quartz in volcanic rocks. Some quartz that is derived from pre-existing rocks, appear in the form of sand and detrital quartz that interact with seawater to produce siliceous fluids. In some cases, silica in siliceous rocks are subjected to hydrothermal alteration and react with seawater at certain temperatures, forming an acidic solution for silicification of nearby materials. In the rock cycle, the chemical weathering of rocks also releases silica in the form of silicic acid as by-products. Silica from weathered rocks is washed into waters and deposit into shallow-marine environments.
Mechanisms of silicification
The presence of hydrothermal fluids is essential as a medium for geochemical reactions during silicification. In the silicification of different materials, different mechanisms are involved. In the silicification of rock materials like carbonates, replacement of minerals through hydrothermal alteration is common; while the silicification of organic materials such as woods is solely a process of permeation.
Replacement
The replacement of silica involves two processes:
1) Dissolution of rock minerals
2) Precipitation of silica
It can be explained through the carbonate-silica replacement. Hydrothermal fluids are undersaturated with carbonates and supersaturated with silica. When carbonate rocks come into contact with hydrothermal fluids, carbonates from the original rock dissolve into the fluid, owing to the concentration gradient, whereas silica precipitates out of it. The dissolved carbonate is therefore pulled out of the system while the precipitated silica recrystallizes into various silicate minerals, depending on the silica phase. The solubility of silica strongly depends on the temperature and pH value of the environment, where pH 9 is the controlling value. At pH lower than 9, silica precipitates out of the fluid; when the pH value is above 9, silica becomes highly soluble.
Permeation
In the silicification of woods, silica dissolves in hydrothermal fluid and seeps into lignin in cell walls. Precipitation of silica out of the fluids produces silica deposition within the voids, especially in the cell walls. Cell materials are broken down by the fluids, yet the structure remains stable due to the development of minerals. Cell structures are slowly replaced by silica. Continuous penetration of siliceous fluids results in different stages of silicification i.e. primary and secondary. The loss of fluids over time leads to the cementation of silicified woods through late silica addition.
The rate of silicification depends on a few factors:
1) Rate of breakage of original cells
2) Availability of silica sources and silica content in the fluid
3) Temperature and pH of silicification environment
4) Interference of other diagenetic processes
These factors affect the silicification process in many ways. The rate of breakage of original cells controls the development of the mineral framework, hence the replacement of silica. Availability of silica directly determines the silica content in fluids. The higher the silica content, the faster silicification could take place. The same concept applies to the availability of hydrothermal fluids. The temperature and pH of the environment determine the condition for silicification to occur. This is closely connected to the burial depth or association with volcanic events. Interference of other diagenetic processes could sometimes create disturbance to silicification. The relative time of silicification to other geological processes could serve as a reference for further geological interpretations.
Examples
Volcanic rocks
In the Conception Bay in Newfoundland, Southeastern coast of Canada, a series of Pre-Cambrian to Cambrian-linked volcanic rocks were silicified. The rocks mainly consist of rhyolitic and basaltic flows, with crystal tuffs and breccia interbedded. Regional silicification was taken place as a preliminary alteration process before other geochemical processes occurred. The source of silica near the area was from hot siliceous fluids from rhyolitic flow under a static condition. A significant portion of silica appeared in the form of white chalcedonic quartz, quartz veins as well as granular quartz crystal. Due to the difference in rock structures, silica replaces different materials in rocks of close locations. The following table shows the replacement of silica at different localities:
Metamorphic rocks
In the Semail Nappe of Oman in the United Arab Emirates, silicified serpentinite was found. The occurrence of such geological features is rather unusual. It is a pseudomorphic alteration where the protolith of serpentinite was already silicified. Due to tectonic events, basal serpentinite was fractured and groundwater permeated along the faults, forming a large-scale circulation of groundwater within the strata. Through hydrothermal dissolution, silica precipitated and crystallized around the voids of serpentinite. Therefore, silicification can only be seen along groundwater paths. The silicification of the serpentinite took place under conditions where groundwater flow and carbon dioxide concentration are low.
Carbonates
Silicified carbonates can appear as silicified carbonate rock layers, or in the form of silicified karsts. The Paleogene Madrid Basin in Central Spain is a foreland basin resulted from the Alpine uplift, an example of silicified carbonates in rock layers. The lithology consists of carbonate and detritus units that were formed in a lacustrine environment. The rock units are silicified where cherts, quartz, and opaline minerals are found in the layers. It is conformable with the underlying evaporitic beds, also dated from similar ages. It is found that there were two stages of silicification within the rock strata. The earlier stage of silicification provided a better condition and site for the precipitation of silica. The source of silica is still uncertain. There are no biogenic silica detected from the carbonates. However, microbial films in carbonates are found, which could suggest the presence of diatoms.
Karsts are carbonate caves formed from a dissolution of carbonate rocks such as limestones and dolomites. They are usually susceptible to groundwater and are dissolved in these drainage. Silicified karsts and cave deposits are formed when siliceous fluids enter karsts through faults and cracks. The Mid-Proterozoic Mescal Limestone from the Apache Group in central Arizona is classic examples of silicified karsts. A portion of the carbonates are replaced by cherts in early diagenesis and the remaining portion is completely silicified in later stages. The source of silica in carbonates are usually associated with the presence of biogenetic silica; however, the source of silica in Mescal Limestone is from weathering of overlying basalts, which are extrusive igneous rocks that have high silica content.
Silicified woods
Silicification of wood usually occurs in terrestrial conditions, but it can sometimes take place in aquatic environments. Surface water silicification can occur through the precipitation of silica in silica-enriched hot springs. On the northern coast of central Japan, the Tateyama hot spring has a high silica content that contributes to the silicification of nearby fallen woods and organic materials. Silica precipitates rapidly out of the fluids and opal is the main form of silica. With a temperature of around 70 °C and a pH value of around 3, the opal deposited is composed of silica spheres of different sizes arranged randomly.
Early silicification
Mafic magma dominated the seafloor at around 3.9 Ga during the Hadean-Archean transition. Due to rapid silicification, the felsic continental crust began to form. In the Archean, the continental crust was composed of tonalite–trondhjemite–granodiorite (TTG) as well as granite–monzonite–syenite suites.
Mount Goldsworthy, in the Pilbara Craton of Western Australia, holds one of the earliest examples of silicification, an Archean clastic meta-sedimentary rock sequence whose silicification and hydrothermal alteration reveal the surface environment of the early Earth. The unearthed rocks are found to be SiO2-dominant in terms of mineral composition. The succession was subjected to a high degree of silicification due to hydrothermal interaction with seawater at low temperatures. Lithic fragments were replaced with microcrystalline quartz and protoliths were altered during silicification. The conditions of silicification and the elements that were present suggest that the surface temperature and carbon dioxide contents were high during syn-deposition, post-deposition, or both.
The Barberton Greenstone Belt in South Africa, specifically the Eswatini Supergroup of around 3.5–3.2 Ga, is a suite of well-preserved silicified volcanic-sedimentary rocks. With the composition ranging from ultramafic to felsic, the silicified volcanic rocks are directly beneath the bedded chert layer. Rocks are more silicified near the bedded chert contact, suggesting a relationship between chert deposition and silicification. The silica altered zones reveal that hydrothermal activities, as in seawater circulation, actively circulate the rock layers through fractures and fault during the deposition of bedded chert. The seawater was heated up and therefore picked up silicious materials from underneath volcanic origin. The silica enriched fluids bring about silicification of rocks through seeping into porous materials in the syn-depositional stage at a low-temperature condition.
See also
Metasomatism
Permineralization
Pseudomorph
Silica cycle
References
Sedimentary rocks
Geochemical processes
Silicate minerals | Silicification | Chemistry | 2,955 |
2,369,853 | https://en.wikipedia.org/wiki/Michelson%E2%80%93Gale%E2%80%93Pearson%20experiment | The Michelson–Gale–Pearson experiment (1925) is a modified version of the Michelson–Morley experiment and the Sagnac-Interferometer. It measured the Sagnac effect due to Earth's rotation, and thus tests the theories of special relativity and luminiferous ether along the rotating frame of Earth.
Experiment
The aim, as it was first proposed by Albert A. Michelson in 1904 and then executed in 1925 by Michelson and Henry G. Gale, was to find out whether the rotation of the Earth has an effect on the propagation of light in the vicinity of the Earth.
The Michelson-Gale experiment was a very large ring interferometer, (a perimeter of 1.9 kilometers), large enough to detect the angular velocity of the Earth. Like the original Michelson-Morley experiment, the Michelson-Gale-Pearson version compared the light from a single source (carbon arc) after travelling in two directions. The major change was to replace the two "arms" of the original MM version with two rectangles, one much larger than the other. Light was sent into the rectangles, reflecting off mirrors at the corners, and returned to the starting point. Light exiting the two rectangles was compared on a screen just as the light returning from the two arms would be in a standard MM experiment. The expected fringe shift in accordance with the stationary aether and special relativity was given by Michelson as:
Δ = 4Aω sin φ / (cλ)
where Δ is the displacement in fringes, A the area of the ring, φ the latitude of the experiment site in Clearing, Illinois (41° 46'), c the speed of light, ω the angular velocity of Earth, and λ the effective wavelength used. In other words, this experiment was aimed to detect the Sagnac effect due to Earth's rotation.
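The prediction can be checked numerically with the formula above. The rectangle dimensions (roughly 612 m × 339 m) and the effective wavelength (about 570 nm) are not stated in this article and are assumed here for illustration; the sketch is only a consistency check against the predicted value quoted in the Result section.

```python
import math

# Assumed parameters (not given in the text above): a rectangle of roughly
# 612 m x 339 m and an effective wavelength of about 570 nm.
area = 612.0 * 339.0                     # enclosed area of the large rectangle, m^2
wavelength = 570e-9                      # effective wavelength of the light, m
latitude = math.radians(41 + 46 / 60)    # Clearing, Illinois: 41 deg 46'
omega = 7.292e-5                         # Earth's angular velocity, rad/s
c = 2.998e8                              # speed of light, m/s

# Sagnac fringe shift for a ring interferometer fixed to the rotating Earth
shift = 4 * area * omega * math.sin(latitude) / (c * wavelength)
print(round(shift, 3))                   # ~0.236 fringes, close to the quoted prediction of 237/1000
```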
Result
The outcome of the experiment was that the angular velocity of the Earth as measured by astronomy was confirmed to within measuring accuracy. The ring interferometer of the Michelson-Gale experiment was not calibrated by comparison with an outside reference (which was not possible, because the setup was fixed to the Earth). From its design it could be deduced where the central interference fringe ought to be if there would be zero shift. The measured shift was 230 parts in 1000, with an accuracy of 5 parts in 1000. The predicted shift was 237 parts in 1000. According to Michelson/Gale, the experiment is compatible with both the idea of a stationary ether and special relativity.
As it was already pointed out by Michelson in 1904, a positive result in such experiments contradicts the hypothesis of complete aether drag, as the spinning surface of the Earth experiences an aether wind. The Michelson-Morley experiment shows on the contrary that a hypothetical aether could not be moving relative to the Earth, that is, as the Earth orbits it would have to drag the aether along. Those two results are not incompatible per se, but in the absence of a model to reconcile them, they are more ad hoc than the explanation of both experiments within special relativity. The experiment is consistent with relativity for the same reason as all other Sagnac type experiments (see Sagnac effect). That is, rotation is absolute in special relativity, because there is no inertial frame of reference in which the whole device is at rest during the complete process of rotation, thus the light paths of the two rays are different in all of those frames, consequently a positive result must occur. It's also possible to define rotating frames in special relativity (Born coordinates), yet in those frames the speed of light is not constant in extended areas any more, thus also in this view a positive result must occur. Today, Sagnac type effects due to Earth's rotation are routinely incorporated into GPS.
References
Physics experiments
Aether theories
1925 in science | Michelson–Gale–Pearson experiment | Physics | 778 |
72,176,379 | https://en.wikipedia.org/wiki/Hanseniaspora%20clermontiae | Hanseniaspora clermontiae is a species of yeast in the family Saccharomycetaceae. It was first isolated from stem rot occurring in a lobelioid plant in Hawaii, and may be endemic to the Hawaiian Islands.
Taxonomy
The species was first described by Neža Čadež, Gé A. Poot, Peter Raspor, and Maudy Th. Smith in 2003 after isolating a sample in stem rot of a Clermontia plant in Hawaii. The specific epithet is derived from the genus name of the host plant where it was first isolated.
Description
Microscopic examination of the yeast cells in YM liquid medium after 48 hours at 25°C reveals cells that are 3.5 to 18 μm by 2.5 to 5.0 μm in size, apiculate, ovoid to elongate, appearing singly or in pairs. Reproduction is by budding, which occurs at both poles of the cell. In broth culture, sediment is present, and after one month a very thin ring is formed.
Colonies that are grown on malt agar for one month at 25°C appear cream-colored, butyrous, and smooth. Growth is flat to slightly raised at the center, with an entire to slightly undulating margin. The yeast forms poorly-developed pseudohyphae on cornmeal or potato agar. The yeast has been observed to form two to four hat-shaped ascospores when grown for two weeks on 5% Difco malt extract agar.
The yeast can ferment glucose and cellobiose, but not galactose, sucrose, maltose, lactose, raffinose or trehalose. It has a positive growth rate at 25°C, but no growth at 30°C or above. It can grow on agar media containing 0.1% cycloheximide and 10% sodium chloride, but growth on 50% glucose-yeast extract agar is weak.
Ecology
The type sample was obtained in Hawaii, and in 2005, Marc-André Lachance described the species as possibly endemic to the Hawaiian Islands. It is considered unlikely to be a human pathogen due to its inability to grow at human body temperatures.
References
Saccharomycetes
Yeasts
Fungi described in 2003
Fungus species | Hanseniaspora clermontiae | Biology | 468 |
11,443,297 | https://en.wikipedia.org/wiki/Shear%20force | In solid mechanics, shearing forces are unaligned forces acting on one part of a body in a specific direction, and another part of the body in the opposite direction. When the forces are collinear (aligned with each other), they are called tension forces or compression forces. Shear force can also be defined in terms of planes: "If a plane is passed through a body, a force acting along this plane is called a shear force or shearing force."
Force required to shear steel
This section calculates the force required to cut a piece of material with a shearing action. The relevant information is the area of the material being sheared, i.e. the area across which the shearing action takes place, and the shear strength of the material. A round bar of steel is used as an example. The shear strength is calculated from the tensile strength using a factor which relates the two strengths. In this case 0.6 applies to the example steel, known as EN8 bright, although it can vary from 0.58 to 0.62 depending on application.
EN8 bright has a tensile strength of 800 MPa and mild steel, for comparison, has a tensile strength of 400 MPa.
To calculate the force to shear a 25 mm diameter bar of EN8 bright steel:
area of the bar in mm2 = π × 12.5^2 ≈ 490.8 mm2
0.8 kN/mm2 × 490.8 mm2 = 392.64 kN ≈ 40 tonne-force
40 tonne-force × 0.6 (to change force from tensile to shear) = 24 tonne-force
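A short script reproducing the worked example above; the 0.6 tensile-to-shear factor and the 800 MPa tensile strength are the example values already given, and the function name is purely illustrative.

```python
import math

TONNE_FORCE_N = 9806.65  # newtons in one tonne-force

def shear_force_round_bar(diameter_mm: float, tensile_mpa: float, shear_factor: float = 0.6) -> float:
    """Force in kN needed to shear a round bar across its full cross-section."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2      # sheared area, mm^2
    shear_strength = tensile_mpa * shear_factor        # MPa, i.e. N/mm^2
    return shear_strength * area_mm2 / 1000.0          # kN

force_kn = shear_force_round_bar(25.0, 800.0)          # 25 mm EN8 bright bar
print(f"{force_kn:.1f} kN ≈ {force_kn * 1000.0 / TONNE_FORCE_N:.0f} tonne-force")
# prints roughly "235.6 kN ≈ 24 tonne-force", matching the hand calculation
```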
When working with a riveted or tensioned bolted joint, the strength comes from friction between the materials bolted together. Bolts are correctly torqued to maintain the friction. The shear force only becomes relevant when the bolts are not torqued.
A bolt with property class 12.9 has a tensile strength of 1200 MPa (1 MPa = 1 N/mm2) or 1.2 kN/mm2, and the yield strength is 0.90 times the tensile strength, 1080 MPa in this case.
A bolt with property class 4.6 has a tensile strength of 400 MPa (1 MPa = 1 N/mm2) or 0.4 kN/mm2, and the yield strength is 0.60 times the tensile strength, 240 MPa in this case.
See also
ASTM F568M, mechanical properties of different grades of steel fasteners
Cantilever method
Résal effect
Newton's laws of motion § Newton's third law
References
Force
Civil engineering | Shear force | Physics,Mathematics,Engineering | 544 |
40,248 | https://en.wikipedia.org/wiki/Variable%20Specific%20Impulse%20Magnetoplasma%20Rocket | The Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is an electrothermal thruster under development for possible use in spacecraft propulsion. It uses radio waves to ionize and heat an inert propellant, forming a plasma, then a magnetic field to confine and accelerate the expanding plasma, generating thrust. It is a plasma propulsion engine, one of several types of spacecraft electric propulsion systems.
The VASIMR method for heating plasma was originally developed during nuclear fusion research. VASIMR is intended to bridge the gap between high thrust, low specific impulse chemical rockets and low thrust, high specific impulse electric propulsion, but has not yet demonstrated high thrust. The VASIMR concept originated in 1977 with former NASA astronaut Franklin Chang Díaz, who has been developing the technology ever since.
Design and operation
VASIMR is a type of electrothermal plasma thruster/electrothermal magnetoplasma thruster. In these engines, a neutral, inert propellant is ionized and heated using radio waves. The resulting plasma is then accelerated with magnetic fields to generate thrust. Other related electrically powered spacecraft propulsion concepts are the electrodeless plasma thruster, the microwave arcjet rocket, and the pulsed inductive thruster.
The propellant, a neutral gas such as argon or xenon, is injected into a hollow cylinder surfaced with electromagnets. On entering the engine, the gas is first heated to a "cold plasma" by a helicon RF antenna/coupler that bombards the gas with electromagnetic energy, at a frequency of 10 to 50 MHz, stripping electrons off the propellant atoms and producing a plasma of ions and free electrons. By varying the amount of RF heating energy and plasma, VASIMR is claimed to be capable of generating either low-thrust, high–specific impulse exhaust or relatively high-thrust, low–specific impulse exhaust. The second phase of the engine is a strong solenoid-configuration electromagnet that channels the ionized plasma, acting as a convergent-divergent nozzle like the physical nozzle in conventional rocket engines.
A second coupler, known as the Ion Cyclotron Heating (ICH) section, emits electromagnetic waves in resonance with the orbits of ions and electrons as they travel through the engine. Resonance is achieved through a reduction of the magnetic field in this portion of the engine that slows the orbital motion of the plasma particles. This section further heats the plasma to about 173 times the temperature of the Sun's surface—on the order of 1,000,000 K.
The path of ions and electrons through the engine approximates lines parallel to the engine walls; however, the particles actually orbit those lines while traveling linearly through the engine. The final, diverging, section of the engine contains an expanding magnetic field that ejects the ions and electrons from the engine at velocities as great as roughly 50 km/s.
Advantages
In contrast to the typical cyclotron resonance heating processes, VASIMR ions are immediately ejected from the magnetic nozzle before they achieve thermalized distribution. Based on novel theoretical work in 2004 by Alexey V. Arefiev and Boris N. Breizman of University of Texas at Austin, virtually all of the energy in the ion cyclotron wave is uniformly transferred to ionized plasma in a single-pass cyclotron absorption process. This allows for ions to leave the magnetic nozzle with a very narrow energy distribution, and for significantly simplified and compact magnet arrangement in the engine.
VASIMR does not use electrodes; instead, it magnetically shields plasma from most hardware parts, thus eliminating electrode erosion, a major source of wear in ion engines. Compared to traditional rocket engines with very complex plumbing, high performance valves, actuators and turbopumps, VASIMR has almost no moving parts (apart from minor ones, like gas valves), maximizing long term durability.
Disadvantages
According to Ad Astra as of 2015, the VX-200 engine requires 200 kW electrical power to produce 5 N of thrust, or 40 kW/N. In contrast, the conventional NEXT ion thruster produces 0.327 N with only 7.7 kW, or 24 kW/N. Electrically speaking, NEXT is almost twice as efficient, and successfully completed a 48,000-hour (5.5-year) test in December 2009.
New problems also emerge with VASIMR, such as interaction with strong magnetic fields and thermal management. The inefficiency with which VASIMR operates generates substantial waste heat that needs to be channeled away without creating thermal overload and thermal stress. The superconducting electromagnets necessary to contain hot plasma generate tesla-range magnetic fields that can cause problems with other onboard devices and produce unwanted torque by interaction with the magnetosphere. To counter this latter effect, two thruster units can be packaged with magnetic fields oriented in opposite directions, making a net zero-torque magnetic quadrupole.
Research and development
The first VASIMR experiment was conducted at Massachusetts Institute of Technology in 1983. Important refinements were introduced in the 1990s, including the use of the helicon plasma source, which replaced the plasma gun originally envisioned and its electrodes, adding to durability and long life.
As of 2010, Ad Astra Rocket Company (AARC) was responsible for VASIMR development, signing the first Space Act Agreement on 23 June 2005 to privatize VASIMR technology. Franklin Chang Díaz is Ad Astra's chairman and CEO, and the company had a testing facility in Liberia, Costa Rica on the campus of Earth University.
VX-10 to VX-50
In 1998, the first helicon plasma experiment was performed at the ASPL. VASIMR experiment 10 (VX-10) in 1998 achieved a helicon RF plasma discharge of up to 10 kW and VX-25 in 2002 of up to 25 kW. By 2005 progress at ASPL included full and efficient plasma production and acceleration of the plasma ions with the 50 kW, thrust VX-50. Published data on the 50 kW VX-50 showed the electrical efficiency to be 59% based on a 90% coupling efficiency and a 65% ion speed boosting efficiency.
VX-100
The 100 kilowatt VASIMR experiment was successfully running by 2007 and demonstrated efficient plasma production with an ionization cost below 100eV. VX-100 plasma output tripled the prior record of the VX-50.
The VX-100 was expected to have an ion speed boosting efficiency of 80%, but could not achieve this efficiency due to losses from the conversion of DC electric current to radio frequency power and the auxiliary equipment for the superconducting magnet. In contrast, 2009 state-of-the-art, proven ion engine designs such as NASA's High Power Electric Propulsion (HiPEP) operated at 80% total thruster/PPU energy efficiency.
VX-200
On 24 October 2008, the company announced in a press release that the helicon plasma generation component of the 200 kW VX-200 engine had reached operational status. The key enabling technology, solid-state DC-RF power-processing, reached 98% efficiency. The helicon discharge used 30 kW of radio waves to turn argon gas into plasma. The remaining 170 kW of power was allocated for acceleration of plasma in the second part of the engine, via ion cyclotron resonance heating.
Based on data from VX-100 testing, it was expected that, if room temperature superconductors are ever discovered, the VX-200 engine would have a system efficiency of 60–65% and a potential thrust level of 5 N. Optimal specific impulse appeared to be around 5,000 s using low cost argon propellant. One of the remaining untested issues was whether the hot plasma actually detached from the rocket. Another issue was waste heat management. About 60% of input energy became useful kinetic energy. Much of the remaining 40% is secondary ionizations from plasma crossing magnetic field lines and exhaust divergence. A significant portion of that 40% was waste heat (see energy conversion efficiency). Managing and rejecting that waste heat is critical.
Between April and September 2009, 200 kW tests were performed on the VX-200 prototype with 2 tesla superconducting magnets that are powered separately and not accounted for in any "efficiency" calculations. During November 2010, long duration, full power firing tests were performed, reaching steady state operation for 25 seconds and validating basic design characteristics.
Results presented in January 2011 confirmed that the design point for optimal efficiency on the VX-200 is a 50 km/s exhaust velocity, or an Isp of 5,000 s. The 200 kW VX-200 had executed more than 10,000 engine firings with argon propellant at full power by 2013, demonstrating greater than 70% thruster efficiency relative to RF power input.
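A rough consistency check of the quoted figures, using the standard ideal-thruster relations (specific impulse Isp = v/g0 and thrust F = 2ηP/v when a fraction η of the input power P becomes jet power); the 200 kW and 70% values are taken from the surrounding text, and the calculation is only illustrative rather than Ad Astra's published performance model.

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse_s(exhaust_velocity_ms: float) -> float:
    """Specific impulse in seconds for a given exhaust velocity."""
    return exhaust_velocity_ms / G0

def ideal_thrust_n(power_w: float, efficiency: float, exhaust_velocity_ms: float) -> float:
    """Thrust when a fraction `efficiency` of the input power becomes jet power.
    Jet power = 0.5 * mdot * v^2 = 0.5 * F * v, hence F = 2 * eta * P / v."""
    return 2.0 * efficiency * power_w / exhaust_velocity_ms

v_e = 50_000.0  # 50 km/s design-point exhaust velocity
print(round(specific_impulse_s(v_e)))                   # ~5099 s, i.e. roughly the quoted 5,000 s
print(round(ideal_thrust_n(200_000.0, 0.70, v_e), 1))   # ~5.6 N, consistent with the ~5 N figure above
```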
VX-200SS
In March 2015, Ad Astra announced a $10 million award from NASA to advance the technology readiness of the next version of the VASIMR engine, the VX-200SS to meet the needs of deep space missions. The SS in the name stands for "steady state", as a goal of the long duration test is to demonstrate continuous operation at thermal steady state.
In August 2016, Ad Astra announced completion of the milestones for the first year of its 3-year contract with NASA. This allowed for first high-power plasma firings of the engines, with a stated goal to reach 100hr and 100 kW by mid-2018. In August 2017, the company reported completing its Year 2 milestones for the VASIMR electric plasma rocket engine. NASA gave approval for Ad Astra to proceed with Year 3 after reviewing completion of a 10-hour cumulative test of the VX-200SS engine at 100kW. It appears as though the planned 200 kW design is being run at 100 kW for reasons that are not mentioned in the press release.
In August 2019, Ad Astra announced the successful completion of tests of a new generation radio-frequency (RF) Power Processing Unit (PPU) for the VASIMR engine, built by Aethera Technologies Ltd. of Canada. Ad Astra declared a power of 120 kW and >97% electrical-to-RF power efficiency, and that, at 52 kg, the new RF PPU is about 10x lighter than the PPUs of competing electric thrusters (power-to-weight ratio: 2.31 kW/kg)
In July 2021, Ad Astra announced the completion of a record-breaking test for the engine, running it for 28 hours at a power level of 82.5kW. A second test, conducted from July 12 to 16, successfully ran the engine for 88 hours at a power level of 80kW. Ad Astra anticipates conducting 100kW power level tests in 2023.
Potential applications
VASIMR has a comparatively poor thrust-to-weight ratio, and requires an ambient vacuum.
Proposed applications for VASIMR such as the rapid transportation of people to Mars would require a very high power, low mass energy source, ten times more efficient than a nuclear reactor (see nuclear electric rocket). In 2010 NASA Administrator Charles Bolden said that VASIMR technology could be the breakthrough technology that would reduce the travel time on a Mars mission from 2.5 years to 5 months. However this claim has not been repeated in the last decade.
In August 2008, Tim Glover, Ad Astra director of development, publicly stated that the first expected application of VASIMR engine is "hauling things [non-human cargo] from low-Earth orbit to low-lunar orbit" supporting NASA's return to Moon efforts.
Mars in 39 days
In order to conduct an imagined crewed trip to Mars in 39 days, the VASIMR would require an electrical power level far beyond anything currently possible.
On top of that, any power generation technology will produce waste heat. The necessary 200 megawatt reactor "with a power-to-mass density of 1,000 watts per kilogram" would require extremely efficient radiators to avoid the need for "football-field sized radiators".
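At that power density, a 200 megawatt power supply would by itself have a mass of roughly 200 tonnes (200,000,000 W ÷ 1,000 W/kg = 200,000 kg).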
See also
Comparison of orbital rocket engines
Rocket propulsion technologies (disambiguation)
Electric propulsion
Helicon Double Layer Thruster
Magnetoplasmadynamic thruster
Nano-particle field extraction thruster
Pulsed plasma thruster
Space fission reactors
Project Prometheus
Safe Affordable Fission Engine
Systems for Nuclear Auxiliary Power
TOPAZ nuclear reactor
References
Further reading
External links
"Plasma Rocket" (Video). Brink. Science. December 18, 2008.
NASA documents
Technical Paper: Rapid Mars Transits with Exhaust-Modulated Plasma Propulsion (PDF)
Variable-Specific-Impulse Magnetoplasma Rocket (Tech Brief)
Advanced Space Propulsion Laboratory: VASIMR
Propulsion Systems of the Future
Magnetic propulsion devices
Plasma technology and applications
Rocket propulsion
Ion engines | Variable Specific Impulse Magnetoplasma Rocket | Physics,Chemistry | 2,658 |
28,988,325 | https://en.wikipedia.org/wiki/Lactarius%20vinaceorufescens | Lactarius vinaceorufescens, commonly known as the yellow-staining milkcap or the yellow-latex milky, is a poisonous species of fungus in the family Russulaceae. It produces mushrooms with pinkish-cinnamon caps up to wide held by pinkish-white stems up to long. The closely spaced whitish to pinkish buff gills develop wine-red spots in age. When it is cut or injured, the mushroom oozes a white latex that rapidly turns bright sulfur-yellow. The species, common and widely distributed in North America, grows in the ground in association with conifer trees. There are several other Lactarius species that bear resemblance to L. vinaceorufescens, but most can be distinguished by differences in staining reactions, macroscopic characteristics, or habitat.
Taxonomy and classification
The species was first described by American mycologists Lexemuel Ray Hesler and Alexander H. Smith in 1960, based on specimens collected in Muskegon, Michigan in 1936. In the same publication, they also named the variety Lactarius vinaceorufescens var. fallax to account for individuals with prominently projecting pleurocystidia measuring 9–12 μm broad, but they reduced this to synonymy with the main species in their 1979 monograph of North American Lactarius species. The fungus is classified in the subsection Croceini of the subgenus Piperates in the genus Lactarius, along with other species with latex that stains the fruit body tissue yellow, or with latex that slowly become yellow upon exposure to air.
The specific epithet vinaceorufescens is derived from the Latin word meaning "becoming wine reddish". The mushroom is commonly known as the "yellow-latex milky" or the "yellow-staining milkcap".
Description
The cap of L. vinaceorufescens is initially convex, then becomes broadly convex to nearly flat, and reaches diameters of wide. The cap margin is rolled inwards at first, but later expands, becoming somewhat uplifted and uneven with age. The cap surface is smooth, pale pinkish cinnamon with pinkish buff at the margin when young, becoming darker pinkish cinnamon to orangey cinnamon when older, faintly zoned with bands or water spots of nearly the same color. The gills are attached to slightly decurrent, narrow, close together, and often forked near the stem. There are several tiers of lamellulae (short gills that do not fully extend to the stem from the cap margin). The gills are initially whitish to pinkish buff, later spotting wine red (vinaceous) to pinkish brown or dark reddish brown. The latex that is exuded when the mushroom is cut or injured is initially white, but rapidly turns sulfur-yellow.
The stem is long by thick, nearly equal in width throughout or enlarged slightly downward, and hollow. The stem surface is nearly smooth, with white to brownish stiff hairs at the base, pinkish-white overall, and darkening with age. The flesh is moderately thick, white to pinkish, staining bright sulfur yellow. It has an acrid taste. The spore print is white to yellowish. The mushrooms are poisonous; as a general rule, several guide books recommend to avoid the consumption of Lactarius species with latex that turns yellow.
The spores are roughly spherical to broadly ellipsoid, hyaline (translucent), amyloid, and measure 6.5–9 by 6–7 μm. They are ornamented with warts and ridges that sometimes form a partial reticulum, with prominences up to 0.8 μm. The basidia (spore-bearing cells) are four-spored, and measure 28–33 by 8–10 μm. The pleurocystidia (cystidia found on the gill faces) are roughly cylindrical to narrowly club-shaped when they are young, but soon broaden in the mid portion and taper to an abrupt point; they reach dimensions of 40–68 (up to 80 μm) by 9–13 μm. The cheilocystidia (cystidia on the gill edges) are roughly club-shaped or ventricose with acute apices, and measure 32–44 by 6–10 μm. Clamp connections are absent in the hyphae. The cap cuticle is a thin ixocutis composed of gelatinous hyphae that are typically 2–4 μm wide. Projecting out from the cuticle surface are the ends of numerous connective hyphae, about 5–15 μm long.
Similar species
Lactarius xanthogalactus has nearly identical microscopic features to L. vinaceorufescens, but macroscopically, it does not have the reddish-vinaceous stains that develop on the cap, gills, and stem of L. vinaceorufescens, and it grows on the ground under oak. Another lookalike is L. colorascens, but it may be distinguished from L. vinaceorufescens by several features: a smaller fruit body; a whitish cap that becomes brownish red with age and does not spot vinaceous or brown; bitter to faintly acrid latex; and slightly smaller spores. L. chrysorrheus is also similar, but it has a whitish to pale yellowish-cinnamon cap with slightly darker spots and grows under hardwoods (especially oak) on well-drained, often sandy soil, and its gills do not discolor or spot vinaceous or brown. L. scrobiculatus also has yellow-staining latex.
Other superficially similar species include L. rubrilacteus, L. rufus, L. subviscidus, L. fragilis and L. rufulus, but none of these species have the yellow staining reaction characteristic of L. vinaceorufescens. The edible species Lactarius helvus has an orange-brown to light grayish-brown cap with thin bands of dark grayish-brown, a watery latex, and whitish to tan flesh with an odor resembling maple sugar or burnt sugar. Lactarius theiogalus, the "sulfur-milk Lactarius", has an oranger cap and white latex that slowly changes yellow upon exposure to air; it is typically found in broadleaf and mixed woods.
Habitat and distribution
The fruit bodies of Lactarius vinaceorufescens grow scattered or in groups on the ground under pine between August and October. The species is known to develop mycorrhizal associations with Douglas fir (Pseudotsuga menziesii). It is a fairly common and widely distributed species in North America. The mushroom has been found in boreal forests and high-elevation forests of the Southern Appalachians, associated with the tree genera Picea, Abies, and Pinus. In California, it has been noted to commonly co-occur with L. fragilis, L. rubrilacteus, Russula emetica, and R. cremoricolor.
See also
List of Lactarius species
References
Cited text
External links
YouTube Video of yellowing reaction
vinaceorufescens
Fungi described in 1960
Fungi of North America
Poisonous fungi
Taxa named by Alexander H. Smith
Fungus species | Lactarius vinaceorufescens | Biology,Environmental_science | 1,507 |
14,925,692 | https://en.wikipedia.org/wiki/Cleaver%20%28geometry%29 | In geometry, a cleaver of a triangle is a line segment that bisects the perimeter of the triangle and has one endpoint at the midpoint of one of the three sides. They are not to be confused with splitters, which also bisect the perimeter, but with an endpoint on one of the triangle's vertices instead of its sides.
Construction
Each cleaver through the midpoint of one of the sides of a triangle is parallel to the internal angle bisector at the opposite vertex of the triangle.
The broken chord theorem of Archimedes provides another construction of the cleaver. Suppose the triangle to be bisected is triangle ABC, and that one endpoint of the cleaver is the midpoint of side AB. Form the circumcircle of triangle ABC and let M be the midpoint of the arc of the circumcircle from A through C to B. Then the other endpoint of the cleaver is the closest point of the triangle to M, and can be found by dropping a perpendicular from M to the longer of the two sides CA and CB.
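A small numerical sketch of the defining property: from the perimeter-bisection condition, the cleaver from the midpoint of AB meets the longer of the sides CA and CB at a distance |a − b|/2 from C (with a = |BC|, b = |CA|), so both halves of the boundary have length equal to the semiperimeter. The coordinates and function names below are illustrative only.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def cleaver_from_midpoint_ab(A, B, C):
    """Return (D, F): D is the midpoint of AB and F the other endpoint of the cleaver.
    F lies on the longer of CA and CB, at distance |a - b| / 2 from C."""
    a, b = dist(B, C), dist(C, A)
    D = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
    end = B if a >= b else A                      # walk from C along the longer side
    t = (abs(a - b) / 2) / max(a, b)
    F = (C[0] + t * (end[0] - C[0]), C[1] + t * (end[1] - C[1]))
    return D, F

A, B, C = (0.0, 0.0), (6.0, 0.0), (1.0, 4.0)
D, F = cleaver_from_midpoint_ab(A, B, C)
s = (dist(B, C) + dist(C, A) + dist(A, B)) / 2     # semiperimeter
half_via_A = dist(D, A) + dist(A, C) + dist(C, F)  # boundary path D -> A -> C -> F
half_via_B = dist(D, B) + dist(B, F)               # boundary path D -> B -> F (F lies on CB here)
print(round(half_via_A, 6), round(half_via_B, 6), round(s, 6))  # all three agree, up to rounding
```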
Related figures
The three cleavers concur at a point, the center of the Spieker circle.
See also
Splitter (geometry)
References
External links
Straight lines defined for a triangle | Cleaver (geometry) | Mathematics | 257 |
51,525,870 | https://en.wikipedia.org/wiki/Dianethole | Dianethole is a naturally occurring organic compound that is found in anise and fennel. It is a dimer of anethole. It has estrogenic activity, and along with anethole and photoanethole, may be responsible for the estrogenic effects of anise and fennel. These compounds bear resemblance to the estrogens stilbene and diethylstilbestrol, which may explain their estrogenic activity. In fact, it is said that diethylstilbestrol and related drugs were originally modeled after dianethole and photoanethole.
See also
Anol
Hexestrol
References
Phytoestrogens | Dianethole | Chemistry | 139 |
22,679,570 | https://en.wikipedia.org/wiki/Helvella%20elastica | Helvella elastica, commonly known as the flexible Helvella or the elastic saddle, is a species of fungus in the family Helvellaceae of the order Pezizales. It is found in Asia, Europe, and North America. It has a roughly saddle-shaped yellow-brown cap atop a whitish stipe, and grows on soil in woods. Another colloquial name is the brown elfin saddle.
Description
The fruit body of the fungus is grayish or olive-brown, saddle- or mitral-shaped (i.e., resembling a double mitre) and is attached only to the top of the stipe; it may be up to wide. The underside is white. The stipe is white, solid or filled with loosely stuffed hyphae, has a smooth surface, and is up to long by thick. The flesh of H. elastica is brittle and thin. The odor and taste are indistinct.
Microscopic characteristics
The spores are oblong to elliptical in shape, translucent (hyaline), contain one central oil drop (guttulate), and have dimensions of 18–22 by 10–14 μm; young spores have coarse surface warts, while older ones are smooth. The spore-bearing cells, the asci, are 260 by 17–19 μm. The paraphyses (sterile cells interspersed between the asci) are club-shaped, filled with oil drops, sometimes branched, and are 6–10 μm at the apex.
Similar species
The closely related fungus Helvella albipes has a thicker stipe and a two- to four-lobed cap. H. compressa and H. latispora have cap edges that are curled upward, rather than inward as in H. elastica. H. maculata has a similar cap but a ribbed stem. Gyromitra infula has an orange and more defined cap.
Distribution and habitat
This fungus is typically found fruiting singly, scattered, or clustered together on the ground or on wood in coniferous and deciduous woods. It has been found in Europe, western North America, Japan, and China. It is present in summer and fall.
Potential toxicity
Consumption of this fungus is not recommended as similar species in the family Helvellaceae contain the toxin gyromitrin.
Fibrinolytic activity
A 2005 Korean study investigated the ability of extracts from 67 different mushroom species to perform fibrinolysis, the process of breaking down blood clots caused by the protein fibrin. H. elastica was one of seven species that had this ability; the activity of the extract was 60% of that of plasmin, the positive control used in the experiment.
References
elastica
Fungi described in 1785
Fungi of Asia
Fungi of Europe
Fungi of North America
Fungus species | Helvella elastica | Biology | 588 |
3,295,254 | https://en.wikipedia.org/wiki/Testosterone%20enanthate | Testosterone enanthate is an androgen and anabolic steroid (AAS) medication which is used mainly in the treatment of low testosterone levels in men. It is also used in hormone therapy for women and transgender men. It is given by injection into muscle or subcutaneously usually once every one to four weeks.
Side effects of testosterone enanthate include symptoms of masculinization like acne, increased hair growth, voice changes, and increased sexual desire. The drug is a synthetic androgen and anabolic steroid and hence is an agonist of the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). Testosterone enanthate is a testosterone ester and a long-lasting prodrug of testosterone in the body. Because of this, it is considered to be a natural and bioidentical form of testosterone, which makes it useful for producing masculinization and suitable for androgen replacement therapy. Esterase enzymes break the ester bond in testosterone enanthate, releasing free testosterone and enanthic acid through hydrolysis.
This process ensures a sustained release of testosterone in the body.
Testosterone enanthate was introduced for medical use in 1954. Along with testosterone cypionate, testosterone undecanoate, and testosterone propionate, it is one of the most widely used testosterone esters. In addition to its medical use, testosterone enanthate is used to improve physique and performance. The drug is a controlled substance in many countries and so non-medical use is generally illicit.
Medical uses
Testosterone enanthate is used primarily in androgen replacement therapy. It is the most widely used form of testosterone in androgen replacement therapy. The medication is specifically approved, in the United States, for the treatment of hypogonadism in men, delayed puberty in boys, and breast cancer in women. It is also used in masculinizing hormone therapy for transgender men.
Side effects
Side effects of testosterone enanthate include virilization among others. Approximately 10 percent of testosterone enanthate will be converted to 5α-dihydrotestosterone in normal men. 5α-Dihydrotestosterone (DHT) can promote masculine characteristics in both males and females. These masculine characteristics include clitoral hypertrophy, androgenic alopecia, growth of body hair, and deepening of the voice. Dihydrotestosterone also plays an important role in male sexual function and may be a contributing factor in ischemic priapism in males, as shown in a study conducted on the use of finasteride to treat ischemic priapism in males. Testosterone enanthate can also lead to an increase in IGF-1 and IGFBP. Testosterone enanthate can also be converted to estradiol (E2) by the aromatase enzyme, which may lead to gynecomastia in males. Aromatase inhibitors, such as anastrozole, letrozole, exemestane, etc., can help to prevent the subsequent estrogenic activity of testosterone enanthate metabolites in the body.
Pharmacology
Pharmacodynamics
Testosterone enanthate is a prodrug of testosterone and is an androgen and anabolic–androgenic steroid (AAS). That is, it is an agonist of the androgen receptor (AR).
Testosterone enanthate is converted by the body to testosterone, which has both androgenic and anabolic effects; still, the relative potency of these effects can depend on various factors and is a topic of ongoing research. Esterase enzymes break the ester bond in testosterone enanthate, releasing free testosterone and enanthic acid through hydrolysis. This process ensures a sustained release of free bioavailable and bioactive testosterone in the body. Testosterone can either directly exert effects on target tissues or be metabolized by the 5α-reductase enzymes into 5α-dihydrotestosterone (DHT) or aromatized to estradiol (E2). Aromatization in this context is the process where testosterone is converted to estradiol (E2) by the enzyme aromatase (CYP19A1 in humans). This conversion involves changing the structure of testosterone so that ring A of the steroid nucleus becomes aromatic, making it an estrogen, a so-called female hormone, which plays various roles in the body, such as regulating reproductive functions and bone density. If not aromatized (not converted into an estrogen), both testosterone and DHT are bioactive and bind to an androgen receptor; however, DHT has a stronger binding affinity than testosterone and may have more androgenic effect in certain tissues (such as the prostate gland, skin and hair follicles) at lower levels.
Pharmacokinetics
Testosterone enanthate has an elimination half-life of 4.5 days and a mean residence time of 8.5 days when used as a depot intramuscular injection. It requires administration approximately once per week and produces large fluctuations in testosterone levels, with levels initially elevated and supraphysiological. When testosterone enanthate is dissolved in an oil (such as castor oil), the oil acts as a depot, or reservoir, that slowly releases the drug into the bloodstream. This slow release is due to the oil's viscosity and the gradual breakdown of the ester bond by esterase enzymes. The oil creates a barrier that slows the diffusion of testosterone enanthate into the surrounding tissues, resulting in a more controlled and prolonged release compared to injecting pure testosterone enanthate. The rate at which testosterone enanthate is released from oils can vary based on the oil's viscosity and other properties such as drug solubility in the oil.
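As a rough illustration (not from the article) of what a 4.5-day elimination half-life implies, the sketch below applies simple first-order decay; it ignores the slow absorption from the oil depot, so it understates how flat real concentration curves are. The dosing days chosen are arbitrary example values.

```python
half_life_days = 4.5   # elimination half-life quoted above

def fraction_remaining(t_days, t_half=half_life_days):
    # first-order elimination: the remaining fraction halves every half-life
    return 0.5 ** (t_days / t_half)

for day in (1, 4.5, 7, 14):
    print(f"day {day:>4}: {fraction_remaining(day):.2f} of the peak level remains")
```

With these assumptions only about a third of the peak level remains at the end of a one-week dosing interval, which is consistent with the large fluctuations described above.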
Chemistry
Testosterone enanthate, or testosterone 17β-heptanoate, is a synthetic androstane steroid and a derivative of testosterone. It is an androgen ester; specifically, it is the C17β enanthate (heptanoate) ester of testosterone.
History
Testosterone enanthate was described as early as 1952 and was first introduced for medical use in the United States in 1954 under the brand name Delatestryl.
Society and culture
Generic names
Testosterone enanthate is the generic name of the drug. It has also been referred to as testosterone heptanoate.
Brand names
Testosterone enanthate is marketed primarily under the brand name Delatestryl.
It is or has been marketed under a variety of other brand names as well, including, among others:
Andro LA
Andropository
Cypionat
Cypoprime
Depandro
Durathate
Everone
Testocyp
Testostroval
Testrin
Testro LA
Xyosted
Pharmaqo Labs
Availability
Testosterone enanthate is available in the United States and widely elsewhere throughout the world. Testosterone enanthate (testosterone heptanoate) is often available in concentrations of 200 mg per milliliter of fluid.
Legal status
Testosterone enanthate, along with other AAS, is a schedule III controlled substance in the United States under the Controlled Substances Act and a schedule IV controlled substance in Canada under the Controlled Drugs and Substances Act.
Research
As of October 2017, an auto-injection formulation of testosterone enanthate was in preregistration for the treatment of hypogonadism in the United States.
Xyosted
On October 1, 2018, the U.S. Food and Drug Administration (FDA) announced the approval of Xyosted. Xyosted, a product of Antares Pharma, Inc., is a single-use disposable auto-injector that dispenses testosterone enanthate. Xyosted is the first FDA-approved subcutaneous testosterone enanthate product for testosterone replacement therapy in adult males.
References
Anabolic–androgenic steroids
Androstanes
Enanthate esters
Ketones
Testosterone esters | Testosterone enanthate | Chemistry | 1,707 |
52,525,066 | https://en.wikipedia.org/wiki/Winmostar | Winmostar is a molecular modelling and visualisation software program that supports quantum chemistry, molecular dynamics, and solid-state physics calculations.
Development history
2001 Winmostar V0.40 Windows
2008 Winmostar V3.71
2012 Winmostar V4.00
2014 Winmostar V5.00
2015 Winmostar V6.00
2016 Winmostar V7.00
2017 Winmostar V8.00
2019 Winmostar V9.00
2020 Winmostar V10.00
References
External links
Winmostar web page
Molecular dynamics software | Winmostar | Chemistry | 111 |
1,876,394 | https://en.wikipedia.org/wiki/Cystocele | A cystocele, also known as a prolapsed bladder, is a medical condition in which a woman's bladder bulges into her vagina. Some may have no symptoms. Others may have trouble starting urination, urinary incontinence, or frequent urination. Complications may include recurrent urinary tract infections and urinary retention. Cystocele and a prolapsed urethra often occur together; the combination is called a cystourethrocele. Cystocele can negatively affect quality of life.
Causes include childbirth, constipation, chronic cough, heavy lifting, hysterectomy, genetics, and being overweight. The underlying mechanism involves weakening of muscles and connective tissue between the bladder and vagina. Diagnosis is often based on symptoms and examination.
If the cystocele causes few symptoms, avoiding heavy lifting or straining may be all that is recommended. In those with more significant symptoms a vaginal pessary, pelvic muscle exercises, or surgery may be recommended. The type of surgery typically done is known as a colporrhaphy. The condition becomes more common with age. About a third of women over the age of 50 are affected to some degree.
Signs and symptoms
The symptoms of a cystocele may include:
a vaginal bulge
the feeling that something is falling out of the vagina
the sensation of pelvic heaviness or fullness
difficulty starting a urine stream
a feeling of incomplete urination
frequent or urgent urination
fecal incontinence
frequent urinary tract infections
back and pelvic pain
fatigue
painful sexual intercourse
bleeding
A bladder that has dropped from its normal position and into the vagina can cause some forms of incontinence and incomplete emptying of the bladder.
Complications
Complications may include urinary retention, recurring urinary tract infections and incontinence. The anterior vaginal wall may actually protrude through the vaginal introitus (opening). This can interfere with sexual activity. Recurrent urinary tract infections are common for those who have urinary retention. In addition, though cystocele can be treated, some treatments may not alleviate troubling symptoms, and further treatment may need to be performed. Cystocele may affect quality of life; women who have cystocele tend to avoid leaving their home and avoid social situations. The resulting incontinence puts women at risk of being placed in a nursing home or long-term care facility.
Cause
A cystocele occurs when the muscles, fascia, tendons and connective tissues between a woman's bladder and vagina weaken, or detach. The type of cystocele that can develop can be due to one, two or three vaginal wall attachment failures: the midline defect, the paravaginal defect, and the transverse defect. The midline defect is a cystocele caused by the overstretching of the vaginal wall; the paravaginal defect is the separation of the vaginal connective tissue at the arcus tendineus fascia pelvis; the transverse defect is when the pubocervical fascia becomes detached from the top (apex) of the vagina. There is some pelvic prolapse in 40–60% of women who have given birth. Muscle injuries have been identified in women with cystocele. These injuries are more likely to occur in women who have given birth than those who have not. These muscular injuries result in less support to the anterior vaginal wall.
Some women with connective tissue disorders are predisposed to developing anterior vaginal wall collapse. Up to one third of women with Marfan syndrome have a history of vaginal wall collapse. In women with Ehlers-Danlos syndrome, the rate is about 3 out of 4.
Risk factors
Risk factors for developing a cystocele are:
an occupation involving or history of heavy lifting
pregnancy and childbirth
chronic lung disease/smoking
family history of cystocele
exercising incorrectly
ethnicity (risk is greater for Hispanic and white women)
hypoestrogenism
pelvic floor trauma
connective tissue disorders
spina bifida
hysterectomy
cancer treatment of pelvic organs
childbirth; correlates to the number of births
forceps delivery
age
chronically high intra-abdominal pressures
chronic obstructive pulmonary disease
constipation
obesity
Connective tissue disorders predispose women to developing cystocele and other pelvic organ prolapse. The tensile strength of the vaginal wall decreases when the structure of the collagen fibers changes and becomes weaker.
Diagnosis
There are two types of cystocele. The first is distension. This is thought to be due to the overstretching of the vaginal wall and is most often associated with aging, menopause and vaginal delivery. It can be observed when the rugae are less visible or even absent. The second type is displacement. Displacement is the detachment or abnormal elongation of supportive tissue.
The initial assessment of cystocele can include a pelvic exam to evaluate leakage of urine when the woman is asked to bear down or give a strong cough (Valsalva maneuver), and the anterior vaginal wall is measured and evaluated for the appearance of a cystocele. If a woman has difficulty emptying her bladder, the clinician may measure the amount of urine left in the woman's bladder after she urinates, called the postvoid residual. This is measured by ultrasound. A voiding cystourethrogram is a test that involves taking x-rays of the bladder during urination. This x-ray shows the shape of the bladder and lets the doctor see any problems that might block the normal flow of urine. A urine culture and sensitivity test will assess the presence of a urinary tract infection that may be related to urinary retention. Other tests may be needed to find or rule out problems in other parts of the urinary system. Differential diagnosis will be improved by identifying possible inflammation of the Skene's glands and Bartholin glands.
Grading
A number of scales exist to grade the severity of a cystocele.
The pelvic organ prolapse quantification (POP-Q) assessment, developed in 1996, quantifies the descent of the cystocele into the vagina. The POP-Q provides a reliable description of the support of the anterior, posterior and apical vaginal wall. It uses objective and precise measurements relative to the reference point, the hymen. Cystocele and prolapse of the vagina from other causes are staged using POP-Q criteria, which range from good support (no descent into the vagina), reported as POP-Q stage 0 or I, to a POP-Q score of IV, which includes prolapse beyond the hymen. It is also used to quantify the movement of other structures into the vaginal lumen and their descent.
The Baden–Walker Halfway Scoring System is the second most widely used system. It assigns grade 1 (mild) when the bladder droops only a short way into the vagina, grade 2 when the bladder sinks far enough to reach the opening of the vagina, and grade 3 when the bladder bulges out through the opening of the vagina.
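A toy sketch (purely illustrative, not clinical guidance; the string labels below are assumptions, not diagnostic criteria) showing how the Baden–Walker descriptions above map onto grades 1–3:

```python
# Hypothetical lookup only; grading in practice is done on physical examination.
BADEN_WALKER = {
    "short way into the vagina": 1,        # mild
    "reaches the vaginal opening": 2,
    "bulges through the vaginal opening": 3,
}

def baden_walker_grade(description):
    return BADEN_WALKER[description]

print(baden_walker_grade("reaches the vaginal opening"))   # 2
```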
Classifications
Cystocele can be further described as being apical, medial, or lateral.
Apical cystocele is located in the upper third of the vagina. The structures involved are the endopelvic fascia and ligaments. The cardinal ligaments and the uterosacral ligaments suspend the upper vaginal dome. The cystocele in this region of the vagina is thought to be due to a cardinal ligament defect.
Medial cystocele forms in the mid-vagina and is related to a defect in the sagittal suspension system formed by the uterosacral ligaments and the pubocervical fascia. The pubocervical fascia may thin or tear and create the cystocele. An aid in diagnosis is the creation of a 'shiny' spot on the epithelium of the vagina. This defect can be assessed by MRI.
Lateral cystocele forms when both the pelviperineal muscle and its ligamentous–fascial attachments develop a defect. The ligamentous–fascial attachments create a 'hammock-like' suspension and support for the lateral sides of the vagina. Defects in this lateral support system result in a lack of bladder support. Cystocele that develops laterally is associated with an anatomic imbalance between the anterior vaginal wall and the arcus tendineus fasciae pelvis – the essential ligament structure.
Prevention
Cystocele may be mild enough not to result in symptoms that are troubling to a woman. In this case, steps to prevent it from worsening include:
smoking cessation
losing weight
pelvic floor strengthening
treatment of a chronic cough
maintaining healthy bowel habits
eating high fiber foods
avoiding constipation and straining
Treatment
Treatment options range from no treatment for a mild cystocele to surgery for a more extensive cystocele. If a cystocele is not bothersome, the clinician may only recommend avoiding heavy lifting or straining that could cause the cystocele to worsen. If symptoms are moderately bothersome, the doctor may recommend a pessary, a device placed in the vagina to hold the bladder in place and to block protrusion. Treatment can consist of a combination of non-surgical and surgical management. Treatment choice is also related to age, desire to have children, severity of impairment, desire to continue sexual intercourse and other diseases that a woman may have.
Non-surgical
Cystocele is often treated by non-surgical means:
Pessary – This is a removable device inserted into the vagina to support the anterior vaginal wall. Pessaries come in many different shapes and sizes. Vaginal pessaries can immediately relieve prolapse and prolapse-related symptoms. There are sometimes complications with the use of a pessary.
Pelvic floor muscle therapy – Pelvic floor exercises to strengthen vaginal support can be of benefit. Specialized physical therapy can be prescribed to help strengthen the pelvic floor muscles.
Dietary changes – Ingesting high fiber foods will aid in promoting bowel movements.
Estrogen – intravaginal administration helps to prevent pelvic muscle atrophy
Surgery
The surgery to repair the anterior vaginal wall may be combined with other procedures that will repair the other points of pelvic organ support such as anterior-posterior repair and anterior colporrhaphy. Treatment of cystocele often accompanies the more invasive hysterectomy. Since the failure rate in cystocele repair remains high, additional surgery may be needed. Women who have surgery to repair a cystocele have a 17% chance of needing another operation within the next ten years.
The surgical treatment of cystocele will depend on the cause of the defect and whether it occurs at the top (apex), middle, or lower part of the anterior vaginal wall. The type of surgery will also depend on the type of damage that exists between supporting structures and the vaginal wall. One of the most common surgical repairs is colporrhaphy. This surgical procedure consists of making a longitudinal folding of the vaginal tissue, suturing it into place, and creating a stronger point of resistance to the intruding bladder wall. This repair has a 10–50% failure rate, and in some cases a surgeon may choose to use surgical mesh to strengthen the anterior vaginal wall and reinforce the repair.
During surgery, the repair of the vaginal wall consists of folding over and then suturing the existing tissue between the vagina and bladder to strengthen it. This tightens the layers of tissue to promote the replacement of the pelvic organs into their normal place. The surgery also provides more support for the bladder. This surgery is done by a surgeon specializing in gynecology and is performed in a hospital. Anesthesia varies according to the needs of each woman. Recovery may take four to six weeks. Other surgical treatment may be performed to treat cystocele. Support for the vaginal wall is accomplished with the paravaginal defect repair. This is a surgery, usually laparoscopic, that is done to the ligaments and fascia through the abdomen. The lateral ligaments and supportive structures are repaired, sometimes shortened to provide additional support to the vaginal wall.
Sacrocolpopexy is a procedure that stabilizes the vaginal vault (the uppermost portion of the vagina) and is often chosen as the treatment for cystocele, especially if previous surgeries were not successful. The procedure consists of attaching the vaginal vault to the sacrum. It has a success rate of 90%. Some women instead choose surgery that closes the vagina. This surgery, called colpocleisis, treats cystocele by closing the vaginal opening. This can be an option for women who no longer want to have vaginal intercourse.
If an enterocele/sigmoidocele, or prolapse of the rectum/colon, is also present, the surgical treatment will take this concurrent condition into account while planning and performing the repairs. Estrogen that is administered vaginally before surgical repair can strengthen the vaginal tissue providing a more successful outcome when mesh or sutures are used for the repair. Vaginal thickness increases after estrogen therapy. Another review on the surgical management of cystocele describes a more successful treatment that more strongly attaches the ligaments and fascia to the vagina to lift and stabilize it.
Post surgical complications can develop. The complications following surgical treatment of cystocele are:
side effects or reactions to anesthesia
bleeding
infection
painful intercourse
Urinary incontinence
constipation
bladder injuries
urethral injuries
urinary tract infection.
vaginal erosion due to mesh
After surgery, a woman is instructed to restrict her activities and monitor herself for signs of infection such as an elevated temperature, discharge with a foul odor and consistent pain. Clinicians may recommend that sneezing, coughing, and constipation are to be avoided. Splinting the abdomen while coughing provides support to an incised area and decreases pain on coughing. This is accomplished by applying gentle pressure to the surgical site for bracing during a cough.
Recurrent surgery on the pelvic organs may not be due to a failure of the surgery to correct the cystocele. Subsequent surgeries can be directly or indirectly relating to the primary surgery. Prolapse can occur at a different site in the vagina. Further surgery after the initial repair can be to treat complications of mesh displacement, pain, or bleeding. Further surgery may be needed to treat incontinence.
One goal of surgical treatment is to restore the vagina and other pelvic organs to their anatomically normal positions. This may not be the outcome that is most important to the woman being treated, who may only want relief of symptoms and an improvement in her quality of life. The International Urogynecological Association (IUGA) has recommended that the data collected regarding the success of cystocele and pelvic organ repairs include the presence or absence of symptoms, satisfaction and quality of life. Other measures of a successful outcome should include perioperative data, such as operative time and hospital stay. Standardized health-related quality of life measures should be part of the assessment of a successful resolution of cystocele. Data regarding short- and long-term complications are included in the recommendations of the IUGA to better assess the risk–benefit ratio of each procedure. Current investigations into the superiority of using biological grafting versus native tissue or surgical mesh indicate that using grafts provides better results.
Epidemiology
A large study found a rate of 29% over the lifetime of a woman. Other studies indicate a recurrence rate as low as 3%.
In the US, more than 200,000 surgeries are performed each year for pelvic organ prolapse, and 81% of these are to correct cystocele. Cystocele occurs most frequently compared to the prolapse of other pelvic organs and structures. Cystocele is found to be three times as common as vaginal vault prolapse and twice as common as posterior vaginal wall defects. The incidence of cystocele is around 9 per 100 women-years. The highest incidence of symptoms occurs between the ages of 70 and 79 years. Based on population growth statistics, the number of women with prolapse will increase by a minimum of 46% by the year 2050 in the US. The rate of surgery to correct prolapse after hysterectomy is 3.6 per 1,000 women-years.
History
Notable is the mention of cystocele in many older cultures and locations. In 1500 B.C. Egyptians wrote about the "falling of the womb". In 400 B.C. a Greek physician documented his observations and treatments:
"After the patient had been tied to a ladder-like frame, she was tipped upward so that her head was toward the bottom of the frame. The frame was then moved upward and downward more or less rapidly for approximately 3–5 min. As the patient was in an inverted position, it was thought that the prolapsing organs of the genital tract would be returned to their normal position by the force of gravity and the shaking motion."
Hippocrates had his own theories regarding the cause of prolapse. He thought that recent childbirth, wet feet, 'sexual excesses', exertion, and fatigue may have contributed to the condition. Polybus, Hippocrates's son-in-law, wrote: "a prolapsed uterus was treated by using local astringent lotions, a natural sponge packed into the vagina, or placement of half a pomegranate in the vagina." In 350 A.D., another practitioner named Soranus described his treatments which stated that the pomegranate should be dipped into vinegar before insertion. Success could be enhanced if the woman was on bed rest and reduced intake of fluid and food. If the treatment was still not successful, the woman's legs were tied together for three days.
In 1521, Berengario da Carpi performed the first surgical treatment for prolapse. This was to tie a rope around the prolapse, tighten it for two days until it was no longer viable and cut it off. Wine, aloe, and honey were then applied to the stump.
In the 1700s, a Swiss gynecologist, Peyer, published a description of a cystocele. He was able to describe and document both cystocele and uterine prolapse. In 1730, Halder associated cystocele with childbirth. During this same time, efforts began to standardize the terminology that is still familiar today. In the 1800s, the surgical advancements of anesthesia, suturing, suturing materials and the acceptance of Joseph Lister's theories of antisepsis improved outcomes for women with cystocele. The first surgical techniques were practiced on female cadavers. In 1823, Geradin proposed that an incision and resection might provide treatment. In 1830, the first dissection of the vagina was performed by Dieffenbach on a living woman. In 1834, Mendé proposed that dissection and repair of the edges of the tissues could be done. In 1859, Huguier proposed that amputation of the cervix would solve the problem of elongation.
In 1866, a method of correcting a cystocele was proposed that resembled current procedures. Sim subsequently developed another procedure that did not require the full-thickness dissection of the vaginal wall. In 1888, another method of treating anterior vaginal wall prolapse, developed in Manchester, combined an anterior vaginal wall repair with an amputation of the cervix and a perineorrhaphy. In 1909, White noted the high rate of recurrence of cystocele repair. At this time it was proposed that reattaching the vagina to support structures was more successful and resulted in less recurrence. The same approach was proposed again in 1976, but further studies indicated that the recurrence rate was not better.
In 1888, treatments were tried that entered the abdomen to make reattachments. Some did not agree with this and suggested an approach through the inguinal canal. In 1898, further abdominal approaches were proposed. No further advances have been noted until 1961 when reattachment of the anterior vaginal wall to Cooper's ligament began to be used. Unfortunately, posterior vaginal wall prolapse occurred in some patients even though the anterior repair was successful.
In 1955, the use of mesh to support pelvic structures came into practice. In 1970, tissue from pigs began to be used to strengthen the anterior vaginal wall in surgery. Beginning in 1976, improvements in suturing were made, along with the use of surgical removal of the vagina to treat prolapse of the bladder. In 1991, assumptions about the detailed anatomy of the pelvic support structures began to be questioned regarding the existence of some pelvic structures and the non-existence of others. More recently, stem cells and robot-assisted laparoscopic surgery have been used to treat cystocele.
See also
Hysterectomy
Fecal incontinence
Sigmoidocele
Urethropexy
References
Further reading
Using splinting to support and diminish pain while coughing, Craven and Hirnle's Fundamentals of Nursing: Human Health and Function, 6th edition
External links
Cystocele, Pelvic Organ Prolapse
Noninflammatory disorders of female genital tract
Vagina
Wikipedia medicine articles ready to translate
Women's health
Urology
Incontinence
Gynecological surgery
Reproductive system
Oncology
Urinary bladder disorders
Urinary incontinence
Urinary system
Surgery | Cystocele | Biology | 4,604 |
33,835,484 | https://en.wikipedia.org/wiki/Herschel%20wedge | A Herschel wedge or Herschel prism is an optical prism used in solar observation to refract most of the light out of the optical path, allowing safe visual observation. It was first proposed and used by astronomer John Herschel in the 1830s.
Overview
The prism in a Herschel wedge has a trapezoidal cross section. The surface of the prism facing the light acts as a standard diagonal mirror, reflecting a small portion of the incoming light at 90 degrees into the eyepiece. The trapezoidal prism shape refracts the remainder of the light gathered by the telescope's objective away at an angle. The Herschel wedge reflects about 4.6% of the light that passes through one of the prism faces that is flat to 1/10 of the wavelength of the light. The remaining ~95.4% of the light and heat goes into the prism and exits through the other face and out the back door of the housing; thus, the excess light and heat is disposed of and not used for observing. While they decrease the intensity of the light, they do not affect the visible spectra, resulting in a more accurate spectral profile, which can be filtered to bring out certain details. They are an alternative to white light filters, which, despite their name, inherently must block certain visible spectra.
Limitations
Herschel wedges present a unique set of hazards and design considerations for the amateur astronomer. Unlike a full-aperture ND solar filter, a sub-aperture solar filter like a Herschel wedge allows the full intensity of sunlight to be concentrated by the primary optic.
Secondary optics such as field flatteners, focal reducers, secondary mirrors, and bandpass filters that are upstream of the Herschel wedge but downstream of the primary optic can overheat and be damaged. Reflectors are extremely dangerous to use with Herschel wedges, since their optical path is poorly contained. While fear of damaging the telescope is often the stated reason for avoiding sub-aperture solar filters on reflective telescopes, the blinding hazard posed by reflectors is perhaps even more compelling.
Unlike refractors, whose focal planes lie to the rear of the telescope, reflectors like SCT, Newtonian, RCT, Gregorian, and RASA telescopes have primary mirrors that focus light to a plane in front of the telescope. While some designs use this focal plane as is, others use additional lenses or reflective optics to both correct and move a small portion of this focal plane to a separate area on the telescope. However, it is important to remember that the majority of this focal plane remains in free space, and when it is allowed to focus unfiltered sunlight, as in the case of a telescope used with a Herschel wedge, it can have disastrous consequences. Looking down the front of a reflecting telescope in direct unfiltered sunlight is no different from staring into the eyepiece of a telescope aimed at the sun without a filter. The large size of reflecting primary mirrors creates the potential for this focal point to burn the inside of a telescope tube or even nearby objects in the vicinity of the telescope.
People who have made a habit of inspecting the inside of their telescope by viewing it from the front, or even those who simply want to cap it while it is outside, may not realize that the same action during the day under sunlight will blind them. Others who use Newtonian telescopes, where the user needs to stand directly above the telescope to reach the eyepiece, may be burned or blinded by sunlight while slewing across the sky.
It is also important to note that even at 4.5% transmission (about ND 1.35), the light from the sun is still strong enough to burn the retina, so an appropriate neutral-density filter must still be used.
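The relationship between transmitted fraction and neutral-density (ND) value quoted above is ND = -log10(T). A quick numerical check (not from the article; the ND 5 figure used as a comparison below is a common rule of thumb for visual solar filters, included here as an assumption):

```python
import math

def nd_value(transmitted_fraction):
    # neutral density: attenuation expressed as a base-10 logarithm
    return -math.log10(transmitted_fraction)

print(round(nd_value(0.045), 2))   # ~1.35, matching the figure quoted above
print(round(nd_value(1e-5), 2))    # 5.0, i.e. 0.001% transmission
```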
See also
List of telescope parts and construction
External links
Astronomical instruments
Astronomical imaging
Solar telescopes
Prisms (optics) | Herschel wedge | Astronomy | 775 |
18,146,395 | https://en.wikipedia.org/wiki/Insulated%20pipe | Insulated pipes (also called preinsulated pipes or bonded pipes) are widely used for district heating and hot water supply. They consist of a steel pipe called the "service pipe", a thermal insulation layer, and an outer casing. The insulation bonds the service pipe and the casing together. The main purpose of such pipes is to maintain the temperature of the fluid inside the service pipes. Insulated pipes are commonly used for transport of hot water from district heating plants to district heating networks and for distribution of hot water inside district heating networks.
Thermal insulation material usually used is polyurethane foam or similar, with a thermal conductivity λ50 of about 0.024–0.033 W/(m·K).
While polyurethane has outstanding mechanical and thermal properties, the high toxicity of the diisocyanates required for its manufacture has led to restrictions on their use. This has triggered research on alternative insulating foams suited to the application, including polyethylene terephthalate (PET) and polybutylene (PB-1).
The outer casing is usually made of high-density polyethylene (HDPE).
Preinsulated pipes for district heating are described in European standards EN 253 and EN 15698-1. EN 253 describes "District heating pipes - Bonded single pipe systems for directly buried hot water networks - Factory made pipe assembly of steel service pipe, polyurethane thermal insulation and a casing of polyethylene". EN 15698-1 describes "District heating pipes - Bonded twin pipe systems for directly buried hot water networks - Factory made twin pipe assembly of steel service pipes, polyurethane thermal insulation and one casing of polyethylene". Neither standard gives a "short name" or abbreviation for the described pipes.
According to EN 253:2019 and EN 15698-1:2019, pipes must be produced to work at a constant operating temperature for 30 years. Thermal conductivity λ50 in the unaged condition shall not exceed 0.029 W/(m·K). Both standards describe three insulation thickness levels, and both require the use of polyurethane foam for thermal insulation and HDPE for the casing.
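To show how the quoted λ50 values translate into heat loss, the sketch below (an illustration, not from EN 253; the radii, temperature difference and chosen conductivity are example assumptions) applies the standard formula for steady-state radial conduction through a cylindrical insulation layer, q = 2πλΔT / ln(r_outer/r_inner) per metre of pipe:

```python
import math

lam = 0.027          # W/(m.K), assumed value within the 0.024-0.033 range quoted above
r_inner = 0.0508     # m, assumed outer radius of the steel service pipe
r_outer = 0.1000     # m, assumed outer radius of the insulation layer
dT = 80.0            # K, assumed temperature difference across the insulation

q_per_metre = 2 * math.pi * lam * dT / math.log(r_outer / r_inner)
print(f"{q_per_metre:.1f} W per metre of pipe")   # roughly 20 W/m with these numbers
```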
Insulated pipelines are usually assembled from standard factory-made pipe lengths and directly buried in soil.
See also
Heat conduction
Heat transfer
Heat transfer mechanisms
R-value
Specific Heat
Thermal bridge
Thermal contact conductance
Thermal diffusivity
Thermal resistance in electronics
Thermocouple
References
Energy conservation
Heating, ventilation, and air conditioning
Piping
Residential heating
de:Kunststoffmantelverbundrohr | Insulated pipe | Chemistry,Engineering | 546 |
24,346,670 | https://en.wikipedia.org/wiki/Motorola%20Pageboy | Motorola Pageboy was a pager produced by Motorola. In the 1960s, when pagers were mainly used by medical professionals, the Pageboy was considered "cutting edge and compact", measuring 5.25 inches by 2.36 inches.
As of 1967, low-frequency Pageboys were priced at $180, while very-high-frequency (VHF) units cost $275 in the United States.
See also
List of Motorola products
References
Pageboy
Pagers | Motorola Pageboy | Technology | 92 |
44,198,564 | https://en.wikipedia.org/wiki/Lighting%20as%20a%20service | Lighting as a service (LaaS), also known as light as a service, is a service-based business model in which light service is charged on a subscription basis rather than via a one-time payment. It is managed by third parties, more precisely, by specialized service providers and may include light design, financing, installation, maintenance and other services. The model enables customers to outsource lighting aspects of their business over a set time.
Unlike an operating lease, the LaaS provider does not transfer the ownership of a product but maintains the ownership of the equipment throughout the duration of the subscription contract. This makes it an environmentally friendly business approach allowing an extended product life with several lifecycles. The LaaS model has become more common in commercial installations of LED lights, specifically in retrofitting buildings and outdoor facilities, with the primary aim of reducing installation costs. Since lighting design nowadays has to consider aspects such as safety and health at work, environmental performance, energy consumption and durability of the products, the model is continuously gaining importance for company structures. Refitting and maintaining buildings with energy-efficient lighting systems by means of a LaaS provider enables them to be operated more economically. Light vendors have used an LaaS strategy in selling value-added services such as Internet-connected lighting and energy management.
History
Over the years, the lighting industry has faced a major disruption due to the shift in demand from conventional light sources to energy-efficient lighting. Especially in the commercial sector, this industrial change has caused higher demand for long-life LED products, which come with lower energy consumption and improved physical robustness. While conventional light bulbs have to be replaced approximately every year or two, LED products last around 20 years. The idea of Lighting as a Service was invented in response to business demand to decrease costs and boost energy efficiency without any upfront investment from the end user's side. All of this is achieved with the help of light control systems and the Internet of Things.
The concept of selling lighting as a service was developed by Thomas Rau in collaboration with Philips throughout the first "Pay per Lux" project. Customers would only pay for the amount of light they use. The idea originated when Thomas Rau equipped the office of RAU Architects in Amsterdam. Instead of purchasing lighting infrastructure that comes with high costs and the need to be replaced after some time, he applied the model of light as a service to furnish the building with lighting. LED installations along with a sensor and controller system to minimize energy use were provided by Philips, who maintained the ownership of the set-up. RAU Architects on the other hand benefited from the entire maintenance service.
Value proposition
Lighting as a service offerings include indoor and outdoor service types and are usually demanded by end users from the commercial, municipal or industrial sectors. LaaS supplies client companies with efficient lighting technology and in most cases undertakes associated services including project management, product delivery and installation as well as maintenance.
Pros
Since no upfront investment has to be made, subscribing to LaaS offerings ties up fewer of the customer's own resources. This subsequently reduces risks and costs for the end user. Furthermore, modern lighting technology solutions used by LaaS vendors, such as intelligent LED light systems or daylight and shading technology, result in lower energy consumption, and thus lower energy costs, reduced carbon emissions and healthier lighting in general. The light supplier, as an expert in the lighting industry, remains accountable for monitoring and maintaining the lighting systems (including replacements), complying with new legal requirements, and dealing with recycling issues.
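A back-of-envelope sketch of the energy-cost saving that motivates such contracts (all figures below are illustrative assumptions, not values from the article):

```python
fixtures = 200
watts_old, watts_led = 58.0, 20.0   # assumed per-fixture power draw, conventional vs LED
hours_per_year = 3_000              # assumed annual burning hours
price_per_kwh = 0.25                # assumed electricity price

kwh_saved = fixtures * (watts_old - watts_led) * hours_per_year / 1000.0
cost_saved = kwh_saved * price_per_kwh
print(f"{kwh_saved:,.0f} kWh saved, about {cost_saved:,.0f} in energy cost per year")
```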
Cons
The subscription to LaaS offerings does not involve the acquisition of the lighting product itself; the provider retains ownership. Throughout the duration of the contract the client is bound to its provider, and depending upon the contract arrangement and funding model, contracts can be set up for long durations.
Market
Development
The global lighting as a service market is growing rapidly, welcoming more and more lighting companies, IT integrators and facility management service providers trying to capture the new market by refining their business models into "as a service" offerings, but also through mergers, collaborations and partnerships. In the face of the current global situation, smart building technology vendors are launching lighting propositions to improve the infection control of buildings by killing bacteria and viruses on surfaces with lighting technologies such as ultraviolet light.
Key players
In terms of geographical distribution, the LaaS market is categorized into seven main regions: North and Latin America, Eastern Europe, Western Europe, Asia-Pacific, Japan and the Middle East and Africa. North America constitutes the largest market for LaaS, closely followed by Western and Eastern Europe, which is expected to witness the fastest growth rate in 2018-2025. This change is attributed to the widespread adoption of LaaS in Germany and the UK. Global key players are Cooper Industries Inc. (US), SIB Lighting (US), Cree Inc. (US), Digital Lumens Inc. (US), Lutron Electronics Company Inc. (US), General Electric Lighting (US), Itelecom USA (US), Future Energy Solutions (US & UK), Igor Inc. (UK), Legrand (France), Koninklijke Philips N.V. (Netherlands), Eaton Corporation (Ireland), Urban Volt (Ireland), Zumtobel Group (Austria), Osram AG (Germany), Deutsche Lichtmiete (Germany) and RCG Lighthouse (Latvia). The leading position of North America in the LaaS market is mostly due to the significant development regarding electricity consumption in the United States. Government implementation of certain energy standards helps reduce carbon emissions as well as the energy consumption rate, mostly through the use of energy-efficient light bulbs. These have to be either LED or fluorescent. The adoption of LED lighting, in turn, is catering to the market growth of LaaS models. The most prominent lighting renovation project is the Bristol-Myers Squibb site in Lawrenceville. The primary aim of the project is to harvest daylight through control systems and thus reduce the energy consumption of office spaces. Meanwhile, LaaS companies in Europe have come up with different lighting solutions, such as Eaton's launch of emergency lighting for large open-plan areas. Furthermore, cooperation between light vendors and LaaS providers, such as the alliance between Zumtobel and Deutsche Lichtmiete, leads to improved full-service business models, allowing the application segments to be broadened.
See also
as a service
References
service
Architectural_lighting_design
Building_automation
Energy-saving_lighting | Lighting as a service | Engineering | 1,354 |
60,785,291 | https://en.wikipedia.org/wiki/Design%20infringement | Design is a form of intellectual property right concerned with the visual appearance of articles which have commercial or industrial use. The visual form of the product is what is protected rather than the product itself. The visual features protected are the shape, configuration, pattern or ornamentation. A design infringement is where a person infringes a registered design during the period of registration. The definition of a design infringement differs in each jurisdiction but typically encompasses the purported use and make of the design, as well as if the design is imported or sold during registration. To understand if a person has infringed the monopoly of the registered design, the design is assessed under each jurisdiction's provisions. The infringement is of the visual appearance of the manufactured product rather than the function of the product, which is covered under patents. Often infringement decisions are more focused on the similarities between the two designs, rather than the differences.
Legislation
Australia
In Australia, a person infringes a registered design if a party manufactures and sells, uses or imports the same or similar design to the registered design without permission of the registered owner. This is held under s71 of the Design Act 2003 (Cth). The following is an extract of s71 of the Designs Act 2003 (Cth), under Infringement of Design."(1) A person infringes a registered design if, during the term of registration of the design, and without the licence or authority of the registered owner of the design, the person:
(a) makes or offers to make a product, in relation to which the design is registered, which embodies a design that is identical to, or substantially similar in overall impression to, the registered design; or
(b) imports such a product into Australia for sale, or for use for the purposes of any trade or business; or
(c) sells, hires or otherwise disposes of, or offers to sell, hire or otherwise dispose of, such a product; or
(d) uses such a product in any way for the purposes of any trade or business; or
(e) keeps such a product for the purpose of doing any of the things mentioned in paragraph (c) or (d)"The Designs Act recognises two types of infringement: primary and secondary infringement. A primary infringement relates to s71(1)(a), where a person directs, causes or procures the product to be made by a third party. Secondary infringement relate to ss 71(1)(b), (c), (d), (e), where a person infringes a registered design if there is no licence or authority given. A parallel import of a registered design is allowed in Australia.
The Designs Act 2003 replaced the Designs Act 1906, having a particular change to the way design infringements are identified. Key changes included removing tests of obvious and fraudulent imitations. Also introduced was that a certificate of examination must be issued prior to infringement proceedings. The test for infringement is significantly broader as it expressly requires an assessment of the similarities and differences between the registered design and the purported infringing design.
United Kingdom
Under the Registered Designs Act 1949, a design right is infringed when a person without consent from the registered design holder makes, offers, imports or exports the product. The acts that infringe the right in a registered design are laid out in Section 7A of the Registered Designs Act 1949. An infringement of the right in a registered design is actionable by the registered proprietor. The Act advises that the right in a registered design is not infringed if the act is done in private and is not commercial in nature, is experimental, or is a reproduction for teaching purposes. The UK Court of Appeal confirmed that in determining an infringing article, the registered design, the alleged infringing object, and the prior art must be evaluated. Put simply, the test is a visual comparison between the two designs. The Act also provides an exemption for innocent infringers. Damages are not awarded against a defendant if there is sufficient evidence to prove that he/she was not aware that the design was registered.
An alternative protection that the United Kingdom legislation offers is the principle of an unregistered design. To prosecute for an infringement, the unregistered design right holder must prove that they created the design in the first place and that the infringing article is a deliberate duplication. Further it must be proved that the shape and overall configuration of the protected product is not the same as any products that have been publicised before the design was created.
United States
In the United States, designs are governed by the patent statute, set out in 35 U.S.C. § 171 (Chapter 16). Here, protection is given for a new, original and ornamental design of an article. As in other jurisdictions, the design patent within the US only provides protection for the visual design aspects of the article, rather than the function. Chapter 28 of Title 35 (35 U.S.C. § 271) covers the infringement of patents, and defines an infringement as making, using, offering to sell, or selling a patented design without authority. Infringement also covers any attempt to infringe a design, and selling components of patented articles.
Section 271 outlines both direct and indirect infringement. Direct infringement encompasses the unauthorised importation of patented products (35 U.S.C. § 271(a)), and unauthorised importation of products of a patented process (35 U.S.C. § 271(a)). Indirect infringement imposes liability upon those who have aided another in direct infringement of a registered design (35 U.S.C. § 271(b)) or contributed to infringement (35 U.S.C. § 271(c)). This highlights the most common type of infringement, where the infringer's knowledge of the actions taken is established to confirm whether there is an infringement. Proof of intent is also necessary to show contribution to infringement. The statute does not provide protection for unregistered designs. To gain protection from infringement, and any design patent right, it is necessary to file a patent application.
Testing infringement
Infringement of a registered design can be identified through the 'eyes of an ordinary observer' test. This means that the appearance of an accused design is seen to be an infringement if the design is significantly similar and one may purchase the accused design product thinking that it is the patented design. This test is based on an ordinary observer being familiar with a product and being able to distinguish between the registered design and prior art designs. The case of Egyptian Goddess Inc. v Swisa Inc. was key in adopting the ordinary observer test. The Court found that for a design to be infringed, the accused design must have appropriated the registered design. The infringement lies in the similarity between the designs as distinguished from the prior art base. In assessing the overall similarity of designs, for example, s19 of the Designs Act 2003 (Cth) provides a list of factors to consider in testing infringement. These factors include considering the differences as well as the similarities between the designs, though more weight must be given to the similarities between the two designs. If one aspect of a design is substantially similar, weight must be given to the importance of that aspect of the design.
The decision maker must further take the point of view of a person who is familiar with the products, the informed user. The 'informed user' test can also be used to identify an infringement of a registered or unregistered design. An informed user can range from a consumer to a sectoral expert with technical proficiency. The user will be able to notice small differences between designs and can be seen as particularly observant, having had personal experience with, or key knowledge of, the product. The informed user need not be an expert or a consumer in every case. The selected informed user must be a person who has significant familiarity with the product's appearance, use and nature.
Audiences
Within design and patent law, experts are seen to be the primary decision makers in assessing the similarities and differences of designs. As infringement is judged from different audiences of the design, e.g., consumers and experts, the motivations for infringement are distinguished. Consumers note accused designs to be substitutes where they function in similar ways. To consumers the designs would be interchangeable.
The test for infringement of a design patent draws much more from trademark law than from patent law, as it evokes an audience of reasonable purchasers of the design or product, similar to that of the trademark test. As mentioned, infringement is judged "in the eye of an ordinary observer". From this, the audience for the test of infringement is an ordinary observer who is placed in the position of determining the similarities of the designs.
Enforcement
To commence enforcement proceedings it must be decided whether the Court will establish that the product is infringing the registered design, and whether the design is a valid registration. The registered design owner can only consider enforcement proceedings once a certificate of examination has been provided. To enforce design rights against an infringing designer the owner of the registered design must initiate the process of examination. This is in line with the certificate of examination. A design Registrar will not grant a certificate of examination if the design is found to be invalid because there is no newness or distinctiveness to the design. The examination will consist of a comparison against the designs that existed prior to the lodgement of the design application. The test of the ordinary observer enables registered designs with significant similarities to have a broader scope of enforcement. In many jurisdictions it is common for a party threatened with infringement to be allowed to seek relief even though the design has not yet been certified. Action may be taken to protect the goodwill and reputation of the design holder.
Courts
Courts are an essential aspect in the enforcement of design infringement. Court appointed experts are beneficial to enforcement proceedings, as a panel of assessors such as patent attorneys, designers and engineers enhance the limited technical knowledge a judge may have in a certain area. The Courts will assess damages based on the loss of profit and reputation of the design holder, and the profits made by the infringer. Case management is supported by the Court to enable the most economic and efficient method to bring the infringement proceedings to trial.
Alternative dispute resolution
Alternative dispute resolution can be a more effective way of resolving design infringement, as enforcement mechanisms are often not suited to the common disputes that arise. Design disputes can involve complex technical and commercial issues that can be better determined by an expert within alternative dispute resolution rather than employing witnesses within the courts. Arbitration and mediation are suitable for resolving intellectual property disputes, as most common disputes involve small claims for damages. For intellectual property disputes, alternative dispute resolution provides benefits including confidentiality, greater control over the process and a more neutral outcome. Alternative dispute resolution is more cost effective than litigation, therefore more attractive to smaller companies and individuals without the resources, time and funding to resolve cases in court.
References
Design
Intellectual property infringement
Intangible assets | Design infringement | Engineering | 2,234 |
6,916,771 | https://en.wikipedia.org/wiki/MVDS | MVDS is an acronym for terrestrial "Multipoint Video Distribution System".
MVDS currently is a part of broader MWS (Multimedia Wireless System) standards.
In the European Union MWS works in 10.7–13.5 and 40.5–43.5 GHz frequency bands.
Research for 42 GHz frequency has been done under the European Commission EMBRACE (Efficient Millimetre Broadband Radio Access for Convergence and Evolution) initiative.
Standards
ETSI
EN 300 748
EN 301 215-3
EN 301 997-2
UK Standards
MPT 1550 (obsolete)
MPT 1560 (obsolete)
CEPT
ERC/DEC/(99)15
ECC/REC/(01)04
Manufacturers of MVDS equipment
MDS America Inc
Newtec
EF Data
BluWan
Philips Broadband Network
Hughes Network Systems
Thales Group (Thomson)
Trophy electronics
Technosystem Digital Network S.p.A. (TDN)
Marconi Technology Centres (GMTT)
United Monolithic Semiconductors (UMS)
DOK Ltd (Elvalink)
Q-par Angus Ltd
ROKS
Mobile technology | MVDS | Technology | 221 |
77,741,822 | https://en.wikipedia.org/wiki/C13H16N2O4 | {{DISPLAYTITLE:C13H16N2O4}}
The molecular formula C13H16N2O4 may refer to:
N1-Acetyl-N2-formyl-5-methoxykynuramine
Phenylacetylglutamine | C13H16N2O4 | Chemistry | 64 |
71,542,093 | https://en.wikipedia.org/wiki/Myrtenal | Myrtenal is a bicyclic monoterpenoid with the chemical formula C10H14O. It is a naturally occurring molecule that can be found in numerous plant species including Hyssopus officinalis, Salvia absconditiflora, and Cyperus articulatus.
Biological research
Myrtenal has been shown in vitro to inhibit acetylcholinesterase, the inhibition of which is a common approach to treating Alzheimer's disease and dementia. In addition, myrtenal has been shown to have antioxidant properties in rats.
See also
Myrtenol
References
Aldehydes
Bicyclic compounds
Cycloalkenes
Monoterpenes | Myrtenal | Chemistry | 146 |
18,477,835 | https://en.wikipedia.org/wiki/Language%20intensity | Most investigators accept the definition of language intensity proposed by John Waite Bowers: a quality of language that "indicates the degree to which the speaker's attitude toward a concept deviates from neutrality." Intensity as a lexical variable in communication studies has generated extensive empirical research.
Theoretical setting
A theory proposed by Bradac, Bowers, and Courtright (1979, 1980) asserts causal relationships among intensity and a number of other psychological, social, and communication variables. An experimental study by Hamilton, Hunter, and Burgoon (1990) generally supports the relationships proposed by the theory at least in the limited domain of persuasion.
Intensity has been related to:
Other message variables including verbal immediacy, lexical diversity, message style, and verbal aggressiveness.
Psychological variables such as cognitive stress, arousal, and need for approval.
Attributional variables including attributions of source internality, attributions of source competence, and attributions of source similarity with audience.
Speaker–audience attitudinal congruency and discrepancy.
Credibility of message sources and of messages.
Information processing.
Practical variables such as response rate in e-mail surveys and family interventions protecting children from ultraviolet radiation.
Language expectancy theory
References
Anderson, P.A. & Blackburn, T.R. (2004). An experimental study of language intensity and response rate in email surveys. Communication Reports, 17, 73–84.
Badzinski, D.M. (1989). Message intensity and cognitive representations of discourse effects on inferential processing. Human Communication Research, 16, 3–32.
Basehart, J.R. (1971). Message opinionation and approval-dependence as determinants of receiver attitude change and recall. Speech Monographs, 38, 302–10.
Bourhis, R.; Giles, H. & Tajfel, H. (1973). Language as a determinant of Welsh identity. European Journal of Social Psychology, 3, 447–60.
Bowers, J.W. (1963). Language intensity, social introversion, and attitude change. Speech Monographs, 30, 345–52.
Bowers, J.W. (1964). Some correlates of language intensity. Quarterly Journal of Speech, 50, 415–20.
Bowers, J.W. (2006). Old eyes take a new look at Bradac's favorite variables. Journal of Language and Social Psychology, 25, 7–24.
Bradac, J.J.; Bowers, J.W. & Courtright, J.A. (1979). Three language variables in communication research: Intensity, immediacy, and diversity. Human Communication Research, 5, 257–69.
Bradac, J.J.; Bowers, J.W. & Courtright, J.A. (1980). Lexical variations in intensity, immediacy, and diversity: An axiomatic theory and causal model. In St. Clair, R.N. & Giles, H. (Eds.). The social and psychological contexts of language. Hillsdale, NJ: Lawrence Erlbaum, pp. 193–223.
Bradac, J.J.; Hosman, L.A. & Tardy, C.H. (1978). Reciprocal disclosures and language intensity: Attributional consequences. Communication Monographs, 45, 1–17.
Bradac, J.J.; Konsky, C.W. & Elliott, N.D. (1976). Verbal behavior of interviewees: The effects of several situational variables on verbal productivity, disfluency, and lexical diversity. Journal of Communication Disorders, 9, 211–25.
Buller, D.B.; Burgoon, M.; Hall, J.R.; Levine, N.; Taylor, A.M.; Beach, B.H.; Melcher, C.; Buller, M.K.; Bowen, S.L.; Hunsaker, F.G. & Bergen, A. (2000). Using language intensity to increase the success of family intervention to protect children from ultraviolet radiation. Preventive Medicine, 30, 103–13.
Burgoon, M.; Jones, S.B. & Stewart, D. (1975). Toward a message-centered theory of persuasion: Three empirical investigations of language intensity. Human Communication Research, 1, 240–56.
Burgoon, M. & Miller, G.R. (1971). Prior attitudes and language intensity as predictors of message style and attitude change following counterattitudinal advocacy. Journal of Personality and Social Psychology, 20, 240–53.
Carmichael, C.W. & Cronkhite, G.L. (1965). Frustration and language intensity. Speech Monographs, 32, 107–11.
Daly, J.A. & Miller, M.D. (1975). Apprehension of writing as a predictor of message intensity. Journal of Psychology, 89, 175–7.
Franzwa, H.H. (1969). Psychological factors influencing use of "evaluative-dynamic" language. Speech Monographs, 36, 103–9.
Greenberg, B.S. (1976). The effects of language intensity modifications on perceived verbal aggressiveness. Communication Monographs, 43, 130–9.
Hamilton, M.A.; Hunter, J.E. & Burgoon, M. (1990). An empirical test of an axiomatic model of the relationship between language intensity and persuasion. Journal of Language and Social Psychology, 9, 235–56.
Infante, D.A. (1975). Effects of opinionated language on communicative image and as conferring resistance to persuasion. Western Speech Communication, 39, 112–29.
McEwen, W.J. & Greenberg, B.S. (1970). Effects of message intensity on receiver evaluation of source, message, and topic. Journal of Communication, 20, 340–50.
Mehrley, R.S. & McCroskey, J.C. (1970). Opinionated statements and attitude intensity as predictors of attitude change and source credibility. Speech Monographs, 37, 47–52.
Miller, G.R. & Basehart, J. (1969). Source trustworthiness, opinionated statements, and responses to persuasive communication. Speech Monographs, 36, 1–7.
Miller, G.R. & Lobe, J. (1967). Opinionated language, open- and closed-mindedness and response to persuasive communications. Journal of Communication, 17, 333–41.
Osgood, C.E. & Walker, E.G. (1959). Motivation and language behavior: A content analysis of suicide notes. Journal of Abnormal and Social Psychology, 59, 58–67.
Rotter, J.B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs, 80, whole no. 609.
Wheeless, L.R. (1978). A follow-up study of the relationship among trust, disclosure, and interpersonal solidarity. Human Communication Research, 4, 143–57.
Notes
Pragmatics
Behavioral concepts | Language intensity | Biology | 1,516 |
842,004 | https://en.wikipedia.org/wiki/Piano%20nobile | Piano nobile (Italian for "noble floor" or "noble level", also sometimes referred to by the corresponding French term, bel étage) is the architectural term for the principal floor of a palazzo. This floor contains the main reception and bedrooms of the house.
The German term is Beletage (meaning "beautiful storey", from the French bel étage). Both date to the 17th century.
Characteristics
The piano nobile is usually the first floor (in European terminology; second floor in American terms) or sometimes the second storey and contains major rooms, located above the rusticated ground floor containing the minor rooms and service rooms. The reasons were so that the rooms above the ground floor would have finer views and to avoid the dampness and odours of the street level. That is especially true in Venice, where the piano nobile of the many palazzi is especially obvious from the exterior by virtue of its larger windows and balconies and open loggias. Examples are Ca' Foscari, Ca' d'Oro, Ca' Vendramin Calergi and Palazzo Barbarigo.
Larger windows than those on other floors are usually the most obvious feature of the piano nobile. In England and Italy, the piano nobile is often reached by an ornate outer staircase, which spared the floor's inhabitants the need to enter the house through the servants' floor below. Kedleston Hall is an example of this in England, as is Villa Capra "La Rotonda" in Italy.
Most houses contained a secondary floor above the piano nobile, which contained more intimate withdrawing rooms and bedrooms for private use by the family of the house when no honoured guests were present. Above that floor would often be an attic floor containing staff bedrooms.
In Italy, especially in Venetian palazzi, the floor above the piano nobile is sometimes referred to as the "secondo piano nobile" (second principal floor), especially if the loggias and balconies reflect those below on a slightly smaller scale. In those instances and occasionally in museums, the principal piano nobile is described as the primo piano nobile to differentiate it.
The arrangement of floors continued throughout Europe as large houses continued to be built in the classical style. The arrangement was still being employed at Buckingham Palace as recently as the mid-19th century. Holkham Hall, Osterley Park and Chiswick House are among the innumerable 18th-century English houses that employed the design.
Bibliography
Copplestone, Trewin (1963). World Architecture. Hamlyn.
Dal Lago, Adalbert (1966). Ville Antiche. Milan: Fratelli Fabbri.
Halliday, E. E. (1967). Cultural History of England. London: Thames and Hudson.
Harris, John; de Bellaigue, Geoffrey; & Miller, Oliver (1968). Buckingham Palace.
Hussey, Christopher (1955). English Country Houses: Early Georgian 1715–1760 London, Country Life.
Jackson-Stops, Gervase (1990). The Country House in Perspective. Pavilion Books Ltd.
Kaminski, Marion (1999). Art and Architecture of Venice. Könemann.
Architectural elements
Floors
Italian words and phrases | Piano nobile | Technology,Engineering | 674 |
36,533,748 | https://en.wikipedia.org/wiki/Optical%20tracer | An optical tracer is an X-Y tooling machine which utilises a photoeye to track toolpaths printed on a full-scale drawing and move a tool head accordingly. No Z-axis or cut commands can be read from the drawing, so an operator is still required to tell the machine when and how deep to cut.
It has been made largely obsolete by CNC systems.
See also
Polygraph
CNC
Machine tools | Optical tracer | Engineering | 92 |
508,181 | https://en.wikipedia.org/wiki/Marine%20VHF%20radio | Marine VHF radio is a worldwide system of two way radio transceivers on ships and watercraft used for bidirectional voice communication from ship-to-ship, ship-to-shore (for example with harbormasters), and in certain circumstances ship-to-aircraft. It uses FM channels in the very high frequency (VHF) radio band in the frequency range between 156 and 174 MHz, designated by the International Telecommunication Union as the VHF maritime mobile band. In some countries additional channels are used, such as the L and F channels for leisure and fishing vessels in the Nordic countries (at 155.5–155.825 MHz). Transmitter power is limited to 25 watts, giving them a range of about .
Marine VHF radio equipment is installed on all large ships and most seagoing small craft. It is also used, with slightly different regulation, on rivers and lakes. It is used for a wide variety of purposes, including marine navigation and traffic control, summoning rescue services and communicating with harbours, locks, bridges and marinas.
Background
Marine radio was the first commercial application of radio technology, allowing ships to keep in touch with shore and other ships, and send out a distress call for rescue in case of emergency. Guglielmo Marconi invented radio communication in the 1890s, and the Marconi Company installed wireless telegraphy stations on ships beginning around 1900. Marconi built a string of shore stations and in 1904 established the first Morse code distress call, the letters CQD, used until 1906 when SOS was agreed on. The first significant marine rescue due to radio was the 1909 sinking of the luxury liner RMS Republic, in which 1,500 lives were saved. This and the rescue of survivors from the sinking of the RMS Titanic in 1912 brought the field of marine radio to public consciousness, and marine radio operators were regarded as heroes. By 1920, the US had a string of 12 coastal stations stretched along the Atlantic seaboard from Bar Harbor, Maine to Cape May, New Jersey.
The first marine radio transmitters used the longwave bands. During World War I amplitude modulation was developed, and in the 1920s spark radiotelegraphy equipment was replaced by vacuum tube radiotelephony allowing voice communication. Also in the 1920s, the ionospheric skip or skywave phenomenon was discovered, which allowed lower power vacuum tube transmitters operating in the shortwave bands to communicate at long distances.
Hoping to foil German detection during the World War II Battle of the Atlantic, American and British convoy escorts used Talk-Between-Ships (TBS) radios operating on VHF.
Types of equipment
Sets can be fixed or portable. A fixed set generally has the advantages of a more reliable power source, higher transmit power, a larger and more effective antenna and a bigger display and buttons. A portable set (often essentially a waterproof, VHF walkie-talkie in design) can be carried on a kayak, or to a lifeboat in an emergency, has its own power source and is waterproof if GMDSS-approved. A few portable VHFs are even approved to be used as emergency radios in environments requiring intrinsically safe equipment (e.g. gas tankers, oil rigs, etc.).
Voice-only
Voice only equipment is the traditional type, which relies totally on the human voice for calling and communicating.
Many lower priced handheld units are voice only as well as older fixed units.
Digital selective calling
DSC equipment, a part of the Global Maritime Distress Safety System (GMDSS), provides all the functionality of voice-only equipment and, additionally, allows several other features:
The ability to call another vessel using a unique identifier known as a Maritime Mobile Service Identity (MMSI). This information is carried digitally and the receiving set will alert the operator of an incoming call once its own MMSI is detected. Calls are set up on the dedicated VHF channel 70 which DSC equipment must listen on continuously. The actual voice communication then takes place on a different channel specified by the caller.
A distress button, which automatically sends a digital distress signal identifying the calling vessel and the nature of the emergency
A built in GPS receiver or facility to connect an external GPS receiver so that the user's location may be transmitted automatically along with a distress call.
When a DSC radio is bought new, the user will get the opportunity to program it with the MMSI number of the ship it is intended to be used on. However, changing the MMSI after the initial programming can be problematic and may require special proprietary tools. This is allegedly done to prevent theft.
Automatic identification system
More advanced transceiver units support AIS. This relies on a GPS receiver built into the VHF equipment or an externally connected one by which the transceiver obtains its position and transmits this information along with some other details about the ship (MMSI, cargo, draught, destination and some others) to nearby ships. AIS operates as a mesh network and full featured units relay AIS messages from other ships, greatly extending the range of this system; however some low-end units are receive only or do not support the relaying functionality.
AIS data is carried on dedicated VHF channels 87B and 88B at 9,600 bit/s using GMSK modulation and uses a form of time-division multiplexing.
Text messaging
Using the RTCM 12301.1 standard it is possible to send and receive text messages in a similar fashion to SMS between marine VHF transceivers which comply with this standard. However, as of 2019 very few transceivers support this feature. The recipient of the message needs to be tuned to the same channel as the transmitting station in order to receive it.
Regulation
In the United States, any person can legally purchase a Marine VHF radio and use it to communicate without requiring any special license as long as they abide by certain rules, but in a great many other countries a license is required to transmit on Marine VHF frequencies.
In the United Kingdom and Ireland and some other European countries both the operator and the equipment must be separately licensed. A Short Range Certificate is the minimum requirement to use an installed marine VHF radio. This is usually obtained after completing a course of around two days and passing an exam. It is intended for those operating on lakes and in coastal areas, whereas a Long Range Certificate is usually recommended for those operating further out, as it also covers HF and MF radios as well as INMARSAT systems. Installations fixed on a particular vessel require a Ship Radio Licence. Portable equipment that could be used in multiple craft, dinghies, etc. requires a Ship Portable Radio Licence.
Automatic Transmitter Identification System (marine)
For use on the inland waterways within continental Europe, a compulsory Automatic Transmitter Identification System (ATIS) transmission conveys the vessel's identity after each voice transmission. This is a ten-digit code that is either an encoded version of the ship's alphanumeric call sign, or for vessels from outside the region, the ship MMSI prefixed with "9". The requirement to use ATIS in Europe, and which VHF channels may be used, are strongly regulated, most recently by the Basel agreements.
Channels and frequencies
A marine VHF set is a combined transmitter and receiver and only operates on standard, international frequencies known as channels. Channel 16 (156.8 MHz) is the international calling and distress channel. Transmission power ranges between 1 and 25 watts, giving a maximum range of up to about 60 nautical miles (111 km) between aerials mounted on tall ships and hills, and considerably less between aerials mounted on small boats at sea level. Frequency modulation (FM) is used, with vertical polarization, meaning that antennas have to be vertical in order to have good reception. For longer range communication at sea, marine MF and marine HF bands and satellite phones can be used.
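The quoted ranges are set mainly by the VHF radio horizon, which grows with antenna height. Below is a minimal, illustrative sketch (not taken from this article) using the common approximation d ≈ 4.12 (sqrt(h1) + sqrt(h2)) km for the radio horizon under standard atmospheric refraction; the constant and the example antenna heights are assumptions, not figures from the text.

```python
import math

def radio_horizon_km(h1_m, h2_m):
    """Approximate VHF line-of-sight range between two antennas.

    Uses d = 4.12 * (sqrt(h1) + sqrt(h2)) km, the usual rule of thumb for
    standard atmospheric refraction (4/3 effective Earth radius).
    """
    return 4.12 * (math.sqrt(h1_m) + math.sqrt(h2_m))

# Two small boats with 3 m antennas, versus a small boat and a 100 m coast mast:
print(round(radio_horizon_km(3, 3), 1), "km")      # roughly 14 km
print(round(radio_horizon_km(3, 100), 1), "km")    # roughly 48 km
```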
Half-duplex channels here are listed with the A and B frequencies the same. The frequencies, channels, and some of their purposes are governed by the ITU, which publishes the authoritative list. The original allocation of channels consisted of only channels 1 to 28 with 50 kHz spacing between channels, and the second frequency for full-duplex operation 4.6 MHz higher.
Improvements in radio technology later meant that the channel spacing could be reduced to 25 kHz with channels 60 to 88 interspersed between the original channels.
Channels 75 and 76 are omitted as they are either side of the calling and distress channel 16, acting as guard channels. The frequencies which would have been the second frequencies on half-duplex channels are not used for marine purposes and can be used for other purposes that vary by country. For example, 161.000 to 161.450 MHz are part of the allocation to the Association of American Railroads channels used by railways in the US and Canada.
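As an illustration of the channel plan described above, here is a minimal sketch (not from the article) that reconstructs nominal ship-transmit frequencies from a channel number. The base frequencies of 156.000 MHz (channels 1–28) and 156.025 MHz (interleaved channels 60–88) are assumptions chosen to be consistent with channel 16 at 156.8 MHz and the 50 kHz/25 kHz spacing described above; national variants and later channel-numbering schemes are ignored.

```python
def ship_tx_frequency_mhz(channel):
    """Nominal ship-transmit frequency for the original international channels.

    Assumptions (not stated in the article): channels 1-28 step in 50 kHz from a
    156.000 MHz base; channels 60-88 are interleaved 25 kHz higher. Duplex
    coast-station transmit frequencies sit 4.6 MHz above the ship frequency.
    """
    if channel in (75, 76):
        raise ValueError("channels 75 and 76 are guard channels around channel 16")
    if 1 <= channel <= 28:
        khz = 156_000 + 50 * channel
    elif 60 <= channel <= 88:
        khz = 156_025 + 50 * (channel - 60)
    else:
        raise ValueError("channel outside the original international plan")
    return khz / 1000.0

print(ship_tx_frequency_mhz(16))   # 156.8   (calling and distress)
print(ship_tx_frequency_mhz(70))   # 156.525 (digital selective calling)
print(ship_tx_frequency_mhz(6))    # 156.3
```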
Operating procedure
Marine VHF mostly uses half-duplex audio equipment and non-relayed transmissions. Ship to ship communication is over a single radio frequency (simplex), while ship to shore often uses full duplex frequency pairs, however the transceivers are usually half-duplex devices that cannot receive when transmitting even on a full-duplex channel. To transmit the user presses a "push to talk" button on the set or microphone which turns the transmitter on and the receiver off in a device with half-duplex audio, even on a full-duplex radio channel; on devices with full-duplex audio the receiver is left on while transmitting on a full-duplex radio channel. Communication can take place in both directions simultaneously on full-duplex channels when the equipment on both ends allows it. Full duplex channels can be used to place calls over the public telephone network for a fee via a marine operator. When equipment supporting full-duplex audio is used, the call is similar to one using a mobile phone or landline. When half-duplex is used, voice is only carried one way at a time and the party on the boat must press the transmit button only when speaking. This facility is still available in some areas, though its use has largely died out with the advent of mobile and satellite phones. Marine VHF radios can also receive weather radio broadcasts, where they are available.
The accepted conventions for use of marine radio are collectively termed "proper operating procedure". These international conventions include:
Stations should listen for 30 seconds before transmitting and not interrupt other stations.
Maintaining a watch listening on Channel 16 when not otherwise using the radio. All calls are established on channel 16 and then, except for distress working, switched to a working ship-to-ship or ship-to-shore channel. (Procedure varies in the U.S. only in that calls can also be established on Ch. 9.)
During distress operations, silence is maintained on Ch. 16 for all other traffic until the channel is released by the controlling station using the pro-word "Silence Fini". If a station does use Ch. 16 during distress operations, the controlling station issues the command "Silence Mayday".
Using a set of international "calling" procedures such as the "Mayday" distress call, the "Pan-pan" urgency call and "Sécurité" navigational hazard call.
Using "pro-words" based on the English language such as Acknowledge, All after, All before, All stations, Confirm, Correct, Correction, In figures, In letters, Over, Out, Radio check, Read back, Received, Say again, Spell, Standby, Station calling, This is, Wait, Word after, Word before, Wrong (local language is used for some of these, when talking to local stations)
Using the NATO phonetic alphabet: Alfa, Bravo, Charlie, Delta, Echo, Foxtrot, Golf, Hotel, India, Juliett, Kilo, Lima, Mike, November, Oscar, Papa, Quebec, Romeo, Sierra, Tango, Uniform, Victor, Whiskey, X-ray, Yankee, Zulu
Using a phonetic numbering system based on the English language or a combination of English and Romance languages: Wun, Too, Tree, Fow-er, Fife, Six, Sev-en, Ait, Nin-er, Zero, Decimal; alternatively in marine communication: unaone, bissotwo, terrathree, kartefour, pantafive, soxisix, setteseven, oktoeight, novenine, nadazero
Slightly adjusted regulations can apply for inland shipping, such as the Basel rules (German: Regionale Vereinbarung über den Binnenschifffahrtsfunk) in Western Europe.
Future
In 2022, the ETSI issued a proposal for implementing the use of FDMA protocols on the band in response to increasingly scarce availability of voice channels in some circumstances owing to the widespread use of systems such as AIS. The plan includes significantly narrower 6.25 kHz channel spacing, and would support voice and data applications.
See also
2182 kHz
Automated Maritime Telecommunications System
Maritime mobile amateur radio
Radio horizon
Ship-to-shore
References
External links
US Coast Guard basic radio information for boaters
Coast Guard marine channel listing (with frequencies)
US FCC marine channel listing (by function)
UK MCA advice on use of VHF at sea, including collision avoidance, effective ranges, and International channel usage*
Canadian VHF Bands in the Maritime Service
VHF marine band plan in Turkey (Türkiye'deki VHF Deniz Telsiz Frekans Kanal Listesi)
New Zealand VHF Radio Resource Center
Navigational equipment
Maritime communication
Rescue equipment
Marine electronics
fr:Bandes marines#Bande VHF | Marine VHF radio | Engineering | 2,739 |
49,517,223 | https://en.wikipedia.org/wiki/PSR%20J0538%2B2817 | PSR J0538+2817 is a pulsar situated in the constellation of Taurus. Discovered in 1996, it has attracted interest because it is physically linked to the supernova remnant SNR G180.8–02.2.
The characteristic age of PSR J0538+2817 gives an older estimate: 618,000 years. However, observation of the pulsar's proper motion gives a much younger result: 30,000 ± 4,000 years, implying that the pulsar must have been born spinning relatively slowly, with an initial period of about 139 milliseconds.
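The gap between the two age estimates can be turned into an estimate of the birth spin period. The sketch below (not from the article) assumes standard magnetic-dipole spin-down (braking index 3) and a present-day spin period of roughly 143 ms, which is an assumed value not given above; under those assumptions a 30,000-year-old pulsar with a 618,000-year characteristic age must have been born with a period close to its current one.

```python
import math

# Minimal sketch under assumed magnetic-dipole spin-down (braking index n = 3):
# true age t = tau_c * (1 - (P0 / P)**2), where tau_c is the characteristic age.
# The present-day period of ~0.143 s is an assumption, not a value from the text.

def birth_period(current_period_s, characteristic_age_yr, true_age_yr):
    """Invert t = tau_c * (1 - (P0/P)**2) to recover the birth period P0."""
    return current_period_s * math.sqrt(1.0 - true_age_yr / characteristic_age_yr)

p0 = birth_period(current_period_s=0.143,
                  characteristic_age_yr=618_000,
                  true_age_yr=30_000)
print(f"implied birth period = {p0 * 1e3:.0f} ms")   # roughly 139 ms
```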
References
Taurus (constellation)
Pulsars | PSR J0538+2817 | Astronomy | 139 |
67,092,078 | https://en.wikipedia.org/wiki/Women%20in%20Data%20Science%20Initiative | In the field of AI and data science, companies lag in their ability to attract and retain talent, innovate, and meet shareholder and stakeholder expectations, and the immense potential of women, who make up a significant portion of the population, remains largely untapped. Women in Data Science (WiDS) addresses the gender imbalance in data science and AI and works to remove barriers along a woman's journey, from her secondary school years to becoming a leader in her field. WiDS was founded at Stanford University, California, by Dr. Margot Gerritsen, Karen Matthys, and Dr. Esteban Arcaute as the Women in Data Science (WiDS) Worldwide Initiative. Its mission is to achieve 30% representation for women in the field of data science by 2030, with a long-term vision of full and equal representation in decision-making, economic prosperity, and opportunities. WiDS bases its ability to create impact on a strong network of universities, a global ambassador model, holistic programs addressing barriers in data science, and nine years of experience with educational resources; through these, the initiative provides scalable, culturally sensitive support and reaches over 150,000 women worldwide.
Impact
WiDS holds a Women in Data Science Worldwide conference annually, spotlighting only women speakers. These conferences are intended to inspire, educate, and sustain women in data science worldwide. In 2020, over 30,000 people participated, from 50 different countries. WiDS has reached over 100,000 women around the world. The Pune, India chapter of WiDS, for example, has over 5,000 members. Sucheta Dhere, ambassador of the WiDS Pune Chapter, noted that computer vision, natural language processing, and machine learning "have a huge hiring potential in India," particularly for women. In 2019, more than 250 women convened in Madrid for the WiDS conference, which brought together women working on artificial intelligence and robotics. The Cambridge WiDS event was held at the Massachusetts Institute of Technology in 2020. Its signature event was a panel discussion on data science and fake news called "Data Weaponized, Data Scrutinized: A War on Information".
WiDS elevates women on the WiDS Platform through workshops, webinars, podcasts (on topics including actionable ethics, automating machine learning, data analysis for health, and exploring artificial intelligence) and stories to raise visibility and inspire, while educating and lowering barriers to entry through programs like the Datathon and NextGen Data Days. Additionally, WiDS has global ambassadors whom it empowers by supporting and amplifying their efforts and providing opportunities for lifelong learning, career development, and progression, including through the WiDS UpLink job platform.
In 2024, WiDS has reached over 150,000 participants globally, including 5,000+ Datathon participants (75% women) from 100 countries, 2,300+ live workshop viewers, 1,000+ ambassadors across 77 countries, and 200 events worldwide, with 54% of ambassadors affiliated with universities or colleges.
Additionally, WiDS offers multiple ways to participate: Collaborators and sponsors support WiDS through active participation and funding of events, initiatives, and programs, while participants engage by attending conferences, workshops, and Datathons, and following WiDS across media platforms. Volunteers assist leadership and ambassadors in executing activities, while speakers, instructors, and podcast guests inspire by sharing knowledge. Ambassadors organize global events, advisors offer expert insights, and the central team leads in developing original content and resources for the community.
WiDS can also be followed across multiple social media platforms, including LinkedIn and Facebook groups, Instagram, and its YouTube channel, where it offers a variety of content. WiDS also maintains a website featuring blogs, events, monthly newsletters, and the programs and resources it has to offer.
References
Stanford University
Facial recognition software
Diversity in computing | Women in Data Science Initiative | Technology | 786 |
11,337,263 | https://en.wikipedia.org/wiki/Milman%27s%20reverse%20Brunn%E2%80%93Minkowski%20inequality | In mathematics, particularly, in asymptotic convex geometry, Milman's reverse Brunn–Minkowski inequality is a result due to Vitali Milman that provides a reverse inequality to the famous Brunn–Minkowski inequality for convex bodies in n-dimensional Euclidean space Rn. Namely, it bounds the volume of the Minkowski sum of two bodies from above in terms of the volumes of the bodies.
Introduction
Let K and L be convex bodies in Rn. The Brunn–Minkowski inequality states that
\operatorname{vol}(K + L)^{1/n} \geq \operatorname{vol}(K)^{1/n} + \operatorname{vol}(L)^{1/n},
where vol denotes n-dimensional Lebesgue measure and the + on the left-hand side denotes Minkowski addition.
In general, no reverse bound is possible, since one can find convex bodies K and L of unit volume so that the volume of their Minkowski sum is arbitrarily large. Milman's theorem states that one can replace one of the bodies by its image under a properly chosen volume-preserving linear map so that the left-hand side of the Brunn–Minkowski inequality is bounded by a constant multiple of the right-hand side.
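The absence of a general reverse bound can be seen with a hands-on example. The sketch below (not from the article) uses axis-aligned boxes, for which the Minkowski sum is again a box whose side lengths add; taking a long thin unit-volume box and the same box rotated by 90° makes vol(K + L) arbitrarily large.

```python
# A minimal sketch: for axis-aligned boxes the Minkowski sum is again a box whose
# side lengths add, so volumes can be computed exactly. Taking K long and thin and
# L the same box rotated by 90 degrees shows that vol(K + L) can be made arbitrarily
# large while vol(K) = vol(L) = 1.

def box_volume(sides):
    v = 1.0
    for s in sides:
        v *= s
    return v

def minkowski_sum_of_boxes(a, b):
    # The Minkowski sum of two axis-aligned boxes has side lengths a_i + b_i.
    return [x + y for x, y in zip(a, b)]

for t in (1.0, 10.0, 100.0):
    K = [t, 1.0 / t]          # unit-volume box, elongated along the x-axis
    L = [1.0 / t, t]          # the same box, elongated along the y-axis
    S = minkowski_sum_of_boxes(K, L)
    print(t, box_volume(K), box_volume(L), box_volume(S))
# vol(K + L) = (t + 1/t)**2 grows without bound as t increases.
```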
The result is one of the main structural theorems in the local theory of Banach spaces.
Statement of the inequality
There is a constant C, independent of n, such that for any two centrally symmetric convex bodies K and L in Rn, there are volume-preserving linear maps φ and ψ from Rn to itself such that for any real numbers s, t > 0
\operatorname{vol}\bigl(s\,\varphi(K) + t\,\psi(L)\bigr)^{1/n} \leq C \bigl(s\,\operatorname{vol}(K)^{1/n} + t\,\operatorname{vol}(L)^{1/n}\bigr).
One of the maps may be chosen to be the identity.
Notes
References
Asymptotic geometric analysis
Euclidean geometry
Geometric inequalities
Theorems in measure theory | Milman's reverse Brunn–Minkowski inequality | Mathematics | 329 |
11,746,640 | https://en.wikipedia.org/wiki/ProSyst | ProSyst Software GmbH was founded in Cologne in 1997 as a company specializing in Java software and middleware. ProSyst's first commercial application was a Java EE application server. In 2000, the company sold this server technology and has since focused completely on OSGi solutions.
In 1999, ProSyst was among the first companies to join the OSGi Alliance and since then has made important contributions to the development of each release of OSGi specifications (Release 1–4). ProSyst is a member of the OSGi Alliance board of directors alongside IBM, Nokia, NTT, Siemens, Oracle Corporation, Samsung, Motorola and Telcordia. Additionally, members of ProSyst staff serve in several positions on the OSGi Alliance.
In recent years ProSyst set its focus exclusively on the development of OSGi related software such as Frameworks, Bundles, Remote Management Systems and OSGi tools for developers including a full SDK available for download. ProSyst's OSGi applications are used by SmartHome devices, mobile phone manufacturers, network equipment providers (in CPEs), white goods manufacturers, car manufacturers and in the eHealth market.
ProSyst employs more than 120 Java and OSGi experts and offers OSGi related training, support (SLAs), technical consulting and development services.
As a member, ProSyst contributes to OSGi, Eclipse, Java Community Process, Nokia Forum Pro and the CVTA Connected Vehicle Trade Association.
ProSyst was acquired by Bosch in February 2015, and was merged into Bosch Group's software and systems unit Bosch Software Innovations GmbH.
Notable products
Commercial off-the-shelf products around OSGi mBS
Reduced-size Java client from 1999
References
Companies based in Cologne
Java platform software
Middleware
Software companies of Germany | ProSyst | Technology,Engineering | 362 |
8,220,315 | https://en.wikipedia.org/wiki/Verticillium%20wilt | Verticillium wilt is a wilt disease affecting over 350 species of eudicot plants. It is caused by six species of Verticillium fungi: V. dahliae, V. albo-atrum, V. longisporum, V. nubilum, V. theobromae and V. tricorpus. Many economically important plants are susceptible including cotton, tomatoes, potatoes, oilseed rape, eggplants, peppers and ornamentals, as well as others in natural vegetation communities. Many eudicot species and cultivars are resistant to the disease and all monocots, gymnosperms and ferns are immune.
Signs are superficially similar to Fusarium wilts. There are no fungicides characterized for the control of this disease but soil fumigation with chloropicrin has been proven successful in dramatically reducing Verticillium wilt in diverse crops such as vegetables using plasticulture production methods, and in non-tarped potato production in North America . Additional strategies to manage the disease include crop rotation, the use of resistant varieties and deep plowing (to accelerate the decomposition of infected plant residue). In recent years, pre-plant soil fumigation with chloropicrin in non-tarped, raised beds has proven to be economically viable and beneficial for reducing wilt disease and increasing yield and quality of potato in North America. Soil fumigation is a specialized practice requiring special permits, equipment, and expertise, so qualified personnel must be employed.
Hosts and symptoms
Verticillium spp. attack a very large host range including more than 350 species of vegetables, fruit trees, flowers, field crops, and shade or forest trees. Most vegetable species have some susceptibility, so it has a very wide host range. A list of known hosts is at the bottom of this page.
The symptoms are similar to those of most wilts, with a few specific to Verticillium. Wilt itself is the most common symptom, with wilting of the stem and leaves occurring due to the blockage of the xylem vascular tissues and therefore reduced water and nutrient flow. In small plants and seedlings, Verticillium can quickly kill the plant, while in larger, more developed plants the severity can vary. Sometimes only one side of the plant will appear infected because, once in the vascular tissues, the disease migrates mostly upward and not as much radially in the stem. Other symptoms include stunting, chlorosis or yellowing of the leaves, necrosis or tissue death, and defoliation. Internal vascular tissue discoloration might be visible when the stem is cut.
In Verticillium, the symptoms and effects will often only be on the lower or outer parts of plants or will be localized to only a few branches of a tree. In older plants, the infection can cause death, but often, especially with trees, the plant will be able to recover, or at least continue living with the infection. The severity of the infection plays a large role in how severe the signs are and how quickly they develop.
Disease cycle
While Verticillium spp. are very diverse, the basic life cycle of the pathogen is similar across species, except in their survival structures. The survival structures vary by species with V. albo-atrum forming mycelium, V. dahliae forming microsclerotia, V. nigrescens and V. nubilum forming chlamydospores, and V. tricorpus forming all three. While resting, many factors such as soil chemistry, temperature, hydration, micro fauna, and non-host crops all have an effect on the viability of the resting structure. Mycelium have been observed remaining viable for at least 4 years, while microsclerotia have been observed in fields planted with non-host crops for over 10 years and even 15 years has been reported. Viability is reduced at these extremes, but the long survivability of these structures is an important aspect for Verticillium control.
When roots of a host crop come near the resting structure (about 2mm), root exudate promotes germination and the fungi grows out of the structure and toward the plant. Being a vascular wilt, it will try to get to the vascular system on the inside of the plant, and therefore must enter the plant. Natural root wounds are the easiest way to enter, and these wounds occur naturally, even in healthy plants because of soil abrasion on roots. Verticillium has also been observed entering roots directly, but these infections rarely make it to the vascular system, especially those that enter through root hairs.
Once the pathogen enters the host, it makes its way to the vascular system, and specifically the xylem. The fungi can spread as hyphae through the plant, but can also spread as spores. Verticillium produce conidia on conidiophores and once conidia are released in the xylem, they can quickly colonize the plant. Conidia have been observed traveling to the top of cotton plants 24 hours after initial conidia inoculation, so the spread throughout the plant can occur very quickly. Sometimes the flow of conidia will be stopped by cross sections of the xylem, and here the conidia will spawn, and the fungal hyphae can overcome the barrier, and then produce more conidia on the other side.
A heavily infected plant can succumb to the disease and die. As this occurs, the Verticillium will form its survival structures and when the plant dies, its survival structures will be where the plant falls, releasing inoculates into the environment. The survival structures will then wait for a host plant to grow nearby and will start the cycle all over again.
Besides being long-lasting in the soil, Verticillium can spread in many ways. The most common way of spreading short distances is through root-to-root contact within the soil. Roots in natural conditions often have small wounds or openings in them that are easily colonized by Verticillium from an infected root nearby. Airborne conidia have been detected and some colonies observed, but the conidia mostly have difficulty developing above ground on healthy plants. In open channel irrigation, V. dahliae has been found in irrigation ditches up to a mile from the infected crop.
Without fungicidal seed treatments, infected seeds are easily transported and the disease spread, and Verticillium has been observed remaining viable for at least 13 months on some seeds. Planting infected seed potatoes can also be a source of inoculum to a new field. Finally, insects have also been shown to transmit the disease. Many insects including potato leaf hopper, leaf cutter bees, and aphids have been observed transmitting conidia of Verticillium and because these insects can cause damage to the plant creating an entry for the Verticillium, they can help transmit the disease.
Environment
While Verticillium wilts often have the same symptoms of Fusarium wilts, Verticillium can survive cold weather and winters much better than Fusarium, which prefers warmer climates. The resting structures of Verticillium are able to survive freezing, thawing, heat shock, dehydration, and many other factors and are quite robust and difficult to get rid of. The one factor they do not tolerate well is extended periods of anaerobic conditions (such as during flooding).
Verticillium will grow best between 20 and 28 degrees Celsius, but germination and growth can occur well below (or above) those temperatures. Still, Verticillium will generally not survive in the branches and trunks of infected trees during hot, dry seasons in regions such as summer in southern California. This does not generally "cure" the entire tree, however, and recurrence can happen via a reinfection from the roots during winter and spring. Water is necessary for resting structure germination, but is not as important for the spread of the fungus as in many other fungi. While not an environmental requirement for the fungus, stressed plants, often brought on by environmental changes, are easier to attack than healthy plants, so any conditions that will stress the plant but not directly harm the Verticillium will be beneficial for Verticillium wilt development.
Management
Verticillium wilt begins as a mild, local infection, which over a few years will grow in strength as more virile strains of the fungus develop. If left unchecked the disease will become so widespread that the crop will need to be replaced with resistant varieties, or a new crop will need to be planted altogether.
Control of Verticillium can be achieved by planting disease–free plants in uncontaminated soil, planting resistant varieties, and refraining from planting susceptible crops in areas that have been used repeatedly for solanaceous crops. Soil fumigation can also be used, with chloropicrin being particularly effective in reducing disease incidence in contaminated fields.
In tomato plants, the presence of ethylene during the initial stages of infection inhibits disease development, while in later stages of disease development the same hormone will cause greater wilt. Tomato plants are available that have been engineered with resistant genes that will tolerate the fungus while showing significantly lower signs of wilting.
Verticillium albo-atrum, V. dahliae and V. longisporum can overwinter as melanized mycelium or microsclerotia within live vegetation or plant debris. As a result, it can be important to clear plant debris to lower the spread of disease. V. dahliae and V. longisporum are able to survive as microsclerotia in soil for up to 15 years.
Importance
Verticillium wilt occurs in a broad range of hosts but has similar devastating effects on many of these plants. In general, it reduces the quality and quantity of a crop by causing discoloration in tissues, stunting, and premature defoliation and death. Stock from infested nurseries may be restricted. Once a plant is infected, there is no way to cure it. Verticillium wilt is especially a concern in temperate areas and areas that are irrigated. Verticillium spp. can naturally occur in forest soils and when these soils are cultivated, the pathogen will infect the crop.
The Salinas Valley in California has had severe problems with Verticillium wilt since 1995, most likely due to flooding in the winter of 1995. Many areas in the Salinas and Pajaro Valleys are unable to grow lettuce due to the high levels of Verticillium dahliae in the soil. Potatoes grown in Verticillium-infested soils may have a reduced yield of 30–50% compared to potatoes grown in "clean" soil. Verticillium wilt has also caused a shift in peppermint cultivation from the Midwest in the mid- to late-1800s to western states such as Oregon, Washington and Idaho, and more recently to new, non-infested areas within these states.
Lists of plants susceptible or resistant
Replanting susceptible species on the site of a removed plant that has succumbed to V. albo-atrum or V. dahliae is inadvisable because of the heightened risk of infection. Instead, resistant or immune varieties should be used. The following two lists show both susceptible and resistant/immune plants by Latin name.
(*) indicates that the plant occurs on both lists because different varieties or cultivars vary in their resistance.
(#) indicates that some strains are resistant.
(+) indicates susceptibility to some European strains of Verticillium albo-atrum.
Susceptible plants
Abelmoschus esculentus (also known as Hibiscus esculentus) (Okra)
Abutilon spp. (Abutilon)
Acer spp. (Maple)
Acer negundo (Box Elder)
Aconitum (Monkshood, Aconite)
Aesculus hippocastanum (Horsechestnut)
Aesculus glabra (Ohio Buckeye)
Ailanthus altissima (Tree of Heaven)
Albizia (Mimosa)
Amaranthus retroflexus (Rough Pigweed)
(*) Amelanchier (Serviceberry)
Antirrhinum majus (Snapdragon)
Arabidopsis thaliana (Thale cress)
Arachis hypogaea (Peanut)
Aralia cordata (Udo)
Aralia racemosa (American spikenard)
Armoracia lapathifolia (Horseradish)
Aster spp. (Aster)
Atropa belladonna (Belladonna)
Aucuba (Aucuba)
Berberis (Barberry)
Brassica napus (Oilseed rape, Rapeseed)
Brassica napobrassica (Rutabaga, Rapeseed)
Brassica oleracea var. botrytis (Cauliflower)
Brassica oleracea var. capitata (Cabbage)
Brassica oleracea var. gemmifera (Brussels Sprouts)
Buxus (Box, boxwood)
Calceolaria spp. (Slipperwort)
Callirhoe papaver (Poppy mallow)
Callistephus chinensis (Chinese Aster)
Camellia (Camellia)
Campanula spp. (Bellflower)
Campsis radicans (Trumpet Creeper)
Cannabis sativa (Hemp, Marijuana)
Capsicum spp. (Pepper)
Carpobrotus edulis (Ice Plant)
Carthamus tinctorius (Safflower)
Carya illinoensis (Pecan)
Catalpa speciosa (Northern Catalpa)
Catalpa bignonioides (Southern Catalpa)
Celosia argentea (Cockscomb)
Centaurea cyanus (Cornflower, Bachelor's button)
Centaurea imperialis (Sweet Sultan)
Ceratonia siliqua (Carob)
Cercis canadensis (Redbud)
Cercis siliquastrum (Judas Tree)
Chenopodium (Goosefoot)
(#) Chrysanthemum spp. (Chrysanthemum, Marguerite etc.)
Chrysanthemum leucanthemum (Oxeye Daisy)
Cinnamomum camphora (Camphor tree)
Cistus palhinhai (Rock rose)
Cistus x purpureus (Orchid Spot rock rose)
Citrullus vulgaris (Watermelon)
Cladrastis lutea (Yellow wood)
Clarkia elegans (Clarkia)
Coreopsis lanceolata (Tickseed)
(*) Cornus (Dogwood)
Cosmos (Cosmos)
Cotinus coggygria (Smoke Tree)
Cupaniopsis anacardioides (Carrotwood)
Cucumis melo (Honeydew, Cantaloupe and other melons)
Cucumis sativus (Cucumber)
Cucurbita pepo (Pumpkin)
Cydonia oblonga (Quince)
Cynara cardunculus (Globe artichoke)
Dahlia variabilis (Dahlia)
Delphinium ajacis (Rocket larkspur)
Digitalis purpurea (Foxglove)
Dimorphotheca sinuata (Cape marigold)
Diospyros virginiana (persimmon)
Dodonaea viscosa (Hopseed)
Echinacea purpurea (Eastern purple coneflower)
Elaeagnus (Oleaster, Russian Olive)
Erica spp. (Heather)
Erigeron (Fleabane)
Eschscholzia californica (California poppy)
Ficus benjamina (Weeping Fig)
Ficus retusa (Indian Laurel)
(#) Fragaria chiloensis (Strawberry)
Fraxinus pennsylvanica (Ash)
Fremontodendron spp. (Flannel bush, Fremontia)
Fuchsia spp. (Fuchsia)
Gerbera jamesonii (Transvaal daisy)
Gossypium spp. (Cotton)
Gymnocladus dioicus (Kentucky Coffeetree)
Hebe bollonsii (Hebe)
Hebe x carnea 'Carnea' (Hebe)
Hebe lewisii (Hebe)
Hedera (Ivy)
Helianthus spp. (Sunflower)
Helichrysum bracteatum (Strawflower)
Heliotropium arborescens (Heliotrope)
Humulus (Hop)
Impatiens balsamina (Garden balsam)
Impatiens walleriana (Busy Lizzie)
Jasminum (Jasmine)
Juglans regia (English walnut)
Koelreuteria paniculata (goldenrain tree)
Lampranthus spectabilis (Ice plant)
Lathyrus odoratus (Sweet pea)
Liatris spp. (Gayfeather)
Ligustrum spp. (Privet)
Linum usitatissimum (Linseed)
Liriodendron tulipifera (tulip tree)
Lobelia erinus (Lobelia)
Lonicera (Honeysuckle)
Lupinus polyphyllus (Lupin)
(#) Lycopersicon esculentum (Tomato)
Maclura pomifera (Osage orange)
Magnolia (Magnolia)
Matthiola incana (Stock)
Melia azedarach (Chinaberry, Persian Lilac)
Mentha spp. (Mint)
Monarda fistulosa (Wild Bergamot)
Nandina domestica (Heavenly bamboo)
Nicotiana benthamiana (Australian tobacco)
Nyssa sylvatica (Black Gum)
Olea europaea (Olive)
Osteospermum (African daisy)
Paeonia spp. (Peony)
Panax quinquefolius (American ginseng)
Papaver orientale (Oriental poppy)
Parthenium argentatum (Guayule)
Parthenocissus (Virginia Creeper)
Pelargonium spp. (Pelargonium, Geranium)
Persea americana (Avocado)
Petunia (Petunia)
Pistacia (Pistachio)
Phlox spp. (Phlox)
Phellodendron (Cork Tree)
Physalis alkekengi (Chinese lantern plant)
Polemonium spp. (Polemonium)
Populus tremula (European aspen)
Prunus (Cherry, Plum, Peach, Almond, other stone fruit)
Pyrola spp. (Pyrola)
Quercus palustris (Pin Oak)
Quercus rubra (Red oak)
Raphanus sativus (Radish)
Reseda odorata (Mignonette)
Rhaphiolepis (India Hawthorn, Yeddo Hawthorn)
Rheum rhaponticum (Rhubarb)
Rhododendron (Azalea, Rhododendron)
Rhus (Sumac, Lemonade berry)
Ribes (Gooseberry, Black, White, Red and other currants)
Ricinus communis (Castor bean)
Robinia pseudoacacia (Black Locust)
Romneya coulteri (Tree poppy)
Rorippa islandica (Marsh Cress)
Rosa (Rose)
Rosmarinus officinalis (Rosemary)
(#) Rubus (Black-, Rasp-, Dew- and other berries)
Rudbeckia serotinia (Black-eyed susan)
Salpiglossis sinuata (Painted tongue)
Salvia farinacea (Mealycup sage)
Salvia haematodes (Sage)
Salvia azurea (Blue sage)
Sambucus spp. (Elderberry)
Sassafras albidum (Sassafras)
Schinus (Pepper Tree)
Schizanthus pinnatus (Butterfly flower)
Senecio cruentus (Cineraria)
Senecio vulgaris (Groundsel)
Sisymbrium irio (London rocket)
Solanum aethiopicum (Ethiopian Eggplant)
Solanum carolinense (Carolina horsenettle)
Solanum elaeagnifolium (White horsenettle)
Solanum melongena (Eggplant)
Solanum nigrum (Black nightshade)
Solanum sarrachoides (Hairy Nightshade)
Solanum tuberosum (Potato)
Sorbus torminalis (Wild Service Tree)
Spinacia oleracea (Spinach)
Spirea (Meadowsweet, Spirea)
Styphnolobium (Japanese pagoda tree)
Syringa (Lilac)
Taraxacum officinale (Dandelion)
Tetragonia tetragonioides (formerly T. expansa) (New Zealand spinach)
(*) Tilia (Lime, Linden)
Trachelospermum jasminoides (Star jasmine)
Tragopogon porrifolius (Salsify)
Ulmus americana (American elm)
Ulmus procera (English elm)
Ulmus rubra (Slippery elm)
Venidium spp. (Namaqualand daisy)
Viburnum spp. (Viburnum, Wayfaring tree)
Vigna sesquipedalis (Yard-long bean)
Vigna sinensis (Cowpea)
Vitis (Grapevine)
Weigela (Weigela)
Plants resistant or immune
Clades
Polypodiopsida (ferns and allies)
Gymnospermae (pines, firs, cycads, ginkgos, etc.)
Monocotyledoneae (grasses, bananas, palms, lilies, etc.)
Cactaceae (cacti)
Species
Acer pseudoplatanus (Sycamore)
Ageratum spp. (Ageratum)
Alnus spp. (Alder)
Alyssum spp. (Alyssum)
Althaea rosea (Hollyhock)
(*) Amelanchier spp. (Serviceberry)
Anemone spp. (Anemone)
Apium graveolens (Celery)
Aquilegia spp. (Columbine)
Arctostaphylos spp. (Manzanita)
Asimina triloba (Pawpaw)
Asparagus officinalis (Asparagus)
Begonia semperflorens (Waxy or fibrous Begonia)
Begonia tuberhybrida (Tuberous Begonia)
Bellis perennis (English daisy)
Betula spp. (Birch, Hophornbeam)
Brassica oleracea Italica Group (Broccoli)
Browallia spp. (Browallia)
Buxus spp. (Boxwood)
Calendula officinalis (Marigold)
Carpinus spp. (Ironwood, Hornbeam)
Carya (Hickory, Pecan)
Castanea mollissima (Chinese chestnut)
Ceanothus spp. (Californian Lilac, Ceanothus, Red root)
Celtis spp. (Hackberry)
Cercidiphyllum japonicum (Katsura Tree)
Cheiranthus cheiri (Wallflower)
Cistus corbariensis (White rock rose)
Cistus salvifolius (Sage-leaf rock rose)
Cistus tauricus (Rock rose)
Citrus spp. (Orange, Lemon, Grapefruit, etc.)
Cleome spp. (Cleome)
(*) Cornus spp. (Dogwood)
Crataegus spp. (Hawthorn)
Daucus carota (Carrot)
Dianthus spp. (Carnation, Pink, Sweet William)
Eucalyptus spp. (Eucalyptus)
Fagus spp. (Beech)
Ficus carica (Fig)
Gaillardia spp. (Gaillardia)
Geum spp. (Geum)
Gleditsia spp. (Honey locust)
Gypsophila paniculata (Baby's breath)
Helianthemum nummularium (Sun rose)
Helleborus niger (Hellebore, Christmas Rose)
Heuchera sanguinea (Coral bells)
Iberis spp. (Candytuft)
Ilex spp. (Holly)
Impatiens sultani (Hardy Busy Lizzy)
Ipomoea batatas (Sweet potato)
Juglans spp. (Walnut, Butternut)
Juniperus spp. (Juniper)
Lactuca spp. (Lettuce)
Lantana spp. (Lantana)
Larix spp. (larch)
Liquidambar styraciflua (Sweet gum)
Lunaria annua (Honesty)
(+) Malus spp. (Apple)
(+) Medicago sativa (Alfalfa)
Mimulus spp. (Monkey flower)
Morus spp. (Mulberry)
Nemesia strumosa (Nemesia)
Nemophila menziesii (Baby blue eyes)
Nerium oleander (Oleander)
Nierembergia frutescens (Cupflower)
Oenothera spp. (Evening primrose)
Penstemon spp. (Penstemon)
Phaseolus spp. (Bean)
Pisum sativum (Pea)
Platanus spp. (Sycamore, Plane tree)
Platycodon grandiflorus (Balloon flower)
Populus (Poplar)
Portulaca grandiflora (Moss rose)
Potentilla spp. (Potentilla)
Primula spp. (Primrose)
Pyracantha spp. (Firethorn)
(+) Pyrus spp. (Pear)
Quercus alba (White oak)
Quercus falcata (Southern red oak)
Quercus phellos (Willow oak)
Quercus virginiana (Live oak)
Ranunculus asiaticus (Persian buttercup)
Saintpaulia ionantha (African violet)
Scabiosa atropurpurea (Scabious)
Salix spp. (Willow)
Sorbus aucuparia (European mountain ash)
(*) Tilia (Lime, Linden)
Torenia fournieri (Wishbone plant)
Tropaeolum majus (Nasturtium)
Umbellularia californica (Californian laurel)
Verbena hybrida (Verbena)
Veronica x franciscana (Hebe)
Veronica elliptica (syn. Hebe x menziesii) (Hebe)
Veronica salicifolia (Hebe)
Vinca minor (Periwinkle)
Viola spp. (Pansy, Viola, Violet)
Zelkova serrata (Zelkova)
Zinnia spp. (Zinnia)
References
Fungal plant pathogens and diseases
Tomato diseases
Fungal tree pathogens and diseases
Fungus common names | Verticillium wilt | Biology | 5,552 |
1,249,107 | https://en.wikipedia.org/wiki/Standard%20rate%20turn | Aircraft maneuvering is referenced to a standard rate turn, also known as a rate one turn (ROT).
A standard rate turn is defined as a 3° per second turn, which completes a 360° turn in 2 minutes. This is known as a 2-minute turn, or rate one (180°/min). Fast airplanes, or aircraft on certain precision approaches, use a half standard rate ('rate half' in some countries), but the definition of standard rate does not change.
Usage
Standardized turn rates are often employed in approaches and holding patterns to provide a reference for controllers and pilots so that each will know what the other is expecting. The pilot banks the airplane such that the turn and slip indicator points to the mark appropriate for that aircraft and then uses a clock to time the turn. The pilot can roll out at any desired direction depending on the length of time in the turn.
During a constant-bank level turn, increasing airspeed decreases the rate of turn, and increases the turn radius. A rate half turn (1.5° per second) is normally used when flying faster than 250 kn. The term rate two turn (6° per second) is used on some low speed aircraft.
Instrumentation
Instruments, either the turn and slip indicator or the turn coordinator, have the standard rate or half standard rate turn clearly marked. Slower aircraft are equipped with 2-minute turn indicators while faster aircraft are often equipped with 4-minute turn indicators.
Formulae
Angle of bank formula
The formula for calculating the angle of bank for a specific true airspeed (TAS) in SI units (or other coherent system) is:
\tan\theta = \frac{v^2}{r\,g}
where \theta is the angle of bank, v is true airspeed, r is the radius of the turn, and g is the acceleration due to gravity.
For a rate-one turn and velocity in knots (nautical miles per hour, symbol kn), this comes to
\theta \approx \arctan\left(\frac{v}{364}\right).
A convenient approximation for the bank angle in degrees is to take 10% of the TAS in knots and add half of that again (for example, 120 kn gives 12 + 6 = 18°).
For aircraft holding purposes, the International Civil Aviation Organization (ICAO) mandates that all turns should be made, "at a bank angle of 25° or at a rate of 3° per second, whichever requires the lesser bank." By the above formula, a rate-one turn at a TAS greater than 180 knots would require a bank angle of more than 25°. Therefore, faster aircraft just use 25° for their turns.
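These relations are easy to evaluate numerically. The following is a minimal sketch (not from the article) that computes the exact bank angle tan θ = vω/g for a standard rate turn and compares it with the rule-of-thumb approximation; the conversion constants (1 kn = 0.514444 m/s, g = 9.80665 m/s²) are the usual standard values.

```python
import math

KNOT_TO_MPS = 0.514444         # metres per second per knot
G = 9.80665                    # m/s^2
RATE_ONE = math.radians(3.0)   # standard rate: 3 degrees per second

def bank_angle_deg(tas_knots, turn_rate_rad_s=RATE_ONE):
    """Exact bank angle for a coordinated level turn: tan(theta) = v * omega / g."""
    v = tas_knots * KNOT_TO_MPS
    return math.degrees(math.atan(v * turn_rate_rad_s / G))

def rule_of_thumb_deg(tas_knots):
    """Common approximation: 10% of the TAS in knots plus half of that again."""
    return tas_knots / 10.0 * 1.5

for tas in (90, 120, 180, 250):
    print(tas, round(bank_angle_deg(tas), 1), rule_of_thumb_deg(tas))
# Above roughly 170-180 kn the exact value exceeds 25 degrees, which is why
# ICAO holding procedures cap the bank at 25 degrees for faster aircraft.
```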
Radius of turn formula
One might also want to calculate the radius of a Rate 1, 2 or 3 turn at a specific TAS:
r = \frac{v}{\omega}
where \omega is the rate of turn (in radians per second) and v is the true airspeed.
If the velocity and the angle of bank are given,
r = \frac{v^2}{g\,\tan\theta}
where g is the gravitational acceleration. This is a simplified formula that ignores slip and returns zero for 90° of bank.
In metres (where gravity is approximately 9.81 metres per second per second, and velocity is given in metres per second):
r = \frac{v^2}{9.81\,\tan\theta}
Or in feet (where velocity is given in knots):
r \approx \frac{v^2}{11.26\,\tan\theta}
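A short numerical sketch (not from the article) of the radius formula r = v/ω, using the same assumed unit conversion as above:

```python
import math

KNOT_TO_MPS = 0.514444   # metres per second per knot

def turn_radius_m(tas_knots, rate_deg_per_s):
    """Radius r = v / omega of a coordinated level turn at a given turn rate."""
    v = tas_knots * KNOT_TO_MPS
    return v / math.radians(rate_deg_per_s)

# Rate 1 (3 deg/s), rate 2 (6 deg/s) and rate 3 (9 deg/s) turns at 120 kn TAS:
for rate in (3.0, 6.0, 9.0):
    print(f"rate {rate:.0f}: {turn_radius_m(120, rate):.0f} m")
# Doubling the turn rate halves the radius; doubling TAS at a fixed rate doubles it.
```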
References
Aerial maneuvers
Units of rate | Standard rate turn | Mathematics | 592 |
34,162,078 | https://en.wikipedia.org/wiki/TRPN | TRPN is a member of the transient receptor potential channel family of ion channels, which is a diverse group of proteins thought to be involved in mechanoreception. The TRPN gene was given the name no mechanoreceptor potential C (nompC) when it was first discovered in fruit flies, hence the N in TRPN. Since its discovery in fruit flies, TRPN homologs have been discovered and characterized in worms, frogs, and zebrafish.
Structure
A structure of NOMPC was published in 2017, solved using electron cryo-microscopy. X-ray crystallography studies of channel segments cloned from fruit flies and zebrafish have led to the hypothesis that multiple ankyrin repeats at TRPN's N-terminus are involved in the gating of the channel pore. Crystallography studies of TRPY1, a yeast TRP homolog, have shown that aromatic residues conserved across TRP family members, including TRPN, in the sixth transmembrane domain are critical to the gating mechanism as well.
Function
As a mechanoreceptor, TRPN responds to impinging mechanical forces. Studies in TRPN deficient adult fruit flies and larvae have shown that these null mutants have severe difficulty moving, which suggests a role for TRPN in proprioception. This hypothesis is further strengthened by immunostaining studies in fruit flies that have shown TRPN localization in the cilia of campaniform sensilla and chordotonal organs in Johnston's organ. Further immunostaining studies in fruit flies have identified, with higher resolution techniques, that TRPN is localized at the distal end of motile mechanosensory cilia in Johnston's organ. However, TRPN is not required for transduction of mechanical stimuli in larvae or adult flies, suggesting that the TRPV channels nanchung and inactive may also serve a mechanosensory function.
Studies in worms have shown that TRPN mutants have locomotion defects, as well as a decreased basal slowing response, which is a reduction in rate of motion that is induced by contact with a food source. This result further strengthens the hypothesis that TRPN is vital to proprioception. Electrophysiological studies of single channels in worms have shown that TRPN responds to mechanical stimuli and has a preference for sodium ions, although a complete ion selectivity profile has yet to be identified.
Studies in zebrafish larvae have also shown that morpholino-mediated knockdown of TRPN function result in deafness as well as imbalance, suggesting a dual role in hearing as well as proprioception. Immunostaining studies in frog embryos have shown localization of TRPN at the tips of mechanosensory cilia in the lateral line, hair cells and ciliated epidermal cells, suggesting a role in a variety of mechanosensory functions. TRPN localizes to the kinocilia, not stereocilia, of amphibian hair cells, suggesting the presence of two distinct classes of mechanosensitive channel.
TRPN has the capability of performing a variety of roles in mechanosensory systems.
Genes
Genomic data from a variety of organisms show that TRPN is present in most animals, but it is absent in all amniotes. In most animals the number of ankyrin repeats is between 28 and 29.
The following is a list of genes encoding TRPN organized by the organism in which they are found. Gene names are specific to the organism and to the way in which they were discovered, which is why the gene name may not explicitly be "TRPN". Links to the NCBI Gene database are included whenever possible.
Fruit fly (Drosophila melanogaster)
nompC
Nematode worm (Caenorhabditis elegans)
trp-4
African clawed frog (Xenopus laevis)
nompc
Zebrafish (Danio rerio)
trpn1
References
Ion channels | TRPN | Chemistry | 847 |
10,186,385 | https://en.wikipedia.org/wiki/Quadrature%20domains | In the branch of mathematics called potential theory, a quadrature domain in two-dimensional real Euclidean space is a domain D (an open connected set) together with a finite subset {z1, …, zk} of D such that, for every function u harmonic and integrable over D with respect to area measure, the integral of u with respect to this measure is given by a "quadrature formula"; that is,
∫D u dA = c1 u(z1) + … + ck u(zk),
where the cj are nonzero complex constants independent of u.
The most obvious example is when D is a circular disk: here k = 1, z1 is the center of the circle, and c1 equals the area of D. That quadrature formula expresses the mean value property of harmonic functions with respect to disks.
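As a quick numerical illustration of the disk case (a sketch under the stated setup, not part of the source), Monte Carlo integration of a harmonic function such as u(x, y) = eˣ cos y over the unit disk should reproduce the one-point quadrature value, namely the area of D times u at the centre, which here equals π.

```python
import numpy as np

rng = np.random.default_rng(0)

def u(x, y):
    # e^x * cos(y) is harmonic: u_xx + u_yy = 0
    return np.exp(x) * np.cos(y)

# Monte Carlo integration of u over the unit disk D
n = 2_000_000
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
inside = x**2 + y**2 <= 1.0

# bounding square has area 4, so the sample mean over the square times 4
# approximates the integral of u restricted to the disk
integral = 4.0 * u(x[inside], y[inside]).sum() / n

quadrature = np.pi * u(0.0, 0.0)   # k = 1, z1 = centre, c1 = area of D
print(integral, quadrature)        # both close to pi ≈ 3.14159
```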
It is known that quadrature domains exist for all values of k. There is an analogous definition of quadrature domains in Euclidean space of dimension d larger than 2. There is also an alternative, electrostatic interpretation of quadrature domains: a domain D is a quadrature domain if a uniform distribution of electric charge on D creates the same electrostatic field outside D as does a k-tuple of point charges at the points z1, …, zk.
Quadrature domains and numerous generalizations thereof (e.g., replace area measure by length measure on the boundary of D) have in recent years been encountered in various connections such as inverse problems of Newtonian gravitation, Hele-Shaw flows of viscous fluids, and purely mathematical isoperimetric problems, and interest in them seems to be steadily growing. They were the subject of an international conference at the University of California at Santa Barbara in 2003 and the state of the art as of that date can be seen in the proceedings of that conference, published by Birkhäuser Verlag.
References
Potential theory | Quadrature domains | Mathematics | 380 |
16,641,098 | https://en.wikipedia.org/wiki/Moving-boundary%20electrophoresis | Moving-boundary electrophoresis (MBE also free-boundary electrophoresis) is a technique for separation of chemical compounds by electrophoresis in a free solution.
History
Moving-boundary electrophoresis was developed by Arne Tiselius in 1930. Tiselius was awarded the 1948 Nobel Prize in chemistry for his work on the separation of colloids through electrophoresis, the motion of charged particles through a stationary liquid under the influence of an electric field.
Apparatus
The moving-boundary electrophoresis apparatus includes a U-shaped cell filled with buffer solution and electrodes immersed at its ends. The sample applied could be any mixture of charged components such as a protein mixture. On applying voltage, the compounds will migrate to the anode or cathode depending on their charges. The change in the refractive index at the boundary of the separated compounds is detected using schlieren optics at both ends of the solution in the cell.
See also
Capillary electrophoresis
References
External links
Arne Wilhelm Kaurin Tiselius — Information on Tiselius compiled from various sources.
Electromagnetism
Electrophoresis
Colloidal chemistry | Moving-boundary electrophoresis | Physics,Chemistry,Biology | 242 |
26,954,391 | https://en.wikipedia.org/wiki/FoldX | FoldX is a protein design algorithm that uses an empirical force field. It can determine the energetic effect of point mutations as well as the interaction energy of protein complexes (including Protein-DNA). FoldX can mutate protein and DNA side chains using a probability-based rotamer library, while exploring alternative conformations of the surrounding side chains.
Applications
Prediction of the effect of point mutations or human SNPs on protein stability or protein complexes
Protein design to improve stability or modify affinity or specificity
Homology modeling
The FoldX force field
The energy function includes terms that have been found to be important for protein stability, where the energy of unfolding (∆G) of a target protein is calculated using the equation:
∆G = ∆Gvdw + ∆GsolvH + ∆GsolvP + ∆Ghbond + ∆Gwb + ∆Gel + ∆Smc + ∆Ssc
Where ∆Gvdw is the sum of the Van der Waals contributions of all atoms with respect to the same interactions with the solvent. ∆GsolvH and ∆GsolvP is the difference in solvation energy for apolar and polar groups, respectively, when going from the unfolded to the folded state. ∆Ghbond is the free energy difference between the formation of an intra-molecular hydrogen-bond compared to inter-molecular hydrogen-bond formation (with solvent). ∆Gwb is the extra stabilizing free energy provided by a water molecule making more than one hydrogen-bond to the protein (water bridges) that cannot be taken into account with non-explicit solvent approximations. ∆Gel is the electrostatic contribution of charged groups, including the helix dipole. ∆Smc is the entropy cost for fixing the backbone in the folded state. This term is dependent on the intrinsic tendency of a particular amino acid to adopt certain dihedral angles. ∆Ssc is the entropic cost of fixing a side chain in a particular conformation. The energy values of ∆Gvdw, ∆GsolvH, ∆GsolvP and ∆Ghbond attributed to each atom type have been derived from a set of experimental data, and ∆Smc and ∆Ssc have been taken from theoretical estimates. The Van der Waals contributions are derived from vapor to water energy transfer, while in the protein we are going from solvent to protein.
For protein-protein interactions, or protein-DNA interactions FoldX calculates ∆∆G of interaction :
∆∆Gab = ∆Gab- (∆Ga + ∆Gb) + ∆Gkon + ∆Ssc
∆Gkon reflects the effect of electrostatic interactions on the kon. ∆Ssc is the loss of translational and rotational entropy upon making the complex.
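FoldX derives these terms from the structure itself; purely to illustrate how the interaction energy is assembled from the individual contributions, the following Python sketch sums hypothetical term values (all numbers and names are placeholders, not FoldX output).

```python
# Hypothetical per-term energies in kcal/mol; real values come from FoldX itself.
terms_complex_ab = {"vdw": -120.4, "solvH": 35.2, "solvP": 60.1, "hbond": -45.7,
                    "wb": -3.1, "el": -12.8, "Smc": 30.5, "Ssc": 22.9}
terms_a = {"vdw": -70.1, "solvH": 20.0, "solvP": 35.5, "hbond": -25.0,
           "wb": -1.5, "el": -7.0, "Smc": 18.0, "Ssc": 13.0}
terms_b = {"vdw": -45.0, "solvH": 13.0, "solvP": 22.0, "hbond": -17.5,
           "wb": -1.2, "el": -4.5, "Smc": 12.5, "Ssc": 8.5}

def delta_g(terms):
    """Unfolding free energy as the plain sum of the force-field terms."""
    return sum(terms.values())

# Interaction energy: ddG_ab = dG_ab - (dG_a + dG_b) + dG_kon + dS_sc
dG_kon = -0.8          # electrostatic contribution to k_on (placeholder)
dS_sc_binding = 4.2    # entropy cost of fixing side chains on binding (placeholder)
ddG_interaction = (delta_g(terms_complex_ab)
                   - (delta_g(terms_a) + delta_g(terms_b))
                   + dG_kon + dS_sc_binding)
print(f"ddG of interaction: {ddG_interaction:.1f} kcal/mol")
```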
Key features
RepairPDB: energy minimization of a protein structure
BuildModel: in silico mutagenesis or homology modeling with predicted energy changes
AnalyseComplex: interaction energy calculation
Stability: prediction of free energy changes between alternative structures
AlaScan: in silico alanine scan of a protein structure with predicted energy changes
SequenceDetail: per residue free energy decomposition into separate energy terms (hydrogen bonding, Van der Waals energy, electrostatics, ...)
Graphical interface
Native FoldX is run from the command line. A FoldX plugin for the YASARA molecular graphics program has been developed to access various FoldX tools inside a graphical environment. The results of e.g. in silico mutations or homology modeling with FoldX can be directly analyzed on screen.
Molecule Parametrization
In version 5.0, the ability to parameterize previously unrecognized molecules via input in JSON format was added to the software.
Further reading
External links
http://foldx.crg.es FoldX website
http://foldxyasara.switchlab.org FoldX plugin for YASARA
Molecular modelling software | FoldX | Chemistry | 800 |
33,994,804 | https://en.wikipedia.org/wiki/Bond%20beam | A bond beam is a horizontal structural element, usually found as an embedded part of a masonry wall assembly. The bond beam serves to impart horizontal strength to a wall where it may not otherwise be braced by floor or roof structure.
A bond beam is typically found near the top of a freestanding wall. It may also be used to provide a consistent anchorage for floor or roof structure.
Bond beam assembly
Bond beam assemblies are most commonly used in construction using concrete masonry units, where special shapes allow the beam to blend with the wall construction. Bond beams encase steel reinforcing in grout or concrete, binding the structure together horizontally, and often interlocking with additional vertical reinforcement.
Bond beams may also be built using brick or may be formed in concrete.
References
Building engineering
Architectural elements | Bond beam | Technology,Engineering | 159 |
39,541,633 | https://en.wikipedia.org/wiki/Large%20low-shear-velocity%20provinces | Large low-shear-velocity provinces (LLSVPs), also called large low-velocity provinces (LLVPs) or superplumes, are characteristic structures of parts of the lowermost mantle, the region surrounding the outer core deep inside the Earth. These provinces are characterized by slow shear wave velocities and were discovered by seismic tomography of deep Earth. There are two main provinces: the African LLSVP and the Pacific LLSVP, both extending laterally for thousands of kilometers and possibly up to 1,000 kilometres vertically from the core–mantle boundary. These have been named Tuzo and Jason respectively, after Tuzo Wilson and W. Jason Morgan, two geologists acclaimed in the field of plate tectonics. The Pacific LLSVP is across and underlies four hotspots on Earth's crust that suggest multiple mantle plumes underneath. These zones represent around 8% of the volume of the mantle, or 6% of the entire Earth.
Other names for LLSVPs and their superstructures include superswells, superplumes, thermo-chemical piles, or hidden reservoirs, mostly describing their proposed geodynamical or geochemical effects. For example, the name "thermo-chemical pile" interprets LLSVPs as lower-mantle piles of thermally hot and/or chemically distinct material. LLSVPs are still relatively mysterious, and many questions remain about their nature, origin, and geodynamic effects.
Seismological modeling
Directly above the core–mantle boundary is a thick layer of the lower mantle. This layer is known as the D″ ("D double-prime" or "D prime prime") or degree two structure. LLSVPs were discovered in full mantle seismic tomographic models of shear velocity as slow features at the D″ layer beneath Africa and the Pacific. The global spherical harmonics of the D″ layer are stable throughout most of the mantle but anomalies appear along the two LLSVPs. By using shear wave velocities, the locations of the LLSVPs can be verified, and a stable pattern for mantle convection emerges. This stable configuration is responsible for the geometry of plate motions at the surface.
The LLSVPs lie around the equator, but mostly on the Southern Hemisphere. Global tomography models inherently result in smooth features; local waveform modeling of body waves, however, has shown that the LLSVPs have sharp boundaries. The sharpness of the boundaries makes it difficult to explain the features by temperature alone; the LLSVPs need to be compositionally distinct to explain the velocity jump. Ultra-low velocity zones at smaller scales have been discovered mainly at the edges of these LLSVPs.
By using the solid Earth tide, the density of these regions has been determined. The bottom two thirds are 0.5% denser than the bulk of the mantle. However, tidal tomography cannot determine how the excess mass is distributed; the higher density may be caused by primordial material or subducted ocean slabs. The African LLSVP may be a potential cause for the South Atlantic Anomaly.
Origins
Several hypotheses have been proposed for the origin and persistence of LLSVPs, depending on whether the provinces represent purely thermal unconformities (i.e. are isochemical in nature, of the same chemical composition as the surrounding mantle) or represent chemical unconformities as well (i.e. are thermochemical in nature, of different chemical composition from the surrounding mantle). If LLSVPs represent purely thermal unconformities, then they may have formed as large mantle plumes of hot, upwelling mantle. However, geodynamical studies predict that isochemical upwelling of a hotter, lower viscosity material should produce long, narrow plumes, unlike the large, wide plumes seen in LLSVPs. It is important to remember, however, that the resolution of geodynamical models and seismic images of Earth's mantle are very different.
The current leading hypothesis for the LLSVPs is the accumulation of subducted oceanic slabs. This corresponds to the locations of known slab graveyards surrounding the Pacific LLSVP. These graveyards are thought to be the reason for the high velocity zone anomalies surrounding the Pacific LLSVP and are thought to have formed by subduction zones that were around long before the dispersion—some 750 million years ago—of the supercontinent Rodinia. Aided by the phase transformation, the temperature would partially melt the slabs to form a dense melt that pools and forms the ultra-low velocity zone structures at the bottom of the core-mantle boundary closer to the LLSVP than the slab graveyards. The rest of the material is then carried upwards via chemical-induced buoyancy and contributes to the high levels of basalt found at the mid-ocean ridge. The resulting motion forms small clusters of small plumes right above the core-mantle boundary that combine to form larger plumes and then contribute to superplumes. The Pacific and African LLSVP, in this scenario, are originally created by a discharge of heat from the core (4000 K) to the much colder mantle (2000 K); the recycled lithosphere is fuel that helps drive the superplume convection. Since it would be difficult for the Earth's core to maintain this high heat by itself, it gives support for the existence of radiogenic nuclides in the core, as well as the indication that if fertile subducted lithosphere stops subducting in locations preferable for superplume consumption, it will mark the demise of that superplume.
Another proposed origin for the LLSVPs is that their formation is related to the giant-impact hypothesis, which states that the Moon formed after the Earth collided with a planet-sized body called Theia. The hypothesis suggests that the LLSVPs may represent fragments of Theia's mantle which sank through to Earth's core-mantle boundary. The higher density of the mantle fragments is due to their enrichment in iron(II) oxide with respect to the rest of Earth's mantle. This higher iron(II) oxide composition would also be consistent with the isotope geochemistry of lunar samples, as well as that of the ocean island basalts overlying the LLSVPs.
Dynamics
Geodynamic mantle convection models have included compositional distinctive material. The material tends to get swept up in ridges or piles. When including realistic past plate motions into the modeling, the material gets swept up in locations that are remarkably similar to the present day location of the LLSVPs. These locations also correspond with known slab graveyard locations.
These types of models, as well as the observation that the D″ structure of the LLSVPs is orthogonal to the path of true polar wander, suggest these mantle structures have been stable over large amounts of time. This geometrical relationship is consistent with the position of Pangaea and the formation of the current geoid pattern due to continental break-up from the superswell below.
However, the heat from the core is not enough to sustain the energy needed to fuel the superplumes located at the LLSVPs. There is a phase transition from perovskite to post-perovskite from the down welling slabs that causes an exothermic reaction. This exothermic reaction helps to heat the LLSVP, but it is not sufficient to account for the total energy needed to sustain it. So it is hypothesized that the material from the slab graveyard can become extremely dense and form large pools of melt concentrate enriched in uranium, thorium, and potassium. These concentrated radiogenic elements are thought to provide the high temperatures needed. So, the appearance and disappearance of slab graveyards predicts the birth and death of an LLSVP, potentially changing the dynamics of all plate tectonics.
Structure and composition
A study by researchers from Utrecht University revealed that LLSVPs were not only hotter but also ancient, potentially over a billion years old. The findings suggested that their seismic properties are influenced by factors beyond temperature, such as composition or mineral grain size. Seismic waves passing through LLSVPs decelerate but lose less energy than expected, indicating compositional differences and shedding light on their complex structure.
See also
Low-velocity zone
Cataclysmic pole shift hypothesis
Inner core super-rotation
Intermediate axis theorem
References
External links
Geophysics
Structure of the Earth | Large low-shear-velocity provinces | Physics | 1,763 |
2,516,966 | https://en.wikipedia.org/wiki/Phosphor%20thermometry | Phosphor thermometry is an optical method for surface temperature measurement. The method exploits luminescence emitted by phosphor material. Phosphors are fine white or pastel-colored inorganic powders which may be stimulated by any of a variety of means to luminesce, i.e. emit light. Certain characteristics of the emitted light change with temperature, including brightness, color, and afterglow duration. The latter is most commonly used for temperature measurement.
History
The first mention of temperature measurement utilizing a phosphor is in two patents originally filed in 1932 by Paul Neubert.
Time dependence of luminescence
Typically a short-duration ultraviolet lamp or laser source illuminates the phosphor coating, which in turn luminesces visibly. When the illuminating source ceases, the luminescence persists for a characteristic time, steadily decreasing. The time required for the brightness to decrease to 1/e of its original value is known as the decay time or lifetime and is signified as τ. It is a function of temperature, T.
The intensity I of the luminescence commonly decays exponentially as:
I = I0 exp(−t / τ)
where I0 is the initial intensity (or amplitude), t is the time, and τ is the decay time, which can be temperature dependent.
A temperature sensor based on direct decay-time measurement has been shown to operate at temperatures from 1,000 °C up to as high as 1,600 °C. In that work, a doped YAG phosphor was grown onto an undoped YAG fiber to form a monolithic structure for the probe, and a laser was used as the excitation source. Subsequently, other versions using LEDs as the excitation source were realized. These devices can measure temperature up to 1,000 °C, and are used in microwave and plasma processing applications.
If the excitation source is periodic rather than pulsed, then the time response of the luminescence is correspondingly different. For instance, there is a phase difference between a sinusoidally varying light-emitting diode (LED) signal of frequency f and the fluorescence that results (see figure). The phase difference varies with decay time, and hence temperature, as:
tan φ = 2π f τ
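As a rough illustration of the decay-time method (a sketch with arbitrary numbers, not an actual instrument calibration), the lifetime τ can be estimated from sampled intensities by a straight-line fit to log I, and the fitted τ then fixes the expected phase lag for sinusoidal excitation at frequency f.

```python
import numpy as np

# Simulated decay: I(t) = I0 * exp(-t / tau), with tau chosen arbitrarily
tau_true = 250e-6                      # 250 microseconds
t = np.linspace(0.0, 1e-3, 200)        # 1 ms record
intensity = 1.0 * np.exp(-t / tau_true)
intensity *= 1.0 + 0.01 * np.random.default_rng(1).normal(size=t.size)  # 1% noise

# Fit log(I) = log(I0) - t/tau with a straight line
slope, _ = np.polyfit(t, np.log(intensity), 1)
tau_fit = -1.0 / slope
print(f"fitted decay time: {tau_fit * 1e6:.1f} us")

# Phase lag expected for sinusoidal (LED) excitation at frequency f
f = 1000.0                             # Hz
phase = np.arctan(2 * np.pi * f * tau_fit)
print(f"phase lag at {f:.0f} Hz: {np.degrees(phase):.1f} degrees")
```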
Temperature dependence of emission lines: intensity ratio
The second method of temperature detection is based on intensity ratios of two separate emission lines; the change in coating temperature is reflected by the change of the phosphorescence spectrum. This method enables surface temperature distributions to be measured. The intensity ratio method has the advantage that polluted optics has little effect on the measurement as it compares ratios between emission lines. The emission lines are equally affected by 'dirty' surfaces or optics.
Temperature dependence
Several observations are pertinent to the figure on the right:
Oxysulfide materials exhibit several different emission lines, each having a different temperature dependence. Substituting one rare-earth for another, in this instance changing La to Gd, shifts the temperature dependence.
The YAG:Cr material (Y3Al5O12:Cr3+) shows less sensitivity but covers a wider temperature range than the more sensitive materials.
Sometimes decay times are constant over a wide range before becoming temperature dependent at some threshold value. This is illustrated for the YVO4:Dy curve; it also holds for several other materials (not shown in the figure). Manufacturers sometimes add a second rare earth as a sensitizer. This may enhance the emission and alter the nature of the temperature dependence. Also, gallium is sometimes substituted for some of the aluminium in YAG, also altering the temperature dependence.
The emission decay of dysprosium (Dy) phosphors is sometimes non-exponential with time. Consequently, the value assigned to decay time will depend on the analysis method chosen. This non-exponential character often becomes more pronounced as the dopant concentration increases.
In the high-temperature part, the two lutetium phosphate samples are single crystals rather than powders. This has only a minor effect on decay time and its temperature dependence. However, the decay time of a given phosphor does depend on the particle size, especially below one micrometer.
There are further parameters influencing the luminescence of thermographic phosphors, e.g. the excitation energy, the dopant concentration or the composition or the absolute pressure of the surrounding gas phase. Therefore, care has to be taken in order to keep constant these parameters for all measurements.
Thermographic phosphor application in a thermal barrier coating
A thermal barrier coating (TBC) allows gas turbine components to survive higher temperatures in the hot section of engines, while having acceptable life times. These coatings are thin ceramic coatings (several hundred micrometers) usually based on oxide materials.
Early works considered the integration of luminescent materials as erosion sensors in TBCs. The notion of a "thermal barrier sensor coating" (sensor TBC) for temperature detection was introduced in 1998. Instead of applying a phosphor layer on the surface where the temperature needs to be measured, it was proposed to locally modify the composition of the TBC so that it acts as a thermographic phosphor as well as a protective thermal barrier. This dual functional material enables surface temperature measurement but also could provide a means to measure temperature within the TBC and at the metal/topcoat interface, hence enabling the manufacturing of an integrated heat flux gauge. First results on yttria-stabilized zirconia co-doped with europia (YSZ:Eu) powders were published in 2000. They also demonstrated sub-surface measurements looking through a 50 μm undoped YSZ layer and detecting the phosphorescence of a thin (10 μm) YSZ:Eu layer (bi-layer system) underneath using the ESAVD technique to produce the coating. The first results on electron beam physical vapour deposition of TBCs were published in 2001. The coating tested was a monolayer coating of standard YSZ co-doped with dysprosia (YSZ:Dy). First work on industrial atmospheric plasma sprayed (APS) sensor coating systems commenced around 2002 and was published in 2005. They demonstrated the capabilities of APS sensor coatings for in-situ two-dimensional temperature measurements in burner rigs using a high speed camera system. Further, temperature measurement capabilities of APS sensor coatings were demonstrated beyond 1400 °C. Results on multilayer sensing TBCs, enabling simultaneous temperature measurements below and on the surface of the coating, were reported. Such a multilayer coating could also be used as a heat flux gauge in order to monitor the thermal gradient and also to determine the heat flux through the thickness of the TBC under realistic service conditions.
Applications for thermographic phosphors in TBCs
While the previously mentioned methods are focusing on the temperature detection, the inclusion of phosphorescent materials into the thermal barrier coating can also work as a micro probe to detect the aging mechanisms or changes to other physical parameters that affect the local atomic surroundings of the optical active ion. Detection was demonstrated of hot corrosion processes in YSZ due to vanadium attack.
See also
Fluorescence
Luminescence
Photoluminescence
Thermometer
Thermometry
References
Further reading
Thermometers
Measurement | Phosphor thermometry | Physics,Mathematics,Technology,Engineering | 1,495 |
58,808,190 | https://en.wikipedia.org/wiki/Applied%20Geochemistry | Applied Geochemistry is a monthly peer-reviewed scientific journal published by Elsevier on behalf of the International Association of GeoChemistry. It covers research on environmental and regional geochemistry and was established in 1986. From 2012 to 2022 the editor-in-chief was Michael Kersten; he was succeeded by Zimeng Wang in 2023.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 3.4.
References
External links
Elsevier academic journals
English-language journals
Geochemistry journals
Academic journals established in 1986
Monthly journals | Applied Geochemistry | Chemistry | 136 |
1,433,842 | https://en.wikipedia.org/wiki/Isopsephy | In numerology, isopsephy or isopsephism is the practice of adding up the number values of the letters in a word to form a single number. The total number is then used as a metaphorical bridge to other words evaluating to the equal number, which satisfies the isos or "equal" in the term. Ancient Greeks used counting boards for numerical calculation and accounting, with a counter generically called a psephos ('pebble'), analogous to the Latin word calculus, from which the English calculate is derived.
Isopsephy is related to gematria: the same practice using the Hebrew alphabet. It is also related to the ancient number systems of many other peoples (for the Arabic alphabet version, see Abjad numerals). A gematria of Latin script languages was also popular in Europe from the Middle Ages to the Renaissance, and its legacy remains an influence in code-breaking and numerology.
History
Until Arabic numerals were adopted and adapted from Indian numerals in the 8th and 9th centuries AD, and promoted in Europe by Fibonacci of Pisa with his 1202 book Liber Abaci, numerals were predominantly alphabetical. For instance in Ancient Greece, Greek numerals used the alphabet. It is just a short step from using letters of the alphabet in everyday arithmetic and mathematics to seeing numbers in words, and to writing with an awareness of the numerical dimension of words.
An early reference to isopsephy, albeit of more-than-usual sophistication (employing multiplication rather than addition), is from the mathematician Apollonius of Perga, writing in the 3rd century BC. He asks: "Given the verse: ('Nine maidens, praise the glorious power of Artemis'), what does the product of all its elements equal?"
More conventional are the instances of isopsephy found in graffiti at Pompeii, dating from around 79 AD. One reads "I love her whose number is 545."
Another says "Amerimnus thought upon his lady Harmonia for good. The number of her honorable name is 45."
Suetonius, writing in 121 AD, reports a political slogan that someone wrote on a wall in Rome:
which appears to be another example. In Greek, Νερων (Nero) has the numerical value 50 + 5 + 100 + 800 + 50 = 1005,
the same value as the phrase ἰδίαν μητέρα ἀπέκτεινε ("he killed his own mother"), which also sums to 1005.
A famous example is 666 in the Biblical Book of Revelation (13:18): "Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six." The word rendered "count", ψηφισάτω (psephisato), has the same "pebble" root as the word isopsephy.
Also in the 1st century AD, Leonidas of Alexandria created isopsephs, epigrams with equinumeral distichs, where the first hexameter and pentameter equal the next two verses in numerical value. He addressed some of them to Nero:
Which translates to: "The muse of Leonidas of the Nile offers up to thee, O Caesar, this writing, at the time of thy nativity; for the sacrifice of Calliope is always without smoke: but in the ensuing year he will offer up, if thou wilt, better things than this." Here the sum of both the first and second distich is 5699. In another of his distichs, the hexameter line is equal in number to its corresponding pentameter:
Which translates to: "One line is made equal in number to one, not two to two; for I no longer approve of long epigrams." Here each line totals 4111.
A headstone found at the Temple of Artemis at Sparta Orthia is a 2nd-century AD example of isopsephic elegiac verse. It says:
It is the votive stele for a boy who won a competition in singing. The words in each line add up to 2730, and that total is also given at the end of each line. Also in the 2nd century AD, Aelius Nicon of Pergamon, the Greek architect and builder described by his son, the famous physician Galen, as having "mastered all there was to know of the science of geometry and numbers", was a master in composing isopsephic works.
Letter values of the Greek alphabet
In Greek, each unit (1, 2, ..., 9) was assigned a separate letter, each tens (10, 20, ..., 90) a separate letter, and each hundreds (100, 200, ..., 900) a separate letter. This requires 27 letters, so the 24-letter alphabet was extended by using three obsolete letters: digamma (for which stigma or, in modern Greek, στ is also used) for 6, qoppa for 90, and sampi for 900.
This alphabetic system operates on the additive principle in which the numeric values of the letters are added together to form the total. For example, 241 is represented as σμα (200 + 40 + 1).
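The additive scheme is straightforward to mechanise. The short Python sketch below (an illustration, not part of the source) assigns the standard values to the Greek letters and sums them for a given word, reproducing, for example, Νερων = 1005.

```python
import unicodedata

# Standard Greek alphabetic numeral values (lower-case; archaic letters included).
VALUES = {
    "α": 1, "β": 2, "γ": 3, "δ": 4, "ε": 5, "ϝ": 6, "ϛ": 6, "ζ": 7, "η": 8, "θ": 9,
    "ι": 10, "κ": 20, "λ": 30, "μ": 40, "ν": 50, "ξ": 60, "ο": 70, "π": 80, "ϙ": 90,
    "ρ": 100, "σ": 200, "ς": 200, "τ": 300, "υ": 400, "φ": 500, "χ": 600, "ψ": 700,
    "ω": 800, "ϡ": 900,
}

def isopsephy(word: str) -> int:
    """Sum the letter values of a Greek word; accents and unknown characters count as 0."""
    normalized = unicodedata.normalize("NFD", word.lower())
    return sum(VALUES.get(ch, 0) for ch in normalized)

print(isopsephy("Νερων"))   # 1005
print(isopsephy("σμα"))     # 241 = 200 + 40 + 1
```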
See also
About the Mystery of the Letters
Chronogram
English Qaballa
Hermetic Qabalah
Marcosians
Sator Square
Theomatics
Notes
Further reading
Y.H.S. II (2021). The Jesus Code(x): A Geometrical Revelation. ISBN 9789464067590
External links
Greek Gematria/Isopsephia Calculator – Requires Flash 8 or greater – Also Accepts Unicode Greek letters and will add them up for you
Greek Isopsephia Line Calculator – Unicode – Flash not required
Greek words and phrases
Numerology
Language and mysticism | Isopsephy | Mathematics | 1,182 |
2,870,145 | https://en.wikipedia.org/wiki/Guanidinium%20thiocyanate | Guanidinium thiocyanate (GTC) or guanidinium isothiocyanate (GITC) is a chemical compound used as a general protein denaturant, being a chaotropic agent, although it is most commonly used as a nucleic acid protector in the extraction of DNA and RNA from cells.
GITC may also be recognized as guanidine thiocyanate. This is because guanidinium is the conjugate acid of guanidine and is called the guanidinium cation, [CH6N3]+.
Uses
Guanidinium thiocyanate can be used to deactivate a virus, such as the influenza virus that caused the 1918 "Spanish flu", so that it can be studied safely.
Guanidinium thiocyanate is also used to lyse cells and virus particles in RNA and DNA extractions, where its function, in addition to its lysing action, is to prevent activity of RNase enzymes and DNase enzymes by denaturing them. These enzymes would otherwise damage the extract.
A commonly used method is guanidinium thiocyanate-phenol-chloroform extraction. It is not strictly necessary to use phenol or chloroform if extracting RNA for Northern blotting or DNA for Southern blot analysis because the gel electrophoresis followed by transfer to a membrane will separate the RNA/DNA from the proteins. Additionally, since these methods use probes to bind to their conjugates, peptides that get through the process don't generally matter unless a peptide is an RNase or DNase, and then only if the enzyme manages to renature, which should not occur if proper protocols are followed. A possible exception might be when working with temperature extremophiles because some enzymes of these organisms can remain stable under extraordinary circumstances.
Preparation
This substance can be prepared by reacting guanidinium carbonate with ammonium sulfate or ammonium thiocyanate under heat. Another method is the pyrolysis of ammonium thiocyanate or thiourea at 180 °C.
See also
Guanidine hydrochloride
References
Thiocyanates
Guanidinium compounds
Chaotropic agents | Guanidinium thiocyanate | Chemistry | 479 |
36,502,121 | https://en.wikipedia.org/wiki/Peziza%20echinospora | Peziza echinospora is a species of apothecial fungus belonging to the family Pezizaceae. This European fungus is found at old fire sites, appearing from late spring to early autumn as cups up to 10 cm in diameter. The inner surface is dark brown and smooth while the outer surface is pale, sometimes almost white, and rough.
References
Pezizaceae
Fungi described in 1866
Fungus species | Peziza echinospora | Biology | 87 |
9,572,607 | https://en.wikipedia.org/wiki/Dugald%20Macpherson | H. Dugald Macpherson is a mathematician and logician. He is Professor of Pure Mathematics at the University of Leeds.
He obtained his DPhil from the University of Oxford in 1983 for his thesis entitled "Enumeration of Orbits of Infinite Permutation Groups" under the supervision of Peter Cameron. In 1997, he was awarded the Junior Berwick Prize by the London Mathematical Society. He continues to research into permutation groups and model theory. He is scientist in charge of the MODNET team at the University of Leeds. He co-authored the book Notes on Infinite Permutation Groups.
References
External links
Prof. Macpherson's homepage
Year of birth missing (living people)
20th-century British mathematicians
21st-century British mathematicians
Living people
Alumni of the University of Oxford
Academics of the University of Leeds
Model theorists
Place of birth missing (living people) | Dugald Macpherson | Mathematics | 178 |
3,317,988 | https://en.wikipedia.org/wiki/J%C3%B3nsson%20cardinal | In set theory, a Jónsson cardinal (named after Bjarni Jónsson) is a certain kind of large cardinal number.
An uncountable cardinal number κ is said to be Jónsson if for every function f defined on the finite subsets of κ (with values in κ) there is a set H ⊆ κ of order type κ such that for each n, the restriction of f to the n-element subsets of H omits at least one value in κ.
Every Rowbottom cardinal is Jónsson. By a theorem of Eugene M. Kleinberg, the theories ZFC + “there is a Rowbottom cardinal” and ZFC + “there is a Jónsson cardinal” are equiconsistent. William Mitchell proved, with the help of the Dodd-Jensen core model that the consistency of the existence of a Jónsson cardinal implies the consistency of the existence of a Ramsey cardinal, so that the existence of Jónsson cardinals and the existence of Ramsey cardinals are equiconsistent.
In general, Jónsson cardinals need not be large cardinals in the usual sense: they can be singular. But the existence of a singular Jónsson cardinal is equiconsistent with the existence of a measurable cardinal. Using the axiom of choice, many small cardinals (the ℵn, for instance) can be proved not to be Jónsson. Results like this need the axiom of choice, however: the axiom of determinacy implies that for every positive natural number n, the cardinal ℵn is Jónsson.
A Jónsson algebra is an algebra with no proper subalgebras of the same cardinality. (They are unrelated to Jónsson–Tarski algebras). Here an algebra means a model for a language with a countable number of function symbols, in other words a set with a countable number of functions from finite products of the set to itself. A cardinal is a Jónsson cardinal if and only if there are no Jónsson algebras of that cardinality. The existence of Jónsson functions shows that if algebras are allowed to have infinitary operations, then there are no analogues of Jónsson cardinals.
References
Large cardinals | Jónsson cardinal | Mathematics | 424 |
1,599,733 | https://en.wikipedia.org/wiki/Radiation%20implosion | Radiation implosion is the compression of a target by the use of high levels of electromagnetic radiation. The major use for this technology is in fusion bombs and inertial confinement fusion research.
History
Radiation implosion was first developed by Klaus Fuchs and John von Neumann in the United States, as part of their work on the original "Classical Super" hydrogen-bomb design. Their work resulted in a secret patent filed in 1946, and later given to the USSR by Fuchs as part of his nuclear espionage. However, their scheme was not the same as used in the final hydrogen-bomb design, and neither the American nor the Soviet programs were able to make use of it directly in developing the hydrogen bomb (its value would become apparent only after the fact). A modified version of the Fuchs-von Neumann scheme was incorporated into the "George" shot of Operation Greenhouse.
In 1951, Stanislaw Ulam had the idea to use hydrodynamic shock of a fission weapon to compress more fissionable material to extremely high densities in order to make megaton-range, two-stage fission bombs. He then realized that this approach might be useful for starting a thermonuclear reaction. He presented the idea to Edward Teller, who realized that radiation compression would be both faster and more efficient than mechanical shock. This combination of ideas, along with a fission "spark plug" embedded inside the fusion fuel, became what is known as the Teller–Ulam design for the hydrogen bomb.
Fission bomb radiation source
Most of the energy released by a fission bomb is in the form of x-rays. The spectrum is approximately that of a black body at a temperature of 50,000,000 kelvins (a little more than three times the temperature of the Sun's core). The amplitude can be modeled as a trapezoidal pulse with a one microsecond rise time, one microsecond plateau, and one microsecond fall time. For a 30 kiloton fission bomb, the total x-ray output would be 100 terajoules (more than 70% of the total yield).
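As an illustrative back-of-the-envelope check (the pulse shape and total energy are the figures quoted above; everything else is standard physics, not from the source), the trapezoidal pulse fixes the peak x-ray power, and Wien's displacement law places the black-body peak for 5 × 10^7 K well inside the x-ray band.

```python
# Rough check of the figures quoted above (illustrative only).
total_energy = 100e12          # joules, quoted x-ray output of a 30 kt primary
rise = plateau = fall = 1e-6   # trapezoidal pulse: 1 us rise, 1 us plateau, 1 us fall

# Area of a trapezoid = peak * (plateau + (rise + fall) / 2)
effective_width = plateau + (rise + fall) / 2.0        # 2 microseconds
peak_power = total_energy / effective_width
print(f"peak x-ray power ~ {peak_power:.1e} W")        # ~5e19 W

# Wien's displacement law for a 5e7 K black body
b = 2.898e-3                   # Wien constant, m*K
T = 5.0e7
lam_peak = b / T
h, c = 6.626e-34, 2.998e8
photon_energy_keV = h * c / lam_peak / 1.602e-19 / 1e3
print(f"peak wavelength ~ {lam_peak * 1e9:.3f} nm, photon energy ~ {photon_energy_keV:.0f} keV")
```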
Radiation transport
In a Teller-Ulam bomb, the object to be imploded is called the "secondary". It contains fusion material, such as lithium deuteride, and its outer layers are a material which is opaque to x-rays, such as lead or uranium-238.
In order to get the x-rays from the surface of the primary, the fission bomb, to the surface of the secondary, a system of "x-ray reflectors" is used.
The reflector is typically a cylinder made of a material such as uranium. The primary is located at one end of the cylinder and the secondary is located at the other end. The interior of the cylinder is commonly filled with a foam which is mostly transparent to x-rays, such as polystyrene.
The term reflector is misleading, since it gives the reader an idea that the device works like a mirror. Some of the x-rays are diffused or scattered, but the majority of the energy transport happens by a two-step process: the x-ray reflector is heated to a high temperature by the flux from the primary, and then it emits x-rays which travel to the secondary. Various classified methods are used to improve the performance of the reflection process.
Some Chinese documents show that Chinese scientists used a different method to achieve radiation implosion. According to these documents, an X-ray lens, not a reflector, was used to transfer the energy from primary to secondary during the making of the first Chinese H-bomb.
The implosion process in nuclear weapons
The term "radiation implosion" suggests that the secondary is crushed by radiation pressure, and calculations show that while this pressure is very large, the pressure of the materials vaporized by the radiation is much larger. The outer layers of the secondary become so hot that they vaporize and fly off the surface at high speeds. The recoil from this surface layer ejection produces pressures which are an order of magnitude stronger than the simple radiation pressure. The so-called radiation implosion in thermonuclear weapons is therefore thought to be a radiation-powered ablation-drive implosion.
Laser radiation implosions
There has been much interest in the use of large lasers to ignite small amounts of fusion material. This process is known as inertial confinement fusion (ICF). As part of that research, much information on radiation implosion technology has been declassified.
When using optical lasers, there is a distinction made between "direct drive" and "indirect drive" systems. In a direct drive system, the laser beam(s) are directed onto the target, and the rise time of the laser system determines what kind of compression profile will be achieved.
In an indirect drive system, the target is surrounded by a shell (called a Hohlraum) of some intermediate-Z material, such as selenium. The laser heats this shell to a temperature such that it emits x-rays, and these x-rays are then transported onto the fusion target. Indirect drive has various advantages, including better control over the spectrum of the radiation, smaller system size (the secondary radiation typically has a wavelength 100 times smaller than the driver laser), and more precise control over the compression profile.
References
External links
http://nuclearweaponarchive.org/Library/Teller.html
Radiation
Implosion | Radiation implosion | Physics,Chemistry | 1,129 |
11,305,118 | https://en.wikipedia.org/wiki/Seimatosporium%20rhododendri | Seimatosporium rhododendri is a plant pathogen.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Amphisphaeriales
Fungus species
Taxa named by Lewis David de Schweinitz | Seimatosporium rhododendri | Biology | 49 |
4,917,179 | https://en.wikipedia.org/wiki/Discharge%20ionization%20detector | A discharge ionization detector (DID) is a type of detector used in gas chromatography.
Principle
A DID is an ion detector which uses a high-voltage electric discharge to produce ions. The detector uses an electrical discharge in helium to generate high-energy UV photons and metastable helium, which ionize all compounds except helium. The ions produce an electric current, which is the signal output of the detector. The greater the concentration of the component, the more ions are produced, and the greater the current.
Application
DIDs are sensitive to a broad range of components.
In air separation plants, they are used to detect impurity components in the argon product in the ppm range.
DIDs are non-destructive detectors. They do not destroy or consume the components they detect. Therefore, they can be used before other detectors in multiple-detector configurations.
DIDs are an improvement over helium ionization detectors in that they contain no radioactive source.
References
Gas chromatography | Discharge ionization detector | Chemistry | 201 |
3,475,938 | https://en.wikipedia.org/wiki/Timeline%20of%20information%20theory | A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error correcting codes and related subjects.
1872 – Ludwig Boltzmann presents his H-theorem, and with it the formula Σpi log pi for the entropy of a single gas particle
1878 – J. Willard Gibbs defines the Gibbs entropy: the probabilities in the entropy formula are now taken as probabilities of the state of the whole system
1924 – Harry Nyquist discusses quantifying and the speed at which it can be transmitted by a communication system
1927 – John von Neumann defines the von Neumann entropy, extending the Gibbs entropy to quantum mechanics
1928 – Ralph Hartley introduces Hartley information as the logarithm of the number of possible messages, with information being communicated when the receiver can distinguish one sequence of symbols from any other (regardless of any associated meaning)
1929 – Leó Szilárd analyses Maxwell's demon, showing how a Szilard engine can sometimes transform information into the extraction of useful work
1940 – Alan Turing introduces the deciban as a measure of information inferred about the German Enigma machine cypher settings by the Banburismus process
1944 – Claude Shannon's theory of information is substantially complete
1947 – Richard W. Hamming invents Hamming codes for error detection and correction (to protect patent rights, the result is not published until 1950)
1948 – Claude E. Shannon publishes A Mathematical Theory of Communication
1949 – Claude E. Shannon publishes Communication in the Presence of Noise – Nyquist–Shannon sampling theorem and Shannon–Hartley law
1949 – Claude E. Shannon's Communication Theory of Secrecy Systems is declassified
1949 – Robert M. Fano publishes Transmission of Information. M.I.T. Press, Cambridge, Massachusetts – Shannon–Fano coding
1949 – Leon G. Kraft discovers Kraft's inequality, which shows the limits of prefix codes
1949 – Marcel J. E. Golay introduces Golay codes for forward error correction
1951 – Solomon Kullback and Richard Leibler introduce the Kullback–Leibler divergence
1951 – David A. Huffman invents Huffman encoding, a method of finding optimal prefix codes for lossless data compression
1953 – August Albert Sardinas and George W. Patterson devise the Sardinas–Patterson algorithm, a procedure to decide whether a given variable-length code is uniquely decodable
1954 – Irving S. Reed and David E. Muller propose Reed–Muller codes
1955 – Peter Elias introduces convolutional codes
1957 – Eugene Prange first discusses cyclic codes
1959 – Alexis Hocquenghem, and independently the next year Raj Chandra Bose and Dwijendra Kumar Ray-Chaudhuri, discover BCH codes
1960 – Irving S. Reed and Gustave Solomon propose Reed–Solomon codes
1962 – Robert G. Gallager proposes low-density parity-check codes; they are unused for 30 years due to technical limitations
1965 – Dave Forney discusses concatenated codes
1966 – Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) develop linear predictive coding (LPC), a form of speech coding
1967 – Andrew Viterbi reveals the Viterbi algorithm, making decoding of convolutional codes practicable
1968 – Elwyn Berlekamp invents the Berlekamp–Massey algorithm; its application to decoding BCH and Reed–Solomon codes is pointed out by James L. Massey the following year
1968 – Chris Wallace and David M. Boulton publish the first of many papers on Minimum Message Length (MML) statistical and inductive inference
1970 – Valerii Denisovich Goppa introduces Goppa codes
1972 – Jørn Justesen proposes Justesen codes, an improvement of Reed–Solomon codes
1972 – Nasir Ahmed proposes the discrete cosine transform (DCT), which he develops with T. Natarajan and K. R. Rao in 1973; the DCT later became the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3
1973 – David Slepian and Jack Wolf discover and prove the Slepian–Wolf coding limits for distributed source coding
1976 – Gottfried Ungerboeck gives the first paper on trellis modulation; a more detailed exposition in 1982 leads to a raising of analogue modem POTS speeds from 9.6 kbit/s to 33.6 kbit/s
1976 – Richard Pasco and Jorma J. Rissanen develop effective arithmetic coding techniques
1977 – Abraham Lempel and Jacob Ziv develop Lempel–Ziv compression (LZ77)
1982 – Valerii Denisovich Goppa introduces algebraic geometry codes
1989 – Phil Katz publishes the .zip format including DEFLATE (LZ77 + Huffman coding); later to become the most widely used archive container
1993 – Claude Berrou, Alain Glavieux and Punya Thitimajshima introduce Turbo codes
1994 – Michael Burrows and David Wheeler publish the Burrows–Wheeler transform, later to find use in bzip2
1995 – Benjamin Schumacher coins the term qubit and proves the quantum noiseless coding theorem
2003 – David J. C. MacKay shows the connection between information theory, inference and machine learning in his book.
2006 – Jarosław Duda introduces first Asymmetric numeral systems entropy coding: since 2014 popular replacement of Huffman and arithmetic coding in compressors like Facebook Zstandard, Apple LZFSE, CRAM or JPEG XL
2008 – Erdal Arıkan introduces polar codes, the first practical construction of codes that achieves capacity for a wide array of channels
References
Information theory
Information theory
Thermodynamics | Timeline of information theory | Physics,Chemistry,Mathematics,Technology,Engineering | 1,165 |
36,372,498 | https://en.wikipedia.org/wiki/System76 | System76, Inc. is an American computer manufacturer based in Denver, Colorado that sells notebook computers, desktop computers, and servers. The company utilizes free and open-source software, and offers a choice of Ubuntu or their own Ubuntu-based Linux distribution Pop!_OS as preinstalled operating systems.
History
System76 was founded by Carl Richell and Erik Fetzer. In 2003, Fetzer registered the domain system76.com to sell computers with Linux operating systems preinstalled, but the idea was not pursued until two years later. The number 76 in the company name is a reference to 1776, the year the American Revolution took place. Richell explained that the company hoped to spark an "open source revolution", giving consumers a choice to not use proprietary software.
In mid-2005, the founders considered which Linux distribution to offer, with Red Hat Enterprise Linux, openSUSE, Yoper and other distributions evaluated. Ubuntu was initially dismissed, but Richell and Fetzer changed their mind quickly after a re-evaluation. Richell liked Canonical's business model of completely free software, backed by commercial support when needed. The first computers sold by System76 shipped with Ubuntu 5.10 Breezy Badger preinstalled.
In response to Canonical switching to the GNOME desktop from the Unity interface for future releases of Ubuntu in May 2017, System76 announced a new shell called Pop. The company announced in June 2017 that it would be creating its own Linux distribution based on Ubuntu called Pop!_OS.
System76 began manufacturing their Thelio line of desktops in 2018 at a factory in Denver, Colorado. The company moved into a 24,000-square-foot warehouse.
Products
System76's products include the Thelio series of desktops, the Meerkat mini computer, several laptops, and several rack mount servers. The computers are shipped with Pop! OS, the company's in-house Linux Distribution.
System76's computer models are named after various African animals.
In May 2016, the company released the Launch series of mechanical keyboards, which feature the open source QMK firmware and built-in USB hubs.
System76's firmware partly disables the Intel Management Engine; the Intel Management Engine is proprietary firmware which runs an operating system in post-2008 Intel chipsets.
On 4 April 2023, System76's CEO and founder Carl Richell announced System76's first in-house designed laptop, code-named "Virgo".
Pop!_OS
Pop!_OS is a Linux distribution developed by System76 based on Ubuntu, using the GNOME desktop environment. It is intended for use by "developers, makers, and computer science professionals". Pop!_OS provides full disk encryption by default as well as streamlined window management, workspaces, and keyboard shortcuts for navigation.
In 2022, a System76 Engineer revealed that the company was working on a new Desktop Environment for Pop!_OS called COSMIC.
Community relations
The company has sponsored the Ubuntu Developer Summit, Southern California Linux Expo, and other Open Source/Linux events and conferences. Their official support forums are hosted by Canonical Ltd., the primary developer of Ubuntu.
System76 is an active member in the Colorado Ubuntu Community, serving as the corporate sponsor for Ubuntu LoCo events and release parties in downtown Denver.
See also
Framework Computer
Linux adoption
Purism (company)
Pine64
Tuxedo Computers
Notes
References
External links
Companies based in Denver
Computer companies of the United States
Computer hardware companies
Computer systems companies
Consumer electronics brands
Online retailers of the United States
Ubuntu | System76 | Technology | 778 |
41,761,607 | https://en.wikipedia.org/wiki/Arabinopyranosyl-N-methyl-N-nitrosourea | Arabinopyranosyl-N-methyl-N-nitrosourea, also known as Aranose (Араноза) is a cytostatic anticancer chemotherapeutic drug of an alkylating type. Chemically it is a nitrosourea derivative. It was developed in the Soviet Union in the 1970s. It was claimed by its developers that its advantages over other nitrosoureas are a relatively low hematological toxicity (compared to other nitrosoureas available at that time) and a wider therapeutic index, which allows for its outpatient administration.
History
It was first synthesized in late 1970s in the Laboratory of Organic Synthesis of Soviet Cancer Research Institute (which belonged to Academy of Medical Sciences of the USSR). Its first clinical trials in USSR were conducted in the late 1980s. Those trials confirmed its potential clinical efficacy in melanoma and better relative safety & improved tolerability over other nitrosourea antineoplastic compounds available at that time. In 1996 the compound obtained a Russian Pharmacologic Committee (a Russian analog of the U.S. Food and Drug Administration (FDA) and EMA in the European Community) regulatory approval for its use in melanoma under the trade name Aranoza.
Chemical structure
The compound is basically a conjugate between the well-known cytotoxic and mutagenic residue of N-nitroso-N-methylurea and the sugar L-arabinose. The L-arabinose is a well-known component of some other effective anticancer drug molecules, including cytarabine (cytosine arabinoside) and fludarabine (2-fluoro-arabinoside of the nucleoside adenosine). The presence of L-arabinoside residue in the molecule greatly improves its penetration into malignant cells and its blood–brain barrier penetration and, while maintaining or even increasing anticancer activity, reduces the toxicity for normally fast dividing cells (bone marrow cells and mucosa of the gastro-intestinal system), improving the concentration ratio "tumor / normal tissue".
Mechanism of action
Alkylating DNA with DNA intra-strand adducts and, more problematic to the cell, cross-linking between strands. This, in turn, inhibits mitosis and promotes apoptosis of the cell affected.
Types of cancer for which it is indicated
Melanoma of the skin and eye, together with dacarbazine and interferon-alpha in combination chemotherapy. During preclinical trials it also showed some potential promise (in combination chemotherapy with cisplatin and gemcitabine or with cisplatin and irinotecan) for experimental non-squamous cell lung cancer.
Main side effects
Like many other cytotoxic drugs, it can often cause alopecia, headache, muscle pain, joint pain, nausea and vomiting, myelosuppression with leukopenia (especially neutropenia), lymphopenia, thrombocytopenia, anemia and immunosuppression. At therapeutic doses, those side effects are usually relatively milder compared with carmustine and lomustine.
See also
List of Russian drugs
References
External links
Aranose Patient Information (in Russian)
Nitrosoureas
Amino sugars
Arabinosides
Substances discovered in the 1970s
Drugs in the Soviet Union
Soviet inventions | Arabinopyranosyl-N-methyl-N-nitrosourea | Chemistry | 716 |