https://en.wikipedia.org/wiki/Lucy%20Collinson
Lucy Collinson is a microbiologist and electron microscopist working at the Francis Crick Institute in London, where she is the head of electron microscopy.

Early career

In 1998, Collinson completed a PhD in molecular microbiology at Barts and The London School of Medicine and Dentistry.
https://en.wikipedia.org/wiki/Characteristic%20based%20product%20configurator
A characteristic-based product configurator is a product configurator extension which uses a set of discrete variables, called characteristics (or features), to define all possible product variations.

Characteristics

There are two characteristic types: binary variables, which describe the presence or absence of a specific feature, and n-value variables, which describe a selection among n possible values for a specific product feature.

Constraints

The range of characteristic-value combinations is reduced by a variety of constraints that define which combinations can, cannot, and must occur alongside each other. These constraints can reflect technological or commercial restrictions in the real world. The constraints can represent: incompatibility, indicating mutual exclusivity between some product feature-values; and implication, indicating that the presence of a specific feature-value requires the presence of another feature-value.

Characteristic filters

The use of characteristics permits the user to abstract the finished product by describing filter conditions, which describe subsets of product variations using boolean functions on the characteristics. The AND, NOR, and OR logical operators simplify the boolean function definitions, because they permit the user to group together the characteristic-values which must all be present (AND), must all be absent (NOR), or must not all be absent (OR). Thanks to the decoupling introduced by the use of characteristics, it is not necessary to redefine the boolean functions when new commercial codes are introduced that can be mapped to a product variation already covered by an existing combination of characteristics.

Closed or open configuration

Using a characteristic-based configurator, it is possible to define a product variation in two ways. Open configuration: the user assigns values to all the characteristics, complying with the technological and commercial constraints, without having a set of base values to work from. Closed configuration: the user starts from a pre-selected base preparation (representing a sub-class of product variations) which fixes a subset of characteristics, and then optionally assigns the remaining (still unfixed) characteristic-values, complying with the technological and commercial constraints. A requested characteristic-value can replace a characteristic-value in the base preparation that is incompatible with the requested one.

Applications

Some examples of applications where a characteristic-based product configurator may be advantageous are: bill of materials applications, where every part number is associated with a characteristic filter that selects the subset of product variations in which the part number is used; manufacturing process management systems, where a characteristic filter associated with each operation selects the subset of product variants in which that operation is performed; and commercial applications, where the ordinality and mandatory requirements of a market are related to characteristic filters that identify the subset of product variations to which they apply.

Examples

pCon.planner from EasternGraphics is an OFML-based complex product configurator used for interior design.
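The following is a minimal Python sketch of the characteristics, constraints, and boolean filters described above; the feature names, domains, and constraint rules are invented for illustration and are not taken from any real configurator.

```python
from itertools import product

# Characteristics: one binary variable and two n-value variables (hypothetical).
characteristics = {
    "sunroof": [False, True],            # binary variable
    "engine": ["1.2L", "1.6L", "2.0L"],  # n-value variable
    "trim": ["base", "sport"],           # n-value variable
}

def constraints_ok(v):
    # Incompatibility: the 1.2L engine excludes the sport trim.
    if v["engine"] == "1.2L" and v["trim"] == "sport":
        return False
    # Implication: a sunroof requires at least the 1.6L engine.
    if v["sunroof"] and v["engine"] == "1.2L":
        return False
    return True

def filter_sporty_or_sunroof(v):
    # A characteristic filter: a boolean function (here an OR of two
    # characteristic-values) selecting a subset of product variations.
    return (v["trim"] == "sport") or v["sunroof"]

# Enumerate all variants, keep those satisfying the constraints, apply the filter.
variants = [dict(zip(characteristics, vals))
            for vals in product(*characteristics.values())]
valid = [v for v in variants if constraints_ok(v)]
selected = [v for v in valid if filter_sporty_or_sunroof(v)]
print(len(valid), "valid variants;", len(selected), "match the filter")
```

New commercial codes that map onto an existing combination of characteristic-values need no new boolean functions, which is the decoupling the text describes.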
https://en.wikipedia.org/wiki/Binary%20classification
Binary classification is the task of classifying the elements of a set into one of two groups (each called a class). Typical binary classification problems include: medical testing to determine if a patient has a certain disease or not; quality control in industry, deciding whether a specification has been met; in information retrieval, deciding whether a page should be in the result set of a search or not; in administration, deciding whether someone should be issued a driving licence or not; in cognition, deciding whether an object is food or not food.

When measuring the accuracy of a binary classifier, the simplest way is to count the errors. But in the real world often one of the two classes is more important, so that the number of each of the two types of error is of interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative).

Four outcomes

Given a classification of a specific data set, there are four basic combinations of actual data category and assigned category: true positives TP (correct positive assignments), true negatives TN (correct negative assignments), false positives FP (incorrect positive assignments), and false negatives FN (incorrect negative assignments). These can be arranged into a 2×2 contingency table, with rows corresponding to actual value – condition positive or condition negative – and columns corresponding to classification value – test outcome positive or test outcome negative.

Evaluation

From tallies of the four basic outcomes, there are many approaches that can be used to measure the accuracy of a classifier or predictor. Different fields have different preferences.

The eight basic ratios

A common approach to evaluation is to begin by computing two ratios of a standard pattern. There are eight basic ratios of this form that one can compute from the contingency table, which come in four complementary pairs (each pair summing to 1). These are obtained by dividing each of the four numbers by the sum of its row or column, yielding eight numbers, which can be referred to generically in the form "true positive row ratio" or "false negative column ratio". There are thus two pairs of column ratios and two pairs of row ratios, and one can summarize these with four numbers by choosing one ratio from each pair – the other four numbers are the complements.

The row ratios are: the true positive rate (TPR) = TP/(TP+FN), aka sensitivity or recall, the proportion of the population with the condition for which the test is correct, with complement the false negative rate (FNR) = FN/(TP+FN); and the true negative rate (TNR) = TN/(TN+FP), aka specificity (SPC), with complement the false positive rate (FPR) = FP/(TN+FP). The row ratios are independent of prevalence.

The column ratios are: the positive predictive value (PPV, aka precision) = TP/(TP+FP), the proportion of the population with a given test result for which the test is correct, with complement the false discovery rate (FDR) = FP/(TP+FP); and the negative predictive value (NPV) = TN/(TN+FN), with complement the false omission rate (FOR) = FN/(TN+FN). The column ratios depend on prevalence.

In diagnostic testing, the main ratios used are the true row ratios – true positive rate and true negative rate – where they are known as sensitivity and specificity.
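As a minimal illustration, the following Python sketch computes the eight basic ratios from the four outcome tallies of a hypothetical classifier; the counts are made up.

```python
# Hypothetical tallies from a 2x2 contingency table.
TP, FN, FP, TN = 90, 10, 30, 870

# Row ratios (based on actual condition), each pair summing to 1.
TPR = TP / (TP + FN)   # sensitivity / recall
FNR = FN / (TP + FN)   # complement of TPR
TNR = TN / (TN + FP)   # specificity
FPR = FP / (TN + FP)   # complement of TNR

# Column ratios (based on test outcome), each pair summing to 1.
PPV = TP / (TP + FP)   # precision
FDR = FP / (TP + FP)   # complement of PPV
NPV = TN / (TN + FN)
FOR = FN / (TN + FN)   # complement of NPV

print(f"sensitivity={TPR:.2f} specificity={TNR:.2f} "
      f"precision={PPV:.2f} NPV={NPV:.2f}")
assert abs(TPR + FNR - 1) < 1e-12 and abs(PPV + FDR - 1) < 1e-12
```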
In information retrieval, the main ratios are the true positive ratios (row and column) – positive predictive value and true positive rate – where they are known as precision and recall. Cullerne Bown has suggested a flow chart for determining which pair of indicators should be used when. Otherwise, there is no general rule for deciding. There is also no general agreement on how the pair of indicators should be used to decide concrete questions, such as when to prefer one classifier over another.

One can take ratios of a complementary pair of ratios, yielding four likelihood ratios (two column ratios of ratios, two row ratios of ratios). This is primarily done for the column (condition) ratios, yielding likelihood ratios in diagnostic testing. Taking the ratio of one of these groups of ratios yields a final ratio, the diagnostic odds ratio (DOR). This can also be defined directly as (TP×TN)/(FP×FN) = (TP/FN)/(FP/TN); this has a useful interpretation – as an odds ratio – and is prevalence-independent.

Other metrics

There are a number of other metrics, most simply the accuracy or fraction correct (FC), which measures the fraction of all instances that are correctly categorized; the complement is the fraction incorrect (FiC). The F-score combines precision and recall into one number via a choice of weighting, most simply equal weighting, as the balanced F-score (F1 score). Some metrics come from regression coefficients: the markedness and the informedness, and their geometric mean, the Matthews correlation coefficient. Other metrics include Youden's J statistic, the uncertainty coefficient, the phi coefficient, and Cohen's kappa.

Statistical binary classification

Statistical classification is a problem studied in machine learning in which the classification is performed on the basis of a classification rule. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new probabilistic observations into said categories. When there are only two categories the problem is known as statistical binary classification. Some of the methods commonly used for binary classification are: decision trees, random forests, Bayesian networks, support vector machines, neural networks, logistic regression, the probit model, genetic programming, multi expression programming, and linear genetic programming. Each classifier is best in only a select domain, based upon the number of observations, the dimensionality of the feature vector, the noise in the data, and many other factors. For example, random forests perform better than SVM classifiers for 3D point clouds.

Converting continuous values to binary

Binary classification may be a form of dichotomization, in which a continuous function is transformed into a binary variable. Tests whose results are continuous values, such as most blood values, can artificially be made binary by defining a cutoff value, with test results designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff. However, such conversion causes a loss of information, as the resultant binary classification does not tell how much above or below the cutoff a value is. A minimal sketch of such a cutoff conversion follows below.
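The following Python sketch illustrates the cutoff conversion just described, using the hCG figures from the pregnancy-test example discussed next; the classify helper is hypothetical.

```python
CUTOFF = 50.0  # mIU/ml, the cutoff from the hCG example

def classify(hcg_miu_per_ml: float) -> str:
    # Dichotomization: all information about distance from the cutoff is lost.
    return "positive" if hcg_miu_per_ml >= CUTOFF else "negative"

for value in (52.0, 200_000.0):
    # Both print simply "positive", although one value sits in an interval
    # of uncertainty and the other lies far beyond the cutoff.
    print(value, "->", classify(value))
```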
As a result, when converting a continuous value that is close to the cutoff to a binary one, the resultant positive or negative predictive value is generally higher than the predictive value given directly from the continuous value. In such cases, the designation of the test as either positive or negative gives the appearance of an inappropriately high certainty, while the value is in fact in an interval of uncertainty. For example, with the urine concentration of hCG as a continuous value, a urine pregnancy test that measured 52 mIU/ml of hCG may show as "positive" with 50 mIU/ml as cutoff, but it is in fact in an interval of uncertainty, which may be apparent only by knowing the original continuous value. On the other hand, a test result very far from the cutoff generally has a resultant positive or negative predictive value that is lower than the predictive value given from the continuous value. For example, a urine hCG value of 200,000 mIU/ml confers a very high probability of pregnancy, but conversion to binary values means that it shows just as "positive" as the value of 52 mIU/ml.

See also

Approximate membership query filter, examples of Bayesian inference, classification rule, confusion matrix, detection theory, kernel methods, multiclass classification, multi-label classification, one-class classification, prosecutor's fallacy, receiver operating characteristic, thresholding (image processing), uncertainty coefficient (aka proficiency), qualitative property, precision and recall.
https://en.wikipedia.org/wiki/Biology%20Monte%20Carlo%20method
Biology Monte Carlo methods (BioMOCA) have been developed at the University of Illinois at Urbana-Champaign to simulate ion transport in an electrolyte environment through ion channels or nanopores embedded in membranes. It is a 3-D particle-based Monte Carlo simulator for analyzing and studying the ion transport problem in ion channel systems or similar nanopores in wet/biological environments. The simulated system consists of a protein forming an ion channel (or an artificial nanopore such as a carbon nanotube, CNT), with a membrane (i.e. lipid bilayer) that separates two ion baths on either side. BioMOCA is based on two methodologies, namely Boltzmann transport Monte Carlo (BTMC) and particle-particle-particle-mesh (P3M). The first uses the Monte Carlo method to solve the Boltzmann equation, while the latter splits the electrostatic forces into short-range and long-range components.

Background

In full-atomic molecular dynamics simulations of ion channels, most of the computational cost goes to following the trajectory of water molecules in the system. In BioMOCA, however, the water is treated as a continuum dielectric background medium. In addition, the protein atoms of the ion channel are modeled as static point charges embedded in a finite volume with a given dielectric coefficient, as is the lipid membrane, which is treated as a static dielectric region inaccessible to ions. In fact the only non-static particles in the system are the ions. Their motion is assumed classical, interacting with other ions through electrostatic interactions and the pairwise Lennard-Jones potential. They also interact with the water background medium, which is modeled using a scattering mechanism.

The ensemble of ions in the simulation region is propagated synchronously in time and 3-D space by integrating the equations of motion using the second-order accurate leapfrog scheme. Ion positions r and forces F are defined at time steps t and t + dt; the ion velocities are defined at t − dt/2 and t + dt/2. The governing finite-difference equations of motion are

v(t + dt/2) = v(t − dt/2) + (F(t)/m) dt
r(t + dt) = r(t) + v(t + dt/2) dt

where F is the sum of the electrostatic and pairwise ion-ion interaction forces (a short sketch of this update appears below).

Electrostatic field solution

The electrostatic potential is computed at regular time intervals by solving the Poisson equation

∇ · (ε(r) ∇φ(r)) = −(ρ_ions(r) + ρ_protein(r))

where ρ_ions and ρ_protein are the charge densities of the ions and of the permanent charges on the protein, respectively, ε is the local dielectric constant or permittivity, and φ is the local electrostatic potential. Solving this equation provides a self-consistent way to include the applied bias and the effects of image charges induced at dielectric boundaries.

The ion and partial charges on protein residues are assigned to a finite rectangular grid using the cloud-in-cell (CIC) scheme. Solving the Poisson equation on the grid accounts for the particle-mesh component of the P3M scheme. However, this discretization leads to an unavoidable truncation of the short-range component of the electrostatic force, which can be corrected by computing the short-range charge-charge Coulombic interactions directly.
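As a rough illustration of the leapfrog update above, here is a minimal Python sketch for a single particle; the harmonic force, mass, and time step are placeholders rather than BioMOCA parameters.

```python
import numpy as np

def force(r):
    # Placeholder harmonic force standing in for the electrostatic plus
    # pairwise ion-ion forces F used in BioMOCA.
    return -r

m, dt = 1.0, 1e-3
r = np.array([1.0, 0.0, 0.0])   # position at time t
v = np.array([0.0, 1.0, 0.0])   # velocity at time t - dt/2

for _ in range(1000):
    v = v + force(r) / m * dt   # v(t + dt/2) = v(t - dt/2) + F(t)/m * dt
    r = r + v * dt              # r(t + dt)   = r(t) + v(t + dt/2) * dt
print(r, v)
```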
Dielectric coefficient

Assigning appropriate values for the dielectric permittivity of the protein, membrane, and aqueous regions is of great importance. The dielectric coefficient determines the strength of the interactions between charged particles and also the dielectric boundary forces (DBF) on ions approaching a boundary between two regions of different permittivity. However, at the nanoscale the task of assigning a specific permittivity is problematic and not straightforward.

The protein or membrane environment can respond to an external field in a number of different ways. Field-induced dipoles, reorientation of permanent dipoles, protonation and deprotonation of protein residues, and larger-scale reorganization of ionized side-chains and water molecules, both within the interior and on the surface of the protein, are all examples of how complicated the assignment of permittivity is. In MD simulations, where all the charges, dipoles, and field-induced atomic dipoles are treated explicitly, a dielectric value of 1 is suggested to be appropriate. However, in reduced-particle ion simulation programs such as BioMOCA, where the protein, membrane, and water are continuum backgrounds treated implicitly, and where ion motion takes place on the same time scale as the protein's response to its presence, it is very difficult to assign the dielectric coefficients. In fact, changing the dielectric coefficients can easily alter channel characteristics such as ion permeation and selectivity.

The assignment of a dielectric coefficient for water is another key issue. The water molecules inside ion channels can be very ordered, due to the tapered size of the pore, which is often lined with highly charged residues, or due to hydrogen bond formation between water molecules and the protein. As a result, the dielectric constant of water inside an ion channel can be quite different from its value under bulk conditions. To make matters even more complicated, the dielectric coefficient of water inside nanopores is not necessarily an isotropic scalar value, but an anisotropic tensor having different values in different directions.

Anisotropic permittivity

It has become evident that the macroscopic properties of a system do not necessarily extend to molecular length scales. In a research study carried out by Reza Toghraee, R. Jay Mashl, and Eric Jakobsson at the University of Illinois, Urbana-Champaign, molecular dynamics simulations were used to study the properties of water in featureless hydrophobic cylinders with diameters ranging from 1 to 12 nm. This study showed that water undergoes distinct transitions in structure, dielectric properties, and mobility as the tube diameter is varied. In particular, they found that the dielectric properties in the 1 to 10 nm range are quite different from those of bulk water and are in fact anisotropic in nature. Although such featureless hydrophobic channels do not represent actual ion channels, and more research is needed before such data can be applied to ion channels, it is evident that water properties like permittivity inside an ion channel or nanopore could be much more complicated than previously thought. While a high axial dielectric constant shields an ion's electrostatic charge in the axial direction (along the channel), a low radial dielectric constant increases the interaction between the mobile ion and the partial charges, or the dielectric image charges, on the channel, conveying stronger selectivity in ion channels.

Solving the Poisson equation based on an anisotropic permittivity has been incorporated into BioMOCA using the box integration discretization method, which is briefly described below.

Calculations

Box integration discretization

In order to use box integration for discretizing a D-dimensional Poisson equation ∇ · (ε ∇φ) = −ρ, with ε a diagonal D × D tensor, this differential equation is reformulated as an integral equation.
Integrating the above equation over a D-dimensional region Ω and using the Gauss theorem, the integral formulation is obtained:

∮_∂Ω (ε ∇φ) · n dA = −∫_Ω ρ dV

In the following a two-dimensional case is assumed; extending to a three-dimensional system is straightforward, since the Gauss theorem also holds in one and three dimensions. ε is assumed to be given on the rectangular regions between nodes, while φ is defined on the grid nodes (as illustrated in the figure at the right). The integration regions are then chosen as rectangles centered around a node and extending to its 4 nearest-neighbor nodes. The gradient ∇φ is approximated using a centered difference normal to the boundary of the integration region, averaged over the integration surface. This allows the left-hand side of the Poisson equation to be approximated to first order by a five-point stencil in which the centered differences toward the east/west neighbors are weighted by the x-component of the diagonal tensor ε and those toward the north/south neighbors by its y-component. Discretizing the right-hand side of the Poisson equation is fairly simple: ρ is discretized on the same grid nodes as φ.

Ion size

The finite size of ions is accounted for in BioMOCA using pairwise repulsive forces derived from the 6–12 Lennard-Jones potential. A truncated-shifted form of the Lennard-Jones potential is used in the simulator to mimic ionic core repulsion. The modified form of the Lennard-Jones pairwise potential that retains only the repulsive component is given by

U_ij(r) = 4 ε_LJ [ (σ_ij/r)¹² − (σ_ij/r)⁶ ] + ε_LJ   for r ≤ 2^(1/6) σ_ij, and 0 otherwise

where ε_LJ is the Lennard-Jones energy parameter and σ_ij is the average of the individual Lennard-Jones distance parameters for particles i and j. Using a truncated form of the potential is computationally efficient while preventing the ions from overlapping or coalescing, something that would be clearly unphysical. A short sketch of this truncated potential follows below.
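The following Python sketch evaluates the truncated-shifted repulsive Lennard-Jones potential as reconstructed above (the standard WCA form implied by "retains only the repulsive component"); the parameter values are illustrative, not BioMOCA force-field values.

```python
def lj_repulsive(r, eps, sigma_i, sigma_j):
    """Truncated-shifted LJ potential keeping only the repulsive core."""
    sigma = 0.5 * (sigma_i + sigma_j)  # average of the individual distance parameters
    r_cut = 2 ** (1 / 6) * sigma       # truncation at the potential minimum
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    # Shifted by +eps so the potential goes continuously to zero at r_cut.
    return 4.0 * eps * (sr6 * sr6 - sr6) + eps

print(lj_repulsive(0.3, eps=0.1, sigma_i=0.3, sigma_j=0.35))
```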
Ion-protein interaction

The availability of high-resolution X-ray crystallographic measurements of complete molecular structures provides information about the type and location of all atoms that form the protein. In BioMOCA the protein atoms are modeled as static point charges embedded in a finite volume inaccessible to the ions and associated with a user-defined dielectric coefficient. A number of force-field parameter sets are also available that provide the charge and radius of the atoms in different amino-acid groups. The combination of the molecular structure and the force field provides the coordinates, radius, and charge of each atom in the protein channel. BioMOCA uses this information in the standard PQR (Position-Charge-Radius) format to map the protein system onto a rectangular grid.

Ideally, the steric interactions between protein atoms and ions in the aqueous medium would use a repulsive potential like Lennard-Jones to prevent ions from penetrating the protein. As this approach could add a significant computational load, a simpler approach is chosen that treats the protein surfaces as predetermined hard-wall boundaries. Many recent open-source molecular biology packages have built-in facilities that determine the volume accessible to ions in a protein system. The Adaptive Poisson Boltzmann Solver (APBS) scheme has been incorporated into BioMOCA to obtain the accessible volume region and thereby partition the simulation domain into continuous regions. Ions are denied access to the protein and lipid regions: if any point within the finite-size ionic sphere crosses the protein or membrane boundary, a collision is assumed and the ion is reflected diffusively.

Ion-water interactions

As a reduced-particle approach, BioMOCA replaces the explicit water molecules with a continuum background and handles the ion-water interactions using the BTMC method, in which appropriate scattering rates must be chosen. In other words, ion trajectories are randomly interrupted by scattering events that account for the ions' diffusive motion in water. Between these scattering events, ions follow the Newtonian forces. The free flight times Tf are generated statistically from the total scattering rate according to

Tf = −ln(r) / λ

where r is a random number uniformly distributed on the unit interval and λ, in general a function of momentum, is the total scattering rate for all collision mechanisms. At the end of each free flight, the ion's velocity is reselected randomly from a Maxwellian distribution. As the correct scattering mechanism for ion-water interactions in non-bulk electrolyte solutions has yet to be developed, BioMOCA uses a position-dependent scattering rate linked to the local diffusivity. This dependence on position comes from the fact that water molecules can have a different degree of organization in different regions, which affects the scattering rate.

Position-dependent diffusivity

It is widely accepted that ions and water molecules do not have the same mobility or diffusivity in confined regions as in bulk; in fact, a reduction in the effective mobility of ions inside ion channels is more likely. In reduced-particle methods, where the channel water is assumed to be an implicit continuum background, a mean ion mobility is needed to describe how ions diffuse under local electrostatic forces and random events. In transport Monte Carlo simulations, the total scattering rate λ is assumed to result only from ion-water interactions; it is related to the ion diffusivity by

λ = kB T / (m D)

where m is the mass of the ion and D is its diffusion constant. As the equation indicates, a reduced diffusivity of ions inside the lumen of the channel leads to an increased incidence of scattering events; a short sketch follows below.
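Here is a hedged Python sketch of the free-flight generation just described: a flight time drawn from the total scattering rate λ = kB T/(m D), followed by a velocity reselected from a Maxwellian. The thermal energy, ion mass, and diffusivity numbers are illustrative placeholders, not BioMOCA parameters.

```python
import numpy as np

rng = np.random.default_rng()
kT = 4.11e-21    # thermal energy at ~298 K (J), illustrative
m = 6.5e-26      # ion mass (kg), roughly a potassium ion, illustrative
D = 2.0e-9       # local diffusion constant (m^2/s); position dependent in general

lam = kT / (m * D)                      # total scattering rate, lambda = kT / (m D)
t_free = -np.log(rng.random()) / lam    # T_f = -ln(r) / lambda

# After the flight, reselect the velocity from a Maxwellian distribution:
# each Cartesian component is Gaussian with variance kT/m.
v_new = rng.normal(0.0, np.sqrt(kT / m), size=3)
print(t_free, v_new)
```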
Hydration shells

In addition to having a diffusive effect on ion transport, water molecules also form hydration shells around individual ions due to their polar nature. The hydration shell not only shields the charge on ions from other ions but also modulates the ion radial distribution function, causing the formation of peaks and troughs. The average minimum distance between two ions is increased, as there is always at least one layer of water molecules present between them, acting as a physical deterrent that prevents two ions from getting too close to each other, in a manner similar to the short-range repulsive component of the Lennard-Jones potential. The theory of hydration shells is well developed in the physical chemistry literature, but a simple model is required that captures the essential effects with as little computational overhead as possible. For this purpose, the pairwise potential discussed by Im and Roux is implemented to include the effect of hydration shells. The coefficients ci were determined empirically for a 1 M KCl solution, using MD simulations to benchmark the ion radial distribution functions against equilibrium Monte Carlo simulations. The effect of hydration shells was found to be important in simulations at higher salt concentrations, where the conductance of many ion channels, porin among them, is observed to saturate as the salt concentration in the electrolyte baths is further increased. Earlier simulations that did not include a model of hydration shells did not reproduce this conductance saturation behavior. This suggests an additional repulsive potential acting to prevent ion crowding, and hence limiting the concentration of ions and the current density in the confined space of the pore, even at high bath salt concentration. When the repulsive potential was included, moderate channel conductance was observed.

Conditions and methods

Boundary conditions

The electrical and physiological properties of ion channels are experimentally measured by inserting the channel into a lipid membrane separating two baths containing solutions of specific concentrations. A constant electrostatic bias is applied across the channel by immersing electrodes in the two baths. Formulating boundary conditions that accurately represent these contact regions may require enormously large bath regions and is a challenging task. Beyond a Debye length from the membrane, the electrostatic potential and ion densities do not vary appreciably; this assumption is supported by the continuum results presented earlier. For typical salt concentrations used in ion channel simulations, the Debye length is of the order of 10 Å. Using this assumption, Dirichlet boundary conditions are imposed on the potential at the two domain boundary planes transverse to the channel, taking care that these planes are sufficiently far from the membrane.

The other problem in duplicating the experimental conditions is maintaining a fixed charge density in the two baths. This is treated by maintaining the specified density in two buffer regions extending from the boundary planes toward the membrane. The number of ions needed to maintain the density in the two buffer regions is calculated at the start of the simulation. The count of ions in these buffers is sampled throughout the simulation, and an ion is injected whenever a deficit is observed. The initial velocity of the injected particle is drawn from a Maxwellian distribution. Ions can leave the system only by exiting through the two Dirichlet boundary planes, and an ion is never removed artificially from these buffer regions. Reflections from the Neumann boundary planes are treated as elastic reflections.

Multi-grids and grid focusing method

In almost all methods for simulating ion channels, the major computational cost comes from the calculation of the electrostatic forces acting on the ions. In continuum models, for instance, where ionic densities exist rather than explicit ions, the electrostatic potential is calculated in a self-consistent manner by solving the Poisson equation. In MD simulations, on the other hand, the electrostatic forces acting on the particles are calculated by explicit evaluation of the Coulombic force term, often splitting the short-range and long-range electrostatic forces so they can be computed with different methods. In a reduced-particle method, the long-range electrostatic forces are evaluated by solving the Poisson equation and augmenting the forces so obtained with a short-range component. By solving the Poisson equation it is possible to self-consistently include the forces arising from the bias applied to the system, whereas this is a difficult issue to address in MD simulations.

Currently there are two Poisson solvers implemented in BioMOCA based on the finite difference method. The first uses the pre-conditioned conjugate gradient scheme (pCG) and is used by default.
The latter is borrowed from the APBS solver, which uses a V-multi-grid scheme. Apart from the numerical approach used to solve the Poisson equation, the main difference between the two solvers is how they address the permittivity in the system: in the first solver a dielectric value is assigned to each cell in the grid, while in the APBS solver the dielectric coefficients are defined on the grid nodes. As discussed earlier, the box integration method is used in the pCG solver, which treats the Poisson equation in the most accurate way. Even though a full multi-grid solver based on the box integration method has been under development, there is a neat way to reuse the already existing code to treat ion channel systems.

Ion channel simulations require the presence of large bath regions for an accurate treatment of screening. Such bath regions make the mesh domain of the Poisson equation large, and lead to either a large number of grid points with fine mesh resolution or a small number of grid points with very coarse discretization. From bulk simulations, a coarse mesh is sufficient for describing the baths using the P3M scheme. However, a fine resolution is required in the channel domain because of the highly charged nature of these regions and the presence of spatially varying dielectric regions. Moreover, the ultimate interest is in studying the channel behavior in terms of ion permeability, selectivity, gating, density, and so on. In other words, it is better to concentrate the computational resources on the channel region and spend the bare minimum on the baths, reducing the overall computational cost and speeding simulations up from weeks to days. A scheme based on the grid focusing method has been developed that makes it possible to satisfy the requirements of a large bath region and a fine grid resolution in the channel at the same time, in a computationally effective way. This methodology can accommodate multiple fine mesh domains, as may be needed to describe multi-pore channels like OmpF porin or an array of ion channels sharing the same bath regions, or even finer meshes inside a fine mesh for relatively large channels with narrow ion passages, like the nicotinic receptor channel.

The first grid is a coarse mesh spanning the entire problem domain, including the bath regions and the channel region. The second grid (and so on for any further grids: third, fourth, etc.) is a relatively much finer mesh that spans a sub-domain of the system containing the region requiring fine resolution, such as the channel pore. The Poisson equation is first solved on the coarse mesh with all the Dirichlet and Neumann boundary conditions, taking into account the applied bias. Next, the boundary conditions for the secondary meshes are obtained by interpolating from the first or previous solutions of the Poisson equation. The Poisson equation is then solved again on the finer meshes using the new boundary conditions. In this way, electrostatic fields with different mesh discretization for different regions can be generated; a conceptual sketch follows below.
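As a conceptual illustration only, the following Python sketch mimics grid focusing in one dimension with uniform permittivity: a coarse solve over the whole domain supplies interpolated Dirichlet values for a finer solve over the channel sub-domain. The geometry and charge profile are toy placeholders, and the dielectric tensor and P3M details are omitted.

```python
import numpy as np

def solve_poisson_1d(phi_left, phi_right, rho, h):
    # Finite-difference Poisson equation phi'' = -rho with eps = 1 and
    # Dirichlet values at both ends.
    n = len(rho)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    b = -rho * h * h
    b[0] -= phi_left
    b[-1] -= phi_right
    return np.linalg.solve(A, b)

L, n_coarse = 1.0, 20
h_c = L / (n_coarse + 1)
x_c = np.linspace(h_c, L - h_c, n_coarse)
rho = np.exp(-((x_c - 0.5) / 0.05) ** 2)     # charge concentrated in the "channel"

phi_c = solve_poisson_1d(0.0, 1.0, rho, h_c)  # coarse solve with an applied bias

# Focus on the channel sub-domain [0.4, 0.6] with a finer mesh, taking its
# boundary potentials by interpolation from the coarse solution.
lo, hi = 0.4, 0.6
phi_lo, phi_hi = np.interp([lo, hi], x_c, phi_c)
n_fine = 40
h_f = (hi - lo) / (n_fine + 1)
x_f = np.linspace(lo + h_f, hi - h_f, n_fine)
rho_f = np.exp(-((x_f - 0.5) / 0.05) ** 2)
phi_f = solve_poisson_1d(phi_lo, phi_hi, rho_f, h_f)
print(phi_f[:3])
```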
EMF and DBF

The electromotive force (EMF) is a measure of the energy needed for a charged particle such as an ion to cross an ion channel embedded in a membrane. Part of this potential energy barrier is due to the interaction between the crossing ion and the permanent/partial charges on the protein residues. The other part comes from the induced dipoles in the protein/membrane dielectric medium, and is referred to as the dielectric boundary force (DBF). To compute the DBF alone, one may turn off all the static charges on the protein residues, drag the ion through the pore, and compute the energy barrier from the electrostatic energy of the ion at each position along the path. It is important to note that EMF or DBF measurements are only qualitative measures, as an ion does not necessarily cross the channel through the center of its lumen in a straight line, and it is often accompanied by other ions moving in the same or opposite directions, which dramatically changes the dynamics of the system. Moreover, unlike steered MD calculations, where the protein residues dynamically reposition themselves as an ion or ions bounce across the channel, in these EMF or DBF calculations the protein is modeled as a static continuum, which further affects the quantitative accuracy of the energy calculations. Another issue that additionally impacts the measurements is the absence of hydration-shell water molecules, which move with the ion and shield part of its charge. Having said all of the above, computing the EMF or DBF is still valuable for addressing channel selectivity or gating, and computing either of these two energy barriers is available as an option in BioMOCA.

Visualization using VMD

VMD was equipped with the option of loading BioMOCA structures. This is a very useful feature, as one can load both the protein structure (i.e. a PDB or PQR file) and the structures generated by BioMOCA to make comparisons. The figure at the right shows how BioMOCA has generated a structure for the gramicidin channel with a membrane wrapped around it. Furthermore, BioMOCA also dumps the ion trajectories in standard formats so they can later be loaded into molecular visualization tools such as VMD and watched frame by frame as a movie.

Recording trajectories in binary

Other than counting the number of ions crossing the channel, it is sometimes desirable to study their behavior in different regions of the channel; examples would be the average occupancy of ions, or their average velocity, inside the channel or a nanopore. BioMOCA has been equipped with the option of dumping every ion's position, average and instantaneous velocities, potential and kinetic energies, average and instantaneous displacements, and other information at every step (or every few steps) of the simulation in ASCII format, so that such trajectory information can be studied later to gather further statistics. From a technical point of view, however, dumping such information for tens of ions, even at every few hundred time steps, can slow down the simulations and accumulate into huge files amounting to tens of gigabytes. Loading such files back from disk storage is also a very time-consuming and computationally inefficient procedure. Moreover, recording the numerical information in ASCII format does not preserve machine precision, causing a loss of accuracy.

Solving these problems is actually an easy task: simply avoid the ASCII format and use a binary format instead. Not only does this preserve machine precision, but writing to and reading from the file system is also much faster. The computational overhead of dumping the trajectories becomes negligible, and the trajectory files become about two orders of magnitude smaller. The downside may be that programming and decoding the data becomes trickier, but once done correctly and with care, the advantages of using a binary format are well worth the extra effort. BioMOCA is now equipped with the tools to record the trajectory information in binary format.
See also

Monte Carlo method, Biology, Computational biology
https://en.wikipedia.org/wiki/Butterworth%20filter
The Butterworth filter is a type of signal processing filter designed to have a frequency response that is as flat as possible in the passband. It is also referred to as a maximally flat magnitude filter. It was first described in 1930 by the British engineer and physicist Stephen Butterworth in his paper entitled "On the Theory of Filter Amplifiers".

Original paper

Butterworth had a reputation for solving very complex mathematical problems thought to be 'impossible'. At the time, filter design required a considerable amount of designer experience due to limitations of the theory then in use. The filter was not in common use for over 30 years after its publication. An ideal filter cannot be achieved, but Butterworth showed that successively closer approximations were obtained with increasing numbers of filter elements of the right values. At the time, filters generated substantial ripple in the passband, and the choice of component values was highly interactive. Butterworth showed that a low-pass filter could be designed whose gain as a function of frequency (i.e., the magnitude of its frequency response) is

G(ω) = 1 / √(1 + ω^(2n))

where ω is the angular frequency in radians per second and n is the number of poles in the filter, equal to the number of reactive elements in a passive filter. Its cutoff frequency (the half-power point of approximately −3 dB, or a voltage gain of 1/√2 ≈ 0.7071) is normalized to ω = 1 radian per second. Butterworth only dealt with filters with an even number of poles in his paper, though odd-order filters can be created with the addition of a single-pole filter applied to the output of the even-order filter. He built his higher-order filters from 2-pole filters separated by vacuum tube amplifiers. His plot of the frequency response of 2-, 4-, 6-, 8-, and 10-pole filters is shown as A, B, C, D, and E in his original graph; a short sketch reproducing these curves follows below.

Butterworth solved the equations for two-pole and four-pole filters, showing how the latter could be cascaded when separated by vacuum tube amplifiers, thus enabling the construction of higher-order filters despite inductor losses. In 1930, low-loss core materials such as molypermalloy had not been discovered, and air-cored audio inductors were rather lossy. Butterworth discovered that it was possible to adjust the component values of the filter to compensate for the winding resistance of the inductors. He used coil forms of 1.25″ diameter and 3″ length with plug-in terminals. Associated capacitors and resistors were contained inside the wound coil form. The coil formed part of the plate load resistor. Two poles were used per vacuum tube, and RC coupling was used to the grid of the following tube. Butterworth also showed that the basic low-pass filter could be modified to give low-pass, high-pass, band-pass and band-stop functionality.

Overview

The frequency response of the Butterworth filter is maximally flat (i.e., has no ripples) in the passband and rolls off towards zero in the stopband. When viewed on a logarithmic Bode plot, the response slopes off linearly towards negative infinity. A first-order filter's response rolls off at −6 dB per octave (−20 dB per decade); all first-order lowpass filters have the same normalized frequency response. A second-order filter decreases at −12 dB per octave, a third-order at −18 dB, and so on. Butterworth filters have a monotonically changing magnitude function of ω, unlike other filter types that have non-monotonic ripple in the passband and/or the stopband.
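The following Python sketch reproduces the Butterworth magnitude response G(ω) = 1/√(1 + ω^(2n)) for the 2- to 10-pole cases plotted in the original paper; the frequency grid is an arbitrary choice.

```python
import numpy as np

w = np.logspace(-1, 1, 5)   # angular frequencies around the normalized cutoff of 1 rad/s
for n in (2, 4, 6, 8, 10):
    gain = 1.0 / np.sqrt(1.0 + w ** (2 * n))
    # At w = 1 every order gives -3.01 dB, the half-power point.
    print(n, np.round(20 * np.log10(gain), 1))
```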
Compared with a Chebyshev Type I/Type II filter or an elliptic filter, the Butterworth filter has a slower roll-off and thus requires a higher order to implement a particular stopband specification, but Butterworth filters have a more linear phase response in the passband than Chebyshev Type I/Type II and elliptic filters can achieve.

Example

A simple example of a Butterworth filter is the third-order low-pass design shown in the figure on the right, with C2 = 4/3 F, R4 = 1 Ω, L1 = 3/2 H, and L3 = 1/2 H. Taking the impedance of a capacitor C to be 1/(Cs) and the impedance of an inductor L to be Ls, where s = σ + jω is the complex frequency, the circuit equations yield the transfer function for this device:

H(s) = 1 / (s³ + 2s² + 2s + 1)

The magnitude of the frequency response (gain) is given by

G(ω) = |H(jω)| = 1 / √(1 + ω⁶)

and the phase is given by φ(ω) = arg H(jω). The group delay is defined as the negative derivative of the phase shift with respect to angular frequency and is a measure of the distortion in the signal introduced by phase differences for different frequencies. The gain and the delay for this filter are plotted in the graph on the left. There are no ripples in the gain curve in either the passband or the stopband. The log of the absolute value of the transfer function is plotted in complex frequency space in the second graph on the right. The function is defined by the three poles in the left half of the complex frequency plane. These are arranged on a circle of radius unity, symmetric about the real axis. The gain function has three more poles in the right half-plane, completing the circle.

By replacing each inductor with a capacitor and each capacitor with an inductor, a high-pass Butterworth filter is obtained. A band-pass Butterworth filter is obtained by placing a capacitor in series with each inductor and an inductor in parallel with each capacitor to form resonant circuits; the value of each new component must be selected to resonate with the old component at the frequency of interest. A band-stop Butterworth filter is obtained by placing a capacitor in parallel with each inductor and an inductor in series with each capacitor to form resonant circuits; the value of each new component must be selected to resonate with the old component at the frequency that is to be rejected.

Transfer function

Like all filters, the typical prototype is the low-pass filter, which can be modified into a high-pass filter, or placed in series with others to form band-pass and band-stop filters, and higher order versions of these. The gain G(ω) of an n-th-order Butterworth low-pass filter is given in terms of the transfer function H(s) as

G²(ω) = |H(jω)|² = G₀² / (1 + (ω/ωc)^(2n))

where n is the order of the filter, ωc is the cutoff frequency (approximately the −3 dB frequency), and G₀ is the DC gain (gain at zero frequency). It can be seen that as n approaches infinity, the gain becomes a rectangle function: frequencies below ωc are passed with gain G₀, while frequencies above ωc are suppressed. For smaller values of n, the cutoff is less sharp.

We wish to determine the transfer function H(s), where s = σ + jω (from the Laplace transform). Because |H(jω)|² = H(jω)H*(jω) and, as a general property of Laplace transforms at s = jω, H(−jω) = H*(jω), if we select H(s) such that

H(s)H(−s) = G₀² / (1 + (−s²/ωc²)^n)

then, with s = jω, we have the frequency response of the Butterworth filter. The poles of this expression occur on a circle of radius ωc at equally spaced points, symmetric around the negative real axis.
For stability, the transfer function H(s) is chosen such that it contains only the poles in the negative real half-plane of s. The k-th pole is specified by

−sk²/ωc² = (−1)^(1/n) = e^(j(2k−1)π/n),  k = 1, 2, ..., n

and hence

sk = ωc e^(j(2k+n−1)π/(2n)),  k = 1, 2, ..., n.

The transfer (or system) function may be written in terms of these poles as

H(s) = G₀ ∏_{k=1}^{n} ωc/(s − sk)

where ∏ is the product operator. The denominator is a Butterworth polynomial in s.

Normalized Butterworth polynomials

The Butterworth polynomials may be written in complex form as above, but are usually written with real coefficients by multiplying pole pairs that are complex conjugates, such as s1 and sn. The polynomials are normalized by setting ωc = 1. The normalized Butterworth polynomials then have the general product form

Bn(s) = ∏_{k=1}^{n/2} [s² − 2s cos((2k+n−1)π/(2n)) + 1]   (n even)
Bn(s) = (s+1) ∏_{k=1}^{(n−1)/2} [s² − 2s cos((2k+n−1)π/(2n)) + 1]   (n odd)

The exact factors of the Butterworth polynomials of order 1 through 6 are:

n = 1: (s + 1)
n = 2: (s² + √2 s + 1)
n = 3: (s + 1)(s² + s + 1)
n = 4: (s² + √(2−√2) s + 1)(s² + √(2+√2) s + 1)
n = 5: (s + 1)(s² + (φ−1) s + 1)(s² + φ s + 1)
n = 6: (s² + √(2−√3) s + 1)(s² + √2 s + 1)(s² + √(2+√3) s + 1)

where the Greek letter phi (φ) represents the golden ratio, an irrational number that is a solution of the quadratic equation φ² = φ + 1, with value φ = (1 + √5)/2 ≈ 1.6180. Decimal factors for orders up to 10 can be generated numerically, as in the short sketch below. The n-th Butterworth polynomial can also be written as a sum

Bn(s) = Σ_{k=0}^{n} ak s^k

with its coefficients ak given by the recursion formula

a_{k+1} = ak cos(kγ) / sin((k+1)γ),  a0 = 1,  γ = π/(2n)

and by the product formula

ak = ∏_{μ=1}^{k} cos((μ−1)γ) / sin(μγ)

where a0 = an = 1; further, ak = a_{n−k}. The normalized Butterworth polynomials can be used to determine the transfer function for any low-pass filter cutoff frequency ωc as

H(s) = G₀ / Bn(a),  where a = s/ωc.

Transformation to other bandforms is also possible; see prototype filter.

Maximal flatness

Assuming ωc = 1 and G₀ = 1, the derivative of the gain with respect to frequency can be shown to be

dG/dω = −n ω^(2n−1) G³

which is monotonically decreasing for all ω, since the gain G is always positive. The gain function of the Butterworth filter therefore has no ripple. The series expansion of the gain is given by

G(ω) = 1 − (1/2) ω^(2n) + (3/8) ω^(4n) − ...

In other words, all derivatives of the gain up to but not including the 2n-th derivative are zero at ω = 0, resulting in "maximal flatness". If the requirement to be monotonic is limited to the passband only and ripples are allowed in the stopband, then it is possible to design a filter of the same order, such as the inverse Chebyshev filter, that is flatter in the passband than the "maximally flat" Butterworth.

High-frequency roll-off

Again assuming ωc = 1, the slope of the log of the gain for large ω is

d(log G)/d(log ω) = −n.

In decibels, the high-frequency roll-off is therefore 20n dB/decade, or 6n dB/octave (the factor of 20 is used because the power is proportional to the square of the voltage gain; see 20 log rule).

Minimum order

To design a Butterworth filter using the minimum required number of elements, the minimum order of the filter may be calculated as

n = ⌈ log₁₀((10^(As/10) − 1)/(10^(Ap/10) − 1)) / (2 log₁₀(ωs/ωp)) ⌉

where ωp and Ap are the passband frequency and attenuation at that frequency in dB, ωs and As are the stopband frequency and attenuation at that frequency in dB, n is the minimum number of poles (the order of the filter), and ⌈·⌉ denotes the ceiling function.

Nonstandard cutoff attenuation

The cutoff attenuation for Butterworth filters is usually defined to be −3.01 dB. If a different attenuation Ac at the cutoff frequency is desired, then the following factor may be applied to each pole, whereupon the poles continue to lie on a circle, but the radius is no longer unity:

sA,k = sk / (10^(Ac/10) − 1)^(1/(2n))

Here sA,k is the relocated pole positioned to set the desired cutoff attenuation, sk is a −3.01 dB cutoff pole that lies on the unit circle, Ac is the desired attenuation at the cutoff frequency in dB (1 dB, 10 dB, etc.), and n is the number of poles (the order of the filter). This cutoff attenuation equation may be derived through algebraic manipulation of the Butterworth defining equation stated at the top of the page.
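The normalized poles and polynomial coefficients above can be checked numerically; the following Python sketch computes them with NumPy (butterworth_poly is a hypothetical helper name, not a library function).

```python
import numpy as np

def butterworth_poly(n):
    """Coefficients of the normalized (wc = 1) Butterworth polynomial Bn(s)."""
    k = np.arange(1, n + 1)
    # Left half-plane poles: s_k = exp(j*(2k + n - 1)*pi / (2n)).
    poles = np.exp(1j * (2 * k + n - 1) * np.pi / (2 * n))
    # Expand prod (s - s_k); imaginary parts cancel to numerical noise.
    return np.real(np.poly(poles))

print(np.round(butterworth_poly(3), 6))  # expect [1, 2, 2, 1]
```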
Filter implementation and design

There are several different filter topologies available to implement a linear analogue filter. The most often used topology for a passive realisation is the Cauer topology, and the most often used topology for an active realisation is the Sallen–Key topology.

Cauer topology

The Cauer topology uses passive components (shunt capacitors and series inductors) to implement a linear analog filter. The Butterworth filter having a given transfer function can be realised using a Cauer 1-form. The k-th element of the ladder is given by

Ck = 2 sin((2k − 1)π/(2n))   (k odd)
Lk = 2 sin((2k − 1)π/(2n))   (k even)

The filter may start with a series inductor if desired, in which case the Lk are for k odd and the Ck are for k even. These formulae may usefully be combined by making both Lk and Ck equal to gk; that is, gk is the immittance divided by s:

gk = 2 sin((2k − 1)π/(2n)),  k = 1, 2, ..., n.

These formulae apply to a doubly terminated filter (that is, the source and load impedance are both equal to unity) with ωc = 1. This prototype filter can be scaled for other values of impedance and frequency. For a singly terminated filter (that is, one driven by an ideal voltage or current source) the element values are given by a corresponding recursion, not reproduced here. Voltage-driven filters must start with a series element and current-driven filters must start with a shunt element. These forms are useful in the design of diplexers and multiplexers.

Sallen–Key topology

The Sallen–Key topology uses active and passive components (noninverting buffers, usually op amps, resistors, and capacitors) to implement a linear analog filter. Each Sallen–Key stage implements a conjugate pair of poles; the overall filter is implemented by cascading all stages in series. If there is a real pole (in the case where n is odd), this must be implemented separately, usually as an RC circuit, and cascaded with the active stages. For the second-order Sallen–Key circuit shown to the right, the transfer function is given by

H(s) = 1 / (1 + C2(R1 + R2) s + C1 C2 R1 R2 s²)

We wish the denominator to be one of the quadratic terms in a Butterworth polynomial. Assuming ωc = 1, this means that

C1 C2 R1 R2 = 1   and   C2 (R1 + R2) = −2 cos((2k + n − 1)π/(2n)).

This leaves two undefined component values that may be chosen at will. Butterworth lowpass filters with Sallen–Key topology of third and fourth order, using only one op amp, are described by Huelsman, and further single-amplifier Butterworth filters of higher order are given by Jurišić et al.

Digital implementation

Digital implementations of Butterworth and other filters are often based on the bilinear transform method or the matched Z-transform method, two different methods to discretize an analog filter design. In the case of all-pole filters such as the Butterworth, the matched Z-transform method is equivalent to the impulse invariance method. For higher orders, digital filters are sensitive to quantization errors, so they are often calculated as cascaded biquad sections, plus one first-order or third-order section for odd orders; a minimal sketch follows below.
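Assuming SciPy is available, the following sketch designs a digital Butterworth filter as cascaded second-order (biquad) sections, the arrangement recommended above for higher orders; the order and cutoff are arbitrary examples.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# 5th-order digital Butterworth lowpass, cutoff at 0.2 of the Nyquist
# frequency, designed via the bilinear transform and returned as
# second-order sections (SOS) for numerical robustness.
sos = butter(5, 0.2, btype="low", output="sos")

x = np.random.default_rng(0).standard_normal(1000)  # test signal
y = sosfilt(sos, x)       # run the signal through the biquad cascade
print(sos.shape)          # (3, 6): two biquads plus a first-order section
```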
Comparison with other linear filters

Properties of the Butterworth filter are: monotonic amplitude response in both passband and stopband; quick roll-off around the cutoff frequency, which improves with increasing order; considerable overshoot and ringing in step response, which worsens with increasing order; slightly non-linear phase response; and group delay that is largely frequency-dependent. A plot comparing the gain of a discrete-time Butterworth filter with other common filter types, all of fifth order, shows that the Butterworth filter rolls off more slowly around the cutoff frequency than the Chebyshev filter or the elliptic filter, but without ripple.

See also

Bessel filter, Chebyshev filter, Comb filter, Elliptic filter, Filter design
https://en.wikipedia.org/wiki/DesignTO
DesignTO (known as the Toronto Design Offsite Festival until 2019) is a nonprofit arts organization best known for the DesignTO Festival, a design week showcasing Canadian designers held in Toronto, Ontario, Canada. Each year, the DesignTO Festival features over 100 free exhibitions and events. In 2015, the festival added an annual self-produced exhibition component entitled White Out; TO DO Talks, located at various locations around the city; and Outside the Box, an exhibition in which correspondents across the Americas curate original works and then ship them to Toronto for exhibition. Outside the Box was also exhibited in New York City for Wanted Design NYC in May 2015. The alternative design exhibition Come Up To My Room is one of the events under the TO DO umbrella. In 2013, the festival added an awards component, the TO DO Awards, presented by Herman Miller, with jury and people's choice categories, and the TO DO Festival opening party, a celebration of the launch of festival week. The juror's choice for "Best in Festival" that year went to Mason Studio for "Cloud Sourcing", a design installation consisting of cloud-like objects made from tissue paper.

External links

DesignTO.org
https://en.wikipedia.org/wiki/Gyration%20tensor
In physics, the gyration tensor is a tensor that describes the second moments of position of a collection of particles:

S_mn = (1/N) Σ_{i=1}^{N} r_m^(i) r_n^(i)

where r_m^(i) is the m-th Cartesian coordinate of the position vector r^(i) of the i-th particle. The origin of the coordinate system is chosen such that

Σ_{i=1}^{N} r^(i) = 0

i.e., it lies at the center of mass of the system. Another definition, which is mathematically identical but gives an alternative calculation method, is

S_mn = (1/(2N²)) Σ_{i=1}^{N} Σ_{j=1}^{N} (r_m^(i) − r_m^(j))(r_n^(i) − r_n^(j)).

Therefore, the x-y component of the gyration tensor for particles in Cartesian coordinates is

S_xy = (1/N) Σ_{i=1}^{N} x_i y_i.

In the continuum limit,

S_mn = ∫ ρ(r) r_m r_n dr / ∫ ρ(r) dr

where ρ(r) represents the number density of particles at position r. Although they have different units, the gyration tensor is related to the moment of inertia tensor. The key difference is that the particle positions are weighted by mass in the inertia tensor, whereas the gyration tensor depends only on the particle positions; mass plays no role in defining the gyration tensor.

Diagonalization

Since the gyration tensor is a symmetric 3x3 matrix, a Cartesian coordinate system can be found in which it is diagonal:

S = diag(λx², λy², λz²)

where the axes are chosen such that the diagonal elements are ordered λx² ≤ λy² ≤ λz². These diagonal elements are called the principal moments of the gyration tensor.

Shape descriptors

The principal moments can be combined to give several parameters that describe the distribution of particles. The squared radius of gyration is the sum of the principal moments:

Rg² = λx² + λy² + λz²

The asphericity is defined by

b = λz² − (λx² + λy²)/2

which is always non-negative and zero only when the three principal moments are equal, λx = λy = λz. This zero condition is met when the distribution of particles is spherically symmetric (hence the name asphericity), but also whenever the particle distribution is symmetric with respect to the three coordinate axes, e.g., when the particles are distributed uniformly on a cube, tetrahedron or other Platonic solid. Similarly, the acylindricity is defined by

c = λy² − λx²

which is always non-negative and zero only when the two principal moments are equal, λx = λy. This zero condition is met when the distribution of particles is cylindrically symmetric (hence the name acylindricity), but also whenever the particle distribution is symmetric with respect to the two coordinate axes, e.g., when the particles are distributed uniformly on a regular prism. Finally, the relative shape anisotropy is defined by

κ² = (b² + (3/4)c²) / Rg⁴

which is bounded between zero and one. κ² = 0 occurs only if the points are distributed with spherical symmetry, and κ² = 1 occurs only if all points lie on a line. A short sketch computing these quantities follows below.
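The following Python sketch computes the gyration tensor and the shape descriptors above for an arbitrary set of particle positions; the random coordinates are purely illustrative.

```python
import numpy as np

r = np.random.default_rng(1).standard_normal((100, 3))  # particle positions
r -= r.mean(axis=0)                 # move the origin to the center of mass

S = r.T @ r / len(r)                # S_mn = (1/N) sum_i r_m(i) r_n(i)
lam2 = np.linalg.eigvalsh(S)        # principal moments, ascending: lx^2 <= ly^2 <= lz^2

Rg2 = lam2.sum()                              # squared radius of gyration
b = lam2[2] - 0.5 * (lam2[0] + lam2[1])       # asphericity
c = lam2[1] - lam2[0]                         # acylindricity
kappa2 = (b**2 + 0.75 * c**2) / Rg2**2        # relative shape anisotropy
print(Rg2, b, c, kappa2)
```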
https://en.wikipedia.org/wiki/Leonard%20Medal
The Leonard Medal honors outstanding contributions to the science of meteoritics and closely allied fields. It is awarded by the Meteoritical Society. It was established in 1962 to honor the first president of the society, Frederick C. Leonard.

See also

List of astronomy awards, Glossary of meteoritics
https://en.wikipedia.org/wiki/Lovejoy%20Columns
The Lovejoy Columns, located in Portland, Oregon, United States, supported the Lovejoy Ramp, a viaduct that from 1927 to 1999 carried the western approach to the Broadway Bridge over the freight tracks in what is now the Pearl District. The columns were painted by Greek immigrant Tom Stefopoulos between 1948 and 1952. In 1999, the viaduct was demolished but the columns were spared due to the efforts of the architectural group Rigga. For the next five years, attempts to restore the columns were unsuccessful and they remained in storage beneath the Fremont Bridge. In 2005, two of the original columns were resited at Northwest 10th Avenue between Everett and Flanders Streets. The Regional Arts & Culture Council was searching for photographs showing the murals in their original location for an ongoing restoration project. In 2006, Randy Shelton reconstructed the artworks on the columns using the photographs for reference.

Description and history

The Lovejoy Columns supported the Lovejoy Ramp, a viaduct that stretched from 14th Avenue and Lovejoy Street to the Broadway Bridge within northwest Portland's Pearl District. It was constructed in 1927–1928. Between 1948 and 1952, Athanasios Efthimiou "Tom" Stefopoulos (died 1971), a Spokane, Portland and Seattle Railway night watchman, artist and master calligrapher in the copperplate style, drew upon the columns in chalk and later painted them. His work was spontaneous and not commissioned. Stefopoulos painted Greek mythology and Americana imagery in a calligraphic style; the designs depicted "fanciful" owls, landscapes "bedecked with homespun aphorisms", and the ancient Greek philosopher Diogenes of Sinope navigating the streets of Athens with a lantern. He painted around a dozen murals, though photographic evidence does not exist for each of them. The paintings became a local landmark and quickly gained Stefopoulos notoriety and media coverage.

In the late 1990s, developer Homer Williams persuaded the city to demolish the viaduct to open up dozens of blocks in the redeveloping Pearl District. Preservation efforts began immediately. In 1998, Georgiana Nehl completed a painting of the columns called Guardians: Under the Lovejoy Ramp to "catch a small flavor of these 'guardians,' while they were still in place in their surprising location—before they were lost in the name of progress". In 1999, James Henderson took a series of photographs of the remaining pigments of the original paintings; he recorded the murals using cross-polarized lighting and used digital enhancement to restore the colors. The Regional Arts & Culture Council administers at least six of Henderson's photographs, which were printed in 2002 and each called Lovejoy Column.

Demolition

The viaduct was removed in 1999, but the architectural group Rigga persuaded the city to preserve the paintings and the columns. Rigga said that if the murals had been removed from the columns, "much of their magic would be lost". The City of Portland's Office of Transportation earmarked funds to remove ten columns; an ad hoc committee called Friends of the Columns was formed to raise money for their storage, restoration and public display, which was estimated to cost $460,000. City Commissioner Charlie Hales said, "Saving the Lovejoy columns and the artwork provides a real bridge between the rich history of this industrial area and its future as a residential neighborhood.
I am pleased that we are able to save these columns and look forward to them being placed on some of the park spaces in the River District." According to the James M. Harrison Art and Design Studio, "Extracting the columns both captured the space created by Tom and preserved a ruin that would continue to tell a story. The fragile paintings preserved the mighty concrete." During the next five years, attempts by the city, non-profit groups, and entrepreneurs to restore the columns were unsuccessful. Boora Architects' Northwest Marshall Street Pedestrian Bridge Feasibility Study (2001), funded by the Portland Development Commission, proposed installing the columns at the intersection of Northwest 9th Avenue and Naito Parkway. The columns were featured in a 2003 article by the Getty Conservation Institute called "The Conservation of Outdoor Contemporary Murals", which described best practices for preserving murals and included photographs of the columns during the demolition phase, with conservator J. Claire Dean assessing one of them. From August 10 to September 4, 2004, Portland-based artist and filmmaker Rankin Renwick exhibited a paper and video installation called Lovejoy Lost, featuring camera work by her and Gus Van Sant, for the PDX Window Project. In November 2004, Willamette Week reported that the columns were being held at a storage yard at Northwest 14th Avenue and Savier Street, beneath the Fremont Bridge. The paper said, "[h]alf-covered in blue tarps, their rusted steel girders sticking out of concrete like veins from a freshly amputated arm, they await the political momentum to rescue them from rot". Real estate developer John Carroll hoped to site the columns at the Elizabeth Lofts, but former Rigga member James Harrison said he was reluctant to believe it would happen, given their history. Harrison told Willamette Week, "[t]hese things can turn on a dime". Resiting Carroll's and Harrison's efforts were realized in 2005, when two of the ten original columns were resited at Northwest 10th Avenue between Everett and Flanders Streets. The columns featured a majority of Stefopoulos' paintings. Harrison reportedly watched with "something like fatherly joy" during the installation and said, "[w]e're installing a ruin". Carroll said displaying the columns as public art "will preserve an element of the city’s past for current and future generations" and acknowledged support from the neighborhood, Friends of the Columns and the Portland Development Commission. The Regional Arts & Culture Council was searching for photographs showing the murals in their original location for a restoration project, which would be completed the following summer. In 2006, the artworks were reconstructed on the columns by Randy Shelton, using the photographs for reference. The City of Portland's Bureau of Planning said the resited columns "[celebrate] a period in the district’s history, showcasing the art for a broader audience". An event called "Public Space Invasion" was held in the plaza containing the columns in 2011, inviting guests to "explore the legal limits of Portland's more peculiar public spaces". It advertised "crafts among the condos" and the opportunity to "picnic beside a freeway". In 2013, a bicycle tour called "Lovejoy Columns and Tom" focused on the conservation of the columns, the "almost forgotten history" of Stefopoulos and the rise of the Pearl District. 
The tour was narrated by Harrison on behalf of Friends of the Columns and guided by "Portland's Museum Lady" Carye Bye; it raised money for a gravestone for Stefopoulos' unmarked grave at Rose City Cemetery. It included a guided tour of the Hellenic-American Cultural Center and Museum, which was exhibiting Master Penworks of Tom Stefopoulos, a show of pen-and-ink art by Stefopoulos. It also included a viewing of Renwick's unfinished film Lovejoy and an optional visit to Stefopoulos' grave. In her documentary, Renwick chronicled the effort to save the columns and restore the paintings. Depictions and reception The Daily Journal of Commerce called the columns a Portland "urban legend". According to Richard Speer of Willamette Week, "generations of Portlanders grew up counting the Lovejoy columns as one of the city's most unique attractions". Speer also said the columns were once "postcard favorites and seemed as much a part of the city's landscape as the Hawthorne Bridge" and have an "endearing, perspectiveless style". The murals appeared in Van Sant's film Drugstore Cowboy (1989), in Foxfire (1996), and in a music video featuring Elliott Smith. The resited columns have been included in published walking tours of Portland. In her 2006 book Walking Portland: 30 Tours of Stumptown's Funky Neighborhoods, Historic Landmarks, Park Trails, Farmers Markets, and Brewpubs, Becky Ohlsen said, "Whatever you make of the artwork, the inspired effort that went into preserving it—not to mention the awesome spectacle of those massive columns ripped free, their rebar guts exposed to the air—is damned impressive". See also 1952 in art 2006 in art Greek mythology in western art and literature References External links Lovejoy Columns, 1927 at cultureNOW Portland Then/Now: Northwest 12th Avenue and Lovejoy Street by Byron Beck (October 2, 2014), GoLocalPDX Historic Bicycle Tour of Northwest Portland, page 13 (PDF), Northwest District Association 1928 establishments in Oregon 1940s murals 1950s murals 1952 establishments in Oregon 2006 establishments in Oregon Birds in art Columns and entablature Demolished buildings and structures in Portland, Oregon Ancient Greece in art and culture Murals in Oregon Outsider art Pearl District, Portland, Oregon Public art in the United States Works by American people Works by Greek people
Lovejoy Columns
Technology
1,829
77,786,011
https://en.wikipedia.org/wiki/Niall%20J.%20English
Niall J. English (born March 29, 1979) is an Irish inventor, industrialist, researcher, and chartered chemical engineer. He is the founding director of BioSimulytics and AquaB. Early life and education English was born in Dublin, Ireland, to Michael and Catherine English. He grew up in Dublin and Brussels. He speaks Irish and French. English obtained a First-Class Honors degree in Chemical Engineering from University College Dublin in 2000, and won the Ferdinand de Lesseps medal in French as well as the Engineering Graduates’ Association gold medal in his final year, 1999-2000. He completed his Ph.D. in 2003. Career During 2004-2005, English explored electric-field effects on gas hydrates at the National Energy Technology Laboratory, a U.S. DOE research facility in Pittsburgh. Between 2005 and 2007, English worked for Chemical Computing Group in Cambridge, Great Britain. During this time, he developed molecular simulation codes, protocols, and methods for biomolecular simulation. In January 2007, he was hired as a lecturer at the School of Chemical and Bioprocess Engineering, and was promoted to senior lecturer in 2014. In 2017, he became a professor at the same school. English’s research focuses on nanoscience, energy, gas hydrates, solar and renewable energies, and the simulation of electromagnetic-field effects on (nano)materials and biological systems. In 2019, he co-founded BioSimulytics and AquaB, and serves as a director of both. Both companies are backed by the EIC Accelerator program. In 2023, he took legal action to prevent University College Dublin from granting a commercialization license to rival companies. The case was settled out of court. Publications (2012b). Photo-induced charge separation across the graphene–TiO2 interface is faster than energy losses: A time-domain ab initio analysis. Journal of the American Chemical Society, 134/34: 14238–48. DOI: 10.1021/ja3063953 English, N. J., & Waldron, C. J. (2015). Perspectives on external electric fields in molecular simulation: Progress, prospects and challenges. Physical Chemistry Chemical Physics. The Royal Society of Chemistry. English, N. J., & MacElroy, J. M. D. (2003). Molecular dynamics simulations of microwave heating of water. AIP Publishing. Rosenbaum, E. J., English, N. J., Johnson, J. K., Shaw, D. W., & Warzinski, R. P. (2007). Thermal conductivity of methane hydrate from experiment and molecular simulation. The Journal of Physical Chemistry B, 111/46: 13194–205. DOI: 10.1021/jp074419o English, N. J., Johnson, J. K., & Taylor, C. E. (2005). Molecular-dynamics simulations of methane hydrate dissociation. AIP Publishing. English, N. J., & MacElroy, J. M. D. (2014). Perspectives on molecular simulation of clathrate hydrates: Progress, prospects and challenges. Chemical Engineering Science. Pergamon. Long, R., & English, N. J. (2010). Synergistic effects on band gap-narrowing in titania by codoping from first-principles calculations. Chemistry of Materials, 22/5: 1616–23. DOI: 10.1021/cm903688z English, N. J., & MacElroy, J. M. D. (2004). Theoretical studies of the kinetics of methane hydrate crystallization in external electromagnetic fields. The Journal of Chemical Physics. U.S. National Library of Medicine. English, N. J., & MacElroy, J. M. D. (2003a). Hydrogen bonding and molecular mobility in liquid water in external electromagnetic fields. AIP Publishing. Long, R., & English, N. J. (2009). First-principles calculation of nitrogen-tungsten codoping effects on the band structure of anatase titania. AIP Publishing. References Chemical engineers 1979 births Living people Irish inventors Members of the Royal Irish Academy
Niall J. English
Chemistry,Engineering
899
71,498,717
https://en.wikipedia.org/wiki/Coherent%20microwave%20scattering
Coherent microwave scattering is a diagnostic technique used in the characterization of classical microplasmas. In this technique, the plasma to be studied is irradiated with a microwave field whose wavelength is long relative to the characteristic spatial dimensions of the plasma. For plasmas whose skin depth is sufficiently large compared with those dimensions, the field penetrates the target, which is periodically polarized in a uniform fashion, and the scattered field can be measured and analyzed. In this case, the emitted radiation resembles that of a short dipole, determined predominantly by electron contributions rather than ions. The scattering is correspondingly referred to as constructive elastic. Various properties can be derived from the measured radiation, such as total electron numbers, electron number densities (if the plasma volume is known), local magnetic fields through magnetically-induced depolarization, and electron collision frequencies for momentum transfer through the scattered phase. Notable advantages of the technique include high sensitivity, ease of calibration using a dielectric scattering sample, good temporal resolution, low shot noise, non-intrusive probing, species-selectivity when coupled with resonance-enhanced multiphoton ionization (REMPI), single-shot acquisition, and the capability of time-gating due to continuous scanning. History Initially devised by Mikhail Shneider and Richard Miles at Princeton University, coherent microwave scattering has become a valuable technique in applications ranging from photoionization and electron-loss rate measurements to trace species detection, gaseous mixture and reaction characterization, molecular spectroscopy, electron propulsion device characterization, standoff measurement of electron collision frequencies for momentum transfer through the scattered phase, and standoff measurement of local vector magnetic fields through magnetically-induced depolarization. Scattering regimes For the simplest embodiment of linearly-polarized microwave scattering in the absence of magnetic depolarization, three regimes may arise, depending on the correlation between scatterers. The Thomson regime refers to free plasma electrons oscillating in phase with the incident microwave field. The total scattering cross-section of an independent electron then coincides with the classical Thomson cross-section and is independent of the microwave wavelength λ. Second, Shneider-Miles scattering (SM, often referred to as collisional scattering) refers to collision-dominated electron motion with displacement oscillations shifted 90 degrees with respect to the irradiating field. The total scattering cross-section correspondingly exhibits an ω² dependence - a unique regime made possible through interparticle interactions. Finally, the Rayleigh scattering regime can be observed, which is associated with restoring-force-dominated electron motion and shares an ω⁴ dependence with its volumetric-polarizability optical counterpart. In this case the "scattering particle" refers to the entire plasma object. As such, plasma expansion may cause a transition towards Mie scattering. Note that the Rayleigh regime here refers to small-particle ω⁴ scattering, rather than the even broader small-dipole approximation of the radiation. References Spectroscopy
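The three regime scalings can be read off a standard driven, damped (Lorentz) oscillator model of the electron motion. The following is a minimal sketch for orientation only, not a result quoted verbatim from the coherent-microwave-scattering literature; the restoring frequency ω₀ and the collision frequency ν are symbols introduced here for illustration:

\[
\ddot{x} + \nu\,\dot{x} + \omega_0^2\,x = -\frac{e}{m_e}E_0 e^{i\omega t}
\quad\Rightarrow\quad
x(t) = \frac{-(e/m_e)\,E_0\,e^{i\omega t}}{\omega_0^2 - \omega^2 + i\nu\omega}.
\]

Since the scattered power scales as $|\ddot{x}|^2$, the cross-section behaves as

\[
\sigma \;\propto\; \frac{\omega^4}{(\omega_0^2-\omega^2)^2 + \nu^2\omega^2}
\;\longrightarrow\;
\begin{cases}
\text{const.} & \omega \gg \omega_0,\ \nu \quad \text{(Thomson)} \\
\omega^2/\nu^2 & \nu \gg \omega,\ \omega_0 \quad \text{(Shneider-Miles)} \\
\omega^4/\omega_0^4 & \omega_0 \gg \omega,\ \nu \quad \text{(Rayleigh)}
\end{cases}
\]

recovering the wavelength-independent Thomson limit and the ω² and ω⁴ scalings described above.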
Coherent microwave scattering
Physics,Chemistry
591
1,805,271
https://en.wikipedia.org/wiki/Social%20construction%20of%20technology
Social construction of technology (SCOT) is a theory within the field of science and technology studies. Advocates of SCOT—that is, social constructivists—argue that technology does not determine human action, but rather that human action shapes technology. They also argue that the ways a technology is used cannot be understood without understanding how that technology is embedded in its social context. SCOT is a response to technological determinism and is sometimes known as technological constructivism. SCOT draws on work done in the constructivist school of the sociology of scientific knowledge, and its subtopics include actor-network theory (a branch of the sociology of science and technology) and historical analysis of sociotechnical systems, such as the work of historian Thomas P. Hughes. Its empirical methods are an adaptation of the Empirical Programme of Relativism (EPOR), which outlines a method of analysis to demonstrate the ways in which scientific findings are socially constructed (see strong program). Leading adherents of SCOT include Wiebe Bijker and Trevor Pinch. SCOT holds that those who seek to understand the reasons for acceptance or rejection of a technology should look to the social world. It is not enough, according to SCOT, to explain a technology's success by saying that it is "the best"—researchers must look at how the criteria for being "the best" are defined and which groups and stakeholders participate in defining them. In particular, they must ask who defines the technical criteria by which success is measured, why those criteria are defined this way, and who is included or excluded. Pinch and Bijker argue that technological determinism is a myth that results when one looks backwards and believes that the path taken to the present was the only possible path. SCOT is not only a theory, but also a methodology: it formalizes the steps and principles to follow when one wants to analyze the causes of technological failures or successes. Legacy of the Strong Programme in the sociology of science At the point of its conception, the SCOT approach was partly motivated by the ideas of the strong programme in the sociology of science (Bloor 1973). In their seminal article, Pinch and Bijker refer to the Principle of Symmetry as the most influential tenet of the Sociology of Science, which should be applied in historical and sociological investigations of technology as well. It is strongly connected to Bloor's theory of social causation. Symmetry The Principle of Symmetry holds that in explaining the origins of scientific beliefs, that is, in assessing the success and failure of models, theories, or experiments, the historian/sociologist should deploy the same kind of explanation in cases of success as in cases of failure. When investigating beliefs, researchers should be impartial to the (a posteriori attributed) truth or falsehood of those beliefs, and the explanations should be unbiased. The strong programme adopts a position of relativism or neutralism regarding the arguments that social actors put forward for the acceptance or rejection of any technology. All arguments (social, cultural, political, economic, as well as technical) are to be treated equally. 
The symmetry principle addresses the problem that the historian is tempted to explain the success of successful theories by referring to their "objective truth", or inherent "technical superiority", whereas s/he is more likely to put forward sociological explanations (citing political influence or economic reasons) only in the case of failures. For example, having experienced the obvious success of the chain-driven bicycle for decades, it is tempting to attribute its success to its "advanced technology" compared to the "primitiveness" of the Penny Farthing, but if we look closely and symmetrically at their history (as Pinch and Bijker do), we can see that at the beginning bicycles were valued according to quite different standards than they are nowadays. The early adopters (predominantly young, well-to-do gentlemen) valued the speed, the thrill, and the spectacularity of the Penny Farthing – in contrast to the security and stability of the chain-driven Safety Bicycle. Many other social factors (e.g., the contemporary state of urbanism and transport, women's clothing habits and feminism) have influenced and changed the relative valuations of bicycle models. A weak reading of the Principle of Symmetry points out that there are often many competing theories or technologies, all of which have the potential to provide slightly different solutions to similar problems. In these cases, sociological factors tip the balance between them: that is why we should pay equal attention to them. A strong, social constructivist reading would add that even the emergence of the questions or problems to be solved is governed by social determinations, so the Principle of Symmetry is applicable even to apparently purely technical issues. Original core concepts Following the Empirical Programme of Relativism (EPOR), the original SCOT theory was introduced in two stages. First Stage: Interpretative flexibility The first stage of the SCOT research methodology is to reconstruct the alternative interpretations of the technology, analyze the problems and conflicts these interpretations give rise to, and connect them to the design features of the technological artifacts. The relations between groups, problems, and designs can be visualized in diagrams. Interpretative flexibility means that each technological artifact has different meanings and interpretations for various groups. Bijker and Pinch show that the air tire of the bicycle meant a more convenient mode of transportation for some people, whereas it meant technical nuisances, traction problems and ugly aesthetics to others. In racing, air tires made for greater speed. These alternative interpretations generate different problems to be solved. For the bicycle, it means deciding how features such as aesthetics, convenience, and speed should be prioritized. It also involves tradeoffs, such as between traction and speed. Relevant social groups The most basic relevant groups are the users and the producers of the technological artifact, but most often many subgroups can be delineated – users with different socioeconomic status, competing producers, etc. Sometimes there are relevant groups who are neither users nor producers of the technology, for example, journalists, politicians, and civil organizations. Trevor Pinch has argued that the salespeople of technology should also be included in the study of technology. The groups can be distinguished based on their shared or diverging interpretations of the technology in question. 
Design flexibility Just as technologies have different meanings in different social groups, there are always multiple ways of constructing technologies. A particular design is only a single point in the large field of technical possibilities, reflecting the interpretations of certain relevant groups. Problems and conflicts The different interpretations often give rise to conflicts between criteria that are hard to resolve technologically (e.g., in the case of the bicycle, one such problem was how a woman could ride the bicycle in a skirt while still adhering to standards of decency), or conflicts between the relevant groups (the "Anti-cyclists" lobbied for the banning of bicycles). Different groups in different societies construct different problems, leading to different designs. Second Stage: Closure The second stage of the SCOT methodology is to show how closure is achieved. Over time, as technologies are developed, the interpretative and design flexibility collapse through closure mechanisms. Two examples of closure mechanisms: Rhetorical closure: When social groups see the problem as being solved, the need for alternative designs diminishes. This is often the result of advertising. Redefinition of the problem: A design at the center of conflict can be stabilized by using it to solve a different, new problem, which that very design ends up solving. As an example, the aesthetic and technical problems of the air tire diminished as the technology advanced to the stage where air-tire bikes started to win bike races. Tires were still considered cumbersome and ugly, but they provided a solution to the "speed problem", and this overrode previous concerns. Closure is not permanent. New social groups may form and reintroduce interpretative flexibility, causing a new round of debate or conflict about a technology. (For instance, in the 1890s automobiles were seen as the "green" alternative, a cleaner, environmentally-friendly technology, to horse-powered vehicles; by the 1960s, new social groups had introduced new interpretations about the environmental effects of the automobile, eliciting the opposite conclusion.) Subsequent extension of the SCOT theory Many other historians and sociologists of technology extended the original SCOT theory. Technological Frame Relating the content of the technological artifact to the wider sociopolitical milieu This is often considered the third stage of the original theory. For example, Paul N. Edwards shows in his book "The Closed World: Computers and the Politics of Discourse in Cold War America" the strong relations between the political discourse of the Cold War and the computer designs of this era. Criticism In 1993, Langdon Winner published a critique of SCOT entitled "Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology." In it, he argues that social constructivism is an overly narrow research program. He identifies the following specific limitations in social constructivism: It explains how technologies arise, but ignores the consequences of the technologies after the fact. This results in a sociology that says nothing about how such technologies matter in the broader context. It examines social groups and interests that contribute to the construction of technology, but ignores those who have no voice in the process, yet are affected by it. Likewise, when documenting technological contingencies and choices, it fails to account for those options that never made it to the table. 
According to Winner, this results in conservative and elitist sociology. It is superficial in that it focuses on how the immediate needs, interests, problems and solutions of chosen social groups influence technological choice, but disregards any possible deeper cultural, intellectual or economic origins of social choices concerning technology. It actively avoids taking any kind of moral stance or passing judgment on the relative merits of the alternative interpretations of a technology. This indifference makes it unhelpful in addressing important debates about the place of technology in human affairs. Other critics include Stewart Russell, with his letter in the journal Social Studies of Science titled "The Social Construction of Artefacts: A Response to Pinch and Bijker". Deborah Deliyannis, Hendrik Dey, and Paolo Squatriti criticize the concept of social construction of technology for resting on a false dichotomy with a technologically determinist straw man, one that ignores third, fourth and further alternatives, and for overlooking the process by which a technology is developed into something that can work. For example, accounting for which groups would have interests in a windmill cannot explain how a windmill is practically constructed, nor does it account for the difference between having knowledge but for some reason not using it and lacking the knowledge altogether. This distinction, between knowledge that has not yet been invented and knowledge that is merely prevented from being used by commercial, bureaucratic or other socially constructed factors, is argued to be overlooked by SCOT. The same distinction is argued to explain the archaeological evidence of rich technological cultures in the aftermath of the collapse of civilizations, such as early medieval technology after the fall of the Roman Empire, which was much richer than the "Dark Medieval" stereotype suggests: technology is remembered even when its use is suppressed, and it retains the potential to be put back into use once the artificial repression disappears with societal collapse. See also History of science and technology Industrial sociology Science and technology studies (STS) Social shaping of technology Systems theory Sociocultural evolution Sociology of scientific knowledge Technology dynamics Theories of technology Notes References Pinch, Trevor J. and Wiebe E. Bijker. "The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other." Social Studies of Science 14 (August 1984): 399–441. Russell, Stewart. "The Social Construction of Artefacts: Response to Pinch and Bijker." Social Studies of Science 16 (May 1986): 331–346. Pinch, Trevor J. and Wiebe E. Bijker. "Science, Relativism and the New Sociology of Technology: Reply to Russell." Social Studies of Science 16 (May 1986): 347–360. Bijker, Wiebe E., Thomas P. Hughes, and Trevor J. Pinch, eds. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA: MIT Press, 1987. Sismondo, Sergio. "Some Social Constructions." Social Studies of Science, 23 (1993): 515–53. External links STS Wiki History of technology Science and technology studies Sociology of scientific knowledge Technology Technological change Technology in society social construction Philosophy of technology
Social construction of technology
Technology
2,590
26,297,717
https://en.wikipedia.org/wiki/Impression%20%28online%20media%29
An impression (in the context of online advertising) is when an ad is fetched from its source, and is countable. Whether the ad is clicked is not taken into account. Each time an ad is fetched, it is counted as one impression. Because of the possibility of click fraud, robotic activity is usually filtered and excluded, and a more technical definition is given for accounting purposes by the IAB, a standards and watchdog industry group: "Impression" is a measurement of responses from a Web server to a page request from the user browser, which is filtered from robotic activity and error codes, and is recorded at a point as close as possible to opportunity to see the page by the user. Purpose Counting impressions is the method by which most Web advertising is accounted and paid for, and the cost is quoted in CPM (cost per thousand impressions) or CPI (cost per impression). (Contrast CPC, which is the cost per click and not impression-based.) Construction A movement is underway to shift from the current standard of served impressions to a new standard of viewable impressions. The Interactive Advertising Bureau (IAB), Association of National Advertisers (ANA), and the American Association of Advertising Agencies (4A’s) have joined forces in an initiative called 3MS (Making Measurement Make Sense), with the purpose of better defining the value of display media. Served impressions are the current standard. They are recorded by ad servers, and are counted whether or not the ad itself is fully loaded and in a space viewable to the end-user. Viewable impressions are defined as those that are at least 50% visible to the user for at least one second. See also Cost per impression Cost per click (CPC)/Pay per click (PPC) Cost per order Cost per mille (CPM) Effective cost per mille (eCPM) Cost per action (CPA) Effective cost per action (eCPA) Click-through rate (CTR) Internet marketing Performance-based advertising References Online advertising Internet terminology
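Since CPM billing is plain arithmetic over impression counts, a few lines of Python make it concrete. This is an illustrative sketch only; the function name and figures are hypothetical and not part of any IAB specification:

def cpm_cost(impressions: int, cpm_rate: float) -> float:
    # CPM is the price per 1,000 impressions, so scale the impression count down by 1,000.
    return impressions / 1000 * cpm_rate

# A campaign that serves 250,000 impressions at a $2.50 CPM costs the advertiser $625.
print(cpm_cost(250_000, 2.50))  # 625.0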
Impression (online media)
Technology
419
52,534
https://en.wikipedia.org/wiki/Axiom%20of%20empty%20set
In axiomatic set theory, the axiom of empty set, also called the axiom of null set and the axiom of existence, is a statement that asserts the existence of a set with no elements. It is an axiom of Kripke–Platek set theory and the variant of general set theory that Burgess (2005) calls "ST," and a demonstrable truth in Zermelo set theory and Zermelo–Fraenkel set theory, with or without the axiom of choice. Formal statement In the formal language of the Zermelo–Fraenkel axioms, the axiom reads: ∃x ∀y ¬(y ∈ x). Or, alternatively: ∃x ∀y (y ∉ x). In words: There is a set such that no element is a member of it. Interpretation We can use the axiom of extensionality to show that there is only one empty set. Since it is unique, we can name it. It is called the empty set (denoted by { } or ∅). The axiom, stated in natural language, is in essence: An empty set exists. This formula is a theorem and considered true in every version of set theory. The only controversy is over how it should be justified: by making it an axiom; by deriving it from a set-existence axiom (or logic) and the axiom of separation; by deriving it from the axiom of infinity; or some other method. In some formulations of ZF, the axiom of empty set is actually repeated in the axiom of infinity. However, there are other formulations of that axiom that do not presuppose the existence of an empty set. The ZF axioms can also be written using a constant symbol representing the empty set; then the axiom of infinity uses this symbol without requiring it to be empty, while the axiom of empty set is needed to state that it is in fact empty. Furthermore, one sometimes considers set theories in which there are no infinite sets, and then the axiom of empty set may still be required. However, any axiom of set theory or logic that implies the existence of any set will imply the existence of the empty set, if one has the axiom schema of separation. This is true since, given any set, the subset of its elements that satisfy a contradictory formula is empty. In many formulations of first-order predicate logic, the existence of at least one object is always guaranteed. If the axiomatization of set theory is formulated in such a logical system with the axiom schema of separation as axioms, and if the theory makes no distinction between sets and other kinds of objects (which holds for ZF, KP, and similar theories), then the existence of the empty set is a theorem. If separation is not postulated as an axiom schema, but derived as a theorem schema from the schema of replacement (as is sometimes done), the situation is more complicated, and depends on the exact formulation of the replacement schema. The formulation used in the axiom schema of replacement article only allows one to construct the image F[a] when a is contained in the domain of the class function F; then the derivation of separation requires the axiom of empty set. On the other hand, the constraint of totality of F is often dropped from the replacement schema, in which case it implies the separation schema without using the axiom of empty set (or any other axiom for that matter). References Further reading Burgess, John, 2005. Fixing Frege. Princeton Univ. Press. Paul Halmos, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. (Springer-Verlag edition). Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. 
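The uniqueness argument sketched in the Interpretation section (empty set plus extensionality) can be written out formally. The following is a minimal illustration in Lean 4 over an abstract type V with a membership relation mem, both of which are assumptions introduced here for the sketch rather than part of any set-theory library:

-- Uniqueness of the empty set from extensionality: any two member-less sets are equal.
example (V : Type) (mem : V → V → Prop)
    (ext : ∀ x y : V, (∀ z, mem z x ↔ mem z y) → x = y)  -- axiom of extensionality
    (x y : V) (hx : ∀ z, ¬ mem z x) (hy : ∀ z, ¬ mem z y) -- x and y are both empty
    : x = y :=
  -- Both directions of the membership equivalence hold vacuously,
  -- since nothing is a member of either x or y.
  ext x y (fun z => ⟨fun h => absurd h (hx z), fun h => absurd h (hy z)⟩)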
Axioms of set theory
Axiom of empty set
Mathematics
848
52,710,724
https://en.wikipedia.org/wiki/C.W.%20Bill%20Jones%20Pumping%20Plant
The C.W. Bill Jones Pumping Plant (formerly the Tracy Pumping Plant), located northwest of Tracy, California, was constructed between 1947 and 1951, and is a key component of the Central Valley Project. The Delta Cross Channel intercepts Sacramento River water as it travels westwards towards Suisun Bay and diverts it south through a series of man-made channels, the Mokelumne River, and other natural sloughs, marshes and distributaries. From there, the water travels to the C.W. Bill Jones Pumping Plant, which raises it into the Delta-Mendota Canal; the canal in turn runs southwards to Mendota Pool on the San Joaquin River, supplying other CVP reservoirs about midway along its route. The Tracy Fish Collection Facility sits at the entrance of the pumping plant to capture fish that would otherwise end up in the Delta-Mendota Canal. The Jones Pumping Plant provides water service to 32 water districts within the western San Joaquin Valley and San Benito and Santa Clara counties. The water distributed is delivered to farms, to urban areas, including Tracy and cities within the Santa Clara Valley Water District, and to wildlife refuges. Specifications pumps: six, driven by 22,500 HP electric motors normal lift: maximum pumping rate: 5,200 cubic feet per second (roughly 2.3 million gallons per minute) References External links CDEC daily sensor data Central Valley Project Buildings and structures in Contra Costa County, California Water supply infrastructure in California Water supply pumping stations in the United States Sacramento–San Joaquin River Delta
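For reference, the flow-rate conversion implied by the specification can be checked with a few lines of Python. This is an illustrative calculation only, using the standard factor of about 7.48 US gallons per cubic foot; the 5,200 cfs input is the plant's stated maximum pumping rate:

GALLONS_PER_CUBIC_FOOT = 7.48052  # US gallons in one cubic foot

def cfs_to_gpm(cfs: float) -> float:
    # Convert a flow in cubic feet per second to US gallons per minute.
    return cfs * GALLONS_PER_CUBIC_FOOT * 60

rate_gpm = cfs_to_gpm(5200)    # ~2.33 million gallons per minute
rate_gpd = rate_gpm * 60 * 24  # ~3.4 billion gallons per day
print(f"{rate_gpm:,.0f} gal/min, {rate_gpd:,.0f} gal/day")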
C.W. Bill Jones Pumping Plant
Engineering
321
3,286,366
https://en.wikipedia.org/wiki/Cognitive%20specialization
Cognitive specialization suggests that certain behaviors, often in the domain of social communication, are passed on to offspring and refined to be maximally beneficial by the process of natural selection. Specializations serve an adaptive purpose for an organism by allowing the organism to be better suited for its habitat. Over time, specializations often become essential to the species' continued survival. Cognitive specialization in humans has been thought to underlie the acquisition, development, and evolution of language, theory of mind, and specific social skills such as trust and reciprocity. These specializations are considered to be critical to the survival of the species, even though there are successful individuals who lack certain specializations, including those diagnosed with autism spectrum disorder or who lack language abilities. Cognitive specialization is also believed to underlie adaptive behaviors such as self-awareness, navigation, and problem solving skills in several animal species such as chimpanzees and bottlenose dolphins. Background First studied as an adaptive mechanism specific to humans, cognitive specialization has since been broadened to encompass many behaviors in the social realm. Organisms have evolved over millions of years to become well-adapted to their habitats; this requires becoming specialized in behaviors that improve an organism's likelihood of survival and reproduction. Not to be confused with functional specialization, which examines the specific parts of the brain that are engaged during specific behaviors or processes, cognitive specialization is focused on characteristics of the mind (an internal entity), which in turn affect external behaviors. Most of these specializations are thought to have developed in areas of the neocortex unique to humans. The most significant cognitive specializations among humans include theory of mind and language acquisition and production, while non-human animals may specialize in foraging behavior, self-awareness, or other adaptive abilities. Social behavior Social communication is critical to effective human interaction, and has evolved over time to support the complex exchange of ideas. Some social behaviors, such as helping and altruism, are largely unique to humans and are instrumental in ensuring the survival of the species. Evolutionary psychologists Leda Cosmides and John Tooby argue that the human mind contains "specialized mechanisms" that were designed by natural selection to facilitate social communication and exchange. Without this specialized "algorithm", Cosmides and Tooby claim, social exchange among humans would be closer to that of our closest evolutionary relatives, the great apes. In addition to humans' broad abilities supporting positive social interaction, Stone et al. (2002) put forth evidence for more specific specializations, including "cheater detection" and "precautionary reasoning," both of which appear to serve strong adaptive purposes by allowing humans to share resources with only those who are likely to share with them in the future, and to avoid sharing resources with untrustworthy individuals. Overall, the adaptiveness of social communication has been examined in children, adults, and older adults, across cultures, and in neuropsychiatric populations. Evidence for universality If social behavior is to be considered a cognitive specialization unique to human neural architecture, it should be present in every human society. 
To provide cross-cultural evidence that cognitive adaptations specifically support social communication, Sugiyama, Tooby, and Cosmides investigated social reasoning in a tribe in the Ecuadorian Amazon. The Shiwiar, a hunter-horticulturalist group previously unexposed to the presented psychological stimuli, were "as highly proficient" in determining who cheated in a given situation as their counterparts in the United States. This performance indicates that social communication, at least in the domain of cheater detection, is not determined by one's culture. According to Sugiyama, Tooby, and Cosmides, the social "algorithms" discussed above are present in both Western and non-Western populations, providing strong evidence for the universality of such a skill. Theory of mind Theory of mind, or the ability to attribute mental states to other people, is thought to be a cognitive specialization unique to humans, with a few possible exceptions discussed below. Theory of mind is thought to be critical in social cognition and communication because it allows us to distinguish between accidental and purposeful actions, to make judgments about others' internal states, and to determine how another's thoughts may differ from our own. The acquisition of theory of mind in humans mostly takes place during early childhood, and it is thought to be fully developed by the early school years. Theory of mind research in chimpanzees by social psychologists David Premack and Guy Woodruff in 1978 brought the concept to the forefront of psychological inquiry, though true theory of mind is thought to exist only in humans. The phenomenon has been analyzed in many fields, and it is thought to be among the most beneficial specializations for the survival of the human species, due to its facilitation of cooperation and interpersonal relationships. In autism Theory of mind appears to be lacking in children with autism spectrum disorders, and this deficit is thought to be a major contributor to frequent impairments in some areas of social understanding in people with autism. The fact that a developmental delay in (or absence of) theory of mind can impair social functioning, a skill imperative to the survival of the human species, is argued to be evidence for theory of mind as an adaptive cognitive specialization. Understanding that others may be thinking different thoughts than we are (colloquially, "putting oneself in another person's shoes") allows humans to communicate effectively and to live in large social groups. This adaptability is what makes theory of mind a cognitive specialization, rather than just another byproduct of human evolution: humankind has unique and beneficial communication skills, and this is partially due to our ability to recognize that other people may not think or know the same things we do. Language Though some (including Bates et al.) have argued that language arose as a byproduct of the evolution of humans' general cognitive abilities, Steven Pinker argues that it is, on its own, an adaptive mechanism. Drawing on existing literature and theory, he proposes several types of evidence for this claim, including the universality and ontogeny of language. Pinker also uses the double dissociation between general intelligence and language to argue for language as a specific adaptation. 
Those who lose language capabilities due to traumatic brain injury or stroke but maintain many other cognitive abilities exemplify Pinker's idea that language and general cognition do not always perfectly overlap in human behavior. Using language "multiplies the benefit of knowledge" in multiple domains, including technology, tool use, and the intentions of ourselves and others. Evolution Arbib puts forth a hypothesis that mirror neurons in the primate brain were a precursor to language abilities in humans. Without these neurons in Broca's area in humans (which is analogous to area F5 in monkeys), Arbib claims, we could not have evolved a specialization for language; this is used to explain why non-human animals do not have linguistic capabilities. In addition, Meguerditchian and Vauclair have argued that our evolutionary ancestors' communicative gestures (such as threat gestures and "food begs" among baboons) established a foundation on which to build human language skills. This behavior was selected for, built upon, and modified, leading to the capabilities humans have today. Early theories explained early language as an adaptive way to communicate during a hunt, but recent research has focused on ecological theories that incorporate social demands; or, as Flinn et al. put it, a "social arms race" against non-human primates. As a behavior selected for over the long term, with many successful "intermediary stages," human language differs from the other social behaviors seen among chimpanzees, which are thought to be more gradual in their evolutionary development. Further evidence for language as a cognitive specialization includes Ferreira et al.'s finding that some parts of language (for instance, syntax) can be spared in amnesia, while other abilities (like memory retention) are drastically reduced. This and similar dissociations support the theory that specific neural architecture, which has evolved over time, supports language function. Universal Grammar Linguist Noam Chomsky proposed a biological component of language, which he termed Universal Grammar. According to Chomsky, an essential part of language processing is hard-wired into the human brain. This allows language to be produced with or without specific linguistic instruction (which is closely associated with the poverty of the stimulus argument). All humans, and only humans, have this biological trait, but building blocks of universal grammar have been reported in other species. Jackendoff argues that Universal Grammar is itself a "pre-existing cognitive specialization": rather than needing explicit instruction on how to speak their native language, or having the vocabulary and syntactical rules of a specific language present in their brains from birth, children seem to be genetically predisposed to learn language. Complementary to the connection drawn between macaques' area F5 and Broca's area, the theory of Universal Grammar allows for an evolutionary perspective on language use as a cognitive specialization. There is some controversy, however, over whether Universal Grammar could have evolved by standard Darwinian evolutionary principles, or must be explained using different mechanisms. Benefits According to Nowak and Sigmund, language is essential to human life as we know it. Without the ability to verbally communicate with members of our social group, there would be no reciprocity (that is, returning of favors), and no way to cooperate with one another for a greater good. 
Some have argued that unique aspects of human language have evolved for unexpectedly beneficial reasons, beyond simply asking for help or sharing information about the world. Gossip, viewed by many as a superfluous aspect of human communication, may even serve an adaptive purpose. The spread of information about other people, even if it is malicious, may serve as an indicator of social intelligence and a way to deter illicit behaviors. Though gossip likely helps the social standing of some humans and hinders that of others, it appears to be an overall benefit of the ability to produce verbal language. Without an overall specialization for language (including such sub-specializations as gossip), linguists argue, humans would not be able to share information efficiently and effectively. Other possible specializations Watson et al. provide support for a specific specialization in language-dependent humor. Its adaptive value has both extrinsic and intrinsic components: humor facilitates social bonding if shared extrinsically, and provides pleasure if enjoyed in one's own mind. In addition, Johnson-Frey (2003) proposed a unique human specialization for tool use. According to Johnson-Frey, humans' ability to use tools is based on complex cognitive mechanisms, not just advanced sensorimotor skills. Rather than considering it a purely physical specialization based only in motor areas of the brain, Johnson-Frey argues that tool use should be classified as a cognitive phenomenon because of this foundation in cognition. On a more philosophical level, Boyer (2003) argues that "religious thought and behavior" is a specialization that originally developed as a by-product of brain function, and that its adaptive purposes led to its continued evolution by natural selection. Krueger et al. (2007) have argued that trust, which may form the foundation for helping and altruism and thus the basis of human social interaction, is also a cognitive specialization. Non-human specialization In non-human primates Humankind's closest relatives, the great apes, have evolved a number of specialized behaviors: orangutans are specialists at climbing trees, while chimpanzees and gorillas have evolved to walk on their knuckles. However, in considering non-behavioral specializations, Penn et al. (2008) argue that the "profound continuity" Charles Darwin noted between human and non-human animals in the biological domain is matched by a "profound discontinuity" between human and non-human animal minds. In contrast, in addition to cognitive-behavioral adaptations, it is possible that chimpanzees have acquired more socially advanced skills through natural selection, including self-recognition (indicated by chimpanzees' established ability to pass the "mirror test"). In this task, a successful trial is simply one in which an animal recognizes itself in a mirror; passing it is thought to reflect a basic building block of theory of mind development. Rhesus monkeys have also been shown to realize when they remember certain events and items, which is considered an instrumental building block in the formation of social relationships, as an individual must remember who owes them favors, whom they can trust, and whom they should avoid in order to prosper in the community. 
In other animals More recent evidence has shown that cognitive specialization is not just present in primates: domesticated dogs may show signs of understanding human behavior and communication, indicating a social-cognitive specialization that is argued to make them more likely to receive food, shelter and love from their human owners. Being receptive to human behavioral indicators and responding accordingly has allowed dogs to survive and thrive as a species. Bottlenose dolphins and elephants have also been shown to pass the "mirror test" explained above. This indication of some elementary self-awareness provides more evidence for foundational theory of mind skills in organisms throughout the animal kingdom. Ants, bees, and other insects have also evolved behaviors consistent with various specializations, including advanced navigational skills and several basic social communication abilities. Adaptive cognitive evolution has been examined in pigeons' ability to group objects (which is argued to support their processing of and adaptation to novel environments), problem solving and "creative" tool modification among rooks, and tool use in crows. See also Functional specialization (brain) Behavioral neuroscience Evolutionary psychology References Further reading Baron-Cohen, S. (1997). Mindblindness: An essay on autism and theory of mind. MIT press. Futuyma, D. J., & Moreno, G. (1988). The evolution of ecological specialization. Annual Review of Ecology and Systematics, 207–233. Jackendoff, R. (2008). Patterns in the mind: Language and human nature. Basic Books. External links Uniquely-Human Features of the Brain: Specialization and Language by The University of California Television Human Altruism-Brain and Behavior: Trade and Cooperation by The University of California Television Cognitive psychology Cognitive science
Cognitive specialization
Biology
2,893
2,903,058
https://en.wikipedia.org/wiki/5%20Aquilae
5 Aquilae (abbreviated 5 Aql) is a quadruple star system in the constellation of Aquila. 5 Aquilae is the Flamsteed designation. The combined apparent visual magnitude of the system is 5.9, which means it is faintly visible to the naked eye. With an annual parallax shift of 8.94 mas, the distance to this system is estimated at approximately 365 light-years (112 parsecs), albeit with a 13% margin of error. Two of the components of this system, 5 Aquilae Aa and Ab, are Am stars. That is, they are chemically peculiar stars that show unusual abundances of elements other than hydrogen and helium. The two orbit each other with a period of 33.65 years at an eccentricity of 0.33. One of these stars is itself a close spectroscopic binary, with a 4.765-day period and a nearly circular orbit that has an eccentricity of just 0.02. The fourth component, 5 Aquilae B, is a magnitude 7.65 F-type main-sequence star with a stellar classification of F3 Vm. It is at an angular separation of 12.71 arcseconds from the other members of the system. References External links Image 5 Aquilae Aquila (constellation) Durchmusterung objects Am stars
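The quoted distance follows from the standard parallax relation; a short worked check, with values rounded:

\[
d \;=\; \frac{1\,\text{pc}}{\pi''} \;=\; \frac{1}{0.00894}\ \text{pc} \;\approx\; 112\ \text{pc} \;\approx\; 365\ \text{light-years},
\]

with the quoted margin of error tracking, to first order, the fractional uncertainty of the parallax measurement itself.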
5 Aquilae
Astronomy
286
42,212,963
https://en.wikipedia.org/wiki/Pavement%20milling
Pavement milling (cold planing, asphalt milling, or profiling) is the process of removing at least part of the surface of a paved area such as a road, bridge, or parking lot. Milling removes anywhere from just enough thickness to level and smooth the surface, up to full-depth removal. There are a number of different reasons for milling a paved area instead of simply repaving over the existing surface. Purpose Recycling of the road surface is one of the main reasons for milling a road surface. Milling is widely used for pavement recycling today, where the pavement is removed and ground up to be used as the aggregate in new pavement. For asphalt surfaces the product of milling is reclaimed asphalt pavement (RAP), which can be recycled into hot mix asphalt (pavement) by combining it with new aggregate and asphalt cement (binder) or a recycling agent. This reduces the impact that resurfacing has on the environment. Milling can also remove distresses from the surface, providing a better driving experience and/or longer roadway life. Some of the issues that milling can remove include: Raveling: aggregate becoming separated from the binder and loose on the road Bleeding: the binder (asphalt) coming up to the surface of the road Rutting: formation of low spots in pavement along the direction of travel, usually in the wheel path Shoving: a washboard-like effect transverse to the direction of travel Ride quality: uneven road surface such as swells, bumps, sags, or depressions Damage: resulting from accidents and/or fires It can also be used to control or change the height of part or all of the road. This can be done to control heights and clearances of other road structures such as: curb reveals, manhole and catch basin heights, shoulder and guardrail heights, and overhead clearances. It can also be done to change the slope or camber of the road or for grade adjustments, which can help with drainage. Specialty Specialty milling can be used to form rumble strips, which are often used along highways. Using milling instead of other methods, such as rolling them in, means that the rumble strips can be added at any time after the road surface has hardened. Another example is to modify the roto-milling head to create slots in concrete slabs for the dowel bar retrofit process. The typical process is to saw-cut and jackhammer out the slots for the dowels. Following dowel placement, the slots are then typically backfilled with a non-shrink concrete mixture, and the pavement is diamond-ground to restore smoothness. This special milling process shortens the time needed to create slots compared with the traditional method, which is labor-intensive. Types In the USA, the Asphalt Recycling and Reclaiming Association has defined five classes of cold planing that the Federal Highway Administration has recognized. The classes are: Class I – milling to remove surface irregularities Class II – milling to uniform depth as shown on plans and specifications Class III – same as class II with the addition of cross slope Class IV – milling to the base or subgrade (full depth) Class V – milling to different depths at different locations Process and machinery Milling is performed by construction equipment called milling machines or cold planers. These machines use a large rotating drum to remove and grind the road surface. The drum consists of scrolls of tool holders. The scrolls are positioned around the drum such that the ground pavement is moved toward the center and can be loaded onto the machine's conveyor belt. 
The tool holders can wear out over time and can be broken if highway structures like manholes are encountered while milling. The tool holders on the drum hold carbide cutters. The cutters can be removed and replaced as they wear out. The amount of wear (and therefore the interval between replacement) varies with the type and consistency of the material being milled; intervals can range from a few hours to several days. The drum is enclosed in a housing/scraper that is used to contain the milled material so that it can be collected and deposited on the conveyor. The spacing of the tool spirals around the drum affects the end surface of the road, with micro-milling having the tightest spacing. The majority of milling machines use an up-cut setup, which means that the drum rotates in the direction opposite that of the drive wheels or tracks (i.e., the work surface feeds into the cut). The speed of the rotating drum should be slower than the forward speed of the machine for a suitable finished surface. Modern machines generally use a front-loading conveyor system, which has the advantage of picking up any material that falls off the conveyor as milling progresses. Water is generally applied to the drum as it spins because of the heat generated during the milling process. Additionally, water helps control the dust created. In order to control the depth, slopes, and profile of the final milled surface, many milling machines now have automatic depth control using lasers, string-lines, or other methods to maintain milled surfaces to ± of the target height. Micro milling Micro milling is also known as carbide grinding. It is a lower-cost alternative to diamond grinding of pavement. Micro milling uses a specialty drum with three to four times as many cutting teeth as a standard milling drum. Micro milling can be used either as the final surface or as a treatment before applying a thin overlay. Micro milling can remove many of the same distresses that standard milling can, although usually to a shallower depth. A micro-milled surface has a uniform finish with reduced road noise compared to standard milling. References External links Asphalt Recycling and Reclaiming Association Engineering vehicles Road construction
Pavement milling
Engineering
1,142
58,948,411
https://en.wikipedia.org/wiki/Venus%20in%20culture
Venus, as one of the brightest objects in the sky, has been known since prehistoric times and has been a major fixture in human culture for as long as records have existed. As such, it has a prominent position in human culture, religion, and myth. It has been made sacred to gods of many cultures, and has been a prime inspiration for writers and poets as the morning star and evening star. Background and name What is now known as the planet Venus has long been an object of fascination for cultures worldwide. It is the second brightest object in the night sky, and follows a synodic cycle by which it seems to disappear for several days due to its proximity to the Sun, then re-appear on the opposite side of the Sun and on the other horizon. Depending on the point in its cycle, Venus may appear before sunrise in the morning, or after sunset in the evening, but it never appears to reach the apex of the sky. Therefore, many cultures have recognized it with two names, even if their astronomers realized that it was really one object. In Old English, the planet was known as morgensteorra (morning star) and æfensteorra (evening star). It was not until the 13th century C.E. that the name "Venus" was adopted for the planet. It was called Lucifer in classical Latin, and the morning star was considered sacred to the goddess Venus. In Chinese the planet is called Jīn-xīng (金星), the golden planet of the metal element. It is known as "Kejora" in Indonesian and Malaysian Malay. Modern Chinese, Japanese and Korean cultures refer to the planet literally as the "gold star", based on the Five elements. Ancient Near East Mesopotamia Because the movements of Venus appear to be discontinuous (it disappears due to its proximity to the Sun, for many days at a time, and then reappears on the other horizon), some cultures did not recognize Venus as a single entity; instead, they assumed it to be two separate stars on each horizon: the morning and evening star. Nonetheless, a cylinder seal from the Jemdet Nasr period indicates that the ancient Sumerians already knew that the morning and evening stars were the same celestial object. The Sumerians associated the planet with the goddess Inanna, who was known as Ishtar by the later Akkadians and Babylonians. She had a dual role as a goddess of both love and war, thereby representing a deity that presided over birth and death. The discontinuous movements of Venus relate to both Inanna's mythology and her dual nature. Inanna's actions in several of her myths, including Inanna and Shukaletuda and Inanna's Descent into the Underworld, appear to parallel the motion of the planet Venus as it progresses through its synodic cycle. For example, in Inanna's Descent to the Underworld, Inanna is able to descend into the netherworld, where she is killed, and then resurrected three days later to return to the heavens. An interpretation of this myth by Clyde Hostetter holds that it is an allegory for the movements of the planet Venus, beginning with the spring equinox and concluding with a meteor shower near the end of one synodic period of Venus. The three-day disappearance of Inanna refers to the three-day planetary disappearance of Venus between its appearance as a morning and evening star. An introductory hymn to this myth describes Inanna leaving the heavens and heading for Kur, which could be presumed to be the mountains, replicating the rising and setting of Inanna to the West.
In the myth Inanna and Shukaletuda, Shukaletuda is described as scanning the heavens in search of Inanna, possibly searching the eastern and western horizons. In the same myth, while searching for her attacker, Inanna herself makes several movements that correspond with the movements of Venus in the sky. Inanna-Ishtar's most common symbol was the eight-pointed star. The eight-pointed star seems to have originally borne a general association with the heavens, but, by the Old Babylonian Period (c. 1830 – 1531 BC), it had come to be specifically associated with the planet Venus, with which Ishtar was identified. In the Old Babylonian period, the planet Venus was known as Ninsi'anna, and later as Dilbat. "Ninsi'anna" translates to "divine lady, illumination of heaven", which refers to Venus as the brightest visible "star". Earlier spellings of the name were written with the cuneiform sign si4 (= SU, meaning "to be red"), and the original meaning may have been "divine lady of the redness of heaven", in reference to the color of the morning and evening sky. Venus is described in Babylonian cuneiform texts such as the Venus tablet of Ammisaduqa, which relates observations that possibly date from 1600 BC. The Venus tablet of Ammisaduqa shows that the Babylonians understood that the morning and evening stars were a single object, referred to in the tablet as the "bright queen of the sky" or "bright Queen of Heaven", and could support this view with detailed observations. Canaanite mythology In ancient Canaanite religion, the morning star is personified as the god Attar, a masculine variant of the name of the Babylonian goddess Ishtar. In myth, Attar attempted to occupy the throne of Ba'al and, finding he was unable to do so, descended and ruled the underworld. The original myth may have been about a lesser god, Helel, trying to dethrone the Canaanite high god El, who was believed to live on a mountain to the north. Hermann Gunkel's reconstruction of the myth told of a mighty warrior called Hêlal, whose ambition was to ascend higher than all the other stellar divinities, but who had to descend to the depths. It thus portrayed as a battle the process by which the bright morning star fails to reach the highest point in the sky before being faded out by the rising sun. Similarities have been noted with the story of Inanna's descent into the underworld, Ishtar and Inanna being associated with the planet Venus. A connection has been seen also with the Babylonian myth of Etana. The Jewish Encyclopedia comments: "The brilliancy of the morning star, which eclipses all other stars, but is not seen during the night, may easily have given rise to a myth such as was told of Ethana and Zu: he was led by his pride to strive for the highest seat among the star-gods on the northern mountain of the gods ... but was hurled down by the supreme ruler of the Babylonian Olympus." In the Hebrew language Book of Isaiah, chapter 14, the King of Babylon is condemned using imagery derived from Canaanite myth, and is called Helel ben Shahar (Hebrew for "shining one, son of the morning"). The title "Helel ben Shahar" may refer to the planet Venus as the morning star. Helel ben Shahar was cast out of heaven for rebelling against Elion. Egypt The Ancient Egyptians possibly knew that the morning star (Tioumoutiri) and evening star (Ouaiti) were one and the same by the second millennium BC or at the latest by the Late Period under Mesopotamian influence.
At first Venus was described as either a phoenix or heron (the Bennu), called "the crosser" or "star with crosses" and associated with Osiris; later, during the Late Period and probably under Mesopotamian influence, it was depicted as a two-headed morning god (with human and falcon heads), as in the Dendera zodiac, and associated with Horus, son of Isis (who during the even later Hellenistic period was, together with Hathor, identified with Aphrodite). Ancient Greece and Rome The Ancient Greeks called the morning star Phosphoros (an epithet of Hecate), the "Bringer of Light". Another Greek name for the morning star was Heosphoros (Greek Heōsphoros), meaning "Dawn-Bringer". They called the evening star, which was long considered a separate celestial object, Hesperos (the "star of the evening"). Both were children of the dawn goddess Eos and therefore grandchildren of Aphrodite. By Hellenistic times, the ancient Greeks had identified these as a single planet, though the traditional use of two names for its appearance in the morning and the evening continued even into the Roman period. The Greek myth of Phaethon, whose name means "Shining One", has also been seen as similar to those of other gods who cyclically descend from the heavens, like Inanna and Attar. In classical mythology, Lucifer ("light-bringer" in Latin) was the name of the planet Venus as the morning star (as the evening star it was called Vesper), and it was often personified as a male figure bearing a torch. Lucifer was said to be "the fabled son of Aurora and Cephalus, and father of Ceyx". He was often presented in poetry as heralding the dawn. The Romans considered the planet Lucifer particularly sacred to the goddess Venus, whose name eventually became the scientific name for the planet. The second-century Roman mythographer Pseudo-Hyginus said of the planet: "The fourth star is that of Venus, Luciferus by name. Some say it is Juno's. In many tales it is recorded that it is called Hesperus, too. It seems to be the largest of all stars. Some have said it represents the son of Aurora and Cephalus, who surpassed many in beauty, so that he even vied with Venus, and, as Eratosthenes says, for this reason it is called the star of Venus. It is visible both at dawn and sunset, and so properly has been called both Luciferus and Hesperus." Ovid, in his first-century epic Metamorphoses, describes Lucifer as ordering the heavens: "Aurora, watchful in the reddening dawn, threw wide her crimson doors and rose-filled halls; the Stellae took flight, in marshaled order set by Lucifer who left his station last." In the classical Roman period, Lucifer was not typically regarded as a deity and had few, if any, myths, though the planet was associated with various deities and often poetically personified. Cicero pointed out that "You say that Sol the Sun and Luna the Moon are deities, and the Greeks identify the former with Apollo and the latter with Diana. But if Luna (the Moon) is a goddess, then Lucifer (the Morning-Star) also and the rest of the Wandering Stars (Stellae Errantes) will have to be counted gods; and if so, then the Fixed Stars (Stellae Inerrantes) as well." Christianity The Hebrew word transliterated as Hêlêl or Heylel (pron. as Hay-LALE) occurs only once in the Hebrew Bible. The Septuagint renders הֵילֵל in Greek as Ἑωσφόρος (heōsphoros), "bringer of dawn", the Ancient Greek name for the morning star. Aquila of Sinope derives the word הֵילֵל, the Hebrew name for the morning star, from a verb meaning "to lament".
This derivation was adopted as a proper name for an angel who laments the loss of his former beauty. The Christian church fathers – for example Hieronymus, in his Vulgate – translated this as Lucifer. The equation of Lucifer with the fallen angel probably occurred in 1st century Palestinian Judaism. According to the King James Bible-based Strong's Concordance, the original Hebrew word means "shining one, light-bearer", and the translation given in the King James text is the Latin name for the planet Venus, "Lucifer". However, the translation of הֵילֵל with the name "Lucifer" has been abandoned in modern English translations of Isaiah 14:12. In a modern translation from the original Hebrew, the passage in which the name occurs begins with the statement: "On the day the Lord gives you relief from your suffering and turmoil and from the harsh labour forced on you, you will take up this taunt against the king of Babylon: How the oppressor has come to an end! How his fury has ended!" After describing the death of the king, the taunt continues: "How you have fallen from heaven, morning star, son of the dawn! You have been cast down to the earth, you who once laid low the nations! You said in your heart, 'I will ascend to the heavens; I will raise my throne above the stars of God; I will sit enthroned on the mount of assembly, on the utmost heights of Mount Zaphon. I will ascend above the tops of the clouds; I will make myself like the Most High.' But you are brought down to the realm of the dead, to the depths of the pit. Those who see you stare at you, they ponder your fate: 'Is this the man who shook the earth and made kingdoms tremble, the man who made the world a wilderness, who overthrew its cities and would not let his captives go home?'" This passage was the origin of the later belief that the Devil was a fallen angel, who could also be referred to as "Lucifer". However, it originally referred to the rise and disappearance of the morning star as an allegory for the fall of a once-proud king. This allegorical understanding of Isaiah seems to be the most accepted interpretation in the New Testament, as well as among early Christians such as Origen, Eusebius, Tertullian, and Gregory the Great. The fallen angel motif may therefore be considered a Christian "remythologization" of Isaiah 14, returning its allegorical imagery of the hubris of a historical ruler to the original roots of the Canaanite myth of a lesser god trying and failing to claim the throne of the heavens, who is then cast down to the underworld. In Christian tradition the morning star is a symbol for the approaching Son of God and his light-filled appearance in the night of the world (Epiphany). Astronomical theories for dating the Star of Bethlehem relate, among other things, to various conjunctions of Venus and Jupiter. Sometimes Venus is also identified as the Stella maris, a title of Mary, mother of Jesus of Nazareth. Vietnam In Vietnamese folklore, the planet was regarded as two separate bodies: the morning star (sao Mai) and the evening star (sao Hôm). Due to the position of these supposedly distinct bodies in the sky, they went down in folk poetry as a metaphor for separation, especially that between lovers. When it was in the opposite direction of the Moon, the planet was also known as sao Vượt (the climbing/passing star, also spelled as sao Vược due to different Quốc ngữ interpretations of one Nôm character). 
Such an opposition, much like that between the morning star and the evening star, has also been likened in folk poetry to the separation of ill-fated lovers, as evidenced by this lục bát couplet: "Mình đi có nhớ ta chăng? Ta như sao Vượt chờ trăng giữa trời." (When you go, do you miss me? I am the climbing star waiting for the moon in the sky.) Hinduism In India the planet is called Shukra Graha ("the planet Shukra"), named after the powerful sage Shukra. The word Shukra, as used in Indian Vedic astrology, means "clear, pure" or "brightness, clearness" in Sanskrit. One of the nine Navagraha, it is held to affect wealth, pleasure and reproduction; it was the son of Bhrgu, preceptor of the Daityas, and guru of the Asuras. The word Shukra is also associated with semen, or generation. Persia In Iranian mythology, especially in Persian mythology, the planet usually corresponds to the goddess Anahita. In some parts of Pahlavi literature the deities Aredvi Sura and Anahita are regarded as separate entities, the first one as a personification of the mythical river and the latter as a goddess of fertility, which is associated with the planet Venus. As the goddess Aredvi Sura Anahita—and simply called Anahita as well—both deities are unified in other descriptions, e.g. in the Greater Bundahishn, and are represented by the planet. In the Avestan text Mehr Yasht (Yasht 10) there is a possible early link to Mithra. The Persian name of the planet today is "Nahid", which derives from Anahita and later in history from the Pahlavi language Anahid. Turkic mythology The deity Erkliğ Han (the Powerful) was identified with Venus as a great warrior. He was responsible for killing the stars when the sun rises. For this reason, he was a symbol for warriors in general. In the 11th century Turkic Kutadgu Bilig, under cross-cultural influences of Greek and Sumerian mythology, Venus became associated with love, beauty, and fertility. Islam In Islamic traditions the morning star is called Zohra or Zohrah and is commonly related to a "beautiful woman". According to myth, of which an echo is found in a play by the 17th-century English poet William Percy, two angels, Harut and Marut, descended to earth and were seduced by Zohra's beauty to commit shirk, murder, and adultery, and to drink wine. In their drunken state, Zohra elicited from these angels the secret words to ascend to heaven. When she spoke the secret words, she elevated herself to the first heaven, but was imprisoned there (i.e. transformed into the planet Venus). According to tafsir, some say that the woman literally became the morning star, as a reflection of her ability to beguile the angels. Others say that during her ascent she was imprisoned on the planet and is tortured there. Maya Venus was considered the most important celestial body observed by the Maya, who called it Chac ek, or Noh Ek', "the Great Star". The Maya monitored the movements of Venus closely and observed it in daytime. The positions of Venus and other planets were thought to influence life on Earth, so the Maya and other ancient Mesoamerican cultures timed wars and other important events based on their observations. In the Dresden Codex, the Maya included an almanac showing Venus's full cycle, in five sets of 584 days each (approximately eight years), after which the patterns repeated (since Venus has a synodic period of 583.92 days).
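The almanac arithmetic just described can be checked directly. The following short calculation (Python; an illustrative sketch, not part of the source, with variable names chosen here) shows how five canonical 584-day Venus rounds line up with eight 365-day years, and how slowly the 584-day convention drifts against the true synodic period of 583.92 days:

# Illustrative check of the Dresden Codex Venus almanac arithmetic.
CANONICAL_ROUND = 584    # days per Venus cycle used in the almanac
HAAB = 365               # days in the Maya solar year (haab')
TRUE_SYNODIC = 583.92    # modern mean synodic period of Venus, in days

almanac_span = 5 * CANONICAL_ROUND   # five sets of 584 days
solar_span = 8 * HAAB                # eight 365-day years
print(almanac_span, solar_span)      # both are 2920 days: the cycles align

# Drift of the 584-day convention against the true synodic period:
drift_per_cycle = CANONICAL_ROUND - TRUE_SYNODIC   # 0.08 days per cycle
print(5 * drift_per_cycle)           # about 0.4 days per 8-year almanac run

This near-perfect 5:8 resonance (2,920 days) is what allowed the almanac's patterns to repeat.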
The Maya civilization developed a religious calendar, based in part upon the motions of the planet, and held the motions of Venus to determine the propitious time for events such as war. They also named it Xux Ek', the Wasp Star. The Maya were aware of the planet's synodic period, and could compute it to within a hundredth part of a day. Other cultures In traditional Lakota star knowledge, the planet Venus is named Aŋpo Wiŋ or the Light of Dawn (sometimes also translated as Morningstar). It is believed to be a male Nāgī controlling beginnings, fate and all things cyclical. He is also sometimes credited as the father of Star Boy. The Maasai people named the planet Kileken, and have an oral tradition about it called The Orphan Boy. Venus is important in many Australian aboriginal cultures, such as that of the Yolngu people in Northern Australia. The Yolngu gather after sunset to await the rising of Venus, which they call Barnumbirr. As she approaches, in the early hours before dawn, she draws behind her a rope of light attached to the Earth, and along this rope, with the aid of a richly decorated "Morning Star Pole", the people are able to communicate with their dead loved ones, showing that they still love and remember them. Barnumbirr is also an important creator-spirit in the Dreaming, and "sang" much of the country into life. Venus plays a prominent role in Pawnee mythology. One specific group of the Pawnee, a North American native tribe, practiced until as late as 1838 a morning star ritual in which a girl was sacrificed to the morning star. Among the Mapuche of south-central Chile and southwestern Argentina, the planet, or Wünelve ("the First"), is believed to have existed since the time when spirits were attempting to ascend back from the World Below, or Minchemapu, after falling from the Middle World, or Rangimapu; the planet is believed to be an amalgamation of some of those spirits who were stuck on their way. The planet is an important symbol for this people; it was eventually incorporated into the flag of Chile, simplified as a five-pointed star symbolizing a beacon of progress and honor. In western astrology, derived from its historical association with goddesses of femininity and love, Venus is held to influence desire and sexual fertility. In the metaphysical system of Theosophy, it is believed that on the etheric plane of Venus there is a civilization that existed hundreds of millions of years before Earth's, and it is also believed that the governing deity of Earth, Sanat Kumara, is from Venus. In fiction The discovery in the modern era that Venus was a distant world covered in impenetrable cloud cover gave science fiction writers free rein to speculate on conditions at its surface; all the more so when early observations showed that not only was it similar in size to Earth, but it also possessed a substantial atmosphere. Closer to the Sun than Earth, the planet was frequently depicted as warmer, but still habitable by humans. The genre reached its peak between the 1930s and 1950s, at a time when science had revealed some aspects of Venus, but not yet the harsh reality of its surface conditions. Findings from the first missions to Venus showed the reality to be quite different, and brought this particular genre to an end. As scientific knowledge of Venus advanced, science fiction authors tried to keep pace, particularly by conjecturing human attempts to terraform Venus.
In humour Scientists who in 2020 had reported possible signs of life in the clouds of Venus noted that the suspected biosignature, phosphine, is found on Earth and is produced by, among other sources, penguins. Subsequently, some news reports and public responses misconstrued the scientists' interest in the processes that create phosphine as suggesting that penguins lived in the clouds of Venus. The Planetary Society picked up on the misunderstanding for entertainment purposes. References External links Jemdet Nasr period Topics in popular culture Solar System
Venus in culture
Astronomy
4,699
15,290
https://en.wikipedia.org/wiki/Intercalation%20%28timekeeping%29
Intercalation or embolism in timekeeping is the insertion of a leap day, week, or month into some calendar years to make the calendar follow the seasons or moon phases. Lunisolar calendars may require intercalations of days or months. Solar calendars The solar or tropical year does not have a whole number of days (it is about 365.24 days), but a calendar year must have a whole number of days. The most common way to reconcile the two is to vary the number of days in the calendar year. In solar calendars, this is done by adding an extra day ("leap day" or "intercalary day") to a common year of 365 days, about once every four years, creating a leap year that has 366 days (Julian, Gregorian and Indian national calendars). The Decree of Canopus, issued by the pharaoh Ptolemy III Euergetes of Ancient Egypt in 239 BC, mandated a solar leap day system; an Egyptian leap year was not adopted until 25 BC, when the Roman Emperor Augustus instituted a reformed Alexandrian calendar. In the Julian calendar, as well as in the Gregorian calendar, which improved upon it, intercalation is done by adding an extra day to February in each leap year. In the Julian calendar this was done every four years. In the Gregorian, years divisible by 100 but not 400 were exempted in order to improve accuracy. Thus, 2000 was a leap year; 1700, 1800, and 1900 were not. Epagomenal days are days within a solar calendar that are outside any regular month. Usually five epagomenal days are included within every year (Egyptian, Coptic, Ethiopian, Mayan Haab' and French Republican Calendars), but a sixth epagomenal day is intercalated every four years in some (Coptic, Ethiopian and French Republican calendars). The Solar Hijri calendar, used in Iran, is based on solar calculations and is similar to the Gregorian calendar in its structure, and hence the intercalation, with the exception that its epoch is the Hijrah. The Bahá'í calendar includes enough epagomenal days (usually 4 or 5) before the last month (ʿalāʾ) to ensure that the following year starts on the March equinox. These are known as the Ayyám-i-Há. Lunisolar calendars The solar year does not have a whole number of lunar months (it is about 365/29.5 = 12.37 lunations), so a lunisolar calendar must have a variable number of months per year. Regular years have 12 months, but embolismic years insert a 13th "intercalary", "leap", or "embolismic" month every second or third year. Whether to insert an intercalary month in a given year may be determined using regular cycles such as the 19-year Metonic cycle (Hebrew calendar and in the determination of Easter) or using calculations of lunar phases (Hindu lunisolar and Chinese calendars). The Buddhist calendar adds both an intercalary day and month on a usually regular cycle. Lunar calendars In principle, lunar calendars do not employ intercalation because they do not seek to synchronise with the seasons, and the motion of the moon is astronomically predictable. But religious lunar calendars rely on actual observation. The Lunar Hijri calendar, the purely lunar calendar observed by most of Islam, depends on actual observation of the first crescent of the moon and thus has no intercalation. Each month still has either 29 or 30 days, but due to the variable method of observations employed, there is usually no discernible order in the sequencing of 29- or 30-day month lengths. Traditionally, the first day of each month is the day (beginning at sunset) of the first sighting of the hilal (crescent moon) shortly after sunset.
If the hilal is not observed immediately after the 29th day of a month (either because clouds block its view or because the western sky is still too bright when the moon sets), then the day that begins at that sunset is the 30th. The tabular Islamic calendar, a rule-based variant of the Islamic calendar, has 12 lunar months that usually alternate between 30 and 29 days every year, but an intercalary day is added to the last month of the year 11 times in a 30-year cycle. Some historians have also linked the pre-Islamic practice of Nasi' to intercalation. Leap seconds The International Earth Rotation and Reference Systems Service can insert or remove leap seconds from the last day of any month (June and December are preferred). These are sometimes described as intercalary. Other uses ISO 8601 includes a specification for a 52/53-week year. Any year that has 53 Thursdays has 53 weeks; this extra week may be regarded as intercalary. The xiuhpōhualli (year count) system of the Aztec calendar had five intercalary days, the nēmontēmi, after the eighteenth and final month; during these days the people fasted and reflected on the past year. See also Lunisolar calendar Egyptian, Coptic, and Ethiopian calendars Iranian calendar Islamic calendar Mandaean calendar Celtic calendar Thai lunar calendar Bengali calendar Igbo calendar World Calendar Intercalated Games References Calendars Units of time
Intercalation (timekeeping)
Physics,Mathematics
1,094
34,865,455
https://en.wikipedia.org/wiki/Coxeter%20complex
In mathematics, the Coxeter complex, named after H. S. M. Coxeter, is a geometrical structure (a simplicial complex) associated to a Coxeter group. Coxeter complexes are the basic objects that allow the construction of buildings; they form the apartments of a building. Construction The canonical linear representation The first ingredient in the construction of the Coxeter complex associated to a Coxeter system $(W,S)$ is a certain representation of $W$, called the canonical representation of $W$. Let $(W,S)$ be a Coxeter system with Coxeter matrix $M = (m(s,t))_{s,t \in S}$. The canonical representation is given by a vector space $V$ with basis of formal symbols $(e_s)_{s \in S}$, which is equipped with the symmetric bilinear form $B(e_s, e_t) = -\cos\left(\frac{\pi}{m(s,t)}\right)$. In particular, $B(e_s, e_s) = 1$. The action of $W$ on $V$ is then given by $s(v) = v - 2B(e_s, v)\,e_s$. This representation has several foundational properties in the theory of Coxeter groups; for instance, $B$ is positive definite if and only if $W$ is finite. It is a faithful representation of $W$. Chambers and the Tits cone This representation describes $W$ as a reflection group, with the caveat that $B$ might not be positive definite. It becomes important then to distinguish the representation $V$ from its dual $V^*$. The vectors $e_s$ lie in $V$ and have corresponding dual vectors $e_s^\vee$ in $V^*$ given by $\langle e_s^\vee, v \rangle = 2B(e_s, v)$, where the angled brackets indicate the natural pairing between $V^*$ and $V$. Now $W$ acts on $V^*$ and the action is given by $s(f) = f - \langle f, e_s \rangle\, e_s^\vee$ for $s \in S$ and any $f \in V^*$. Then $s$ is a reflection in the hyperplane $H_s = \{ f \in V^* : \langle f, e_s \rangle = 0 \}$. One has the fundamental chamber $\mathcal{C} = \{ f \in V^* : \langle f, e_s \rangle > 0 \ \text{for all}\ s \in S \}$; its faces are the so-called walls, $\overline{\mathcal{C}} \cap H_s$. The other chambers can be obtained from $\mathcal{C}$ by translation: they are the $w\mathcal{C}$ for $w \in W$. The Tits cone is $X = \bigcup_{w \in W} w\overline{\mathcal{C}}$. This need not be the whole of $V^*$. Of major importance is the fact that $X$ is convex. The closure $\overline{\mathcal{C}}$ of $\mathcal{C}$ is a fundamental domain for the action of $W$ on $X$. The Coxeter complex The Coxeter complex $\Sigma(W,S)$ of $W$ with respect to $V$ is $\Sigma(W,S) = (X \setminus \{0\})/\mathbb{R}_{>0}$, where $\mathbb{R}_{>0}$ is the multiplicative group of positive reals. Examples Finite dihedral groups The dihedral groups $D_n$ (of order $2n$) are Coxeter groups, of corresponding type $I_2(n)$. These have the presentation $\left\langle s, t \mid s^2, t^2, (st)^n \right\rangle$. The canonical linear representation of $I_2(n)$ is the usual reflection representation of the dihedral group, as acting on an $n$-gon in the plane (so $V = \mathbb{R}^2$ in this case). For instance, in the case $n = 3$ we get the Coxeter group of type $A_2 = I_2(3)$, acting on an equilateral triangle in the plane. Each reflection $s$ has an associated hyperplane $H_s$ in the dual vector space (which can be canonically identified with the vector space itself using the bilinear form $B$, which is an inner product in this case as remarked above); these are the walls. They cut out chambers. The Coxeter complex is then the corresponding $2n$-gon. This is a simplicial complex of dimension 1, and it can be colored by cotype. The infinite dihedral group Another motivating example is the infinite dihedral group $D_\infty$. This can be seen as the group of symmetries of the real line that preserves the set of points with integer coordinates; it is generated by the reflections in $x = 0$ and $x = 1$. This group has the Coxeter presentation $\left\langle s, t \mid s^2, t^2 \right\rangle$. In this case, it is no longer possible to identify $V$ with its dual space $V^*$, as $B$ is degenerate. It is then better to work solely with $V^*$, which is where the hyperplanes are defined. In this case, the Tits cone is not the whole plane, but only the upper half plane. Taking the quotient by the positive reals then yields another copy of the real line, with marked points at the integers. This is the Coxeter complex of the infinite dihedral group. Alternative construction of the Coxeter complex Another description of the Coxeter complex uses standard cosets of the Coxeter group $W$.
A standard coset is a coset of the form $wW_J$, where $W_J = \langle J \rangle$ for some proper subset $J$ of $S$. For instance, the singletons $\{w\} = wW_\varnothing$ and the cosets $wW_{S \smallsetminus \{s\}}$ of the maximal standard parabolic subgroups are standard cosets. The Coxeter complex $\Sigma(W,S)$ is then the poset of standard cosets, ordered by reverse inclusion. This has a canonical structure of a simplicial complex, as do all posets that satisfy: Any two elements have a greatest lower bound. The poset of elements less than or equal to any given element is isomorphic to the poset of subsets of $\{1, \ldots, n\}$ for some integer $n$. Properties The Coxeter complex associated to $(W,S)$ has dimension $|S| - 1$. It is homeomorphic to a $(|S|-1)$-sphere if $W$ is finite and is contractible if $W$ is infinite. Every apartment of a spherical Tits building is a Coxeter complex. See also Buildings Weyl group Root system References Sources Peter Abramenko and Kenneth S. Brown, Buildings, Theory and Applications. Springer, 2008. Group theory Algebraic combinatorics Geometric group theory Mathematical structures
Coxeter complex
Physics,Mathematics
920
65,374,549
https://en.wikipedia.org/wiki/Monique%20Chyba
Monique Chyba (born 1969) is a control theorist who works as a professor of mathematics at the University of Hawaiʻi at Mānoa. Her work on control theory has involved the theory of singular trajectories, and applications in the control of autonomous underwater vehicles. More recently, she has also applied control theory to the prediction and modeling of the spread of COVID-19 in Hawaii. Education and career Chyba's parents Mirek and Jana Chyba were Czech, but settled in Geneva, Switzerland. Chyba earned a Ph.D. through the University of Burgundy in Dijon, France, in 1997, while working as a teaching assistant at the University of Geneva. Her dissertation, Le Cas Martinet en Geometrie Sous-Riemannienne [the Martinet case in sub-Riemannian geometry], was supervised by Bernard Bonnard. After postdoctoral research at Pierre and Marie Curie University, Harvard University, INRIA Sophia Antipolis, Princeton University, and the University of California, Santa Cruz, she joined the University of Hawaiʻi faculty in 2002, and was promoted to full professor in 2012. Book Chyba is an author of the book Singular Trajectories and their Role in Control Theory (with Bernard Bonnard, Springer, 2003). Recognition In 2014, Chiba University in Japan gave Chyba its Science and Lectureship Award. References External links Home page 1969 births Living people Women mathematicians University of Hawaiʻi at Mānoa faculty Control theorists University of Burgundy alumni
Monique Chyba
Engineering
312
57,387,305
https://en.wikipedia.org/wiki/Ribbon%20Communications
Ribbon Communications Inc. is a public company that makes software, IP and optical networking solutions for service providers, enterprises and critical infrastructure sectors. The company was formed in 2017, following the merger of Genband and Sonus Networks, and is headquartered in Plano, Texas. History Ribbon Communications was the combination of two companies, each of which had acquired other businesses over their history. Ribbon Communications Ribbon Communications was founded in October, 2017, following the merger of Genband and Sonus Networks in May. Ray Dolan initially headed the combined company, while David Walsh led the Kandy business unit. By December, Dolan, who had led Sonus since 2010, resigned. Franklin (Fritz) W. Hobbs was appointed as president and CEO of the combined organization and served in that role until November 2019. In January 2018, the company announced that its session border controllers would be used in the virtual network services of Verizon. In 2018 Ribbon also acquired Edgewater Networks. In November 2019, Ribbon announced it would acquire ECI Telecom from Shaul Shani for $486 million in cash and stock. The company completed the merger in March 2020. In February 2020, Bruce McClelland was named president, CEO and director. A year later, Ribbon moved its headquarters to Plano, Texas. In August 2020, AVCTechnologies announced an agreement to buy the Kandy Communications business. The transaction was completed in December 2020. Genband General Bandwidth was founded in 1999 by Paul Carew, Brendon Mills, Ron Lutz and Steve Raich in Austin, Texas, and received initial venture capital funding of $12 million. The company raised over $200 million in four rounds of venture funding and grew to over 200 people by 2003. In 2004, Mills resigned and was replaced as CEO by Charles Vogt. In March 2006, General Bandwidth changed its name to Genband, Inc. and moved its headquarters to Plano, Texas. Genband started as a media gateway vendor selling the G6 media gateway, but eventually branched out to IP switching, IP applications, IP Multimedia Subsystem and session border controllers. In August, 2006, Genband acquired Syndeo and Baypackets (headquartered in Fremont, California, with employees mostly in India). In October, 2006 it acquired the digital central office products known as Siemens DCO. In 2007, Genband acquired Tekelec's switching group, which expanded product offerings in application software and SIP trunking gateways. In 2008, the company acquired Nokia Siemens Networks' Surpass HiG media gateway product portfolio, including fixed-line trunking media gateways. The company concluded 2008 with the acquisition of NextPoint Networks, which included session border controller (SBC) and security gateway offerings. In May 2010, Genband purchased Nortel Networks' carrier VoIP and application business for an estimated net $182 million after Nortel became bankrupt. Existing shareholder One Equity Partners assisted in financing. In June, 2010, Genband was re-incorporated as Genband, Inc, and disclosed an equity investment from executives and board members of about $4 million. In December, 2010 it moved its headquarters to Frisco, Texas, keeping its Plano campus as a design center. Both are near Dallas, Texas. In January, 2011, Genband acquired Cedar Point Communications in Derry, New Hampshire. In 2012, Genband acquired Aztek Networks, a switch maker specializing in hardware that allows for a smoother transition from legacy to IP networks.
Genband was named the top venture capital-backed company by the Wall Street Journal out of nearly 6,000 companies that were considered. On February 12, 2013, Genband announced the launch of the NUViA Cloud offering, which was its entry into the SaaS market. The NUViA Unified Communications as a Service (UCaaS) offering included HD voice, video, multimedia messaging, mobility, conferencing, Web collaboration, desktop clients, and fixed and mobile convergence hosted from datacenters run by the company around the globe. Also in 2013, Genband acquired Fringland Ltd., provider of the Fring! app, an over-the-top (OTT) mobile IP communications service provider. Two years later, it announced the Fring Alliance, a community encouraging communications service providers to offer instant messaging, voice and video services to their subscribers. Charles Vogt left Genband in 2013, and David Walsh added the CEO position to his existing title of chairman. In 2014, Genband acquired uReach Technologies, a provider of unified communications and messaging, and introduced unified communications products and services for business customers. In September 2014, Genband announced Kandy.io, a cloud-based, real-time communications software platform marketed as platform as a service (PaaS). In May 2015, Genband was named in CNBC's "disruptor" list. In 2016, it was involved in a patent dispute with Metaswitch. In September, 2016, pre-packaged software using the Kandy technology, called "Kandy Wrappers", was announced. Sonus Networks Sonus Networks, Inc. was founded in August 1997 by Jay Pasco-Anderson, Karl Schwiegershausen, Michael G. Hluchyj, Rubin Gruber and Tony Risica. Hluchyj was chief technology officer; Gruber served as president until November 1998, when Hassan M. Ahmed became CEO and chairman. There were no revenues until the quarter ending in March 2000, with accumulated losses of about $50 million against $1.1 million in revenue. On May 31, 2000, Sonus had its initial public offering (IPO), raising over $100 million. It was listed on Nasdaq with the symbol SONS. At the time (near the end of the dot-com bubble), it was located in Westford, Massachusetts. In January 2001, Sonus acquired Anousheh Ansari's firm Telecom Technologies, Inc., in an all-stock deal. Sonus subsequently integrated TTI's soft switch technology INtelligentIP into its own packet telephony suite. In 2008, Richard Nottenburg joined as chief executive. A product called a network border switch was announced in 2003, and updated in 2006. In August 2012, Sonus acquired Network Equipment Technologies, Inc., for approximately $42 million. The acquisition complemented its existing SBC line with the NET UX series for SIP Trunking and SIP-based UC. On December 13, 2013, Sonus and Performance Technologies, Inc. (PT) entered into a definitive merger agreement, under which Sonus would acquire PT for $3.75 per share in cash, or approximately $30 million. In 2014, Sonus completed the acquisition of Performance Technologies, moving into the Diameter signaling market. In 2016, Sonus Networks acquired Taqua, expanding its soft switching portfolio. Kandy Kandy is a cloud communications platform created by Genband in September 2014. The Kandy platform includes Communications Platform as a Service (CPaaS) and Unified Communications as a Service (UCaaS) assets formerly known as NUViA. The platform also includes pre-built customer engagement tools, based on WebRTC technology, called Kandy Wrappers.
The platform offers white-labeled services to Communication Service Providers (CSPs) and Systems Integrators (SIs). As such, Kandy Partners typically sell these to their end customers under their own brands. Kandy was formed when GENBAND announced the launch of its real-time communications software development platform in September 2014. Initially focused only on CPaaS, the platform was quickly expanded in scope to include GENBAND's Nuvia UCaaS offer, rebranded Kandy Business Solutions (KBS). After the 2017 merger, Ribbon Communications maintained the Kandy offerings and the Kandy sub-brand for its cloud portfolio. On December 2, 2020, Ribbon Communications sold the Kandy assets to AVCtechnologies. AVCT issued to Ribbon units of securities consisting of convertible debentures in an aggregate principal amount of approximately $45 million and warrants to purchase an aggregate of approximately 4.5 million shares of AVCT's common stock for an exercise price of $0.01 per share. On January 11, 2023, American Virtual Cloud and all of its affiliated subsidiaries, including AVCtechnologies and Kandy Communications, declared Chapter 11 bankruptcy, with the companies continuing to operate normally as 'debtors-in-possession'. References External links Networking hardware companies Companies based in Middlesex County, Massachusetts Westford, Massachusetts Telecommunications companies established in 2017 Networking companies of the United States Telecommunications equipment vendors Computer companies of the United States Computer hardware companies
Ribbon Communications
Technology
1,761
34,055,768
https://en.wikipedia.org/wiki/Eukaryotic%20translation%20initiation%20factor%204E%20family
In molecular biology, the eukaryotic translation initiation factor 4E family (eIF-4E) is a family of proteins that bind to the cap structure of eukaryotic cellular mRNAs. Members of this family recognise and bind the 7-methyl-guanosine-containing (m7Gppp) cap during an early step in the initiation of protein synthesis and facilitate ribosome binding to an mRNA by inducing the unwinding of its secondary structures. A tryptophan in the central part of the sequence of human eIF-4E seems to be implicated in cap-binding. Members of this family include EIF4E, EIF4E2, EIF4E3 and EIF4E1B. References External links Protein domains
Eukaryotic translation initiation factor 4E family
Biology
161
60,057,035
https://en.wikipedia.org/wiki/5-over-1
5-over-1 or over-1s, also known as a one-plus-five or a podium building, is a type of multi-family residential building commonly found in urban areas of North America. The mid-rise buildings are normally constructed with four or five wood-frame stories above a concrete podium, usually for retail or resident amenity space. The name derives from the maximum permissible five floors of combustible construction (Type III or Type V) over a fire-resistive Type I podium of one floor for "5-over-1" or two floors for "5-over-2", as defined in the United States–based International Building Code (IBC) Section 510.2. Some sources instead attribute the name to the wood framing of the upper construction; the International Building Code uses "Type V" to refer to non-fireproof structures, including those framed with dimensional lumber. The style of building originated with the work of architect Tim Smith in Los Angeles, who took advantage of a change in construction code allowing the use of fire-retardant treated wood (FRTW) to construct buildings up to five stories. From this he saw that what became the "Five-Over-One" model would bring the construction costs down substantially, making a 100-unit affordable housing project financially viable. The style took root in New York and other dense cities in the American Northeast following the revisions in the 2000 IBC edition, and it exploded in popularity in the 2010s, following a 2009 revision to the IBC, which allowed up to five stories of wood-framed construction. Description The first recorded example of 5-over-1 construction is an affordable housing apartment building in Los Angeles built in 1996. The wood-framed 5-over-1 style is popular due to its high density and relatively lower construction costs compared to steel and concrete. 5-over-1 buildings often feature secure-access interior hallways with residential units on both sides, which favors a U, E, C, or right-angle building shape. The exteriors of 5-over-1 buildings often contain flat windows, rainscreen cladding, and Hardie board cement fiber panels. These buildings are also sometimes called a Wrap or Texas Doughnut, terms describing a multifamily building that is wrapped around a parking garage in the center. This style is common in areas with higher parking mandates. Criticism 5-over-1 buildings are often criticized for their high fire risk while under construction, as well as for their architectural blandness. Some cities and jurisdictions have considered additional regulations for multi-story wood-framed structures. After an under-construction apartment complex burned to the ground in downtown Waltham, Massachusetts, in 2017, the city council voted 14–0 to request that the state reevaluate the building code for 5-over-1 buildings. The borough of Edgewater, New Jersey, introduced a resolution calling on the state of New Jersey to enact stricter fire safety regulations for wood-framed buildings following a large fire that occurred in the wood-framed Avalon at Edgewater apartments in 2015. 5-over-1 apartment buildings are also associated with gentrification, due to the popularity of the building style in neighborhoods affected by development-induced displacement. However, new housing at market rates (which may include 5-over-1-style buildings) has been shown to loosen the market for lower-quality housing, making it a possible anti-displacement tool.
See also Little Boxes ("Ticky tacky") Mixed-use development Studio apartment Three-decker (house) References Apartment types House types Housing in the United States Urban design Urban planning 2010s architecture in the United States
5-over-1
Engineering
753
5,345,346
https://en.wikipedia.org/wiki/Exterior%20gateway%20protocol
An exterior gateway protocol is an IP routing protocol used to exchange routing information between autonomous systems. This exchange is crucial for communications across the Internet. Notable exterior gateway protocols include Exterior Gateway Protocol (EGP), now obsolete, and Border Gateway Protocol (BGP). By contrast, an interior gateway protocol is a type of protocol used for exchanging routing information between gateways (commonly routers) within an autonomous system (for example, a system of corporate local area networks). This routing information can then be used to route network-level protocols like IP. References Internet protocols Internet Standards Routing protocols
Exterior gateway protocol
Technology
120
44,846,612
https://en.wikipedia.org/wiki/Classical%20nucleation%20theory
Classical nucleation theory (CNT) is the most common theoretical model used to quantitatively study the kinetics of nucleation. Nucleation is the first step in the spontaneous formation of a new thermodynamic phase or a new structure, starting from a state of metastability. The kinetics of formation of the new phase is frequently dominated by nucleation, such that the time to nucleate determines how long it will take for the new phase to appear. The time to nucleate can vary by orders of magnitude, from negligible to exceedingly large, far beyond reach of experimental timescales. One of the key achievements of classical nucleation theory is to explain and quantify this immense variation. Description The central result of classical nucleation theory is a prediction for the rate of nucleation $J$, in units of (number of events)/(volume·time). For instance, a rate $J = 1000\,\mathrm{m^{-3}\,s^{-1}}$ in a supersaturated vapor would correspond to an average of 1000 droplets nucleating in a volume of 1 cubic meter in 1 second. The CNT prediction for $J$ is $J = N_S\, Z\, j\, \exp\!\left(-\frac{\Delta G^*}{k_B T}\right)$, where $\Delta G^*$ is the free energy cost of the nucleus at the top of the nucleation barrier, and $k_B T$ is the average thermal energy, with $T$ the absolute temperature and $k_B$ the Boltzmann constant; $N_S$ is the number of nucleation sites; $j$ is the rate at which molecules attach to the nucleus; and $Z$ is the Zeldovich factor (named after Yakov Zeldovich), which gives the probability that a nucleus at the top of the barrier will go on to form the new phase, rather than dissolve. This expression for the rate can be thought of as a product of two factors: the first, $N_S \exp(-\Delta G^*/k_B T)$, is the number of nucleation sites multiplied by the probability that a nucleus of critical size has grown around it. It can be interpreted as the average, instantaneous number of nuclei at the top of the nucleation barrier. Free energies and probabilities are closely related by definition. The probability of a nucleus forming at a site is proportional to $\exp(-\Delta G^*/k_B T)$. So if $\Delta G^*$ is large and positive, the probability of forming a nucleus is very low and nucleation will be slow. Then the average number will be much less than one, i.e., it is likely that at any given time none of the sites has a nucleus. The second factor in the expression for the rate is the dynamic part, $Z j$. Here, $j$ expresses the rate of incoming matter and $Z$ is the probability that a nucleus of critical size (at the maximum of the energy barrier) will continue to grow and not dissolve. The Zeldovich factor is derived by assuming that the nuclei near the top of the barrier are effectively diffusing along the radial axis. By statistical fluctuations, a nucleus at the top of the barrier can grow diffusively into a larger nucleus that will grow into a new phase, or it can lose molecules and shrink back to nothing. The probability that a given nucleus goes forward is $Z$. Taking into consideration kinetic theory and assuming that there is the same transition probability in each direction, the attachment rate is set by the molecular hopping rate, and can be rewritten in terms of the mean free path $\lambda$ and the mean free time $\tau$. Consequently, a relation for $j$ in terms of the diffusion coefficient $D$ is obtained, since $D = \lambda^2/(6\tau)$, so that $j \propto D$. Further considerations can be made in order to study temperature dependence. Therefore, the Einstein-Stokes relation is introduced under the consideration of a spherical shape, $D = \frac{k_B T}{6\pi\eta r}$, where $\eta$ is the material's viscosity. Considering the last two expressions, it is seen that $j \propto T/\eta$. If $T \approx T_m$, with $T_m$ the melting temperature, the ensemble gains high mobility, but the thermodynamic driving force $|\Delta g|$ (defined below) becomes small; this makes the critical radius $r^*$ and the barrier $\Delta G^*$ increase and hence $J$ decreases.
If $T \ll T_m$, the ensemble has a low mobility, which makes $J$ decrease as well. To see how this works in practice we can look at an example. Sanz and coworkers have used computer simulation to estimate all the quantities in the above equation, for the nucleation of ice in liquid water. They did this for a simple but approximate model of water called TIP4P/2005. At a supercooling of 19.5 °C, i.e., 19.5 °C below the freezing point of water in their model, they estimate the free energy barrier $\Delta G^*$ to the nucleation of ice, the rate $j$ of addition of water molecules to an ice nucleus near the top of the barrier, and the Zeldovich factor $Z$. The number of water molecules in 1 m³ of water is approximately $10^{28}$. These lead to the prediction $J \approx 10^{-83}\,\mathrm{m^{-3}\,s^{-1}}$, which means that on average one would have to wait $10^{83}$ s ($10^{76}$ years) to see a single ice nucleus forming in 1 m³ of water at −20 °C! This is a rate of homogeneous nucleation estimated for a model of water, not real water — in experiments one cannot grow nuclei of water and so cannot directly determine the values of the barrier $\Delta G^*$, or the dynamic parameters such as $j$, for real water. However, it may be that indeed the homogeneous nucleation of ice at temperatures near −20 °C and above is extremely slow and so that whenever water freezes at temperatures of −20 °C and above this is due to heterogeneous nucleation, i.e., the ice nucleates in contact with a surface. Homogeneous nucleation Homogeneous nucleation is much rarer than heterogeneous nucleation. However, homogeneous nucleation is simpler and easier to understand than heterogeneous nucleation, so the easiest way to understand heterogeneous nucleation is to start with homogeneous nucleation. So we will outline the CNT calculation for the homogeneous nucleation barrier $\Delta G^*$. To understand if nucleation is fast or slow, $\Delta G(r)$, the Gibbs free energy change as a function of the size of the nucleus, needs to be calculated. The classical theory assumes that even for a microscopic nucleus of the new phase, we can write the free energy of a droplet as the sum of a bulk term that is proportional to the volume of the nucleus, and a surface term that is proportional to its surface area: $\Delta G(r) = \frac{4}{3}\pi r^3 \Delta g + 4\pi r^2 \sigma$. The first term is the volume term, and as we are assuming that the nucleus is spherical, this is the volume of a sphere of radius $r$. $\Delta g$ is the difference in free energy per unit of volume between the phase that nucleates and the thermodynamic phase nucleation is occurring in. For example, if water is nucleating in supersaturated air, then $\Delta g$ is the free energy per unit of volume of water minus that of supersaturated air at the same pressure. As nucleation only occurs when the air is supersaturated, $\Delta g$ is always negative. The second term comes from the interface at the surface of the nucleus, which is why it is proportional to the surface area of a sphere. $\sigma$ is the surface tension of the interface between the nucleus and its surroundings, which is always positive. For small $r$ the second, surface term dominates and $\Delta G(r) > 0$. The free energy is the sum of an $r^3$ term and an $r^2$ term. The $r^3$ term varies more rapidly with $r$ than the $r^2$ term, so for small $r$ the $r^2$ term dominates and the free energy is positive, while for large $r$ the $r^3$ term dominates and the free energy is negative. Thus at some intermediate value of $r$, the free energy goes through a maximum, and so the probability of formation of a nucleus goes through a minimum.
There is a least-probable nucleus size, i.e., the one with the highest value of $\Delta G$, where $\frac{\mathrm{d}\Delta G}{\mathrm{d}r} = 0$. Addition of new molecules to nuclei larger than this critical radius, $r^* = -\frac{2\sigma}{\Delta g}$, decreases the free energy, so these nuclei are more probable. The rate at which nucleation occurs is then limited by, i.e., determined by, the probability of forming the critical nucleus. This is just the exponential of minus the free energy of the critical nucleus $\Delta G^*$, which is $\Delta G^* = \frac{16\pi\sigma^3}{3(\Delta g)^2}$. This is the free energy barrier needed in the CNT expression for $J$ above. In the discussion above, we assumed the growing nucleus to be three-dimensional and spherical. Similar equations can be set up for other dimensions and/or other shapes, using the appropriate expressions for the analogues of volume and surface area of the nucleus. One will then find out that any non-spherical nucleus has a higher barrier height than the corresponding spherical nucleus. This can be understood from the fact that a sphere has the lowest possible surface area to volume ratio, thereby minimizing the (unfavourable) surface contribution with respect to the (favourable) bulk volume contribution to the free energy. Assuming equal kinetic prefactors, the fact that $\Delta G^*$ is higher for non-spherical nuclei implies that their formation rate is lower. This explains why in homogeneous nucleation usually only spherical nuclei are taken into account. From an experimental standpoint, this theory grants tuning of the critical radius through the dependence of $\Delta g$ on temperature. The variable $\Delta g$, described above, can be expressed as $\Delta g = -\frac{\Delta H_f\,\Delta T}{T_m}$, where $T_m$ is the melting point, $\Delta H_f$ is the enthalpy of formation for the material (per unit volume), and $\Delta T = T_m - T$ is the undercooling. Furthermore, the critical radius can be expressed as $r^* = \frac{2\sigma T_m}{\Delta H_f\,\Delta T}$, revealing the dependence on reaction temperature. Thus as the temperature is increased towards $T_m$, the critical radius will increase; conversely, moving away from the melting point makes the critical radius and the free energy barrier decrease. Heterogeneous nucleation Unlike homogeneous nucleation, heterogeneous nucleation occurs on a surface or impurity. It is much more common than homogeneous nucleation. This is because the nucleation barrier for heterogeneous nucleation is much lower than for homogeneous nucleation. To see this, note that the nucleation barrier is determined by the positive term in the free energy $\Delta G$, which is proportional to the total exposed surface area of a nucleus. For homogeneous nucleation the surface area is simply that of a sphere. For heterogeneous nucleation, however, the surface area is smaller since part of the nucleus boundary is accommodated by the surface or impurity onto which it is nucleating. There are several factors which determine the precise reduction in the exposed surface area. These factors include the size of the droplet, the contact angle $\theta$ between the droplet and surface, and the interactions at the three phase interfaces: liquid-solid, solid-vapor, and liquid-vapor. The free energy needed for heterogeneous nucleation, $\Delta G_{\mathrm{het}}$, is equal to the product of the homogeneous nucleation barrier, $\Delta G_{\mathrm{hom}}$, and a function of the contact angle, $f(\theta)$: $\Delta G_{\mathrm{het}} = f(\theta)\,\Delta G_{\mathrm{hom}}$, with $f(\theta) = \frac{(2 + \cos\theta)(1 - \cos\theta)^2}{4}$. The exposed surface area of the droplet decreases as the contact angle decreases. Deviations from a flat interface decrease the exposed surface even further: there exist expressions for this reduction for simple surface geometries. In practice, this means that nucleation will tend to occur on surface imperfections.
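As a numerical illustration of the expressions above, the following sketch (Python; a minimal illustration, not part of the source: the values of j, Z and the barrier height below are assumed, order-of-magnitude placeholders consistent with the ice-nucleation example quoted earlier, and f(θ) is the standard spherical-cap form given above):

import math

# CNT rate J = N_S * Z * j * exp(-dG*/kT), for illustrative inputs of the
# kind discussed in the ice example (N_S from the text; j, Z, dG* assumed).
N_S = 1e28          # nucleation sites (water molecules) per cubic metre
j = 1e11            # attachment rate at the top of the barrier, per second
Z = 1e-3            # Zeldovich factor
dG_over_kT = 275.0  # barrier height Delta G* in units of k_B T

J = N_S * Z * j * math.exp(-dG_over_kT)
print(f"J ~ {J:.1e} m^-3 s^-1")   # of order 1e-84 to 1e-83: essentially never

# Heterogeneous nucleation: the homogeneous barrier is multiplied by
# f(theta) = (2 + cos(theta)) * (1 - cos(theta))**2 / 4.
def f(theta_deg):
    c = math.cos(math.radians(theta_deg))
    return (2 + c) * (1 - c) ** 2 / 4

for theta in (180, 90, 30):
    print(theta, round(f(theta), 4))   # 1.0, 0.5, 0.0129

With f(θ) ≈ 0.013 at a 30° contact angle, a barrier of 275 k_BT would drop to roughly 3.5 k_BT, illustrating why heterogeneous nucleation so thoroughly dominates in practice.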
Statistical mechanical treatment The classical nucleation theory hypothesis for the form of $\Delta G$ can be examined more rigorously using the tools of statistical mechanics. Specifically, the system is modeled as a gas of non-interacting clusters in the grand canonical ensemble. A state of metastable equilibrium is assumed, such that the methods of statistical mechanics hold at least approximately. The grand partition function is $\Xi = \sum_{N=0}^{\infty} e^{\beta\mu N} Q(N,V,T)$. Here the inner summation $Q(N,V,T)$ is over all microstates which contain exactly $N$ particles. It can be decomposed into contributions from each possible combination of clusters which results in $N$ total particles. For instance, $Q(2,V,T) = \frac{1}{2!}q_1^2 + q_2$, where $q_n$ is the configuration integral of a cluster with $n$ particles and potential energy $U_n$: $q_n = \frac{1}{\Lambda^{3n}\, n!} \int \mathrm{d}^{3n}r\; e^{-\beta U_n}$. The quantity $\Lambda$ is the thermal de Broglie wavelength of the particle, which enters due to the integration over the momentum degrees of freedom. The inverse factorials are included to compensate for overcounting, since particles and clusters alike are assumed indistinguishable. More compactly, $\Xi = \sum_{\{n_i\}} \prod_i \frac{1}{n_i!}\left(e^{\beta\mu i} q_i\right)^{n_i} = \exp\!\left(\sum_i e^{\beta\mu i} q_i\right)$. Then, by expanding $\Xi$ in powers of $e^{\beta\mu}$, the probability of finding exactly $n_i$ clusters which each has $i$ particles is $P(n_i) = \frac{1}{n_i!}\left(e^{\beta\mu i} q_i\right)^{n_i} e^{-e^{\beta\mu i} q_i}$. The number density of $i$-clusters can therefore be calculated as $\rho_i = \frac{\langle n_i \rangle}{V} = \frac{q_i}{V}\, e^{\beta\mu i}$. This is also called the cluster size distribution. The grand potential is equal to $-k_B T \ln \Xi$, which, using the thermodynamic relationship $\Omega = -PV$, leads to the following expansion for the pressure: $\frac{P}{k_B T} = \frac{1}{V}\sum_i e^{\beta\mu i} q_i$. If one defines the right hand side of the above equation as the function $f(\beta,\mu)$, then various other thermodynamic quantities can be calculated in terms of derivatives of $f$ with respect to $\mu$. The connection with the simple version of the theory is made by assuming perfectly spherical clusters, in which case $q_i$ depends only on $i$, in the form $q_i = V\Lambda^{-3}\, e^{\beta(\epsilon i - \sigma a i^{2/3})}$, where $\epsilon$ is the binding energy of a single particle in the interior of a cluster, and $\sigma$ is the excess energy per unit area of the cluster surface, the surface area of an $i$-cluster being $a\, i^{2/3}$. Then, $\rho_i = \Lambda^{-3}\, e^{-\beta \Delta G(i)}$, and the cluster size distribution implies an effective free energy landscape $\Delta G(i) = -(\mu + \epsilon)\, i + \sigma a\, i^{2/3}$, in agreement with the form proposed by the simple theory. On the other hand, this derivation reveals the significant approximation in assuming spherical clusters with $q_i$ depending on $i$ alone. In reality, the configuration integral contains contributions from the full set of particle coordinates, thus including deviations from spherical shape as well as cluster degrees of freedom such as translation, vibration, and rotation. Various attempts have been made to include these effects in the calculation of $q_i$, although the interpretation and application of these extended theories has been debated. A common feature is the addition of a logarithmic correction to $\Delta G(i)$, which plays an important role near the critical point of the fluid. Limitations Classical nucleation theory makes a number of assumptions which limit its applicability. Most fundamentally, in the so-called capillarity approximation it treats the nucleus interior as a bulk, incompressible fluid and ascribes to the nucleus surface the macroscopic interfacial tension $\sigma$, even though it is not obvious that such macroscopic equilibrium properties apply to a typical nucleus of, say, 50 molecules across. In fact, it has been shown that the effective surface tension of small droplets is smaller than that of the bulk liquid. In addition, the classical theory places restrictions on the kinetic pathways by which nucleation occurs, assuming clusters grow or shrink only by single particle adsorption/emission. In reality, merging and fragmentation of entire clusters cannot be excluded as important kinetic pathways in some systems.
Particularly in dense systems or near the critical point – where clusters acquire an extended and ramified structure – such kinetic pathways are expected to contribute significantly. The behavior near the critical point also suggests the inadequacy, at least in some cases, of treating clusters as purely spherical. Various attempts have been made to remedy these limitations and others by explicitly accounting for the microscopic properties of clusters. However, the validity of such extended models is debated. One difficulty is the exquisite sensitivity of the nucleation rate to the free energy barrier \(\Delta G^*\): even small discrepancies in the microscopic parameters can lead to enormous changes in the predicted nucleation rate. This fact makes first-principles predictions nearly impossible. Instead, models must be fit directly to experimental data, which limits the ability to test their fundamental validity. Comparison with simulation and experiment For simple model systems, modern computers are powerful enough to calculate exact nucleation rates numerically. An example is the nucleation of the crystal phase in a system of hard spheres, which is a simple model of colloids consisting of perfectly hard spheres in thermal motion. The agreement of CNT with the simulated rates for this system confirms that the classical theory is a reasonable approximation. For simple models CNT works quite well; however, it is unclear whether it describes complex (e.g., molecular) systems equally well. For example, in the context of vapor-to-liquid nucleation, the CNT predictions for the nucleation rate are incorrect by several orders of magnitude on an absolute scale, that is, without renormalizing with respect to experimental data. Nevertheless, certain variations on the classical theory have been claimed to represent the temperature dependence adequately, even if the absolute magnitude is inaccurate. Jones et al. computationally explored the nucleation of small water clusters using a classical water model. It was found that CNT could describe the nucleation of clusters of 8–50 water molecules well, but failed to describe smaller clusters. Corrections to CNT, obtained from higher accuracy methods such as quantum chemical calculations, may improve the agreement with experiment. References Particle detectors Self-organization
Classical nucleation theory
Mathematics,Technology,Engineering
3,259
156,932
https://en.wikipedia.org/wiki/Peristalsis
Peristalsis is a type of intestinal motility, characterized by radially symmetrical contraction and relaxation of muscles that propagate in a wave down a tube, in an anterograde direction. Peristalsis is the progression of coordinated contractions of involuntary circular muscles, preceded by a simultaneous contraction of the longitudinal muscle and relaxation of the circular muscle in the lining of the gut. In much of a digestive tract, such as the human gastrointestinal tract, smooth muscle tissue contracts in sequence to produce a peristaltic wave, which propels a ball of food (called a bolus before being transformed into chyme in the stomach) along the tract. The peristaltic movement comprises relaxation of circular smooth muscles, then their contraction behind the chewed material to keep it from moving backward, then longitudinal contraction to push it forward. Earthworms use a similar mechanism to drive their locomotion, and some modern machinery imitates this design. The word comes from Neo-Latin and is derived from the Greek peristellein, "to wrap around," from peri-, "around" + stellein, "draw in, bring together; set in order". Human physiology Peristalsis is generally directed caudally, that is, towards the anus. This sense of direction might be attributable to the polarisation of the myenteric plexus. Because of the reliance of the peristaltic reflex on the myenteric plexus, it is also referred to as the myenteric reflex. Mechanism of the peristaltic reflex The food bolus stretches the gut smooth muscle, causing serotonin to be secreted to sensory neurons, which are then activated. These sensory neurons, in turn, activate neurons of the myenteric plexus, which then proceed to split into two cholinergic pathways: a retrograde and an anterograde. Activated neurons of the retrograde pathway release substance P and acetylcholine to contract the smooth muscle behind the bolus. The activated neurons of the anterograde pathway instead release nitric oxide and vasoactive intestinal polypeptide to relax the smooth muscle caudal to the bolus. This allows the food bolus to effectively be pushed forward along the digestive tract. Esophagus After food is chewed into a bolus, it is swallowed and moved through the esophagus. Smooth muscles contract behind the bolus to prevent it from being squeezed back into the mouth. Then rhythmic, unidirectional waves of contractions work to rapidly force the food into the stomach. The migrating motor complex (MMC) helps trigger peristaltic waves. This process works in one direction only, and its sole esophageal function is to move food from the mouth into the stomach (the MMC also functions to clear out remaining food in the stomach to the small bowel and remaining particles in the small bowel into the colon). In the esophagus, two types of peristalsis occur: First, there is a primary peristaltic wave, which occurs when the bolus enters the esophagus during swallowing. The primary peristaltic wave forces the bolus down the esophagus and into the stomach in a wave lasting about 8–9 seconds. The wave travels down to the stomach even if the bolus of food descends at a greater rate than the wave itself, and continues even if for some reason the bolus gets stuck further up the esophagus. 
If the bolus gets stuck or moves slower than the primary peristaltic wave (as can happen when it is poorly lubricated), then stretch receptors in the esophageal lining are stimulated and a local reflex response causes a secondary peristaltic wave around the bolus, forcing it further down the esophagus; these secondary waves continue indefinitely until the bolus enters the stomach. The process of peristalsis is controlled by the medulla oblongata. Esophageal peristalsis is typically assessed by performing an esophageal motility study. A third type of peristalsis, tertiary peristalsis, is dysfunctional and involves irregular, diffuse, simultaneous contractions. These contractions are associated with esophageal dysmotility and present on a barium swallow as a "corkscrew esophagus". During vomiting, the propulsion of food up the esophagus and out the mouth comes from the contraction of the abdominal muscles; peristalsis does not reverse in the esophagus. Stomach When a peristaltic wave reaches the end of the esophagus, the cardiac sphincter (gastroesophageal sphincter) opens, allowing the passage of the bolus into the stomach. The gastroesophageal sphincter normally remains closed and does not allow the stomach's food contents to move back. The churning movements of the stomach's thick muscular wall blend the food thoroughly with the acidic gastric juice, producing a mixture called chyme. The muscularis layer of the stomach is thickest here, and maximum peristalsis occurs in the stomach. At short intervals, the pyloric sphincter opens and closes, so the chyme is fed into the intestine in installments. Small intestine Once processed and digested by the stomach, the semifluid chyme is passed through the pyloric sphincter into the small intestine. Once past the stomach, a typical peristaltic wave lasts only a few seconds, traveling at only a few centimeters per second. Its primary purpose is to mix the chyme in the intestine rather than to move it forward in the intestine. Through this process of mixing and continued digestion and absorption of nutrients, the chyme gradually works its way through the small intestine to the large intestine. In contrast to peristalsis, segmentation contractions result in churning and mixing without pushing materials further down the digestive tract. Large intestine Although the large intestine has peristalsis of the type that the small intestine uses, it is not the primary propulsion. Instead, general contractions called mass action contractions occur one to three times per day in the large intestine, propelling the chyme (now feces) toward the rectum. Mass movements are often triggered by meals, as the presence of chyme in the stomach and duodenum prompts them (gastrocolic reflex). Peristalsis is weakest in the rectum, which has the thinnest muscularis layer. Lymph The human lymphatic system has no central pump. Instead, lymph circulates by means of peristalsis in the lymph capillaries, valves in the capillaries, compression during contraction of adjacent skeletal muscle, and arterial pulsation. Sperm During ejaculation, the smooth muscle in the walls of the vasa deferentia contract reflexively in peristalsis, propelling sperm from the testicles to the urethra. Earthworms The earthworm is a limbless annelid worm with a hydrostatic skeleton that moves by peristalsis. Its hydrostatic skeleton consists of a fluid-filled body cavity surrounded by an extensible body wall. 
The worm moves by radially constricting the anterior portion of its body, increasing length via hydrostatic pressure. This constricted region propagates posteriorly along the worm's body. As a result, each segment is extended forward, then relaxes and re-contacts the substrate, with hair-like setae preventing backward slipping. Various other invertebrates, such as caterpillars and millipedes, also move by peristalsis. Machinery A peristaltic pump is a positive-displacement pump in which a motor pinches advancing portions of a flexible tube to propel a fluid within the tube. The pump isolates the fluid from the machinery, which is important if the fluid is abrasive or must remain sterile. Robots have been designed that use peristalsis to achieve locomotion, as the earthworm uses it. Related terms Aperistalsis refers to a lack of propulsion. It can result from achalasia of the smooth muscle involved. Basal electrical rhythm is a slow wave of electrical activity that can initiate a contraction. Catastalsis is a related intestinal muscle process. Ileus is a disruption of the normal propulsive ability of the gastrointestinal tract caused by the failure of peristalsis. Retroperistalsis, the reverse of peristalsis Segmentation contractions are another type of intestinal motility. Intestinal desmosis, the atrophy of the tendinous plexus layer, may cause disturbed gut motility. References External links Interactive 3D display of swallow waves at menne-biomed.de Overview at colostate.edu Digestive system
Peristalsis
Biology
1,902
74,580,636
https://en.wikipedia.org/wiki/Macronovirus
Macronovirus is the only genus of the family Sarthroviridae and only contains the species Macrobrachium satellite virus 1. It is found in the French West Indies, Thailand, Taiwan, China, and India. Etymology The genus name, Macronovirus, is a combination of Macro, from the type species host Macrobrachium rosenbergii, and no, from the helper virus nodavirus. The family name, Sarthroviridae, is a combination of S, from small, and arthro, from the host phylum Arthropoda. Hosts Macronovirus's cell tropism is the muscle and connective tissue cells of diseased animals, and its natural hosts are arthropods. Structure The virion of Macrobrachium satellite virus 1 has a genome consisting of linear single-stranded RNA of positive polarity, 0.8 kb in size, with two genes. This encodes two capsid proteins, CP-17 and CP-16. The virion is non-enveloped and spherical, with a capsid of about 15 nm with icosahedral symmetry. The virion is constructed from the two capsid proteins CP-17 and CP-16. It has a monopartite, linear, ssRNA(+) genome. Gene expression The virion RNA is infectious and serves as both the genome and viral messenger RNA. Replication Its replication is cytoplasmic and has 8 steps. Attachment to host receptors mediates entry into the host cell. Uncoating, and release of the viral genomic RNA into the cytoplasm. Viral RNA is translated into a polyprotein to produce replication proteins. Replication by the helper virus occurs in viral factories made of membrane vesicles derived from the ER. A dsRNA genome is synthesized from the genomic ssRNA(+). The dsRNA genome is transcribed/replicated, thereby providing viral mRNAs/new ssRNA(+) genomes. Expression of the capsid proteins. Assembly of new virus particles. Virus release. Disease Whitish muscle disease develops in post-larvae of the freshwater prawn Macrobrachium rosenbergii and is caused by Macrobrachium rosenbergii nodavirus (MrNV) and its associated Macrobrachium satellite virus 1. The main symptom is a whitish appearance of the muscles, particularly noticeable in the abdomen. Mortalities can reach 100%. References Virus genera Riboviria
Macronovirus
Biology
497
57,886,751
https://en.wikipedia.org/wiki/NGC%203840
NGC 3840 is a spiral galaxy located about 320 million light-years away in the constellation Leo. The galaxy was discovered by astronomer Heinrich d'Arrest on May 8, 1864. NGC 3840 is a member of the Leo Cluster. The galaxy is rich in neutral atomic hydrogen (H I) and is not interacting with its environment. NGC 3840 is likely to be a low-luminosity AGN (LLAGN). See also List of NGC objects (3001–4000) References External links 3840 36477 6702 Leo (constellation) Leo Cluster Spiral galaxies Astronomical objects discovered in 1864 Active galaxies
NGC 3840
Astronomy
127
24,106,988
https://en.wikipedia.org/wiki/C15H12O6
{{DISPLAYTITLE:C15H12O6}} The chemical formula C15H12O6 (molar mass: 288.25 g/mol, exact mass: 288.063388) may refer to: Aromadendrin, a flavanonol Dehydroaltenusin, a polyphenol Eriodictyol, a flavanone Fustin, a flavanonol Okanin, a chalcone Thunberginol D, an isocoumarin
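As a quick arithmetic check of the quoted molar mass, one can sum standard atomic weights; a minimal sketch (the atomic weights are rounded standard IUPAC values):

```python
# Verify the molar mass of C15H12O6 from standard atomic weights.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(composition):
    """Sum atomic weights over an element -> count mapping."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

print(f"{molar_mass({'C': 15, 'H': 12, 'O': 6}):.3f} g/mol")  # 288.255, matching the quoted 288.25
```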
C15H12O6
Chemistry
113
61,495,967
https://en.wikipedia.org/wiki/C18H22N4O2
{{DISPLAYTITLE:C18H22N4O2}} The molecular formula C18H22N4O2 (molar mass: 326.17 g/mol, exact mass: 326.1743 u) may refer to: EGIS-7625 Peficitinib (Smyraf)
C18H22N4O2
Chemistry
69
20,543,137
https://en.wikipedia.org/wiki/Amanita%20persicina
Amanita persicina, commonly known as the peach-colored fly agaric, is a basidiomycete fungus of the genus Amanita with a peach-colored center. Until 2015, the fungus was believed to be a variety of A. muscaria. A. persicina is distributed in eastern North America. It is both poisonous and psychoactive. Taxonomy Amanita persicina was formerly treated as a variety of A. muscaria (the fly agaric) and was classified as A. muscaria var. persicina. Recent DNA evidence, however, has indicated that A. persicina is better treated as a distinct species, and it was elevated to species status in 2015 by Tulloss & Geml. Description Cap The cap is wide, hemispheric to convex when young, becoming plano-convex to plano-depressed in age. It is pinkish-melon-colored to peach-orange, sometimes pastel red towards the disc. The cap is slightly appendiculate. The volva is distributed over the cap as thin pale yellowish to pale tannish warts; it is otherwise smooth and subviscid, and the margin becomes slightly to moderately striate in age. The flesh is white and does not stain when cut or injured. The flesh has a pleasant taste and odor. Gills The gills are free, crowded, moderately broad, creamy with a pale pinkish tint, and have a very floccose edge. They are abruptly truncate. Spores Amanita persicina spores are white in deposit, ellipsoid to elongate, infrequently broadly ellipsoid, rarely cylindric, inamyloid, and measure (8.0) 9.4–12.7 (18.0) × (5.5) 6.5–8.5 (11.1) μm. Stipe The stipe is 4–10.5 cm long, 1–2 cm wide, and more or less equal or narrowing upwards, slightly flaring at the apex. It is pale yellow in the superior region, tannish white below, and densely stuffed with a pith. The ring is fragile, white above and yellowish below, and poorly formed or absent. Remnants of the universal veil on the basal bulb, as concentric rings, are fragile or absent. Chemistry This species contains variable amounts of the neurotoxic compound ibotenic acid and the psychoactive compound muscimol. Distribution and habitat A. persicina is found growing solitarily or gregariously. It is mycorrhizal with coniferous (pine) and deciduous (oak) trees in North America. It often fruits in the fall, but sometimes in the spring and summer in the southern states. The fungus is common in the southeastern United States, from Texas to Georgia, and north to New Jersey. Toxicity A. persicina is both poisonous and psychoactive if not properly prepared by parboiling. Pending further research, it should not be eaten. References Miller, O. K. Jr., D. T. Jenkins and P. Dery. 1986. Mycorrhizal synthesis of Amanita muscaria var. persicina with hard pines. Mycotaxon 26: 165–172. Jenkins, D. T. 1977. A taxonomic and nomenclatural study of the genus Amanita section Amanita for North America. Biblioth. Mycol. 57: 126 pp. External links Amanita persicina page by Rod Tulloss persicina Fungi described in 1977 Poisonous fungi Psychoactive fungi Fungus species
Amanita persicina
Biology,Environmental_science
764
23,209
https://en.wikipedia.org/wiki/Peppermint
Peppermint (Mentha × piperita) is a hybrid species of mint, a cross between watermint and spearmint. Indigenous to Europe and the Middle East, the plant is now widely spread and cultivated in many regions of the world. It is occasionally found in the wild with its parent species. Although the genus Mentha comprises more than 25 species, the one in most common use is peppermint. While Western peppermint is derived from Mentha × piperita, Chinese peppermint, or bohe, is derived from the fresh leaves of M. haplocalyx. M. × piperita and M. haplocalyx are both recognized as plant sources of menthol and menthone, and are among the oldest herbs used for both culinary and medicinal products. Botany Peppermint was first identified in Hertfordshire, England, by a Dr. Eales, a discovery which John Ray published in 1696 in the second edition of his book Synopsis Methodica Stirpium Britannicarum. He initially gave it the name Mentha spicis brevioribus et habitioribus, foliis Mentha fusca, sapore fervido piperis and later in his 1704 volume Historia Plantarum he called it Mentha palustris or Peper–Mint. The plant was then added to the London Pharmacopoeia under the name Mentha piperitis sapore in 1721. It was given the name Mentha piperita in 1753 by Carl Linnaeus in his Species Plantarum Volume 2. Linnaeus treated peppermint as a species, but it is now universally agreed to be a hybrid between Mentha viridis and Mentha aquatica, with Mentha viridis itself also being a hybrid between Mentha sylvestris and Mentha rotundifolia. Peppermint is an herbaceous, rhizomatous, perennial plant that grows to be tall, with smooth stems, square in cross section. The rhizomes are wide-spreading and fleshy, and bear fibrous roots. The leaves can be long and broad. They are dark green with reddish veins, with an acute apex and coarsely toothed margins. The leaves and stems are usually slightly fuzzy. The flowers are purple, long, with a four-lobed corolla about diameter; they are produced in whorls (verticillasters) around the stem, forming thick, blunt spikes. Flowering season lasts from mid- to late summer. The chromosome number is variable, with 2n counts of 66, 72, 84, and 120 recorded. Peppermint is a fast-growing plant, spreading quickly once it has sprouted. Ecology Peppermint typically occurs in moist habitats, including stream sides and drainage ditches. Being a hybrid, it is usually sterile, producing no seeds and reproducing only vegetatively, spreading by its runners. Outside of its native range, areas where peppermint was formerly grown for oil often have an abundance of feral plants, and it is considered invasive in Australia, the Galápagos Islands, New Zealand, and the United States in the Great Lakes region, noted since 1843. Cultivation Peppermint generally grows best in moist, shaded locations, and expands by underground rhizomes. Young shoots are taken from old stocks and dibbled into the ground about 0.5 m (1.5 ft) apart. They grow quickly and cover the ground with runners if it is permanently moist. For the home gardener, it is often grown in containers to restrict rapid spreading. It grows best with a good supply of water, without being water-logged, and planted in areas with partial sun to shade. The leaves and flowering tops are used; they are collected as soon as the flowers begin to open and can be dried. The wild form of the plant is less suitable for this purpose, with cultivated plants having been selected for more and better oil content. 
They may be allowed to lie and wilt a little before distillation, or they may be taken directly to the still. Cultivars Several cultivars have been selected for garden use: Mentha × piperita 'Candymint' has reddish stems. Mentha × piperita 'Chocolate Mint'. Its flowers open from the bottom up; its flavour is reminiscent of the flavour in Andes Chocolate Mints, a popular confection. Mentha × piperita 'Citrata' includes a number of varieties including Eau de Cologne mint, grapefruit mint, lemon mint, and orange mint. Its leaves are aromatic and hairless. Mentha × piperita 'Crispa' has wrinkled leaves. Mentha × piperita 'Lavender Mint' Mentha × piperita 'Lime Mint' has lime-scented foliage. Mentha × piperita 'Variegata' has mottled green and pale yellow leaves. Commercial cultivars may include: Dulgo pole Zefir Bulgarian population #2 Clone 11-6-22 Clone 80-121-33 Mitcham Digne 38 Mitcham Ribecourt 19 'Todd's Mitcham', a verticillium wilt-resistant cultivar produced from a breeding and test program of atomic gardening at Brookhaven National Laboratory from the mid-1950s 'Refined Murray', also verticillium-resistant 'Roberts Mitcham', also verticillium-resistant and also the product of mutation breeding Diseases Verticillium wilt is a major constraint in peppermint cultivation. 'Todd's Mitcham', 'Refined Murray', 'Roberts Mitcham' (see above), and a few other cultivars have some degree of resistance. Production In 2022, world production of peppermint was 51,081 tonnes, led by Morocco with 84% of the total and Argentina with 14%. In the United States, Oregon and Washington produce most of the country's peppermint, the leaves of which are processed for the essential oil to produce flavorings mainly for chewing gum and toothpaste. Chemical constituents Peppermint has a high menthol content. The essential oil also contains menthone and carboxyl esters, particularly menthyl acetate. Dried peppermint typically has 0.3–0.4% of volatile oil containing menthol (7–48%), menthone (20–46%), menthyl acetate (3–10%), menthofuran (1–17%), and 1,8-cineol (3–6%). Peppermint oil also contains small amounts of many additional compounds, including limonene, pulegone, caryophyllene, and pinene. Peppermint contains terpenoids and flavonoids such as eriocitrin, hesperidin, and kaempferol 7-O-rutinoside. Oil Peppermint oil has a high concentration of natural pesticides, mainly pulegone (found mainly in M. arvensis var. piperascens (cornmint, field mint, or Japanese mint), and to a lesser extent (6,530 ppm) in Mentha × piperita subsp. notho) and menthone. It is known to repel some pest insects, including mosquitoes, and has uses in organic gardening. It is also widely used to repel rodents. The chemical composition of the essential oil from peppermint (Mentha × piperita L.) was analyzed by GC/FID and GC-MS. The main constituents were menthol (40.7%) and menthone (23.4%). Further components were (±)-menthyl acetate, 1,8-cineole, limonene, beta-pinene, and beta-caryophyllene. Research and health effects Peppermint oil is under preliminary research for its potential as a short-term treatment for irritable bowel syndrome, and has supposed uses in traditional medicine for minor ailments. Peppermint oil and leaves have a cooling effect when used topically for muscle pain, nerve pain, relief from itching, or as a fragrance. High oral doses of peppermint oil (500 mg) can cause mucosal irritation and mimic heartburn. 
Peppermint roots bioaccumulate radium, so the plant may be effective for phytoremediation of radioactively contaminated soil. Culinary and other uses Fresh or dried peppermint leaves are often used alone in peppermint tea or with other herbs in herbal teas (tisanes, infusions). Peppermint is used for flavouring ice cream, candy, fruit preserves, alcoholic beverages, chewing gum, toothpaste, and some shampoos, soaps, and skin care products. Menthol activates cold-sensitive TRPM8 receptors in the skin and mucosal tissues, and is the primary source of the cooling sensation that follows the topical application of peppermint oil. Peppermint oil is also used in construction and plumbing to test the tightness of pipes and disclose leaks by its odor. Safety Medicinal uses of peppermint have not been approved as effective or safe by the US Food and Drug Administration. With the caution that the concentration of the peppermint constituent pulegone should not exceed 1% (140 mg), peppermint preparations are considered safe by the European Medicines Agency when used in topical formulations for adult subjects. Diluted peppermint essential oil is safe for oral intake when only a few drops are used. Although peppermint is commonly available as a herbal supplement, no established, consistent manufacturing standards exist for it, and some peppermint products may be contaminated with toxic metals or other substituted compounds. Skin rashes, irritation, or allergic reactions may result from applying peppermint oil to the skin, and its use on the face or chest of young children may cause side effects if the oil's menthol is inhaled. A common side effect from oral intake of peppermint oil or capsules is heartburn. Oral use of peppermint products may have adverse effects when used with iron supplements, cyclosporine, medicines for heart conditions or high blood pressure, or medicines to decrease stomach acid. Standardization ISO 676:1995—contains information about the nomenclature of varieties and cultivars ISO 5563:1984—a specification for the dried leaves of Mentha piperita Linnaeus Peppermint oil—ISO 856:2006 See also Eucalyptus Peppermint extract References Antiemetics Flora of Europe Herbs Medicinal plants Mentha Plants described in 1753 Hybrid plants
Peppermint
Biology
2,206
47,887,874
https://en.wikipedia.org/wiki/Solid%20nitrogen
Solid nitrogen is a number of solid forms of the element nitrogen, first observed in 1884. Solid nitrogen is mainly the subject of academic research, but low-temperature, low-pressure solid nitrogen is a substantial component of bodies in the outer Solar System and high-temperature, high-pressure solid nitrogen is a powerful explosive, with higher energy density than any other non-nuclear material. Generation Karol Olszewski first observed solid nitrogen in 1884, by first liquefying hydrogen with evaporating liquid nitrogen, and then allowing the liquid hydrogen to freeze the nitrogen. By evaporating vapour from the solid nitrogen, Olszewski also generated the extremely low temperature of , at the time a world record. Modern techniques usually take a similar approach: solid nitrogen is normally made in a laboratory by evaporating liquid nitrogen in a vacuum. The solid produced is porous. Occurrence in nature Solid nitrogen forms a large part of the surface of Pluto (where it mixes with solid carbon monoxide and methane) and the Neptunian moon Triton. On Pluto it was directly observed for the first time in July 2015 by the New Horizons space probe and on Triton it was directly observed by the Voyager 2 space probe in August 1989. Even at the low temperatures of solid nitrogen it is fairly volatile and can sublime to form an atmosphere, or condense back into nitrogen frost. Compared to other materials, solid nitrogen loses cohesion at low pressures and flows in the form of glaciers when amassed. Yet its density is higher than that of water ice, so the forces of buoyancy will naturally transport blocks of water ice towards the surface. Indeed, New Horizons observed "floating" water ice atop nitrogen ice on the surface of Pluto. On Triton, solid nitrogen takes the form of frost crystals and a transparent sheet layer of annealed nitrogen ice, often referred to as a "glaze". Eruptions of nitrogen gas were observed by Voyager 2 to spew from the subpolar regions around Triton's southern polar ice cap. A possible explanation of this observed phenomenon is that the Sun shines through the transparent layer of nitrogen ice, heating the layers beneath. Nitrogen sublimes and eventually erupts through holes in the upper layer, carrying dust along with it and creating dark streaks. Transitions to fluid allotropes Melting At standard atmospheric pressure, the melting point of N2 is . Like most substances, nitrogen melts at a higher temperature with increasing ambient pressure until , when liquid nitrogen is predicted to polymerize. Within that region, melting point increases at a rate of approximately . Above , the melting point drops. Sublimation Nitrogen has a triple point at and ; below this pressure, solid nitrogen sublimes directly to gas. At these low pressures, nitrogen exists in only two known allotropes: α-nitrogen (below ) and β-nitrogen (). Empirical formulae for the vapour pressure as a function of temperature have been fitted to measurements over this range. Solubility in common cryogens Solid nitrogen is slightly soluble in liquid hydrogen. Based on solubility in gaseous hydrogen, Seidal et al. estimated that liquid hydrogen at can dissolve . At the boiling point of hydrogen with excess solid nitrogen, the dissolved molar fraction is 10⁻⁸. At (just below the boiling point of ) and , the maximum molar concentration of dissolved N2 is . Nitrogen and oxygen are miscible in liquid phase but separate in solid phase. 
Thus excess nitrogen (melting at 63 K) or oxygen (melting at 55 K) freezes out first, and the eutectic liquid air freezes at 50 K. Crystal structure Dinitrogen crystals At ambient and moderate pressures, nitrogen forms molecules; at low temperature London dispersion forces suffice to solidify these molecules. α and β Solid nitrogen admits two phases at ambient pressure: α- and β-nitrogen. Below , nitrogen adopts a cubic structure with space group Pa3; the molecules are located on the body diagonals of the unit cell cube. At low temperatures the α-phase can be compressed to before it changes (to γ), and as the temperature rises above , this pressure rises to about . At , the unit cell dimension is , decreasing to under . Above (until it melts), nitrogen adopts a hexagonal close-packed structure, with unit cell ratio . The nitrogen molecules are randomly tipped at an angle of , due to strong quadrupole-quadrupole interaction. At the unit cell has and , but these shrink at and to and . At higher pressures, the ratio displays practically no variation. γ The tetragonal γ form exists at low temperatures below and pressures around . The α/β/γ triple point occurs at and . Formation of γ-dinitrogen exhibits a substantial isotope effect: at , the isotope 15N converts to the γ form at a pressure lower than natural nitrogen. The space group of the γ phase is P42/mnm. At and , the unit cell has lattice constants and . The nitrogen molecules themselves are arranged in the P42/mnm pattern and take the shape of prolate spheroids with long dimension and diameter . The molecules can vibrate up to on the plane, and up to in the direction of the axis. δ, δloc, and ε At high pressure (but ambient temperature), dinitrogen adopts the cubic δ form, with space group Pm3n and eight molecules per unit cell. This phase admits a lattice constant of (at and ). δ- admits two triple points. The (δ-, β-, liquid) triple point occurs somewhere around and . The (δ-, β-, γ-) triple point occurs at and . Within the lattice cells, the molecules themselves have disordered orientation, but increases in pressure cause a phase transition to a slightly different phase, δloc, in which the molecular orientations progressively order, a distinction that is only visible via Raman spectroscopy. At high pressure (roughly ) and low temperature, the dinitrogen molecule orientations fully order into the rhombohedral ε phase, which follows space group R3c. Cell dimensions are , , , , , volume , . Dissolved helium can stabilize ε- at higher temperatures or lower pressures against transforming into δ- (see the section on helium below). ζ Above , ε- transforms to an orthorhombic phase designated by ζ-. There is no measurable discontinuity in the volume per molecule between ε- and ζ-. The structure of ζ- is very similar to that of ε-, with only small differences in the orientation of the molecules. ζ- adopts the monoclinic space group C2/c, and has lattice constants of , , and with sixteen molecules per unit cell. θ and ι Further compression and heating produces two crystalline phases of nitrogen with surprising metastability. A ζ- phase compressed to and then heated to over produces a uniformly translucent structure called θ-nitrogen. The ι phase can be accessed by isobarically heating ε- to at or isothermal decompression of θ- to at . The ι- crystal structure is characterised by a primitive monoclinic lattice with unit-cell dimensions of , , and at and ambient temperature. The space group is P21/c and the unit cell contains 48 molecules arranged into a layered structure. 
Upon pressure release, θ- does not return to ε- until around ; ι- transforms to ε- at around . "Black phosphorus" nitrogen When compressing nitrogen to pressures and temperatures above , nitrogen adopts a crystal structure ("bp-N") identical to that of black phosphorus (orthorhombic, Cmce space group). Like black phosphorus, bp-N is an electrical conductor. The existence of the bp-N structure matches the behavior of heavier pnictogens, and reaffirms the trend that elements at high pressure adopt the same structures as heavier congeners at lower pressures. Oligomer crystals Hexagonal layered polymeric nitrogen Hexagonal layered polymeric nitrogen (HLP-N) was experimentally synthesized at and . It adopts a tetragonal unit cell (P42bc) in which the single-bonded nitrogen atoms form two layers of interconnected hexagons. HLP-N is metastable to at least 66 GPa. Linear forms (N6 and N8) The decomposition of hydrazinium azide at high pressure and low temperature produces a molecular solid made of linear chains of 8 nitrogen atoms (N8). Simulations suggest that N8 is stable at low temperatures and pressures (< 20 GPa); in practice, the reported N8 decomposes to the ε allotrope below 25 GPa, but a residue remains at pressures as low as 3 GPa. Grechner et al. predicted in 2016 that an analogous allotrope with six nitrogens (N6) should exist at ambient conditions. Amorphous and network allotropes Non-molecular forms of solid nitrogen exhibit the highest known non-nuclear energy density. μ When the ζ-N2 phase is compressed at room temperature over , an amorphous form is produced. This is a narrow-gap semiconductor, designated the μ-phase. The μ-phase has been brought to atmospheric pressure by first cooling it to . η η-N is a semiconducting amorphous form of nitrogen. It forms at pressures around and temperatures . In reflected light it appears black, but it does transmit some red or yellow light. In the infrared there is an absorption band around . Under even higher pressure of approximately , the band gap closes and η-nitrogen metallizes. Cubic gauche At pressures higher than and temperatures around , nitrogen forms a network solid, bound by covalent bonds in a cubic-gauche structure, abbreviated as cg-N. The cubic-gauche form has space group I213. Each unit cell has edge length , and contains eight nitrogen atoms. As a network, cg-N consists of fused rings of nitrogen atoms; at each atom, the bond angles are very close to tetrahedral. The position of the lone pairs of electrons is arranged so that their overlap is minimised. The cubic-gauche structure for nitrogen is predicted to have bond lengths of 1.40 Å, bond angles of 114.0° and dihedral angles of −106.8°. The term gauche refers to the odd dihedral angles: if the angle were 0° it would be called cis, and if 180° it would be called trans. The dihedral angle Φ is related to the bond angle θ by sec(Φ) = sec(θ) − 1. The coordinate of one atom in the unit cell at x,x,x also determines the bond angle by cos(θ) = x(x−1/4)/(x² + (x−1/4)²). All bonds in cg-N have the same length: at . This suggests that all bonds have the same order: a single bond carrying . In contrast, the triple bond in gaseous nitrogen carries only , so that relaxation to the gaseous form involves tremendous energy release: more than any other non-nuclear reaction. For this reason, cubic-gauche nitrogen is being investigated for use in explosives and rocket fuel. Estimates of its energy density vary: simulations predict values well above the energy density of HMX. 
cg-N is also very stiff, with a bulk modulus around , similar to diamond. Poly-N Another network solid nitrogen called poly-N and abbreviated pN was predicted in 2006. pN has space group C2/c and cell dimensions a = 5.49 Å, β = 87.68°. Other higher-pressure polymeric forms are predicted in theory, and a metallic form is expected if the pressure is high enough. Others Yet other phases of solid dinitrogen are termed ζ'-N2 and κ-N2. Bulk properties At , the ultimate compressive strength is 0.24 MPa. Strength increases as temperature lowers, becoming 0.54 MPa at 40.6 K. Elastic modulus varies from 161 to 225 MPa over the same range. The thermal conductivity of solid nitrogen is 0.7 W m⁻¹ K⁻¹. Thermal conductivity varies with temperature, and the relation is given by k = 0.1802×T^0.1041 W m⁻¹ K⁻¹. Specific heat is given by 926.91×e^(0.0093T) joules per kilogram per kelvin. Its appearance at 50 K is transparent, while at 20 K it is white. Nitrogen frost has a density of 0.85 g cm⁻³. As a bulk material the crystals are pressed together and the density is near that of water. It is temperature dependent and given by ρ = 0.0134T² − 0.6981T + 1038.1 kg/m³. The volume coefficient of expansion is given by 2×10⁻⁶T² − 0.0002T + 0.006 K⁻¹. The index of refraction at 6328 Å is 1.25 and hardly varies with temperature. The speed of sound in solid nitrogen is 1452 m/s at 20 K and 1222 m/s at 44 K. The longitudinal velocity ranges from 1850 m/s at 5 K to 1700 m/s at 35 K. With temperature rise the nitrogen changes phase and the longitudinal velocity drops rapidly over a small temperature range to below 1600 m/s; it then slowly drops to 1400 m/s near the melting point. The transverse velocity is much lower, ranging from 900 to 800 m/s over the same temperature range. The bulk modulus of s-N2 is 2.16 GPa at 20 K, and 1.47 GPa at 44 K. At temperatures below 30 K solid nitrogen will undergo brittle failure, particularly if strain is applied quickly. Above this temperature the failure mode is ductile failure. Dropping 10 K makes the solid nitrogen 10 times as stiff. Related substances Under pressure nitrogen can form crystalline van der Waals compounds with other molecules. It can form an orthorhombic phase with methane above 5 GPa. With helium, He(N2)11 is formed. N2 crystallizes with water in nitrogen clathrate, and in a mixture with oxygen O2 and water in air clathrate. Helium Solid nitrogen can dissolve 2 mole % helium under pressure in its disordered phases such as the γ-phase. Under higher pressure, at 9 mol% helium, He can react with ε-nitrogen to form a hexagonal birefringent crystalline van der Waals compound. The unit cell contains 22 nitrogen atoms and 2 helium atoms. It has a volume of 580 Å³ for a pressure of 11 GPa, decreasing to 515 Å³ at 14 GPa. It resembles the ε-phase. At 14.5 GPa and 295 K the unit cell has space group P63/m and a = 7.936 Å, c = 9.360 Å. At 28 GPa a transition happens in which the orientation of N2 molecules becomes more ordered. When the pressure on He(N2)11 exceeds 135 GPa the substance changes from clear to black, and takes on an amorphous form similar to η-N2. Methane Solid nitrogen can crystallise with some solid methane included. At 55 K the molar percentage can range up to 16.35% CH4, and at 40 K only 5%. In the complementary situation, solid methane can include some nitrogen in its crystals, up to 17.31% nitrogen. As the temperature drops, less methane can dissolve in solid nitrogen, and in α-N2 there is a major drop in methane solubility. 
These mixtures are prevalent in outer Solar System objects such as Pluto that have both nitrogen and methane on their surfaces. At room temperature there is a clathrate of methane and nitrogen in 1:1 ratio formed at pressures over 5.6 GPa. Carbon monoxide The carbon monoxide molecule (CO) is very similar to dinitrogen in size, and it can mix in all proportions with solid nitrogen without changing crystal structure. Carbon monoxide is also found on the surfaces of Pluto and Triton at levels below 1%. Variations in the infrared linewidth of carbon monoxide absorption can reveal the concentration. Noble gases Neon or xenon atoms can also be included in solid nitrogen in the β and δ phases. Inclusion of neon pushes the β−δ phase boundary to higher pressures. Argon is also very miscible in solid nitrogen. For compositions of argon and nitrogen with 60% to 70% nitrogen, the hexagonal form remains stable to 0 K. A van der Waals compound of xenon and nitrogen exists above 5.3 GPa. A van der Waals compound of neon and nitrogen was demonstrated using Raman spectroscopy. The compound has formula (N2)6Ne7. It has a hexagonal structure, with a = 14.400, c = 8.0940, at a pressure of 8 GPa. A van der Waals compound with argon is not known. Hydrogen With dideuterium, a clathrate (N2)12D2 exists around 70 GPa. Oxygen Solid nitrogen can take up to one-fifth substitution by oxygen (O2) and still keep the same crystal structure. δ-N2 can be substituted by up to 95% O2 and retain the same structure. Solid O2 can only have a solid solution of 5% or less of N2. Use Solid nitrogen is used in a slush mixture with liquid nitrogen in order to cool faster than with liquid nitrogen alone, useful for applications such as sperm cryopreservation. The semi-solid mixture can also be called slush nitrogen or SN2. Solid nitrogen is used as a matrix on which to store and study reactive chemical species, such as free radicals or isolated atoms. One use is to study dinitrogen complexes of metals in isolation from other molecules. Reactions When solid nitrogen is irradiated by high-speed protons or electrons, several reactive radicals are formed, including atomic nitrogen (N), nitrogen cations (N+), dinitrogen cation (N2+), trinitrogen radicals (N3 and N3+), and azide (N3−). Notes References External links Jessica Orwig: Freezing Liquid Nitrogen Creates Something Amazing. On: BusinessInsider. Jan 28, 2015 - Videos of nitrogen boiling, freezing, and spontaneously changing crystal form. Xiaoli Wang, J. Li, N. Xu et al. (2015): Layered polymeric nitrogen in RbN3 at high pressures. In: Scientific Reports volume 5, Article number: 16677. doi:10.1038/srep16677. Nitrogen Allotropes of nitrogen Pnictogens Diatomic nonmetals Ice
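The empirical fits quoted under Bulk properties above can be evaluated numerically; a minimal sketch (the 20–60 K evaluation range is an assumption for illustration, roughly the cryogenic range discussed in the text):

```python
import math

# Empirical fits for solid nitrogen quoted in the Bulk properties section.

def thermal_conductivity(T):   # W m^-1 K^-1
    return 0.1802 * T**0.1041

def specific_heat(T):          # J kg^-1 K^-1
    return 926.91 * math.exp(0.0093 * T)

def density(T):                # kg m^-3
    return 0.0134 * T**2 - 0.6981 * T + 1038.1

for T in (20.0, 40.0, 60.0):
    print(f"T = {T:4.1f} K: k = {thermal_conductivity(T):.3f} W/m/K, "
          f"c_p = {specific_heat(T):6.1f} J/kg/K, rho = {density(T):6.1f} kg/m^3")
```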
Solid nitrogen
Chemistry,Materials_science
3,860
49,303,418
https://en.wikipedia.org/wiki/C15H14N2O2
{{DISPLAYTITLE:C15H14N2O2}} The molecular formula C15H14N2O2 may refer to: Licarbazepine Nepafenac Pyrrolidonyl-β-naphthylamide Molecular formulas
C15H14N2O2
Physics,Chemistry
57
15,663,283
https://en.wikipedia.org/wiki/Nonlinear%20eigenproblem
In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form \(M(\lambda)\,x = 0,\) where \(x\) is a vector, and \(M\) is a matrix-valued function of the number \(\lambda\). The number \(\lambda\) is known as the (nonlinear) eigenvalue, the vector \(x\) as the (nonlinear) eigenvector, and \((\lambda, x)\) as the eigenpair. The matrix \(M(\lambda)\) is singular at an eigenvalue \(\lambda\). Definition In the discipline of numerical linear algebra the following definition is typically used. Let \(\Omega \subseteq \mathbb{C}\), and let \(M : \Omega \to \mathbb{C}^{n \times n}\) be a function that maps scalars to matrices. A scalar \(\lambda \in \mathbb{C}\) is called an eigenvalue, and a nonzero vector \(x \in \mathbb{C}^n\) is called a right eigenvector, if \(M(\lambda)x = 0\). Moreover, a nonzero vector \(y \in \mathbb{C}^n\) is called a left eigenvector if \(y^H M(\lambda) = 0^H\), where the superscript \(^H\) denotes the Hermitian transpose. The definition of the eigenvalue is equivalent to \(\det(M(\lambda)) = 0\), where \(\det\) denotes the determinant. The function \(M\) is usually required to be a holomorphic function of \(\lambda\) (in some domain \(\Omega\)). In general, \(M(\lambda)\) could be a linear map, but most commonly it is a finite-dimensional, usually square, matrix. Definition: The problem is said to be regular if there exists a \(z \in \Omega\) such that \(\det(M(z)) \neq 0\). Otherwise it is said to be singular. Definition: An eigenvalue \(\lambda_0\) is said to have algebraic multiplicity \(k\) if \(k\) is the smallest integer such that the \(k\)th derivative of \(\det(M(\lambda))\) with respect to \(\lambda\), evaluated in \(\lambda_0\), is nonzero. In formulas, \(\left.\frac{d^k \det(M(\lambda))}{d\lambda^k}\right|_{\lambda=\lambda_0} \neq 0\) but \(\left.\frac{d^\ell \det(M(\lambda))}{d\lambda^\ell}\right|_{\lambda=\lambda_0} = 0\) for \(\ell = 0, 1, \dots, k-1\). Definition: The geometric multiplicity of an eigenvalue \(\lambda_0\) is the dimension of the nullspace of \(M(\lambda_0)\). Special cases The following examples are special cases of the nonlinear eigenproblem. The (ordinary) eigenvalue problem: \(M(\lambda) = A - \lambda I.\) The generalized eigenvalue problem: \(M(\lambda) = A - \lambda B.\) The quadratic eigenvalue problem: \(M(\lambda) = A_0 + \lambda A_1 + \lambda^2 A_2.\) The polynomial eigenvalue problem: \(M(\lambda) = \sum_{i=0}^{m} \lambda^i A_i.\) The rational eigenvalue problem: \(M(\lambda) = \sum_{i=0}^{m_1} \lambda^i A_i + \sum_{i=1}^{m_2} r_i(\lambda) B_i,\) where \(r_i\) are rational functions. The delay eigenvalue problem: \(M(\lambda) = -\lambda I + A_0 + \sum_{i=1}^{m} A_i e^{-\tau_i \lambda},\) where \(\tau_1, \dots, \tau_m\) are given scalars, known as delays. Jordan chains Definition: Let \((\lambda_0, x_0)\) be an eigenpair. A tuple of vectors \((x_0, x_1, \dots, x_{r-1})\) is called a Jordan chain if \(\sum_{k=0}^{\ell} \frac{1}{k!}\, M^{(k)}(\lambda_0)\, x_{\ell-k} = 0\) for \(\ell = 0, 1, \dots, r-1\), where \(M^{(k)}(\lambda_0)\) denotes the \(k\)th derivative of \(M\) with respect to \(\lambda\) and evaluated in \(\lambda = \lambda_0\). The vectors \(x_1, \dots, x_{r-1}\) are called generalized eigenvectors, \(r\) is called the length of the Jordan chain, and the maximal length of a Jordan chain starting with \(x_0\) is called the rank of \(x_0\). Theorem: A tuple of vectors \((x_0, x_1, \dots, x_{r-1})\) is a Jordan chain if and only if the function \(M(\lambda)\,\chi_\ell(\lambda)\) has a root in \(\lambda = \lambda_0\) and the root is of multiplicity at least \(\ell + 1\), for \(\ell = 0, 1, \dots, r-1\), where the vector-valued function \(\chi_\ell(\lambda)\) is defined as \(\chi_\ell(\lambda) = \sum_{k=0}^{\ell} (\lambda - \lambda_0)^k\, x_k.\) 
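As an illustration of the quadratic special case listed above, a quadratic eigenproblem can be reduced to an ordinary generalized eigenproblem by companion linearization. A minimal sketch using NumPy/SciPy, with random matrices standing in for a real application:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 4
# Quadratic eigenproblem: M(lam) x = (A0 + lam*A1 + lam^2*A2) x = 0
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization: solve L1 z = lam * L2 z with z = [x; lam*x].
# Second block row reads -A0 x - lam A1 x = lam^2 A2 x, i.e. M(lam) x = 0.
I = np.eye(n)
Z = np.zeros((n, n))
L1 = np.block([[Z, I], [-A0, -A1]])
L2 = np.block([[I, Z], [Z, A2]])

eigvals, eigvecs = eig(L1, L2)

# Verify: each finite eigenvalue makes M(lam) singular
lam = eigvals[0]
x = eigvecs[:n, 0]                         # first block of z recovers x
residual = (A0 + lam * A1 + lam**2 * A2) @ x
print(abs(lam), np.linalg.norm(residual))  # residual ~ 0
```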
Mathematical software The eigenvalue solver package SLEPc contains C-implementations of many numerical methods for nonlinear eigenvalue problems. The NLEVP collection of nonlinear eigenvalue problems is a MATLAB package containing many nonlinear eigenvalue problems with various properties. The FEAST eigenvalue solver is a software package for standard eigenvalue problems as well as nonlinear eigenvalue problems, designed from the density-matrix representation in quantum mechanics combined with contour integration techniques. The MATLAB toolbox NLEIGS contains an implementation of fully rational Krylov with a dynamically constructed rational interpolant. The MATLAB toolbox CORK contains an implementation of the compact rational Krylov algorithm that exploits the Kronecker structure of the linearization pencils. The MATLAB toolbox AAA-EIGS contains an implementation of CORK with rational approximation by set-valued AAA. The MATLAB toolbox RKToolbox (Rational Krylov Toolbox) contains implementations of the rational Krylov method for nonlinear eigenvalue problems as well as features for rational approximation. The Julia package NEP-PACK contains many implementations of various numerical methods for nonlinear eigenvalue problems, as well as many benchmark problems. The review paper of Güttel & Tisseur contains MATLAB code snippets implementing basic Newton-type methods and contour integration methods for nonlinear eigenproblems. Eigenvector nonlinearity Eigenvector nonlinearities is a related, but different, form of nonlinearity that is sometimes studied. In this case the function \(M\) maps vectors to matrices, or sometimes Hermitian matrices to Hermitian matrices. References Further reading Françoise Tisseur and Karl Meerbergen, "The quadratic eigenvalue problem," SIAM Review 43 (2), 235–286 (2001) (link). Gene H. Golub and Henk A. van der Vorst, "Eigenvalue computation in the 20th century," Journal of Computational and Applied Mathematics 123, 35–65 (2000). Philippe Guillaume, "Nonlinear eigenproblems," SIAM Journal on Matrix Analysis and Applications 20 (3), 575–595 (1999) (link). Cedric Effenberger, "Robust solution methods for nonlinear eigenvalue problems", PhD thesis EPFL (2013) (link) Roel Van Beeumen, "Rational Krylov methods for nonlinear eigenvalue problems", PhD thesis KU Leuven (2015) (link) Linear algebra
Nonlinear eigenproblem
Mathematics
1,073
3,645,679
https://en.wikipedia.org/wiki/Axial%20piston%20pump
An axial piston pump is a positive displacement pump that has a number of pistons in a circular array within a cylinder block. It can be used as a stand-alone pump, a hydraulic motor or an automotive air conditioning compressor. Description An axial piston pump has a number of pistons (usually an odd number) arranged in a circular array within a housing which is commonly referred to as a cylinder block, rotor or barrel. This cylinder block is driven to rotate about its axis of symmetry by an integral shaft that is, more or less, aligned with the pumping pistons (usually parallel but not necessarily). Mating surfaces. One end of the cylinder block is convex and wears against a mating surface on a stationary valve plate. The inlet and outlet fluid of the pump pass through different parts of the sliding interface between the cylinder block and valve plate. The valve plate has two semi-circular ports that allow inlet of the operating fluid and exhaust of the outlet fluid respectively. Protruding pistons. The pumping pistons protrude from the opposite end of the cylinder block. There are numerous configurations used for the exposed ends of the pistons, but in all cases they bear against a cam. In variable displacement units, the cam is movable and commonly referred to as a swashplate, yoke or hanger. For conceptual purposes, the cam can be represented by a plane, the orientation of which, in combination with shaft rotation, provides the cam action that leads to piston reciprocation and thus pumping. The angle between a vector normal to the cam plane and the cylinder block axis of rotation, called the cam angle, is one variable that determines the displacement of the pump, or the amount of fluid pumped per shaft revolution. Variable displacement units have the ability to vary the cam angle during operation whereas fixed displacement units do not. Reciprocating pistons. As the cylinder block rotates, the exposed ends of the pistons are constrained to follow the surface of the cam plane. Since the cam plane is at an angle to the axis of rotation, the pistons must reciprocate axially as they precess about the cylinder block axis. The axial motion of the pistons is sinusoidal. During the rising portion of the piston's reciprocation cycle, the piston moves toward the valve plate. Also, during this time, the fluid trapped between the buried end of the piston and the valve plate is vented to the pump's discharge port through one of the valve plate's semi-circular ports - the discharge port. As the piston moves toward the valve plate, fluid is pushed or displaced through the discharge port of the valve plate. Effect of precession. When the piston is at the top of the reciprocation cycle (commonly referred to as top-dead-center or just TDC), the connection between the trapped fluid chamber and the pump's discharge port is closed. Shortly thereafter, that same chamber becomes open to the pump's inlet port. As the piston continues to precess about the cylinder block axis, it moves away from the valve plate, thereby increasing the volume of the trapped chamber. As this occurs, fluid enters the chamber from the pump's inlet to fill the void. This process continues until the piston reaches the bottom of the reciprocation cycle - commonly referred to as bottom-dead-center or BDC. At BDC, the connection between the pumping chamber and inlet port is closed. Shortly thereafter, the chamber becomes open to the discharge port again and the pumping cycle starts over. 
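The cam-angle-to-displacement relationship described above can be made concrete with a small calculation. This is an illustrative sketch with made-up dimensions (bore, pitch diameter, shaft speed), not data for any real pump:

```python
import math

def pump_displacement_cc(n_pistons, bore_mm, pitch_dia_mm, cam_angle_deg):
    """Displacement per shaft revolution for an axial piston pump.

    Each piston sweeps a stroke of (pitch circle diameter) * tan(cam angle),
    so displacement = number of pistons * piston area * stroke.
    """
    stroke_mm = pitch_dia_mm * math.tan(math.radians(cam_angle_deg))
    area_mm2 = math.pi * (bore_mm / 2.0) ** 2
    return n_pistons * area_mm2 * stroke_mm / 1000.0  # mm^3 -> cm^3 (cc)

# Illustrative numbers: 9 pistons, 12 mm bore, 50 mm pitch circle diameter
for angle in (0.0, 5.0, 10.0, 15.0):
    d = pump_displacement_cc(9, 12.0, 50.0, angle)
    flow_lpm = d * 3000.0 / 1000.0  # at 3000 rpm, cc/rev -> L/min
    print(f"cam angle {angle:4.1f} deg: {d:6.2f} cc/rev, {flow_lpm:5.1f} L/min")
```

Note that a cam angle of zero gives zero displacement, matching the variable-displacement behaviour described next.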
Variable displacement. In a variable displacement pump, if the vector normal to the cam plane (swash plate) is set parallel to the axis of rotation, there is no movement of the pistons in their cylinders. Thus there is no output. Movement of the swash plate controls pump output from zero to maximum. There are two kinds of variable-displacement axial piston pumps: the direct displacement control pump and the servo control pump. A direct displacement control uses a mechanical lever attached to the swashplate of the axial piston pump. Higher system pressures require more force to move that lever, making direct displacement control suitable only for light or medium duty pumps; heavy duty pumps require servo control. A direct displacement control pump contains linkages, springs and in some cases magnets rather than a shaft to a motor located outside of the pump (thereby reducing the number of moving parts), keeping parts protected and lubricated and reducing the resistance against the flow of liquid. Pressure. In a typical pressure-compensated pump, the swash plate angle is adjusted through the action of a valve which uses pressure feedback so that the instantaneous pump output flow is exactly enough to maintain a designated pressure. If the load flow increases, pressure will momentarily decrease, but the pressure-compensation valve will sense the decrease and then increase the swash plate angle to increase pump output flow so that the desired pressure is restored. In reality most systems use pressure as a control for this type of pump. When the operating pressure reaches, say, 200 bar (20 MPa or 2900 psi), the swash plate is driven towards zero angle (piston stroke nearly zero), and the inherent leaks in the system allow the pump to stabilize at the delivery volume that maintains the set pressure. As demand increases the swash plate is moved to a greater angle, piston stroke increases and the volume of fluid increases; if the demand slackens the pressure will rise, and the pumped volume diminishes as the pressure rises. At maximum system pressure the output is once again almost zero. If the fluid demand increases beyond the capacity of the pump to deliver, the system pressure will drop to near zero. The swash plate angle will remain at the maximum allowed, and the pistons will operate at full stroke. This continues until system flow-demand eases and the pump's capacity is greater than demand. As the pressure rises, the swash-plate angle modulates so as not to exceed the maximum pressure while meeting the flow demand. Design difficulties Designers have a number of problems to overcome in designing axial piston pumps. One is manufacturing a pump with the fine tolerances necessary for efficient operation. The mating faces between the rotary piston-cylinder assembly and the stationary pump body have to be almost a perfect seal while the rotary part turns at perhaps 3000 rpm. The pistons are usually less than half an inch (13 mm) in diameter with similar stroke lengths. Keeping the wall-to-piston seal tight means that very small clearances are involved and that materials have to be closely matched for similar coefficient of expansion. The pistons have to be drawn outwards in their cylinder by some means. On small pumps this can be done by means of a spring inside the cylinder that forces the piston up the cylinder. Inlet fluid pressure can also be arranged so that the fluid pushes the pistons up the cylinder. 
Often a vane pump is located on the same drive shaft to provide this pressure, and it also allows the pump assembly to draw fluid against some suction head from the reservoir, which is not an attribute of the unaided axial piston pump. Another method of drawing pistons up the cylinder is to attach the cylinder heads to the surface of the swash plate. In that way the piston stroke is totally mechanical. However, the designer's problem of lubricating the swash plate face (a sliding contact) is made even more difficult. Internal lubrication of the pump is achieved by use of the operating fluid, normally called hydraulic fluid. Most hydraulic systems have a maximum operating temperature, limited by the fluid, of about 120 °C (250 °F), so that using that fluid as a lubricant brings its own problems. In this type of pump the leakage from the face between the cylinder housing and the body block is used to cool and lubricate the exterior of the rotating parts. The leakage is then carried off to the reservoir or to the inlet side of the pump again. Hydraulic fluid that has been used is always cooled and passed through micrometre-sized filters before recirculating through the pump. Uses Despite the problems indicated above, this type of pump can contain most of the necessary circuit controls integrally (the swash-plate angle control) to regulate flow and pressure, be very reliable and allow the rest of the hydraulic system to be very simple and inexpensive. Axial piston pumps are used to power the hydraulic systems of jet aircraft, being gear-driven off of the turbine engine's main shaft. The system used on the F-14 employed a 9-piston pump that produced a standard system operating pressure of 3000 psi and a maximum flow of 84 gallons per minute. Automotive air conditioning compressors for cabin cooling are nowadays mostly based around the axial piston pump design (others are based on scroll compressor or rotary vane pump designs instead) in order to contain their weight and space requirement in the vehicle's engine bay and reduce vibrations. They are available in fixed displacement and dynamically adjusted variable displacement variants, and, depending upon the compressor's design, the rotating swashplate either directly drives a set of pistons mated to its edges through hemispherical metal shoes, or drives a nutating plate on which a set of pistons are mounted by means of rods. They are also used in some pressure washers; for example, Kärcher has several models powered by axial piston pumps with three pistons. Axial reciprocating motors are also used to power many machines. They operate on the same principle as described above, except that the circulating fluid is provided under considerable pressure and the piston housing is made to rotate and provide shaft power to another machine. A common use of an axial reciprocating motor is to power small earthmoving plant such as skid loader machines. Another use is to drive the screws of torpedoes. History The first example can be found on page 213 (or page 89 per the book's pagination) in Le diverse et artificiose machine by Agostino Ramelli. References External links www.rotarypower.com, Manufacturer of Axial Piston Pumps Tecnapol, Axial Piston Pumps repair/rebuild Engine technology Pumps
Axial piston pump
Physics,Chemistry,Technology
2,071
1,597,924
https://en.wikipedia.org/wiki/Linear%20predictive%20analysis
Linear predictive analysis is a simple form of first-order extrapolation: if a value has been changing at a given rate, then it will probably continue to change at approximately the same rate, at least in the short term. This is equivalent to fitting a tangent to the graph and extending the line. One use of this is in linear predictive coding, which can be used as a method of reducing the amount of data needed to approximately encode a series. Suppose it is desired to store or transmit a series of values representing voice. The value at each sampling point could be transmitted (if 256 values are possible then 8 bits of data for each point are required; if a precision of 65536 levels is desired then 16 bits per sample are required). If it is known that the value rarely changes more than +/-15 values between successive samples (-15 to +15 is 31 steps, counting the zero) then the change can be encoded in 5 bits. As long as the change is less than +/-15 values in successive steps, the decoded values will exactly reproduce the desired sequence. When the rate of change exceeds +/-15, the reconstructed values will temporarily differ from the desired values; provided fast changes that exceed the limit are rare, it may be acceptable to use the approximation in order to attain the improved coding density. See also Linear prediction References Interpolation Asymptotic analysis
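As a concrete illustration of the 5-bit delta coding just described, here is a minimal Python sketch; the sample values and function names are invented for the illustration.

def encode(samples):
    """Return a list of clamped deltas, each of which fits in 5 bits."""
    deltas, prev = [], 0
    for s in samples:
        d = max(-15, min(15, s - prev))  # clamp to what 5 bits can carry
        deltas.append(d)
        prev += d                        # track what the decoder will hold
    return deltas

def decode(deltas):
    """Rebuild the (approximate) sample sequence from the deltas."""
    out, value = [], 0
    for d in deltas:
        value += d
        out.append(value)
    return out

samples = [0, 5, 12, 14, 40, 42, 41]     # the jump of +26 exceeds the limit
print(decode(encode(samples)))           # -> [0, 5, 12, 14, 29, 42, 41]
# The decoded sequence deviates only at the over-limit step and then recovers,
# exactly the temporary difference described above.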
Linear predictive analysis
Mathematics
282
4,171,640
https://en.wikipedia.org/wiki/A%20Man%20on%20the%20Moon
A Man on the Moon: The Voyages of the Apollo Astronauts is a 1994 book by Andrew Chaikin. It describes the 1968–1972 voyages of the Apollo program astronauts in detail, from Apollo 8 to 17. "A decade in the making, this book is based on hundreds of hours of in-depth interviews with each of the twenty-four moon voyagers, as well as those who contributed their brain power, training and teamwork on Earth." This book formed the basis of the 1998 television miniseries From the Earth to the Moon. It was released in paperback in 2007 by Penguin Books. See also First Man: The Life of Neil A. Armstrong Carrying the Fire, the autobiography of the Gemini 10 and Apollo 11 astronaut Michael Collins One Giant Leap, a 2019 book Moon Shot: The Inside Story of America's Race to the Moon References 1994 non-fiction books Spaceflight books Books about the Apollo program
A Man on the Moon
Astronomy
185
71,060,229
https://en.wikipedia.org/wiki/The%20Scattered%20Nation
The Scattered Nation is a controversial speech by the U.S. Senator, Confederate officer, and slaveowner Zebulon Baird Vance, written sometime between 1868 and 1870. The speech praises the accomplishments of Jewish people, crediting Jews for much of what Vance considered great in Western civilization. Particular praise is reserved for white Jews of Central and Western European descent, while Black people and Jews of color are disparaged as culturally and racially inferior. Vance was a prominent defender of Jews during a time when antisemitism was common in the American South. While positively remembered for decades by the North Carolina Jewish community, Vance's reputation has declined in recent years due to his racism, support for slavery and the Confederacy, and promotion of Jewish stereotypes. About "The Scattered Nation" was first written between 1868 and 1870. The text of the speech was printed in 1904 and later reprinted in 1916. During his 20 years as a Senator, Vance delivered the speech hundreds of times across the United States, often in sold-out lyceums and lecture halls. In the speech, Vance credits "the race of Shem" for originating much of what he considered the greatest accomplishments in Western civilization, including monotheism. Advancing a supersessionist view, Vance states that the "Christian is simply the successor of the Jew--the glory of the one is likewise the glory of the other." The speech claims that "no people can claim such an unmixed purity of blood" as Jewish people and that "certainly none can establish such antiquity of origin, such unbroken generations of descent." Proto-Zionist in outlook, the speech asserts that Palestine was the "central chamber of God's administration." Vance disparages African, Asian, and MENA Jews living in "Africa, Arabia, India, China, Turkestan and Bokhara" as the "lowest of the Jewish people in wealth, intelligence and religion", but insisted that non-European Jews are still "superior to their Gentile neighbors in each." Orthodox Jews and other religious Jews in Eastern Europe, North Africa, and the Middle East, including Hasidic Jews and Karaites, are characterized as "ignorant", zealous, and underdeveloped. Vance praises Jews of Central and Western European descent as "by far the most intelligent and civilized of their race." Reform Judaism is praised for eliminating "Oriental mysticism" from the practice of Judaism, but criticized for dispensing with "much of the Old Testament itself". In Vance's view, Reform Jews have thus become "simply Unitarians or Deists." The text of the speech frequently characterizes Jews as wealthy due to involvement in commerce, claiming that Jews are "the leading merchants, bankers, and financiers of the world." Vance claims that the Rothschild family wielded disproportionate political and economic power in Europe, arguing that this is a form of Jewish "genius which showed itself capable of controlling the financial affairs of the world." Taking a jab at Northerners, Vance states that "if a Yankee and a Jew were to ‘lock horns’ in a regular encounter of commercial wits", the Yankee would "in two hours time whittle the smartest Jew in New York out of his homestead in the Abrahamic covenant." In contrast to his praise of Jewish people, Vance disparages Black Africans. 
He claims that "wars have been waged and constitutions violated for the benefit of the African negro, the descendants of barbarian tribes who for four thousand years have contributed nothing to, though in close contact with, the civilization of mankind..." One passage compares the torso size of Jews and Africans, but does not explain the significance of these supposed racial differences. Poor and working-class Jews are not described favorably by the speech, which lavishes praise on wealthy Jews. Derogatory references are made against "low-bred" Jews. North Carolina Historic Sites notes that the speech is "an argument against antisemitism toward middle-class Jews." Legacy Due to Vance's condemnation of antisemitism in the American South, he was largely remembered positively for many years by North Carolina Jews and by the Asheville Jewish community in particular. A Confederate monument honoring Vance was erected by the City of Asheville in 1897. Following Vance's death, the local chapter of B'nai B'rith and the United Daughters of the Confederacy held an annual ceremony at the Vance Monument. The ceremony was held every year for decades, continuing until the early 2000s. The Jewish-American philanthropist and pro-Confederate activist Nathan Straus, co-owner of the Macy's department store chain, funded the construction of a wrought iron fence around the monument as well as an annual wreath-laying to honor Vance. The speech was often republished by Jewish publishing houses. Maurice A. Weinstein's 1995 book Zebulon B. Vance and “The Scattered Nation” (Wildacres Press, Charlotte) helped to keep Vance's memory alive within North Carolina's Jewish community. The Vance Monument was removed by the City of Asheville in May 2021, with the support of the local Jewish community. In the 21st century, Vance is no longer held in esteem by the North Carolina Jewish community. According to a 2021 statement released by two Jewish organizations in Asheville, the Jewish Community Relations Council of the Greater Asheville Area and Carolina Jews for Justice/West, Vance "classifies Jews in a hierarchy of worthiness according to their geographic origins. Not surprisingly, white, Ashkenazi Jews from Western and Central Europe rank highest." Andrea Cooper, writing for The Forward, notes that the speech contains "hoary stereotypes" about Jews and finance. Asheville Mayor Esther Manheimer, a Jewish woman, has stated that Vance's views no longer represent the views of the Asheville community in general or the views of the white population of Asheville specifically. North Carolina historian Kevan Frazier notes that at the time the speech was written, only about 500 Jewish people lived in North Carolina, so the speech was not motivated by political gain. Frazier credits Vance for his strong opposition to antisemitism, but criticizes the speech's anti-Black foundational arguments. Vance's biographer Selig Adler also noted that "there were somewhat less than five hundred Jews in North Carolina at the time Vance wrote the speech, a fact that discounts all political motives." 
See also Model minority Philosemitism Stereotypes of Jews References External links Full text of speech, Archive.org African-American history of North Carolina African American–Jewish relations Anti-Arabism in the United States Anti-Asian sentiment in the United States Anti-black racism in North Carolina Anti-Mizrahi sentiment Antisemitism in the United States Christian Zionism in the United States Class discrimination History of racism in North Carolina Jews and Judaism in North Carolina Lost Cause of the Confederacy Opposition to antisemitism in the United States Orientalism Philosemitism Scientific racism Southern United States literature 19th-century speeches Supersessionism Western North Carolina White American culture in North Carolina White supremacy in the United States
The Scattered Nation
Biology
1,460
20,809,419
https://en.wikipedia.org/wiki/Edgewood%20Chemical%20Biological%20Center
The United States Army Combat Capabilities Development Command Chemical Biological Center (DEVCOM CBC) is the United States Department of Defense's principal research and development resource for non-medical chemical and biological defense (CB). As a critical national asset in the CB defense community, CBC supports all phases of the acquisition life-cycle ― from basic and applied research through technology development, engineering design, equipment evaluation, product support, sustainment, field operations and demilitarization ― to address its customers’ unique requirements. Its mission is to provide innovative chemical, biological, radiological, nuclear and explosive (CBRNE) defense capabilities to enable the joint warfighters' dominance on the battlefield and interagency defense of the homeland. The DEVCOM Chemical Biological Center has more than 1,300 full-time employees located at three different sites in the United States: Edgewood Area of Aberdeen Proving Ground, Maryland; Pine Bluff Arsenal, Arkansas; and Rock Island Arsenal, Illinois. It has 1.22 million square feet of laboratory and test chamber space across its four research campuses. DEVCOM also possesses a chemical munitions field operations capability. It consists of field-deployable scientists, engineers, technicians and explosives specialists with chemical/biological agent surety expertise plus unique capabilities for in situ destruction of agents. Finally, DEVCOM CBC develops smoke and obscurants technology, including synthesis, transport and dispersion. DEVCOM CBC actively initiates agreements with industry to collaborate on applied research, product development and testing. It offers its partner companies the benefits of its intellectual property portfolio, science and engineering expertise, and its one-of-a-kind chemical biological research and testing infrastructure. Mechanisms for collaborative research, development and commercial production include Cooperative Research & Development Agreements (CRADAs), Letters of Intent (LOIs), Material Transfer Agreements (MTAs), Patent License Agreements (PLAs), Technology Support Agreements (TSAs), plus Memos of Agreement and Memos of Understanding. History As an organizational grandchild of the original Edgewood Arsenal, DEVCOM CBC traces its lineage back over a century to 1917, when President Woodrow Wilson established the site as the location for the first chemical shell filling plant in the United States. Since that time, the center has expanded its mission to include biological materials. DEVCOM CBC's name has changed many times over the past century. 
The name changes were:
1918 – Originally designated the Edgewood Arsenal by the War Department
1942 – Renamed Chemical Warfare Center at Edgewood Arsenal
1963 – Name returns to the Edgewood Arsenal (both command and installation)
1977 – Edgewood Arsenal (command) disestablished, Chemical Systems Laboratory (CSL) established under Armament Research and Development Command (ARRADCOM)
1983 – Redesignated Chemical Research and Development Center (CRDC) under Armament, Chemical, and Munitions Command (AMCCOM)
1986 – CRDC redesignated Chemical Research, Development and Engineering Center (CRDEC)
1992 – CRDEC reorganized to form Edgewood Research, Development, and Engineering Center (ERDEC) under Chemical and Biological Defense Agency (CBDA)
1998 – ERDEC redesignated Edgewood Chemical Biological Center (ECBC) under Soldier and Biological Chemical Command (SBCCOM)
2019 – Combat Capabilities Development Command Chemical Biological Center
2021 – DEVCOM Chemical Biological Center
References Sources Chemical warfare facilities Biological warfare facilities United States biological weapons program
Edgewood Chemical Biological Center
Chemistry,Biology
673
71,981,306
https://en.wikipedia.org/wiki/List%20of%20Art%20Deco%20architecture%20in%20Georgia%20%28U.S.%20state%29
This is a list of buildings that are examples of the Art Deco architectural style in Georgia, United States. Atlanta 7 Stages Theatre (former Little 5 Points Theatre), Atlanta, 1940 Atlanta City Hall, Atlanta, 1930 Cheshire Square Shopping Center, Atlanta, 1967 Empire Manufacturing Company Building, Atlanta, 1939 Evans Cucich House, Atlanta, 1935 Forsyth Walton Building, Atlanta, 1936 Freeman Ford Building, Atlanta, 1930 GLG Grand, Atlanta, 1992 Healey Building, Atlanta, 1920 Lerner Shops, Atlanta Majestic Diner, Atlanta, 1929 Martin Luther King, Jr. Federal Building, Atlanta, 1933 Municipal Auditorium, Atlanta, 1909 Nabisco Plant, Atlanta, 1955 National NuGrape Company, Atlanta, 1937 Olympia Building, Atlanta, 1936 Plaza Theater, Atlanta, 1939 Regenstein's, Atlanta Rhodes Haverty Building, Atlanta Southern Bell Telephone Company Building, Atlanta, 1929 Southern Dairies, Atlanta, 1939 Telephone Factory Lofts, Atlanta, 1938 Ten Park Place, Atlanta, 1930 Troy-Peerless Laundry Building, Atlanta, 1929 United States Post Office, Federal Annex, Atlanta, 1933 Variety Playhouse, Atlanta, 1940 W. W. Orr Building, Atlanta, 1930 William–Oliver Building, Atlanta, 1930 Gainesville Dixie Hunt Hotel, Gainesville, 1937 Logan Building, Gainesville, 1929 Old Hall County Courthouse, Gainesville, 1937 Savannah Globe Shoe Company, Savannah, 1929 Karpf Building, Savannah The Savannah Theatre, Savannah, 1958 Tifton Jenny's Fashion, Tifton Commercial Historic District, Tifton Lockeby Building, Tifton Commercial Historic District, Tifton, 1937 Tift Theater, Tifton, 1937 Other cities Albany Insurance Mart, Albany, 1950s Bacon Theatre, Alma, 1946 Blackshear Bank Building, Blackshear, 1930s Blair Rutland Building, Decatur, 1925 Campus Theatre, Milledgeville Coffee County Courthouse, Downtown Douglas Historic District, Douglas, 1940 Colquitt Theater, Colquitt Town Square Historic District, Colquitt Colquitt Theater, Moultrie Commercial Historic District, Moultrie, 1941 Dosta Theater, Valdosta, 1941 Dublin Theatre, Dublin, 1934 Earl Smith Strand Theater, Marietta, 1935 Early County Jail, Blakely Court Square Historic District, Blakely, 1940 Fickling Lodge No. 
129, Butler, 1920 Friedlander's Building, Moultrie, 1936 Georgia Theater (now Emma Kelly Theater in the Averitt Center for the Arts), Statesboro, 1936 Grand Theater, Fitzgerald, 1936 Hogansville City Hall (former Royal Theater), Hogansville, 1937 Holly Theatre, Dahlonega, 1948 Liberty Theater, Columbus, 1925 Martin Theatre (now Martin Centre), Downtown Douglas Historic District, Douglas, 1939 Miller Theatre, Augusta, 1938 Mitchell County Courthouse, Camilla Commercial Historic District, Camilla, mid-1930s Monroe City Hall, Monroe, 1939 Montezuma Motor Company, Montezuma, 1920s Montgomery Ward Building, Griffin Commercial Historic District, Griffin, 1929 Ocmulgee National Monument Visitor Center, Macon, 1936 Pickens County Courthouse, Jasper, 1949 Pine Theatre, Fitzgerald, 1945 Playhouse Theater, Valdosta, 1941 Ritz Theater, Thomaston, 1927 Ritz Theatre at the Schaefer Center, Toccoa, 1939 Royal Cafe, Quitman Historic District, Quitman, 1913 Southeast Georgian Building, Kingsland Commercial Historic District, Kingsland, 1925 Southern Trust Building, Macon, 1941 State Theatre, Albany, 1945 Strand Dinner Cinema, Jesup, 1920s Suwanee City Hall (Town Center), Suwanee, 2002 Sylvia Theatre & Granitoid Office Building, Sylvania Tom Huston Frozen Foods Company–Montezuma Motor Company, Montezuma, 1920 Troup County Courthouse, Annex, and Jail, LaGrange, 1939 United States Post Office, Decatur, 1935 Walker Theatre, Fort Gaines, 1936 West Theatre, Cedartown, 1941 Wink Theater, Dalton Commercial Historic District, Dalton, 1938 Zebulon Theater, Cairo, 1936 See also List of Art Deco architecture List of Art Deco architecture in the United States References "Art Deco & Streamline Moderne Buildings." Roadside Architecture.com. Retrieved 2019-01-03. Cinema Treasures. Retrieved 2022-09-06 "Court House Lover". Flickr. Retrieved 2022-09-06 "New Deal Map". The Living New Deal. Retrieved 2020-12-25. "SAH Archipedia". Society of Architectural Historians. Retrieved 2021-11-21. External links Art Deco Lists of buildings and structures in Georgia (U.S. state)
List of Art Deco architecture in Georgia (U.S. state)
Engineering
930
2,103,852
https://en.wikipedia.org/wiki/Orthomode%20transducer
An orthomode transducer (OMT) is a waveguide component that is commonly referred to as a polarisation duplexer. Orthomode is a contraction of orthogonal mode. Orthomode transducers serve either to combine or to separate two orthogonally polarized microwave signal paths. One of the paths forms the uplink, which is transmitted over the same waveguide as the received signal path, or downlink path. Such a device may be part of a very small aperture terminal (VSAT) antenna feed or a terrestrial microwave radio feed; for example, OMTs are often used with a feed horn to isolate orthogonal polarizations of a signal and to transfer transmit and receive signals to different ports. VSAT and satellite Earth station applications For VSAT modems the transmission and reception paths are at 90° to each other; in other words, the signals are orthogonally polarized with respect to each other. This orthogonal shift between the two signal paths provides isolation of approximately 40 dB in the Ku band and Ka band radio frequency bands. Hence this device serves in an essential role as the junction element of the outdoor unit (ODU) of a VSAT modem. It protects the receiver front-end element (the low-noise block downconverter, LNB) from burn-out by the power of the output signal generated by the block up converter (BUC). The BUC is also connected to the feed horn through a wave guide port of the OMT junction device. Orthomode transducers are used in dual-polarized VSATs in sparsely populated areas, radar antennas, radiometers, and communications links. They are usually connected to the antenna's down converter or LNB and to the high-power amplifier (HPA), attached to a transmitting antenna. When the transmitted and received radio signals to and from the antenna have two different polarizations (horizontal and vertical), they are said to be orthogonal. This means that the modulation planes of the two radio signal waves are at 90 degrees to each other. The OMT device is used to separate two signals of equal frequency but different polarization, one of high and one of low signal power. Protective separation is essential, as the transmitter unit would otherwise seriously damage the very sensitive, low micro-voltage (μV) front-end receiver amplifier unit at the antenna. The up-link transmission signal, of relatively high power (1, 2, or 5 watts for common VSAT equipment) and originating from the BUC, and the very low power received signal (on the order of μV) coming from the antenna to the LNB receiver unit are in this case at an angle of 90° relative to each other, and both are coupled together at the feed-horn focal point of the parabolic antenna. The device that unites both the up-link and down-link paths, which are at 90° to each other, is the OMT. In the case of VSAT Ku band operation, a typical OMT provides 40 dB of isolation between the radio ports connected to the feed horn that faces the parabolic dish reflector (-40 dB means that only 0.01% of the transmitter's output power is cross-fed into the receiver's wave guide port). The port facing the parabolic reflector of the antenna is a circular polarizing port so that horizontal and vertical polarity coupling of inbound and outbound radio signals is easily achieved. The 40 dB isolation provides essential protection to the very sensitive receiver amplifier against burn-out from the relatively high-power signal of the transmitter unit. 
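As a quick check on these figures, an isolation value in dB converts to a leaked power fraction as 10^(-dB/10). A small Python sketch (the function name is merely illustrative):

def leaked_fraction(isolation_db):
    # Fraction of transmit power cross-fed for a given isolation in dB.
    return 10 ** (-isolation_db / 10)

for db in (40, 100):
    print("-%d dB -> %.0e of the transmit power" % (db, leaked_fraction(db)))
# -40 dB -> 1e-04 (0.01%); -100 dB -> 1e-10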
Further isolation may be obtained by means of selective radio frequency filtering to achieve an isolation of -100 dB (-100 dB means that only a 10^-10 fraction of the transmitter's output power is cross-fed into the wave guide port of the receiver). The second image demonstrates two types of outdoor units, a 1-watt Hughes unit and a composite configuration of a 2-watt BUC/OMT/LNB Andrew, Swedish Microwave units. The following images show a Portenseigne & Hirschmann Ku band configuration, which highlights the horizontal, the vertical, and the circular polarized wave-guide ports that join to the feed-horn, the LNB or BUC elements of an outdoor unit. Terrestrial microwave radio links An ortho-mode transducer is also a component commonly found on high capacity terrestrial microwave radio links. In this arrangement, two parabolic reflector dishes operate in a point to point microwave radio path (4 GHz to 85 GHz) with four radios, two mounted on each end. On each dish a T-shaped ortho-mode transducer is mounted at the rear of the feed, separating the signal from the feed into two separate radios, one operating in the horizontal polarity, and the other in the vertical polarity. This arrangement is used to increase the aggregate data throughput between two dishes on a point to point microwave path, or for fault-tolerance redundancy. Certain types of outdoor microwave radios have integrated orthomode transducers and operate in both polarities from a single radio unit, performing cross-polarization interference cancellation (XPIC) within the radio unit itself. Alternatively, the orthomode transducer may be built into the antenna, and allow connection of separate radios, or separate ports of the same radio, to the antenna. Characterization An ortho-mode transducer can be modelled as a 4-port device, 2 of these (H and V) representing the single-polarization ports and the remaining (h, v) embodied by the degenerate modes in the dual-polarized port. The scattering parameters can be gathered in a 4×4 scattering matrix S, which is symmetrical for a reciprocal OMT (i.e. not including circulators, isolators or active components), thus leaving 10 independent terms for a general lossy device. Of these: 4 (S_HH, S_VV, S_hh, S_vv) represent the intrinsic reflection terms of the 4 ports, related to the return loss when all the ports are closed onto ideal loads equal to the port characteristic impedance; 2 (S_Hh, S_Vv) are the main direct transmission terms (from each single-polarization port to the corresponding mode on the dual-polarized port); 2 (S_Hv, S_Vh) represent the cross-polarization discrimination (XPD): from each single-polarization port to the supposedly-isolated mode on the dual-polarized port; 2 (S_HV, S_hv) model the isolation terms (sometimes referred to as inter-port isolation, IPI): between the two single-polarized ports and between the two orthogonal modes at the dual-polarized port. An ideal OMT exhibits perfect matching (null terms on the diagonal), unitary direct transmission terms and infinite XPD and isolation (null corresponding scattering parameters): |S_Hh| = |S_Vv| = 1, with every other entry of S equal to zero. Characterization of a manufactured OMT (considered the device under test, DUT) is usually a delicate matter for both mechanical and theoretical reasons. Conceptually, if an ideal OMT is available as part of the measurement setup, often named "golden sample", its dual-polarized port can be connected to its counterpart on the DUT, resulting in a 4-port equivalent device with 4 single-polarization ports. 
The ideal OMT splits the two polarizations at the dual-polarized port into two standard single-polarized ports, and such an arrangement allows the direct measurement of all the scattering parameters of the DUT (either by using a 4-port vector network analyzer (VNA) or a 2-port one with 2 single-polarized loads used in several combinations). Such an ideal setup is prone only to mechanical uncertainties related to the physical placement and alignment of the dual-polarized ports. A simple misalignment angle θ introduces an artificial path from each polarization to the opposite, proportional to sin θ. The phasorial combination of the leakage (S_Hv or S_Vh) due to the XPDs of the DUT and this artificial path is the actual externally measured quantity. If, by proper phase recombination, the two contributions tend to cancel each other, the actual measured XPD can increase to infinity (possible only if the two contributions have equal magnitude), thus resulting in a huge estimation error. Depending on the expected XPD of the DUT, mechanical countermeasures should be introduced to guarantee that the artificial measurement uncertainty can be neglected. Any deviation from this ideal setup, however, introduces errors and uncertainties. If a dual-polarization matched load is available in place of the ideal OMT, this allows 2×2 measurements from the single-polarization ports, yielding only 2 of the reflection terms (S_HH and S_VV) and one IPI (S_HV). Other measurements aimed at gaining estimations of the other scattering parameters of the DUT involve the dual-polarized port and require additional components, such as dual-polarized to single-polarized transitions or tapers, which are often not matched on at least one of the two polarizations: this creates undesired reflections which propagate through the OMT and combine at the VNA ports, thus preventing direct measurements. These issues add to mechanical factors and enhance uncertainties in the measurement procedure. Due to the increasing demand for high-capacity data links, the exploitation of dual-polarization has fostered research in design and characterization of OMTs to overcome the practical difficulties. The literature concerning OMT modelling and practical characterization consists of works both by academic organizations such as the National Research Council (Italy), Marche Polytechnic University and the European Space Agency and likewise by industrial teams such as CommScope and Siae Microelettronica, with immediate impact on products for modern dual-polarized telecommunication systems, for instance in terrestrial microwave backhauling. See also Feed horn Waveguide (electromagnetism) References External links VSAT specific training that demonstrates the use of the Orthomode Transducer (OMT): VSAT Installation Manual Video Presentation with explanation of the Orthomode Transducer (OMT) Antennas Communication circuits Radio electronics Satellite broadcasting Transducers
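To make the misalignment discussion above concrete, the phasor combination of the DUT's own leakage with the misalignment-induced path can be evaluated numerically. A minimal Python sketch; the 40 dB XPD and 0.5° misalignment are invented figures used only for illustration.

import cmath, math

xpd_dut_db = 40.0                          # assumed true XPD of the DUT
theta_deg = 0.5                            # assumed mechanical misalignment

leak_dut = 10 ** (-xpd_dut_db / 20)        # field leakage implied by the DUT's XPD
leak_align = math.sin(math.radians(theta_deg))  # artificial path ~ sin(theta)

for phi_deg in range(0, 361, 60):          # relative phase is unknown in practice
    total = abs(leak_dut + leak_align * cmath.exp(1j * math.radians(phi_deg)))
    print("phase %3d deg -> measured XPD %.1f dB" % (phi_deg, -20 * math.log10(total)))
# The measured value swings between roughly 35 and 58 dB around the true
# 40 dB figure, showing how even a small misalignment corrupts the estimate.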
Orthomode transducer
Engineering
2,064
48,029,103
https://en.wikipedia.org/wiki/Plasma%20electrochemistry
Plasma electrochemistry is a new field of research in which the interaction of a plasma with an electrolyte solution is studied. It uses the plasma to drive chemical reactions in the liquid. References Electrochemistry
Plasma electrochemistry
Physics,Chemistry
43
74,413,105
https://en.wikipedia.org/wiki/Exception%20handling%20%28programming%29
In computer programming, several language mechanisms exist for exception handling. The term exception is typically used to denote a data structure storing information about an exceptional condition. One mechanism to transfer control, or raise an exception, is known as a throw; the exception is said to be thrown. Execution is transferred to a catch. Usage Programming languages differ substantially in their notion of what an exception is. Exceptions can be used to represent and handle abnormal, unpredictable, erroneous situations, but also as flow control structures to handle normal situations. For example, Python's iterators throw StopIteration exceptions to signal that there are no further items produced by the iterator. There is disagreement within many languages as to what constitutes idiomatic usage of exceptions. For example, Joshua Bloch states that Java's exceptions should only be used for exceptional situations, but Kiniry observes that at least one of Java's built-in exceptions does not represent an exceptional event at all. Similarly, Bjarne Stroustrup, author of C++, states that C++ exceptions should only be used for error handling, as this is what they were designed for, but Kiniry observes that many modern languages such as Ada, C++, Modula-3, ML and OCaml, Python, and Ruby use exceptions for flow control. Some languages such as Eiffel, C#, Common Lisp, and Modula-2 have made a concerted effort to restrict their usage of exceptions, although this is done on a social rather than technical level. History The earliest IBM Fortran compilers had statements for testing exceptional conditions. These included the IF ACCUMULATOR OVERFLOW, IF QUOTIENT OVERFLOW, and IF DIVIDE CHECK statements. In the interest of machine independence, they were not included in FORTRAN IV or the Fortran 66 Standard. However, since Fortran 2003 it is possible to test for numerical issues via calls to functions in the IEEE_EXCEPTIONS module. Software exception handling continued to be developed in the 1960s and 1970s. LISP 1.5 (1958-1961) allowed exceptions to be raised by the ERROR pseudo-function, similarly to errors raised by the interpreter or compiler. Exceptions were caught by the ERRORSET keyword, which returned NIL in case of an error, instead of terminating the program or entering the debugger. PL/I introduced its own form of exception handling circa 1964, allowing interrupts to be handled with ON units. MacLisp observed that ERRSET and ERR were used not only for error raising, but for non-local control flow, and thus added two new keywords, CATCH and THROW (June 1972). The cleanup behavior now generally called "finally" was introduced in NIL (New Implementation of LISP) in the mid- to late-1970s as UNWIND-PROTECT. This was then adopted by Common Lisp. Contemporary with this was dynamic-wind in Scheme, which handled exceptions in closures. The first papers on structured exception handling were Goodenough (1975a) and Goodenough (1975b). Exception handling was subsequently widely adopted by many programming languages from the 1980s onward. Syntax Many computer languages have built-in syntactic support for exceptions and exception handling. This includes ActionScript, Ada, BlitzMax, C++, C#, Clojure, COBOL, D, ECMAScript, Eiffel, Java, ML, Object Pascal (e.g. Delphi, Free Pascal, and the like), PowerBuilder, Objective-C, OCaml, Perl, PHP (as of version 5), PL/I, PL/SQL, Prolog, Python, REALbasic, Ruby, Scala, Seed7, Smalltalk, Tcl, Visual Prolog and most .NET languages. Excluding minor syntactic differences, there are only a couple of exception handling styles in use. 
In the most popular style, an exception is initiated by a special statement (throw or raise) with an exception object (e.g. with Java or Object Pascal) or a value of a special extendable enumerated type (e.g. with Ada or SML). The scope for exception handlers starts with a marker clause (try or the language's block starter such as begin) and ends at the start of the first handler clause (catch, except, rescue). Several handler clauses can follow, and each can specify which exception types it handles and what name it uses for the exception object. As a minor variation, some languages use a single handler clause, which deals with the class of the exception internally. Also common is a related clause (finally or ensure) that is executed whether an exception occurred or not, typically to release resources acquired within the body of the exception-handling block. Notably, C++ does not provide this construct, recommending instead the Resource Acquisition Is Initialization (RAII) technique which frees resources using destructors. According to a 2008 paper by Westley Weimer and George Necula, the syntax of the try...finally blocks in Java is a contributing factor to software defects. When a method needs to handle the acquisition and release of 3–5 resources, programmers are apparently unwilling to nest enough blocks due to readability concerns, even when this would be a correct solution. It is possible to use a single try...finally block even when dealing with multiple resources, but that requires a correct use of sentinel values, which is another common source of bugs for this type of problem. Python and Ruby also permit a clause (else) that is used in case no exception occurred before the end of the handler's scope was reached. On the whole, exception handling code might look like this (in Java-like pseudocode):

try {
    line = console.readLine();
    if (line.length() == 0) {
        throw new EmptyLineException("The line read from console was empty!");
    }
    console.printLine("Hello %s!" % line);
} catch (EmptyLineException e) {
    console.printLine("Hello!");
} catch (Exception e) {
    console.printLine("Error: " + e.message());
} else {
    console.printLine("The program ran successfully.");
} finally {
    console.printLine("The program is now terminating.");
}

C does not have try-catch exception handling, but uses return codes for error checking. The setjmp and longjmp standard library functions can be used to implement try-catch handling via macros. Perl 5 uses die for throw and eval for try-catch. It has CPAN modules that offer try-catch semantics. Termination and resumption semantics When an exception is thrown, the program searches back through the stack of function calls until an exception handler is found. Some languages call for unwinding the stack as this search progresses. That is, if function f, containing a handler for exception E, calls function g, which in turn calls function h, and an exception occurs in h, then functions h and g may be terminated, and the handler in f will handle E. This is said to be termination semantics. Alternately, the exception handling mechanisms may not unwind the stack on entry to an exception handler, giving the exception handler the option to restart the computation, resume or unwind. This allows the program to continue the computation at exactly the same place where the error occurred (for example when a previously missing file has become available) or to implement notifications, logging, queries and fluid variables on top of the exception handling mechanism (as done in Smalltalk). 
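A minimal Python sketch of the termination semantics just described, using the same f/g/h naming (the function bodies are illustrative placeholders): the exception raised in h unwinds h and g before the handler in f takes control.

def h():
    raise ValueError("raised in h")          # no handler here

def g():
    h()
    print("never reached: g is unwound")     # skipped during unwinding

def f():
    try:
        g()
    except ValueError as e:                  # the handler in f handles the exception
        print("handler in f caught:", e)

f()                                          # -> handler in f caught: raised in h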
Allowing the computation to resume where it left off is termed resumption semantics. There are theoretical and design arguments in favor of either decision. C++ standardization discussions in 1989–1991 resulted in a definitive decision to use termination semantics in C++. Bjarne Stroustrup cites a presentation by Jim Mitchell as a key data point. Exception-handling languages with resumption include Common Lisp with its Condition System, PL/I, Dylan, R, and Smalltalk. However, the majority of newer programming languages follow C++ and use termination semantics. Exception handling implementation The implementation of exception handling in programming languages typically involves a fair amount of support from both a code generator and the runtime system accompanying a compiler. (It was the addition of exception handling to C++ that ended the useful lifetime of the original C++ compiler, Cfront.) Two schemes are most common. The first, dynamic registration, generates code that continually updates structures about the program state in terms of exception handling. Typically, this adds a new element to the stack frame layout that knows what handlers are available for the function or method associated with that frame; if an exception is thrown, a pointer in the layout directs the runtime to the appropriate handler code. This approach is compact in terms of space, but adds execution overhead on frame entry and exit. It was commonly used in many Ada implementations, for example, where complex generation and runtime support was already needed for many other language features. Microsoft's 32-bit Structured Exception Handling (SEH) uses this approach with a separate exception stack. Dynamic registration, being fairly straightforward to define, is amenable to proof of correctness. The second scheme, and the one implemented in many production-quality C++ compilers and 64-bit Microsoft SEH, is a table-driven approach. This creates static tables at compile time and link time that relate ranges of the program counter to the program state with respect to exception handling. Then, if an exception is thrown, the runtime system looks up the current instruction location in the tables and determines what handlers are in play and what needs to be done. This approach minimizes execution overhead for the case where an exception is not thrown. This happens at the cost of some space, but this space can be allocated into read-only, special-purpose data sections that are not loaded or relocated until an exception is actually thrown. The location (in memory) of the code for handling an exception need not be located within (or even near) the region of memory where the rest of the function's code is stored. So if an exception is thrown then a performance hit – roughly comparable to a function call – may occur if the necessary exception handling code needs to be loaded/cached. However, this scheme has minimal performance cost if no exception is thrown. Since exceptions in C++ are supposed to be exceptional (i.e. uncommon/rare) events, the phrase "zero-cost exceptions" is sometimes used to describe exception handling in C++. Like runtime type identification (RTTI), exceptions might not adhere to C++'s zero-overhead principle as implementing exception handling at run-time requires a non-zero amount of memory for the lookup table. For this reason, exception handling (and RTTI) can be disabled in many C++ compilers, which may be useful for systems with very limited memory (such as embedded systems). 
This second approach is also superior in terms of achieving thread safety. Other definitional and implementation schemes have been proposed as well. For languages that support metaprogramming, approaches that involve no overhead at all (beyond the already present support for reflection) have been advanced. Exception handling based on design by contract A different view of exceptions is based on the principles of design by contract and is supported in particular by the Eiffel language. The idea is to provide a more rigorous basis for exception handling by defining precisely what is "normal" and "abnormal" behavior. Specifically, the approach is based on two concepts: Failure: the inability of an operation to fulfill its contract. For example, an addition may produce an arithmetic overflow (it does not fulfill its contract of computing a good approximation to the mathematical sum); or a routine may fail to meet its postcondition. Exception: an abnormal event occurring during the execution of a routine (that routine is the "recipient" of the exception). Such an abnormal event results from the failure of an operation called by the routine. The "Safe Exception Handling principle" as introduced by Bertrand Meyer in Object-Oriented Software Construction then holds that there are only two meaningful ways a routine can react when an exception occurs: Failure, or "organized panic": The routine fixes the object's state by re-establishing the invariant (this is the "organized" part), and then fails (panics), triggering an exception in its caller (so that the abnormal event is not ignored). Retry: The routine tries the algorithm again, usually after changing some values so that the next attempt will have a better chance to succeed. In particular, simply ignoring an exception is not permitted; a block must either be retried and successfully complete, or propagate the exception to its caller. Here is an example expressed in Eiffel syntax. It assumes that a routine send_fast is normally the better way to send a message, but it may fail, triggering an exception; if so, the algorithm next uses send_slow, which will fail less often. If send_slow fails, the routine send as a whole should fail, causing the caller to get an exception.

send (m: MESSAGE) is
    -- Send m through fast link, if possible, otherwise through slow link.
  local
    tried_fast, tried_slow: BOOLEAN
  do
    if tried_fast then
      tried_slow := True
      send_slow (m)
    else
      tried_fast := True
      send_fast (m)
    end
  rescue
    if not tried_slow then
      retry
    end
  end

The boolean local variables are initialized to False at the start. If send_fast fails, the body (do clause) will be executed again, causing execution of send_slow. If this execution of send_slow fails, the rescue clause will execute to the end with no retry (there is no retry clause in the final if), causing the routine execution as a whole to fail. This approach has the merit of defining clearly what "normal" and "abnormal" cases are: an abnormal case, causing an exception, is one in which the routine is unable to fulfill its contract. It defines a clear distribution of roles: the do clause (normal body) is in charge of achieving, or attempting to achieve, the routine's contract; the rescue clause is in charge of reestablishing the context and restarting the process, if this has a chance of succeeding, but not of performing any actual computation. Although exceptions in Eiffel have a fairly clear philosophy, Kiniry (2006) criticizes their implementation because "Exceptions that are part of the language definition are represented by INTEGER values, developer-defined exceptions by STRING values. 
[...] Additionally, because they are basic values and not objects, they have no inherent semantics beyond that which is expressed in a helper routine which necessarily cannot be foolproof because of the representation overloading in effect (e.g., one cannot differentiate two integers of the same value)." Uncaught exceptions Contemporary applications face many design challenges when considering exception handling strategies. Particularly in modern enterprise level applications, exceptions must often cross process boundaries and machine boundaries. Part of designing a solid exception handling strategy is recognizing when a process has failed to the point where it cannot be economically handled by the software portion of the process. If an exception is thrown and not caught (operationally, an exception is thrown when there is no applicable handler specified), the uncaught exception is handled by the runtime; the routine that does this is called the uncaught exception handler. The most common default behavior is to terminate the program and print an error message to the console, usually including debug information such as a string representation of the exception and the stack trace. This is often avoided by having a top-level (application-level) handler (for example in an event loop) that catches exceptions before they reach the runtime. Note that even though an uncaught exception may result in the program terminating abnormally (the program may not be correct if an exception is not caught, notably by not rolling back partially completed transactions, or not releasing resources), the process terminates normally (assuming the runtime works correctly), as the runtime (which is controlling execution of the program) can ensure orderly shutdown of the process. In a multithreaded program, an uncaught exception in a thread may instead result in termination of just that thread, not the entire process (uncaught exceptions in the thread-level handler are caught by the top-level handler). This is particularly important for servers, where for example a servlet (running in its own thread) can be terminated without the server overall being affected. This default uncaught exception handler may be overridden, either globally or per-thread, for example to provide alternative logging or end-user reporting of uncaught exceptions, or to restart threads that terminate due to an uncaught exception. For example, in Java this is done for a single thread via Thread.setUncaughtExceptionHandler and globally via Thread.setDefaultUncaughtExceptionHandler; in Python this is done by modifying sys.excepthook. Checked exceptions Java introduced the notion of checked exceptions, which are special classes of exceptions. The checked exceptions that a method may raise must be part of the method's signature. For instance, if a method might throw a checked exception such as IOException, it must declare this fact explicitly in its method signature. Failure to do so raises a compile-time error. According to Hanspeter Mössenböck, checked exceptions are less convenient but more robust. Checked exceptions can, at compile time, reduce the incidence of unhandled exceptions surfacing at runtime in a given application. Kiniry writes that "As any Java programmer knows, the volume of try catch code in a typical Java application is sometimes larger than the comparable code necessary for explicit formal parameter and return value checking in other languages that do not have checked exceptions. 
In fact, the general consensus among in-the-trenches Java programmers is that dealing with checked exceptions is nearly as unpleasant a task as writing documentation. Thus, many programmers report that they “resent” checked exceptions.". Martin Fowler has written "...on the whole I think that exceptions are good, but Java checked exceptions are more trouble than they are worth." As of 2006 no major programming language has followed Java in adding checked exceptions. For example, C# does not require or allow declaration of any exception specifications, with reasoning for this choice posted by Eric Gunnerson. Anders Hejlsberg describes two concerns with checked exceptions: Versioning: A method may be declared to throw exceptions X and Y. In a later version of the code, one cannot throw exception Z from the method, because it would make the new code incompatible with the earlier uses. Checked exceptions require the method's callers to either add Z to their throws clause or handle the exception. Alternately, Z may be misrepresented as an X or a Y. Scalability: In a hierarchical design, each system may have several subsystems. Each subsystem may throw several exceptions. Each parent system must deal with the exceptions of all subsystems below it, resulting in an exponential number of exceptions to be dealt with. Checked exceptions require all of these exceptions to be dealt with explicitly. To work around these, Hejlsberg says programmers resort to circumventing the feature by using a "throws Exception" declaration. Another circumvention is to use a catch (Exception e) handler. This is referred to as catch-all exception handling or Pokémon exception handling after the show's catchphrase "Gotta Catch ‘Em All!". The Java Tutorials discourage catch-all exception handling as it may catch exceptions "for which the handler was not intended". Still another discouraged circumvention is to make all exceptions subclass RuntimeException. An encouraged solution is to use a catch-all handler or throws clause but with a specific superclass of all potentially thrown exceptions rather than the general superclass Exception. Another encouraged solution is to define and declare exception types that are suitable for the level of abstraction of the called method and map lower level exceptions to these types by using exception chaining. Similar mechanisms The roots of checked exceptions go back to the CLU programming language's notion of exception specification. A function could raise only exceptions listed in its type, but any leaking exceptions from called functions would automatically be turned into the sole runtime exception, failure, instead of resulting in a compile-time error. Later, Modula-3 had a similar feature. These features don't include the compile time checking that is central in the concept of checked exceptions. Early versions of the C++ programming language included an optional mechanism similar to checked exceptions, called exception specifications. By default any function could throw any exception, but this could be limited by a throw(...) clause added to the function signature that specified which exceptions the function may throw. Exception specifications were not enforced at compile-time. Violations resulted in the global function std::unexpected being called. An empty exception specification could be given, which indicated that the function will throw no exception. 
This was not made the default when exception handling was added to the language because it would have required too much modification of existing code, would have impeded interaction with code written in other languages, and would have tempted programmers into writing too many handlers at the local level. Explicit use of empty exception specifications could, however, allow C++ compilers to perform significant code and stack layout optimizations that are precluded when exception handling may take place in a function. Some analysts viewed the proper use of exception specifications in C++ as difficult to achieve. This use of exception specifications was included in C++98 and C++03, deprecated in the 2011 C++ language standard (C++11), and was removed from the language in C++17. A function that will not throw any exceptions can now be denoted by the noexcept keyword. An uncaught exceptions analyzer exists for the OCaml programming language. The tool reports the set of raised exceptions as an extended type signature. But, unlike checked exceptions, the tool does not require any syntactic annotations and is external (i.e. it is possible to compile and run a program without having checked the exceptions). Dynamic checking of exceptions The point of exception handling routines is to ensure that the code can handle error conditions. In order to establish that exception handling routines are sufficiently robust, it is necessary to present the code with a wide spectrum of invalid or unexpected inputs, such as can be created via software fault injection and mutation testing (this is also sometimes referred to as fuzz testing). One of the most difficult types of software for which to write exception handling routines is protocol software, since a robust protocol implementation must be prepared to receive input that does not comply with the relevant specification(s). In order to ensure that meaningful regression analysis can be conducted throughout a software development lifecycle process, any exception handling testing should be highly automated, and the test cases must be generated in a scientific, repeatable fashion. Several commercially available systems exist that perform such testing. In runtime engine environments such as Java or .NET, there exist tools that attach to the runtime engine and every time that an exception of interest occurs, they record debugging information that existed in memory at the time the exception was thrown (call stack and heap values). These tools are called automated exception handling or error interception tools and provide 'root-cause' information for exceptions. Asynchronous exceptions Asynchronous exceptions are events raised by a separate thread or external process, such as pressing Ctrl-C to interrupt a program, receiving a signal, or sending a disruptive message such as "stop" or "suspend" from another thread of execution. Whereas synchronous exceptions happen at a specific throw statement, asynchronous exceptions can be raised at any time. It follows that asynchronous exception handling can't be optimized out by the compiler, as it cannot prove the absence of asynchronous exceptions. They are also difficult to program with correctly, as asynchronous exceptions must be blocked during cleanup operations to avoid resource leaks. 
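As a small illustration of that last point, asynchronous delivery can be suppressed while cleanup runs. A minimal Python sketch using POSIX signal handling; release_resources is a hypothetical placeholder, and this only works in the main thread.

import signal

def release_resources():
    print("resources released")     # stands in for real cleanup work

def cleanup():
    # Ignore Ctrl-C (which Python normally turns into a KeyboardInterrupt)
    # while cleanup runs, then restore the previous handler.
    old_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        release_resources()
    finally:
        signal.signal(signal.SIGINT, old_handler)

cleanup()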
Programming languages typically avoid or restrict asynchronous exception handling; for example, C++ forbids raising exceptions from signal handlers, and Java has deprecated the use of its ThreadDeath exception that was used to allow one thread to stop another one. Another feature is a semi-asynchronous mechanism that raises an asynchronous exception only during certain operations of the program. For example, Java's Thread.interrupt() only affects the thread when the thread calls an operation that throws InterruptedException. The similar POSIX pthread_cancel API has race conditions which make it impossible to use safely. Condition systems Common Lisp, R, Dylan and Smalltalk have a condition system (see Common Lisp Condition System) that encompasses the aforementioned exception handling systems. In those languages or environments the advent of a condition (a "generalisation of an error" according to Kent Pitman) implies a function call, and only late in the exception handler may the decision to unwind the stack be taken. Conditions are a generalization of exceptions. When a condition arises, an appropriate condition handler is searched for and selected, in stack order, to handle the condition. Conditions that do not represent errors may safely go unhandled entirely; their only purpose may be to propagate hints or warnings toward the user. Continuable exceptions This is related to the so-called resumption model of exception handling, in which some exceptions are said to be continuable: it is permitted to return to the expression that signaled an exception, after having taken corrective action in the handler. The condition system is generalized thus: within the handler of a non-serious condition (a.k.a. continuable exception), it is possible to jump to predefined restart points (a.k.a. restarts) that lie between the signaling expression and the condition handler. Restarts are functions closed over some lexical environment, allowing the programmer to repair this environment before exiting the condition handler completely or unwinding the stack even partially. An example is the ENDPAGE condition in PL/I; the ON unit might write page trailer lines and header lines for the next page, then fall through to resume execution of the interrupted code. Restarts separate mechanism from policy Condition handling moreover provides a separation of mechanism and policy. Restarts provide various possible mechanisms for recovering from error, but do not select which mechanism is appropriate in a given situation. That is the province of the condition handler, which (since it is located in higher-level code) has access to a broader view. An example: Suppose there is a library function whose purpose is to parse a single syslog file entry. What should this function do if the entry is malformed? There is no one right answer, because the same library could be deployed in programs for many different purposes. In an interactive log-file browser, the right thing to do might be to return the entry unparsed, so the user can see it—but in an automated log-summarizing program, the right thing to do might be to supply null values for the unreadable fields, but abort with an error, if too many entries have been malformed. That is to say, the question can only be answered in terms of the broader goals of the program, which are not known to the general-purpose library function. Nonetheless, exiting with an error message is only rarely the right answer. 
So instead of simply exiting with an error, the function may establish restarts offering various ways to continue—for instance, to skip the log entry, to supply default or null values for the unreadable fields, to ask the user for the missing values, or to unwind the stack and abort processing with an error message. The restarts offered constitute the mechanisms available for recovering from error; the selection of restart by the condition handler supplies the policy (a rough sketch of this pattern appears at the end of this article). Criticism Exception handling is often not implemented correctly in software, especially when there are multiple sources of exceptions; data flow analysis of 5 million lines of Java code found over 1300 exception handling defects. Citing multiple prior studies by others (1999–2004) and their own results, Weimer and Necula wrote that a significant problem with exceptions is that they "create hidden control-flow paths that are difficult for programmers to reason about". "While try-catch-finally is conceptually simple, it has the most complicated execution description in the language specification [Gosling et al. 1996] and requires four levels of nested “if”s in its official English description. In short, it contains a large number of corner cases that programmers often overlook." Exceptions, as unstructured flow, increase the risk of resource leaks (such as escaping a section locked by a mutex, or one temporarily holding a file open) or inconsistent state. There are various techniques for resource management in the presence of exceptions, most commonly combining the dispose pattern with some form of unwind protection (like a finally clause), which automatically releases the resource when control exits a section of code. Tony Hoare in 1980 described the Ada programming language as having "...a plethora of features and notational conventions, many of them unnecessary and some of them, like exception handling, even dangerous. [...] Do not allow this language in its present state to be used in applications where reliability is critical [...]. The next rocket to go astray as a result of a programming language error may not be an exploratory space rocket on a harmless trip to Venus: It may be a nuclear warhead exploding over one of our own cities." The Go developers believe that the try-catch-finally idiom obfuscates control flow, and introduced the exception-like panic/recover mechanism. recover differs from catch in that it can only be called from within a deferred (defer) code block in a function, so the handler can only do clean-up and change the function's return values, and cannot return control to an arbitrary point within the function. The defer block itself functions similarly to a finally clause. See also Automated exception handling Continuation Defensive programming Exception safety Option types and Result types, alternative ways of handling errors in functional programming without exceptions Notes References Works cited Control flow Software anomalies
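Returning to the syslog example: a rough Python approximation of the restart pattern, where the low-level parser offers recovery strategies and a higher-level handler chooses among them. This is an illustrative sketch only, not the Common Lisp mechanism itself; the entry format and all names are invented.

class MalformedEntry(Exception):
    def __init__(self, raw):
        super().__init__("malformed entry: %r" % (raw,))
        self.raw = raw

def parse_entry(raw, on_malformed=None):
    # Parse one hypothetical "host message" entry. on_malformed plays the
    # role of the condition handler: given the condition and the available
    # "restarts", it returns the name of the restart to invoke.
    parts = raw.split(" ", 1)
    if len(parts) == 2:
        return {"host": parts[0], "message": parts[1]}
    restarts = {
        "skip": lambda: None,                                # drop the entry
        "use_null": lambda: {"host": None, "message": raw},  # null fields
    }
    if on_malformed is None:
        raise MalformedEntry(raw)      # no handler: unwind, like an exception
    return restarts[on_malformed(MalformedEntry(raw), restarts)]()

# The policy lives in the caller, not in the library function:
print(parse_entry("badentry", on_malformed=lambda cond, rs: "use_null"))
# -> {'host': None, 'message': 'badentry'}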
Exception handling (programming)
Technology
6,227
20,931,316
https://en.wikipedia.org/wiki/Tatango
Tatango is a U.S.-based company offering text message marketing (SMS/MMS) services. Tatango is a privately held corporation based in Seattle, WA, with investments from the Seattle Alliance of Angels. History Derek Johnson developed the service, originally named NetworkText, during his time at the University of Houston's Bauer College of Business. It began as a way for his fraternity (Delta Upsilon) to communicate. Operating as NetworkText, Tatango allowed groups and organizations to send text messages to their members, with 30-40 character advertisements included at the bottom of each message. The service was free for groups and organizations in collaboration with 4INFO. This changed on July 26, 2008, when the company started charging a monthly fee to use the service. Johnson left college and moved to Bellingham, WA, where he founded NetworkText with Matt Pelo. Pelo left the company later that year. In 2008, the company was renamed Tatango and new offices were established. Tatango moved from being a limited-liability company to a corporation late in 2008. In October that same year, Tatango launched a voice messaging service, which has since been discontinued. Tatango acquired HungryThumb in 2012, followed by Broadtexter the following year. In 2016, Tatango launched the U.S. Short Code Directory. In 2022, Kevin Fitzgerald became the CEO, and Derek Johnson became Chief Innovation Officer. Highlights In 2009, Tatango's CEO was included in Business Week's "Best Young Entrepreneurs" list. Press Tatango has been mentioned in media outlets such as TechCrunch, Cnet, The Seattle Times, and LifeHacker. Former Tatango CEO Derek Johnson has been featured in The Wall Street Journal. The company was also mentioned in the Forbes article "Killer app of the 2012 election". References External links Mobile telecommunication services
Tatango
Technology
385
15,182,234
https://en.wikipedia.org/wiki/CACNG3
Voltage-dependent calcium channel gamma-3 subunit is a protein that in humans is encoded by the CACNG3 gene. L-type calcium channels are composed of five subunits. The protein encoded by this gene represents one of these subunits, gamma, and is one of several gamma subunit proteins. It is an integral membrane protein that is thought to stabilize the calcium channel in an inactive (closed) state. This protein is similar to the mouse stargazin protein, mutations in which have been associated with absence seizures, also known as petit-mal or spike-wave seizures. This gene is a member of the neuronal calcium channel gamma subunit gene subfamily of the PMP-22/EMP/MP20 family. This gene is a candidate gene for a familial infantile convulsive disorder with paroxysmal choreoathetosis. See also Voltage-dependent calcium channel References Further reading External links Ion channels
CACNG3
Chemistry
193
47,752,270
https://en.wikipedia.org/wiki/Ophiocordyceps%20camponoti-novogranadensis
Ophiocordyceps camponoti-novogranadensis is a species of fungus that parasitizes insect hosts, in particular members of the order Hymenoptera. It was first isolated from Parque Estadual de Itacolomi in Ouro Preto, at an altitude of , on Camponotus novogranadensis. Description Its mycelium is a chocolate brown colour, and is especially dense around the host's legs, forming distinctive pads. Its stromatal morphology is the same as that of O. camponoti-rufipedis. Its fertile region is brown, its ascomata being semi-erumpent and crowded. The asci are 8-spored, hyaline and cylindrical, with a prominent apical cap, while the ascospores are hyaline, thin-walled, and 5–10-septate. References Further reading Araújo, João, et al. "Unravelling the diversity behind Ophiocordyceps unilateralis complex: Three new species of Zombie-Ant fungus from Brazilian Amazon." bioRxiv (2014): 003806. External links MycoBank Ophiocordycipitaceae Fungi described in 2011 Fungus species
Ophiocordyceps camponoti-novogranadensis
Biology
254
5,847,536
https://en.wikipedia.org/wiki/List%20of%20astronomical%20instrument%20makers
The following is a list of astronomical instrument makers, along with lifespan and country of work, if available. A B C D E F G H I J K L M N O P Q R S T U V W X Y Z See also History of the telescope List of largest optical reflecting telescopes List of largest optical refracting telescopes List of observatory codes List of Russian astronomers and astrophysicists List of telescope types Space telescope Timeline of telescopes, observatories, and observing technology References External links Technology-related lists Instrument makers Lists of manufacturers List
List of astronomical instrument makers
Astronomy
113
2,903,194
https://en.wikipedia.org/wiki/39%20Aurigae
39 Aurigae is a single star in the constellation of Auriga. The designation is from the star catalogue of English astronomer John Flamsteed, first published in 1712. The star is just barely visible to the naked eye, having an apparent visual magnitude of 5.90. Based upon an annual parallax shift of 20.11 mas as seen from Earth, it is located 162 light years away. 39 Aurigae is moving further from the Sun with a radial velocity of +34 km/s. It has a relatively high proper motion, advancing across the celestial sphere at the rate of 0.151 arc seconds per year. This is an F-type main-sequence star with a stellar classification of F1 V. It is an estimated 603 million years old with a relatively high rate of spin, showing a projected rotational velocity of around 88 km/s. The star has 1.45 times the mass of the Sun and it is radiating 9.36 times the Sun's luminosity from its photosphere at an effective temperature of around 7,161 K. References F-type main-sequence stars Auriga Durchmusterung objects Aurigae, 39 041074 028823 2132
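A worked check of the distance quoted above, using the standard parallax relation $d\,[\mathrm{pc}] = 1 / p\,[\mathrm{arcsec}]$ with the figures as given: $d = 1/0.02011 \approx 49.7\ \mathrm{pc}$, and $49.7 \times 3.26 \approx 162$ light years.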
39 Aurigae
Astronomy
257
28,305,156
https://en.wikipedia.org/wiki/Metro%20%28design%20language%29
Microsoft Design Language (or MDL), previously known as Metro, is a design language created by Microsoft. This design language is focused on typography and simplified icons, absence of clutter, increased content to chrome ratio ("content before chrome"), and basic geometric shapes. Early examples of MDL principles can be found in Encarta 95 and MSN 2.0. The design language evolved in Windows Media Center and Zune and was formally introduced as Metro during the unveiling of Windows Phone 7. It has since been incorporated into several of the company's other products, including the Xbox 360 system software and the Xbox One system software, Windows 8, Windows Phone, and Outlook.com. Before the "Microsoft design language" title became official, Microsoft executive Qi Lu referred to it as the modern UI design language in his MIXX conference keynote speech. According to Microsoft, "Metro" has always been a codename and was never meant as a final product, but news websites attribute this change to trademark issues. Microsoft Design Language 2 (MDL2) was developed alongside Windows 10. In 2017, the Fluent Design language extended it. History The design language is based on the design principles of classic Swiss graphic design. Early glimpses of this style could be seen in Windows Media Center for Windows XP Media Center Edition, which favored text as the primary form of navigation, as well as early concepts of Neptune. This interface carried over into later iterations of Media Center. In 2006, Zune refreshed its interface using these principles. Microsoft designers decided to redesign the interface and with more focus on clean typography and less on UI chrome. These principles and the new Zune UI were carried over to Windows Phone first released in 2010 (from which much was drawn for Windows 8). The Zune Desktop Client was also redesigned with an emphasis on typography and clean design that was different from the Zune's previous Portable Media Center based UI. Flat colored "live tiles" were introduced into the design language during the early Windows Phones studies. In an interview it was explained that different Microsoft divisions use each other's products, and the extension of Metro was not a company-wide approach but instead teams such as Xbox liking Metro and adapting it for its own products. Many of Microsoft's divisions ended up adopting Metro. Microsoft Design Language 2 (MDL2) was developed alongside Windows 10. This version introduced a new set of widgets, including date pickers, toggles and switches, and reduced the border thicknesses for all user interface elements. Principles Microsoft's design team cites as an inspiration for the design language signs commonly found at public transport systems. The design language places emphasis on good typography and has large text that catches the eye. Microsoft sees the design language as "sleek, quick, modern" and a "refresh" from the icon-based interfaces of Windows, Android, and iOS. All instances use fonts based on the Segoe font family designed by Steve Matteson at Agfa Monotype and licensed to Microsoft. For the Zune, Microsoft created a custom version called Zegoe UI, and for Windows Phone Microsoft created the Segoe WP font family. The fonts mostly differ only in minor details. More obvious differences between Segoe UI and Segoe WP are apparent in their respective numerical characters. The Segoe UI font in Windows 8 had obvious differences – similar to Segoe WP. 
Characters with notable typographic changes included 1, 2, 4, 5, 7, 8, I, and Q. Joe Belfiore was one of the architects of Metro. At Nokia World 2011, Belfiore explained that the UI aims to be "artistic" in textual elements and iconography. He also mentioned the "motion" of the UI, specifically in Windows Phone, of the Live Tiles, moving dots, and kinetic scrolling. Microsoft designed the design language specifically to consolidate groups of common tasks to speed up usage. It achieves this by excluding superfluous graphics and instead relying on the actual content to function as the main UI. The resulting interfaces favor larger hubs over smaller buttons and often feature laterally scrolling canvases. Page titles are usually large and consequently also take advantage of lateral scrolling. Animation plays a large part. Microsoft recommends consistent acknowledgement of transitions, and user interactions (such as presses or swipes) by some form of natural animation or motion. This aims to give the user the impression of an "alive" and responsive UI with "an added sense of depth". Reception On mobile Early response to the language was generally positive. In a review of the Zune HD, Engadget said, "Microsoft continues its push towards big, big typography here, providing a sophisticated, neatly designed layout that's almost as functional as it is attractive." CNET complimented the design language, saying, "it's a bit more daring and informal than the tight, sterile icon grids and Rolodex menus of the iPhone and iPod Touch." At its IDEA 2011 Ceremony, the Industrial Designers Society of America (IDSA) gave Windows Phone 7, which uses the UI, its "Gold Interactive" award, its "People's Choice Award", and a "Best in Show" award. Isabel Ancona, the User Experience Consultant at IDSA, explained why Windows Phone won: It was reported that the UI was better received by women and first-time users. Criticism particularly focused on the use of all caps text. With the rise of Internet usage, critics have compared this to a computer program shouting at its user. IT journalist Lee Hutchinson described Microsoft's use of the practice in the macOS version of OneNote as terrible, claiming that it is "cursed with insane, non-standard application window menus IN ALL CAPS that doesn't so much violate OS X's design conventions as it does take them out behind the shed, pour gasoline on them, and set them on fire." On Windows 8 desktop With the arrival of Windows 8, the operating system's user interface and its use of the design language drew generally negative critical responses. On 25 August 2012, Peter Bright of Ars Technica reviewed the preview release of Windows 8, dedicating the first part of the review to a comparison between the Start menu designs used by Windows 8 and Windows 7. Recounting their pros and cons, Peter Bright concluded that the Start menu in Windows 8 (dubbed Start screen), though not devoid of problems, was a clear winner. However, he concluded that Windows 8's user interface was frustrating and that the various aspects of the user interface did not work well together. Woody Leonhard was even more critical when he said, "From the user's standpoint, Windows 8 is a failure – an awkward mishmash that pulls the user in two directions at once." In addition to the changes to the Start menu, Windows 8 takes a more modal approach with its use of full-screen apps that steer away from reliance on the icon-based desktop interface. 
In doing so, however, Microsoft has shifted its focus away from multitasking and business productivity. Name change In August 2012, The Verge announced that an internal memorandum had been sent out to developers and Microsoft employees announcing the decision to "discontinue the use" of the term "Metro" because of "discussions with an important European partner", and that they were "working on a replacement term". Technology news outlets Ars Technica, TechRadar, CNET, Engadget and Network World and mainstream press Bits Blog from The New York Times Company and the BBC News Online published that the partner mentioned in the memo could be one of Microsoft's retail partners, German company Metro AG, as the name had the potential to infringe on their "Metro" trademark. Microsoft later stated that the reason for de-emphasizing the name was not related to any current litigation, and that "Metro" was only an internal project codename, despite having heavily promoted the brand to the public. In some contexts, the company began using the term "Modern" or the more generic "Windows 8" modifier to refer to the new design, possibly as a placeholder. In September 2012, "Microsoft design language" was adopted as the official name for the design style. The term was used on Microsoft Developer Network documentation and at the 2012 Microsoft Build conference to refer to the design language. In a related change, Microsoft dropped use of the phrase "Metro-style apps" to refer to mobile apps distributed via Windows Store. See also Flat design Skeuomorph design Human interface guidelines Windows Aero Universal Windows Platform apps References External links Microsoft by the Numbers website Modern Design at Microsoft (Archive) UX guidelines for Windows Store apps on MSDN Design Guidelines for Windows Phone on MSDN Design language Graphical user interfaces Touch user interfaces Windows 8 Windows Phone Xbox 360 Xbox One
Metro (design language)
Engineering
1,826
30,324,482
https://en.wikipedia.org/wiki/Suillus%20subluteus
Suillus subluteus is a species of mushroom in the genus Suillus. First described as Boletus subluteus by Charles Horton Peck in 1887, it was transferred to Suillus by Wally Snell in 1944. It is found in North America. References External links subluteus Fungi of North America Edible fungi Fungi described in 1887 Taxa named by Charles Horton Peck Fungus species
Suillus subluteus
Biology
81
20,464,688
https://en.wikipedia.org/wiki/Xiao-Gang%20Wen
Xiao-Gang Wen (born November 26, 1961) is a Chinese-American physicist. He is a Cecil and Ida Green Professor of Physics at the Massachusetts Institute of Technology and Distinguished Visiting Research Chair at the Perimeter Institute for Theoretical Physics. His expertise is in condensed matter theory of strongly correlated electronic systems. In October 2016, he was awarded the Oliver E. Buckley Condensed Matter Prize. He is the author of a book on advanced quantum many-body theory entitled Quantum Field Theory of Many-body Systems: From the Origin of Sound to an Origin of Light and Electrons (Oxford University Press, 2004). Early life and education Wen attended the University of Science and Technology of China and earned a B.S. in Physics in 1982. In 1982, Wen came to the US for graduate school via the CUSPEA program, which was organized by Prof. T. D. Lee. He attended Princeton University, from which he attained an M.A. in Physics in 1983 and a Ph.D. in Physics in 1987. Work Wen studied superstring theory under theoretical physicist Edward Witten at Princeton University, where he received his Ph.D. degree in 1987. He later switched his research field to condensed matter physics while working with theoretical physicists Robert Schrieffer, Frank Wilczek, and Anthony Zee at the Institute for Theoretical Physics, UC Santa Barbara (1987–1989). Wen introduced the notions of topological order (1989) and quantum order (2002) to describe a new class of matter states. This opened up a new research direction in condensed matter physics. He found that states with topological order contain non-trivial boundary excitations and developed chiral Luttinger theory for the boundary states (1990). Boundary states can become ideal conduction channels, which may lead to device applications of topological phases. He proposed the simplest topological order — Z2 topological order (1990), which turns out to be the topological order in the toric code. He also proposed a special class of topological order: non-Abelian quantum Hall states. They contain emergent particles with non-Abelian statistics, which generalize the well-known Bose and Fermi statistics. Non-Abelian particles may allow us to perform fault-tolerant quantum computations. With Michael Levin, he found that string-net condensations can give rise to a large class of topological orders (2005). In particular, string-net condensation provides a unified origin of photons, electrons, and other elementary particles (2003). It unifies two fundamental phenomena: gauge interactions and Fermi statistics. He pointed out that topological order is nothing but the pattern of long-range entanglement. This led to a notion of symmetry protected topological (SPT) order (short-range entangled states with symmetry) and its description by group cohomology of the symmetry group (2011). The notion of SPT order generalizes the notion of topological insulator to interacting cases. He also proposed the SU(2) gauge theory of high temperature superconductors (1996). Professional record Professor, MIT, 2000–present Isaac Newton Research Chair, Perimeter Institute for Theoretical Physics, 2012–2014 Associate professor, MIT, 1995—2000 Assistant professor, MIT, 1991—1995 Five-year member of IAS, 1989—1991 Member of ITP, UC Santa Barbara, 1987—1989 Honors A.P.
Sloan Foundation fellow (1992) Overseas Chinese Physics Association outstanding young researcher award (1994) Changjiang professor, Center for Advanced Study, Tsinghua University (2000—2004) Fellow of American Physical Society (2002) Cecil and Ida Green Professor of Physics, MIT (2004—present) Distinguished Moore Scholar, Caltech (2006) Distinguished Research Chair, Perimeter Institute (2009) Isaac Newton Chair, Perimeter Institute (announced Sep 2011) 2017 Oliver E. Buckley Condensed Matter Prize (announced Oct. 2016) Member of National Academy of Sciences (2018) 2018 Dirac Medal of the ICTP Selected publications See also Topological order String-net Topological entanglement entropy References External links https://xgwen.mit.edu http://physics.stackexchange.com/users/9444/xiao-gang-wen 1961 births Living people 21st-century American physicists Chinese emigrants to the United States Massachusetts Institute of Technology School of Science faculty Princeton University alumni Theoretical physicists University of Science and Technology of China alumni Members of the United States National Academy of Sciences Physicists from Shaanxi People from Xi'an Educators from Shaanxi Sloan Research Fellows Fellows of the American Physical Society Oliver E. Buckley Condensed Matter Prize winners
Xiao-Gang Wen
Physics
935
9,571,778
https://en.wikipedia.org/wiki/Plant%20geneticist
A plant geneticist is a scientist involved with the study of genetics in botany. Typical work is done with genes in order to isolate and then develop certain plant traits. Once a certain trait, such as plant height, fruit sweetness, or tolerance to cold, is found, a plant geneticist works to improve breeding methods to ensure that future plant generations possess the desired traits. Plant genetics played a key role in the modern-day theories of heredity, beginning with Gregor Mendel's study of pea plants in the 19th century. The occupation has since grown to encompass advancements in biotechnology that have led to greater understanding of plant breeding and hybridization. Commercially, plant geneticists are sometimes employed to develop methods of making produce more nutritious, or altering plant pigments to make the food more enticing to consumers. References National Science Teachers Association: Plant Geneticist Interview USDA Agriculture Research Service Geneticist Geneticist
Plant geneticist
Biology
186
1,335,495
https://en.wikipedia.org/wiki/Lambda%20point
The lambda point is the temperature at which normal fluid helium (helium I) makes the transition to the superfluid state (helium II). At a pressure of 1 atmosphere, the transition occurs at approximately 2.17 K. The lowest pressure at which He-I and He-II can coexist is the vapor−He-I−He-II triple point, at about 2.17 K and 5.0 kPa, which is the "saturated vapor pressure" at that temperature (pure helium gas in thermal equilibrium over the liquid surface, in a hermetic container). The highest pressure at which He-I and He-II can coexist is the bcc−He-I−He-II triple point with a helium solid, at about 1.76 K and 30 atm. The point's name derives from the graph that results from plotting the specific heat capacity as a function of temperature (for a given pressure in the above range, e.g. at 1 atmosphere), which resembles the Greek letter lambda (λ). The specific heat capacity has a sharp peak as the temperature approaches the lambda point. The tip of the peak is so sharp that a critical exponent characterizing the divergence of the heat capacity can be measured precisely only in zero gravity, to provide a uniform density over a substantial volume of fluid. Hence, the heat capacity was measured within 2 nK below the transition in an experiment included in a Space Shuttle payload in 1992. Although the heat capacity has a peak, it does not tend towards infinity (contrary to what the graph may suggest), but has finite limiting values when approaching the transition from above and below. The behavior of the heat capacity near the peak is described by the formula $c \approx \frac{A_\pm}{\alpha}|t|^{-\alpha} + B_\pm$, where $t = (T - T_\lambda)/T_\lambda$ is the reduced temperature, $T_\lambda$ is the lambda point temperature, $A_\pm, B_\pm$ are constants (different above and below the transition temperature), and $\alpha$ is the critical exponent: $\alpha = -0.0127(3)$. Since this exponent is negative for the superfluid transition, specific heat remains finite. The quoted experimental value of $\alpha$ is in significant disagreement with the most precise theoretical determinations coming from high temperature expansion techniques, Monte Carlo methods and the conformal bootstrap. See also Lambda point refrigerator References External links What is superfluidity? Threshold temperatures Superfluidity Liquid helium
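A one-line check of the finiteness claim, using the formula above: since $\alpha < 0$ we have $|t|^{-\alpha} = |t|^{|\alpha|} \to 0$ as $t \to 0$, so $c \to B_\pm$ on both sides of the transition, a finite limiting value rather than a divergence.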
Lambda point
Physics,Chemistry,Materials_science
443
8,766,186
https://en.wikipedia.org/wiki/Martin%20Head-Gordon
Martin Philip Head-Gordon (né Martin Philip Head) is a professor of chemistry at the University of California, Berkeley, and Lawrence Berkeley National Laboratory working in the area of computational quantum chemistry. He is a member of the International Academy of Quantum Molecular Science. Education A native of Australia, Head-Gordon received his Bachelor of Science and Master of Science degrees from Monash University, followed by a PhD from Carnegie Mellon University under the supervision of John Pople, during which he developed a number of useful techniques, including the Head-Gordon-Pople scheme for the evaluation of integrals and the orbital rotation picture of orbital optimization. Career and research At Berkeley, Head-Gordon supervises a group interested in pairing methods, local correlation methods, dual-basis methods, scaled MP2 methods, new efficient algorithms, and, very recently, corrections to the Kohn-Sham density functional framework. Broadly speaking, wavefunction-based methods are the focus of his research. Head-Gordon is one of the founders of Q-Chem Inc. Awards and honors In 2015, Head-Gordon was elected a Member of the National Academy of Sciences. References Living people Members of the International Academy of Quantum Molecular Science Australian emigrants to the United States Carnegie Mellon University alumni UC Berkeley College of Chemistry faculty Fellows of the American Academy of Arts and Sciences 1962 births Computational chemists Theoretical chemists Schrödinger Medal recipients
Martin Head-Gordon
Chemistry
274
50,658,551
https://en.wikipedia.org/wiki/IC%204499
IC 4499 is a loose globular cluster in the constellation Apus. It is located in the medium-far galactic halo. Its apparent magnitude is 9.76. The cluster was thought to be unusual because it appears to be 3–4 billion years younger than most other globular clusters in the Milky Way, as determined by metallicity measurements in 1995. However, this was contradicted in 2011 by results that yielded a much older age of 12 billion years. As typical for very old globular clusters, IC 4499 contains two generations of stars. References Apus Globular clusters 4499
IC 4499
Astronomy
128
4,255,513
https://en.wikipedia.org/wiki/Distributed%20constraint%20optimization
Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized. Distributed Constraint Satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are described over some variables with predefined domains, and the different agents must assign those variables values that satisfy the constraints. Problems defined with this framework can be solved by any of the algorithms that are designed for it. The framework was used under different names in the 1980s. The first known usage with the current name is in 1990. Definitions DCOP The main ingredients of a DCOP problem are agents and variables. Importantly, each variable is owned by an agent; this is what makes the problem distributed. Formally, a DCOP is a tuple $\langle A, V, D, f, \alpha, \eta \rangle$, where: $A$ is the set of agents, $\{a_1, \dots, a_{|A|}\}$. $V$ is the set of variables, $\{v_1, \dots, v_{|V|}\}$. $D$ is the set of variable-domains, where each $D_j \in D$ is a finite set containing the possible values of variable $v_j$. If $D_j$ contains only two values (e.g. 0 or 1), then $v_j$ is called a binary variable. $f$ is the cost function. It is a function that maps every possible partial assignment to a cost. Usually, only a few values of $f$ are non-zero, and it is represented as a list of the tuples that are assigned a non-zero value. Each such tuple is called a constraint. Each constraint in this set is a function assigning a real value to each possible assignment of its variables. Some special kinds of constraints are: Unary constraints - constraints on a single variable, i.e., $f : D_j \to \mathbb{R}$ for some $v_j \in V$. Binary constraints - constraints on two variables, i.e., $f : D_{j_1} \times D_{j_2} \to \mathbb{R}$ for some $v_{j_1}, v_{j_2} \in V$. $\alpha$ is the ownership function. It is a function $\alpha : V \to A$ mapping each variable to its associated agent. $\alpha(v_j) = a_i$ means that variable $v_j$ "belongs" to agent $a_i$. This implies that it is agent $a_i$'s responsibility to assign the value of variable $v_j$. Note that $\alpha$ is not necessarily an injection, i.e., one agent may own more than one variable. It is also not necessarily a surjection, i.e., some agents may own no variables. $\eta$ is the objective function. It is an operator that aggregates all of the individual constraint costs for a given variable assignment. This is usually accomplished through summation: $\eta(f) = \sum_{s} f(s)$, summing over the constraints $s$. The objective of a DCOP is to have each agent assign values to its associated variables in order to either minimize or maximize $\eta(f)$ for a given assignment of the variables. Assignments A value assignment is a pair $\langle v_j, d_j \rangle$ where $d_j$ is an element of the domain $D_j$. A partial assignment is a set of value-assignments where each $v_j$ appears at most once. It is also called a context. This can be thought of as a function mapping variables in the DCOP to their current values: $t : V \to (D \cup \{\emptyset\})$. Note that a context is essentially a partial solution and need not contain values for every variable in the problem; therefore, $t(v_j) = \emptyset$ implies that the agent $\alpha(v_j)$ has not yet assigned a value to variable $v_j$. Given this representation, the "domain" (that is, the set of input values) of the function f can be thought of as the set of all possible contexts for the DCOP. Therefore, in the remainder of this article we may use the notion of a context (i.e., the $t$ function) as an input to the $f$ function. A full assignment is an assignment in which each $v_j$ appears exactly once, that is, all variables are assigned. It is also called a solution to the DCOP.
An optimal solution is a full assignment in which the objective function is optimized (i.e., maximized or minimized, depending on the type of problem). Example problems Various problems from different domains can be presented as DCOPs. Distributed graph coloring The graph coloring problem is as follows: given a graph $G = \langle N, E \rangle$ and a set of colors $C$, assign each vertex, $n \in N$, a color, $c_n \in C$, such that the number of adjacent vertices with the same color is minimized. As a DCOP, there is one agent per vertex that is assigned to decide the associated color. Each agent has a single variable whose associated domain is of cardinality $|C|$ (there is one domain value for each possible color). For each vertex $n \in N$, there is a variable $v_n$ with domain $D_n = C$. For each pair of adjacent vertices $\langle n_1, n_2 \rangle \in E$, there is a constraint of cost 1 if both of the associated variables are assigned the same color: $f(\langle v_{n_1}, c \rangle, \langle v_{n_2}, c \rangle) = 1$ for every $c \in C$ (and 0 otherwise). The objective, then, is to minimize $\eta(f)$ (a small worked sketch of this encoding appears below). Distributed multiple knapsack problem The distributed multiple-knapsack variant of the knapsack problem is as follows: given a set of items of varying volume and a set of knapsacks of varying capacity, assign each item to a knapsack such that the amount of overflow is minimized. Let $I$ be the set of items, $K$ be the set of knapsacks, $s : I \to \mathbb{N}$ be a function mapping items to their volume, and $c : K \to \mathbb{N}$ be a function mapping knapsacks to their capacities. To encode this problem as a DCOP, for each $i \in I$ create one variable $v_i$ with associated domain $D_i = K$. Then for all possible contexts $t$: $f(t) = \sum_{k \in K} \max(0,\, w(t,k) - c(k))$, where $w(t,k)$ represents the total weight assigned by context $t$ to knapsack $k$: $w(t,k) = \sum_{i \in I : t(v_i) = k} s(i)$. Distributed item allocation problem The item allocation problem is as follows. There are several items that have to be divided among several agents. Each agent has a different valuation for the items. The goal is to optimize some global goal, such as maximizing the sum of utilities or minimizing the envy. The item allocation problem can be formulated as a DCOP as follows. Add a binary variable $v_{ij}$ for each agent $i$ and item $j$. The variable value is "1" if the agent gets the item, and "0" otherwise. The variable is owned by agent $i$. To express the constraint that each item is given to at most one agent, add binary constraints for each two different variables related to the same item, with an infinite cost if the two variables are simultaneously "1", and a zero cost otherwise. To express the constraint that all items must be allocated, add an n-ary constraint for each item (where n is the number of agents), with an infinite cost if no variable related to this item is "1". Other applications DCOP has been applied to other problems, such as coordinating mobile sensors and meeting and task scheduling. Algorithms DCOP algorithms can be classified in several ways: Completeness - complete search algorithms finding the optimal solution, vs. local search algorithms finding a local optimum. Search strategy - best-first search or depth-first branch-and-bound search; Synchronization among agents - synchronous or asynchronous; Communication among agents - point-to-point with neighbors in the constraint graph, or broadcast; Communication topology - chain or tree. ADOPT, for example, uses best-first search, asynchronous synchronization, point-to-point communication between neighboring agents in the constraint graph, and a constraint tree as its main communication topology. Hybrids of these DCOP algorithms also exist. BnB-Adopt, for example, changes the search strategy of Adopt from best-first search to depth-first branch-and-bound search.
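The graph-coloring encoding above can be checked with a small, centralized brute-force sketch in Python. This is illustration only: it evaluates the objective over all full assignments, whereas actual DCOP algorithms such as ADOPT search this space distributedly, with each agent owning one variable.

```python
from itertools import product

# Graph-coloring DCOP: one variable per vertex, domain = the colors,
# and a cost-1 constraint for each edge whose endpoints share a color.
edges = [("a", "b"), ("b", "c"), ("a", "c")]            # a triangle
variables = sorted({n for edge in edges for n in edge})
colors = ["red", "green"]                               # 2 colors force cost >= 1

def eta(assignment):
    """Objective: sum of the constraint costs under a full assignment."""
    return sum(1 for n1, n2 in edges if assignment[n1] == assignment[n2])

# Centralized brute force over all full assignments (illustration only).
best = min((dict(zip(variables, values))
            for values in product(colors, repeat=len(variables))), key=eta)
print(best, "cost:", eta(best))  # any 2-coloring of a triangle costs exactly 1
```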
Asymmetric DCOP An asymmetric DCOP is an extension of DCOP in which the cost of each constraint may be different for different agents. Some example applications are: Event scheduling: agents who attend the same event might derive different values from it. Smart grid: the increase in the price of electricity in loaded hours may affect different agents differently. One way to represent an ADCOP is to represent the constraints as functions of the form $f : D_{j_1} \times \dots \times D_{j_k} \to \mathbb{R}^k$. Here, for each constraint there is not a single cost but a vector of costs - one for each agent involved in the constraint. The vector of costs is of length k if each variable belongs to a different agent; if two or more variables belong to the same agent, then the vector of costs is shorter - there is a single cost for each involved agent, not for each variable. Approaches to solving an ADCOP A simple way for solving an ADCOP is to replace each constraint $f$ with a constraint $f'$ that equals the sum of the components of $f$. However, this solution requires the agents to reveal their cost functions. Often, this is not desired due to privacy considerations. Another approach is called Private Events as Variables (PEAV). In this approach, each agent owns, in addition to its own variables, also "mirror variables" of all the variables owned by its neighbors in the constraint network. There are additional constraints (with a cost of infinity) that guarantee that the mirror variables equal the original variables. The disadvantage of this method is that the number of variables and constraints is much larger than the original, which leads to a higher run-time. A third approach is to adapt existing algorithms, developed for DCOPs, to the ADCOP framework. This has been done for both complete-search algorithms and local-search algorithms. Comparison with strategic games The structure of an ADCOP problem is similar to the game-theoretic concept of a simultaneous game. In both cases, there are agents who control variables (in game theory, the variables are the agents' possible actions or strategies). In both cases, each choice of variables by the different agents results in a different payoff to each agent. However, there is a fundamental difference: In a simultaneous game, the agents are selfish - each of them wants to maximize his/her own utility (or minimize his/her own cost). Therefore, the best outcome that can be sought for in such a setting is an equilibrium - a situation in which no agent can unilaterally increase his/her own gain. In an ADCOP, the agents are considered cooperative: they act according to the protocol even if it decreases their own utility. Therefore, the goal is more challenging: we would like to maximize the sum of utilities (or minimize the sum of costs). A Nash equilibrium roughly corresponds to a local optimum of this problem, while we are looking for a global optimum. Partial cooperation There are some intermediate models in which the agents are partially-cooperative: they are willing to decrease their utility to help the global goal, but only if their own cost is not too high. An example of partially-cooperative agents are employees in a firm. On one hand, each employee wants to maximize their own utility; on the other hand, they also want to contribute to the success of the firm. Therefore, they are willing to help others or do some other time-consuming tasks that help the firm, as long as it is not too burdensome on them.
Some models for partially-cooperative agents are: Guaranteed personal benefit: the agents agree to act for the global good if their own utility is at least as high as in the non-cooperative setting (i.e., the final outcome must be a Pareto improvement of the original state). Lambda-cooperation: there is a parameter $\lambda \in [0, 1]$. The agents agree to act for the global good if their own utility is at least as high as $\lambda$ times their non-cooperative utility. Solving such partial-cooperation ADCOPs requires adaptations of ADCOP algorithms. See also Constraint satisfaction problem Distributed algorithm Distributed algorithmic mechanism design Notes and references Books and surveys A chapter in an edited book. See Chapters 1 and 2; downloadable free online. Mathematical optimization Constraint programming
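For illustration, here is a tiny Python sketch of the asymmetric cost representation described above, using a hypothetical two-agent meeting-scheduling constraint (all names and numbers are invented for the example, not drawn from any DCOP library): the constraint returns a vector of per-agent costs, and the simple symmetric reduction replaces that vector by the sum of its components, which is exactly the "reveal the cost functions" approach discussed earlier.

```python
# An asymmetric binary constraint: a vector of costs, one per involved agent.
def meeting_cost(slot_i, slot_j):
    """Return (cost to agent i, cost to agent j) for the chosen slots."""
    if slot_i != slot_j:
        return (10, 10)                        # no meeting happens: both pay
    return (1 if slot_i == "early" else 3,     # agent i prefers early slots
            4 if slot_i == "early" else 2)     # agent j prefers late slots

def summed_constraint(slot_i, slot_j):
    """Symmetric reduction: replace the vector by the sum of its components."""
    return sum(meeting_cost(slot_i, slot_j))

for pair in [("early", "early"), ("late", "late"), ("early", "late")]:
    print(pair, meeting_cost(*pair), "summed:", summed_constraint(*pair))
```

The summed form loses the per-agent breakdown and hence the agents' privacy; PEAV-style encodings avoid that disclosure at the cost of mirror variables.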
Distributed constraint optimization
Mathematics
2,300
15,282,697
https://en.wikipedia.org/wiki/Technics%20%28brand%29
Technics is a Japanese audio brand established by Matsushita Electric (now Panasonic) in 1965. Since 1965, Matsushita has produced a variety of HiFi and other audio products under the brand name, such as turntables, amplifiers, radio receivers, tape recorders, CD players, loudspeakers, and digital pianos. Technics products were available for sale in various countries. The brand was originally conceived as a line of high-end audio equipment to compete against brands such as Nakamichi. From 2002 onwards products were rebranded as Panasonic except in Japan and CIS countries (such as Russia), where the brand remained in high regard. Panasonic discontinued the brand for most products in October 2010, but it was revived in 2015 with new high-end turntables. The brand is best known for the SL-1200 DJ turntable, an industry standard for decades. History Technics was introduced as a brand name for premium loudspeakers marketed domestically by Matsushita in 1965. The name came to wider prominence with the international sales of direct-drive turntables. The first direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita, based in Osaka. It eliminated belts, and instead employed a motor to directly drive a platter on which a vinyl record rests. It is a significant advancement over older belt-drive turntables, which are unsuitable for turntablism, since they have a slow start-up time, and are prone to wear-and-tear and breakage, as the belt would break from back spinning or scratching. In 1969, Matsushita launched Obata's invention as the SP-10, the first direct-drive turntable on the professional market. In 1971, Matsushita released the Technics SL-1100 for the consumer market. Due to its strong motor, durability, and fidelity, it was adopted by early hip hop artists. The SL-1100 was used by the influential DJ Kool Herc for the first sound system he set up after emigrating from Jamaica to the US. It was followed by the SL-1200, the most influential turntable. It was developed in 1971 by a team led by Shuichi Obata at Matsushita, which then released it onto the market in 1972. It was adopted by New York City hip hop DJs such as Grand Wizard Theodore and Afrika Bambaataa in the 1970s. As they experimented with the SL-1200 decks, they developed scratching techniques when they found that the motor would continue to spin at the correct RPM even if the DJ wiggled the record back and forth on the platter. The upgraded SL-1200 MK2 became a turntable widely used by DJs. A robust machine, the SL-1200 MK2 incorporated a pitch control mechanism (or vari-speed), and maintained a relatively constant speed with low variability, which proved popular with DJs. The SL-1200 series remained the most widely used turntable in DJ culture through to the 2000s. The SL-1200 model, often considered the industry standard turntable, continued to evolve with the M3D series, followed by the MK5 series in 2003. Despite being originally created to market their high-end equipment, by the early 1980s Technics was offering an entire range of equipment from entry-level to high-end. In 1972, Technics introduced the first autoreverse system in a cassette deck in its Technics RS-277US and in 1973 it introduced the first three-head recording technique in a cassette deck (Technics RS-279US). In 1976, Technics introduced two belt-driven turntables for the mass market, the SL-20 and SL-23.
The principal difference between the two models was the additional feature of semi-automatic operation in the SL-23, along with an adjustable speed control with built-in strobe light. They offered technical specifications and features rivaling much more expensive turntables, including well-engineered s-shaped tonearms with tracking weight and anti-skate adjustments. At the time they were introduced, the SL-20 and SL-23, which sold for US$100 and US$140 respectively, set a new performance standard for inexpensive turntables. The Technics brand was discontinued in 2010, but reappeared at the 2014 consumer electronics trade fair IFA. In January 2016, on the occasion of the 50th anniversary, the Technics SL-1200 returned as the Technics SL-1200 G. Notable products Early 1960s SX-601 Electronic Organ (1963) – the origin of the Technics SX keyboard series, the result of cooperative work between the National Electronic Organ Company (Panasonic group) and Ace Tone (precursor of Roland Corporation). After the 1970s, this product line was branded "Technitone" as a brother brand of Technics, and newer electronic musical instruments were branded Technics. EAB-1204 loudspeakers (1965) – premium loudspeakers, later renamed to SB-1204. Nicknamed "Technics 1", and referred to as the origin of the Technics brand. Late 1960s – early 1970s SP-10 Direct Drive Turntables (1969) – first direct-drive model for the professional market SL-1100 Direct Drive Turntables (1971) – for the consumer market SL-1200 Direct Drive Turntables (1972) – for the consumer market RS-277US Autoreverse Cassette Deck (1972) RS-279US Three-heads recording Cassette Deck (1973) SA-6800X 4 Channel Receiver (1973) - also branded as Panasonic and National Panasonic. Each had different front panel styling Mid-1970s SA-8500X The biggest quadraphonic receiver Technics ever built with integrated CD4 demodulation RS-858US quadraphonic 8-track player/recorder SH-3433 4-channel Quadraphonic Audioscope SA-50XX Budget amplifiers ranging from $150 (cheapest) to $600 (most expensive) SB-7000 Linear Phase 3 way loudspeaker (first Linear Phase speaker system in the world) SL-20 and SL-23 belt drive turntables. Their first belt drive series. Wooden (MDF) plinth. Intended as a cheaper alternative to their higher end direct drive. Main difference is that the SL-20 is completely manual without any automatic function or pitch control. The SL-23 is basically built on the same base, but has an auto return function, independent pitch control for 33 and 45 speeds and a stroboscope for 50 and 60 Hz. Also known as the SL-22 and SL-26, respectively, in some markets. Late 1970s RS-1500/1700 series of open-reel tape decks; SA-100/400/600/800/1000 receivers SL-1300, SL-1400, SL-1500, SL-1600, SL-1700, SL-1800 Direct Drive Turntables SL-1300MK2, SL-1400MK2, SL-1500MK2, SL-150MK2 (No Tonearm) Quartz Synthesizer Direct Drive Turntables "Professional Series" "New class A" Amplifier series launched featuring inter alia SE-A3/SE-A5 High Output Power Amplifiers SU-C01, SU-C03, SU-C04 amplifiers (a "concise" line of home audio consisting of amplifier, tuner and cassette deck) SB-F1, SB-F01, SB-F2 and SB-F3 monitor speakers (2-way, sealed casing, aluminum box speakers) SY-1010 Analog Synthesizer (1977) 9000 Professional Series: A series of stackable, rack-mountable units included the SE-9060 Amp, SU-9070 Pre-Amp, SH-9010 Equalizer, SH-9020 Meter Unit and ST-9030 Tuner. These "Pro Series" components replaced the earlier SE-9600 Amp, SU-9700 Pre-Amp and ST-9300/9600/9700 Tuner that were deemed too large.
The 9000 Pro Series was introduced because of demand for smaller, quality components. The European version of the Pro Series had a different faceplate from the US version: 18" vs. 19". Because of the narrower faceplate, the European version required special rack brackets to be rack mountable. The brackets came with the European version of the SH-905ST Professional Series rack. The only difference between this rack and the US version was the inclusion of those brackets. As a result, the brackets are ultra-rare and even the rack was sold in limited numbers in the USA. SB-10000 Loudspeaker: Top of the line Technics speaker at a cost of US$12,000. They featured a tweeter made of boron. A used pair sold for US$32,050 around 2010 in Germany. SE-A1 Amp: Top of the line Technics amp at a cost of US$6,000. SU-A2 Pre-Amp: Top of the line for Technics at a cost of US$8,000. SB-E100 and SB-E200 Loudspeakers: These were both designed with the SB-10000 in mind. The SB-E100 looked like the 10000 with the bass enclosure turned on its end with the mid/tweeter section mounted on top. The SB-E100 was made of MDF with rosewood veneer. The SB-E200 was made of rosewood and, while more similar in design to the SB-10000, it was virtually the same as the SB-E100 except for the bass box configuration and solid wood. The SB-E100 was designed to sit on the floor while the SB-E200 could sit on a table or pedestal. The SB-E100 had slightly better specs than the SB-E200 due to construction. Neither of them was released for the US market. RS-9900US Tape Deck: Top of the line tape deck at the time and quite at home with the 9600 Series components listed above. It was a two-piece behemoth that sold for $2,000 in 1977–78. RS-M95 Tape Deck: This deck replaced the 9900 in the same way as the 9000 Professional Series components replaced the 9600. It was much smaller, less expensive ($1400) and had better specs than the RS-9900US it replaced, resulting in better sound. Early 1980s SU-V3, V4, V5, V6, V7, V8 and V9 "new Class A" Stereo Integrated Amplifiers SE-A3MK2, SE-A5, SE-A5MK2, SE-A7 Power Amplifiers and SU-A4MK2, SU-A6 SU-A6MK2 and SU-A8 preamplifiers SV-P100 digital audio recorder (using VHS tapes). Also available as the SV-100, a stand-alone PCM adaptor requiring a separate VCR; cassette decks with dbx noise reduction SB-2155 3-Way Stereo Speakers [1982] SL-D212 Direct Drive Turntable [1982] SU-Z65 Stereo Integrated Amplifier [1982] SH-8015 Stereo Frequency Equalizer [1982] ST-Z45 Synthesizer FM/AM Stereo Tuner [1981] RS-M205 Cassette Deck [1980] RS-M216 Cassette Deck [1982] direct-drive linear tracking turntables SL-10, SL-15, SL-7, SL-6, SL-5, and SL-V5 (vertical) Mid-1980s Technitone E series (1983): one of the earliest PCM sampling organs in Japan SX-PV10 PCM Digital Piano (1984): one of the earliest PCM sampling pianos in Japan SL-J2: direct-drive turntable SY-DP50 PCM Digital Drum Percussion (1985) "Class AA" VC-4 stereo integrated amplifiers, starting with the SU-V40, V50 and V60 models (1986) The SL range of Direct Drive turntables, like the SL-5 1990s–2000s During the 1990s, Technics launched a successful series of mini hi-fi systems (SC-EH series, SC-CA SC-CH series and SC-DV series with CD player and surround sound) and in the late 90s, the very successful series of micro hi-fi systems, the SC-HD series (SC-HDV and SC-HDA, for series with DVD player and surround sound).
These were manufactured until 2004; after that, until 2005, they were branded Panasonic for the short time they were kept in production after the Technics brand was phased out. Technics had also created a 60+1 disc changer in 1998 under the SL-MC model line (the last model, the SL-MC7, being a 110+1 changer) that ran until 2002 across a total of 8 models before being discontinued, with the last 60+1 mechanisms featured in Panasonic mini hi-fi systems. The Technics badge was then relegated to turntables in 2005, including the low-cost SL-BD20/22 manufactured well into the 2000s, and some higher-quality headphones and speakers, although for a while the same model names appeared under both the Technics and Panasonic names in some countries. From 2002 onwards, receivers that had once been branded Technics were rebranded as Panasonic. Technics stopped manufacturing separates (CD players, cassette decks, tuners, amplifiers) in late 2001, but remained for a while in the home cinema market, with DVD players, receivers and speakers, until late 2002, when these were renamed Panasonic. From 2004 on, except for turntables, a series of headphones, and some DJ equipment, all audio products bore the Panasonic name rather than Technics. Also, by 2004, both SL-BD20/22 turntables were phased out. The two subwoofers listed below (SST-25/35HZ), along with the SST-1 loudspeakers, were not intended for home use. SST-25HZ Super Bass Exciter (Sub-Woofer), top of the line Technics sub SST-35HZ Super Bass Exciter (Sub-woofer), 1991 cost $2500 SST-1 Loudspeaker, 1991 cost $2000. These were meant to be mated with the SST-25HZ or 35HZ sub-woofers. hi-quality power amps, Mainstream receivers, Dolby Pro Logic receivers SX-KN series electronic keyboards, including the arranger keyboards KN3000, KN5000, KN6000 and KN7000, competing in the same market as the Yamaha Tyros SX-WSA1/SX-WSA1R Digital Synthesizer (1995), utilizing Acoustic Modeling synthesis (PCM sample + physical modeling resonator) Since 2014 Panasonic Corporation relaunched the Technics brand in late 2014, mainly because of increased market interest in high-end hi-fi and renewed interest in vinyl. The brand was relaunched with a series of amplifiers, speakers and micro hi-fi systems, but no turntables were yet available. The turntables were relaunched in 2016. As written above, in 2016, on the occasion of the 50th anniversary of the SL-1200, Technics came back with the SL-1200 G. Around 2017 a notable digital amplifier, the SU-G700, was announced. Among the brand's most successful products are the newly launched SL1500-C turntable series, the Ottava micro hi-fi series, and its active speaker series. The SL1200 is also successful. The Technics SL1500-C was launched as an alternative to the SL1200 series, being aimed at home use rather than DJ use. It has a quartz speed stabilizer, but no variable pitch control and no stroboscope for speed adjustment. Like the SL1200, it is manual; it only has an arm-lift feature at the end of the record, which can be deactivated. It is available in silver and black versions. It has a built-in preamplifier, which can be completely deactivated if not needed. It also has a heavy damped platter. In the tradition of Technics, the SL1500-C is a direct-drive turntable. It is different, however, from the SL1500 models of the 1970s, and it is not manufactured in Japan like its bigger brothers, the SL1200 and the SP10, but in Malaysia.
In 2021, the production of all Technics turntables was moved to Malaysia. Although Technics previously manufactured a series of belt-drive turntables (mainly cheaper versions), no new belt-drive turntables from Technics are available now, and it seems that Technics will not launch a new belt-drive series. Technics also launched a successful series of wireless headphones, both earbud and over-ear types. As of 2022, the earbud series are: EAH-AZ40, EAH-AZ60, EAH-AZ70 and EAH-AZ80. The over-ear series are EAH-A800 and EAH-F70. All of them, except for the EAH-F70, can be controlled with an application from Technics. All of them have noise canceling. The EAH-F70 seems to be discontinued, although still available. The EAH-F70 and EAH-A800 models can also operate as wired headphones, in which case the microphone and active noise canceling features are lost. See also List of phonograph manufacturers References External links Official sites General Technics DJ home page Technics Musical Instruments home page Technics Hi-Fi Audio The Exclusive Online Audio Museum "TheVintageKnob" with Technics Audio Products History (1960–2000) Panasonic Corporation brands Consumer electronics brands Headphones manufacturers Loudspeaker manufacturers Phonograph manufacturers Products introduced in 1965 DJ equipment Japanese brands Panasonic products Radio manufacturers
Technics (brand)
Engineering
3,851
33,653,162
https://en.wikipedia.org/wiki/Booster%20pump
A booster pump is a machine which increases the pressure of a fluid. It may be used with liquids or gases, and the construction details vary depending on the fluid. A gas booster is similar to a gas compressor, but generally a simpler mechanism which often has only a single stage of compression, and is used to increase pressure of a gas already above ambient pressure. Two-stage boosters are also made. Boosters may be used for increasing gas pressure, transferring high pressure gas, charging gas cylinders and scavenging. Water pressure On new construction and retrofit projects, water pressure booster pumps are used to provide adequate water pressure to upper floors of high-rise buildings. The need for a water pressure booster pump can also arise after the installation of a backflow prevention device (BFP), which is currently mandated in many municipalities to prevent contaminants within a building from entering the public water supply. The use of BFPs began after The Clean Water Act was passed. These devices can cause a loss of 12 PSI, and can cause flushometers on upper floors not to work properly. After pipes have been in service for an extended period, scale can build up on the inside surfaces, which will cause a pressure drop when the water flows. Water pressure booster construction and function Booster pumps for household water pressure are usually simple electrically driven centrifugal pumps with a non-return valve. They may be constant speed pumps which switch on when pressure drops below the low pressure set-point and switch off when pressure reaches the high set-point, or variable speed pumps which are controlled to maintain a constant output pressure. Constant speed pumps are switched on by a normally closed low-pressure switch and will continue to run until the pressure rises to open the high pressure switch. They will cycle whenever enough water is used to cause a pressure drop below the low set point. An accumulator in the upstream pipeline will reduce cycling. Variable speed pumps use pressure feedback to electronically control motor speed to maintain a reasonably constant discharge pressure. Most applications run off AC mains current and use an inverter to control motor speed. Installations that provide water to high-rise buildings may need boosters at several levels to provide acceptably consistent pressure on all floors. In such a case independent boosters may be installed at various levels, each boosting the pressure provided by the next lower level. It is also possible to boost once to the maximum pressure required, and then to use a pressure reducer at each level. This method would be used if there is a holding tank on the roof with gravity feed to the supply system. Fire sprinkler booster pumps Multi-story buildings equipped with fire sprinkler systems may require a large booster pump to deliver sufficient water pressure and volume to upper floors in the event of a fire. Such pumps are often powered by a diesel engine dedicated to this purpose. The engine needs a fuel tank and an automatic controller that will start the booster pump when it is needed. A small auxiliary electrically-powered booster pump (called a "jockey pump") is often included in the system to maintain the sprinkler pipes at sufficient pressure, without requiring startup of the large diesel engine. Any emergency system must be periodically tested and maintained to ensure its reliability.
A diesel engine must be started and operated for testing, and a battery bank for the starting motor must be maintained or replaced periodically. In recent years, a larger electrical pump with substantial battery backup may be substituted for the diesel engine, reducing but not eliminating the need for maintenance. Gas pressure Gas pressure boosting may be used to fill storage cylinders to a higher pressure than the available gas supply, or to provide production gas at pressure higher than line pressure. Examples include: Breathing gas blending for underwater diving where the gas is to be supplied from high-pressure cylinders, as in scuba, scuba replacement and surface-supplied mixed gas diving, where the component gases are blended by partial pressure addition to the storage cylinders, and the mixture storage pressure may be higher than the available pressure of the components. Helium reclaim systems, where the heliox breathing gas exhaled by a saturation diver is piped back to the surface, oxygen is added to make up the required composition, and the gas is boosted to the appropriate supply pressure, filtered, scrubbed of carbon dioxide, and returned to the gas distribution panel to be supplied to the diver again, or returned to high pressure storage Workshop compressed air is usually provided at a pressure suited to the majority of the applications, but some may need a higher pressure. A small booster can be effective to provide this air. Gas booster construction and function Gas booster pumps are usually piston or plunger type compressors. A single-acting, single-stage booster is the simplest configuration, and comprises a cylinder, designed to withstand the operating pressures, with a piston which is driven back and forth inside the cylinder. The cylinder head is fitted with supply and discharge ports, to which the supply and discharge hoses or pipes are connected, with a non-return valve on each, constraining flow in one direction from supply to discharge. When the booster is inactive, and the piston is stationary, gas will flow from the inlet hose, through the inlet valve into the space between the cylinder head and the piston. If the pressure in the outlet hose is lower, it will then flow out and to whatever the outlet hose is connected to. This flow will stop when the pressure is equalized, taking valve opening pressures into account. Once the flow has stopped, the booster is started, and as the piston withdraws along the cylinder, increasing the volume between the cylinder head and the piston crown, the pressure in the cylinder will drop, and gas will flow in from the inlet port. On the return cycle, the piston moves toward the cylinder head, decreasing the volume of the space and compressing the gas until the pressure is sufficient to overcome the pressure in the outlet line and the opening pressure of the outlet valve. At that point, the gas will flow out of the cylinder via the outlet valve and port. There will always be some compressed gas remaining in the cylinder and cylinder head spaces at the top of the stroke. The gas in this "dead space" will expand during the next induction stroke, and only after it has dropped below the supply gas pressure, more supply gas will flow into the cylinder. The ratio of the volume of the cylinder space with the piston fully withdrawn, to the dead space, is the "compression ratio" of the booster, also termed "boost ratio" in this context. 
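A rough feel for the dead-space effect just described can be had with a simplified, isothermal ideal-gas estimate; this is an illustrative model, not a manufacturer's sizing formula. Residual gas left in the dead space at discharge pressure must re-expand during the intake stroke before the inlet valve can open, so fresh gas intake shrinks as the pressure ratio rises.

```python
def delivered_volume(swept, dead, p_supply, p_discharge):
    """Per-stroke intake (measured at supply pressure) for an idealized,
    isothermal booster: residual gas in the dead space, left at discharge
    pressure, re-expands (Boyle's law) before the inlet valve opens."""
    residual_at_supply = dead * p_discharge / p_supply  # p1*V1 = p2*V2
    return max(0.0, (swept + dead) - residual_at_supply)

swept, dead = 100.0, 5.0           # cm^3; boost ratio = (100 + 5) / 5 = 21
for ratio in (1, 5, 10, 15, 21):   # discharge-to-supply pressure ratio
    print(ratio, round(delivered_volume(swept, dead, 1.0, ratio), 1))
# Intake falls from 100.0 cm^3 at ratio 1 to 0.0 at the boost ratio of 21.
```

This matches the behavior described in the next paragraph: delivery starts near the swept volume at equal pressures and drops to nothing as the pressure ratio approaches the boost ratio.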
The efficiency of the booster is related to the compression ratio: gas will only be transferred while the pressure ratio between discharge and supply gas is less than the boost ratio, and the delivery rate will drop as the inlet-to-delivery pressure ratio increases. The delivery rate starts at very close to the swept volume when there is no pressure difference, and drops steadily until there is no effective transfer when the pressure ratio reaches the maximum boost ratio. Compression of gas will cause a rise in temperature. The heat is mostly carried out by the compressed gas, but the booster components will also be heated by contact with the hot gas. Some boosters are cooled by water jackets or external fins to increase convectional cooling by the ambient air, but smaller models may have no special cooling facilities at all. Cooling arrangements will improve efficiency, but will cost more to manufacture. Boosters to be used with oxygen must be made from oxygen-compatible materials, and use oxygen-compatible lubricants, to avoid fire. Configurations Single stage, single acting: There is one booster cylinder, which pressurizes the gas in one direction of piston movement, and refills the cylinder on the return stroke. Single stage, double acting: There are two booster cylinders, which operate alternately, with each one pressurizing gas while the other is refilling. The cylinders each pressurize gas fed directly from the supply, and the delivered gas from each is combined at the outlets. The cylinders work in parallel and have the same bore. Two stage, double acting: There are two cylinders, which operate alternately, each pressurizing gas while the other is refilling, but the second stage has a smaller bore and is filled by the gas pressurized by the first stage, which it pressurizes further. The stages operate in series, and the gas passes through both of them in turn. Power sources Gas boosters may be driven by an electric motor, hydraulics, low or high pressure air, or manually by a lever system. Compressed air Those powered by compressed air are usually linear actuated systems, where a pneumatic cylinder directly drives the compression piston, often in a common housing, separated by one or more seals. A high pressure pneumatic drive arrangement may use the same pressure as the output pressure to drive the piston, and a low pressure drive will use a larger diameter piston to multiply the applied force. Low pressure air A common arrangement for low pressure air powered boosters is for the booster pistons to be direct coupled with the drive piston, on the same centreline. The low pressure cylinder has a considerably larger section area than the high pressure cylinders, in proportion to the pressure ratio between the drive and boosted gas. A single action booster of this type has a boost cylinder on one end of the power cylinder, and a double action booster has a boost cylinder on each end of the power cylinder, and the piston rod has a drive piston in the middle and a booster piston on each end. Oxygen boosters require some design features which may not be necessary in boosters for less reactive gases. It is necessary to ensure that drive air, which may not be sufficiently clean for safe contact with high pressure oxygen, cannot leak past the seals into the booster cylinder, and that high pressure oxygen cannot leak into the drive cylinder.
This can be done by providing a space between the low pressure cylinder and high pressure cylinder that is vented to atmosphere, with the piston rod sealed on each side where it passes through this space. Any gas that leaks from either cylinder past the rod seals escapes harmlessly into the ambient air. A special case for gas powered boosters is where the booster uses the same gas supply to power the booster and as the gas to be boosted. This arrangement is wasteful of gas and is most suitable for providing small quantities of higher pressure air where large quantities of lower pressure air are already available. This system is sometimes known as a "bootstrap" booster. High pressure Electrical Electrically powered boosters may use a single or three-phase AC motor drive. The high speed rotational output of the motor must be converted to lower speed reciprocating motion of the pistons. One way this has been done (Dräger and Russian KN-3 and KN-4 military boosters) is to connect the motor to a worm drive gearbox with an eccentric output shaft driving a connecting rod which drives the double-ended piston via a central trunnion. This system is well suited to a double acting booster, either with single-stage boost by parallel connected cylinders with the same bore, or two-stage cylinders of different bores connected in series. Some of these boosters allow the connecting rod to be disconnected and a pair of long levers to be fitted for manual operation in emergencies or where electrical power is not available. Manual Manual boosters have been made with the configuration described above, either with a single vertical lever or with a seesaw-style double-ended horizontal lever, and also with two parallel vertically mounted cylinders, much like the lever-operated diver's air pumps used for the early standard diving dress, but with much smaller bore to allow two operators to generate high pressures. Manufacturers High pressure gas boosters are manufactured by Haskel, MPS Technology, Dräger, Gas Compression Systems and others. Rugged and unsophisticated models (KN-3 and KN-4) were manufactured for the Soviet Armed Forces and surplus examples are now used by technical divers as they are relatively inexpensive and are supplied with a comprehensive spares and tool kit. References Gas compressors Gases Diving support equipment
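As a supplementary sketch of the constant-speed water booster pumps described earlier: the on/off pressure-switch behaviour is a simple hysteresis (bang-bang) control loop. The set-points and pressures below are assumed example values, not from any particular controller.

```python
def pump_switch_state(pressure, pump_on, low_setpoint=300.0, high_setpoint=450.0):
    """Hysteresis control for a constant-speed booster pump (pressures in kPa).

    The pump starts when pressure falls below the low set-point and keeps
    running until pressure rises above the high set-point; between the two
    set-points it keeps its previous state, which is what prevents rapid
    cycling around a single threshold (an accumulator widens this further).
    """
    if pressure < low_setpoint:
        return True        # normally closed low-pressure switch starts the pump
    if pressure > high_setpoint:
        return False       # high-pressure switch stops the pump
    return pump_on         # inside the dead band: no change

# Example trace: pressure sagging as water is drawn, then recovering
pump_on = False
for p in (420, 360, 290, 310, 400, 460, 430):
    pump_on = pump_switch_state(p, pump_on)
    print(f"{p} kPa -> pump {'ON' if pump_on else 'off'}")
```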
Booster pump
Physics,Chemistry
2,446
2,932,253
https://en.wikipedia.org/wiki/Filter%20factor
In photography, filter factor refers to the multiplicative amount of light a filter blocks. Converting between filter factors and stops The table below illustrates the relationship between filter factor, the amount of light that is allowed through the filter, and the number of stops this corresponds to. Calculating exposure increase The number of f-stops of light reduction, given a filter factor, may be calculated using the formula: stops = log₂(filter factor). Most calculators do not have a log₂ function. An equivalent calculation is: stops = log(filter factor) / log(2), or stops = ln(filter factor) / ln(2). An example: A green filter with a filter factor of 4 gives log₂(4) = 2, so the filter factor of 4 yields a 2 f-stop light reduction. The filter factor, given the exposure change in f-stops, may be calculated using the formula: filter factor = 2^stops. An example: A deep red filter with an f-stop change of 3 stops gives 2³ = 8, so a change of 3 f-stops is equivalent to a filter factor of 8. As a consequence of this relationship, filter factors should be multiplied together when such filters are stacked, as opposed to stop adjustments, which should be added together. Filter factors for common filters The table below gives approximate filter factors for a variety of common photographic filters. It is important to note that filter factors are highly dependent on the spectral response curve of the film being used. Thus, filter factors provided by the film manufacturer should be preferred over the ones documented below. Furthermore, note well that these factors are for daylight color temperature (5600 K); when shooting under a different color temperature of ambient light, these values will most likely be incorrect. See also Filter (photography) Filter (optics) Wratten number Exposure (photography) F-number References Notes Further reading Hoya Corporation, Filters for imaging Cokin S.A., Cokin Creative Filter System Optical filters
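The conversions above, and the rule that stacked filter factors multiply while stops add, are easy to verify numerically. A minimal sketch using Python's standard math module; the filter values are the illustrative ones from the article's examples.

```python
import math

def stops_from_factor(filter_factor):
    """f-stop reduction for a given filter factor: stops = log2(factor)."""
    return math.log2(filter_factor)   # equivalently log(factor)/log(2)

def factor_from_stops(stops):
    """Filter factor for a given f-stop change: factor = 2**stops."""
    return 2 ** stops

print(stops_from_factor(4))   # green filter, factor 4 -> 2.0 stops
print(factor_from_stops(3))   # deep red filter, 3 stops -> factor 8

# Stacking: factors multiply, stops add. A factor-4 filter over a
# factor-2 filter gives a combined factor of 8, i.e. 2 + 1 = 3 stops.
combined = 4 * 2
assert stops_from_factor(combined) == stops_from_factor(4) + stops_from_factor(2)
print(combined, stops_from_factor(combined))  # 8, 3.0
```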
Filter factor
Chemistry
350
2,387,951
https://en.wikipedia.org/wiki/Fuel%20system%20icing%20inhibitor
Fuel system icing inhibitor (FSII) is an additive to aviation fuels that prevents the formation of ice in fuel lines. FSII is sometimes referred to by the registered, genericized trademark Prist. Jet fuel can contain a small amount of dissolved water that does not appear in droplet form. As an aircraft gains altitude, the temperature drops and the fuel's capacity to hold water is diminished. Dissolved water can separate out and could become a serious problem if it freezes in fuel lines or filters, blocking the flow of fuel and shutting down an engine. Chemical composition Chemically, FSII is an almost pure (99.9%) ethylene glycol monomethyl ether (EGMME, 2-methoxy ethanol, APISOLVE 76, CAS number ); or since 1994, diethylene glycol monomethyl ether (DEGMME, 2-(2-methoxy ethoxy) ethanol, APITOL 120, methyl carbitol, CAS number ). Prior to 1994, Prist was regulated under the MIL-I-27686E standard, which specified use of EGMME, but it subsequently came under MIL-DTL-85470B, which specifies the less hazardous DEGMME, with its higher flash point. FSII was thought to retard the growth of any microorganisms present in the fuel, mostly Cladosporium resinae fungi and Pseudomonas aeruginosa bacteria, known as "hydrocarbon utilizing microorganisms" or "HUM bugs", which live at the water-fuel interface of the water droplets, form dark, gel-like mats, and cause microbial corrosion to plastic and rubber parts, but this claim has since been removed from labelling. EGMME had been certified as a pesticide by the EPA, but as regulatory changes raised the certification costs, DEGMME has no official pesticide certification. DEGMME is a potent solvent, and at high concentrations can damage fuel bladders and filters. Long-term storage of FSII-fuel mixtures is therefore not recommended. Anhydrous isopropyl alcohol is sometimes used as an alternative. Purpose FSII is an agent that is mixed with jet fuel as it is pumped into the aircraft. The mixture of FSII must be between 0.10% and 0.15% by volume for the additive to work correctly, and the FSII must be distributed evenly throughout the fuel. Simply adding FSII after the fuel has been pumped is therefore not sufficient. As an aircraft climbs after takeoff, the temperature drops, and any dissolved water will separate out from the fuel. FSII dissolves preferentially in the water rather than in the jet fuel, where it serves to depress the freezing point of the water to −43 °C. Since the freezing point of jet fuel itself is usually in this region, the formation of ice is now a minimal concern. Large aircraft do not require FSII as they are usually equipped with electric fuel line heaters or fuel/oil intercoolers that keep the fuel at an appropriate temperature to prevent icing. However, if the fuel heaters are inoperable, the aircraft may still be declared fit to fly if FSII is added to the fuel. Storage and dispensing It is extremely important to store FSII properly. Drums containing FSII must be kept clean and dry, since the additive is hygroscopic and can absorb water directly from moisture in the air. Since some brands of FSII are highly toxic, a crew member must wear gloves when handling it undiluted. Many FBOs allow FSII injection to be turned on or off so that one fuel truck can service planes that require FSII as well as planes that don't. Line crew, however, must be able to deliver FSII when it is needed. References Aviation fuels
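A minimal sketch of the blend arithmetic implied by the 0.10–0.15% by volume requirement above; the fuel quantity is a made-up example value, not from any fueling manual, and the fraction is taken of the fuel volume as a close approximation at these small concentrations.

```python
def fsii_volume_range(fuel_litres, min_frac=0.0010, max_frac=0.0015):
    """Return the (min, max) volume of FSII, in litres, for a fuel load.

    The additive must make up 0.10-0.15% by volume and must be injected
    as the fuel is pumped so that it is evenly distributed.
    """
    return fuel_litres * min_frac, fuel_litres * max_frac

low, high = fsii_volume_range(2000.0)   # a hypothetical 2,000 L uplift
print(f"FSII required: {low:.1f} to {high:.1f} litres")  # 2.0 to 3.0 litres
```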
Fuel system icing inhibitor
Engineering
813
42,488,319
https://en.wikipedia.org/wiki/Cite%20%28magazine%29
Cite: The Architecture and Design Magazine of Houston is a quarterly magazine published by the Rice Design Alliance, a program of the Rice University School of Architecture. History and profile Cite was established in 1982. Its topics include architecture, urban planning, historical preservation, and the arts. The magazine was established to provide coverage of architectural criticism that had hitherto been absent in publications. Barrie Scardino, William F. Stern, and Bruce C. Webb, the editors of the book Ephemeral City, a collection of essays from Cite, stated that the magazine had a "tough love" attitude towards the City of Houston. The writers are or were from Rice University and the University of Houston, and either held formal academic positions or otherwise were considered intellectuals in the field of architecture. In 2005 Judith K. De Jong of the University of Illinois at Chicago wrote that "That such an initiative has not only lasted, but has also thrived, is testament to the importance of such a publication about Houston, and, by extension, about places like Houston." According to De Jong, the fact that the publication caters to ordinary people as well as specialists, its comprehensive coverage of topics, and its "excellent, provocative writing and criticism" contributed to its "longevity". Derivative works Ephemeral City re-published twenty-five Cite essays originally published from 1982 to 2000. See also Magazines in Houston References Further reading Scardino, Barrie, William F. Stern, and Bruce C. Webb (editors), Foreword: Peter G. Rowe. Ephemeral City: Cite Looks at Houston. University of Texas Press, December 1, 2003. ISBN 9780292701878. External links 1982 establishments in Texas Architecture magazines Design magazines English-language magazines Magazines established in 1982 Magazines published in Houston Quarterly magazines published in the United States Visual arts magazines published in the United States
Cite (magazine)
Engineering
377
881,148
https://en.wikipedia.org/wiki/Monel
Monel is a group of alloys of nickel (from 52 to 68%) and copper, with small amounts of iron, manganese, carbon, and silicon. Monel is not a cupronickel alloy because it has less than 60% copper. Stronger than pure nickel, Monel alloys are resistant to corrosion by many aggressive agents, including rapidly flowing seawater. They can be fabricated readily by hot- and cold-working, machining, and welding. Monel was created in 1905 by Robert Crooks Stanley, who at the time worked at the International Nickel Company (Inco). Monel was named after company president Ambrose Monell, and patented in 1906. One L was dropped, because family names were not allowed as trademarks at that time. The trademark was registered in May 1921, and it is now a property of the Special Metals Corporation. As an expensive alloy, it tends to be used in applications where it cannot be replaced with cheaper alternatives. For example, in 2015 Monel piping was more than three times as expensive as the equivalent piping made from carbon steel. Properties Monel is a solid-solution binary alloy. As nickel and copper are mutually soluble in all proportions, it is a single-phase alloy. Compared to steel, Monel is very difficult to machine as it work-hardens very quickly. It needs to be turned and worked at slow speeds and low feed rates. It is resistant to corrosion and acids, and some alloys can withstand a fire in pure oxygen. It is commonly used in applications with highly corrosive conditions. Small additions of aluminium and titanium form an alloy (K-500) with the same corrosion resistance but with much greater strength due to gamma prime formation on aging. Monel is typically much more expensive than stainless steel. Monel alloy 400 has a specific gravity of 8.80, a melting range of 1300–1350 °C, an electrical conductivity of approximately 34% IACS, and (in the annealed state) a hardness of 65 Rockwell B. Monel alloy 400 is notable for its toughness, which is maintained over a considerable range of temperatures. Monel alloy 400 has excellent mechanical properties at subzero temperatures. Strength and hardness increase with only slight impairment of ductility or impact resistance. The alloy does not undergo a ductile-to-brittle transition even when cooled to the temperature of liquid hydrogen. This is in marked contrast to many ferrous materials which are brittle at low temperatures despite their increased strength. Uses Aerospace applications In the 1960s, Monel metal found bulk uses in aircraft construction, especially in making the frames and skins of experimental rocket planes, such as the North American X-15, to resist the great heat generated by aerodynamic friction during extremely high speed flight. Monel metal retains its strength at very high temperatures, allowing it to maintain its shape at high atmospheric flight speeds, a trade-off against the increased weight of the parts due to Monel's high density. Monel is used for safety wiring in aircraft maintenance to ensure that fasteners cannot come undone, usually in high-temperature areas; stainless wire is used in other areas for economy. In addition some fasteners used are made from the alloy. Oil production and refining Monel is used in the section of alkylation units in direct contact with concentrated hydrofluoric acid. Monel offers exceptional resistance to hydrofluoric acid in all concentrations up to the boiling point. It is perhaps the most resistant of all commonly used engineering alloys. 
The alloy is also resistant to many forms of sulfuric and hydrochloric acids under reducing conditions. Marine applications Monel's corrosion resistance makes it ideal in applications such as piping systems, pump shafts, seawater valves, trolling wire, and strainer baskets. Some alloys are completely non-magnetic and are used for anchor cable aboard minesweepers or in housings for magnetic-field measurement equipment. In recreational boating, Monel is used for wire to seize shackles for anchor ropes, for water and fuel tanks, and for underwater applications. It is also used for propeller shafts and for keel bolts. On the popular Hobiecat sailboats, Monel rivets are used where strength is needed but stainless steel cannot be used due to corrosion that would result from stainless steel being in contact with the aluminum mast, boom, and frame of the boat in a saltwater environment. Because of the problem of electrolytic action in salt water (also known as Galvanic corrosion), in shipbuilding Monel must be carefully insulated from other metals such as steel. The New York Times on August 12, 1915 published an article about a 215-foot yacht, "the first ship that has ever been built with an entirely Monel hull," that "went to pieces" in just six weeks and had to be scrapped, "on account of the disintegration of her bottom by electrical action." The yacht's steel skeleton deteriorated due to electrolytic interaction with the Monel. In seabird research, and bird banding or ringing in particular, Monel has been used to make bird bands or rings for many species, such as albatrosses, that live in a corrosive sea water environment. Musical instruments Monel is used as the material for valve pistons or rotors in some higher-quality musical instruments such as trumpets, tubas and French horns. RotoSound introduced the use of Monel for electric bass strings in 1962, and these strings have been used by numerous artists, including Steve Harris of Iron Maiden, The Who, Sting, John Deacon, John Paul Jones and the late Chris Squire. Monel was in use in the early 1930s by other musical string manufacturers, such as Gibson Guitar Corporation, who continue to offer them for mandolin as the Sam Bush signature set. Also, C.F. Martin & Co. uses Monel for their Martin Retro acoustic guitar strings. The Pyramid string factory (Germany) produces 'Monel classics' electric guitar strings, wound on a round core. In 2017, D'Addario string company released a line of violin strings using a Monel winding on the D and G string. Other Good resistance against corrosion by acids and oxygen makes Monel a good material for the chemical industry. Even corrosive fluorides can be handled within Monel apparatus; this was done in an extensive way in the enrichment of uranium in the Oak Ridge Gaseous Diffusion Plant. Here most of the larger-diameter tubing for the uranium hexafluoride was made from Monel. Regulators for reactive cylinder gases like hydrogen chloride form another example, where PTFE is not a suitable option when high delivery pressures are required. These will sometimes include a Monel manifold and taps prior to the regulator that allow the regulator to be flushed with a dry, inert gas after use to further protect the equipment. In the early 20th century, when steam power was widely used, Monel was advertised as being desirable for use in superheated steam systems. During the world wars, Monel was used for US military dog tags. Monel is often used for kitchen sinks and in the frames of eyeglasses. 
It has also been used for firebox stays in fire-tube boilers. Parts of the Clock of the Long Now, which is intended to run for 10,000 years, are made from Monel because of its corrosion resistance without the use of precious metals. Monel was used for much of the exposed metal in the interior of the Bryn Athyn Cathedral in Pennsylvania, religious seat of the General Church of the New Jerusalem. This included large decorative screens, doorknobs, etc. Monel also has been used as roofing material in buildings such as the original Pennsylvania Station in New York City. The 1991–1996 Acura (Honda) NSX came with a key made of Monel. Oilfield applications include using Monel drill collars. Instruments which measure the Earth's magnetic field to obtain a direction are placed in a non-magnetic collar which isolates them from the magnetic pull of drilling tools located above and below the non-magnetic collars. Monel is now rarely used for this, usually replaced by non-magnetic stainless steels. Monel is also used as a protective binding material on the outside of western style stirrups. Monel is used by Arrow Fastener Co., Inc. for rustproof T50 staples. Monel has also been used in Kelvinator refrigerators. Monel was used in the Baby Alice Thumb Guard, a 1930s-era anti-thumb-sucking device. Monel is used in motion picture film processing. Monel staple splices are ideal for resisting corrosion from use in continuous-run photochemical tanks. Monel was latterly widely used to manufacture firebox stays in steam locomotive boilers. Alloys Monel is often traded under the ISO standards 6208 (plate, sheet and strip), 9723 (bars), 9724 (wire) and 9725 (forgings), and the DIN standard 17751 (pipes and tubes). Monel 400 Monel 400 shows high strength and excellent corrosion resistance in a range of acidic and alkaline environments and is especially suitable for reducing conditions. It also has good ductility and thermal conductivity. Monel 400 typically finds application in marine engineering, chemical and hydrocarbon processing, heat exchangers, valves, and pumps. It is covered by the following standards: BS 3075, 3076 NA 13, DTD 204B and ASTM B164. Large use of Monel 400 is made in alkylation units, namely in the reacting section in contact with concentrated hydrofluoric acid. Monel 401 This alloy is designed for use in specialized electric and electronic applications. Alloy 401 is readily autogenously welded by the gas-tungsten-arc process. Resistance welding is a very satisfactory method for joining the material. It also exhibits good brazing characteristics. It is covered by standard UNS N04401. Monel 404 Monel 404 alloy is used primarily in specialized electrical and electronic applications. The composition of Monel 404 is carefully adjusted to provide a very low Curie temperature, low permeability, and good brazing characteristics. Monel 404 can be welded using common welding techniques and forged, but cannot be hot worked. Cold working may be done using standard tooling and soft die materials for better finish. It is covered by standards UNS N04404 and ASTM F96. Monel 404 is used in capsules for transistors, ceramic-to-metal seals, and similar components. Monel 405 Monel alloy 405, also known as Monel R405, is the free-machining grade of alloy 400. The nickel, carbon, manganese, iron, silicon and copper percentages remain the same as in alloy 400, but the sulfur is increased from 0.024% maximum to 0.025–0.060%. Alloy 405 is used chiefly for automatic screw machine stock and is not generally recommended for other applications.
The nickel–copper sulfides resulting from the sulfur in its composition act as chip breakers, but because of these inclusions the surface finish of the alloy is not as smooth as that of alloy 400. Monel 405 is designated UNS N04405 and is covered by ASME SB-164, ASTM B-164, Federal QQ-N-281, SAE AMS 4674 & 7234, Military MIL-N-894, and NACE MR-01-75. Monel 450 This alloy exhibits good fatigue strength and relatively high thermal conductivity. It is used for seawater condensers, condenser plates, distiller tubes, evaporator and heat exchanger tubes, and saltwater piping. Monel K-500 Monel K-500 combines the excellent corrosion resistance characteristic of Monel alloy 400 with the added advantages of greater strength and hardness. The increased properties are obtained by adding aluminium and titanium to the nickel–copper base, and by heating under controlled conditions so that submicroscopic particles of Ni3(Ti, Al) are precipitated throughout the matrix. The corrosion resistance of Monel alloy K-500 is substantially equivalent to that of alloy 400, except that, when in the age-hardened condition, alloy K-500 has a greater tendency toward stress-corrosion cracking in some environments. Monel alloy K-500 has been found to be resistant to a sour-gas environment. The combination of very low corrosion rates in high-velocity sea water and high strength makes alloy K-500 particularly suitable for shafts of centrifugal pumps in marine service. In stagnant or slow-moving sea water, fouling may occur, followed by pitting, but this pitting slows down after a fairly rapid initial attack. Typical applications for alloy K-500 are pump shafts and impellers, doctor blades and scrapers, and oil-well drill collars, instruments, and electronic components. It is also used in components for power plants, such as steam-turbine blades, heat exchangers, and condenser tubes. In the marine industry, it is utilized in components for marine hardware, propeller shafts, pump shafts and seawater valves exposed to harsh marine environments. Monel 502 Monel 502 is a nickel–copper alloy; its UNS number is N05502. This grade also has good creep and oxidation resistance. Monel 502 can be formed into different shapes, and can be machined similarly to austenitic stainless steels. See also Hastelloy Inconel References Citations General and cited references External links Monel Corrosion Monel 400 vs. Monel K-500 Strip: Which One Is Best for You? What Is the Difference Between Monel Alloy 400 and Alloy K-500? Building materials Copper alloys Nickel alloys
Monel
Physics,Chemistry,Engineering
2,828
51,054,929
https://en.wikipedia.org/wiki/Furostilbestrol
Furostilbestrol (INN), also known as diethylstilbestrol di(2-furoate) or simply as diethylstilbestrol difuroate, is a synthetic, nonsteroidal estrogen of the stilbestrol group related to diethylstilbestrol, that was never marketed. It is an ester of diethylstilbestrol and was described in the literature in 1952. See also Diethylstilbestrol dipropionate Dimestrol Fosfestrol Mestilbol References Estrogen esters Stilbenoids Synthetic estrogens Abandoned drugs
Furostilbestrol
Chemistry
132
30,649
https://en.wikipedia.org/wiki/Tetracycline
Tetracycline, sold under various brand names, is an antibiotic in the tetracyclines family of medications, used to treat a number of infections, including acne, cholera, brucellosis, plague, malaria, and syphilis. It is available in oral and topical formulations. Common side effects include vomiting, diarrhea, rash, and loss of appetite. Other side effects include poor tooth development if used by children less than eight years of age, kidney problems, and sunburning easily. Use during pregnancy may harm the baby. It works by inhibiting protein synthesis in bacteria. Tetracycline was patented in 1953 and approved for prescription use in 1954. It is on the World Health Organization's List of Essential Medicines. Tetracycline is available as a generic medication. Tetracycline was originally made from bacteria of the genus Streptomyces. Medical uses Spectrum of activity Tetracyclines have a broad spectrum of antibiotic action. Originally, they possessed some level of bacteriostatic activity against almost all medically relevant aerobic and anaerobic bacterial genera, both Gram-positive and Gram-negative, with a few exceptions, such as Pseudomonas aeruginosa and Proteus spp., which display intrinsic resistance. However, acquired (as opposed to inherent) resistance has proliferated in many pathogenic organisms and greatly eroded the formerly vast versatility of this group of antibiotics. Resistance amongst Staphylococcus spp., Streptococcus spp., Neisseria gonorrhoeae, anaerobes, members of the Enterobacteriaceae, and several other previously sensitive organisms is now quite common. Tetracyclines remain especially useful in the management of infections by certain obligately intracellular bacterial pathogens such as Chlamydia, Mycoplasma, and Rickettsia. They are also of value in spirochaetal infections, such as syphilis and Lyme disease. Certain rare or exotic infections, including anthrax, plague, and brucellosis, are also susceptible to tetracyclines. Tetracycline tablets were used in the plague outbreak in India in 1994. Tetracycline is first-line therapy for Rocky Mountain spotted fever (Rickettsia), Lyme disease (B. burgdorferi), Q fever (Coxiella), psittacosis, Mycoplasma pneumoniae, and nasal carriage of meningococci. It is also one of a group of antibiotics which together may be used to treat peptic ulcers caused by bacterial infections. The mechanism of action for the antibacterial effect of tetracyclines relies on disrupting protein translation in bacteria, thereby damaging the ability of microbes to grow and repair; however, protein translation is also disrupted in eukaryotic mitochondria, leading to effects that may confound experimental results. The following list presents MIC susceptibility data for some medically significant microorganisms: Escherichia coli: 1 μg/mL to >128 μg/mL; Shigella: 1 μg/mL to 128 μg/mL. Anti-eukaryote use The tetracyclines also have activity against certain eukaryotic parasites, including those responsible for diseases such as dysentery caused by an amoeba, malaria (a plasmodium), and balantidiasis (a ciliate). Use as a biomarker Since tetracycline is absorbed into bone, it is used as a marker of bone growth for biopsies in humans. Tetracycline labeling is used to determine the amount of bone growth within a certain period of time, usually a period around 21 days. Tetracycline is incorporated into mineralizing bone and can be detected by its fluorescence.
In "double tetracycline labeling", a second dose is given 11–14 days after the first dose, and the amount of bone formed during that interval can be calculated by measuring the distance between the two fluorescent labels. Tetracycline is also used as a biomarker in wildlife to detect consumption of medicine- or vaccine-containing baits. Side effects Use of tetracycline antibiotics can: Discolor permanent teeth (yellow-gray-brown), from prenatal period through childhood and adulthood. Children receiving long- or short-term therapy with a tetracycline or glycylcycline may develop permanent brown discoloration of the teeth. Be inactivated by calcium ions, so are not to be taken with milk, yogurt, and other dairy products Be inactivated by aluminium, iron, and zinc ions, not to be taken at the same time as indigestion remedies (some common antacids and over-the-counter heartburn medicines) Cause skin photosensitivity, so exposure to the sun or intense light is not recommended Cause drug-induced lupus, and hepatitis Cause microvesicular fatty liver Cause tinnitus Cause epigastric pain Interfere with methotrexate by displacing it from the various protein-binding sites Cause breathing complications, as well as anaphylactic shock, in some individuals Affect bone growth of the fetus, so should be avoided during pregnancy Fanconi syndrome may result from ingesting expired tetracyclines. Caution should be exercised in long-term use when breastfeeding. Short-term use is safe; bioavailability in milk is low to nil. According to the U.S. Food and Drug Administration (FDA), cases of Stevens–Johnson syndrome, toxic epidermal necrolysis, and erythema multiforme associated with doxycycline use have been reported, but a causative role has not been established. Pharmacology Mechanism of action Tetracycline inhibits protein synthesis by blocking the attachment of charged tRNA at the P site peptide chain. Tetracycline blocks the A-site so that a hydrogen bond is not formed between the amino acids. Tetracycline binds to the 30S and 50S subunit of microbial ribosomes. Thus, it prevents the formation of a peptide chain. The action is usually not inhibitory and irreversible even with the withdrawal of the drug. Mammalian cells are not vulnerable to the effect of Tetracycline as these cells contain no 30S ribosomal subunits so do not accumulate the drug. This accounts for the relatively small off-site effect of tetracycline on human cells. Mechanisms of resistance Bacteria usually acquire resistance to tetracycline from horizontal transfer of a gene that either encodes an efflux pump or a ribosomal protection protein. Efflux pumps actively eject tetracycline from the cell, preventing the build up of an inhibitory concentration of tetracycline in the cytoplasm. Ribosomal protection proteins interact with the ribosome and dislodge tetracycline from the ribosome, allowing for translation to continue. History Discovery The tetracyclines, a large family of antibiotics, were discovered by Benjamin Minge Duggar in 1948 as natural products, and first prescribed in 1948. Benjamin Duggar, working under Yellapragada Subbarow at Lederle Laboratories, discovered the first tetracycline antibiotic, chlortetracycline (Aureomycin), in 1945. The structure of Aureomycin was elucidated in 1952 and published in 1954 by the Pfizer-Woodward group. After the discovery of the structure, researchers at Pfizer began chemically modifying aureomycin by treating it with hydrogen in the presence of a palladized carbon catalyst. 
This chemical reaction replaced a chlorine moiety with a hydrogen via hydrogenolysis, creating the compound named tetracycline. Tetracycline displayed higher potency, better solubility, and more favorable pharmacology than the other antibiotics in its class, leading to its FDA approval in 1954. The new compound was one of the first commercially successful semi-synthetic antibiotics, and it laid the foundation for the development of sancycline, minocycline, and later the glycylcyclines. Evidence in antiquity Tetracycline has a high affinity for calcium and is incorporated into bones during the active mineralization of hydroxyapatite. When incorporated into bones, tetracycline can be identified using ultraviolet light. There is evidence that early inhabitants of northeastern Africa consumed tetracycline antibiotics. Nubian mummies from between 350 and 550 A.D. were found to exhibit patterns of fluorescence identical with that of modern tetracycline-labelled bone. It is conjectured that the beer brewed by the Nubians was the source of the tetracycline found in these bones. Society and culture Economics According to data from EvaluatePharma published in the Boston Globe, in the USA the price of tetracycline rose from $0.06 per 250-mg pill in 2013 to $4.06 a pill in 2015. The Globe described the "big price hikes of some generic drugs" as a "relatively new phenomenon" which has left most pharmacists grappling with large upswings in the costs of generics, with "overnight" price changes sometimes exceeding 1,000%. Brand names It is marketed under the brand names Sumycin, Tetracyn, and Panmycin, among others. Actisite is a thread-like fiber formulation used in dental applications. It is also used to produce several semisynthetic derivatives, which together are known as the tetracycline antibiotics. The term "tetracycline" is also used to denote the four-ring system of this compound; "tetracyclines" are related substances that contain the same four-ring system. Media Due to the drug's association with fighting infections, it serves as the main "commodity" in the science fiction series Aftermath, with the search for tetracycline becoming a major preoccupation in later episodes. Tetracycline is also represented in Bohemia Interactive's survival sandbox DayZ. In the game, players may find the antibiotic to treat the common cold, influenza, cholera and infected wounds; the game does not portray any of the side effects associated with tetracycline. Research Genetic engineering In genetic engineering, tetracycline is used in transcriptional activation. It has been used as an engineered "control switch" in chronic myelogenous leukemia models in mice. Engineers were able to develop a retrovirus that induced a particular type of leukemia in mice, and could then "switch" the cancer on and off through tetracycline administration. This could be used to grow the cancer in mice and then halt it at a particular stage to allow for further experimentation or study. A technique being developed for the control of the mosquito species Aedes aegypti (the infection vector for yellow fever, dengue fever, Zika fever, and several other diseases) uses a strain that is genetically modified to require tetracycline to develop beyond the larval stage. Modified males raised in a laboratory develop normally as they are supplied with this chemical and can be released into the wild. Their subsequent offspring inherit this trait, but find no tetracycline in their environments, so never develop into adults.
References 1948 introductions Anti-acne preparations Biomarkers Cancer research Carboxamides Dermatoxins Hepatotoxins Otologicals Tetracycline antibiotics World Health Organization essential medicines
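As an aside on the double tetracycline labeling method described under "Use as a biomarker" above: the bone-growth calculation is simple division of the inter-label distance by the dosing interval. A minimal sketch with made-up measurements, not taken from any clinical dataset.

```python
def mineral_apposition_rate(interlabel_distance_um, days_between_doses):
    """Bone growth rate (micrometres per day) from double tetracycline labeling.

    The distance measured between the two fluorescent tetracycline labels
    in a biopsy, divided by the interval between the two doses, gives the
    rate at which new bone was mineralized during that window.
    """
    return interlabel_distance_um / days_between_doses

# Hypothetical biopsy: labels 12 um apart, doses given 12 days apart
print(mineral_apposition_rate(12.0, 12.0))  # 1.0 um/day
```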
Tetracycline
Biology
2,465
2,021,691
https://en.wikipedia.org/wiki/Anti-psychologism
In logic, anti-psychologism (also logical objectivism or logical realism) is a theory about the nature of logical truth: that it does not depend upon the contents of human ideas but exists independently of human ideas. Overview The anti-psychologistic treatment of logic originated in the works of Immanuel Kant and Bernard Bolzano. The concept of logical objectivism or anti-psychologism was further developed by Johannes Rehmke (founder of Greifswald objectivism) and Gottlob Frege (founder of logicism and the most famous anti-psychologist in the philosophy of mathematics), and has been the center of an important debate in early phenomenology and analytical philosophy. Frege's work was influenced by Bolzano. Elements of anti-psychologism in the historiography of philosophy can be found in the work of the members of the 1830s speculative theist movement and the late work of Hermann Lotze. The psychologism dispute (Psychologismusstreit) in 19th-century German-speaking philosophy is closely related to the contemporary internalism and externalism debate in epistemology; psychologism is often construed as a kind of internalism (the thesis that no fact about the world can provide reasons for action independently of desires and beliefs) and anti-psychologism as a kind of externalism (the thesis that reasons are to be identified with objective features of the world). Psychologism was defended by Theodor Lipps, Gerardus Heymans, Wilhelm Wundt, Wilhelm Jerusalem, Christoph von Sigwart, Theodor Elsenhans, and Benno Erdmann. Edmund Husserl was another important proponent of anti-psychologism, and this trait passed on to other phenomenologists, such as Martin Heidegger, whose doctoral thesis was meant to be a refutation of psychologism. They shared the argument that, because the proposition "no-p is a not-p" is not logically equivalent to "It is thought that 'no-p is a not-p'", psychologism does not logically stand. Charles Sanders Peirce, whose fields included logic, philosophy, and experimental psychology, could also be considered a critic of psychologism in logic. The return of psychologism Psychologism is not widely held amongst logicians today, but something like it has some high-profile defenders, especially among those who do research at the intersection of logic and cognitive science, for example Dov Gabbay and John Woods, who concluded that "whereas mathematical logic must eschew psychologism, the new logic cannot do without it". Notes Further reading Vladimir Bryushinkin. Metapsychologism in the Philosophy of Logic. Proc. Logic and Philosophy of Logic, 20th World Congress in Philosophy, 2000. Martin Kusch. Psychologism: A Case Study in the Sociology of Philosophical Knowledge. London and New York: Routledge, 1995. Theories of deduction Philosophy of logic
Anti-psychologism
Mathematics
608
78,316,148
https://en.wikipedia.org/wiki/Fibre%20Chemistry
Fibre Chemistry is a bimonthly peer-reviewed scientific journal that covers the chemistry, technology, and applications of man-made fibers. It is the English translation of the Russian journal Khimicheskie Volokna (Химические Волокна) and publishes research covering the synthesis, properties, and industrial applications of synthetic fibers. It is published by Springer Science+Business Media and the editor-in-chief is Nikolay N. Matchalaba (Russian Academy of Engineering). The journal publishes both original research and review articles on textiles and, more generally, materials science. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2023 impact factor of 0.5. References External links English-language journals Textile journals Springer Science+Business Media academic journals Bimonthly journals Academic journals established in 1969
Fibre Chemistry
Materials_science
186
12,390,747
https://en.wikipedia.org/wiki/Pleurodema%20somuncurense
Pleurodema somuncurense (the Somuncura frog or El Rincon stream frog, in Spanish rana de Somuncura) is a species of frog in the family Leptodactylidae. It is endemic to the Somuncura Plateau in Patagonia, Argentina. Description Females reach in total length. They are slender with a fairly small head and large protruding, gold-coloured eyes. Fingers and toes are long and slender, with the toes being about one-third webbed. The eyes have two symmetrical rounded structures on the centre of the upper and lower borders of the iris. The skin is smooth. Colouration is bright yellowish-brown on the upper surfaces of the head, body and legs. There are irregular dark spots across the back, and wavy dark reticulated lines on the sides of the body and the backs of the thighs. There is a characteristic yellowish stripe that runs centrally down the top of the head and half of the back. The belly is purplish-yellow with dark grey reticulated spots. The lower surface of the thighs is purplish-rose and bears faint grey reticulated spots. Reproduction Pleurodema somuncurense reproduces in the mid-spring and summer months through amplexus events, with males clasping females from behind. Features of P. somuncurense such as scramble competition and male mating calls are typical of explosive breeders. The breeding microhabitats used by this species are subject to disturbance from livestock. Habitat and conservation Pleurodema somuncurense is a fully aquatic frog that inhabits geothermal springs and streams. The microendemic species is restricted to the thermal headwaters of Valcheta Stream in Northern Patagonia, Argentina. It is threatened by predation by introduced rainbow trout and by habitat loss from canalization of spring water. Livestock farming also has negative impacts through overgrazing and chemical pollution. The grassland fires used to promote regrowth of pasture for livestock reduce the frog's availability of shelters, reproductive sites, and terrestrial prey. References Pleurodema Amphibians described in 1969 Amphibians of Patagonia Amphibians of Argentina Endemic fauna of Argentina EDGE species Taxa named by José Miguel Alfredo María Cei
Pleurodema somuncurense
Biology
469
55,543,286
https://en.wikipedia.org/wiki/Haploporus%20septatus
Haploporus septatus is a species of poroid crust fungus in the family Polyporaceae. Found in China, it causes a white rot in decomposing angiosperm wood. Taxonomy The fungus was collected from Ailaoshan Nature Reserve in Jingdong County (Yunnan Province) in October 2013, and described as a new species three years later. The specific epithet septatus refers to the septate skeletal hyphae. Description Fruit bodies of Haploporus septatus are crust-like, measuring long, wide, and up to 8 mm thick at the centre. The hymenophore, or pore surface, is white to cream coloured. The pores number around five to six per millimetre. The context has no distinct odour or taste. The hyphal structure is dimitic, meaning that there are both generative and skeletal hyphae. The generative hyphae have clamp connections. The thick-walled, cylindrical spores typically measure 8.5–11 by 5–6 μm. References Fungi described in 2016 Fungi of China Polyporaceae Taxa named by Yu-Cheng Dai Taxa named by Bao-Kai Cui Fungus species
Haploporus septatus
Biology
245
22,669,899
https://en.wikipedia.org/wiki/Cunnilingus
Cunnilingus is an oral sex act consisting of the stimulation of a vulva by using the tongue and lips. The clitoris is the most sexually sensitive part of the vulva, and its stimulation may result in a woman becoming sexually aroused or achieving orgasm. Cunnilingus can be sexually arousing for participants and may be performed by a sexual partner as foreplay to incite sexual arousal before other sexual activities (such as vaginal or anal intercourse) or as an erotic and physically intimate act on its own. Cunnilingus can be a risk for contracting sexually transmitted infections (STIs), but the transmission risk from oral sex, especially of HIV, is significantly lower than for vaginal or anal sex. Oral sex is often regarded as taboo, but most countries do not have laws which ban the practice. Commonly, heterosexual couples do not regard cunnilingus as affecting the virginity of either partner, while lesbian couples commonly do regard it as a form of virginity loss. People may also have negative feelings or sexual inhibitions about giving or receiving cunnilingus or may refuse to engage in it. Etymology and terminology The term cunnilingus is derived from the Latin words for vulva (cunnus) and the verb "to lick" (lingere). There are numerous slang terms for cunnilingus, including "drinking from the furry cup", "carpet munching", and "muff-diving". Additional common slang terms used are "giving lip", "lip service", or "tipping the velvet"; this last is an expression that novelist Sarah Waters claims to have "plucked from the relative obscurity of Victorian porn". It is also popularly known in the urban community as "dining at the Y" or "DATY". A person who performs cunnilingus may be referred to as a "cunnilinguist". It is also referred to by more ambiguous terminology nonspecific to the form of oral sex performed (e.g., "getting or giving head" or "going down" on someone). Practice General General statistics indicate that 70–80% of women require clitoral stimulation to achieve orgasm. Shere Hite's research on human female sexuality reports that, for most women, orgasm is easily achieved by cunnilingus because of the direct stimulation of the clitoral glans and shaft (including stimulation to other external parts of the vulva that are physically related to the clitoris) that may be involved during the act. The essential aspect of cunnilingus is oral stimulation of the vulva by licking with the tongue, use of the lips, or some combination. During the activity, the performer may use fingers to open the labia majora (the vulva's outer lips) to enable the tongue to better stimulate the clitoris, or the female may separate the labia for her partner. Separating the legs wide would also usually open the vulva sufficiently for the partner to orally reach the clitoris. The performer may also stimulate the labia minora (inner lips of the vulva) by using the lips or tongue. The nose, chin, and teeth might be used as well. Movements can be slow or fast, regular or erratic, firm or soft, according to the participants' preferences. The tongue can be inserted into the vagina, either stiffened or moving. The performing partner may also hum to produce vibration. Women may consider personal hygiene before practicing oral sex important, as poor hygiene can lead to odors, accumulation of sweat and micro-residue (such as lint, urine or menstrual blood), which the giving partner may find unpleasant. Some women remove or trim their pubic hair. 
Autocunnilingus, which is cunnilingus performed by a female on herself as masturbation, may be possible, but an unusually high degree of flexibility is required, which may be possessed only by contortionists. During menstruation Cunnilingus may be performed on a menstruating partner, which is called "to earn one's red wings" in slang. The phrase is a reference to menstrual blood stains in the shape of a small bird's wings that are liable to form on the giving partner's cheeks during the act. The red wing patch was common among the Hells Angels by the mid-1960s, and the slang term continued to be known among biker gangs in the 1980s. Gershon Legman saw the act/badge not only as functioning as a homosocial tie, but also as reflecting a deep and primitive belief in the lifegiving powers of blood. The elder Mirabeau, in his Erotika Biblion of 1783, saw cunnilingus during menstruation as an extreme act, linked with the submissive worship of the Mother goddess, and by extension to the Black Mass. Prevalence In a Canadian study, 89% of heterosexual and bisexual men had practiced cunnilingus; of those, 94% enjoyed it. Of the latter, 76% practiced it often or very often. Reasons for not practicing cunnilingus included lack of opportunity (73%) and disgust (13%). This suggests that much more than 89% of men would practice cunnilingus if they had a chance. Health aspects Sexually transmitted infections Chlamydia, human papillomavirus (HPV), gonorrhea, syphilis, herpes, hepatitis (multiple strains), and other sexually transmitted infections (STIs) can be transmitted through oral sex. Any sexual exchange of bodily fluids with a person infected with HIV, the virus that causes AIDS, poses a risk of infection. Risk of STI infection, however, is generally considered significantly lower for oral sex than for vaginal or anal sex, with HIV transmission considered the lowest risk with regard to oral sex. Furthermore, the documented risk of HIV transmission through cunnilingus is lower than that associated with fellatio or with vaginal or anal intercourse. There is an increased risk of STI transmission if the receiving partner has wounds on her vulva, or if the giving partner has wounds or open sores on or in their mouth, or bleeding gums. Brushing the teeth, flossing, or undergoing dental work soon before or after performing cunnilingus can also increase the risk of transmission, because all of these activities can cause small scratches in the lining of the mouth. These wounds, even when they are microscopic, increase the chances of contracting STIs that can be transmitted orally under these conditions. Such contact can also lead to more mundane infections from common bacteria and viruses found in, around and secreted from the genital regions. Because of the aforementioned factors, medical sources advise the use of effective barrier methods when performing or receiving cunnilingus with a partner whose STI status is unknown. Cunnilingus during menstruation is considered high risk for the partner performing it because menstrual blood may carry a high concentration of viruses such as hepatitis B. HPV and oral cancer Links have been reported between oral sex and oral cancer with human papillomavirus (HPV)-infected people. A 2007 study found a correlation between oral sex and throat cancer. It is believed that this is due to the transmission of HPV, a virus that has been implicated in the majority of cervical cancers and which has been detected in throat cancer tissue in numerous studies.
The study concludes that people who had one to five oral sex partners in their lifetime had approximately a doubled risk of throat cancer compared with those who never engaged in this activity, and those with more than five oral sex partners had a 250 percent increased risk. Mechanical trauma to the tongue The lingual frenulum (underside of the tongue) is vulnerable to ulceration by repeated friction during sexual activity ("cunnilingus tongue"). Ulceration of the lingual frenulum caused by cunnilingus is horizontal, the lesion corresponding to the contact of the under surface of the tongue with the edges of the lower front teeth when the tongue is in its most forward position. This type of lesion resolves in 7–10 days, but may recur with repeated performances. Chronic ulceration at this site can cause linear fibrous hyperplasia. The incisal edges of the mandibular teeth can be smoothed to minimize the chance of trauma. Cultural and religious views General views Cultural views on giving or receiving cunnilingus range from aversion to high regard. It has been considered taboo, or discouraged, in many cultures and parts of the world. In Taoism, cunnilingus is revered as a spiritually fulfilling practice that is believed to enhance longevity. In modern Western culture, oral sex is widely practiced among adolescents and adults. Laws of some jurisdictions regard cunnilingus as penetrative sex for the purposes of sexual offenses with regard to the act, but most countries do not have laws which ban the practice, in contrast to anal sex or extramarital sex. People give various reasons for their dislike or reluctance to perform cunnilingus, or having cunnilingus performed on them. Some regard cunnilingus and other forms of oral sex as unnatural because the practices do not result in reproduction. Some cultures attach symbolism to different parts of the body, leading some people to believe that cunnilingus is ritually unclean or humiliating. While commonly believed that lesbian sexual practices involve cunnilingus for all women who have sex with women, some lesbian or bisexual women dislike cunnilingus due to not liking the experience or due to psychological or social factors, such as regarding it as unclean. Other lesbian or bisexual women believe that it is a necessity or largely defines lesbian sexual activity. Lesbian couples are more likely to consider a woman's dislike of cunnilingus as a problem than heterosexual couples are, and it is common for them to seek therapy to overcome inhibitions regarding it. Oral sex is also commonly used as a means of preserving virginity, especially among heterosexual pairings; this is sometimes termed technical virginity (which additionally includes anal sex, manual sex and other non-penetrative sex acts, but excludes penile-vaginal sex). The concept of "technical virginity" or sexual abstinence through oral sex is particularly popular among teenagers. By contrast, lesbian pairings commonly consider oral sex or fingering as resulting in virginity loss, though definitions of virginity loss vary among lesbians as well. Taoism Cunnilingus is accorded a revered place in Taoism. This is because the practice was believed to achieve longevity, by preventing the loss of semen, vaginal and other bodily liquids, whose loss is believed to bring about a corresponding loss of vitality. Conversely, by either semen retention or ingesting the secretions from the vagina, a person can conserve and increase their qi, or original vital breath. 
According to Philip Rawson, these half-poetic, half-medicinal metaphors explain the popularity of cunnilingus among people: "The practice was an excellent method of imbibing the precious feminine fluid". But the Taoist ideal is not just about the male's being enriched by female secretions; the female also benefits from her communion with the male, a feature that has led sinologist Kristofer Schipper to denounce the ancient handbooks on the "Art of the Bedroom" as embracing a "kind of glorified male vampirism" that is not truly Taoist at all. See also Anilingus – oral stimulation of the anus Facesitting Fellatio – oral stimulation of the penis Fingering References Further reading Gershon Legman: The Guilt of the Templars. Basic Books Inc., New York, 1966. Gershon Legman: Rationale of the Dirty Joke: An Analysis of Sexual Humor, Simon & Schuster, 1968. Non-penetrative sex Oral eroticism Sex positions Sexual acts Vulva
Cunnilingus
Biology
2,469
52,835,254
https://en.wikipedia.org/wiki/Center%20for%20Year%202000%20Strategic%20Stability
The Center for Year 2000 Strategic Stability was a joint operation of the United States and the Russian Federation designed to provide mutual assurance that neither nation was launching a nuclear first strike against the other during the transition from the year 1999 to the year 2000. The program arose out of concerns that the Year 2000 problem might generate false positives in each nation's nuclear-attack early warning systems. The center came online on December 30, 1999, and was closed on January 15, 2000. It operated from Peterson Air Force Base. References Foreign relations of Russia Nuclear warfare Foreign relations of the United States
Center for Year 2000 Strategic Stability
Chemistry
112
23,141,532
https://en.wikipedia.org/wiki/Sverdrup%20wave
A Sverdrup wave (also known as Poincaré wave, or rotational gravity wave) is a wave in the ocean, or large lakes, which is affected by gravity and Earth's rotation (see Coriolis effect). For a non-rotating fluid, shallow water waves are affected only by gravity (see Gravity wave). The phase velocity of a shallow water gravity wave (c) can be noted as c = √(gH), and the group velocity (cg) of a shallow water gravity wave can be noted as cg = √(gH), i.e. c = cg, where g is gravity and H is the total depth; both velocities are independent of the wavelength λ. Derivation When the fluid is rotating, gravity waves with a long enough wavelength (discussed below) will also be affected by rotational forces. The linearized, shallow-water equations with a constant rotation rate, f0, are ∂u/∂t − f0v = −g ∂h/∂x, ∂v/∂t + f0u = −g ∂h/∂y, ∂h/∂t + H(∂u/∂x + ∂v/∂y) = 0, where u and v are the horizontal velocities and h is the instantaneous height of the free surface. Using Fourier analysis, these equations can be combined to find the dispersion relation for Sverdrup waves: ω² = f0² + gH(k² + l²), where k and l are the wavenumbers associated with the two horizontal directions, and ω is the frequency of oscillation. Limiting Cases There are two primary modes of interest when considering Poincaré waves: Short wave limit, k² + l² ≫ f0²/(gH), i.e. wavelengths much shorter than R = √(gH)/f0, where R is the Rossby radius of deformation. In this limit, the dispersion relation reduces to the solution for a non-rotating gravity wave, ω² ≈ gH(k² + l²). Long wave limit, k² + l² ≪ f0²/(gH), giving ω ≈ f0, which looks like inertial oscillations driven purely by rotational forces. Solution for the one-dimensional case For a wave traveling in one direction (l = 0), with surface height h = h0 cos(kx − ωt), the horizontal velocities are found to be equal to u = (ω h0)/(kH) cos(kx − ωt) and v = (f0 h0)/(kH) sin(kx − ωt). This shows that the inclusion of rotation causes the wave to develop a velocity component transverse to the direction of propagation, oscillating 90° out of phase with the along-wave component. In general, the fluid particles trace elliptical orbits whose shape depends on the relative strength of gravity and rotation. In the long wave limit, these are circular orbits characterized by inertial oscillations. References See also Kelvin wave Rossby wave Geophysical fluid dynamics Sverdrup Harald Sverdrup Waves
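To make the two limiting cases concrete, the following Python sketch (added here as an illustration; it is not part of the article, and the depth and Coriolis parameter are arbitrary mid-latitude assumptions) evaluates the dispersion relation ω² = f0² + gH(k² + l²) for wavelengths on either side of the Rossby radius:

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the article)
g = 9.81      # gravitational acceleration, m/s^2
H = 100.0     # water depth, m
f0 = 1.0e-4   # Coriolis parameter, 1/s

R = np.sqrt(g * H) / f0   # Rossby radius of deformation, ~3.1e5 m here

def omega(k, l=0.0):
    """Sverdrup/Poincare dispersion relation: omega^2 = f0^2 + g*H*(k^2 + l^2)."""
    return np.sqrt(f0**2 + g * H * (k**2 + l**2))

for wavelength in (1e3, 2 * np.pi * R, 1e8):   # short, transitional, long
    k = 2 * np.pi / wavelength
    print(f"lambda = {wavelength:.1e} m:  omega = {omega(k):.3e} 1/s, "
          f"sqrt(gH)*k = {np.sqrt(g * H) * k:.3e} 1/s, f0 = {f0:.1e} 1/s")
```

For the short wave the output matches √(gH)·k, the non-rotating gravity wave, while for the long wave ω approaches f0, the inertial frequency.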
Sverdrup wave
Physics
427
52,623,993
https://en.wikipedia.org/wiki/Estradiol%20cyclooctyl%20acetate
Estradiol cyclooctyl acetate (E2COA), or estradiol 17β-cyclooctylacetate, also known as estra-1,3,5(10)-triene-3,17β-diol 17β-cyclooctylacetate, is an estrogen medication and an estrogen ester – specifically, the 17β-cyclooctylacetate ester of estradiol – which has been studied for use in hormone replacement therapy for ovariectomized women and as a hormonal contraceptive in combination with a progestin but was never marketed. It has greater oral bioavailability than does micronized estradiol due to absorption via the lymphatic system and hence partial bypassing of first-pass metabolism. It is approximately twice as potent as micronized estradiol orally and has a comparatively reduced impact on liver parameters such as changes in sex hormone-binding globulin production. It was investigated in combination with desogestrel as a birth control pill, but resulted in unacceptable menstrual bleeding patterns and was not further developed. See also List of estrogen esters § Estradiol esters References Abandoned drugs Acetate esters Estradiol esters Synthetic estrogens
Estradiol cyclooctyl acetate
Chemistry
275
2,688,009
https://en.wikipedia.org/wiki/Merope%20%28star%29
Merope, designated 23 Tauri (abbreviated 23 Tau), is a star in the constellation of Taurus and a member of the Pleiades star cluster. It is approximately away. Distance Despite the Pleiades being one of the closest star clusters to Earth, the distance to the cluster and its member stars is still in dispute. The parallax of Merope itself is not known precisely enough to give an accurate distance. Its Hipparcos parallax has a statistical margin of error of about 5% and gave a distance of 116 parsecs. This, and an overall distance to the Pleiades calculated from Hipparcos parallaxes of 120 parsecs, are inconsistent with other parallax measurements such as those from Gaia. Merope is too bright for Gaia to have a reliable parallax for it, but calculations of the overall distance to the Pleiades cluster using Hipparcos, Gaia, the Hubble Space Telescope, and other methods repeatedly show that the Hipparcos parallaxes suffered from some kind of systematic error, and the distance to the Pleiades is most likely around 135 parsecs. Description Merope is a blue-white B-type subgiant with a mean apparent magnitude of +4.18. Richard Hinckley Allen described the star as lucid white and violet. It has a luminosity of 927 times that of the Sun and a surface temperature of . Merope is several times as massive as the Sun and has a radius more than 7 times as great as the Sun's. It is classified as a Beta Cephei type variable star and its brightness varies by 0.01 magnitudes. It is given the variable star designation V971 Tauri. Some papers have reported a companion star to Merope, at a separation of , as well as several other visual companions farther out. These possible companions have not been confirmed. Surrounding Merope is the Merope Nebula (NGC 1435). It appears brightest around Merope and is listed in the Index Catalogue as number IC 349. Nomenclature 23 Tauri is the star's Flamsteed designation. The name Merope originates with Greek mythology; she is one of the seven daughters of Atlas and Pleione known as the Pleiades. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Merope for this star. It is now so entered in the IAU Catalog of Star Names. References External links Jim Kaler's Stars, University of Illinois:Merope (23 Tauri) NGC 1435 - Merope Nebula LRGB image with 4 hours total exposure. Tauri, 023 Taurus (constellation) Beta Cephei variables Pleiades B-type subgiants 1156 017608 023480 Durchmusterung objects ? Tauri, V971
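Since the distance estimates above come from parallaxes, it may help to recall the conversion d[pc] = 1/p[arcsec], or equivalently 1000/p when p is in milliarcseconds. The snippet below is an illustration added here; the 8.6 mas input is an assumed value chosen only to reproduce the ~116 parsec Hipparcos figure, not a catalogue number. It also shows how a 5% parallax error translates into roughly a 5% distance error:

```python
def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Distance in parsecs from a parallax in milliarcseconds: d = 1000 / p."""
    return 1000.0 / parallax_mas

p = 8.6                     # assumed parallax, mas (illustrative only)
sigma = 0.05 * p            # ~5% statistical margin of error, as quoted above
print(parallax_to_distance_pc(p))          # ~116 pc
print(parallax_to_distance_pc(p + sigma))  # ~111 pc (closer bound)
print(parallax_to_distance_pc(p - sigma))  # ~122 pc (farther bound)
```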
Merope (star)
Astronomy
630
10,020,042
https://en.wikipedia.org/wiki/Staurosporine
Staurosporine (antibiotic AM-2282 or STS) is a natural product originally isolated in 1977 from the bacterium Streptomyces staurosporeus. It was the first of over 50 alkaloids that were discovered to share this type of bis-indole chemical structure. The chemical structure of staurosporine was elucidated by X-ray crystallography in 1994. Staurosporine was discovered to have biological activities ranging from anti-fungal to anti-hypertensive. The interest in these activities resulted in a large investigative effort in chemistry and biology and the discovery of the potential for anti-cancer treatment. Biological activities The main biological activity of staurosporine is the inhibition of protein kinases through the prevention of ATP binding to the kinase. This is achieved through the stronger affinity of staurosporine for the ATP-binding site on the kinase. Staurosporine is a prototypical ATP-competitive kinase inhibitor in that it binds to many kinases with high affinity, though with little selectivity. Structural analysis of kinase pockets demonstrated that main-chain atoms that are conserved in their positions relative to staurosporine contribute to staurosporine's promiscuity. This lack of specificity has precluded its clinical use, but has made it a valuable research tool. In research, staurosporine is used to induce apoptosis. The mechanism by which it mediates this is not well understood. It has been found that one way in which staurosporine induces apoptosis is by activating caspase-3. At lower concentrations, depending on the cell type, staurosporine induces specific cell-cycle effects, arresting cells either in the G1 or in the G2 phase of the cell cycle. Chemistry family Staurosporine is an indolocarbazole. It belongs to the most frequently isolated group of indolocarbazoles: indolo(2,3-a)carbazoles. Of these, staurosporine falls within the most common subgroup, called indolo(2,3-a)pyrrole(3,4-c)carbazoles. These fall into two classes: halogenated (chlorinated) and non-halogenated. Halogenated indolo(2,3-a)pyrrole(3,4-c)carbazoles have a fully oxidized C-7 carbon with only one indole nitrogen containing a β-glycosidic bond, while non-halogenated indolo(2,3-a)pyrrole(3,4-c)carbazoles have both indole nitrogens glycosylated and a fully reduced C-7 carbon. Staurosporine is in the non-halogenated class. Staurosporine is the precursor of the novel protein kinase inhibitor midostaurin (PKC412). Besides midostaurin, staurosporine is also used as a starting material in the commercial synthesis of K252c (also called staurosporine aglycone). In the natural biosynthetic pathway, K252c is a precursor of staurosporine. Biosynthesis The biosynthesis of staurosporine starts with the amino acid L-tryptophan in its zwitterionic form. Tryptophan is converted to an imine by the enzyme StaO, which is an L-amino acid oxidase (that may be FAD dependent). The imine is acted upon by StaD to form an uncharacterized intermediate proposed to be the dimerization product of two imine molecules. Chromopyrrolic acid is the molecule formed from this intermediate after the loss of VioE (an enzyme used in the biosynthesis of violacein, a natural product formed from a branch point in this pathway that also diverges to form rebeccamycin). An aryl-aryl coupling, thought to be catalyzed by a cytochrome P450 enzyme, then occurs to form an aromatic ring system. This is followed by a nucleophilic attack between the indole nitrogens, resulting in cyclization, and then by decarboxylation assisted by StaC, exclusively forming the staurosporine aglycone, K252c. 
Glucose is transformed to NTP-L-ristosamine by StaA/B/E/J/I/K, which is then added onto the staurosporine aglycone at one indole nitrogen by StaG. The StaN enzyme reorients the sugar by attaching it to the second indole nitrogen in an unfavored conformation, forming the intermediate O-demethyl-N-demethyl-staurosporine. Lastly, N-methylation of the 4'-amine by StaMA and O-methylation of the 3'-hydroxy by StaMB lead to the formation of staurosporine. Research in preclinical use When encapsulated in liposomal nanoparticles, staurosporine has been shown to suppress tumors in vivo in a mouse model without the toxic side effects that have prohibited its use as an anti-cancer drug with high apoptotic activity. Researchers at the UC San Diego Moores Cancer Center developed a platform technology with high drug-loading efficiency by manipulating the pH environment of the cells. When injected into the mouse glioblastoma model, staurosporine was found, via fluorescence confirmation, to accumulate primarily in the tumor, and the mice did not suffer weight loss compared to the control mice administered the free compound, an indicator of reduced toxicity. List of compounds closely related to Staurosporine K252a Stauprimide Midostaurin References Bacterial alkaloids Antibiotics Gamma-lactams Protein kinase inhibitors Indolocarbazoles
Staurosporine
Biology
1,251
32,668,461
https://en.wikipedia.org/wiki/Susanna%20S.%20Epp
Susanna Samuels Epp (born 1943) is an author, mathematician, and professor. Her interests include discrete mathematics, mathematical logic, cognitive psychology, and mathematics education, and she has written numerous articles, publications, and textbooks. She is currently professor emerita at DePaul University, where she chaired the Department of Mathematical Sciences and was Vincent de Paul Professor in Mathematics. Education and career Epp holds degrees in mathematics from Northwestern University and the University of Chicago, where she completed her doctorate in 1968 under the supervision of Irving Kaplansky. She taught at Boston University and at the University of Illinois at Chicago before becoming a professor at DePaul University. Contributions Initially researching commutative algebra, Epp became interested in cognitive psychology, especially as it bears on the teaching of mathematics, logic, proof, and the language of mathematics. She wrote several articles about teaching logic and proof in the American Mathematical Monthly and in the Mathematics Teacher, a journal of the National Council of Teachers of Mathematics. She is the author of several books including Discrete Mathematics with Applications (4th ed., Brooks/Cole, 2011), the third edition of which earned a Textbook Excellence Award from the Textbook and Academic Authors Association. "By combining discussion of theory and practice, I have tried to show that mathematics has engaging and important applications as well as being interesting and beautiful in its own right," Epp wrote in the preface of the 4th edition of Discrete Mathematics with Applications. Recognition In 2005, she received the Louise Hay Award from the Association for Women in Mathematics in recognition of her contributions to mathematics education. Selected publications Epp, S.S., Variables in Mathematics Education. In Tools for Teaching Logic. Blackburn, P., van Ditmarsch, H., et al., eds. Springer Publishing, 2011. (Reprinted in Best Writing on Mathematics 2012, M. Pitici, Ed. Princeton Univ. Press, Nov. 2012.) Epp, S.S., V. Durand-Guerrier, et al. Argumentation and proof in the mathematics classroom. In Proof and Proving in Mathematics Education, G. Hanna & M. de Villiers Eds. Springer Publishing. (co-authors: V. Durand-Guerrier, P. Boero, N. Douek, D. Tanguay), 2012. Epp, S.S., V. Durand-Guerrier, et al. Examining the role of logic in teaching proof. In Proof and Proving in Mathematics Education, G. Hanna & M. de Villiers Eds. Springer Publishing, 2012. Epp, S.S., Proof Issues with Existential Quantification. In Proof and Proving in Mathematics Education: ICMI Study 19 Conference Proceedings, F. L. Lin et al. eds., National Taiwan Normal University, 2009. Epp, S.S., The Use of Logic in Teaching Proof. In Resources for Teaching Discrete Mathematics. B. Hopkins, ed. Washington, DC: Mathematical Association of America, 2009, pp. 313–322. Epp, S.S., The Role of Logic in Teaching Proof, American Mathematical Monthly (110)10, Dec. 2003, 886-899 Epp, S.S., The Language of Quantification in Mathematics Instruction. In Developing Mathematical Reasoning in Grades K-12. Lee V. Stiff, Ed. Reston, VA: NCTM Publications, 1999, 188-197. Epp, S.S., The Role of Proof in Problem Solving. In Mathematical Thinking and Problem Solving. Alan H. Schoenfeld, Ed. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., Publishers, 1994, 257-269. References External links Susanna Epp's webpage at De Paul Fifteenth Annual Louise Hay Award, contains a brief biography of Susanna S. Epp. 
1943 births 20th-century American mathematicians 21st-century American mathematicians Living people DePaul University faculty Mathematical logicians Women logicians 21st-century American women mathematicians 20th-century American women mathematicians
Susanna S. Epp
Mathematics
826
62,925,718
https://en.wikipedia.org/wiki/Nokia%20Talkman%20510
The Nokia Talkman 510 is a discontinued brick phone, also known by the name 'Dataman'. The term 'brick' has become a popular description for phones with a solid, chunky form factor that more closely resembles a brick than a modern handset, a usage that has become better defined with the evolution of the mobile phone. Such phones have also come to be known colloquially as 'dumbphones', a play on the term 'smartphone'. References Talkman 510
Nokia Talkman 510
Technology
100
14,545,515
https://en.wikipedia.org/wiki/Power%20Sword
The Power Sword, also referred to as the Sword of Power or the Sword of Grayskull, is a fictional sword from Mattel's Masters of the Universe toy line. In the original mini-comics produced with the toyline in 1981, the Power Sword was a mystical object split into two parts, which Skeletor tries to obtain and put together in order to gain control over Castle Grayskull. In these early stories, He-Man uses an axe and a shield, rather than the magical sword. With the arrival of the 1983 He-Man and the Masters of the Universe animated series, the Power Sword became the means by which Prince Adam transforms into He-Man, and his pet tiger Cringer into Battle Cat. The weapon kept the same basic shape during most of the 1980s, but then it was radically redesigned twice: for the 1990 series The New Adventures of He-Man, and the 2002 remake, He-Man and the Masters of the Universe. In addition to the action-figure-sized Power Sword packaged with the character, full-size "He-Man Power Swords" were a favorite Christmas gift for decades, allowing children to role-play the barbarian hero. Some of these kid-sized Power Swords have been electronic, making a variety of battle sounds. Power Swords have also been sold as accessories for He-Man Halloween costumes. In the unsuccessful 1989 relaunch of the toy line, the electronic Power Sword reportedly sold better than the entire rest of the toy line put together. Early appearances The Power Sword was a late addition in the creation of the Masters of the Universe toy line; in the concept art, He-Man battled with an axe and shield, and a thin, unimpressive sword was wielded by the flamboyant Prince Adam (at that time a separate character, and not He-Man's alternate identity). When the initial Mattel toy line was introduced in 1982, the He-Man and Skeletor figures each came with half of a plastic sword which could be joined into one "complete" sword, corresponding to the storyline in the included mini-comic. Together, the combined sword was used as a key to open the jawbridge to the Castle Grayskull playset. According to the original storyline, the Goddess (an early name for the Sorceress) had split the sword into two and scattered the pieces, in order to protect the castle and its source of universal power. The story was told in the He-Man and the Power Sword illustrated mini-comic, which was packaged with the original He-Man action figure. Skeletor's goal in the book is to acquire the other half of the sword hidden inside Castle Grayskull in order to obtain the sword's total power, adding that "the magic fires, created by ancient scientists and sorcerers, will blaze again" once the two halves are joined. The specific purpose of the quest is also made clear: the Power Sword can be used to open a hole in the dimensional wall in order to bring reinforcements from Skeletor's dimension of origin, which would allow Skeletor to conquer Eternia's dimension. Once the two halves of the Power Sword are joined, Skeletor is able to use the sword to command various objects to attack He-Man. However, the spell is broken once the Sorceress splits the Power Sword into two halves again, hiding them and making the Power Sword the only key that can open the castle's Jaw-Bridge when inserted into an enchanted lock. The next illustrated book, King of Castle Grayskull, reveals where the two halves have been hidden: one at Eternia's "highest point", the other beneath its "hardest rock." Whoever finds them can claim the throne of Castle Grayskull and the "secrets of the universe". 
The "highest point" turns out to be the top of Stratos's mountain, while the "hardest rock" is the rock where He-Man built his home in the previous book. As expected, the Jawbridge opens once the two halves have been inserted into the lock, but then Skeletor loses the sword again in battle. The book ends with the Spirit of the Castle sending the two halves into another dimension, where Skeletor is not expected to find them easily. The Power Sword is not featured in the last two books of the first series, Battle in the Clouds and The Vengeance of Skeletor. In How He-Man Mastered the Universe: Toy to Television to the Big Screen, Brian C. Baer writes: Filmation cartoon When Filmation produced the cartoon He-Man and the Masters of the Universe in 1983, the producers worried that children wouldn't identify with a wild, axe-wielding barbarian character. Based on their experience with The Kid Super Power Hour with Shazam! in 1981, Filmation knew that kids would relate to a vulnerable, child-like figure who could turn super-powerful with a prop and a magic word. For He-Man, the protagonist became Prince Adam, who could use a newly restored Power Sword to turn into the muscle-bound hero. In the cartoon, the Sorceress of Grayskull gives Prince Adam the Power Sword, which allows him to transform into He-Man, "the Most Powerful Man in the Universe", and his cowardly pet tiger, Cringer, into the fierce and brave Battle Cat. Prince Adam begins his war-cry by holding the Power Sword above his head with his right hand, proclaiming, "By the Power of Grayskull...." whereupon mystical lightning strikes the Power Sword and transforms him; He-Man then seizes the tip of the Power Sword's blade and completes the war-cry, "...I HAVE THE POWER!" While the Power Sword is the key to unlocking He-Man's strength, it's rarely used in battle; he mostly uses it to cut objects, and deflect energy blasts. In the episode "The Problem with Power", when He-Man is fooled into thinking that he's inadvertently killed someone, he raises the sword and surrenders the power of Castle Grayskull, transforming back into Prince Adam by proclaiming, "Let the power return!" Princess Adora/She-Ra, He-Man's twin sister in the cartoon, has a companion Power Sword, called the Sword of Protection, which is identical except that it has a glowing jewel in the hilt. The jewel allows Princess Adora to channel her powers, as her sword is learned to have been a clone of He-Man's sword crafted by the Goddess of Grayskull. She transforms into She-Ra by saying, "For the honor of Grayskull...I am She-Ra!" Marvel Star Comics' 1986 Masters of the Universe comic book adaptation featured a storyline about an alternate timeline caused by the Power Sword being transported thirty years into the future, and is wielded by a hero named Clamp Champ. The 1989 newspaper comic strip adaptation also featured the Power Sword prominently, used in the iconic transformation in the first strip. A 1989 story, "When You Need an Extra Something", featured a battle between He-Man and Evil-Lyn for possession of the Sword. Live action movie In the 1987 live action film, Masters of the Universe, the Power Sword is renamed the Sword of Grayskull. In the cartoon, He-Man engaged in actual sword fights very rarely, but the film producers knew that the character was closely associated with the sword, which meant that it should feature prominently in the movie's finale. The New Adventures of He-Man The original toy line was cancelled in 1987 after drastically declining sales. 
Mattel attempted to relaunch the line just two years later, redesigning the character. He-Man was slimmed down to a more realistic musculature, and transported into the distant future for science-fiction adventures on the alien planet Primus. Along with the revamped character, the Power Sword was also redesigned into a more futuristic-looking form, with a green laser blade that could fire bolts of glowing energy. In 1990, Jetlag Productions produced a new cartoon, The New Adventures of He-Man, to promote the new toy line. In this series, Prince Adam's phrase to transform into He-Man is changed from "By the Power of Grayskull..." to "By the Power of Eternia..." He-Man's sword was a more important element in this version, gaining the ability to fire energy blasts and pulses of magic. 2002 television series In the animated 2002 reboot, the origins of the Power Sword and Castle Grayskull are again revised. The castle is revealed to be the former home of the ancient warrior King Grayskull, who resembles He-Man but is larger, with longer Viking-like hair and a massive green saber-toothed lion as a steed. The Power Sword is King Grayskull's personal weapon, and after fighting a fatal battle with Hordak, the dying king binds his mystical powers to the weapon. Afterwards his advisers become the Elders who seal the castle, and his wife becomes its guardian, the first Sorceress. Therefore, when Prince Adam holds up the sword and calls out "by the power of Grayskull" he is calling on the energies of King Grayskull himself, rather than those of the namesake castle. The sword was heavily redesigned for the new cartoon, with a much more complex and mechanized look. When held by Prince Adam, it appears smaller. However, during the transformation sequence, the hilt pivots on an axis and changes shape, taking a new form when it is in He-Man's hands, and is more explicitly shown growing in size in the revised transformation sequence from the second season. In the series finale, it is shown that an alternate mode can be accessed wherein the blade splits in the middle and opens to reveal another emerald blade inside. The sword then appears to be two fangs (the blade) and a snake's tongue (the emerald blade). This mode of the sword was used to battle Serpos, the giant snake deity that was imprisoned in Snake Mountain. Also in this series, Skeletor possesses twin swords that can be combined into one larger sword, a reference to the original concept of the Power Sword(s) from the action figures and minicomics; however, this combined sword has no magical properties. According to the designers, the Four Horsemen, this was due to their original re-sculpts being intended for a continuation of the original storyline in which Skeletor had obtained both halves of the Power Sword (hence the new Skeletor figure's dual blades with clear "good" and "evil" hilt designs), necessitating a new sword to be built by Man-At-Arms and endowed with the properties of the original by the Sorceress. However, Mattel decreed that they wished to reboot the continuity for a new generation of children, and thus the "new" Power Sword design became the "original" version for the new continuity. Sword of Protection The Sword of Protection is the weapon wielded by Adora, Prince Adam's twin sister, and is used in her transformation into the heroic She-Ra and Spirit's into Swift Wind. Instead of the war-cry, "By the Power of Grayskull," Adora's transformation is triggered by calling "For the Honor of Grayskull." 
It is identical in overall design to the Sword of Power, with one exception: the Sword of Protection has a jewel embedded in the hilt. The jewel is the key to such powers of the Sword of Protection as Adora's transformation; if it is damaged, she loses her ability to transform into She-Ra, as seen in the episode "The Stone in the Sword." The stone, which was created by the Goddess of Grayskull, allows Adora/She-Ra to channel all the powers of Grayskull if needed. She-Ra's sword is discovered to be a direct clone of He-Man's, as the Goddess felt that Adora's destiny would require her to also tap into the powers of Grayskull with her own sword. In addition to being a formidable weapon capable of cutting through most substances or deflecting attacks, the Sword of Protection has the ability to change its shape, a trait not shared by the Sword of Power. She-Ra can change the sword into a variety of weapons or tools through spoken command, varying from a shield or lasso to a helmet or flaming blade. She-Ra can also use her sword to draw upon the mystical power of the planet Etheria itself, increasing her strength beyond her usual levels. According to the 2015 DC Comics series He-Man - The Eternity War, the Sword of Protection was forged in case the Sword of Power fell into the wrong hands or the wielder of it became corrupted. In Netflix's She-Ra and the Princesses of Power, the Sword of Protection is an amalgam of technology and magic. Created by the First Ones, the sword has had previous wielders, all of whom have been able to transform into She-Ra, suggesting that She-Ra is more of a title than an individual. The sword is capable of interacting with other pieces of First Ones' tech, projecting bolts of energy, and transforming animals in a manner similar to Swift Wind's transformation. Further reading Mastering the Universe: He-Man and the Rise and Fall of a Billion-Dollar Idea by Roger Sweet and David Wecker, Emmis Books (2005) The Art of He-Man and the Masters of the Universe, Dark Horse Books (2015) He-Man and the Masters of the Universe: A Character Guide and World Compendium, Dark Horse Books (2017) References External links How He-Man's Sword Retcons A Story From The Toy Line Fantasy weapons Fictional elements introduced in 1982 Fictional swords Magic items Masters of the Universe
Power Sword
Physics
2,882
1,422,748
https://en.wikipedia.org/wiki/Nonlinear%20Schr%C3%B6dinger%20equation
In theoretical physics, the (one-dimensional) nonlinear Schrödinger equation (NLSE) is a nonlinear variation of the Schrödinger equation. It is a classical field equation whose principal applications are to the propagation of light in nonlinear optical fibers and planar waveguides and to Bose–Einstein condensates confined to highly anisotropic, cigar-shaped traps, in the mean-field regime. Additionally, the equation appears in the studies of small-amplitude gravity waves on the surface of deep inviscid (zero-viscosity) water; the Langmuir waves in hot plasmas; the propagation of plane-diffracted wave beams in the focusing regions of the ionosphere; the propagation of Davydov's alpha-helix solitons, which are responsible for energy transport along molecular chains; and many others. More generally, the NLSE appears as one of the universal equations that describe the evolution of slowly varying packets of quasi-monochromatic waves in weakly nonlinear media that have dispersion. Unlike the linear Schrödinger equation, the NLSE never describes the time evolution of a quantum state. The 1D NLSE is an example of an integrable model. In quantum mechanics, the 1D NLSE is a special case of the classical nonlinear Schrödinger field, which in turn is a classical limit of a quantum Schrödinger field. Conversely, when the classical Schrödinger field is canonically quantized, it becomes a quantum field theory (which is linear, despite the fact that it is called "quantum nonlinear Schrödinger equation") that describes bosonic point particles with delta-function interactions: the particles either repel or attract when they are at the same point. In fact, when the number of particles is finite, this quantum field theory is equivalent to the Lieb–Liniger model. Both the quantum and the classical 1D nonlinear Schrödinger equations are integrable. Of special interest is the limit of infinite-strength repulsion, in which case the Lieb–Liniger model becomes the Tonks–Girardeau gas (also called the hard-core Bose gas, or impenetrable Bose gas). In this limit, the bosons may, by a change of variables that is a continuum generalization of the Jordan–Wigner transformation, be transformed to a system of one-dimensional noninteracting spinless fermions. The nonlinear Schrödinger equation is a simplified 1+1-dimensional form of the Ginzburg–Landau equation, introduced by Ginzburg and Landau in 1950 in their work on superconductivity, and was later written down explicitly in studies of optical beams. The multi-dimensional version replaces the second spatial derivative by the Laplacian. In more than one dimension, the equation is not integrable; it allows for collapse and wave turbulence. Definition The nonlinear Schrödinger equation is a nonlinear partial differential equation, applicable to classical and quantum mechanics. Classical equation The classical field equation (in dimensionless form) is: i ∂ψ/∂t = −(1/2) ∂²ψ/∂x² + κ|ψ|²ψ for the complex field ψ(x,t). This equation arises from the Hamiltonian H = ∫ dx [ (1/2)|∂ψ/∂x|² + (κ/2)|ψ|⁴ ] with the Poisson brackets {ψ(x), ψ*(y)} = iδ(x − y) and {ψ(x), ψ(y)} = {ψ*(x), ψ*(y)} = 0. Unlike its linear counterpart, it never describes the time evolution of a quantum state. The case with negative κ is called focusing and allows for bright soliton solutions (localized in space, and having spatial attenuation towards infinity) as well as breather solutions. It can be solved exactly by use of the inverse scattering transform, as shown by Zakharov and Shabat (see below). The other case, with κ positive, is the defocusing NLS which has dark soliton solutions (having constant amplitude at infinity, and a local spatial dip in amplitude). 
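For concreteness, here is a worked example added for illustration (it is not part of the original article; the normalization assumes the dimensionless equation above with κ = −1). Direct substitution verifies that the focusing equation admits the bright one-soliton solution below, where a sets the amplitude and inverse width and v the velocity:

```latex
% Bright one-soliton of the focusing NLSE  i\psi_t = -\tfrac{1}{2}\psi_{xx} - |\psi|^{2}\psi :
\[
  \psi(x,t) = a\,\operatorname{sech}\!\bigl(a(x - vt)\bigr)\,
              \exp\!\Bigl[\, i\bigl(vx + \tfrac{1}{2}(a^{2} - v^{2})\,t\bigr) \Bigr]
\]
% |psi| keeps its sech profile for all t, which is the defining soliton property.
```

Setting v = 0 gives the stationary soliton a sech(ax) exp(ia²t/2), in which the dispersion of the sech profile is exactly balanced by the focusing nonlinearity; the moving solution is then recovered from it by the Galilean invariance discussed below.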
Quantum mechanics To get the quantized version, simply replace the Poisson brackets by commutators, [ψ(x), ψ*(y)] = δ(x − y), and normal order the Hamiltonian. The quantum version was solved by Bethe ansatz by Lieb and Liniger. Thermodynamics was described by Chen-Ning Yang. Quantum correlation functions also were evaluated by Korepin in 1993. The model has higher conservation laws; Davies and Korepin in 1989 expressed them in terms of local fields. Solution The nonlinear Schrödinger equation is integrable in 1d: Zakharov and Shabat solved it with the inverse scattering transform. The corresponding linear system of equations is known as the Zakharov–Shabat system: ∂φ1/∂x = −iλφ1 + qφ2, ∂φ2/∂x = iλφ2 + rφ1, where λ is the spectral parameter and the potentials q and r are built from the field ψ. The nonlinear Schrödinger equation arises as the compatibility condition of the Zakharov–Shabat system. By setting q = r* or q = −r*, the nonlinear Schrödinger equation with attractive or repulsive interaction is obtained. An alternative approach uses the Zakharov–Shabat system directly and employs a Darboux transformation which leaves the system invariant. Here, φ is another invertible matrix solution (different from ϕ) of the Zakharov–Shabat system with spectral parameter Ω. Starting from the trivial solution U = 0 and iterating, one obtains the solutions with n solitons. Solutions can also be obtained by direct numerical simulation using, for example, the split-step method. Applications Fiber optics In optics, the nonlinear Schrödinger equation occurs in the Manakov system, a model of wave propagation in fiber optics. The function ψ represents a wave and the nonlinear Schrödinger equation describes the propagation of the wave through a nonlinear medium. The second-order derivative represents the dispersion, while the κ term represents the nonlinearity. The equation models many nonlinearity effects in a fiber, including but not limited to self-phase modulation, four-wave mixing, second-harmonic generation, stimulated Raman scattering, optical solitons, ultrashort pulses, etc. Water waves For water waves, the nonlinear Schrödinger equation describes the evolution of the envelope of modulated wave groups. In a paper in 1968, Vladimir E. Zakharov describes the Hamiltonian structure of water waves. In the same paper Zakharov shows that, for slowly modulated wave groups, the wave amplitude satisfies the nonlinear Schrödinger equation, approximately. The value of the nonlinearity parameter κ depends on the relative water depth. For deep water, with the water depth large compared to the wavelength of the water waves, κ is negative and envelope solitons may occur. Additionally, the group velocity of these envelope solitons could be increased by an acceleration induced by an external time-dependent water flow. For shallow water, with wavelengths longer than 4.6 times the water depth, the nonlinearity parameter κ is positive and wave groups with envelope solitons do not exist. In shallow water, surface-elevation solitons or waves of translation do exist, but they are not governed by the nonlinear Schrödinger equation. The nonlinear Schrödinger equation is thought to be important for explaining the formation of rogue waves. The complex field ψ, as appearing in the nonlinear Schrödinger equation, is related to the amplitude and phase of the water waves. Consider a slowly modulated carrier wave with water surface elevation η of the form: η = a(x0, t0) cos[k0x0 − ω0t0 + θ(x0, t0)], where a(x0, t0) and θ(x0, t0) are the slowly modulated amplitude and phase. Further ω0 and k0 are the (constant) angular frequency and wavenumber of the carrier waves, which have to satisfy the dispersion relation ω0 = Ω(k0). 
Then ψ = a exp(iθ), so its modulus |ψ| is the wave amplitude a, and its argument arg(ψ) is the phase θ. The relation between the physical coordinates (x0, t0) and the (x, t) coordinates, as used in the nonlinear Schrödinger equation given above, is given by: x = x0 − Ω'(k0) t0 and t = t0. Thus (x, t) is a transformed coordinate system moving with the group velocity Ω'(k0) of the carrier waves. The dispersion-relation curvature Ω"(k0), representing group velocity dispersion, is always negative for water waves under the action of gravity, for any water depth. For waves on the water surface of deep water, the coefficients of importance for the nonlinear Schrödinger equation can be expressed in terms of the carrier frequency ω0, the wavenumber k0, and g, the acceleration due to gravity at the Earth's surface. In the original (x0, t0) coordinates the nonlinear Schrödinger equation for water waves takes a corresponding form in terms of ψ and its complex conjugate, with the coefficients rescaled accordingly for deep water waves. Vortices Hasimoto showed that the work of da Rios on vortex filaments is closely related to the nonlinear Schrödinger equation. Subsequently, this correspondence was used to show that breather solutions can also arise for a vortex filament. Galilean invariance The nonlinear Schrödinger equation is Galilean invariant in the following sense: Given a solution ψ(x, t), a new solution can be obtained by replacing x with x + vt everywhere in ψ(x, t) and by appending a phase factor of exp(−iv(x + vt/2)): ψ(x, t) ↦ ψ[v](x, t) = ψ(x + vt, t) exp(−iv(x + vt/2)). Gauge equivalent counterpart The NLSE is gauge equivalent to the following isotropic Landau-Lifshitz equation (LLE) or Heisenberg ferromagnet equation: ∂S/∂t = S × ∂²S/∂x², with S a unit spin vector. Note that this equation admits several integrable and non-integrable generalizations in 2 + 1 dimensions like the Ishimori equation and so on. Zero-curvature formulation The NLSE is equivalent to the curvature of a particular connection on the (x, t) plane being equal to zero. Explicitly, with coordinates (x, t), the connection components U and V can be expressed in terms of the Pauli matrices. Then the zero-curvature equation ∂tU − ∂xV + [U, V] = 0 is equivalent to the NLSE. The zero-curvature equation is so named as it corresponds to the curvature of the connection defined by U and V being equal to zero. The pair of matrices U and V are also known as a Lax pair for the NLSE, in the sense that the zero-curvature equation recovers the PDE rather than them satisfying Lax's equation. See also AKNS system Eckhaus equation Gross–Pitaevskii equation Quartic interaction for a related model in quantum field theory Soliton (optics) Logarithmic Schrödinger equation References Notes Other External links Tutorial lecture on Nonlinear Schrodinger Equation (video). Nonlinear Schrodinger Equation with a Cubic Nonlinearity at EqWorld: The World of Mathematical Equations. Nonlinear Schrodinger Equation with a Power-Law Nonlinearity at EqWorld: The World of Mathematical Equations. Nonlinear Schrodinger Equation of General Form at EqWorld: The World of Mathematical Equations. Mathematical aspects of the nonlinear Schrödinger equation at Dispersive Wiki Partial differential equations Exactly solvable models Schrödinger equation Integrable systems
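The split-step method mentioned in the Solution section is straightforward to sketch. The following Python fragment (an illustration added here, not part of the article; the grid size, time step, and soliton amplitude are arbitrary choices) integrates the focusing equation i ∂ψ/∂t = −(1/2) ∂²ψ/∂x² − |ψ|²ψ by alternating an exact linear half-step in Fourier space with an exact pointwise nonlinear step, and checks that a bright soliton keeps its shape:

```python
import numpy as np

# Illustrative split-step Fourier integration of  i psi_t = -1/2 psi_xx - |psi|^2 psi
N, L = 1024, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # spectral wavenumbers
dt, steps = 1e-3, 5000

a = 1.0
psi = (a / np.cosh(a * x)).astype(complex)      # bright soliton initial condition, v = 0

half_linear = np.exp(-0.5j * k**2 * (dt / 2))   # exact linear half-step in Fourier space
for _ in range(steps):
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))
    psi *= np.exp(1j * np.abs(psi)**2 * dt)     # exact nonlinear step (|psi| is constant during it)
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))

print(np.abs(psi).max())   # stays close to a = 1.0 if the soliton is preserved
```

Because each substep is solved exactly in its own representation, this Strang-split scheme is second-order accurate in the time step dt.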
Nonlinear Schrödinger equation
Physics
2,187
70,515,748
https://en.wikipedia.org/wiki/Mercedes-Benz%20DTM%20V8%20engine
The Mercedes-Benz DTM V8 engine is a prototype, four-stroke, 4.0-liter, naturally aspirated V8 racing engine, developed and produced by Mercedes-Benz for the Deutsche Tourenwagen Masters between 2000 and 2018. Engine The Mercedes-Benz DTM V8 engine is a , naturally-aspirated, V8 engine, with a power output of between and a maximum torque of . It is a 90-degree V8 engine with four valves per cylinder, uses indirect fuel injection, and has 2 x 28 mm air restrictors due to regulations. Applications AMG-Mercedes CLK-DTM Mercedes-Benz AMG C-Class DTM (W203) Mercedes-Benz AMG C-Class DTM (W204) Mercedes-AMG C-Coupé DTM References V8 engines Mercedes-Benz engines Gasoline engines by model Engines by model Piston engines Internal combustion engine
Mercedes-Benz DTM V8 engine
Technology,Engineering
188
71,432,937
https://en.wikipedia.org/wiki/Ruthenium%28III%29%20fluoride
Ruthenium(III) fluoride is a fluoride of ruthenium, with the chemical formula RuF3. Preparation Ruthenium(III) fluoride can be obtained by the reduction of ruthenium(V) fluoride with iodine at 250 °C: 5 RuF5 + I2 -> 5 RuF3 + 2 IF5 Properties Ruthenium(III) fluoride is a dark brown solid that is insoluble in water. It crystallizes in the space group R-3c (No. 167). References Fluorides Ruthenium(III) compounds
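As a quick sanity check on the preparation equation, one can verify that mass balances using standard atomic weights. The following Python lines are an illustration added here (the atomic weights are rounded reference values, not data from the article):

```python
# Mass balance of  5 RuF5 + I2 -> 5 RuF3 + 2 IF5  (g/mol, rounded atomic weights)
Ru, F, I = 101.07, 19.00, 126.90

RuF5 = Ru + 5 * F            # 196.07
RuF3 = Ru + 3 * F            # 158.07
I2 = 2 * I                   # 253.80
IF5 = I + 5 * F              # 221.90

print(5 * RuF5 + I2)         # reactants: ~1234.2
print(5 * RuF3 + 2 * IF5)    # products:  ~1234.2 -> the equation balances
```

Counting atoms directly gives the same result: 5 Ru, 25 F, and 2 I on each side.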
Ruthenium(III) fluoride
Chemistry
128
897
https://en.wikipedia.org/wiki/Arsenic
Arsenic is a chemical element with the symbol As and the atomic number 33. It is a metalloid and one of the pnictogens, and therefore shares many properties with its group 15 neighbors phosphorus and antimony. Arsenic is a notoriously toxic heavy metal. It occurs naturally in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry. The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is also a common n-type dopant in semiconductor electronic devices, and a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds. Arsenic has been known since ancient times to be poisonous to humans. However, a few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic have been proposed to be an essential dietary element in rats, hamsters, goats, and chickens. Research has not been conducted to determine whether small amounts of arsenic may play a role in human metabolism. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world. The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic number 1 in its 2001 prioritized list of hazardous substances at Superfund sites. Arsenic is classified as a Group-A carcinogen. Characteristics Physical characteristics The three most common arsenic allotropes are grey, yellow, and black arsenic, with grey being the most common. Grey arsenic (α-As, space group R-3m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, grey arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Grey arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Grey arsenic is also the most stable form. Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into grey arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus. Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. Black arsenic is also a poor electrical conductor. 
Arsenic sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 615 °C. The triple point is at 3.63 MPa and 817 °C. Isotopes Arsenic occurs in nature as one stable isotope, 75As, and is therefore called a monoisotopic element. As of 2024, at least 32 radioisotopes have also been synthesized, ranging in atomic mass from 64 to 95. The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=26.26 hours), and 77As (t1/2=38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds. Chemistry Arsenic has an electronegativity and ionization energies similar to those of its lighter pnictogen congener phosphorus, and therefore readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity, which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic forms arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion, as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the +5 oxidation state than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers. Compounds Compounds of arsenic resemble, in some respects, those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As4 ions in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons. Inorganic compounds One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, presence of light and certain catalysts (namely aluminium), facilitate the rate of decomposition. 
It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen. Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5, which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts, known as arsenates, are a major source of arsenic contamination of groundwater in regions with high levels of naturally-occurring arsenic minerals. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons. The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3. A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. Arsenic has a formal oxidation state of +2 in As4S4, which features As-As bonds so that the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes. All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.) Alloys Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different, which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide. Organoarsenic compounds A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive, garlic-like odor; it is very toxic. Occurrence and production Arsenic is the 53rd most abundant element in the Earth's crust, comprising about 1.5 parts per million (0.00015%). Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Arsenic is the 22nd most abundant element in seawater and ranks 41st in abundance in the universe. Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. 
Arsenic also occurs in various organic forms in the environment. In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust. On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from molten lead-arsenic mixture. History The word arsenic has its origin in the Syriac word zarnika, from Arabic al-zarnīḵ 'the orpiment', based on Persian zar ("gold") from the word zarnikh, meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek (using folk etymology) as arsenikon, a neuter form of the Greek adjective arsenikos, meaning "male", "virile". Latin-speakers adopted the Greek term as arsenicum, which in French ultimately became arsenic, whence the English word "arsenic". Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, the substance was frequently used for murder until the advent in the 1830s of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". Arsenic became known as "the inheritance powder" due to its use in killing family members in the Renaissance era. During the Bronze Age, arsenic was melted with copper to make arsenical bronze. Jabir ibn Hayyan described the isolation of arsenic before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rarely. Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt through the reaction of potassium acetate with arsenic trioxide. In the Victorian era, women would eat "arsenic" ("white arsenic" or arsenic trioxide) mixed with vinegar and chalk to improve the complexion of their faces, making their skin paler (to show they did not work in the fields). The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. From the late 18th century, wallpaper production began to use dyes made from arsenic, which was thought to increase the pigment's brightness. One account of the illness and 1821 death of Napoleon I implicates arsenic poisoning involving wallpaper. Two arsenic pigments have been widely used since their discovery: Paris Green in 1814 and Scheele's Green in 1775. 
After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion. It was later replaced with Paris Green, another arsenic-based dye. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942. In small doses, soluble arsenic compounds act as stimulants, and were once popular as medicines in the mid-18th to 19th centuries; this use was especially prevalent for sport animals such as race horses or work dogs and continued into the 20th century. A 2006 study of the remains of the Australian racehorse Phar Lap determined that its 1932 death was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system." Applications Agricultural The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations). Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out in the United States by 2013 in all agricultural activities except cotton farming. The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite (AsO33−) is more soluble than arsenate (AsO43−) and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity. Arsenic is used as a feed additive in poultry and swine production; in particular, it was used in the U.S. until 2015 to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. In 2011, Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continued to sell nitarsone until 2015, primarily for use in turkeys. 
Medical use During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler), for treating diseases such as cancer or psoriasis. Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis, in spite of their severe toxicity, since the disease is almost uniformly fatal if untreated. In 2000, the US Food and Drug Administration approved arsenic trioxide for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid. A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland, producing signal noise. Nanoparticles of arsenic have shown the ability to kill cancer cells with less cytotoxicity than other arsenic formulations. Alloys The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper", with an arsenic content of 0.3%, has increased corrosion resistance in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light. Military After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice. Other uses Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenite, was used in the 19th century as a coloring agent in sweets. Arsenic is used in bronzing. As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets. Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments. Arsenic is also used for taxonomic sample preservation; historically it was used in embalming fluids, and it remained in use in taxidermy until the 1980s. Arsenic was used as an opacifier in ceramics, creating white glazes. Until recently, arsenic was used in optical glass; modern glass manufacturers have ceased using both arsenic and lead. Biological role Bacteria Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite; the enzymes involved in this arsenate respiration are known as arsenate reductases (Arr). Under oxidative environmental conditions, some bacteria instead use arsenite as fuel, which they oxidize to arsenate. 
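The energetics underlying these bacterial metabolisms can be summarized by the two-electron arsenate/arsenite half-reaction; the standard potential shown is the commonly tabulated value for acidic solution and is quoted here for illustration only:

\[ \mathrm{H_3AsO_4 + 2\,H^+ + 2\,e^- \rightleftharpoons H_3AsO_3 + H_2O}, \qquad E^\circ \approx +0.56\ \mathrm{V} \]

Arsenate respirers run this couple in the forward direction, pairing it with the oxidation of an external fuel, while arsenite oxidizers run it in reverse and pass the electrons to a stronger oxidant such as oxygen or nitrate.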
In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as the electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues. In 2011, it was postulated that the Halomonadaceae strain GFAJ-1 could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are structurally similar. The study was widely criticised and subsequently refuted by independent research groups. Potential role in higher animals Arsenic may be an essential trace mineral in birds, involved in the synthesis of methionine metabolites. However, the role of arsenic in bird nutrition is disputed, as other authors state that arsenic is toxic in small amounts. Some evidence indicates that arsenic is an essential trace mineral in mammals. Heredity Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of the tumor suppressor genes p16 and p53, thus increasing the risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic species involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility. The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation. Biomethylation Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 μg/day. Values of about 1,000 μg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic. Environmental issues Exposure Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts. During the Victorian era, arsenic was widely used in home decor, especially wallpapers. In Europe, an analysis of 20,000 soil samples across all 28 countries shows that 98% of sampled soils have concentrations of less than 20 mg/kg. 
In addition, the arsenic hotspots are associated with frequent fertilization and proximity to mining activities. Occurrence in drinking water Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater owing to the anoxic conditions of the subsurface. This groundwater came into use after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand, in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without it being a public health problem, because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a 2017 report in Science. Podgorski's team investigated more than 1,200 samples; more than 66% exceeded the WHO guideline for contamination. Since the 1980s, residents of the Ba Men region of Inner Mongolia, China, have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 μg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water. A study by IIT Kharagpur found high levels of arsenic in the groundwater beneath 20% of India's land area, exposing more than 250 million people. States such as Punjab, Bihar, West Bengal, Assam, Haryana, Uttar Pradesh, and Gujarat have the largest land areas exposed to arsenic. In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 ppb drinking water standard. According to a film funded by the US Superfund program, millions of private wells have unknown arsenic levels, and in some areas of the US more than 20% of the wells may contain levels that exceed established limits. Low-level exposure to arsenic at concentrations of 100 ppb (i.e., above the 10 ppb drinking water standard) compromises the initial immune response to H1N1 or swine flu infection, according to NIEHS-supported scientists. 
The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus. Some Canadians are drinking water that contains inorganic arsenic. Water from privately dug wells is most at risk of containing inorganic arsenic, and preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic. Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contamination levels below 50 ppb. Arsenic is itself a constituent of tobacco smoke. Analysis of multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited analysis of multiple epidemiological studies would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation. Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminium oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on an in-situ remediation method (SAR Technology). This technology does not use any chemicals; arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic-oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap. Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water. Magnetic separation of arsenic at very low magnetic field gradients, using high-surface-area, monodisperse magnetite (Fe3O4) nanocrystals, has been demonstrated in point-of-use water purification. 
The high specific surface area of Fe3O4 nanocrystals dramatically reduces the mass of waste associated with arsenic removal from water. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes. Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 μg/L. This may find applications in areas where the potable water is extracted from underground aquifers. San Pedro de Atacama For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Hazard maps for contaminated groundwater Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground. Redox transformation of arsenic in natural waters Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution. Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When the water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria can gain energy by directly reducing As(V) to As(III) or by reducing the element at its binding site, releasing inorganic arsenic. The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42− and AsO43− at pH below 2, 2–7, 7–11 and above 11, respectively. Under reducing conditions, arsenous acid (H3AsO3) is predominant at pH 2–9. Oxidation and reduction affect the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments, and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter, which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced, and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments. 
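As a minimal sketch of the pH speciation just described, the following Python fragment assigns the dominant As(V) oxyanion from approximate acid dissociation constants of arsenic acid (pKa of about 2.2, 6.9 and 11.5); both the constants and the code are illustrative assumptions rather than part of the source text:

    # Dominant As(V) species as a function of pH under oxic conditions.
    # The pKa values of arsenic acid (~2.2, ~6.9, ~11.5) are approximate
    # literature values assumed here for illustration.
    PKA = (2.2, 6.9, 11.5)
    SPECIES = ("H3AsO4", "H2AsO4-", "HAsO4^2-", "AsO4^3-")

    def dominant_as_v_species(ph):
        """Return the As(V) species that predominates at a given pH."""
        deprotonations = sum(1 for pka in PKA if ph > pka)  # protons lost
        return SPECIES[deprotonations]

    for ph in (1.0, 4.0, 8.0, 12.0):
        print(ph, dominant_as_v_species(ph))
    # Prints H3AsO4, H2AsO4-, HAsO4^2-, AsO4^3-, reproducing the pH
    # windows (below 2, 2-7, 7-11, above 11) given above.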
The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic. Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water, so arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides, especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. Reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite, which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface, which leads to the desorption of bound arsenic. Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate, and use the energy obtained to fix CO2 into organic carbon. HAO cannot obtain energy from As(III) oxidation; this process may be an arsenic detoxification mechanism for the bacteria. Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where sulfate reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks, but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria, with rate constants ranging from 0.02 to 0.3 day−1. Wood preservation in the US As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in the production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole. 
Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater. Mapping of industrial releases in the US One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources. Bioremediation Physical, chemical, and biological methods have been used to remediate arsenic-contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the more toxic form of arsenic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation, but the disposal of the contaminated plant material needs to be considered. Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor, while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination. Arsenic removal Coagulation and flocculation are closely related processes commonly used for arsenate removal from water. Because arsenate ions carry a net negative charge, they settle slowly or not at all, owing to charge repulsion. In coagulation, a positively charged coagulant, such as an iron or aluminum salt (commonly FeCl3, Fe2(SO4)3 or Al2(SO4)3), neutralizes the negatively charged arsenate, enabling it to settle. 
Flocculation follows, in which a flocculant bridges the smaller particles and allows the aggregate to precipitate out of the water. However, such methods may not be efficient for arsenite, as As(III) exists as uncharged arsenious acid, H3AsO3, at near-neutral pH. The major drawbacks of coagulation and flocculation are the costly disposal of arsenate-concentrated sludge and possible secondary contamination of the environment. Moreover, coagulants such as iron may produce ion contamination that exceeds safety levels. Toxicity and precautions Arsenic and many of its compounds are especially potent poisons (e.g. arsine). Small amounts of arsenic can be detected by pharmacopoeial methods, which include reduction of the arsenic to arsine gas with the help of zinc, confirmed with mercuric chloride paper. Classification Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC. The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens. Arsenic is known to cause arsenicosis when present in drinking water, the most common species being arsenate (As(V)) and arsenite (As(III)). Legal limits, food, and drink In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3. In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard. 
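Because 1 ppb in water corresponds to 1 μg per litre, the limits above translate directly into intake estimates. A small sketch, assuming, purely for illustration, an adult who drinks 2 litres of water per day:

    # Estimate daily arsenic intake from drinking water.
    # 1 ppb in water = 1 microgram per litre; the 2 L/day consumption
    # figure is an assumption made for illustration.
    def daily_intake_ug(concentration_ppb, litres_per_day=2.0):
        """Daily arsenic intake in micrograms."""
        return concentration_ppb * litres_per_day

    for limit_ppb in (5, 10, 23):  # NJ limit, EPA standard, FDA juice level of concern
        print(limit_ppb, "ppb ->", daily_intake_ug(limit_ppb), "ug/day")
    # The 10 ppb standard corresponds to about 20 ug/day, comparable to
    # the average dietary intake of 10-50 ug/day cited earlier.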
Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram, or 1000 ppb). In 2005, concern was raised that people eating U.S. rice might exceed WHO standards for personal arsenic intake. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic. In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013 was still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior. Consumer Reports recommended: That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production; That the FDA establish a legal limit for food; That industry change production practices to lower arsenic levels, especially in food for children; and That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then drain it off (reducing inorganic arsenic by about one third, along with a slight reduction in vitamin content). Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice. A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice. Reducing arsenic content in rice In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water absorption. Occupational exposure limits Ecotoxicity Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. Uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly drained soils. Toxicity in animals Biological mechanism Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes. Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, can generate reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. 
The organ failure is presumed to result from necrotic cell death rather than apoptosis, since energy reserves have been too depleted for apoptosis to occur. Exposure risks and remediation Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry. The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites". The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic. Treatment Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the US Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer, in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity. See also Aqua Tofana Arsenic and Old Lace Grainger challenge Hypothetical types of biochemistry References Bibliography Further reading External links WHO fact sheet on arsenic Arsenic Cancer Causing Substances, U.S. National Cancer Institute. CTD's Arsenic page and CTD's Arsenicals page from the Comparative Toxicogenomics Database Contaminant Focus: Arsenic by the EPA. Environmental Health Criteria for Arsenic and Arsenic Compounds, 2001 by the WHO. National Institute for Occupational Safety and Health – Arsenic Page Chemical elements Metalloids Semimetals Hepatotoxins Pnictogens Endocrine disruptors IARC Group 1 carcinogens Trigonal minerals Minerals in space group 166 Teratogens Fetotoxicants Suspected testicular toxicants Native element minerals Chemical elements with rhombohedral structure
Arsenic
Physics,Chemistry,Materials_science
10,952
14,900
https://en.wikipedia.org/wiki/ISO%203166
ISO 3166 is a standard published by the International Organization for Standardization (ISO) that defines codes for the names of countries, dependent territories, special areas of geographical interest, and their principal subdivisions (e.g., provinces or states). The official name of the standard is Codes for the representation of names of countries and their subdivisions. Parts It consists of three parts: ISO 3166-1, Codes for the representation of names of countries and their subdivisions – Part 1: Country codes, defines codes for the names of countries, dependent territories, and special areas of geographical interest. It defines three sets of country codes: ISO 3166-1 alpha-2 – two-letter country codes which are the most widely used of the three, and used most prominently for the Internet's country code top-level domains (with a few exceptions). ISO 3166-1 alpha-3 – three-letter country codes which allow a better visual association between the codes and the country names than the alpha-2 codes. ISO 3166-1 numeric – three-digit country codes which are identical to those developed and maintained by the United Nations Statistics Division, with the advantage of script (writing system) independence, and hence useful for people or systems using non-Latin scripts. ISO 3166-2, Codes for the representation of names of countries and their subdivisions – Part 2: Country subdivision code, defines codes for the names of the principal subdivisions (e.g., provinces, states, departments, regions) of all countries coded in ISO 3166-1. ISO 3166-3, Codes for the representation of names of countries and their subdivisions – Part 3: Code for formerly used names of countries, defines codes for country names which have been deleted from ISO 3166-1 since its first publication in 1974. Editions The first edition of ISO 3166, which included only alphabetic country codes, was published in 1974. The second edition, published in 1981, also included numeric country codes, with the third and fourth editions published in 1988 and 1993 respectively. The fifth edition, published between 1997 and 1999, was expanded into three parts to include codes for subdivisions and former countries. ISO 3166 Maintenance Agency The ISO 3166 standard is maintained by the ISO 3166 Maintenance Agency (ISO 3166/MA), located at the ISO central office in Geneva. Originally it was located at the Deutsches Institut für Normung (DIN) in Berlin. Its principal tasks are: To add and to eliminate country names and to assign code elements to them; To publish lists of country names and code elements; To maintain a reference list of all country code elements and subdivision code elements used and their period of use; To issue newsletters announcing changes to the code tables; To advise users on the application of ISO 3166. Members There are fifteen experts with voting rights on the ISO 3166/MA. 
Nine are representatives of national standards organizations: Association française de normalisation (AFNOR) – France; American National Standards Institute (ANSI) – United States; British Standards Institution (BSI) – United Kingdom; Deutsches Institut für Normung (DIN) – Germany; Japanese Industrial Standards Committee (JISC) – Japan; Standards Australia (SA) – Australia; Kenya Bureau of Standards (KEBS) – Kenya; Standardization Administration of China (SAC) – China; Swedish Standards Institute (SIS) – Sweden. The other six are representatives of major United Nations agencies or other international organizations who are all users of ISO 3166-1: International Atomic Energy Agency (IAEA) International Civil Aviation Organization (ICAO) International Telecommunication Union (ITU) Internet Corporation for Assigned Names and Numbers (ICANN) Universal Postal Union (UPU) United Nations Economic Commission for Europe (UNECE) The ISO 3166/MA has further associated members who do not participate in the votes but who, through their expertise, have significant influence on the decision-taking procedure in the maintenance agency. Codes beginning with "X" Country codes beginning with "X" are reserved for private custom use and are never used for official codes. Despite being reserved for private custom use, such codes may nevertheless appear in other public standards. ISO affirms that no country code beginning with X will ever be standardised. Examples of X codes include: The ISO 3166-based NATO country codes (STANAG 1059, 9th edition) use "X" codes for imaginary exercise countries ranging from XXB for "Brownland" to XXY for "Yellowland", as well as for major commands such as XXE for SHAPE or XXS for SACLANT. The "X" currencies defined in ISO 4217. Current country codes See also ISO standards ISO 3166 Explanatory notes References External links ISO 3166 Maintenance Agency, International Organization for Standardization (ISO) 03166 1974 introductions 1974 establishments Internationalization and localization
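As a small illustration of the three Part 1 code sets, the following Python sketch maps a few alpha-2 codes to their alpha-3 and numeric equivalents. The hand-built table is for demonstration only; real applications would consult the full published standard or a maintained data source.

    # Tiny excerpt of the ISO 3166-1 code sets, keyed by alpha-2 code.
    # The four entries are well-known assignments; the table is
    # illustrative, not a substitute for the full standard.
    COUNTRIES = {
        "DE": ("DEU", "276", "Germany"),
        "FR": ("FRA", "250", "France"),
        "JP": ("JPN", "392", "Japan"),
        "US": ("USA", "840", "United States of America"),
    }

    def describe(alpha2):
        """Render all three ISO 3166-1 codes for one country."""
        alpha3, numeric, name = COUNTRIES[alpha2]
        return f"{name}: alpha-2={alpha2}, alpha-3={alpha3}, numeric={numeric}"

    print(describe("DE"))  # Germany: alpha-2=DE, alpha-3=DEU, numeric=276

Because the numeric codes match those maintained by the United Nations Statistics Division, the same three-digit strings can be used by systems that avoid Latin script.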
ISO 3166
Technology
970
6,797,677
https://en.wikipedia.org/wiki/Regular%20semigroup
In mathematics, a regular semigroup is a semigroup S in which every element is regular, i.e., for each element a in S there exists an element x in S such that axa = a. Regular semigroups are one of the most-studied classes of semigroups, and their structure is particularly amenable to study via Green's relations. History Regular semigroups were introduced by J. A. Green in his influential 1951 paper "On the structure of semigroups"; this was also the paper in which Green's relations were introduced. The concept of regularity in a semigroup was adapted from an analogous condition for rings, already considered by John von Neumann. It was Green's study of regular semigroups which led him to define his celebrated relations. According to a footnote in Green 1951, the suggestion that the notion of regularity be applied to semigroups was first made by David Rees. The term inversive semigroup (French: demi-groupe inversif) was historically used as a synonym in the papers of Gabriel Thierrin (a student of Paul Dubreil) in the 1950s, and it is still used occasionally. The basics There are two equivalent ways in which to define a regular semigroup S: (1) for each a in S, there is an x in S, which is called a pseudoinverse, with axa = a; (2) every element a has at least one inverse b, in the sense that aba = a and bab = b. To see the equivalence of these definitions, first suppose that S is defined by (2). Then b serves as the required x in (1). Conversely, if S is defined by (1), then xax is an inverse for a, since a(xax)a = axa(xa) = axa = a and (xax)a(xax) = x(axa)(xax) = xa(xax) = x(axa)x = xax. The set of inverses (in the above sense) of an element a in an arbitrary semigroup S is denoted by V(a). Thus, another way of expressing definition (2) above is to say that in a regular semigroup, V(a) is nonempty, for every a in S. The product of any element a with any b in V(a) is always idempotent: abab = ab, since aba = a. Examples of regular semigroups Every group is a regular semigroup. Every band (idempotent semigroup) is regular in the sense of this article, though this is not what is meant by a regular band. The bicyclic semigroup is regular. Any full transformation semigroup is regular. A Rees matrix semigroup is regular. The homomorphic image of a regular semigroup is regular. Unique inverses and unique pseudoinverses A regular semigroup in which idempotents commute (with idempotents) is an inverse semigroup, or equivalently, every element has a unique inverse. To see this, let S be a regular semigroup in which idempotents commute. Then every element of S has at least one inverse. Suppose that a in S has two inverses b and c, i.e., aba = a, bab = b, aca = a and cac = c. Also ab, ba, ac and ca are idempotents as above. Then b = bab = b(aca)b = bac(a)b = bac(aca)b = bac(ac)(ab) = bac(ab)(ac) = ba(ca)bac = ca(ba)bac = c(aba)bac = cabac = cac = c. So, by commuting the pairs of idempotents ab & ac and ba & ca, the inverse of a is shown to be unique. Conversely, it can be shown that any inverse semigroup is a regular semigroup in which idempotents commute. The existence of a unique pseudoinverse implies the existence of a unique inverse, but the opposite is not true. For example, in the symmetric inverse semigroup, the empty transformation Ø does not have a unique pseudoinverse, because Ø = ØfØ for any transformation f. The inverse of Ø is unique however, because only one f satisfies the additional constraint that f = fØf, namely f = Ø. This remark holds more generally in any semigroup with zero. 
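Definition (1) can be checked mechanically on a finite semigroup presented by its multiplication. A minimal Python sketch follows; the two-element semilattice ({0, 1} under minimum, a band and hence regular) is an assumed example, not taken from the text:

    # Regularity check for a finite semigroup given by its multiplication.
    # Example: the two-element semilattice {0, 1} under minimum.
    ELEMENTS = (0, 1)

    def mul(a, b):
        return min(a, b)  # associative, so (ELEMENTS, mul) is a semigroup

    def pseudoinverses(a):
        """All x with a*x*a = a, the witnesses of definition (1)."""
        return [x for x in ELEMENTS if mul(mul(a, x), a) == a]

    def inverses(a):
        """V(a): all b with a*b*a = a and b*a*b = b (definition (2))."""
        return [b for b in pseudoinverses(a) if mul(mul(b, a), b) == b]

    print(all(len(pseudoinverses(a)) > 0 for a in ELEMENTS))  # True: regular
    print(pseudoinverses(0))  # [0, 1] -- both satisfy 0*x*0 = 0
    print(inverses(0))        # [0]    -- only 0 also satisfies x*0*x = x

Just as with the empty transformation in the symmetric inverse semigroup, the element 0 here has two pseudoinverses but a unique inverse.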
Furthermore, if every element has a unique pseudoinverse, then the semigroup is a group, and the unique pseudoinverse of an element coincides with the group inverse. Green's relations Recall that the principal ideals of a semigroup S are defined in terms of S1, the semigroup with identity adjoined; this is to ensure that an element a belongs to the principal right, left and two-sided ideals which it generates. In a regular semigroup S, however, an element a = axa automatically belongs to these ideals, without recourse to adjoining an identity. Green's relations can therefore be redefined for regular semigroups as follows: a L b if, and only if, Sa = Sb; a R b if, and only if, aS = bS; a J b if, and only if, SaS = SbS. In a regular semigroup S, every L-class and every R-class contains at least one idempotent. If a is any element of S and a′ is any inverse for a, then a is L-related to a′a and R-related to aa′. Theorem. Let S be a regular semigroup; let a and b be elements of S, and let V(x) denote the set of inverses of x in S. Then a L b if, and only if, there exist a′ in V(a) and b′ in V(b) such that a′a = b′b; a R b if, and only if, there exist a′ in V(a) and b′ in V(b) such that aa′ = bb′; a H b if, and only if, there exist a′ in V(a) and b′ in V(b) such that a′a = b′b and aa′ = bb′. If S is an inverse semigroup, then the idempotent in each L-class and each R-class is unique. Special classes of regular semigroups Some special classes of regular semigroups are: Locally inverse semigroups: a regular semigroup S is locally inverse if eSe is an inverse semigroup, for each idempotent e. Orthodox semigroups: a regular semigroup S is orthodox if its subset of idempotents forms a subsemigroup. Generalised inverse semigroups: a regular semigroup S is called a generalised inverse semigroup if its idempotents form a normal band, i.e., xyzx = xzyx for all idempotents x, y, z. The class of generalised inverse semigroups is the intersection of the class of locally inverse semigroups and the class of orthodox semigroups. All inverse semigroups are orthodox and locally inverse. The converse statements do not hold. Generalizations eventually regular semigroup E-dense (aka E-inversive) semigroup See also Biordered set Special classes of semigroups Nambooripad order Generalized inverse References Sources M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000. J. M. Howie, Semigroups, past, present and future, Proceedings of the International Conference on Algebra and Its Applications, 2002, 6–20. Semigroup theory Algebraic structures
Regular semigroup
Mathematics
1,607
53,174,400
https://en.wikipedia.org/wiki/Kenneth%20Sims%20%28geologist%29
Kenneth W. W. Sims (born 1959) is an American professor of isotope geology in the Department of Geology and Geophysics at the University of Wyoming. Sims operates the University of Wyoming High Precision Isotope Laboratory. Professor Sims is married, has two children and lives in Laramie, Wyoming. Research overview Sims is a National Geographic explorer well known for using his technical mountaineering skills to collect geological samples from remote locations across the globe, including sampling molten magma from lava lakes deep within volcanic craters, collecting temporal sequences of lavas from high, technical ridges on the flanks of the world's tallest volcanoes, and using submersibles to obtain mid-ocean ridge basalts from the bottom of the Earth's oceans. Many of these adventures have been featured in National Geographic publications and documentaries, as well as numerous other media venues. Sims’ research endeavors focus on obtaining hard-to-collect samples and then measuring unique and analytically challenging isotope systems to provide otherwise unobtainable answers to societally relevant questions about Earth systems science. Sims’ research applies a variety of isotopic techniques (U- and Th-decay series, cosmogenic nuclides, radiogenic isotopes, and non-traditional stable isotopes) to address a wide range of topics in earth and ocean sciences. Sims has published nearly one hundred research articles in peer-reviewed scientific journals, including Nature and Science. These publications cover a wide range of topics: magma genesis, differentiation, and degassing; continental and oceanic crustal construction; planetary accretion and core formation; trace-element partitioning; surficial weathering; paleo-oceanography; chemical oceanography; ground water hydrology; water-rock interaction; fumarolic activity; volcanic aerosol formation and dispersal; serpentinization; natural rates of carbon sequestration; and shallow subsurface geophysics. Sims’ major contributions are determining the time scales and dynamics of magma genesis and volcanic processes. Sims’ research is funded by the US National Science Foundation, National Geographic Society, the US Department of Energy, Woods Hole Oceanographic Institution and the University of Wyoming. Career Sims received a B.A. in geology in 1986 from Colorado College, graduating Cum Laude with Honors. He completed an M.Sc. at the University of New Mexico’s Institute of Meteoritics in 1989, where his research focused on chemical fractionation during the formation of the Earth’s core and continental crust. His Ph.D. was earned in 1995 from the University of California, Berkeley, where his research focused on magma genesis in the Earth’s mantle. Sims worked as a student and then as a guest scientist for the Isotope and Nuclear Chemistry Group at the Los Alamos National Laboratory, New Mexico. After completing his Ph.D., Sims was awarded a Woods Hole Oceanographic Institution (WHOI) Postdoctoral Scholar Fellowship from 1995 to 1997. He was then hired onto the WHOI scientific staff in 1997, where he remained as a tenured research scientist until 2009. In 2009, Sims moved to the Department of Geology and Geophysics of the University of Wyoming, where he is now a full professor. Sims was a Visiting CNRS Fellow at the Institut Universitaire Européen de la Mer (IUEM), France in 2002. In 2016, Sims became a US Fulbright scholar and a visiting professor at the Instituto de Geofisico, Escuela Politécnica Nacional, Quito, Ecuador. 
In his current role at the University of Wyoming, Sims is involved in a variety of research projects, graduate and undergraduate teaching, and the supervision of graduate students. Sims is the University of Wyoming Organizational Lead for the Yellowstone Volcano Observatory. Sims has received various academic accolades for his research and public engagement. Additionally, Sims worked as a professional climbing instructor and high-altitude mountaineering guide for 23 years (1975–1998), including in Antarctica, Alaska, Mexico and Peru. During this period, he pioneered many difficult first ascents around the world. Field work Sims’ field work, primarily funded by the National Geographic Society and the National Science Foundation (NSF), has crossed the globe from the bottom of the Earth's oceans to the top of its highest volcanoes. Highlights of Sims’ field research include: Antarctica Sims has been to Antarctica fourteen times since 1989. He has worked both as a guide for science parties funded by the NSF and NASA (including as a guide and rigger for the NASA Dante Rover project), and also as a principal investigator funded by the NSF to conduct research on the volcanoes of Ross Island and Mt Morning. In particular, Sims has worked and published extensively on Mt Erebus, the world's southernmost active volcano, including descending into its active crater numerous times to collect recently erupted lava bombs from its persistent lava lake. Sims’ most recent expeditions (2012–2017) have been to study the volcanoes Mt Bird, Mt Terror and Hut Point on Ross Island and also the volcanic cones on Mt Discovery. Democratic Republic of the Congo Sims’ expeditions to the Democratic Republic of the Congo were funded by the National Geographic Society to film two documentaries and by NSF for research related to volcano hazard assessment. In his research pursuits, Sims has trekked deep into the Virunga jungles to access the remote and often-active volcano Nyamulagira, and he repeatedly descended into the active Nyamulagira crater to collect old lava flows from the crater walls and molten magma from the active lava lake. Ecuador Sims has conducted numerous expeditions into the Ecuadorian volcanoes. Funded by the National Geographic Society in 2014, Sims led a month-long expedition into the remote Sangay volcano (5,300 meters/17,400 feet ASL), which is one of the highest and most continuously active volcanoes in the world. In 2016, while living in Ecuador for six months on a US Fulbright Scholarship, Sims conducted several two-week expeditions to collect samples on the steep, glacially dissected flanks and high-altitude ridges of Chimborazo (6,263 meters/20,548 feet ASL). Sims has also conducted and published NSF- and UW-funded research on the volcanoes Reventador, Sumaco, and the Chalupas Caldera. Mid-ocean ridges Sims has researched the petrology of mid-ocean ridges extensively. Sims worked on several research expeditions to the 9–10°N East Pacific Rise aboard the WHOI-operated Research Vessel (RV) Atlantis, utilizing the U.S. Navy-owned Deep Submergence Vehicle (DSV) Alvin and other remotely operated (DSL-120 and Jason) and autonomous (ABE) vehicles to conduct his research. Sims has also sent his graduate students on research expeditions to the Kolbeinsey Ridge, the SW Indian Ridge, the 45°N Mid Atlantic Ridge, and the 9°03′N East Pacific Rise. 
Yellowstone Wyoming is home to one of the world's supervolcanoes, Yellowstone, which also happens to host the world's most profound and visually stunning example of an active continental hydrothermal system. Since moving to Wyoming, Sims has been coordinating the introduction of novel geochemical and geophysical techniques to study Yellowstone's “geohydrobiology”, the study of how Earth, water and life connect. Other locations Sims’ research has also taken him to volcanoes in Italy (Mt. Etna and Stromboli), Nicaragua (Vulcan Masaya), Iceland (Hekla, Theistareykir, Krafla), Hawaii (Kilauea, Mauna Loa, Hualalai, Mauna Kea and Haleakala), and New Mexico (Jemez Volcanic Field, Zuni-Bandera, and Raton-Clayton Volcanic Field). Sims has worked on problems related to serpentinization and carbonization in the Samail Ophiolite of Oman and the Josephine Peridotite in Oregon. Public engagement Sims’ research and scientific expeditions have been featured in: National Geographic Magazine (October 2004; April 2011); GEO Magazine (October 2012; 2017); National Geographic Explorer Kids (October 2011); Oceanus (Fall, 2006); Popular Mechanics (October 2006); New Scientist (July 2008; December 2008); CNN ("Great Big Story"); MentalFloss (August 2013); the children's book Lava Scientists: Careers on the Edge of Volcanoes (Sarah Latta, Enslow Publishing, Inc.); National Geographic Television (Man versus Volcano, April 2011; One Strange Rock, March 2018); National Geographic Weekend Radio (January 2015); Discovery Channel (Against the Elements, Spring 2009; Volcano Time Bomb, December 2012); NHK Japanese Public Television (Miracle Continent Antarctica); and Boston Museum of Science (“Volcanoes on the Verge”). Sims contributes regularly to the National Geographic Explorers blog. Special Awards and Honors Fulbright US Scholar Award (2016–2017) Top 10 Teacher Award, University of Wyoming (2015) Extraordinary Merit in Research Award, University of Wyoming (2015) National Geographic Society Explorer Faculty Senate Speaker Award, University of Wyoming (2014) Meritorious Teaching Award, University of Wyoming (2014) Papadopoulos Fellow, Kincaid School, Houston, Texas (2012) Mellon Independent Study Awards, Woods Hole Oceanographic Institution (1999; 2001; 2006; 2008) Outstanding Graduate Student Instructor, University of California, Berkeley (1992) Estwing Outstanding Senior Geologist, Colorado College (1986) Getty Oil Fellowship, Colorado College (1984) Significant publications *Authors marked with an asterisk are graduate students working with Sims. Mantle dynamics and magma genesis Sims, K.W.W., S.J. Goldstein, J. Blichert-Toft, M.R. Perfit, P. Kelemen, D.J. Fornari, P. Michael, M.T. Murrell, S.R. Hart, D.J. DePaolo, G.D. Layne, and M. Jull (2002). “Chemical and isotopic constraints on the generation and transport of melt beneath the East Pacific Rise.” Geochimica et Cosmochimica Acta, 66, 19, 3481-3504. doi:10.1016/S0016-7037(02)00909-2. Sims, K.W.W., M.T. Murrell, D.J. DePaolo, W.S. Baldridge, S.J. Goldstein, D. Clague and M. Jull (1999). “Porosity of the melting zone and variations in the solid mantle upwelling rate beneath Hawaii: Inferences from 238U–230Th–226Ra and 235U-231Pa disequilibria.” Geochimica et Cosmochimica Acta, 63, 23, 4119-4138, doi: 10.1016/S0016-7037(99)00313-0. Sims, K.W.W. and D.J. DePaolo (1997). “Inferences about mantle magma sources from incompatible element concentration ratios in oceanic basalts.” Geochimica et Cosmochimica Acta, 61, 4, 765-784. 
doi: 10.1016/S0016-7037(96)00372-9. Sims, K.W.W., D.J. DePaolo, M.T. Murrell, W.S. Baldridge, S.J. Goldstein, and D. Clague (1995). “Mechanisms of magma generation beneath Hawaii and Mid–Ocean ridges: U–Th and Sm–Nd isotopic evidence.” Science, 267, 508–512. doi: 10.1126/science.267.5197.508. Sims, K.W.W., J. Blichert-Toft, P.R. Kyle, S. Pichat, J. Bluzstajn, P.J. Kelly, L.A. Ball, and G. D. Layne (2008). “A Sr, Nd, Hf, and Pb isotope perspective on the genesis and long-term evolution of alkaline magmas from Erebus volcano, Antarctica.” Invited article to special volume on Mt. Erebus in Journal of Volcanology and Geothermal Research, 177, 606-618. doi: 10.1016/j.jvolgeores.2007.08.006. Sims, K.W.W., J. Maclennan, J. Blichert-Toft, E.M. Mervine, J. Bluzstajn, and K. Grönvold (2013). “Short length scale mantle heterogeneity beneath Iceland probed by glacial modulation of melting.” Earth and Planetary Science Letters, 379, 146-157, doi.org/10.1016/j.epsl.2013.07.027. *Waters, C.L., K.W.W. Sims, M.R. Perfit, J. Blichert-Toft, and J. Blusztajn (2011). “Perspective on the genesis of E-MORB from Chemical and Isotopic Heterogeneity at 9-10ºN East Pacific Rise.” Journal of Petrology, 52, 3, 565-602. doi:10.1093/petrology/egq091. *Elkins, L.J., K.W.W. Sims, J. Prytulak, T. Elliott, N. Mattielli, J. Blichert-Toft, J. Blusztajn, C. Devey, D. Mertz, J.-G. Schilling, and M. Murrell (2011). “Understanding melt generation beneath the slow-spreading Kolbeinsey Ridge using 238U, 230Th, and 231Pa excesses.” Geochimica et Cosmochimica Acta, 75, 21, 6300-6329. doi:10.1016/j.gca.2011.08.020. Oceanic crustal construction Sims, K.W.W., J. Blichert-Toft, D.J. Fornari, M.R. Perfit, S.J. Goldstein, P. Johnson, D.J. DePaolo, and P. Michaels (2003). “Aberrant Youth: Chemical and isotopic constraints on the young off-axis lavas of the East Pacific Rise.” Geochemistry, Geophysics, Geosystems, 4, 10, 8621, doi:10.1029/2002GC000443. Sims, K.W.W., S.R. Hart, M.K. Reagan, J. Blusztajn, H. Staudigel, R.A. Sohn, G.D. Layne, L.A. Ball and J. Andrews (2008). “238U-230Th-226Ra-210Pb-210Po, 232Th-228Ra and 235U-231Pa constraints on the ages and petrogenesis of Vailulu and Malumalu Lavas, Samoa.” Geochemistry, Geophysics, Geosystems, 9, Q04003. doi:10.1029/2007GC001651. *Waters, C.L., K.W.W. Sims, E.M. Klein, S.M. White, M.K. Reagan, and G. Girard (2013). “Sill to Surface: Linking Young Off-Axis Volcanism with Subsurface Melt at the Overlapping Spreading Center at 9º03’N East Pacific Rise.” Earth and Planetary Science Letters, 369-370, 59-70. doi.org/10.1016/j.epsl.2013.03.006. *Standish, J.J. and K.W.W. Sims (2010). “Young Volcanism and Rift Valley Construction at an Ultraslow Spreading Ridge.” Nature Geoscience, 3, 4, 286-292. doi: 10.1038/NGEO824. Sohn, R.A. and K.W.W. Sims (2005). “Bending as a mechanism for triggering off-axis volcanism on the East Pacific Rise.” Geology, 33, 2, 93-96. doi: 10.1130/G21116.1. *Waters, C.L., K.W.W. Sims, S.A. Soule, J. Blichert-Toft, N.W. Dunbar, T. Plank, R.A. Sohn, and M.A. Tivey (2013). “Recent Volcanic Accretion at 9-10ºN East Pacific Rise as Resolved by Combined Geochemical and Geological Observations.” Geochemistry, Geophysics, Geosystems, 14. doi: 10.1002/ggge.20134. Shallow magmatic processes Sims, K.W.W., S. Pichat, M.K. Reagan, P.R. Kyle, H. Dulaiova, N. Dunbar, J. Prytulak, G. Sawyer, G. Layne, J. Blichert-Toft, P.J. Gauthier, M.A. Charette, and T.R. Elliott (2013). 
“On the timescales of magma genesis, melt evolution, crystal growth rates and magma degassing in the Erebus volcano magmatic system using the 238U-, 235U- and 232Th-decay series.” Journal of Petrology, 54, 2, 235-271. doi: 10.1093/petrology/egs068. Reagan, M.K., K.W.W. Sims, J. Enrich, R.B. Thomas, H. Cheng, R.L. Edwards, G.D. Layne, and L.A. Ball (2003). “Time-scale of differentiation from mafic parents to rhyolite in North American continental arcs.” Journal of Petrology, 44, 9, 1703-1726. doi: 10.1093/petrology/egg057. Giammanco, S., K.W.W. Sims, and S.M. Neri (2007). “Shallow rock stresses and gas transport at Mt. Etna (Italy) monitored through 220Rn, 222Rn and soil CO2 emissions in soil and fumaroles.” Geochemistry, Geophysics, Geosystems, 8, Q10001. doi: 10.1029/2007GC00164. *Chakrabarti, R., K.W.W. Sims, A.R. Basu, M. Reagan, and J. Durieux (2009). “Timescales of Magmatic Processes and Eruption Ages of the Nyiragongo volcanics from 238U-230Th-226Ra-210Pb disequilibria.” Earth and Planetary Science Letters, 288, 149–157. doi:10.1016/j.epsl.2009.09.017. Reubi, O., K.W.W. Sims, and B. Bourdon (2014). “238U-230Th equilibrium in arc magmas and implications for the time scales of mantle metasomatism.” Earth and Planetary Science Letters, 391, 146-158, doi.org/10.1016/j.epsl.2014.01.054. Reubi, O., K.W.W. Sims, N. Varley, M. Reagan, and J. Eikenberg (2015). “Timescales of degassing and conduit dynamics inferred from 210Pb-226Ra disequilibria in Volcán de Colima 1998-2010 andesitic magmas.” In Caricchi, L. & Blundy, J. D. (eds), Chemical, Physical and Temporal Evolution of Magmatic Systems. Geological Society, London, Special Publications, 422, http://doi.org/10.1144/SP422.5. Cooper, K., K.W.W. Sims, J.M. Eiler, and N. Banerjee (2016). “Time scales of storage and recycling of crystal mush at Krafla Volcano, Iceland.” Contributions to Mineralogy and Petrology, 171, 6, 54. doi: 10.1007/s00410-016-1267-3. Garrison, J.M., K.W.W. Sims, G.M. Yogodzinski, R.D. Escobar, S. Scott, P. Mothes, M.L. Hall, and P. Ramon (2018). “Shallow-level differentiation of phonolitic lavas from Sumaco Volcano, Ecuador.” Contributions to Mineralogy and Petrology, 173, 6. doi: 10.1007/s00410-017-1431-4. Ocean chemistry and processes Pichat, S., K.W.W. Sims, R. François, J.F. McManus, S. Brown-Legger, and F. Albarède (2004). “Lower export production during glacial periods in the equatorial Pacific as derived from (231Pa/230Th) measurements in deep-sea sediments.” Paleoceanography, 19, 4023. doi: 10.1029/2003PA000994. *Owens, S.A., K.O. Buesseler, and K.W.W. Sims (2011). “Re-evaluating the 238U-salinity relationship in seawater: Implications for the 238U-234Th disequilibrium method.” Marine Chemistry, 127, 1-4, 31–39. doi:10.1016/j.marchem.2011.07.005. *Arendt, C.A., S.M. Aciego, K.W.W. Sims, S.B. Das, C. Sheik, and E.I. Stevenson (2018). “Greenland subglacial water and proximal seawater U chemistry: Implications for seawater δ234U on glacial-interglacial timescales.” Geochimica et Cosmochimica Acta, 225, 102-115, doi.org/10.1016/j.gca.2018.01.007. Crustal dynamics and processes Sims, K.W.W., R.P. Ackert, Jr., F. Ramos, R.A. Sohn, M.T. Murrell, and D.J. DePaolo (2007). “Determining eruption ages and erosion rates of Quaternary basaltic volcanism from combined U-series disequilibria and cosmogenic exposure ages.” Geology, 35, 471-474, doi:10.1130/G23381A.1. *Mervine, E.M., K.W.W. Sims, S.E. Humphris, and P.B. Kelemen (2015). 
"The applications and limitations of U-Th disequilbria systematics for determining rates of peridotite carbonation in the Samail Ophiolite, Sultanate of Oman." Chemical Geology, 412, 151-166, .org/10.1016/j.chemgeo.2015.07.023 *Arendt, C.A. S. M. Aciego, K.W.W. Sims, and S. M. Aarons (2017) Seasonal Progression of Uranium Series Isotopes in Subglacial Meltwater: Implications for Subglacial Storage Time. Chemical Geology, 467, 42-52, https://doi.org/10.1016/j.chemgeo.2017.07.007. *Scott, S.R., K. W.W. Sims, B. R. Frost, P. B. Kelemen ,K. A. Evans, and S. Swapp (2017) On the hydration of olivine in ultramafic rocks: Implications from Fe isotopes in serpentinites” (Geochemica Cosmochimica Acta, 215, 105-215, https://doi.org/10.1016/j.gca.2017.07.011 Core formation and planetary differentiation Newsom, H.E. and K.W.W. Sims (1991). “Core formation during early accretion of the Earth.” Science, 252, 926-933. doi: 10.1126/science.252.5008.926. Sims, K.W.W., H.E. Newsom, and E.S. Gladney (1990). “Chemical fractionation during formation of the Earth’s core and continental crust: Clues from As, Sb, W and Mo.” In Origin of the Earth, J. Jones and H.E. Newsom (eds.), New York: Oxford University Press; Houston. Lunar and Planetary Institute. . Newsom, H.E., K.W.W. Sims, P.D. Noll, W.L. Jaeger, S.A. Maehr, and T.B. Bassera (1996). “The depletion of W in the bulk-silicate Earth: constraints on core formation.” Geochimica et Cosmochimica Acta, 60, 7, 1155-1169. doi: 10.1016/0016-7037(96)00029-4. Development of novel analytical protocols Sims, K.W.W., J. Gill, A. Dossetto, D. Hoffmann, C.C. Lundstom, R. Williams, L.A. Ball, D. Tollstrup, S.P. Turner, J. Prytulak, J. Glessner, J.J. Standish, and T. Elliott (2008). “An inter-laboratory assessment of the Th Isotopic Composition of Synthetic and Rock standards.” Geostandards and Analytical Research, 32, 1, 65-91. doi: 10.1111/j.1751-908X.2008.00870.x. Ball, L.A., K.W.W. Sims, and J. Schwieters (2008). “Measurement of 234U/238U and 230Th/232Th in volcanic rocks using the Neptune PIMMS.” Journal Analytical Atomic Spectrometry, 23, 173-180. doi: 10.1039/b703193a. Layne, G.D. and K.W.W. Sims (2000). “Analysis of 232Th/230Th in volcanic rocks by Secondary Ionization Mass Spectrometry.” International Journal of Mass Spectrometry, 203, 1-3, 187-198. Dulaiova, H., K.W.W. Sims, and M.A. Charette (2013). “A new method for the determination of low-level actinium-227 in geological samples.” Journal of RadioAnalytical and Nuclear Chemistry, 296, 279-283. doi:10.1007/s10967-012-2140-0.) Lane-Smith, D. and K.W.W. Sims (2013). “The effect of CO2 on the measurement of 220Rn and 222Rn, with instruments utilizing electrostatic precipitation.” Acta Geophysica, 61, 4, 822-830 (Special volume on Geo-Hazards; Guest editor: Rakesh Chand Ramola) doi: 10.2478/s11600-013-0107-3. Sims, K.W.W., and E.S. Gladney (1991). “Determination of As, Sb, W and Mo in silicate matrices by epithermal neutron activation and inorganic ion exchange.” Analytica Chimica Acta, 251, 297-303. doi: 10.1016/0003-2670(91)87150-6. Sims, K.W.W., E.S. Gladney, C.C. Lundstrom, and N.W. Bower (1988). “Elemental concentrations in Japanese silicate rock standards: a comparison with the literature.” Geostandards Newsletter, 12, 379-389. Choi, M.S., R. Francois, K.W.W. Sims, M.P. Bacon, S. Legger-Brown, A.P. Fleer, L.A. Ball, D. Schneider, and S. Pichat (2001). “Rapid determination of 230Th and 231Pa in seawater by Inductively coupled plasma mass spectrometry.” Marine Chemistry, 76, 99-112. 
Select climbs High altitude alpine climbing Peru, Cordillera Blanca Nevado Huascaran Sur (6768 meters). The West Face via The Shield (Solo); Nevado Chacraraju (6112 meters) South Face via American Direct; Nevado Artesonraju (6025 meters) Southwest Face (Solo); Nevado Tocllaraju (6032 meters); Nevado Chopicalqi (6354 meters); Nevado Copa (6188 meters); Nevado Wamashraju (5434 meters), West Face via Sims-Hanning Route (V, 5.10+) First Ascent. Ecuadorian volcanoes Chimborazo (6263 meters); Cayambe (5790 meters); Sangay (5300 meters). Alaska, Alaska Range Denali (20,320 feet). West Buttress (as a guide); Moose's Tooth, West Ridge. Ice climbing Scotland Ben Nevis (4,406 feet). North Face. Orion Face Direct Grade VI, Point Five Gully Grade V (Solo), Zero Gully Grade V (Solo). Creag Meagaidh (3,658 feet). North Face, North Post Gully Grade V (Solo). Colorado Bridalveil Falls Grade VI, Ames Falls Grade VI, Skylight Grade V, The Squid Grade VI. Utah Stairway to Heaven Grade V, Great White Icicle Grade IV. Northeastern United States The Promenade Grade VI, Repentance Grade VI, The Black Dike Grade IV. Rock climbing Yosemite Valley, California El Capitan Pacific Ocean Wall VI 5.10 A5, Salathe Wall VI 5.11 A3, The Nose VI 5.11 A1. Half Dome North West Face VI 5.11 A1 (6 hours, climbing simultaneously and almost all free); The Rostrum IV 5.11; Washington Column, Astroman IV 5.11 (one-day combined ascent). Black Canyon of the Gunnison, Colorado North Chasm View Wall Air Voyage V 5.12+ (First Free Ascent), Scenic Cruise IV 5.10; Journey Home IV, 5.10R; South Chasm View Wall Mirror Wall IV 5.11, Black Jack III 5.10. Painted Wall Southern Arete V 5.10, Mordor Wall VI 5.10 A4. Longs Peak (14,259 feet), Rocky Mountain National Park, Colorado The Diamond Grand Traverse, Yellow Wall V 5.9 A3 (Winter Ascent), Casual Route VI 5.9. Canyon Country The Witch, Sims-Hesse-Hanning Route, III 5.11+ R (First Free Ascent); Charlie Horse Needle, Sims-Hesse-Hanning Route, II 5.11 or 5.12a (First Free Ascent); Argon Tower, Arches National Park, III 5.11+, West Face (First Free Ascent); Cochina Spire, Zuni Reservation, III 5.11+ A0, West Face (First Ascent of Tower). Resources University of Wyoming faculty Living people 1960 births American geochemists People from Colorado Springs, Colorado
Kenneth Sims (geologist)
Chemistry
6,765
2,930,017
https://en.wikipedia.org/wiki/Avinash%20Dixit
Avinash Kamalakar Dixit (born 6 August 1944) is an Indian-American economist. He is the John J. F. Sherrerd '52 University Professor of Economics Emeritus at Princeton University, and has been distinguished adjunct professor of economics at Lingnan University (Hong Kong), senior research fellow at Nuffield College, Oxford and Sanjaya Lall Senior Visiting Research Fellow at Green Templeton College, Oxford. Education Dixit received a B.Sc. from the University of Mumbai (St. Xavier's College) in 1963 in Mathematics and Physics, a B.A. from Cambridge University in 1965 in Mathematics (Corpus Christi College, First Class), and a Ph.D. in 1968 from the Massachusetts Institute of Technology in Economics. Career Dixit has been the John J. F. Sherrerd '52 University Professor of Economics at Princeton University since July 1989, and Emeritus since 2010. He was also distinguished adjunct professor of economics at Lingnan University (Hong Kong), senior research fellow at Nuffield College, Oxford and Sanjaya Lall Senior Visiting Research Fellow at Green Templeton College, Oxford. He previously taught at the Massachusetts Institute of Technology, the University of California, Berkeley, Balliol College, Oxford and the University of Warwick. In 1994 Dixit received the first-ever CES Fellow Award from the Center for Economic Studies at the University of Munich in Germany. In January 2016, India conferred the Padma Vibhushan, the second highest of India's civilian honors, on Dixit. Dixit has also held visiting scholar positions at the International Monetary Fund and the Russell Sage Foundation. He was president of the Econometric Society in 2001, and was vice-president (2002) and president (2008) of the American Economic Association. He was elected to the American Academy of Arts and Sciences in 1992, the National Academy of Sciences in 2005, and the American Philosophical Society in 2010. He has also been on the Social Sciences jury for the Infosys Prize from 2011. With Robert Pindyck he is author of "Investment Under Uncertainty" (Princeton University Press, 1994), the first textbook exclusively about the real options approach to investments, and described as "a born-classic" in view of its importance to the theory. Selected publications 1976. The Theory of Equilibrium Growth. Oxford University Press. 1977. "Monopolistic Competition and Optimum Product Diversity", The American Economic Review, vol. 67, no. 3, p. 297–308, with Joseph E. Stiglitz. 1980. Theory of International Trade, with Victor Norman. Cambridge University Press. [1976] 1990. Optimization in Economic Theory, 2nd ed., Oxford. Description and contents preview. 1991. Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life, with Barry Nalebuff, New York: W.W. Norton. 1993. The Art of Smooth Pasting, Vol. 55 of series Fundamentals of Pure and Applied Economics, eds. Jacques Lesourne and Hugo Sonnenschein. Reading, UK: Harwood Academic Publishers. 1996a. Investment Under Uncertainty, co-authored by Robert Pindyck. Princeton University Press. 1996b. The Making of Economic Policy: A Transaction Cost Politics Perspective (Munich Lectures in Economics), M.I.T. Press. Description. 2004. Lawlessness and Economics: Alternative Modes of Governance, Gorman Lectures in Economics, University College London, Princeton University Press. Description and ch. 1, Economics With and Without the Law. 2008a. The Art of Strategy: A Game-Theorist's Guide to Success in Business and Life with Barry Nalebuff, New York: W. W. Norton. 2008b. 
"economic governance," in The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. 2009. Games of Strategy, with Susan Skeath and David McAdams, New York: W. W. Norton, 1999, 5th edition 2020. 2014. Microeconomics: A Very Short Introduction, Oxford University Press. References External links Short biography Curriculum vitae Recent writings 1944 births Living people Indian emigrants to the United States Academics of the University of Warwick 21st-century American economists Fellows of the Econometric Society American male writers of Indian descent MIT School of Humanities, Arts, and Social Sciences alumni Members of the United States National Academy of Sciences Princeton University faculty Fellows of Balliol College, Oxford Fellows of Nuffield College, Oxford St. Xavier's College, Mumbai alumni Presidents of the Econometric Society 20th-century American non-fiction writers 21st-century American non-fiction writers 20th-century Indian economists Trade economists 21st-century Indian economists American financial economists Real options American academics of Indian descent Recipients of the Padma Vibhushan in literature & education Presidents of the American Economic Association Writers from Mumbai 20th-century American male writers Distinguished fellows of the American Economic Association American male non-fiction writers Scientists from Mumbai Corresponding fellows of the British Academy 21st-century American male writers
Avinash Dixit
Engineering
1,027
22,754,707
https://en.wikipedia.org/wiki/Mitotic%20catastrophe
Mitotic catastrophe has been defined either as a cellular mechanism that prevents potentially cancerous cells from proliferating or as a mode of cellular death that occurs following improper cell cycle progression or entrance. Mitotic catastrophe can be induced by prolonged activation of the spindle assembly checkpoint, errors in mitosis, or DNA damage, and operates to prevent genomic instability. It is being researched as a potential therapeutic target in cancers, and numerous approved therapeutics induce mitotic catastrophe. Term usage Multiple attempts to specifically define mitotic catastrophe have been made since the term was first used to describe a temperature-dependent lethality in the yeast Schizosaccharomyces pombe that demonstrated abnormal segregation of chromosomes. The term has been used to define a mechanism of cellular death that occurs while a cell is in mitosis, or a method of oncosuppression that prevents potentially tumorigenic cells from dividing. This oncosuppression is accomplished by initiating a form of cell death such as apoptosis or necrosis or by inducing cellular senescence. Mechanism to prevent cancer development One usage of the term mitotic catastrophe is to describe an oncosuppressive mechanism (i.e. a mechanism to prevent the proliferation of cancerous cells and the development of tumors) that occurs when cells detect that a defective mitosis has occurred. This definition of the mechanism has been set out by the International Nomenclature Committee on Cell Death. Under this definition, cells that undergo mitotic catastrophe either senesce and stop dividing or undergo a regulated form of cell death during mitosis or another form of cell death in the next G1 phase of the cell cycle. The function of this mechanism is to prevent cells from accruing genomic instability, which can lead to tumorigenesis. When the cell undergoes cell death during mitosis, this is known as mitotic death. This is characterized by high levels of cyclin B1 still present in the cell at the time of cell death, indicating the cell never finished mitosis. Mitotic catastrophe can also lead to the cell being fated for cell death by apoptosis or necrosis following interphase of the cell cycle. However, the timing of cell death can vary from hours after mitosis completes to years later, as has been observed in human tissues treated with radiotherapy. The least common outcome of mitotic catastrophe is senescence, in which the cell stops dividing and enters a permanent cell cycle arrest that prevents it from proliferating any further. Mechanism of cellular death Another usage of the term mitotic catastrophe is to describe a mode of cell death that occurs during mitosis. This cell death can occur due to an accumulation of DNA damage in the presence of improperly functioning DNA structure checkpoints or an improperly functioning spindle assembly checkpoint. Cells that undergo mitotic catastrophe death can lack activation of traditional death pathways such as apoptosis. While more recent definitions of mitotic catastrophe do not use it to describe a bona fide cell death mechanism, some publications describe it as a mechanism of cell death. Causes Prolonged spindle assembly checkpoint activation Cells have a mechanism to prevent improper segregation of chromosomes known as the spindle assembly checkpoint or mitotic checkpoint. 
The spindle assembly checkpoint verifies that mitotic spindles have properly attached to the kinetochores of each pair of chromosomes before the chromosomes segregate during cell division. If the mitotic spindles are not properly attached to the kinetochores, then the spindle assembly checkpoint will prevent the transition from metaphase to anaphase. This mechanism is important to ensure that the DNA within the cell is divided equally between the two daughter cells. When the spindle assembly checkpoint is activated, it arrests the cell in mitosis until all chromosomes are properly attached and aligned. If the checkpoint is activated for a prolonged period, it can lead to mitotic catastrophe. Prolonged activation of the spindle assembly checkpoint inhibits the anaphase promoting complex. Normally, activation of the anaphase promoting complex leads to the separation of sister chromatids and the cell exiting mitosis. The mitotic checkpoint complex acts as a negative regulator of the anaphase promoting complex. Unattached kinetochores promote the formation of the mitotic checkpoint complex, which is composed of four different proteins known in humans as Mad2, Cdc20, BubR1, and Bub3. When the mitotic checkpoint complex is formed, it binds to the anaphase promoting complex and blocks its ability to promote cell cycle progression. Errors in mitosis Some cells can have an erroneous mitosis yet survive and undergo another cell division, which makes the cell more likely to undergo mitotic catastrophe. For instance, cells can undergo a process called mitotic slippage, where cells exit mitosis too early, before the process of mitosis is finished. In this case, the cell finishes mitosis in the presence of spindle assembly checkpoint signaling that would normally prevent it from exiting mitosis. This phenomenon is caused by improper degradation of cyclin B1 and can result in chromosome missegregation events. Cyclin B1 is a major regulator of the cell cycle and guides the cell's progression from G2 to M phase. Cyclin B1 works with its binding partner CDK1 to control this progression, and the complex is known as the mitotic promoting factor. While the mitotic promoting factor guides the cell's entry into mitosis, its destruction also guides the cell's exit from mitosis. Normally, cyclin B1 degradation is initiated by the anaphase promoting complex after all of the kinetochores have been properly attached by mitotic spindle fibers. However, when cyclin B1 is degraded too quickly, the cell can exit mitosis prematurely, resulting in potential mitotic errors, including missegregation of chromosomes. Tetraploid or otherwise aneuploid cells are at higher risk of mitotic catastrophe. Tetraploid cells are cells that have duplicated their genetic material but have not undergone cytokinesis to split into two daughter cells and thus remain as one cell. Aneuploid cells are cells that have an incorrect number of chromosomes, including whole additions of chromosomes or complete losses of chromosomes. Cells with an abnormal number of chromosomes are more likely to have chromosome segregation errors that result in mitotic catastrophe. Cells that become aneuploid are often prevented from further cell growth and division by the activation of tumor suppressor pathways such as p53, which drives the cell into a non-proliferating state known as cellular senescence. 
Given that aneuploid cells can often become tumorigenic, this mechanism prevents the propagation of these cells and thus prevents the development of cancers in the organism. Cells that undergo multipolar divisions, or in other words split into more than two daughter cells, are at a higher risk of mitotic catastrophe as well. While many of the progeny of multipolar divisions do not survive due to highly imbalanced chromosome numbers, most of the cells that survive and undergo a subsequent mitosis are likely to experience mitotic catastrophe. These multipolar divisions occur due to the presence of more than two centrosomes. Centrosomes are cellular organelles that organize the mitotic spindle assembly in the cell during mitosis and thus guide the segregation of chromosomes. Normally, cells have two centrosomes that guide sister chromatids to opposite poles of the dividing cell. However, when more than two centrosomes are present in mitosis, they can pull chromosomes in incorrect directions, resulting in daughter cells that are inviable. Many cancers have excessive numbers of centrosomes, but to prevent inviable daughter cells, the cancer cells have developed mechanisms to cluster their centrosomes. When the centrosomes are clustered at two poles of the dividing cell, the chromosomes are segregated properly and two daughter cells are formed. Thus, cancers that are able to adapt to a higher number of centrosomes are able to prevent mitotic catastrophe and propagate in the presence of their extra centrosomes. DNA damage High levels of DNA damage that are not repaired before the cell enters mitosis can result in mitotic catastrophe. Cells that have a compromised G2 checkpoint do not have the ability to prevent progression through the cell cycle even when there is DNA damage present in the cell's genome. The G2 checkpoint normally functions to stop cells that have damaged DNA from progressing to mitosis. The G2 checkpoint can be compromised if tumor suppressor p53 is no longer present in the cell. The response to DNA damage present during mitosis is different from the response to DNA damage detected during the rest of the cell cycle. Cells can detect DNA defects during the rest of the cell cycle and either repair them if possible or undergo apoptosis or senescence. Because in these cases the cell does not progress into mitosis, this is not considered a mitotic catastrophe. Mitotic catastrophe in cancer Prevention of genomic instability Genomic instability is one of the hallmarks of cancer cells; it promotes genetic changes (both large chromosomal changes and individual nucleotide changes) in cancer cells, which can drive tumor progression through genetic variation in the tumor cell. Cancers with a higher level of genomic instability have been shown to have worse patient outcomes than cancers with lower levels of genomic instability. Cells have evolved mechanisms that resist increased genomic instability. Mitotic catastrophe is one way in which cells prevent the propagation of genomically unstable cells. If mitotic catastrophe fails for cells whose genome has become unstable, they can propagate uncontrollably and potentially become tumorigenic. The level of genomic instability differs across cancer types, with epithelial cancers being more genomically unstable than cancers of hematological or mesenchymal origin. 
Mesothelioma, small-cell lung cancer, breast, ovarian, non-small cell lung cancer, and liver cancer exhibit high levels of genomic instability, while acute lymphoblastic leukemia, myelodysplasia, and myeloproliferative disorder have lower levels of instability. Anticancer therapeutics Promotion of mitotic catastrophe in cancer cells is an area of cancer therapeutic research that has garnered interest and is seen as a potential route to overcoming resistance developed to current chemotherapies. Cancer cells have been found to be more sensitive to mitotic catastrophe induction than non-cancerous cells in the body. Tumor cells often have inactivated the machinery that is required for apoptosis, such as the p53 protein. This is usually achieved by mutations in the p53 protein or by loss of the chromosome region that contains the genetic code for it. p53 acts to prevent the propagation of tumor cells and is considered a major tumor suppressor protein. p53 works either by halting progression through the cell cycle when uncontrolled cell division is sensed or by promoting cell death through apoptosis in the presence of irreparable DNA damage. Mitotic catastrophe can occur in a p53-independent fashion and thus presents a therapeutic avenue of interest. Furthermore, doses of DNA-damaging drugs below lethal levels have been shown to induce mitotic catastrophe, which would allow a drug to be administered while causing the patient fewer side effects. Cancer therapies can induce mitotic catastrophe by either damaging the cell's DNA or inhibiting spindle assembly. Drugs known as spindle poisons affect the polymerization or depolymerization of microtubule spindles and thus interfere with the correct formation of the mitotic spindle. When this happens, the spindle assembly checkpoint becomes activated and the transition from metaphase to anaphase is inhibited. See also Mitosis Cancer Apoptosis Senescence References Cell cycle Mitosis Cancer
Mitotic catastrophe
Biology
2,403
32,953,202
https://en.wikipedia.org/wiki/Drought%20refuge
A drought refuge is a site that provides permanent fresh water or moist conditions for plants and animals, acting as a refuge habitat when surrounding areas are affected by drought and allowing ecosystems and core species populations to survive until the drought breaks. Drought refuges are important for conserving ecosystems in places where the effects of climatic variability are exacerbated by human activities. Description Reliable drought refuges are characterised by the ability to retain sufficient water throughout the drought, by water quality good enough to maintain the life of the ecosystem, by freedom from physical disturbance, and by access to surrounding habitat, so that refugees can recolonise the main habitat when the drought ends. For fish and aquatic invertebrates a drought refuge may be an isolated permanent pool in a stream that ceases to flow and mostly dries up during a period of drought. Permanent wetlands may serve as non-breeding drought refuges for a range of waterbirds that nest at ephemeral lakes when these are inundated. "Drought refuge is a secure place persisting through a disturbance, with the critical criterion being that after the disturbance the refuge provides colonists to allow populations to recover." For some species the refuge is their only water source and is necessary for survival. For birds and invertebrate taxa, the drought refuge is not only necessary for survival but contributes to their reproductive success. Some organisms are able to adapt to the environment when there is a drought, but acquiring traits that are beneficial for survival in a prolonged drought is extremely difficult. Terms refuge and drought The term refugium (plural: refugia) was originally used by evolutionary biologists for refuges that protected entire species from disturbance events of large temporal and spatial scales, such as glaciation or the long-term effects of climate change. A disturbance involves a temporary removal of biomass resulting in a change in the physical environment. Ecologists working at smaller scales now use this term synonymously with the simpler term refuge, to define places that protect populations of plants or animals from smaller-scale disturbances, such as fire, flood, storm, or human impacts. Refugia are the habitats or environmental factors that give spatial and temporal resistance and resilience to biotic communities impacted by disturbance. Here negative effects of disturbance are lower than in surrounding areas or times. Refugia buffer species over the long term, whereas a refuge buffers species over the short term. There are other uses of the term refuge, such as for a wildlife reserve or a place free from predators (predation refuge). A refuge is a place or situation that provides safety or shelter. Here, species are minimally affected by changing climate conditions. Lack of precipitation causes drying of aquatic ecosystems and leads to a natural disturbance called a drought. In order for organisms to survive a drought, the disturbance must be minimal or there must be a drought refuge available. Effects of drought The severity of a disturbance is measured by its intensity, duration, and recovery time. Intensity and duration influence the strength of a disturbance and the likelihood of the survival of organisms within an area. Recovery time influences the degree to which abundance and composition recover in a disturbed habitat before the next stimulus forces species to seek shelter. 
Disturbances, such as drought, influence spatial and temporal patterns of refuge use, as well as the role of refuges in community dynamics. Variability in patterns of disturbance affects refuge use patterns and community structure. Decreased time between disturbances increases refuge usage until a certain frequency is reached, after which usage declines as a result of the weakening resilience and resistance of a species. Refuge degradation increases mortality for sensitive species during larger disturbances. Droughts decrease surface area and volume while increasing physical and chemical water quality extremes, such as temperature, oxygen concentration and water levels. This shapes the interactions that structure the communities of different species and affects mortality, birth and migration rates. During a drought, species must seek refuge or have adaptations that provide refuge. Hydrological extremes, such as flood and drought, modify habitats. Droughts lead not only to the loss of habitats, but also to isolated habitat patches created by the separation of populations, which together form a metapopulation. Increased density of organisms is another result of droughts. Increased organism density leads to resource limitations, movement limitations, increased competition, and increased predation pressure. Droughts also cause changes in food resources and water quality. Function and importance of drought refuges Drought refuges protect plant and animal populations from extreme weather events as climate trends evolve. They serve as places that support populations of plants and animals not able to live elsewhere in a landscape during disturbance events, whether those events are seasonal and relatively predictable, or otherwise. A habitat's ability to act as a refuge depends on the disturbance. The ability of a refuge to retain water becomes essential for the maintenance of most populations. Refuges of sufficient size and duration maintain populations, sustain biodiversity and may harbour relict populations. They are of particular importance during increasing aridification, when few other suitable habitats remain. Biota depend heavily on seasonal refuges. Refuges increase the survival rate and aid the recovery of populations experiencing an environmental disturbance. Refugial effectiveness is the ability of a refuge to fulfill habitat-related criteria. Knowledge of refuges in Mediterranean and semi-arid streams and rivers has increased during the last decade. The disturbance process and the recolonization process are two ecological processes associated with how refuges function. The disturbance process makes locations into refuges, and the recolonization process restocks the wider landscape once a disturbance has passed. Recolonization is driven by resistance (local survival in drought refuges) or by resilience (high local mortality, with individuals moving back into streams when conditions improve). The processes of disturbance, refuge formation, refuge function and recolonization occur at varying temporal and spatial scales. The spatial distribution of refuges influences usage and recolonization. Spatial factors alone make a small contribution; refuges vary with morphological and physicochemical factors as well, so the contribution is shared. Refuges can be small or large and can be used for short or long periods of time. Refugia are relative, depending on species adaptations, spatial and temporal scale, and disturbance regime. 
Many relative influences are unclear, as each situation is different. Drought refuges are important for sustaining biodiversity over larger spatial scales. Perennial waters are the most important drought refuge. As refuges, they require the least investment by stream invertebrates and have the greatest biodiversity. Perennial surface water is crucial to the survival of macroinvertebrates and fish. Differences in longitudinal pattern affect the location and function of perennial water refuges. Refuge occupancy is predictable based on species' traits, but not all suitable refuges within a system are occupied. Refuge community structure is mostly constant because the response to a disturbance is consistent within a species; the same species takes advantage of the same type of refuge. Refugia play a central role in the structuring of communities. Most non-perennial stream taxa appear to have more than one potential refuge from drought. The primary determinant of which drought refuges a species uses in a landscape is its intrinsic traits. There are specific regions (refuges) to which individuals move during a drought, and within these regions there are specific characteristics of sites used as refuges by different species. A species may use more than one type of refuge during its life cycle. Variation in refuge use is caused by topography and by individual species' susceptibility and response to disturbance. Patterns of refuge use are influenced by disturbance type, species type, patch size, potential occupants and location. These patterns are poorly understood. Drought refuges form habitat mosaics, which are prone to increased fragmentation by flow regulation. Some mosaics are more vulnerable to water abstraction than others. The drying of pools results in a patchy mosaic of pools in a dry channel which vary in suitability for different species and life stages. Different species favor different-sized pools in different locations with different physicochemical properties. Refuges with a low abundance of species require less effort to be adequate than diverse refuges. The size of a pool influences the set of species, the total number of organisms, and assemblage structure because of physicochemical factors. Species richness and abundance are related to pool morphology. Shade, location, and soil composition are all contributing factors. Heavily shaded pools have colder water, whereas lightly shaded pools have increased levels of primary productivity. Large refuges have increased abundance and enrichment and are likely to persist through long disturbances. While used infrequently and often containing only a few individuals during normal years, range edges may episodically serve as refuges from extreme weather events or conditions such as drought. During these extreme conditions, survival probability, reproductive success, or both are higher at the edge than in the core of a species' range. Refuge use is influenced by habitat characteristics, such as hydraulic exchange and sediment type, active migration or passive habitat use, and species morphology, behaviour and physiology. Refuge use declines when reduced time between disturbances lowers the effectiveness of mortality reduction and leaves less time for community recovery. Movement into and out of refuges creates predictable fluxes of biomass and nutrients. This is important in food webs and the ecosystem. A dense concentration of nutrients in one location during a disturbance means increased competition and predation. 
Rates of mortality, birth, migration, and interactions among components of the biota that have retreated to refugia are affected by the nature of the refuge. The spatial extent, the rate of drying, and the ambient physical and chemical conditions are all contributors. Drought refuges for algae are widespread because most med-river taxa can survive desiccation and show little specificity for refuges, provided drying occurs slowly. They include dry biofilm on stones and wood, dry leaf packs and perennial pools. Refuges for macrophytes and zooplankton typically comprise egg and seed banks in med-rivers and are resilient to prolonged drying. Importance of Refuge Connectivity Drought leads to a shift in refuge spacing and connections at different spatial and temporal scales. Droughts disrupt hydrological connectivity and impact resident species through loss of water and flow from drying, habitat reduction, and reconfiguration. Delivery of water is restricted to areas within a stream network. Habitat patches engineered by members of a community serve as refuges that are crucial for other members. Trails and ponds dug by certain species, like alligators, allow for dispersal into refuges. Hydraulic exchange provides movement of water, nutrients, and organisms into a refuge. Populations of sessile organisms, like flora and fauna in perennial water refuges, cannot persist indefinitely without hydrological connections among refuges. Mobile organisms, like fish, will move into a refuge if there are no barriers, such as physical obstructions (e.g. dams, isolated pools), biotic factors (e.g. predation, competition), or physicochemical factors (e.g. low dissolved oxygen levels). During smaller-scale, shorter-term disturbances, populations within refuges are not necessarily cut off from those in other refuges or those in other undisturbed landscapes, and so genetic exchange can still occur, or will occur during parts of the life cycle not constrained by the disturbance. Under those circumstances, the survival of a species is unlikely to depend upon a single refuge. Recovery processes need to restore connectivity, so that migration can occur from refuges to new patches of habitat. Perennially flowing streams may act as drought refuges for neighboring streams, even if they are not hydrologically connected to them. Refuges must be connected hydrologically at the appropriate times. For insects, refuges on one stream may support recolonization of adjacent streams that are not hydrologically connected, which may also necessitate conservation planning across catchment boundaries. The drought from 1996 until 2009 had a great impact on the Murray-Darling Basin in southeastern Australia (Murphy and Timbal 2007; Ummenhofer et al. 2009). When this drought occurred, it dried the wetlands and water storages (the drought refuges). For many species of birds and fish, such a refuge is the only freshwater available. The body of water provides food and shelter; therefore, it must be conserved. Drought refuges are likely to sustain biodiversity over larger spatial scales such as groups of streams or whole drainage networks (Chester, E. T. and Robson, B. J. (2011), Drought refuges, spatial scale and recolonization by invertebrates in non-perennial streams. Freshwater Biology, 56: 2094–2104). Different Types of Drought Refuge A species may use more than one type of refuge during its life cycle. Refugia can be physical characteristics of organisms, like short-term behavior, or a long-term evolutionary adaptation. 
Animals and plants have mechanisms to increase resistance (survival) and resilience (recovery) to physical disturbance. They develop adaptations in morphology, physiology, and behavior. Physical adaptations include an ability to aestivate, mouth orientation that allows for breathing oxygen at the water surface, body armor, and venomous spines. Mobile species' coping methods include refuge-seeking behavior; they seek habitat patches that relieve physiological stress and reduce mortality. Reliance on dispersal improves resilience to climate change in the short term, but over longer timescales it will not protect macroinvertebrate biodiversity from landscape-scale refuge degradation. The hyporheic zone, a region along streambeds where groundwater mixes with surface water, is an important refuge for immobile organisms, like algae. The hyporheic zone protects from freezing, high temperature, and pollution. It reduces displacement with its relatively stable, slow flow. In a hyporheic zone, free water is retained and invertebrates remain submerged. The hyporheic zone has been shown to contribute colonists when surface flows recommence. Perennial waters, whether pools, seeps or flowing sections of streams, have repeatedly been shown to be the major refuges. Perennially flowing stream sections and perennial pools act as drought refuges for a wider area of the landscape than the stream on which they are located. Refuges of a size sufficient to maintain whole populations, such as perennially flowing reaches, are likely to be most important and may, during aridification, become refugia containing relictual populations. Perennial pools and perennially flowing water generally harbor the greatest diversity of macroinvertebrate taxa because they require the least investment by the invertebrates. Threats and conservation Because drought refuges may provide the only sites allowing populations to persist during droughts, they are highly vulnerable to factors that affect water quality, such as water pollution and sedimentation from anthropogenic runoff. Consequently, in areas subject to intermittent drought, habitat conservation requires the identification and protection of drought refuges. Drought is specific to certain regions and climatic zones. Climate change in many med-regions may prolong dry periods and threaten refuges. The capacity of perennial refuges to support biodiversity may be severely compromised by increasing water temperatures, which reduce the quality of refuges by exceeding the thermal tolerance of invertebrates or by causing anoxia in stream pools, and by the existing environmental degradation of many perennial waterways. Droughts can reduce agricultural production and cause the loss of crops and lives. Hence, preserving refuges is of extreme importance in more ways than one. In order to conserve drought refuges for these species, action needs to be taken to address the short-term and long-term impacts that droughts have on the species that depend on them for survival. In California, efforts to conserve drought refuges include reserving water when possible. Water conservation is done in order to support migrating bird populations (National Wildlife Refuge; March 1, 2016). The National Wildlife Refuge system also takes part in mowing, disking, spraying and controlled burns. These measures are taken in an effort to stop non-native vegetation from growing; this type of vegetation typically outgrows native vegetation during drought. 
This allows native vegetation to survive during drought, letting dependent species forage on the available vegetation. The Clean Water Act was passed to protect American waters from pollution. Although the act does not protect all waters, it protects many bodies of water. When drought refuges are polluted, they pose an even greater danger to the species dwelling in them. The Clean Water Act is just one step in cleaning waters and saving drought refuges (The Clean Water Rule; National Wildlife Organization). Continuing threats to drought refuge conservation include sedimentation, waterhole pumping, and isolation from other bodies of water. These, of course, lead to situations of extreme decrease in water availability. As water availability decreases, the chances that dependent species die out increase. Groundwater aquifers support drought refuges for water-dependent ecosystems. Pollution and over-extraction of groundwater are both problematic because they lower an aquifer's ability to support groundwater-supplied drought refuges. Over-extraction lowers the water table and degrades water-dependent ecosystems. Over-extraction often occurs in areas with surface water scarcity and frequent drought, where groundwater refuges and refugia are most important. Man-made disturbances, such as water withdrawal and dams, can mimic the effects of drought. Man-made channel modifications threaten the hyporheic zone as a refuge. Groundwater salinization compromises buffering properties. Vegetation clearance, along with irrigation, causes serious issues: irrigation raises the water table and mobilizes salts, and vegetation clearance allows the salt to come into contact with aquatic habitats and vegetation. This stresses species not adapted to high salinity. High salinity reduces water uptake in plants by causing stomatal closure, reducing photosynthesis. Forests decline in areas of high salinity and shallow groundwater depths because these conditions make them more susceptible to droughts. There needs to be an increased focus on conservation efforts. Knowledge of refuge functions is critical for understanding their role in the conservation of biodiversity, especially for climatically sensitive species and in regions where climate change is increasing the frequency and duration of dry periods. To best conserve species facing extreme weather events, it is necessary to identify conditions 'pushing' species, and, more importantly, to identify the refuge sites to which individuals move. Perennially flowing streams and perennial pools may be crucially important for sustaining biodiversity within a mosaic of stream habitats with drier flow regimes. The primary emphasis of drought refuge protection should be on protecting perennial surface waters and range edges within the landscape. Conservation approaches for river systems will need to focus on identifying and conserving refuges together with maintaining refuge connectivity, reducing the impacts of other disturbances on these systems, and sustaining predictable seasonal flow patterns. Releases from hydroelectricity reservoirs could be used to lower river water temperatures or replenish reaches of formerly perennial flow, thereby creating refuges for river biota. 
Also, focusing on maintaining groundwater quality is more beneficial than focusing on surface water resources. Re-vegetation can reduce water pollution in ground and surface water, benefitting biodiversity. There is increasing evidence that habitats created by humans, such as canals, ditches, and farm ponds, can support freshwater biodiversity and, therefore, have the potential to provide refuges. They can prevent larger organisms, like fish, from becoming stranded as water levels decrease. While the preservation of refuges is crucial to provide recolonization sources, it is not sufficient if colonists cannot get from the refuge to habitat patches suitable for colonization. Conversely, where management of pest species is necessary, controlling them in their drought refuges during droughts may be more cost-effective than broad-scale control at other times. One example of this is controlling rabbits in arid and semi-arid regions of Australia. See also Refuge (ecology) References Freshwater ecology Wetlands Ecology terminology
Drought refuge
Biology,Environmental_science
4,046
19,061,336
https://en.wikipedia.org/wiki/Bresle%20method
The Bresle method is used to determine the concentration of soluble salts on metal surfaces prior to coating application, such as painting. These salts can cause serious adhesion problems over time. Importance Salt is pervasive in coastal areas. It can be tasted on the lips after walking on a beach. Salt concentration by weight is about 3.5% in sea water. With spray from waves and by other means, salt gets into the air as an aerosol, and eventually as a dust-like particle. This salt dust can be found everywhere near the coast. Salt is hygroscopic, and this property makes it harmful to coatings. Salt contamination beneath a coating, such as paint on steel, can cause adhesion and corrosion problems due to the hygroscopic nature of salt. Its tendency to attract water through a permeable coating creates a build-up of water molecules between substrate and coating. These molecules, together with salt and other oxidation agents trapped during coating or migrating through the coating, create an electrolytic cell, causing corrosion. Blast cleaning is frequently used to clean surfaces before coating; however, with salt contamination, blast cleaning may increase the problem by forcing salt into the base material. Washing a surface with deionized water before coating is a common solution. IMO PSPC (performance standard for protective coatings) regulations set a maximum allowable concentration of soluble salts on a surface to be coated, measured as sodium chloride, of 50 mg•m−2. The maximum amount of salt allowed on a surface prior to coating application is typically determined by the coating supplier and the user, such as a shipyard. Standard values have not been established. Origin of the Bresle Method The Bresle method was launched in 1995 in the ISO 8502-6 and ISO 8502-9 standards. The test was developed to measure soluble salt concentration on steel surfaces prior to blast cleaning and coating. Not only ISO but also the US Navy, IMO, NAVSEA, and ASTM have adopted this method as a standard. The method remains the primary and most flexible test method for soluble salts on metal surfaces. Principle The Bresle method uses the difference in conductivity of salts in water, each salt having a characteristic conductivity-versus-concentration relationship. The correlation between concentration and conductivity can be found in the "Handbook of Chemistry and Physics". This relationship is useful only if the dissolved salt is known. Sodium chloride, the main salt in sea water, causes a large increase in conductivity with increased concentration. A special patch is applied to the surface to be tested, and a specified volume of deionized water is injected under the patch. Any soluble salts present on the surface will dissolve in the water. The fluid is extracted and its conductivity measured. The conductivity of the collected salt solution depends on the volume of water used and its initial conductivity, and the amount of salt in solution depends on the area of the patch. The calculation of salt per unit area is based on the increase in conductivity; in the IMO PSPC method the salt is calculated as sodium chloride, while in the ISO 8502-9 method it is calculated as a specific mixture of salts but still expressed as sodium chloride. Calculation Factors Factors are applied to the measured conductivity, depending on what is known or assumed about the salt contamination and various conditions, in order to yield meaningful measurements of contamination. 
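A minimal Python sketch may make this arithmetic concrete. The conversion factor (roughly 0.5 mg/l of NaCl per µS/cm for dilute solutions) and the 12.5 cm² patch area are illustrative assumptions for the example, not values prescribed by this text; real work uses the factors tabulated in ISO 8502-9 and conductivity data for the salts actually present.

```python
# Illustrative sketch of the Bresle-method calculation described above.
# K_NACL and the default patch geometry are assumed example values.

# Approximate conductivity-to-concentration factor for dilute NaCl:
# ~0.5 (mg/l) per (uS/cm). Assumed for illustration only.
K_NACL_MG_PER_L_PER_US_CM = 0.5

def salt_surface_density(delta_conductivity_us_cm: float,
                         volume_ml: float = 15.0,
                         patch_area_cm2: float = 12.5) -> float:
    """Return soluble salt surface density in mg/m^2, expressed as NaCl.

    delta_conductivity_us_cm: measured conductivity minus the blank
        (deionized water) reading, in uS/cm.
    volume_ml: volume of deionized water injected under the patch.
    patch_area_cm2: surface area exposed inside the patch.
    """
    concentration_mg_per_l = K_NACL_MG_PER_L_PER_US_CM * delta_conductivity_us_cm
    salt_mass_mg = concentration_mg_per_l * (volume_ml / 1000.0)  # ml -> l
    area_m2 = patch_area_cm2 / 10_000.0                           # cm^2 -> m^2
    return salt_mass_mg / area_m2

# Example: a rise of 6 uS/cm over the blank gives 36 mg/m^2, below the
# 50 mg/m^2 IMO PSPC limit cited above.
if __name__ == "__main__":
    print(f"{salt_surface_density(6.0):.1f} mg/m^2")
```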
The factors depend on several variables: salt type; volume of solution (the factors here assume a volume of 15 ml of solution); temperature; equipment-specific scaling. A common source of error is not knowing the composition of the contamination being measured. Measurement Tools There are multiple suppliers of Bresle method test kits. The Principle The solubility in water depends on the type of salt. Sodium chloride can be dissolved in cold water to a concentration of 357 g∙l−1. Not only solubility but also conductivity differs between salts. When performing a Bresle method test, not only sodium chloride is dissolved but also all other salts present on the surface. Because it is impossible to predict which salts are present at the surface, an assumption is made in the Bresle method. The term "measured as sodium chloride" indicates that this mixture of salts is interpreted as sodium chloride. Reporting how conductivity is factored is essential when creating a report. In Practice All parties involved should be clear about the impact of climate and of variation in salt composition on the results. An informed agreement should be reached between all parties as to what is an acceptable level of reading. Depending on the size and nature of the surface to be coated, several readings may need to be taken. Test Patches A test patch should be as clean as possible. Contamination of a patch can influence the results significantly. The ISO 8502-6 standard prescribes in annex A that certified patches shall be used. This annex describes a stress test to ensure patch adhesion and washability. If patches are not supplied with a certificate confirming that they pass this test, the results obtained with them are useless. Climate A soluble salts report should include climate conditions and substrate temperature. ISO 8502-6 requires that the test be done at 23 °C and a relative humidity of 50%, with deviations reported and agreed upon by both inspector and customer. During arbitration, the absence of these values in a report may render the results invalid. References Handbook of Chemistry and Physics ISO 8502-6, "Extraction for soluble contaminants for analysis – The Bresle Method" ISO 8502-9, "Field method for soluble salts by conductometric measurement" Coatings
Bresle method
Chemistry
1,132
35,360,165
https://en.wikipedia.org/wiki/Kummer%20configuration
In geometry, the Kummer configuration, named for Ernst Kummer, is a geometric configuration of 16 points and 16 planes such that each point lies on 6 of the planes and each plane contains 6 of the points. Further, every pair of points is incident with exactly two planes, and every two planes intersect in exactly two points. The configuration is therefore a biplane, specifically a 2-(16,6,2) design. The 16 nodes and 16 tropes of a Kummer surface form a Kummer configuration. There are three different non-isomorphic ways to select 16 different 6-sets from 16 elements satisfying the above properties, that is, forming a biplane. The most symmetric of the three is the Kummer configuration, also called "the best biplane" on 16 points. Construction Following the method of Jordan (1869), but see also Assmus and Sardi (1981), arrange the 16 points (say the numbers 1 to 16) in a 4×4 grid. For each element in turn, take the 3 other points in the same row and the 3 other points in the same column, and combine them into a 6-set. This creates one 6-element block for each point. Consider two points in the same row or column. There are two other points in that row or column which show up in the blocks for both starting points; therefore those blocks intersect in two points. Now consider two points not in the same row or column. Their corresponding blocks intersect in the two points which form a rectangle with the two starting points. Thus all blocks intersect in two points. By examining the blocks corresponding to those intersection points, one sees that any two starting points are present in exactly two blocks. Automorphisms There are exactly 11520 permutations of the 16 points that give the same blocks back. Additionally, exchanging the block labels with the point labels yields another automorphism of size 2, resulting in 23040 automorphisms. See also Klein configuration References Configurations (geometry) Algebraic geometry
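The construction and the two intersection claims above can be checked by brute force. The following sketch (the helper names are illustrative, not standard) builds one block per point of a 4×4 grid and verifies the 2-(16,6,2) biplane properties:

```python
# Build the 16 Kummer blocks from a 4x4 grid and verify the biplane axioms.
from itertools import combinations

points = range(16)  # point i sits at row i // 4, column i % 4 of a 4x4 grid

def block(p: int) -> frozenset:
    """The 6 points sharing a row or column with p, excluding p itself."""
    r, c = divmod(p, 4)
    same_row = {4 * r + j for j in range(4)} - {p}
    same_col = {4 * i + c for i in range(4)} - {p}
    return frozenset(same_row | same_col)

blocks = [block(p) for p in points]

# Every pair of points lies in exactly two blocks...
assert all(sum(p in b and q in b for b in blocks) == 2
           for p, q in combinations(points, 2))
# ...and every two blocks intersect in exactly two points.
assert all(len(b1 & b2) == 2 for b1, b2 in combinations(blocks, 2))
print("2-(16,6,2) biplane properties verified")
```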
Kummer configuration
Mathematics
407
18,436,210
https://en.wikipedia.org/wiki/Massera%27s%20lemma
In stability theory and nonlinear control, Massera's lemma, named after José Luis Massera, deals with the construction of a Lyapunov function to prove the stability of a dynamical system. The lemma appears as the first lemma in section 12 of Massera's 1949 paper, and in more general form as lemma 2 of his 1956 paper. In 2004, Massera's original lemma for single variable functions was extended to the multivariable case, and the resulting lemma was used to prove the stability of switched dynamical systems, where a common Lyapunov function describes the stability of multiple modes and switching signals. Massera's original lemma Massera's lemma is used in the construction of a converse Lyapunov function of the form $V(x) = \int_0^\infty G(|\varphi(t;x)|)\,dt$ (also known as the integral construction) for an asymptotically stable dynamical system whose stable trajectory starting from $x$ is $\varphi(t;x)$. The lemma states: Let $g : [0,\infty) \to \mathbb{R}$ be a positive, continuous, strictly decreasing function with $g(t) \to 0$ as $t \to \infty$. Let $h : [0,\infty) \to \mathbb{R}$ be a positive, continuous, nondecreasing function. Then there exists a function $G(t) = \int_0^t k(s)\,ds$ such that $G$ and its derivative $k$ are class-K functions defined for all $t \ge 0$, and there exist positive constants $k_1$, $k_2$, such that for any continuous function $u$ satisfying $0 \le u(t) \le g(t)$ for all $t \ge 0$, $\int_0^\infty G(u(t))\,dt \le k_1$ and $\int_0^\infty k(u(t))\,h(t)\,dt \le k_2$. Extension to multivariable functions Massera's lemma for single variable functions was extended to the multivariable case by Vu and Liberzon. Let $g : [0,\infty) \to \mathbb{R}$ be a positive, continuous, strictly decreasing function with $g(t) \to 0$ as $t \to \infty$. Let $h : [0,\infty) \to \mathbb{R}$ be a positive, continuous, nondecreasing function. Then there exists a differentiable function $G$ such that $G$ and its derivative $G'$ are class-K functions on $[0,\infty)$. For every positive integer $\ell$, there exist positive constants $k_1$, $k_2$, such that for any continuous function $u(t_1,\dots,t_\ell)$ satisfying $0 \le u(t_1,\dots,t_\ell) \le g(t_1+\dots+t_\ell)$ for all $t_1,\dots,t_\ell \ge 0$, we have $\int_0^\infty \cdots \int_0^\infty G(u(t_1,\dots,t_\ell))\,dt_1 \cdots dt_\ell \le k_1$ and $\int_0^\infty \cdots \int_0^\infty G'(u(t_1,\dots,t_\ell))\,h(t_1+\dots+t_\ell)\,dt_1 \cdots dt_\ell \le k_2$. References Footnotes Stability theory
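To see how the single-variable lemma feeds the integral construction, consider the following standard converse-Lyapunov sketch; the decay bound on the trajectory is an assumed hypothesis on the system, not something derived in this article:

```latex
% Sketch: why the integral construction is well defined.
% Assume trajectories of the asymptotically stable system satisfy
% |\varphi(t;x)| \le g(t) for all t \ge 0, with g as in the lemma.
% Taking u(t) = |\varphi(t;x)| in the lemma yields
\[
  V(x) \;=\; \int_0^\infty G\bigl(|\varphi(t;x)|\bigr)\,dt \;\le\; k_1 ,
\]
% so V is finite. The second bound, \int_0^\infty k(u(t))\,h(t)\,dt \le k_2,
% with h chosen to dominate the relevant growth along trajectories, is what
% controls the terms that appear when V is differentiated along solutions.
```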
Massera's lemma
Mathematics
380
40,418,235
https://en.wikipedia.org/wiki/Micromatabilin
Micromatabilin, the green pigment of the spider species Micrommata virescens, is characterized as a mixture of biliverdin conjugates. The two isolated fractions have identical absorption bands (free base: 620–630 nm, hydrochloride: 690 nm, zinc complex: 685–690 nm). Chromic acid degradation yields imides I, II, IIIa, and IIIb. Differences in the non-hydrolytic degradation and in polarity lead to the conclusion that fraction 1 is a monoconjugate and fraction 2 a diconjugate of biliverdin. References Biological pigments Tetrapyrroles
Micromatabilin
Chemistry,Biology
142
6,396,576
https://en.wikipedia.org/wiki/Scheil%20equation
In metallurgy, the Scheil-Gulliver equation (or Scheil equation) describes solute redistribution during solidification of an alloy. Assumptions Four key assumptions in Scheil analysis enable determination of phases present in a cast part. These assumptions are: No diffusion occurs in solid phases once they are formed (D_S = 0) Infinitely fast diffusion occurs in the liquid at all temperatures by virtue of a high diffusion coefficient, thermal convection, Marangoni convection, etc. (D_L → ∞) Equilibrium exists at the solid-liquid interface, and so compositions from the phase diagram are valid Solidus and liquidus are straight segments The fourth condition (straight solidus/liquidus segments) may be relaxed when numerical techniques are used, such as those used in CALPHAD software packages, though these calculations rely on calculated equilibrium phase diagrams. Calculated diagrams may include odd artifacts (i.e. retrograde solubility) that influence Scheil calculations. Derivation The hatched areas in the figure represent the amount of solute in the solid and liquid. Considering that the total amount of solute in the system must be conserved, the areas are set equal as follows: (C_L − C_S) df_S = (1 − f_S) dC_L. Since the partition coefficient (related to solute distribution) is k = C_S / C_L (determined from the phase diagram) and mass must be conserved, the mass balance may be rewritten as C_L (1 − k) df_S = (1 − f_S) dC_L. Using the boundary condition C_L = C_0 at f_S = 0, the following integration may be performed: ∫_{C_0}^{C_L} dC_L / C_L = (1 − k) ∫_0^{f_S} df_S / (1 − f_S). Integrating results in the Scheil-Gulliver equation for the composition of the liquid during solidification: C_L = C_0 (1 − f_S)^(k−1), or for the composition of the solid: C_S = k C_0 (1 − f_S)^(k−1). Applications of the Scheil equation: Calphad Tools for the Metallurgy of Solidification Several Calphad software packages - in a framework of computational thermodynamics - are now available to simulate solidification in systems with more than two components; these have recently been defined as Calphad Tools for the Metallurgy of Solidification. In recent years, Calphad-based methodologies have reached maturity in several important fields of metallurgy, and especially in solidification-related processes such as semi-solid casting, 3D printing, and welding, to name a few. While there are important studies devoted to the progress of Calphad methodology, there is still room for a systematization of the field, which proceeds from the ability of most Calphad-based software to simulate solidification curves, and includes both fundamental and applied studies on solidification, to be appreciated by a community much wider than today's. The three applied fields mentioned above could be widened by specific successful examples of simple modeling related to the topic of this issue, with the aim of widening the application of simple and effective tools related to Calphad and metallurgy. See also "Calphad Tools for the Metallurgy of Solidification" in an ongoing issue of an open journal: https://www.mdpi.com/journal/metals/special_issues/Calphad_Solidification Given a specific chemical composition, the Scheil curve can be calculated with software for computational thermodynamics - open or commercial - provided a thermodynamic database is available. A point in favour of some commercial packages is that installation is straightforward on a Windows-based system - for instance for use with students or for self-training.
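Because the closed-form result above only needs C0 and k, the Scheil curve is easy to evaluate directly. The following Python sketch tabulates C_L and C_S along the solidification path; the values C0 = 30 wt% Zn and k = 0.855 are illustrative assumptions loosely inspired by the Cu-30%Zn example discussed below, not data taken from a thermodynamic database:

import numpy as np

C0 = 30.0    # nominal solute content, wt% (assumed)
k = 0.855    # partition coefficient, assumed constant (k < 1)

f_S = np.linspace(0.0, 0.99, 100)     # solid fraction; f_S = 1 diverges for k < 1
C_L = C0 * (1.0 - f_S) ** (k - 1.0)   # Scheil-Gulliver: liquid composition
C_S = k * C_L                         # solid composition at the interface

for fs, cl, cs in zip(f_S[::25], C_L[::25], C_S[::25]):
    print(f"f_S = {fs:.2f}:  C_L = {cl:5.2f} wt%,  C_S = {cs:5.2f} wt%")

Note how the liquid becomes progressively enriched in solute for k < 1, which is the microsegregation effect the Scheil model captures.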
One can obtain open, chiefly binary, databases (extension *.tdb) - after registering - from the Computational Phase Diagram Database (CPDDB) of the National Institute for Materials Science of Japan, NIMS: https://cpddb.nims.go.jp/index_en.html. They are available for free, and the collection is rather complete; currently 507 binary systems are available in the thermodynamic database (tdb) format. Some wider and more specific alloy systems, partly open - in a tdb-compatible format - are available, with minor corrections for Pandat use, at Matcalc: https://www.matcalc.at/index.php/databases/open-databases. Numerical expression and numerical derivative of the Scheil curve: application to grain size on solidification and semi-solid processing A key concept for applications is the (numerical) derivative of the solid fraction f_S with temperature, ∂f_S/∂T. A numerical example using a copper-zinc alloy at a composition of 30% Zn by weight is proposed here, using the opposite sign so that the temperature and its derivative can be plotted in the same graph. Kozlov and Schmid-Fetzer have calculated the derivative of the Scheil curve numerically in an open paper (https://iopscience.iop.org/article/10.1088/1757-899X/27/1/012001) and applied it to the growth restriction factor Q in Al-Si-Mg-Cu alloys. Application to grain size on solidification This Calphad-calculated value of the numerical derivative, Q, has some interesting applications in the field of metal solidification. Q reflects the phase diagram of the alloy system, and its reciprocal has been found to have a relationship with grain size d on solidification, which empirically has been found in some cases to be linear: d = a + b/Q, where a and b are constants, as illustrated with some examples from the literature for Mg and Al alloys. Before Calphad use, Q values were calculated from the conventional relationship Q = m·c0·(k − 1), where m is the slope of the liquidus, c0 is the solute concentration, and k is the equilibrium distribution coefficient. More recently, other possible correlations of Q with grain size d have been found, for instance a power-law dependence on Q with a prefactor B that is a constant independent of alloy composition. Application to solidification cracking In recent publications, prof. Sindo Kou has proposed an approach to evaluate susceptibility to solidification cracking, based on the quantity |∂T/∂(f_S)^(1/2)|, which has the dimensions of a temperature and serves as an index of the cracking susceptibility. Scheil-based solidification curves can again be exploited to link this index to the slope of the Scheil solidification curve via the chain rule: ∂T/∂(f_S)^(1/2) = (∂T/∂f_S) · (∂f_S/∂(f_S)^(1/2)) = 2 (f_S)^(1/2) · ∂T/∂f_S.
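A minimal numerical sketch of these derivative-based quantities, assuming a linear liquidus T = T_m + m·C_L so that T(f_S) follows in closed form from the Scheil curve; all numbers (T_m, m, C0, k) are illustrative assumptions, not data for a real alloy system:

import numpy as np

T_m, m = 933.0, -6.0   # assumed melting point (K) and liquidus slope (K/wt%)
C0, k = 3.0, 0.14      # assumed composition (wt%) and partition coefficient

f_S = np.linspace(1e-4, 0.99, 2000)
C_L = C0 * (1.0 - f_S) ** (k - 1.0)   # Scheil liquid composition
T = T_m + m * C_L                     # temperature along the Scheil path

dT_dfS = np.gradient(T, f_S)          # numerical slope of the curve

# Growth restriction factor: the initial slope dT/df_S, which for a
# linear liquidus reduces to the classical Q = m*c0*(k - 1)
print(f"Q classical = {m * C0 * (k - 1.0):.2f} K, "
      f"numerical = {dT_dfS[0]:.2f} K")

# Kou-type index |dT/d(sqrt(f_S))| = 2*sqrt(f_S)*|dT/df_S|; the Scheil
# curve steepens sharply as f_S approaches 1
kou = 2.0 * np.sqrt(f_S) * np.abs(dT_dfS)
print(f"max |dT/d(f_S^1/2)| over the sampled path = {kou.max():.0f} K")

The same numerical slope, inverted, gives the sensitivity ∂f_S/∂T used in the semi-solid processing criterion discussed next.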
Application to semi-solid processing Last but not least, prof. E. J. Zoqui has summarized in his work the approach, proposed by several researchers, to criteria for semi-solid processing, which involve the stability of the solid fraction f_S with temperature: to process semi-solid alloys, the sensitivity of the solid fraction to temperature variations should be minimal; in one direction the slurry could evolve into a solid that is difficult to deform, in the other into a liquid that may be difficult to shape without proper moulding. This criterion can again be expressed by evaluating the slope of the solidification curve: ∂f_S/∂T should be less than a certain threshold, which is commonly accepted in the scientific and technical literature to be 0.03 1/K. Mathematically this may be expressed by the inequality ∂f_S/∂T < 0.03 (1/K) - where K stands for kelvin - which may be assumed as a rough estimate for both of the two main semi-solid casting processes: rheocasting (0.3 < f_S < 0.4) and thixoforming (0.6 < f_S < 0.7). Going back to the numerical and functional approaches above, one may equivalently consider the reciprocal value, i.e. ∂T/∂f_S > 33 (K). References Gulliver, G.H., The Quantitative Effect of Rapid Cooling Upon the Constitution of Binary Alloys, J. Inst. Met., 1913, 9, p 120-157 Scheil, E., Bemerkungen zur Schichtkristallbildung, Z. Metallkd., 1942, 34, p 70-72 Greer L., et al. Modelling of inoculation of metallic melts: application to grain refinement of aluminium by Al–Ti–B, Acta Mater. 48, 11, 2000, 2823-2835. https://doi.org/10.1016/S1359-6454(00)00094-X Porter, D. A., and Easterling, K. E., Phase Transformations in Metals and Alloys (2nd Edition), Chapman & Hall, 1992. https://doi.org/10.1201/9781439883570 Kou, S., Welding Metallurgy, 2nd Edition, Wiley-Interscience, 2003. https://doi.org/10.1002/0471434027 Karl B. Rundman, Principles of Metal Casting Textbook, Michigan Technological University Quested T.E., Dinsdale A.T., Greer A.L. Thermodynamic modelling of growth-restriction effects in aluminium alloys, Acta Materialia 53, 5, 2005, 1323-1334. https://doi.org/10.1016/j.actamat.2004.11.024 H. Fredriksson, U. Akerlind, Materials Processing during Casting, Chapter 7, Wiley, 2006. https://www.wiley.com/en-us/Materials+Processing+During+Casting-p-9780470015148 H. Fredriksson, U. Akerlind, Materials Processing during Casting, Supplementary (open) Material, https://www.wiley.com/legacy/wileychi/fredriksson/features.html Schmid-Fetzer, R. Phase Diagrams: The Beginning of Wisdom. J. Phase Equilib. Diffus. 35, 735–760, 2014. https://doi.org/10.1007/s11669-014-0343-5 Zoqui, E. Alloys for Semisolid Processing, Comprehensive Materials Processing Volume 5, 2014, Pages 163-190. https://doi.org/10.1016/B978-0-08-096532-1.00520-3 Zhang, D., Prasad, A., Bermingham, M.J. et al. Grain Refinement of Alloys in Fusion-Based Additive Manufacturing Processes. Metall Mater Trans A 51, 4341–4359 (2020). https://doi.org/10.1007/s11661-020-05880-4 Todaro C.J., Easton M.A., Qiu D., Brandt M., StJohn D.H., Qian M. Grain refinement of stainless steel in ultrasound-assisted additive manufacturing, Additive Manufacturing 37, 2021. https://doi.org/10.1016/j.addma.2020.101632 Balart, M.J., Patel, J.B., Gao, F. et al. Grain Refinement of Deoxidized Copper. Metall Mater Trans A 47, 4988–5011 (2016). https://doi.org/10.1007/s11661-016-3671-8 Kou, S. Predicting Susceptibility to Solidification Cracking and Liquation Cracking by CALPHAD, Metals 2021, 11(9), 1442. https://doi.org/10.3390/met11091442 Zhang F., Liang S., Zhang C., Chen S., Lv D., Cao W., Kou S. Prediction of Cracking Susceptibility of Commercial Aluminum Alloys during Solidification, Metals 2021, 11(9), 1479. https://doi.org/10.3390/met11091479 External links Metallurgy Eponymous equations of physics Differential equations
Scheil equation
Physics,Chemistry,Materials_science,Mathematics,Engineering
2,549
2,515,464
https://en.wikipedia.org/wiki/Quadricyclane
Quadricyclane is a strained, multi-cyclic hydrocarbon with the formula CH2(CH)6. A volatile, colorless liquid, it is a highly strained molecule (strain energy 78.7 kcal/mol). Isomerization of quadricyclane proceeds slowly at low temperatures. Because of quadricyclane's strained structure and thermal stability, it has been studied extensively. Preparation Quadricyclane is produced by the irradiation of norbornadiene (bicyclo[2.2.1]hepta-2,5-diene) in the presence of Michler's ketone or ethyl Michler's ketone. Other sensitizers, such as acetone, benzophenone, and acetophenone, may be used, but with a lower yield. The yield is higher for freshly distilled norbornadiene, but commercial reagents will suffice. Proposed applications to solar energy The conversion of norbornadiene into quadricyclane is achieved with ~300 nm UV radiation. When converted back to norbornadiene, the ring strain energy is liberated in the form of heat (ΔH = −89 kJ/mol). This reaction has been proposed as a way to store solar energy. However, the absorption edge does not extend past 300 nm, whereas most solar radiation has wavelengths longer than 400 nm. Quadricyclane's relative stability and high energy content have also given rise to its use as a propellant additive or fuel. However, quadricyclane undergoes thermal decomposition at relatively low temperatures (less than 400 °C). This property limits its applications, as propulsion systems may operate at temperatures exceeding 500 °C. Reactions Quadricyclane readily reacts with acetic acid to give a mixture of nortricyclyl acetate and exo-norbornyl acetate. Quadricyclane also reacts with many dienophiles to form 1:1 adducts. Notes Hydrocarbons Cyclopropanes Tetracyclic compounds
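For a rough sense of scale, the isomerization enthalpy quoted above can be converted into a gravimetric energy density. The short Python sketch below uses ΔH ≈ 89 kJ/mol and the molar mass of C7H8 (≈ 92.14 g/mol); the resulting figure is a back-of-the-envelope estimate, not a value from the cited literature:

delta_H = 89e3          # J/mol released on quadricyclane -> norbornadiene
molar_mass = 92.14e-3   # kg/mol for C7H8

specific_energy = delta_H / molar_mass   # J/kg
print(f"{specific_energy / 1e3:.0f} kJ/kg, "
      f"or {specific_energy / 3.6e6:.2f} kWh/kg")
# roughly 966 kJ/kg (~0.27 kWh/kg): modest next to combustion fuels, but
# notable for a closed-cycle, photochemically rechargeable material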
Quadricyclane
Chemistry
434
8,003,258
https://en.wikipedia.org/wiki/Dol%20hareubang
A dol hareubang, alternatively romanized in several other ways, is a type of traditional volcanic rock statue from Jeju Island, Korea. It is not known when the statues first began to be made; various theories exist for their origin. They possibly began to be made at the latest 500 years ago, during the early Joseon period. There are either 47 or 48 original pre-modern statues that are known to exist; most of them are located on Jeju Island. The statues are traditionally placed in front of gates, as symbolic projections of power and as guardians against evil spirits. They were also symbols of and ritual objects for fertility. The statues have been compared to jangseung, traditional wooden totem poles around Korea whose function was similarly to ward off bad spirits. They are now considered symbols of Jeju Island, and recreations of them in miniature and in full size have since been created. Names Dol hareubang is a term in the Jeju language, and means "stone grandfather". The term was reportedly not common until recently, and was mostly used by children. It was decided by the Jeju Cultural Property Committee in 1971 to make dol hareubang the official term for the statue, and this name has since become the predominant one. The statues have gone by a significant variety of names that were possibly regional and dependent on the characteristics of the statues. Names include useongmok, museongmok, ujungseok, beoksumeori, dolyeonggam, sumunjang, janggunseok, dongjaseok, mangjuseok, and ongjungseok. The name useongmok was possibly the most common. Description Each dol hareubang has different features and sizes, but they tend to share some commonalities. They are made of volcanic stone, and often depict figures wearing a round hat. This round hat is said to make the statue phallic, and thus a symbol of fertility. They tend to have large eyes, closed mouths, and one shoulder raised higher than the other. Their expressions have been described as stern, dignified, or humorous. Some have big ears, and some have hands placed either in front, on their stomachs, or around their backs. The statues were often erected at the entrances of fortresses (and thus at the boundaries of settlements), facing each other. They often had grooves in them for placing wooden logs in; the position of these logs signaled whether entrances were open or closed to passersby, as per the jeongnang system used around Jeju. The statues projected images of power and security, and also served a superstitious function in warding off bad spirits. Some people reportedly paid their respects to the statues whenever they passed. There are some commonalities shared among the dol hareubang of the three Joseon-era historical regions of Jeju, although there is still intra-region variance. Dol hareubang in Jeju-seong and Jeongeuihyeon-seong tend to stand on stone platforms called giseok, but those in Daejeonghyeon-seong do not. There are reportedly either 47 or 48 extant pre-modern dol hareubang. In Jeju City, there are 21. In Seongeup-ri in Seogwipo, there are 12. Across Inseong-ri, Anseong-ri, and Boseong-ri there are 12. In the National Folk Museum of Korea in Seoul, there are 2 that were originally from Jeju City. It is reportedly not known with certainty when most of these statues were produced. The statues were moved around over time, which caused wear and tear and made it difficult to establish where they were originally from. They also served other superstitious functions.
One folk belief had it that, if a woman was experiencing issues with infertility, she could secretly take part of a statue's nose, grind it into a powder, then consume the powder to improve her fertility. Many statues reportedly have worn noses due to this belief. Some reportedly believe that touching the nose of a statue improves fertility. History The origin of dol hareubangs is unclear, with at least three theories surrounding it. Records of the number and location of the statues from before 1914 are reportedly sparse. One theory has it that a sea-faring people brought the statues to Jeju. A second theory argues that the statues developed from jangseung or beoksu statues. Around 1416 (during the Joseon period), 6 dol hareubang in three pairs reportedly existed on the island. By 1754, there were reportedly 48 statues; 24 of these were at Jeju-mok (now Jeju City), with 4 pairs each at the fortress's west, south, and east gates. Some scholars argue that the earliest known dol hareubang in their current form were created in 1754. There is a record that dol hareubang (called ongjungseok) statues were built in 1754 in Jeju-mok. The creation of the statues was reportedly motivated by a belief that, after several famines in the reigns of kings Sukjong and Yeongjo, vengeful spirits were roaming and tormenting the living. The head of Jeju-mok then ordered that the statues be built. It is not clear whether these were the earliest occurrences of the statues. During the 1910–1945 Japanese colonial period, the statues were reportedly disregarded and moved around. This pattern reportedly continued into the rapid urban development after the liberation of Korea. Research on the statues occurred in the 1960s, and two of them were moved to the National Folk Museum of Korea in 1968. In recent years, the statue has become a symbol of Jeju Island. The first dol hareubang souvenir was reportedly created in 1963 by sculptor Song Jong-Won, who made a tall replica of a statue at the south gate of Jeju-mok. Tourist goods now widely feature the statues, with miniature to full-sized statues being sold. During the 1991 Soviet-South Korean summit on Jeju Island, Soviet leader Mikhail Gorbachev was given a dol hareubang as a gift. In 2002, a statue was gifted to Laizhou in China, and in 2003 another was gifted to the city hall of Santa Rosa, California, in the United States. See also Kurgan stelae Korean shamanism Shigandang Seonangdang Moai Religion in Korea References Sources External links Religion in Korea Religion in South Korea Culture of Korea Colossal statues Stone sculptures Outdoor sculptures in South Korea Korean folk religion Korean traditions Culture of Jeju Province
Dol hareubang
Physics,Mathematics
1,401
44,314,903
https://en.wikipedia.org/wiki/Security%2C%20Territory%2C%20Population
Security, Territory, Population: Lectures at the Collège de France, 1977–1978 is a series of lectures given by French philosopher Michel Foucault at the Collège de France between 1977 and 1978 and published posthumously. See also Foucault's lectures at the Collège de France References External links Michel Foucault Audio Archive Home Full text at Springerlink Works by Michel Foucault Biopolitics Political philosophy Political science
Security, Territory, Population
Engineering,Biology
92
5,501,977
https://en.wikipedia.org/wiki/Zero-order%20hold
The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication. Time-domain model A zero-order hold reconstructs the following continuous-time waveform from a sample sequence x[n], assuming one sample per time interval T: x_ZOH(t) = Σ_n x[n] · rect((t − nT)/T − 1/2), where rect(·) is the rectangular function. The shifted rect function is depicted in Figure 1, and x_ZOH(t) is the piecewise-constant signal depicted in Figure 2. Frequency-domain model The equation above for the output of the ZOH can also be modeled as the output of a linear time-invariant filter with impulse response equal to a rect function, and with input being a sequence of Dirac impulses scaled to the sample values. The filter can then be analyzed in the frequency domain, for comparison with other reconstruction methods such as the Whittaker–Shannon interpolation formula suggested by the Nyquist–Shannon sampling theorem, or such as the first-order hold or linear interpolation between sample values. In this method, a sequence of Dirac impulses, x_s(t), representing the discrete samples, x[n], is low-pass filtered to recover a continuous-time signal, x(t). Even though this is not what a DAC does in reality, the DAC output can be modeled by applying the hypothetical sequence of Dirac impulses, x_s(t), to a linear, time-invariant filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct constant pulse in the output. Begin by defining a continuous-time signal from the sample values, as above but using delta functions instead of rect functions: x_s(t) = Σ_n x[n] · δ((t − nT)/T) = T · Σ_n x[n] · δ(t − nT). The scaling by T, which arises naturally by time-scaling the delta function, has the result that the mean value of x_s(t) is equal to the mean value of the samples, so that the lowpass filter needed will have a DC gain of 1. Some authors use this scaling, while many others omit the time-scaling and the T, resulting in a low-pass filter model with a DC gain of T, and hence dependent on the units of measurement of time. The zero-order hold is the hypothetical filter or LTI system that converts the sequence of modulated Dirac impulses x_s(t) to the piecewise-constant signal (shown in Figure 2) x_ZOH(t) = Σ_n x[n] · rect((t − nT)/T − 1/2), resulting in an effective impulse response (shown in Figure 4) of h_ZOH(t) = (1/T) · rect(t/T − 1/2). The effective frequency response is the continuous Fourier transform of the impulse response: H_ZOH(f) = sinc(fT) · e^(−iπfT), where sinc(x) = sin(πx)/(πx) is the (normalized) sinc function commonly used in digital signal processing. The Laplace transform transfer function of the ZOH is found by substituting s = i 2 π f: H_ZOH(s) = (1 − e^(−sT))/(sT). The fact that practical digital-to-analog converters (DAC) do not output a sequence of Dirac impulses, x_s(t) (that, if ideally low-pass filtered, would result in the unique underlying bandlimited signal before sampling), but instead output a sequence of rectangular pulses, x_ZOH(t) (a piecewise-constant function), means that there is an inherent effect of the ZOH on the effective frequency response of the DAC, resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency, corresponding to a gain of sinc(1/2) = 2/π). This drop is a consequence of the hold property of a conventional DAC, and is not due to the sample and hold that might precede a conventional analog-to-digital converter (ADC).
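A short numerical check of the roll-off described above, assuming T = 1 for convenience; numpy's np.sinc is the normalized sinc used in this article:

import numpy as np

T = 1.0                             # sample interval (arbitrary units)
f = np.linspace(0.0, 1.0 / T, 5)    # from DC up to the sample rate
gain = np.abs(np.sinc(f * T))       # |H_ZOH(f)| = |sinc(fT)|

for fi, g in zip(f, gain):
    print(f"f = {fi:.2f}/T: |H| = {g:.4f}")

nyq = np.sinc(0.5)                  # gain at the Nyquist frequency 1/(2T)
print(f"Nyquist gain = {nyq:.4f} = 2/pi, i.e. {20 * np.log10(nyq):.4f} dB")

The last line reproduces the 3.9224 dB droop quoted above.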
See also Nyquist–Shannon sampling theorem First-order hold Discretization of linear state space models (assuming zero-order hold) References Digital signal processing Electrical engineering Control theory Signal processing
Zero-order hold
Mathematics,Technology,Engineering
832
11,951,096
https://en.wikipedia.org/wiki/List%20of%20active%20Solar%20System%20probes
This is a list of active space probes which have escaped Earth orbit. It includes lunar space probes, but does not include space probes orbiting at the Sun–Earth Lagrangian points (for these, see List of objects at Lagrangian points). A craft is deemed "active" if it is still able to transmit usable data to Earth (whether or not it can receive commands). The craft are further grouped by mission status – "en route", "mission in progress" or "mission complete" – based on their primary mission. For example, though Voyager 1 is still contactable en route to the Oort Cloud and has exited the Solar System, it is listed as "mission complete" because its primary task of studying Jupiter and Saturn has been accomplished. Once a probe has reached its first primary target, it is no longer listed as "en route", whether or not further travel is involved. Missions in progress Moon ARTEMIS P1/P2 Mission: studying the effect of the solar wind on the Moon. Originally launched as Earth satellites, they were later repurposed and moved to lunar orbit. Launched: 17 February 2007 Destination: Moon (in lunar orbit) Arrival: July 2011 Institution: NASA Lunar Reconnaissance Orbiter Mission: orbiter engaged in lunar mapping intended to identify safe landing sites, locate potential resources on the Moon, characterize the radiation environment, and demonstrate new technology. Launched: 18 June 2009 Destination: Moon (in lunar orbit) Arrival: 23 June 2009 Institution: NASA Queqiao Mission: halo orbiter serving as communications satellite for the Chang'e 4 lunar far-side mission; conducting a joint China-Netherlands low-frequency astronomy experiment. Launched: 21:28 UT on 20 May 2018 Destination: in halo orbit about Earth-Moon L2 Arrival: 14 June 2018 Institution: CNSA Chang'e 4 lander and rover Mission: lander engaging in a low-frequency radio spectrometry experiment, a neutron and dosimetry experiment, and a biological experiment. Rover seeking to characterize the lunar far-side environment (including possible lunar mantle material) using a visible/near-infrared spectrometer, ground-penetrating radar, cameras, and a neutral particle analyzer. Launched: 18:23 UT on 8 December 2018 Destination: Lunar far side Arrival: 02:26 UT on 3 January 2019 Institution: CNSA Chandrayaan-2 Orbiter Mission: engaged in studies of lunar topography and mineralogy, elemental abundance, the lunar exosphere, and signatures of hydroxyl and water. Launched: 22 July 2019 Destination: Moon (in lunar orbit) Arrival: 20 August 2019 Institution: ISRO CAPSTONE Mission: lunar-orbiting CubeSat that will test and verify the calculated orbital stability planned for the Gateway space station. Launched: 28 June 2022 Destination: Moon (in a near-rectilinear halo orbit (NRHO)) Arrival: 14 November 2022 Institution: NASA Danuri (Korea Pathfinder Lunar Orbiter) Mission: lunar orbiter by the Korea Aerospace Research Institute (KARI) of South Korea. The orbiter, its science payload and ground control infrastructure are technology demonstrators. The orbiter will also be tasked with surveying lunar resources such as water ice, uranium, helium-3, silicon, and aluminium, and producing a topographic map to help select future lunar landing sites. Launched: 4 August 2022 Destination: Moon (in lunar orbit) Arrival: 16 December 2022 Institution: collaboration between KARI and NASA EQUULEUS Mission: halo orbiter to image the Earth's plasmasphere, observe impact flashes on the Moon's far side, and carry out experiments around the Earth-Moon L2 point.
Launched: 16 November 2022 Destination: in halo orbit about Earth-Moon L2 Arrival: November 2022 Institution: JAXA Queqiao-2 Mission: lunar orbiter serving as communications satellite for the Chang'e 6, Chang'e 7, and Chang'e 8 missions and the International Lunar Research Station on the lunar far side. Launched: 20 March 2024 Destination: Moon (in lunar orbit) Arrival: 24 March 2024 Institution: CNSA Tiandu-1 Mission: testing technologies for a future lunar satellite constellation. Launched: 20 March 2024 Destination: Moon (in lunar orbit) Arrival: 24 March 2024 Institution: Deep Space Exploration Laboratory Tiandu-2 Mission: testing technologies for a future lunar satellite constellation. Launched: 20 March 2024 Destination: Moon (in lunar orbit) Arrival: 24 March 2024 Institution: Deep Space Exploration Laboratory DRO A/B Mission: testing technologies to establish lunar navigation and communications infrastructure to support lunar exploration. Launched: 3 March 2024 Destination: Moon (in DRO) Arrival: ~20 August 2024 Institution: Chinese Academy of Sciences ICUBE-Q Mission: first Pakistani lunar mission, piggybacking with Chang'e 6. Launched: 3 May 2024 Destination: Moon (in lunar orbit) Arrival: 8 May 2024 Institution: SUPARCO Blue Ghost M1 Mission: lunar lander carrying NASA-sponsored experiments and commercial payloads, as part of the Commercial Lunar Payload Services program, to Mare Crisium Launched: 15 January 2025 Destination: Lunar surface Arrival: 2 March 2025 Institution: NASA Hakuto-R Mission 2 Resilience lander and Tenacious rover Mission: lunar landing demonstration mission. Launched: 06:11 UT on 15 January 2025 Destination: Mare Frigoris, lunar near side Arrival: April 2025 Institution: Ispace Inc. Ispace Europe Mercury BepiColombo Mission: Spacecraft consists of the Mercury Transfer Module (MTM), Mercury Planetary Orbiter (MPO), and the Mercury Magnetospheric Orbiter (MMO or Mio). MTM and MPO are built by ESA while the MMO is mostly built by JAXA. Once the MTM delivers the MPO and MMO to Mercury orbit, the two orbiters will have the following objectives: to study Mercury's form, interior structure, geology, composition, and craters; to study the origin, structure, and dynamics of its magnetic field; to characterize the composition and dynamics of Mercury's vestigial atmosphere; to test Einstein's theory of general relativity; to search for asteroids sunward of Earth; and to generally study the origin and evolution of a planet close to a parent star. Launched: 01:45:28 UT on 19 October 2018 Destination: Mercury Arrival: En route (anticipated to enter Mercury polar orbit in November 2026) Institution: ESA JAXA Mars 2001 Mars Odyssey Mission: Mars Odyssey was designed to map the surface of Mars and also acts as a relay for the Curiosity rover. Its name is a tribute to the novel and 1968 film 2001: A Space Odyssey. Launched: 7 April 2001 Destination: Mars Arrival: 24 October 2001 Institution: NASA Mars Express Mission: Mars orbiter designed to study the planet's atmosphere and geology and search for sub-surface water. In 2017 the mission was extended until at least the end of 2020. Launched: 2 June 2003 Destination: Mars Arrival: 25 December 2003 Institution: ESA Mars Reconnaissance Orbiter Mission: the second NASA satellite orbiting Mars. It is specifically designed to analyze the landforms, stratigraphy, minerals, and ice of the red planet.
Launched: 12 August 2005 Destination: Mars Arrival: 10 March 2006 Institution: NASA Curiosity rover Mission: searching for evidence of organic material on Mars, monitoring methane levels in the atmosphere, and engaging in exploration of the landing site at Gale Crater. Launched: 26 November 2011 Destination: Mars Arrival: 6 August 2012 Institution: NASA MAVEN — Mars Atmosphere and Volatile Evolution Mission: study the Martian upper atmosphere and its gradual loss to space Launched: 18 November 2013 Destination: Mars Arrival: September 2014 Institution: NASA Trace Gas Orbiter (ExoMars 2016) Mission: study methane and other trace gases in the Martian atmosphere Launched: 14 March 2016 Destination: Mars Arrived: 19 October 2016 (Mars orbit insertion), 21 April 2018 (final orbit) Institution: ESA Emirates Mars Mission Mission: study weather and atmosphere. Launched: 19 July 2020 Destination: Mars Arrival: 9 February 2021 Institution: UAESA Tianwen-1 orbiter Mission: find evidence for current and past life and produce Martian surface maps. Orbital studies of Martian surface morphology, soil, and atmosphere. Launched: 23 July 2020 Destination: Mars Arrival: 10 February 2021 Institution: CNSA Perseverance rover Mission: searching for evidence of organic material on Mars, and engaging in exploration of the landing site at Jezero crater. Launched: 30 July 2020 Destination: Jezero crater, Mars Arrival: 18 February 2021 Institution: NASA Asteroids and comets Hayabusa2 Mission: asteroid study and sample-return Launched: 3 December 2014 First Destination: 162173 Ryugu Arrival: 27 June 2018 Left Ryugu: 12 November 2019 Second Destination: Institution: JAXA OSIRIS-APEX Mission: asteroid study and sample-return Launched: 8 September 2016 Destination: 101955 Bennu Arrival: 3 December 2018 Left Bennu: 10 May 2021 Destination: 99942 Apophis Arrival: April 2029 Institution: NASA Lucy Mission: to fly by eight Jupiter trojan asteroids and one main-belt asteroid Launched: 16 October 2021 Destination: 52246 Donaldjohanson Arrival: 20 April 2025 Institution: NASA Psyche Mission: to orbit a main-belt asteroid Launched: 13 October 2023 Destination: 16 Psyche Arrival: August 2029 Institution: NASA Hera Mission: to orbit a binary asteroid and observe the asteroids post-DART impact. Launched: 7 October 2024 Destination: 65803 Didymos system Arrival: December 2026 Institution: ESA Heliocentric orbit Parker Solar Probe Mission: observation of solar wind, magnetic fields, and coronal energy flow. Launched: 12 August 2018 Destination: low solar orbit, perihelion 6.9 million km Arrival: 19 January 2019 Institution: NASA Solar Orbiter Mission: detailed measurements of the inner heliosphere and nascent solar wind, and close observations of the polar regions of the Sun. Launched: 10 February 2020 Destination: High-inclination solar orbit Arrival: Operational orbit in 2023 Institution: ESA Outer Solar System Europa Clipper Mission: mission to study Jupiter and Europa. Launched: 14 October 2024 Destination: Jupiter Arrival: 11 April 2030 (en route) Institution: NASA Juice (Jupiter Icy Moons Explorer) Mission: mission to study Jupiter's three icy moons Callisto, Europa and Ganymede, eventually orbiting Ganymede as the first spacecraft to orbit a satellite of another planet. Launched: 14 April 2023 Destination: Jupiter Arrival: July 2031 (en route) Destination: Ganymede Arrival: December 2034 (en route) Institution: ESA Juno Mission: studying Jupiter from polar orbit.
Originally intended to de-orbit into the Jovian atmosphere after 2021, now operating until 2025. Launched: 5 August 2011 Destination: Jupiter Arrival: 4 July 2016 Institution: NASA New Horizons Mission: the first spacecraft to study Pluto up close, and ultimately the Kuiper Belt. It was the fastest spacecraft when leaving Earth and will be the fifth probe to leave the Solar System. Launched: 19 January 2006 Destination: Pluto and Charon Arrival: 14 July 2015 Left Charon: 14 July 2015 Institution: NASA Voyager 1 Mission: investigating Jupiter and Saturn, and the moons of these planets. Its continuing data feed offered the first direct measurements of the heliosheath and the heliopause. It is currently the furthest man-made object from Earth, as well as the first object to leave the heliosphere and cross into interstellar space. As of November 2017 it was at a distance from the Sun of about 140 astronomical units (AU) (21 billion kilometers, or 0.002 light-years), and it will not be overtaken by any other current craft. In August 2012, Voyager 1 became the first human-built spacecraft to enter interstellar space. Though declining, the onboard power source should keep some of the probe's instruments running until 2025. Launched: 5 September 1977 Destination: Jupiter and Saturn Arrival: January 1979 Institution: NASA Primary mission completion: November 1980 Current trajectory: entered interstellar space August 2012 Voyager 2 Mission: studying all four giant planets. This mission was one of NASA's most successful, yielding a wealth of new information. As of November 2017 it was some 116 AU from the Sun (17.34 billion kilometers). It left the heliosphere and crossed into interstellar space in December 2018. As with Voyager 1, scientists are now using Voyager 2 to learn what the Solar System is like beyond the heliosphere. Launched: 20 August 1977 Destination: Jupiter, Saturn, Uranus, Neptune Arrival: 9 July 1979 Institution: NASA Primary mission completion: August 1989 Current trajectory: entered interstellar space December 2018 See also Lists of spacecraft References Probes Solar System, Active Probes
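A quick arithmetic check of the Voyager 1 distance figures quoted above (140 AU as of late 2017), using standard unit conversions:

AU_KM = 149_597_870.7   # kilometers per astronomical unit
LY_KM = 9.4607e12       # kilometers per light-year

d_km = 140 * AU_KM
print(f"140 AU = {d_km / 1e9:.1f} billion km = {d_km / LY_KM:.4f} light-years")
# about 20.9 billion km and 0.0022 light-years, matching the rounded
# figures of 21 billion km and 0.002 light-years given above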
List of active Solar System probes
Astronomy
2,623