id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
1,050,506 | https://en.wikipedia.org/wiki/Glycosyl | In organic chemistry, a glycosyl group is a univalent free radical or substituent structure obtained by removing the hydroxyl (−OH) group from the hemiacetal group found in the cyclic form of a monosaccharide and, by extension, of a lower oligosaccharide. Glycosyl groups are exchanged during glycosylation from the glycosyl donor, the electrophile, to the glycosyl acceptor, the nucleophile. The outcome of the glycosylation reaction is largely dependent on the reactivity of each partner. Glycosyl groups also react with inorganic acids, such as phosphoric acid, forming esters such as glucose 1-phosphate.
Examples
In cellulose, glycosyl groups link together 1,4-β-D-glucosyl units to form chains of (1,4-β-D-glucosyl)ₙ.
Other examples include ribityl in 6,7-Dimethyl-8-ribityllumazine, and glycosylamines.
Alternative substituent groups
Instead of the hemiacetal hydroxyl group, a hydrogen atom can be removed to form a substituent, for example the hydrogen from the C3 hydroxyl of a glucose molecule. The substituent is then called D-glucopyranos-3-O-yl, as it appears in the name of the drug Mifamurtide.
A recent method for detecting Au³⁺ in vivo used a C-glycosyl pyrene, whose fluorescence and permeability through cell membranes enabled the detection.
See also
Acyl group
References
Substituents
Biomolecules
Monosaccharides
Oligosaccharides | Glycosyl | [
"Chemistry",
"Biology"
] | 386 | [
"Carbohydrates",
"Natural products",
"Substituents",
"Monosaccharides",
"Organic compounds",
"Oligosaccharides",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Molecular biology"
] |
1,050,551 | https://en.wikipedia.org/wiki/Multiple-criteria%20decision%20analysis | Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). It is also known as multiple attribute utility theory, multiple attribute value theory, multiple attribute preference theory, and multi-objective decision analysis.
Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria.
In their daily lives, people usually weigh multiple criteria implicitly and may be comfortable with the consequences of such decisions that are made based on only intuition. On the other hand, when stakes are high, it is important to properly structure the problem and explicitly evaluate multiple criteria. In making the decision of whether to build a nuclear power plant or not, and where to build it, there are not only very complex issues involving multiple criteria, but there are also multiple parties who are deeply affected by the consequences.
Structuring complex problems well and considering multiple criteria explicitly leads to more informed and better decisions. There have been important advances in this field since the start of the modern multiple-criteria decision-making discipline in the early 1960s. A variety of approaches and methods, many implemented by specialized decision-making software, have been developed for their application in an array of disciplines, ranging from politics and business to the environment and energy.
Foundations, concepts, definitions
MCDM or MCDA are acronyms for multiple-criteria decision-making and multiple-criteria decision analysis. Stanley Zionts helped popularize the acronym with his 1979 article "MCDM – If not a Roman Numeral, then What?", intended for an entrepreneurial audience.
MCDM is concerned with structuring and solving decision and planning problems involving multiple criteria. The purpose is to support decision-makers facing such problems. Typically, there does not exist a unique optimal solution for such problems and it is necessary to use decision-makers' preferences to differentiate between solutions.
"Solving" can be interpreted in different ways. It could correspond to choosing the "best" alternative from a set of available alternatives (where "best" can be interpreted as "the most preferred alternative" of a decision-maker). Another interpretation of "solving" could be choosing a small set of good alternatives, or grouping alternatives into different preference sets. An extreme interpretation could be to find all "efficient" or "nondominated" alternatives (which we will define shortly).
The difficulty of the problem originates from the presence of more than one criterion. There is no longer a unique optimal solution to an MCDM problem that can be obtained without incorporating preference information. The concept of an optimal solution is often replaced by the set of nondominated solutions. A solution is called nondominated if it is not possible to improve it in any criterion without sacrificing it in another. Therefore, it makes sense for the decision-maker to choose a solution from the nondominated set. Otherwise, they could do better in terms of some or all of the criteria, and not do worse in any of them. Generally, however, the set of nondominated solutions is too large to be presented to the decision-maker for the final choice. Hence we need tools that help the decision-maker focus on the preferred solutions (or alternatives). Normally one has to "tradeoff" certain criteria for others.
MCDM has been an active area of research since the 1970s. There are several MCDM-related organizations including the International Society on Multi-criteria Decision Making, Euro Working Group on MCDA, and INFORMS Section on MCDM. For a history see: Köksalan, Wallenius and Zionts (2011).
MCDM draws upon knowledge in many fields including:
Mathematics
Decision analysis
Economics
Computer technology
Software engineering
Information systems
A typology
There are different classifications of MCDM problems and methods. A major distinction between MCDM problems is based on whether the solutions are explicitly or implicitly defined.
Multiple-criteria evaluation problems: These problems consist of a finite number of alternatives, explicitly known at the beginning of the solution process. Each alternative is represented by its performance in multiple criteria. The problem may be defined as finding the best alternative for a decision-maker (DM), or finding a set of good alternatives. One may also be interested in "sorting" or "classifying" alternatives. Sorting refers to placing alternatives in a set of preference-ordered classes (such as assigning credit ratings to countries), and classifying refers to assigning alternatives to non-ordered sets (such as diagnosing patients based on their symptoms). Some of the MCDM methods in this category have been studied in a comparative manner in the book by Triantaphyllou (2000) on this subject.
Multiple-criteria design problems (multiple objective mathematical programming problems): In these problems, the alternatives are not explicitly known. An alternative (solution) can be found by solving a mathematical model. The number of alternatives is either infinite (countable or not) or finite, but typically exponentially large (in the number of variables ranging over finite domains).
Whether it is an evaluation problem or a design problem, preference information of DMs is required in order to differentiate between solutions. The solution methods for MCDM problems are commonly classified based on the timing of preference information obtained from the DM.
There are methods that require the DM's preference information at the start of the process, transforming the problem into essentially a single-criterion problem. These methods are said to operate by "prior articulation of preferences". Methods based on estimating a value function or using the concept of "outranking relations", the analytic hierarchy process, and some rule-based decision methods try to solve multiple-criteria evaluation problems utilizing prior articulation of preferences. Similarly, there are methods developed to solve multiple-criteria design problems using prior articulation of preferences by constructing a value function. Perhaps the best known of these methods is goal programming. Once the value function is constructed, the resulting single-objective mathematical program is solved to obtain a preferred solution.
Some methods require preference information from the DM throughout the solution process. These are referred to as interactive methods or methods that require "progressive articulation of preferences". These methods have been well developed for both the multiple-criteria evaluation (see, for example, Geoffrion, Dyer and Feinberg, 1972, and Köksalan and Sagala, 1995) and design problems (see Steuer, 1986).
Multiple-criteria design problems typically require the solution of a series of mathematical programming models in order to reveal implicitly defined solutions. For these problems, a representation or approximation of "efficient solutions" may also be of interest. This category is referred to as "posterior articulation of preferences", implying that the DM's involvement starts posterior to the explicit revelation of "interesting" solutions (see for example Karasakal and Köksalan, 2009).
When the mathematical programming models contain integer variables, the design problems become harder to solve. Multiobjective Combinatorial Optimization (MOCO) constitutes a special category of such problems posing substantial computational difficulty (see Ehrgott and Gandibleux, 2002, for a review).
Representations and definitions
The MCDM problem can be represented in the criterion space or the decision space. Alternatively, if different criteria are combined by a weighted linear function, it is also possible to represent the problem in the weight space. Below are demonstrations of the criterion and decision spaces, as well as some formal definitions.
Criterion space representation
Let us assume that we evaluate solutions in a specific problem situation using several criteria. Let us further assume that more is better in each criterion. Then, among all possible solutions, we are ideally interested in those solutions that perform well in all considered criteria. However, it is unlikely to have a single solution that performs well in all considered criteria. Typically, some solutions perform well in some criteria and some perform well in others. Finding a way of trading off between criteria is one of the main endeavors in the MCDM literature.
Mathematically, the MCDM problem corresponding to the above arguments can be represented as

    "max" q
    subject to q ∈ Q,

where q is the vector of k criterion functions (objective functions) and Q is the feasible set in criterion space, Q = {q : q = f(x), x ∈ X}.
If Q is defined explicitly (by a set of alternatives), the resulting problem is called a multiple-criteria evaluation problem.
If Q is defined implicitly (by a set of constraints), the resulting problem is called a multiple-criteria design problem.
The quotation marks are used to indicate that the maximization of a vector is not a well-defined mathematical operation. This corresponds to the argument that we will have to find a way to resolve the trade-off between criteria (typically based on the preferences of a decision maker) when a solution that performs well in all criteria does not exist.
Decision space representation
The decision space corresponds to the set of possible decisions that are available to us. The criteria values will be consequences of the decisions we make. Hence, we can define a corresponding problem in the decision space. For example, in designing a product, we decide on the design parameters (decision variables) each of which affects the performance measures (criteria) with which we evaluate our product.
Mathematically, a multiple-criteria design problem can be represented in the decision space as follows:

    "max" f(x) = (f₁(x), ..., f_k(x))
    subject to x ∈ X,

where X is the feasible set and x is the decision variable vector of size n.
A well-developed special case is obtained when X is a polyhedron defined by linear inequalities and equalities. If all the objective functions are linear in terms of the decision variables, this variation leads to multiple objective linear programming (MOLP), an important subclass of MCDM problems.
There are several definitions that are central in MCDM. Two closely related definitions are those of nondominance (defined based on the criterion space representation) and efficiency (defined based on the decision variable representation).
Definition 1. q* ∈ Q is nondominated if there does not exist another q ∈ Q such that q ≥ q* and q ≠ q*.
Roughly speaking, a solution is nondominated so long as it is not inferior to any other available solution in all the considered criteria.
Definition 2. x* ∈ X is efficient if there does not exist another x ∈ X such that f(x) ≥ f(x*) and f(x) ≠ f(x*).
If an MCDM problem represents a decision situation well, then the most preferred solution of a DM has to be an efficient solution in the decision space, and its image is a nondominated point in the criterion space. The following definitions are also important.
Definition 3. q* ∈ Q is weakly nondominated if there does not exist another q ∈ Q such that q > q* (that is, strictly better in every criterion).
Definition 4. x* ∈ X is weakly efficient if there does not exist another x ∈ X such that f(x) > f(x*).
Weakly nondominated points include all nondominated points and some special dominated points. The importance of these special dominated points comes from the fact that they commonly appear in practice and special care is necessary to distinguish them from nondominated points. If, for example, we maximize a single objective, we may end up with a weakly nondominated point that is dominated. The dominated points of the weakly nondominated set are located either on vertical or horizontal planes (hyperplanes) in the criterion space.
Ideal point: (in criterion space) represents the best (the maximum for maximization problems and the minimum for minimization problems) of each objective function and typically corresponds to an infeasible solution.
Nadir point: (in criterion space) represents the worst (the minimum for maximization problems and the maximum for minimization problems) of each objective function among the points in the nondominated set and is typically a dominated point.
The ideal point and the nadir point are useful to the DM to get the "feel" of the range of solutions (although it is not straightforward to find the nadir point for design problems having more than two criteria).
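As an illustration of these definitions, here is a minimal sketch (not from the article; the criterion vectors are made up) that filters the nondominated points of a finite evaluation problem and computes the ideal and nadir points, assuming all criteria are maximized:

```python
# Illustrative sketch: nondominated filtering plus ideal/nadir points
# for a finite set of alternatives, maximization in every criterion.
import numpy as np

def nondominated(points):
    """Return the rows of `points` that no other row dominates.

    A point p dominates q if p >= q in every criterion and p > q in at
    least one (Definitions 1 and 2 above, for the maximization case).
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, q in enumerate(pts):
        dominated = any(
            np.all(p >= q) and np.any(p > q)
            for j, p in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

alternatives = [(8, 2), (6, 6), (3, 9), (5, 5), (1, 4)]  # criterion vectors (made up)
nd = nondominated(alternatives)  # -> (8,2), (6,6), (3,9); (5,5) is dominated by (6,6)
ideal = nd.max(axis=0)           # best value of each criterion: (8, 9)
nadir = nd.min(axis=0)           # worst value over the nondominated set: (3, 2)
print(nd, ideal, nadir)
```

Consistent with the remark above, the ideal point (8, 9) computed here does not correspond to any single feasible alternative.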
Illustrations of the decision and criterion spaces
The following two-variable MOLP problem in the decision variable space will help demonstrate some of the key concepts graphically.
In Figure 1, the extreme points "e" and "b" maximize the first and second objectives, respectively. The red boundary between those two extreme points represents the efficient set. It can be seen from the figure that, for any feasible solution outside the efficient set, it is possible to improve both objectives by some points on the efficient set. Conversely, for any point on the efficient set, it is not possible to improve both objectives by moving to any other feasible solution. At these solutions, one has to sacrifice from one of the objectives in order to improve the other objective.
Due to its simplicity, the above problem can be represented in criterion space by replacing the decision variables x with the criterion values f₁ and f₂ and rewriting the constraints accordingly.
We present the criterion space graphically in Figure 2. It is easier to detect the nondominated points (corresponding to efficient solutions in the decision space) in the criterion space. The north-east region of the feasible space constitutes the set of nondominated points (for maximization problems).
Generating nondominated solutions
There are several ways to generate nondominated solutions. We will discuss two of these. The first approach can generate a special class of nondominated solutions whereas the second approach can generate any nondominated solution.
Weighted sums (Gass & Saaty, 1955)
If we combine the multiple criteria into a single criterion by multiplying each criterion with a positive weight and summing up the weighted criteria, then the solution to the resulting single criterion problem is a special efficient solution. These special efficient solutions appear at corner points of the set of available solutions. Efficient solutions that are not at corner points have special characteristics and this method is not capable of finding such points. Mathematically, we can represent this situation as

    max w·f(x) = Σᵢ₌₁ᵏ wᵢ fᵢ(x),  with wᵢ > 0 for all i,
    subject to x ∈ X.
By varying the weights, weighted sums can be used for generating efficient extreme point solutions for design problems, and supported (convex nondominated) points for evaluation problems.
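A hedged sketch of how this plays out for a finite evaluation problem; the criterion vectors are illustrative, and scanning a grid of strictly positive weights is one simple way to enumerate the supported nondominated alternatives:

```python
# Illustrative weighted-sum scan over a finite set of alternatives.
# Each positive weight vector picks out one supported nondominated point.
import numpy as np

alternatives = np.array([(8, 2), (6, 6), (3, 9)], dtype=float)  # made-up data

supported = set()
for w1 in np.linspace(0.01, 0.99, 99):       # strictly positive weights summing to 1
    w = np.array([w1, 1.0 - w1])
    best = int(np.argmax(alternatives @ w))  # maximize the weighted sum
    supported.add(best)

print(supported)  # indices of supported nondominated alternatives
```

An unsupported nondominated point, one lying in a nonconvex part of the frontier, would never win for any weight vector, which is exactly the limitation noted above.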
Achievement scalarizing function (Wierzbicki, 1980)
Achievement scalarizing functions also combine multiple criteria into a single criterion by weighting them in a very special way. They create rectangular contours going away from a reference point towards the available efficient solutions. This special structure empowers achievement scalarizing functions to reach any efficient solution. This is a powerful property that makes these functions very useful for MCDM problems.
Mathematically, we can represent the corresponding problem as

    min maxᵢ { wᵢ (gᵢ − fᵢ(x)) } + ρ Σᵢ (gᵢ − fᵢ(x)),
    subject to x ∈ X,

where g is the reference point, w is a positive weight vector, and ρ is a small positive constant.
The achievement scalarizing function can be used to project any point (feasible or infeasible) on the efficient frontier. Any point (supported or not) can be reached. The second term in the objective function is required to avoid generating inefficient solutions. Figure 3 demonstrates how a feasible point, g¹, and an infeasible point, g², are projected onto the nondominated points, q¹ and q², respectively, along the direction w using an achievement scalarizing function. The dashed and solid contours correspond to the objective function contours with and without the second term of the objective function, respectively.
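The following sketch evaluates an achievement scalarizing function of the form shown above over a finite set of criterion vectors; the reference point, weights, and augmentation constant ρ are illustrative assumptions, not values from the article:

```python
# Illustrative achievement scalarizing function (after Wierzbicki, 1980),
# minimized over a finite set of criterion vectors. The small rho term
# rules out merely weakly nondominated solutions.
import numpy as np

def achievement(q, g, w, rho=1e-6):
    d = g - q                          # per-criterion shortfall from the reference point
    return np.max(w * d) + rho * np.sum(d)

Q = np.array([(8, 2), (6, 6), (3, 9)], dtype=float)  # made-up nondominated points
g = np.array([7.0, 7.0])                             # reference point (may be infeasible)
w = np.array([1.0, 1.0])

best = min(range(len(Q)), key=lambda i: achievement(Q[i], g, w))
print(Q[best])  # the nondominated point "closest" to g in this sense -> (6, 6)
```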
Solving MCDM problems
Different schools of thought have developed for solving MCDM problems (both of the design and evaluation type). For a bibliometric study showing their development over time, see Bragge, Korhonen, H. Wallenius and J. Wallenius [2010].
Multiple objective mathematical programming school
(1) Vector maximization: The purpose of vector maximization is to approximate the nondominated set; originally developed for Multiple Objective Linear Programming problems (Evans and Steuer, 1973; Yu and Zeleny, 1975).
(2) Interactive programming: Phases of computation alternate with phases of decision-making (Benayoun et al., 1971; Geoffrion, Dyer and Feinberg, 1972; Zionts and Wallenius, 1976; Korhonen and Wallenius, 1988). No explicit knowledge of the DM's value function is assumed.
Goal programming school
The purpose is to set a priori target values for goals, and to minimize weighted deviations from these goals. Both importance weights as well as lexicographic pre-emptive weights have been used (Charnes and Cooper, 1961).
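A minimal weighted goal programming sketch, with made-up goals, targets, and bounds: the deviation-minimization problem is cast as a linear program and solved with SciPy's linprog.

```python
# Illustrative weighted goal programming: minimize the sum of deviation
# variables d- and d+ from two goal levels. Data are hypothetical.
import numpy as np
from scipy.optimize import linprog

# decision vector z = [x1, x2, d1-, d1+, d2-, d2+]
c = np.array([0, 0, 1, 1, 1, 1], dtype=float)  # equal weights on all deviations

# goal 1: 3*x1 + 2*x2 + d1- - d1+ = 12   (e.g. a profit target)
# goal 2:   x1 + 2*x2 + d2- - d2+ = 8    (e.g. a resource-use target)
A_eq = np.array([[3, 2, 1, -1, 0, 0],
                 [1, 2, 0, 0, 1, -1]], dtype=float)
b_eq = np.array([12, 8], dtype=float)

bounds = [(0, 4), (0, 4)] + [(0, None)] * 4  # x1, x2 capped; deviations >= 0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x[:2])  # -> [2. 3.]: both targets met exactly, all deviations zero
```

A pre-emptive (lexicographic) variant would instead solve a sequence of such LPs, one priority level at a time.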
Fuzzy-set theorists
Fuzzy sets were introduced by Zadeh (1965) as an extension of the classical notion of sets. This idea is used in many MCDM algorithms to model and solve fuzzy problems.
Ordinal data based methods
Ordinal data has a wide application in real-world situations. In this regard, some MCDM methods were designed to handle ordinal data as input data; examples include the Ordinal Priority Approach and the Qualiflex method.
Multi-attribute utility theorists
Multi-attribute utility or value functions are elicited and used to identify the most preferred alternative or to rank order the alternatives. Elaborate interview techniques, which exist for eliciting linear additive utility functions and multiplicative nonlinear utility functions, may be used (Keeney and Raiffa, 1976). Another approach is to elicit value functions indirectly by asking the decision-maker a series of pairwise ranking questions involving choosing between hypothetical alternatives (PAPRIKA method; Hansen and Ombler, 2008).
French school
The French school focuses on decision aiding, in particular the ELECTRE family of outranking methods that originated in France during the mid-1960s. The method was first proposed by Bernard Roy (Roy, 1968).
Evolutionary multiobjective optimization school (EMO)
EMO algorithms start with an initial population, and update it by using processes designed to mimic natural survival-of-the-fittest principles and genetic variation operators to improve the average population from one generation to the next. The goal is to converge to a population of solutions which represent the nondominated set (Schaffer, 1984; Srinivas and Deb, 1994). More recently, there are efforts to incorporate preference information into the solution process of EMO algorithms (see Deb and Köksalan, 2010).
Grey system theory based methods
In the 1980s, Deng Julong proposed Grey System Theory (GST) and its first multiple-attribute decision-making model, called Deng's Grey relational analysis (GRA) model. Later, the grey systems scholars proposed many GST based methods like Liu Sifeng's Absolute GRA model, Grey Target Decision Making (GTDM) and Grey Absolute Decision Analysis (GADA).
Analytic hierarchy process (AHP)
The AHP first decomposes the decision problem into a hierarchy of subproblems. Then the decision-maker evaluates the relative importance of its various elements by pairwise comparisons. The AHP converts these evaluations to numerical values (weights or priorities), which are used to calculate a score for each alternative (Saaty, 1980). A consistency index measures the extent to which the decision-maker has been consistent in her responses. AHP is one of the more controversial techniques listed here, with some researchers in the MCDA community believing it to be flawed.
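A short sketch of the AHP weighting step under illustrative pairwise judgments: the priority vector is taken as the principal eigenvector of the comparison matrix, and a consistency ratio is computed in the way Saaty describes (the random index value used is the standard one for a 3x3 matrix):

```python
# Illustrative AHP priority derivation; the judgment matrix is made up.
import numpy as np

# A[i, j] = how many times more important criterion i is than criterion j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
n = A.shape[0]

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalized priorities

lam_max = eigvals[k].real
ci = (lam_max - n) / (n - 1)           # consistency index
ri = 0.58                              # Saaty's random index for n = 3
cr = ci / ri                           # consistency ratio; < 0.1 is usually acceptable
print(weights, cr)
```

Row geometric means are a common approximation to the eigenvector when a full eigensolver is not available.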
Several papers reviewed the application of MCDM techniques in various disciplines such as fuzzy MCDM, classic MCDM, sustainable and renewable energy, VIKOR technique, transportation systems, service quality, TOPSIS method, energy management problems, e-learning, tourism and hospitality, SWARA and WASPAS methods.
MCDM methods
The following MCDM methods are available, many of which are implemented by specialized decision-making software:
Aggregated Indices Randomization Method (AIRM)
Analytic hierarchy process (AHP)
Analytic network process (ANP)
Balance Beam process
Best worst method (BWM)
Brown–Gibson model
Characteristic Objects METhod (COMET)
Choosing By Advantages (CBA)
Conjoint Value Hierarchy (CVA)
Data envelopment analysis
Decision EXpert (DEX)
Disaggregation – Aggregation Approaches (UTA*, UTAII, UTADIS)
Rough set (Rough set approach)
Dominance-based rough set approach (DRSA)
ELECTRE (Outranking)
Evaluation Based on Distance from Average Solution (EDAS)
Evidential reasoning approach (ER)
FITradeoff (www.fitradeoff.org)
Goal programming (GP)
Grey relational analysis (GRA)
Inner product of vectors (IPV)
Measuring Attractiveness by a categorical Based Evaluation Technique (MACBETH)
Multi-Attribute Global Inference of Quality (MAGIQ)
Multi-attribute utility theory (MAUT)
Multi-attribute value theory (MAVT)
Markovian Multi Criteria Decision Making
New Approach to Appraisal (NATA)
Nonstructural Fuzzy Decision Support System (NSFDSS)
Ordinal Priority Approach (OPA)
Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA)
PROMETHEE (Outranking)
Simple Multi-Attribute Rating Technique (SMART)
Stratified Multi Criteria Decision Making (SMCDM)
Stochastic Multicriteria Acceptability Analysis (SMAA)
Superiority and inferiority ranking method (SIR method)
System Redesigning to Creating Shared Value (SYRCS)
Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)
Value analysis (VA)
Value engineering (VE)
VIKOR method
Weighted product model (WPM)
Weighted sum model (WSM)
See also
Architecture tradeoff analysis method
Decision-making
Decision-making software
Decision-making paradox
Decisional balance sheet
Multicriteria classification problems
Rank reversals in decision-making
Superiority and inferiority ranking method
References
Further reading
A Brief History prepared by Steuer and Zionts
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
Decision analysis
Management systems
Mathematical optimization
Utility | Multiple-criteria decision analysis | [
"Mathematics"
] | 4,545 | [
"Mathematical optimization",
"Mathematical analysis"
] |
1,050,706 | https://en.wikipedia.org/wiki/Gundestrup%20cauldron | The Gundestrup cauldron is a richly decorated silver vessel, thought to date from between 200 BC and 300 AD, or more narrowly between 150 BC and 1 BC. This places it within the late La Tène period or early Roman Iron Age. The cauldron is the largest known example of European Iron Age silver work (diameter 69 cm; height 42 cm). It was found dismantled, with the other pieces stacked inside the base, in 1891, in a peat bog near the hamlet of Gundestrup in the Aars parish of Himmerland, Denmark. It is now usually on display in the National Museum of Denmark in Copenhagen, with replicas at other museums; it was in the UK on a travelling exhibition called The Celts during 2015–2016.
The cauldron is not complete, and now consists of a rounded cup-shaped bottom making up the lower part of the cauldron, usually called the base plate, above which are five interior plates and seven exterior ones; a missing eighth exterior plate would be needed to encircle the cauldron, and only two sections of a rounded rim at the top of the cauldron survive. The base plate is mostly smooth and undecorated inside and out, apart from a decorated round medallion in the centre of the interior. All the other plates are heavily decorated with repoussé work, hammered from beneath to push out the silver. Other techniques were used to add detail, and there is extensive gilding and some use of inlaid pieces of glass for the eyes of figures. Other pieces of fittings were found. Altogether it weighs just under 9 kg.
While the vessel was found in Denmark, it was probably not made there or nearby; it includes elements of Gaulish and Thracian origin in the workmanship, metallurgy, and imagery. The techniques and elements of the style of the panels relate closely to other Thracian silver, while much of the depiction, in particular of the human figures, relates to the Celts, though attempts to relate the scenes closely to Celtic mythology remain controversial. Other aspects of the iconography derive from the Near East.
Hospitality on a large scale was probably an obligation for Celtic elites, and although cauldrons were therefore an important item of prestige metalwork, they are usually much plainer and smaller than this. This is an exceptionally large and elaborate object with no close parallel, except a large fragment from a bronze cauldron also found in Denmark, at Rynkeby; however the exceptional wetland deposits in Scandinavia have produced a number of objects of types that were probably once common but where other examples have not survived. It has been much discussed by scholars, and represents a fascinatingly complex demonstration of the many cross-currents in European art, as well as an unusual degree of narrative for Celtic art, though we are unlikely ever to fully understand its original meanings.
Discovery
The Gundestrup cauldron was discovered by peat cutters in a small peat bog called "Rævemose" (near the larger "Borremose" bog) on 28 May 1891. The Danish government paid a large reward to the finders, who subsequently quarreled bitterly amongst themselves over its division. Palaeobotanical investigations of the peat bog at the time of the discovery showed that the land had been dry when the cauldron was deposited, and the peat gradually grew over it. The manner of stacking suggested an attempt to make the cauldron inconspicuous and well-hidden. Another investigation of Rævemose was undertaken in 2002, concluding that the peat bog may have existed when the cauldron was buried.
The cauldron was found in a dismantled state with five long rectangular plates, seven short plates, one round plate (normally called the "base plate"), and two fragments of tubing stacked inside the curved base.
In addition, there is a piece of iron from a ring originally placed inside the silver tubes along the rim of the cauldron. It is assumed that an eighth short plate is missing because the circumference of the seven outer plates is smaller than the circumference of the five inner plates.
A set of careful full-size replicas has been made. One is in the National Museum of Ireland, and several are in France, including the Musée gallo-romain de Fourvière at Lyon and the Musée d'archéologie nationale at Saint-Germain-en-Laye.
Reconstruction
Since the cauldron was found in pieces, it had to be reconstructed. The traditional order of the plates was determined by Sophus Müller, the first of many to analyze the cauldron. His logic uses the positions of the traces of solder located at the rim of the bowl. In two cases, a puncture mark penetrating the inner and outer plates also helps to establish the order. In its final form, the plates are arranged in an alternation of female and male depictions, assuming the missing eighth plate depicts a female.
Not all analysts agree with Müller's ordering, however. Taylor has pointed out that, aside from the two cases of puncturing, the order cannot be determined from the solder alignments. His argument is that the plates are not directly adjacent to each other, but are separated by a 2 cm gap; thus, the plates in this order cannot be read with certainty as the true narrative, supposing one exists. Larsen, however, indicates that his study not only vindicated the order of the inner plates established by Müller, Klindt-Jensen, and Olmsted, but also showed that the order of the outer plates is established by the rivet holes, the solder alignments, and the scrape marks.
Metallurgy
The Gundestrup cauldron is composed almost entirely of silver, but there is also a substantial amount of gold for the gilding, tin for the solder and glass for the figures' eyes. According to experimental evidence, the materials for the vessel were not added at the same time, so the cauldron can be considered as the work of artisans over a span of several hundred years. The quality of the repairs to the cauldron, of which there are many, is inferior to the original craftsmanship.
Silver was not a common material in Celtic art, and certainly not on this scale. Except sometimes for small pieces of jewellery, gold or bronze were more usual for prestige metalwork. At the time that the Gundestrup cauldron was created, silver was obtained through cupellation of lead/silver ores.
From comparisons of the concentration of lead isotopes with the silver work by other cultures, it seems that the silver came from multiple ore deposits, mostly from Celtic northern France and western Germany in the pre-Roman period. Lead isotope studies also indicate that the silver for manufacturing the plates was prepared by repeatedly melting ingots and/or scrap silver. Three to six distinct batches of recycled silver may have been used in making the vessel. Specifically, the circular "base plate" may have originated as a phalera, and it is commonly thought to have been positioned in the bottom of the bowl as a late addition, soldered in to repair a hole. By an alternative theory, this phalera was not initially part of the bowl, but instead formed part of the decorations of a wooden cover.
The gold can be sorted into two groups based on purity and separated by the concentration of silver and copper. The less pure gilding, which is thicker, can be considered a later repair, as the thinner, purer inlay adheres better to the silver. The adherence of the overall gold is quite poor. The lack of mercury from the gold analysis suggests that a fire-gilding technique was not used on the Gundestrup cauldron. The gilding appears to have instead been made by mechanical means, which explains the function of closely spaced punch marks on the gilded areas.
An examination of lead isotopes similar to the one used on the silver was employed for the tin. All of the samples of tin soldering are consistent in lead-isotope composition with ingots from Cornwall in western Britain. The tin used for soldering the plates and bowl together, as well as the glass eyes, is very uniform in its high purity.
Finally, the glass inlays of the Gundestrup cauldron have been determined through the use of X-ray fluorescence radiation to be of a soda-lime type composition. The glass contained elements that can be attributed to calcareous sand and mineral soda, typical of the east coast of the Mediterranean region. The analyses also narrowed down the production time of the glass to between the second century BC and first century AD.
Flow of raw material
The workflow of the manufacturing process consisted of a few steps that required a great amount of skill. Batches of silver were melted in crucibles with the addition of copper for a subtler alloy. The melted silver was cast into flat ingots and hammered into intermediate plates.
For the relief work, the sheet-silver was annealed to allow shapes to be beaten into high repoussé; these rough shapes were then filled with pitch from the back to make them firm enough for further detailing with punches and tracers. The pitch was melted out, areas of pattern were gilded, and the eyes of the larger figures were inlaid with glass. The plates were probably worked in a flat form and later bent into curves to solder them together.
It is generally agreed that the Gundestrup cauldron was the work of multiple silversmiths. Using scanning electron microscopy, Benner Larson has identified 15 different punches used on the plates, falling into three distinct tool sets. No individual plate has marks from more than one of these groups, and this fits with previous attempts at stylistic attribution, which identify at least three different silversmiths. Multiple artisans would also explain the highly variable purity and thickness of the silver.
Origins
The silverworking techniques used in the cauldron are unknown from the Celtic world, but are consistent with the renowned Thracian sheet-silver tradition. The scenes depicted are not distinctively Thracian, but certain elements of composition, decorative motifs, and illustrated items (such as the shoelaces on the antlered figure) identify it as Thracian work.
Taylor and Bergquist have postulated that the Celtic tribe known as the Scordisci commissioned the cauldron from native Thracian silversmiths. According to classical historians, the Cimbri, a Teutonic tribe, went south from the lower Elbe region and attacked the Scordisci in 118 BC. After withstanding several defeats at the hands of the Romans, the Cimbri retreated north, possibly taking with them this cauldron to settle in Himmerland, where the vessel was found.
The art style of the Gundestrup cauldron is that utilized in Armorican coinage, as exemplified in the billon coins of the Coriosolites. This art style is unique to northwest Gaul and is largely confined to the region between the Seine and the Loire, a region in which, according to Caesar, the wealthy sea-faring Veneti played a dominant and hegemonic role. Agreeing with this area of production, determined by the art style, is the fact that the "lead isotope compositions of the [Gundestrup] cauldron plates" mostly included "the same silver as used in northern France for the Coriosolite coins".
Not only does the Gundestrup cauldron enlighten us about this coin-driven art style, where the smiths of the larger metalwork were also the mint-masters producing the coins, but the cauldron also portrays cultural items, such as swords, armor, and shields, found and produced in this same cultural area, confirming the agreement between art style and metal analysis. If, as has been suggested, the Veneti also produced the silver phalerae found on the Isle of Sark, as well as the Helden phalera, then there are a number of silver items of the type exemplified by the Gundestrup cauldron originating in northwest France, dating to just before the Roman conquest.
Nielsen believes that the question of origin is the wrong one to ask and can produce misleading results. Because of the widespread migration of numerous ethnic groups like the Celts and Teutonic peoples, and events like Roman expansion and subsequent Romanization, it is highly unlikely that only one ethnic group was responsible for the development of the Gundestrup cauldron. Instead, the make and art of the cauldron can be thought of as the product of a fusion of cultures, each inspiring and expanding upon one another. In the end, based on accelerator datings from beeswax found on the back of the plates, Nielsen concludes that the vessel was created within the Roman Iron Age. However, an addendum to Nielsen's article indicates that results from the Leibniz Lab on the same beeswax dated some 400 years earlier than reported in his article.
According to Ronald Hutton, because the cauldron's source metals have been traced to the Black Sea region and it depicts elephants, the cauldron should no longer be considered [strictly] Celtic.
Iconography
Base plate
The decorated medallion on the circular base plate depicts a bull. Above the back of the bull is a female figure wielding a sword; three dogs are also portrayed, one over the bull's head and another under its hooves. Presumably all of these figures are in combat; the third dog, located beneath the bull and near its tail, seems to be dead, and is only faintly shown in engraving, and the bull may have been brought down. Below the bull is scrolling ivy that draws from classical Greco-Roman art. The horns of the bull are missing, but there is a hole right through the head where they were originally fitted; perhaps they were gold. The head of the bull rises entirely clear of the plate, and the medallion is considered the most accomplished part of the cauldron in technical and artistic terms.
Exterior plates
Each of the seven exterior plates centrally depicts a bust. Plates a, b, c, and d show bearded male figures, and the remaining three are female.
Plate a: A bearded man holds in each hand a much smaller figure by the arm. Each of those two reaches upward toward a small boar. Under the feet of the figures (on the shoulders of the larger man) are a dog on the left side and a winged horse on the right side.
Plate b: A bearded male figure holds a sea-horse or dragon in each hand.
Plate c: A bearded male figure raises his empty fists. On his right shoulder is a man in a "boxing" position, and on his left shoulder is a leaping figure with a small horseman underneath.
Plate d: A bearded male figure holds a stag by the hindquarters in each hand.
Plate e: A female figure is flanked by two smaller male busts.
Plate f: A female figure holds a bird in her upraised right hand. Her left arm is horizontal, supporting a man and a dog lying on its back. Two birds of prey are situated on either side of her head. Her hair is being plaited by a small woman on the right.
Plate g: A female figure has her arms crossed. On her right shoulder is a scene of a man fighting a lion; on her left shoulder is a leaping figure similar to the one on plate c.
Interior plates
Plate A: A beardless male figure wearing a cap with deer antlers is seated in the center of the plate; he is often identified as Cernunnos. In his right hand, he holds a torc, and with his left hand he grips a horned serpent a little below the head. To the left is a stag with antlers that are identical in size and shape to the Cernunnos figure's cap. Surrounding the scene are other canine, feline, and bovine animals, some but not all facing the male figure, as well as a human riding a dolphin. Between the antlers of the god is an unknown ivy-like motif, either an ivy vine or possibly a tree branch; it is thought most likely to be just a standard background decoration.
Plate B: A large bust of a torc-wearing female is flanked by two six-spoked wheels and what seem to be two elephants and two griffins. A feline or hound is underneath the bust. In northwest Gaulish coinage from 150 to 50 BC, such wheels often indicate a chariot, so the scene could be seen as a goddess in an elephant biga.
Plate C: A large bust of a bearded male figure holds on to a broken wheel at the centre. A smaller, leaping figure with a horned helmet also holds the rim of the wheel. Under the leaping figure is a horned serpent. The group is surrounded by three griffins facing left below and, above, two strange animals which look like hyenas, facing right. The wheel's spokes are rendered asymmetrically, but judging from the lower half, the wheel may have had twelve spokes.
Plate D: A bull-slaying scene, with the same composition repeated three times across the plate; this is the only place where such repetition appears on the cauldron. Three large bulls are arranged in a row, facing right, and each of them is attacked by a man with a sword. A feline and a dog, both running to the left, appear respectively over and below each bull. Note that in the Stowe version of the Táin, Medb's men run forward to kill the Donn bull after his fight with Medb's "white-horned" bull, whom he kills.
Plate E (lower tier): A line of warriors bearing spears and shields marches to the left; bringing up the rear is a warrior with no shield, bearing a sword and wearing a boar-crested helmet which resembles helmets from later Germanic cultures. Behind him are three carnyx players. In front of this group a dog leaps up, perhaps holding them back. Behind the dog, at the left side of the scene, a figure over twice the size of the others holds a man upside down, apparently with ease, and apparently is about to immerse him in a barrel or cauldron.
Plate E (upper tier): Warriors on horseback with crested helmets and spears ride away to the right; at the right, fitted in above the tops of the carnyxes, is a horned serpent, who is perhaps leading them. The two rows are below and above what appears to be a tree, still in leaf, lying sideways. This is now most often interpreted as a scene where fallen warriors are dipped into a cauldron to be reborn into their next life, or afterlife. This can be paralleled in later Welsh literature.
Interpretation and parallels
For many years, some scholars have interpreted the cauldron's images in terms of the Celtic pantheon, and Celtic mythology as it is presented in much later literature in Celtic languages from the British Isles. Others regard the latter interpretations with great suspicion. Much less controversially, there are clear parallels between details of the figures and Iron Age Celtic artifacts excavated by archaeology.
Other details of the iconography clearly derive from the art of the ancient Near East, and there are intriguing parallels with ancient India and later Hindu deities and their stories. Scholars are mostly content to regard the former as motifs borrowed purely for their visual appeal, without carrying over anything much of their original meaning, but despite the distance some have attempted to relate the latter to wider traditions remaining from Proto-Indo-European religion.
Celtic archaeology
Among the most specific details that are clearly Celtic are the group of carnyx players. The carnyx war horn was known from Roman descriptions of the Celts in battle and Trajan's Column, and a few pieces are known from archaeology, their number greatly increased by finds at Tintignac in France in 2004.
"Their trumpets again are of a peculiar barbarian kind; they blow into them and produce a harsh sound which suits the tumult of war".
Another detail that is easily matched to archaeology is the torc worn by several figures, clearly of the "buffer" type, a fairly common Celtic artefact found in Western Europe, most often France, from the period the cauldron is thought to have been made.
Other details with more tentative Celtic links are the long swords carried by some figures, and the horned and antlered helmets or head-dresses and the boar crest worn on their helmet by some warriors. These can be related to Celtic artefacts such as a helmet with a raptor crest from Romania, the Waterloo Helmet, Torrs Pony-cap and Horns and various animal figures including boars, of uncertain function. The shield bosses, spurs and horse harness also relate to Celtic examples.
The antlered figure in plate A has been commonly identified as Cernunnos, who is named (the only source for the name) on the 1st-century Gallo-Roman Pillar of the Boatmen, where he is shown as an antlered figure with torcs hanging from his antlers. Possibly the lost portion below his bust showed him seated cross-legged as the figure on the cauldron is. Otherwise there is evidence of a horned god from several cultures.
The figure holding the broken wheel in plate C is more tentatively thought to be Taranis, the solar or thunder "wheel-god" named by Lucian and represented in a number of Iron Age images; there are also many wheels that seem to have been amulets.
Near East and Asia
The many animals depicted on the cauldron include elephants, a dolphin, leopard-like felines, and various fantastic animals, as well as animals that are widespread across Eurasia, such as snakes, cattle, deer, boars and birds. Celtic art often includes animals, but not often in fantastic forms with wings and aspects of different animals combined. There are exceptions to this, some when motifs are clearly borrowed, as the boy riding a dolphin is borrowed from Greek art, and others that are more native, like the ram-headed horned snake who appears three times on the cauldron. The art of Thrace often shows animals, most often powerful and fierce ones, many of which are also very common in the ancient Near East, or the Scythian art of the Eurasian steppe, whose mobile owners provided a route for the very rapid transmission of motifs and objects between the civilizations of Asia and Europe.
In particular, the two figures standing in profile flanking the large head on exterior plate F, each with a bird with outstretched wings just above their head, clearly resemble a common motif in ancient Assyrian and Persian art, down to the long garments they wear. The figure is usually the ruler, and the wings belong to a symbolic representation of a deity protecting him. Other plates show griffins borrowed from Ancient Greek art or that of the Near East. On several of the exterior plates the large heads, probably of deities, in the centre of the exterior panels, have small arms and hands, either each grasping an animal or human in a version of the common Master of Animals motif, or held up empty at the side of the head in a way suggesting inspiration from this motif.
Celtic mythology
Apart from Cernunnos and Taranis, discussed above, there is no consensus regarding the other figures, and many scholars reject attempts to tie them in to figures known from much later and geographically distant sources. Some Celticists have explained the elephants depicted on plate B as a reference to Hannibal's crossing of the Alps.
Because of the double-headed wolfish monster attacking the two small figures of fallen men on plate b, parallels can be drawn to the Welsh character Manawydan or the Irish Manannán, a god of the sea and the Otherworld. Another possibility is the Gaulish version of Apollo, who was not only a warrior, but one associated with springs and healing besides.
Olmsted relates the scenes of the cauldron to those of the Táin Bó Cuailnge, where the antlered figure is Cú Chulainn, the bull of the base plate is Donn Cuailnge, and the female and two males of plate e are Medb, Ailill, and Fergus. Olmsted also toys with the idea that the female figure flanked by two birds on plate f could be Medb with her pets or Morrígan, the Irish war goddess who often changes into a carrion bird. Olmsted sees Cernunnos as Gaulish version of Irish Cu Chulainn. As Olmsted indicates, the scene on the upper right of plate A, a lion, a boy on a dolphin, and a bull, can be interpreted after the origin of the bulls of the Irish Táin, who take on various matched animal forms, fighting each other in each form, as indicated in the two lions fighting on the lower right of plate A.
Plate B could be interpreted after a Gaulish version of the beginning of the Irish Táin, where Medb sets out to get the Donn bull after making a circuit around her army in her chariot to bring luck to the Táin. Olmsted interprets the scene on plate C as a Gaulish version of the Irish Táin incidents where Cu Chulainn kicks in the Morrigan's ribs when she comes at him as an eel and then confronts Fergus with his broken chariot wheel.
Olmsted interprets the scene with warriors on the lower part of Plate E as a Gaulish version of the "Aided Fraich" episode of the Táin, where Fraich and his men leap over the fallen tree, and then Fraech wrestles with his father Cu Chulainn and is drowned by him, while his magic horn blowers play "the music of sleeping" against Cu Chulainn. In the "Aided Fraich" episode, Fraich's body is then taken into the underworld by weeping banchuire to be healed by his aunt and wife Morrigan. This incident is depicted on outer plate f, which is adjacent and opposite to plate E.
Both Olmsted and Taylor agree that the female of plate f might be Rhiannon of the Mabinogion. Rhiannon is famous for her birds, whose songs could "awaken the dead and lull the living to sleep". In this role, Rhiannon could be considered the Goddess of the Otherworld.
Taylor presents a more pancultural view of the cauldron's images; he concludes that the deities and scenes portrayed on the cauldron are not specific to one culture, but many. He compares Rhiannon, whom he thinks is the figure of plate f, with Hariti, an ogress of Bactrian mythology. In addition, he points to the similarity between the female figure of plate B and the Hindu goddess Lakshmi, whose depictions are often accompanied by elephants. Wheel gods are also cross-cultural with deities like Gaulish Taranis and Hindu Vishnu.
See also
Ancient Celtic religion
Migration Period
Pashupati seal
Lyon cup
Gutasaga
References
Sources
Further reading
2nd-century BC artifacts
1st-century BC artifacts
1891 archaeological discoveries
Archaeological discoveries in Denmark
Germanic archaeological artifacts
Celtic art
Pre-Roman Iron Age
Thracian archaeological artifacts
Silver objects
Treasure troves in Denmark
Dogs in art
Deer in art
Cattle in art
Cauldrons
Cernunnos
Magic items
Ancient art in metal | Gundestrup cauldron | [
"Physics"
] | 5,941 | [
"Magic items",
"Physical objects",
"Matter"
] |
1,050,741 | https://en.wikipedia.org/wiki/Pythagorean%20trigonometric%20identity | The Pythagorean trigonometric identity, also called simply the Pythagorean identity, is an identity expressing the Pythagorean theorem in terms of trigonometric functions. Along with the sum-of-angles formulae, it is one of the basic relations between the sine and cosine functions.
The identity is

    sin²θ + cos²θ = 1.

As usual, sin²θ means (sin θ)².
Proofs and their relationships to the Pythagorean theorem
Proof based on right-angle triangles
Any similar triangles have the property that if we select the same angle in all of them, the ratio of the two sides defining the angle is the same regardless of which similar triangle is selected, regardless of its actual size: the ratios depend upon the three angles, not the lengths of the sides. Thus for either of the similar right triangles in the figure, the ratio of its horizontal side to its hypotenuse is the same, namely cos θ.
The elementary definitions of the sine and cosine functions in terms of the sides of a right triangle (with opposite side b, adjacent side a, and hypotenuse c) are:

    sin θ = b/c,    cos θ = a/c.

The Pythagorean identity follows by squaring both definitions above, and adding; the left-hand side of the identity then becomes

    sin²θ + cos²θ = (a² + b²)/c²,

which by the Pythagorean theorem (a² + b² = c²) is equal to 1. This definition is valid for all angles, due to defining x = cos θ and y = sin θ on the unit circle (and thus x = c·cos θ and y = c·sin θ for a circle of radius c), reflecting our triangle in the y-axis, and setting a = x and b = y.
Alternatively, the identities found at Trigonometric symmetry, shifts, and periodicity may be employed. By the periodicity identities we can say that if the formula is true for −π < θ ≤ π, then it is true for all real θ. Next we prove the identity in the range π/2 < θ ≤ π. To do this we let t = θ − π/2; t will now be in the range 0 < t ≤ π/2. We can then make use of squared versions of some basic shift identities (squaring conveniently removes the minus signs):

    sin²θ = sin²(t + π/2) = cos²t,
    cos²θ = cos²(t + π/2) = (−sin t)² = sin²t.

Finally, it remains to prove the formula for −π < θ < 0; this can be done by squaring the symmetry identities to get

    sin²θ = sin²(−θ)  and  cos²θ = cos²(−θ).
Related identities
The identities

    1 + tan²θ = sec²θ  and  1 + cot²θ = csc²θ

are also called Pythagorean trigonometric identities. If one leg of a right triangle has length 1, then the tangent of the angle adjacent to that leg is the length of the other leg, and the secant of the angle is the length of the hypotenuse.
In this way, this trigonometric identity involving the tangent and the secant follows from the Pythagorean theorem. The angle opposite the leg of length 1 (this angle can be labeled φ = π/2 − θ) has cotangent equal to the length of the other leg, and cosecant equal to the length of the hypotenuse. In that way, this trigonometric identity involving the cotangent and the cosecant also follows from the Pythagorean theorem.
The following table gives the identities with the factor or divisor that relates them to the main identity:

    Original identity        Divisor    Resulting identity
    sin²θ + cos²θ = 1        cos²θ      tan²θ + 1 = sec²θ
    sin²θ + cos²θ = 1        sin²θ      1 + cot²θ = csc²θ
Proof using the unit circle
The unit circle centered at the origin in the Euclidean plane is defined by the equation:

    x² + y² = 1.

Given an angle θ, there is a unique point P on the unit circle at an anticlockwise angle of θ from the x-axis, and the x- and y-coordinates of P are:

    x = cos θ  and  y = sin θ.

Consequently, from the equation for the unit circle,

    cos²θ + sin²θ = 1,

the Pythagorean identity.
In the figure, the point P has a negative x-coordinate, and is appropriately given by x = cos θ, which is a negative number: cos θ = −cos(π − θ). Point P has a positive y-coordinate, and sin θ = sin(π − θ) > 0. As θ increases from zero to the full circle θ = 2π, the sine and cosine change signs in the various quadrants to keep x and y with the correct signs. The figure shows how the sign of the sine function varies as the angle changes quadrant.
Because the - and -axes are perpendicular, this Pythagorean identity is equivalent to the Pythagorean theorem for triangles with hypotenuse of length 1 (which is in turn equivalent to the full Pythagorean theorem by applying a similar-triangles argument). See Unit circle for a short explanation.
Proof using power series
The trigonometric functions may also be defined using power series, namely (for an angle measured in radians):

    sin θ = Σ_{n=0}^∞ (−1)ⁿ θ^(2n+1) / (2n+1)!,
    cos θ = Σ_{n=0}^∞ (−1)ⁿ θ^(2n) / (2n)!.

Using the multiplication formula for power series at Multiplication and division of power series (suitably modified to account for the form of the series here) we obtain

    sin²θ = Σ_{n=1}^∞ (−1)^(n−1) [ Σ_{i=0}^{n−1} C(2n, 2i+1) ] θ^(2n) / (2n)!,
    cos²θ = 1 + Σ_{n=1}^∞ (−1)ⁿ [ Σ_{i=0}^{n} C(2n, 2i) ] θ^(2n) / (2n)!,

where C(m, k) denotes the binomial coefficient. In the expression for sin²θ, n must be at least 1, while in the expression for cos²θ, the constant term is equal to 1. The remaining terms of their sum are (with common factors removed)

    Σ_{i=0}^{n−1} C(2n, 2i+1) − Σ_{i=0}^{n} C(2n, 2i) = −Σ_{k=0}^{2n} (−1)ᵏ C(2n, k) = −(1 − 1)^(2n) = 0

by the binomial theorem. Consequently,

    sin²θ + cos²θ = 1,

which is the Pythagorean trigonometric identity.
When the trigonometric functions are defined in this way, the identity in combination with the Pythagorean theorem shows that these power series parameterize the unit circle, which we used in the previous section. This definition constructs the sine and cosine functions in a rigorous fashion and proves that they are differentiable, so that in fact it subsumes the previous two.
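A quick numeric check, not part of the proof: truncated versions of the two power series above satisfy the identity up to truncation error.

```python
# Illustrative check that truncated sine/cosine series satisfy the identity.
import math

def sin_series(x, terms=20):
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

def cos_series(x, terms=20):
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(terms))

for x in (0.0, 1.0, 2.5, -3.1):
    print(x, sin_series(x)**2 + cos_series(x)**2)   # all ~1.0
```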
Proof using the differential equation
Sine and cosine can be defined as the two solutions to the differential equation

$$y'' + y = 0,$$

satisfying respectively $y(0) = 0$, $y'(0) = 1$ and $y(0) = 1$, $y'(0) = 0$. It follows from the theory of ordinary differential equations that the first solution, sine, has the second, cosine, as its derivative, and it follows from this that the derivative of cosine is the negative of the sine. The identity is equivalent to the assertion that the function

$$z = \sin^2 x + \cos^2 x$$
is constant and equal to 1. Differentiating using the chain rule gives

$$\frac{dz}{dx} = 2\sin x \cos x + 2\cos x \,(-\sin x) = 0,$$
so $z$ is constant. A calculation confirms that $z(0) = 1$; since $z$ is constant, $z = 1$ for all $x$, so the Pythagorean identity is established.
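The constancy argument can also be checked numerically. The sketch below (illustrative only; the step size and interval are arbitrary choices) integrates $y'' + y = 0$ with the sine initial conditions using a classical fourth-order Runge-Kutta step and confirms that $z = y^2 + (y')^2$ stays at 1 along the solution.

```python
# Integrate y'' + y = 0 as the first-order system (y, v) with y(0) = 0,
# v(0) = 1 (the sine solution), tracking z = y^2 + v^2, which the proof
# above shows is constant and equal to 1.
def rk4_step(y, v, h):
    f = lambda y, v: (v, -y)          # (y', v') = (v, -y)
    k1y, k1v = f(y, v)
    k2y, k2v = f(y + h / 2 * k1y, v + h / 2 * k1v)
    k3y, k3v = f(y + h / 2 * k2y, v + h / 2 * k2v)
    k4y, k4v = f(y + h * k3y, v + h * k3v)
    return (y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

y, v, h = 0.0, 1.0, 0.001
for _ in range(10_000):               # integrate out to x = 10
    y, v = rk4_step(y, v, h)
assert abs(y * y + v * v - 1.0) < 1e-9   # z = sin^2 x + cos^2 x stays 1
```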
A similar proof can be completed using power series as above to establish that the sine has as its derivative the cosine, and the cosine has as its derivative the negative sine. In fact, the definitions by ordinary differential equation and by power series lead to similar derivations of most identities.
This proof of the identity has no direct connection with Euclid's demonstration of the Pythagorean theorem.
Proof using Euler's formula
Using Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ and factoring $\cos^2\theta + \sin^2\theta$ as the complex difference of two squares,

$$\cos^2\theta + \sin^2\theta = (\cos\theta + i\sin\theta)(\cos\theta - i\sin\theta) = e^{i\theta} e^{-i\theta} = e^0 = 1.$$
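A short numerical check of this factorization (illustrative, not part of the article) using Python's complex-number support:

```python
import cmath

# (cos t + i sin t)(cos t - i sin t) = e^{it} e^{-it} = 1 at sample angles;
# for real t, the conjugate of e^{it} is e^{-it}.
for t in (0.3, 1.0, 2.5, -4.2):
    z = cmath.exp(1j * t)                  # cos t + i sin t, by Euler's formula
    assert abs(z * z.conjugate() - 1) < 1e-12
    assert abs(abs(z) - 1) < 1e-12         # e^{it} lies on the unit circle
```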
See also
Pythagorean theorem
List of trigonometric identities
Unit circle
Power series
Differential equation
Notes
Mathematical identities
Articles containing proofs
Trigonometry
Identity | Pythagorean trigonometric identity | [
"Mathematics"
] | 1,298 | [
"Mathematical theorems",
"Planes (geometry)",
"Euclidean plane geometry",
"Mathematical objects",
"Equations",
"Pythagorean theorem",
"Articles containing proofs",
"Mathematical identities",
"Mathematical problems",
"Algebra"
] |
1,050,784 | https://en.wikipedia.org/wiki/Diabase | Diabase (), also called dolerite () or microgabbro,
is a mafic, holocrystalline, subvolcanic rock equivalent to volcanic basalt or plutonic gabbro. Diabase dikes and sills are typically shallow intrusive bodies and often exhibit fine-grained to aphanitic chilled margins which may contain tachylite (dark mafic glass).
Diabase is the preferred name in North America, while dolerite is the preferred name in the rest of the English-speaking world, where sometimes the name diabase refers to altered dolerites and basalts. Some geologists prefer to avoid confusion by using the name microgabbro.
The name diabase comes from the French diabase, and ultimately from the Greek διάβασις (diábasis, meaning "act of crossing over, transition"), whereas the name dolerite comes from the French dolérite, from the Greek doleros ("deceitful, deceptive"), because it was easily confused with diorite.
Petrography
Diabase normally has a fine but visible texture of euhedral lath-shaped plagioclase crystals (62%) set in a finer matrix of clinopyroxene, typically augite (20–29%), with minor olivine (3% up to 12% in olivine diabase), magnetite (2%), and ilmenite (2%). Accessory and alteration minerals include hornblende, biotite, apatite, pyrrhotite, chalcopyrite, serpentine, chlorite, and calcite. The texture is termed diabasic and is typical of diabases. This diabasic texture is also termed interstitial. The feldspar is high in anorthite (as opposed to albite), the calcium endmember of the plagioclase anorthite-albite solid solution series, most commonly labradorite.
Locations
Diabase is usually found in smaller, relatively shallow intrusive bodies such as dikes and sills. Diabase dikes occur in regions of crustal extension and often occur in dike swarms of hundreds of individual dikes or sills radiating from a single volcanic center.
The Palisades Sill, which makes up the New Jersey Palisades on the Hudson River near New York City, New York, United States, is an example of a diabase sill. The dike complexes of the British Tertiary Volcanic Province include Skye, Rum, Mull, and Arran of western Scotland, the Slieve Gullion region of Ireland, and dolerite dike swarms extending across northern England towards the Midlands, for example Rowley Rag. Parts of the Deccan Traps of India, formed at the end of the Cretaceous, also include dolerite. It is also abundant in large parts of Curaçao, an island off the coast of Venezuela. Another example of diabase dikes has been recognized in the Mongo area within the Guéra Massif of Chad in Central Africa.
In the Death Valley region of California, Precambrian diabase intrusions metamorphosed pre-existing dolomite into economically important talc deposits.
In the Thuringian-Franconian-Vogtland Slate Mountains of central Germany the diabase is entirely of Devonian age. They form typical domed landscapes, especially in the Vogtland. One geotourist attraction is the Steinerne Rose near Saalburg, a natural monument, whose present shape is due to the typical weathering of lava pillows.
Gondwanaland and Australia
A geological event known as the Oenpelli Dolerite intrusive event occurred about 1,720 million years ago in western Arnhem Land, in the Northern Territory, forming curved ridges of Oenpelli Dolerite stretching over . Further west, on the northern coast of Arnhem Land, a "subsurface radial dyke swarm" known as Galiwinku Dolerite, taking its name from the Aboriginal name for Elcho Island, occurs on the Gove Peninsula and continues under the Arafura Sea and on Wessel Islands, including Elcho and Milingimbi Islands.
In the Yilgarn Craton of Western Australia, the Norseman-Wiluna greenstone belt, which hosts long Proterozoic dolerite dikes, is associated with the non-alluvial gold mining area between Norseman and Kalgoorlie, which includes the largest gold mine in Australia, the Super Pit gold mine. West of the Norseman–Wiluna Belt is the Yalgoo-Singleton greenstone belt, where complex dolerite dike swarms obscure the volcaniclastic sediments. Large dolerite sills such as the Golden Mile Dolerite can exhibit coarse-grained texture, and show a large diversity in petrography and geochemistry across the width of the sill.
The vast areas of mafic volcanism/plutonism associated with the Jurassic breakup of the Gondwana supercontinent in the Southern Hemisphere include many large diabase/dolerite sills and dike swarms. These include the Karoo dolerites of South Africa, the Ferrar Dolerites of Antarctica, and the largest of these, the most extensive of all dolerite formations worldwide, are found in Tasmania. Here, the volume of magma which intruded into a thin veneer of Permian and Triassic rocks from multiple feeder sites, over a period of perhaps a million years, may have exceeded 40,000 cubic kilometres. In Tasmania, dolerite dominates much of the landscape, particularly alpine areas, with many examples of columnar jointing.
Early Jurassic activity resulted in the formation of dolerite intrusion in Prospect in Sydney, and quarrying of basalt for roadstone and other building materials has been an important activity there for over 180 years.
Use
Diabase is crushed and used as a construction aggregate for road beds, buildings, railroad beds (rail ballast), and within dams and levees.
Diabase can be cut for use as headstones and memorials; the base of the Marine Corps War Memorial is made of black diabase "granite" (a commercial term, not actual granite). Diabase can also be cut for use as ornamental stone for countertops, facing stone on buildings, and paving. A form of dolerite, known as bluestone, is one of the materials used in the construction of Stonehenge.
Diabase also serves as local building stone. In Tasmania, where it is one of the most common rocks found, it is used for building, for landscaping and to erect dry-stone farm walls. In northern County Down, Northern Ireland, "dolerite" is used in buildings such as Mount Stewart together with Scrabo Sandstone as both are quarried at Scrabo Hill.
Balls of diabase were used by the ancient Egyptians as pounding tools for working softer (but still hard) stones.
See also
List of rock types
References
External links
Collection of dikes in the Fish River Canyon, Namibia
Aphanitic rocks
Ophitic rocks
Mafic rocks
Subvolcanic rocks | Diabase | [
"Chemistry"
] | 1,486 | [
"Mafic rocks",
"Igneous rocks by composition"
] |
1,050,944 | https://en.wikipedia.org/wiki/Online%20game | An online game is a video game that is either partially or primarily played through the Internet or any other computer network available. Online games are ubiquitous on modern gaming platforms, including PCs, consoles and mobile devices, and span many genres, including first-person shooters, strategy games, and massively multiplayer online role-playing games (MMORPG). In 2019, revenue in the online games segment reached $16.9 billion, with $4.2 billion generated by China and $3.5 billion in the United States. Since the 2010s, a common trend among online games has been to operate them as games as a service, using monetization schemes such as loot boxes and battle passes as purchasable items atop freely-offered games. Unlike purchased retail games, online games have the problem of not being permanently playable, as they require special servers in order to function.
The design of online games can range from simple text-based environments to the incorporation of complex graphics and virtual worlds. The existence of online components within a game can range from being minor features, such as an online leaderboard, to being part of core gameplay, such as directly playing against other players. Many online games create their own online communities, while other games, especially social games, integrate the players' existing real-life communities. Some online games can receive a massive influx of popularity due to many well-known Twitch streamers and YouTubers playing them.
Online gaming has drastically increased the scope and size of video game culture. Online games have attracted players of a variety of ages, nationalities, and occupations. The online game content is now being studied in the scientific field, especially gamers' interactions within virtual societies in relation to the behavior and social phenomena of everyday life. As in other cultures, the community has developed a gamut of slang words or phrases that can be used for communication in or outside of games. Due to their growing online nature, modern video game slang overlaps heavily with internet slang, as well as leetspeak, with many words such as "pwn" and "noob". Another term that was popularized by the video game community is the abbreviation "AFK" to refer to people who are not at the computer or paying attention. Other common abbreviations include "GL HF" which stands for "good luck, have fun," which is often said at the beginning of a match to show good sportsmanship. Likewise, at the end of a game, "GG" or "GG WP" may be said to congratulate the opponent, win or lose, on a "good game, well played". Many video games have also inspired internet memes and achieved a very large following online.
The culture of online gaming sometimes faces criticism for an environment that can promote cyberbullying, violence, and xenophobia. Some are also concerned about gaming addiction or social stigma. However, it has been argued that, since the players of an online game are strangers to each other and have limited communication, the individual player's experience in an online game is not necessarily different from playing with artificial intelligence players.
History
The history of online games dates back to the early days of packet-based computer networking in the 1970s. An early example of online games is MUDs, including the first, MUD1, which was created in 1978 and originally confined to an internal network before becoming connected to ARPANET in 1980. Commercial games followed in the next decade, with Islands of Kesmai, the first commercial online role-playing game, debuting in 1984, as well as more graphical games, such as the MSX LINKS action games in 1986, the flight simulator Air Warrior in 1987, and the Famicom Modem's online Go game in 1987.
The rapid availability of the Internet in the 1990s led to an expansion of online games, with notable titles including Nexus: The Kingdom of the Winds (1996), Quakeworld (1996), Ultima Online (1997), Lineage (1998), StarCraft (1998), Counter-Strike (1999) and EverQuest (1999). Video game consoles also began to receive online networking features, such as the Famicom Modem (1987), Sega Meganet (1990), Satellaview (1995), SegaNet (2000), PlayStation 2 (2000) and Xbox (2001). Following improvements in connection speeds, more recent developments include the popularization of new genres, such as social games, and new platforms, such as mobile games.
Entering the 2000s, the cost of technology, servers, and Internet access had dropped so far that fast Internet was commonplace, which led to previously unknown genres like massively multiplayer online games (MMOs) becoming well-known. For example, World of Warcraft (2004) dominated much of the decade. Several other MMOs attempted to follow in Warcraft's footsteps, such as Star Wars Galaxies, City of Heroes, Wildstar, Warhammer Online, Guild Wars 2, and Star Wars: The Old Republic, but failed to make a significant impact on Warcraft's market share. Over time, the MMORPG community has developed a sub-culture with its own slang and metaphors, as well as an unwritten list of social rules and taboos.
Separately, a new type of online game came to popularity alongside World of Warcraft: Defense of the Ancients (2003), which introduced the multiplayer online battle arena (MOBA) format. DotA, a community-created mod based on Warcraft III, gained in popularity as interest in World of Warcraft waned, but since the format was tied to the Warcraft property, others began to develop their own MOBAs, including Heroes of Newerth (2009), League of Legends (2010), and Dota 2 (2013). Blizzard Entertainment, the owner of the Warcraft property, released its own take on the MOBA genre with Heroes of the Storm (2015), emphasizing numerous original heroes from Warcraft III and other Blizzard franchises. By the early 2010s, the genre had become a big part of the esports category.
During the last half of the 2010s, hero shooter, a variation of shooter games inspired by multiplayer online battle arenas and older class-based shooters, had a substantial rise in popularity with the release of Battleborn and Overwatch in 2016. The genre continued to grow with games such as Paladins (2018) and Valorant (2020).
A battle royale game format became widely popular with the release of PlayerUnknown's Battlegrounds (2017), Fortnite Battle Royale (2017), and Apex Legends (2019). The popularity of the genre continued in the 2020s with the release of the Call of Duty: Warzone (2020). Each game has received tens of millions of players within months of its releases.
Demographics
The assumption that online games in general are populated mostly by males had remained somewhat accurate for years, but recent statistics have begun to diminish the myth of male domination in gaming culture. Although male gamers still outnumber female gamers worldwide (52% to 48%), women accounted for more than half of the players of certain games. As of 2019, the average gamer is 33 years old.
The report Online Game Market Forecasts estimates worldwide revenue from online games to reach $35 billion by 2017, up from $19 billion in 2011.
Platforms
Console gaming
Xbox Live was launched in November 2002. Initially the console had only used a feature called system link, in which players could connect two consoles using an Ethernet cable, or multiple consoles through a router. With Xbox Live on the original Xbox, Microsoft enabled shared play over the Internet. A similar feature exists on the PlayStation 3 in the form of the PlayStation Network, and the Wii also supports a limited amount of online gaming. Nintendo also had a network, dubbed "Nintendo Network", that fully supported online gaming with the Wii U and Nintendo 3DS. With the launch of the Nintendo Switch, Nintendo launched the Nintendo Switch Online service to replace the older Nintendo Network.
Browser games
As the World Wide Web developed and browsers became more sophisticated, people started creating browser games that used a web browser as a client. Simple single player games were made that could be played using a web browser (most commonly made with web technologies like HTML, JavaScript, ASP, PHP and MySQL).
The development of web-based graphics technologies such as Flash and Java allowed browser games to become more complex. These games, also known by their related technology as "Flash games" or "Java games", became increasingly popular. Games ranged from simple concepts to large-scale titles, some of which were later released on consoles. Many Java or Flash games were shared on various websites, bringing them to wide audiences. Browser-based pet games are popular among the younger generation of online gamers. These games range from gigantic games with millions of users, such as Neopets, to smaller and more community-based pet games.
More recent browser-based games use web technologies like Ajax to make more complicated multiplayer interactions possible and WebGL to generate hardware-accelerated 3D graphics without the need for plugins.
Types of interactions
Player versus environment (PvE)
PvE is a term used in online games, particularly MMORPGs and other role-playing video games, to refer to fighting computer-controlled opponents.
Player versus player (PvP)
PvP is a term broadly used to describe any game, or aspect of a game, where players compete against each other rather than against computer-controlled opponents.
Online games
First-person shooter game (FPS)
During the 1990s, online games started to move from a wide variety of LAN protocols (such as IPX) onto the Internet using the TCP/IP protocol. Doom popularized the concept of a deathmatch, where multiple players battle each other head-to-head, as a new form of online game. Since Doom, many first-person shooter games contain online components to allow deathmatch or arena-style play, and the genre has become increasingly widespread around the world. As games became more realistic and competitive, an esports community was born. Games like Counter-Strike, Halo, Call of Duty, Quake Live and Unreal Tournament are popular in these tournaments, which offer winnings ranging from money to hardware.
Expansion of hero shooters, a sub-genre of shooter games, happened in 2016 when several developers released or announced their hero shooter multiplayer online game. Hero shooters have been considered to have strong potential as an esport, as a large degree of skill and coordination arises from the importance of teamwork. Some notable examples include Battleborn, Overwatch, Paladins and Valorant.
Real-time strategy game (RTS)
Early real-time strategy games often allowed multiplayer play over a modem or local network. As the Internet started to grow during the 1990s, software was developed that would allow players to tunnel the LAN protocols used by the games over the Internet. By the late 1990s, most RTS games had native Internet support, allowing players from all over the globe to play with each other. Popular RTS games with online communities have included Age of Empires, Sins of a Solar Empire, StarCraft and Warhammer 40,000: Dawn of War.
Massively multiplayer online game (MMO)
Massively multiplayer online games were made possible with the growth of broadband Internet access in many developed countries, using the Internet to allow hundreds of thousands of players to play the same game together. Many different styles of massively multiplayer games are available, such as:
MMORPG (Massively multiplayer online role-playing game)
MMORTS (Massively multiplayer online real-time strategy)
MMOFPS (Massively multiplayer online first-person shooter)
MMOSG (Massively multiplayer online social game)
Multiplayer online battle arena game (MOBA)
A specific subgenre of strategy video games referred to as multiplayer online battle arena (MOBA) gained popularity in the 2010s as a form of electronic sports, encompassing games such as the Defense of the Ancients mod for Warcraft III, League of Legends, Dota 2, Smite, and Heroes of the Storm. Major esports professional tournaments are held in venues that can hold tens of thousands of spectators and are streamed online to millions more. A strong fanbase has opened up the opportunity for sponsorship and advertising, eventually leading the genre to become a global cultural phenomenon.
Battle Royale games
A battle royale game is a genre that blends the survival, exploration and scavenging elements of a survival game with last-man-standing gameplay. Dozens to hundreds of players are involved in each match, with the winner being the last player or team alive. Some notable examples include PlayerUnknown's Battlegrounds, Fortnite Battle Royale, Apex Legends, and Call of Duty: Warzone, each having received tens of millions of players within months of their releases. The genre is designed exclusively for multiplayer gameplay over the Internet.
MUD
MUD is a class of multi-user real-time virtual worlds, usually but not exclusively text-based, with a history extending back to the creation of MUD1 by Richard Bartle in 1978. MUDs were the direct predecessors of MMORPGs.
Other notable games
A social deduction game is a multiplayer online game in which players attempt to uncover each other's hidden role or team allegiance using logic and deductive reasoning, while other players can bluff to keep players from suspecting them. A notable example of the social deduction video game is Among Us, which received a massive influx of popularity in 2020 due to many well-known Twitch streamers and YouTubers playing it. Among Us has also inspired internet memes and achieved a very large following online.
Online game governance
Online gamers must agree to an end-user license agreement (EULA) when they first install the game application or an update. The EULA is a legal contract between the producer or distributor and the end-user of an application or software, intended to prevent the program from being copied, redistributed or hacked. The consequences of breaking the agreement vary according to the contract, ranging from warnings to termination, sometimes without prior notice. In the 3D immersive world Second Life, for example, a breach of contract brings the player warnings, suspension or termination depending on the offense.
Where online games support an in-game chat feature, it is not uncommon to encounter hate speech, sexual harassment and cyberbullying. Players, developers, gaming companies, and professional observers are discussing and developing tools which discourage antisocial behavior.
Moderators are also sometimes present, attempting to prevent antisocial behavior. Online games also often involve real-life illegal behavior, such as scams, financial crimes, invasion of privacy, and other issues.
Recent development of gaming governance requires all video games (including online games) to hold a rating label. The voluntary rating system was established by the Entertainment Software Rating Board (ESRB). The scale ranges from "E" (for Everyone), indicating games suitable for both children and adults, to "M" (for Mature), for games restricted to players above age 17. Some explicit online games can be rated "AO" (for Adults Only), identifying games with content suitable only for adults over the age of 18. Furthermore, online games must also carry an ESRB notice warning that any "online interactions are not rated by the ESRB".
Shutdown of games
The video game industry is highly competitive. As a result, many online games end up not generating enough profits, such that the service providers do not have the incentives to continue running the servers. In such cases, the developers of a game might decide to shut down the server permanently.
Shutting down an online game can severely impact the players. Typically, a server shutdown means players will no longer be able to play the game. For many players, this can cause a sense of loss at an emotional level, since they often dedicate time and effort to making in-game progress, e.g., completing in-game tasks to earn items for their characters. In some other cases, the game might still be playable without the server, but certain important functionalities will be lost. For example, earning key in-game items often requires a server that can track each player's progress.
In some cases, an online game may be relaunched in a substantially different form after shutting down, in an attempt to increase the game's quality, remedy low sales, or reverse a declining player base, and see significantly greater success. Final Fantasy XIV was negatively received upon its 2010 release, and was relaunched as A Realm Reborn in 2013; the new version was met with considerable positive reception, and is still running as of 2022. Splitgate: Arena Warfare relaunched as Splitgate in 2021, switching to a free-to-play model and adding cross-platform multiplayer, and subsequently saw 2 million new players, with the servers being unable to handle the influx.
However, games may remain a commercial failure despite a planned relaunch. These include the 2015 asymmetrical first-person shooter Evolve, which transitioned to a free-to-play title known as Evolve Stage 2 a year after launch, after it was criticized for its significant amount of DLC despite being a full-priced game, but had its servers permanently shut down roughly two years later after its user base "evaporated" as a result of infrequent updates. The 2019 looter-shooter Anthem was also planned to be relaunched as Anthem Next, but the changes were never implemented, partially due to the impact of the COVID-19 pandemic and an unwillingness to further invest in the game by Electronic Arts.
See also
List of video game genres
Game server
Massively multiplayer online game
Multiplayer video game
Online text-based role-playing game
Voice chat in online gaming
References
Video game terminology | Online game | [
"Technology"
] | 3,668 | [
"Computing terminology",
"Video game terminology"
] |
1,051,237 | https://en.wikipedia.org/wiki/The%20Sleepwalkers%3A%20A%20History%20of%20Man%27s%20Changing%20Vision%20of%20the%20Universe | The Sleepwalkers: A History of Man's Changing Vision of the Universe is a 1959 book by Arthur Koestler. It traces the history of Western cosmology from ancient Mesopotamia to Isaac Newton. Koestler suggests that discoveries in science arise through a process akin to sleepwalking: not that they arise by chance, but rather that scientists are neither fully aware of what guides their research nor fully aware of the implications of what they discover.
Synopsis
A central theme of the book is the changing relationship between faith and reason. Koestler explores how these seemingly contradictory threads existed harmoniously in many of the greatest intellectuals of the West. He illustrates that while the two are estranged today, in the past the most ground-breaking thinkers were often very religious.
Another recurrent theme of this book is the breaking of paradigms in order to create new ones. People, scientists included, cling to cherished old beliefs with such love and attachment that they refuse to see what is wrong in their ideas and the truth in the new ideas that will replace them. (This point was developed a few years afterwards by Thomas Kuhn in The Structure of Scientific Revolutions, in which the concept of "paradigm shift" came to the fore.)
Without denying the greatness of Galileo Galilei and the other modern scientists, Koestler points out their mistakes and occasional intellectual dishonesty, arguing that the scientific revolution's intellectual giants were dwarfs from a moral point of view.
According to Koestler, the great cosmological systems, from Ptolemy to Copernicus, have always reflected the metaphysical and psychological prejudices of their authors. Furthermore, it would be wrong to think of the evolution of scientific progress as if it moved in a purely rational way on an ascending vertical line. In reality, he states, the trend has been much more irregular and uncertain, to the point that the history of cosmological conceptions has been, "without exaggeration… a history of collective obsessions and controlled schizophrenias". Hence the title: the great scientists moved like "sleepwalkers" rather than according to the current model of the "electronic brain".
In the epilogue, Koestler argues that the "divorce" between science and religion has certainly benefited scientific and technological development, allowing humanity to enjoy prosperity never seen before. However, this has also produced a new "dullness", a kind of new "scholastic" thinking, which has dried up the human soul. Growing materialism has not only deprived man of a meaning in life, but has come into contradiction with the very developments of the most advanced physics. "Mechanism" is now put aside by quantum mechanics and the theory of relativity, in which the role of the observer, and therefore of the human spirit, is decisive in establishing what reality is. This is why Koestler openly contests the contemporary rejection of possible "non-causal interactions" and phenomena such as telepathy and extrasensory perception, to which he returned in his subsequent research.
"The conclusion he puts forward at the end of the book is that modern science is trying too hard to be rational. Scientists have been at their best when they allowed themselves to behave as "sleepwalkers" instead of trying too earnestly to ratiocinate."
Analysis
The historian of astronomy Owen Gingerich, while acknowledging that Koestler's book contributed to his interest in the history of science, described it as "highly questionable" and criticized its treatment of historical figures as fictional. Gingerich said Koestler was wrong when he wrote that Copernicus's De revolutionibus was a "book that nobody had read" and "one of the greatest editorial failures of all time."
French mathematician Alexandre Grothendieck wrote about The Sleepwalkers that "The metaphor of the 'sleepwalker' was inspired by the title of the wonderful book 'the sleepwalkers' by Koestler".
Irish writer John Banville stated that the "original idea" of his Revolutions Trilogy came from his reading of The Sleepwalkers, and also that Koestler
deserves to be remembered also as a bridge between the two cultures. The Sleepwalkers, his account of cosmology from the Greeks to Einstein, is still a wonderfully exciting and informative book. It was his misfortune as a writer that his best work was done in the inevitably ephemeral medium of journalism.
Publication data
Arthur Koestler, The Sleepwalkers: A History of Man's Changing Vision of the Universe (1959), Hutchinson
First published in the United States by Macmillan in 1959
Published by Penguin Books in 1964
Reissued by Pelican Books in 1968
Reprinted by Peregrine Books in 1986
Reprinted by Arkana in 1989
Chapters on Kepler excerpted as The Watershed published by Doubleday Anchor in 1960, as part of the Science Study Series.
See also
1959 in literature
Owen Gingerich
References
External links
Frankel, Charles (24 May 1959). "The Road to Great Discovery Is Itself a Thing of Wonder". The New York Times. Retrieved 18 June 2014.
1959 non-fiction books
Astronomy books
Books about the history of science
Books by Arthur Koestler
English non-fiction books
English-language non-fiction books
Hutchinson (publisher) books
Cosmology books | The Sleepwalkers: A History of Man's Changing Vision of the Universe | [
"Astronomy"
] | 1,076 | [
"Astronomy books",
"Works about astronomy"
] |
1,051,310 | https://en.wikipedia.org/wiki/Response%20bias | Response bias is a general term for a wide range of tendencies for participants to respond inaccurately or falsely to questions. These biases are prevalent in research involving participant self-report, such as structured interviews or surveys. Response biases can have a large impact on the validity of questionnaires or surveys.
Response bias can be induced or caused by numerous factors, all relating to the idea that human subjects do not respond passively to stimuli, but rather actively integrate multiple sources of information to generate a response in a given situation. Because of this, almost any aspect of an experimental condition may potentially bias a respondent. Examples include the phrasing of questions in surveys, the demeanor of the researcher, the way the experiment is conducted, or the desires of the participant to be a good experimental subject and to provide socially desirable responses may affect the response in some way. All of these "artifacts" of survey and self-report research may have the potential to damage the validity of a measure or study. Compounding this issue is that surveys affected by response bias still often have high reliability, which can lure researchers into a false sense of security about the conclusions they draw.
Because of response bias, it is possible that some study results are due to a systematic response bias rather than the hypothesized effect, which can have a profound effect on psychological and other types of research using questionnaires or surveys. It is therefore important for researchers to be aware of response bias and the effect it can have on their research so that they can attempt to prevent it from impacting their findings in a negative manner.
History of research
Awareness of response bias has been present in psychology and sociology literature for some time because self-reporting features significantly in those fields of research. However, researchers were initially unwilling to admit the degree to which response biases impact, and potentially invalidate, research utilizing these types of measures. Some researchers, such as Herbert Hyman, believed that the biases present in a group of subjects cancel out when the group is large enough. This would mean that the impact of response bias is random noise, which washes out if enough participants are included in the study. However, at the time this argument was proposed, effective methodological tools that could test it were not available. Once newer methodologies were developed, researchers began to investigate the impact of response bias. From this renewed research, two opposing sides arose.
The first group supports Hyman's belief that although response bias exists, it often has minimal effect on participant response, and no large steps need to be taken to mitigate it. These researchers hold that although there is significant literature identifying response bias as influencing the responses of study participants, these studies do not in fact provide empirical evidence that this is the case. They subscribe to the idea that the effects of this bias wash out with large enough samples, and that it is not a systematic problem in mental health research. These studies also call into question earlier research that investigated response bias on the basis of their research methodologies. For example, they mention that many of the studies had very small sample sizes, or that in studies looking at social desirability, a subtype of response bias, the researchers had no way to quantify the desirability of the statements used in the study. Additionally, some have argued that what researchers may believe to be artifacts of response bias, such as differences in responding between men and women, may in fact be actual differences between the two groups. Several other studies also found evidence that response bias is not as big of a problem as it may seem. The first found that when comparing the responses of participants, with and without controls for response bias, their answers to the surveys were not different. Two other studies found that although the bias may be present, the effects are extremely small, having little to no impact towards dramatically changing or altering the responses of participants.
The second group argues against Hyman's point, saying that response bias has a significant effect, and that researchers need to take steps to reduce response bias in order to conduct sound research. They argue that the impact of response bias is a systematic error inherent to this type of research and that it needs to be addressed in order for studies to be able to produce accurate results. In psychology, there are many studies exploring the impact of response bias in many different settings and with many different variables. For example, some studies have found effects of response bias in the reporting of depression in elderly patients. Other researchers have found that there are serious issues when responses to a given survey or questionnaire have responses that may seem desirable or undesirable to report, and that a person's responses to certain questions can be biased by their culture. Additionally, there is support for the idea that simply being part of an experiment can have dramatic effects on how participants act, thus biasing anything that they may do in a research or experimental setting when it comes to self-reporting. One of the most influential studies was one which found that social desirability bias, a type of response bias, can account for as much as 10–70% of the variance in participant response. Essentially, because of several findings that illustrate the dramatic effects response bias has on the outcomes of self-report research, this side supports the idea that steps need to be taken to mitigate the effects of response bias to maintain the accuracy of research.
While both sides have support in the literature, there appears to be greater empirical support for the significance of response bias. Adding strength to this position, many of the studies that reject the significance of response bias themselves exhibit multiple methodological problems. For example, they use extremely small samples that are not representative of the population as a whole, consider only a small subset of potential variables that could be affected by response bias, and conduct their measurements over the phone with poorly worded statements.
Types
Acquiescence bias
Acquiescence bias, which is also referred to as "yea-saying", is a category of response bias in which respondents to a survey have a tendency to agree with all the questions in a measure. This bias in responding may represent a form of dishonest reporting because the participant automatically endorses any statements, even if the result is contradictory responses. For example, a participant could be asked whether they endorse the statement "I prefer to spend time with others" but then later in the survey also endorse "I prefer to spend time alone," which are contradictory statements. This is a distinct problem for self-report research because it does not allow a researcher to understand or gather accurate data from any type of question that asks for a participant to endorse or reject statements. Researchers have approached this issue by thinking about the bias in two different ways. The first deals with the idea that participants are trying to be agreeable, in order to avoid the disapproval of the researcher. A second cause for this type of bias was proposed by Lee Cronbach, when he argued that it is likely due to a problem in the cognitive processes of the participant, instead of the motivation to please the researcher. He argues that it may be due to biases in memory where an individual recalls information that supports endorsement of the statement, and ignores contradicting information.
Researchers have several methods to try to reduce this form of bias. Primarily, they attempt to construct balanced response sets in a given measure, meaning that there are equal numbers of positively and negatively worded questions. This means that if a researcher hopes to examine a certain trait with a given questionnaire, half of the questions would have a "yes" response identifying the trait, and the other half would have a "no" response identifying the trait.
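To make the balancing idea concrete, here is a minimal scoring sketch; the items, the 5-point scale, and the scoring rule are illustrative assumptions rather than details from the literature above. Reverse-keying the negatively worded items means that a pure yea-sayer lands at the scale midpoint instead of at an extreme.

```python
# Hypothetical balanced 5-point Likert scale: half the items are
# positively keyed, half reverse-keyed. Reverse-coding maps 1<->5, 2<->4.
items = [
    ("I prefer to spend time with others", False),  # False = positively keyed
    ("I prefer to spend time alone", True),         # True  = reverse-keyed
    ("I enjoy meeting new people", False),
    ("I find social gatherings draining", True),
]

def score(responses):
    """Average the responses (1..5, in item order) after reverse-coding."""
    total = sum((6 - r) if reverse else r
                for (_, reverse), r in zip(items, responses))
    return total / len(items)

print(score([5, 5, 5, 5]))  # pure yea-saying -> 3.0, the neutral midpoint
print(score([5, 1, 5, 1]))  # consistent responding -> 5.0
```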
Nay-saying is the opposite form of this bias. It occurs when a participant always chooses to deny or not endorse any statements in a survey or measure. This has a similar effect of invalidating any kinds of endorsements that participants may make over the course of the experiment.
Courtesy bias
Courtesy bias is a type of response bias that occurs when some individuals tend to not fully state their unhappiness with a service or product as an attempt to be polite or courteous toward the questioner. It is a common bias in qualitative research methodology.
In a study on disrespect and abuse during facility-based childbirth, courtesy bias was found to be one of the causes of potential underreporting of those behaviors at hospitals and clinics. Evidence has been found that some cultures are especially prone to courtesy bias, leading respondents to say what they believe the questioner wants to hear; this has been observed in Asian and in Hispanic cultures. Courtesy bias is closely related to acquiescence bias, which people in East Asia have frequently been found to exhibit. As with most data collection, courtesy bias has also been found to be a concern among phone survey respondents.
Attempts have been made to create a good interview environment in order to minimize courtesy bias, with an emphasis on the point that both positive and negative experiences are important to report, so as to enhance learning and minimize the bias as much as possible.
Demand characteristics
Demand characteristics refer to a type of response bias where participants alter their response or behavior simply because they are part of an experiment. This arises because participants are actively engaged in the experiment, and may try to figure out its purpose, or adopt certain behaviors they believe belong in an experimental setting. Martin Orne was one of the first to identify this type of bias, and developed several theories to address its causes. His research points to the idea that participants enter a certain type of social interaction when engaging in an experiment, and this special social interaction drives participants to consciously and unconsciously alter their behaviors. There are several ways that this bias can influence participants and their responses in an experimental setting. One of the most common relates to the motivations of the participant. Many people choose to volunteer to be in studies because they believe that experiments are important. This drives participants to be "good subjects" and fulfill their role in the experiment properly, because they believe that their proper participation is vital to the success of the study. Thus, in an attempt to productively participate, the subject may try to gain knowledge of the hypothesis being tested in the experiment and alter their behavior in an attempt to support that hypothesis. Orne conceptualized this change by saying that the experiment may appear to a participant as a problem, and it is his or her job to find the solution to that problem, which would be behaving in a way that would lend support to the experimenter's hypothesis. Alternatively, a participant may try to discover the hypothesis simply to provide faulty information and wreck the hypothesis. Both of these results are harmful because they prevent the experimenters from gathering accurate data and making sound conclusions.
Outside of participant motivation, there are other factors that influence the appearance of demand characteristics in a study. Many of these factors relate to the unique nature of the experimental setting itself. For example, participants in studies are more likely to put up with uncomfortable or tedious tasks simply because they are in an experiment. Additionally, the mannerisms of the experimenter, such as the way they greet the participant, or the way they interact with the participant during the course of the experiment, may inadvertently bias how the participant responds. Also, prior experiences of being in an experiment, or rumors of the experiment that participants may hear, can greatly bias the way they respond. Outside of an experiment, these types of past experiences and mannerisms may have significant effects on how patients rank the effectiveness of their therapist. Many of the ways therapists go about collecting client feedback involve self-reporting measures, which can be highly influenced by response bias. Participants may be biased if they fill out these measures in front of their therapist, or somehow feel compelled to answer in an affirmative manner because they believe their therapy should be working. In this case, the therapists would not be able to gain accurate feedback from their clients, and would be unable to improve their therapy or accurately tailor further treatment to what the participants need. All of these different examples may have significant effects on the responses of participants, driving them to respond in ways that do not reflect their actual beliefs or mindset, which negatively impacts conclusions drawn from those surveys.
While demand characteristics cannot be completely removed from an experiment, there are steps that researchers can take to minimize the impact they may have on the results. One way to mitigate response bias is to use deception to prevent the participant from discovering the true hypothesis of the experiment and then debrief the participants. For example, research has demonstrated that repeated deception and debriefing is useful in preventing participants from becoming familiar with the experiment, and that participants do not significantly alter their behaviors after being deceived and debriefed multiple times. Another way that researchers attempt to reduce demand characteristics is by being as neutral as possible, or training those conducting the experiment to be as neutral as possible. For example, studies show that extensive one-on-one contact between the experimenter and the participant makes it more difficult to be neutral, and go on to suggest that this type of interaction should be limited when designing an experiment. Another way to prevent demand characteristics is to use blinded experiments with placebos or control groups. This prevents the experimenter from biasing the participant, because the researcher does not know in which way the participant should respond. Although not perfect, these methods can significantly reduce the effect of demand characteristics on a study, thus making the conclusions drawn from the experiment more likely to accurately reflect what they were intended to measure.
Extreme responding
Extreme responding is a form of response bias that drives respondents to only select the most extreme options or answers available. For example, in a survey utilizing a Likert scale with potential responses ranging from one to five, the respondent may only give answers as ones or fives. Another example is if the participant only answered questionnaires with "strongly agree" or "strongly disagree" in a survey with that type of response style. There are several reasons for why this bias may take hold in a group of participants. One example ties the development of this type of bias in respondents to their cultural identity. This explanation states that people from certain cultures are more likely to respond in an extreme manner as compared to others. For example, research has found that those from the Middle East and Latin America are more prone to be affected by extremity response, whereas those from East Asia and Western Europe are less likely to be affected. A second explanation for this type of response bias relates to the education level of the participants. Research has indicated that those with lower intelligence, measured by an analysis of IQ and school achievement, are more likely to be affected by extremity response. Another way that this bias can be introduced is through the wording of questions in a survey or questionnaire. Certain topics or the wording of a question may drive participants to respond in an extreme manner, especially if it relates to the motivations or beliefs of the participant.
The opposite of this bias occurs when participants only select intermediate or mild responses as answers.
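One simple way to quantify extreme responding, a common operationalization though not one prescribed by the text above, is the share of a respondent's answers that fall at the endpoints of the scale:

```python
# Extreme-response index for a 1..5 Likert scale: the fraction of answers
# at either endpoint. 1.0 means every answer was extreme; 0.0 means none was.
def extremity_index(responses, low=1, high=5):
    extreme = sum(1 for r in responses if r in (low, high))
    return extreme / len(responses)

print(extremity_index([1, 5, 5, 1, 5]))   # 1.0 -> pure extreme responder
print(extremity_index([3, 4, 2, 3, 3]))   # 0.0 -> midpoint responder
```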
Question order bias
Question order bias, or "order effects bias", is a type of response bias where a respondent may react differently to questions based on the order in which questions appear in a survey or interview. Question order bias is different from "response order bias", which addresses specifically the order of the set of responses within a survey question. There are many ways that questionnaire items appearing earlier in a survey can affect responses to later questions. One way is when a question creates a "norm of reciprocity or fairness", as identified in the 1950 work of Herbert Hyman and Paul Sheatsley. In their research they asked two questions: one on whether the United States should allow reporters from communist countries to come to the U.S. and send back news as they saw it, and another on whether a communist country like Russia should let American newspaper reporters come in and send back news as they saw it to America. In the study, the percentage of "yes" responses to the question on allowing communist reporters increased by 37 percentage points depending on the order; results for the American reporters item similarly increased by 24 percentage points. When either of the items was asked second, the context for the item was changed as a result of the answer to the first, and the responses to the second were more in line with what would be considered fair, based on the previous response. Another way responses can be altered based on order depends on the framing of the question: if a respondent is first asked about their general interest in a subject, their stated interest may be higher than if they are first posed technical or knowledge-based questions about the subject. The part-whole contrast effect is yet another ordering effect: when general and specific questions are asked in different orders, results for the specific item are generally unaffected, whereas those for the general item can change significantly. Question order biases occur primarily in survey or questionnaire settings. Some strategies to limit the effects of question order bias include randomization and grouping questions by topic so they unfold in a logical order.
Social desirability bias
Social desirability bias is a type of response bias that influences a participant to deny undesirable traits, and ascribe to themselves traits that are socially desirable. In essence, it is a bias that drives an individual to answer in a way that makes them look more favorable to the experimenter. This bias can take many forms. Some individuals may over-report good behavior, while others may under-report bad, or undesirable behavior. A critical aspect of how this bias can come to affect the responses of participants relates to the norms of the society in which the research is taking place. For example, social desirability bias could play a large role if conducting research about an individual's tendency to use drugs. Those in a community where drug use is seen as acceptable or popular may exaggerate their own drug use, whereas those from a community where drug use is looked down upon may choose to under-report their own use. This type of bias is much more prevalent in questions that draw on a subject's opinion, like when asking a participant to evaluate or rate something, because there generally is not one correct answer, and the respondent has multiple ways they could answer the question. Overall, this bias can be very problematic for self-report researchers, especially if the topic they are looking at is controversial. The distortions created by respondents answering in a socially desirable manner can have profound effects on the validity of self-report research. Without being able to control for or deal with this bias, researchers are unable to determine if the effects they are measuring are due to individual differences, or from a desire to conform to the societal norms present in the population they are studying. Therefore, researchers strive to employ strategies aimed at mitigating social desirability bias so that they can draw valid conclusions from their research.
Several strategies exist to limit the effect of social desirability bias. In 1985, Anton Nederhof compiled a list of techniques and methodological strategies for researchers to use to mitigate the effects of social desirability bias in their studies. Most of these strategies involve deceiving the subject, or relate to the way questions in surveys and questionnaires are presented to those in a study. A condensed list of these strategies is given below:
Ballot-box method: This method allows a subject to anonymously self-complete a questionnaire and submit it to a locked "ballot box", thereby concealing their responses from an interviewer and affording the participant an additional layer of assured concealment from perceived social repercussion.
Forced-choice items: This technique hopes to generate questions that are equal in desirability to prevent a socially desirable response in one direction or another.
Neutral questions: The goal of this strategy is to use questions that are rated as neutral by a wide range of participants so that socially desirable responding does not apply.
Randomized response technique: This technique allows participants to answer a question that is randomly selected from a set of questions. The researcher in this technique does not know which question the subject responds to, so subjects are more likely to answer truthfully. Researchers can then use statistics to interpret the anonymous data (a simulation sketch of one variant appears after this list).
Self-administered questionnaires: This strategy involves isolating the participant before they begin answering the survey or questionnaire to hopefully remove any social cues the researcher may present to the participant.
Bogus-pipeline: This technique involves a form of deception, where researchers convince a subject through a series of rigged demonstrations that a machine can accurately determine if a participant is being truthful when responding to certain questions. After the participant completes the survey or questionnaire, they are debriefed. The technique is rarely used because of its cost and time commitment, and because it can only be used once per participant.
Selecting interviewers: This strategy allows participants to select the person or persons who will conduct the interview or preside over the experiment, in the hope that greater rapport will make subjects more likely to answer honestly.
Proxy subjects: Instead of asking a person directly, this strategy questions someone who is close to or knows the target individual well. This technique is generally limited to questions about behavior, and is not adequate for asking about attitudes or beliefs.
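To illustrate how the randomized response technique recovers an aggregate estimate while hiding every individual answer, here is a small simulation of the forced-response variant; the design (truth probability p, a fair coin for forced answers) and all names are assumptions chosen for illustration, not a design specified above.

```python
import random

# Forced-response randomized response: with probability p the respondent
# answers the sensitive question truthfully; otherwise they answer "yes" or
# "no" based on a private coin flip. No individual answer is revealing, yet
# the observed "yes" rate lam satisfies lam = p*pi + (1-p)*0.5, so the
# prevalence pi can be estimated as (lam - 0.5*(1-p)) / p.
def simulate(true_prevalence=0.30, p=0.75, n=100_000, seed=1):
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        has_trait = rng.random() < true_prevalence
        if rng.random() < p:
            yes += has_trait                # truthful answer
        else:
            yes += rng.random() < 0.5       # forced random answer
    lam = yes / n
    return (lam - 0.5 * (1 - p)) / p

print(round(simulate(), 3))   # close to the true prevalence of 0.30
```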
The degree of effectiveness of each of these techniques or strategies differs depending on the situation and the question asked. To be most successful in reducing social desirability bias across a wide range of situations, it has been suggested that researchers use a combination of these techniques. When selecting the best method, validation should not rest on a "more is better" assumption (a higher stated prevalence of the behavior of interest), as this is a "weak validation" that does not always guarantee the best results. Instead, ground-truthed comparisons of observed data to stated data should reveal the most accurate method.
Related terminology
Non-response bias is not the opposite of response bias and is not a type of cognitive bias: it occurs in a statistical survey if those who respond to the survey differ from those who do not respond in terms of the outcome variable.
Response rate is likewise not a cognitive bias; it refers to the ratio of those who complete the survey to all those who were asked to respond.
Highly vulnerable areas
Some areas or topics that are highly vulnerable to the various types of response bias include:
alcoholism
self-reporting in mental illness, especially depression
See also
List of cognitive biases
Compound question
Loaded question
Misinformation effect, similar effect for memory instead of opinion.
Opinion poll
Randomized response
Total survey error
Notes
Further reading
External links
Estimation of Response Bias in the NHES:95 Adult Education Survey
Effects of road sign wording on visitor survey - non-response bias
Experimental bias
Survey methodology | Response bias | [
"Mathematics"
] | 4,657 | [
"Experimental bias",
"Statistical concepts"
] |
1,051,315 | https://en.wikipedia.org/wiki/Monocrete%20construction | Monocrete is a building construction method utilising modular bolt-together pre-cast concrete wall panels.
Monocrete construction was widely used in the construction of government housing in the 1940s and 1950s in Canberra, Australia. The expansion of the new capital was exceeding the ability of the Government to build houses, so alternative construction methods were investigated.
The Canberra monocrete homes are built on brick piers and a surrounding brick footing, and all of the walls, including interior ones, are of monocrete construction. They are precast with steel windows and door frames set directly into the concrete. Steel plates in the ceiling space bolt the individual wall panels together. The floor and roof are of normal construction - wood and tile respectively. The gaps between the wall panels are filled with a flexible gap-filling compound and covered with tape on the interior. It has been suggested that the panels tend to move independently of one another, opening up cracks between them, and that the houses are also susceptible to condensation build-up and mold growth on the inside of the walls.
A similar technique is used in the construction of some modern commercial buildings.
References
Building engineering
Building materials
Prefabricated houses | Monocrete construction | [
"Physics",
"Engineering"
] | 239 | [
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Civil engineering",
"Matter",
"Building materials"
] |
1,051,404 | https://en.wikipedia.org/wiki/Monounsaturated%20fat | In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond.
Molecular description
Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats.
Health
Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic-acid diet than with a palmitic-acid diet, and the same research associated greater monounsaturated fat intake with less anger and irritability.
Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol.
Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d).
In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles.
The Mediterranean diet is one heavily influenced by monounsaturated fats. In the late 20th century, people in Mediterranean countries consumed more total fat than Northern European countries, but most of the fat was in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption of saturated fat was minimal in comparison.
A 2017 review found evidence that the practice of a Mediterranean diet could lead to a decreased risk of cardiovascular diseases, overall cancer incidence, neurodegenerative diseases, diabetes, and early death. A 2018 review showed that the practice of the Mediterranean diet may improve overall health status, such as the reduced risk of non-communicable diseases. It also may reduce the social and economic costs of diet-related illnesses.
Diabetes
Increasing monounsaturated fat and decreasing saturated fat intake could improve insulin sensitivity, but only when the overall fat intake of the diet was low. However, some monounsaturated fatty acids (in the same way as saturated fats) may promote insulin resistance, whereas polyunsaturated fatty acids may be protective against insulin resistance.
Sources
Monounsaturated fats are found in animal flesh such as red meat, whole milk products, nuts, and high-fat fruits such as olives and avocados. Algal oil is about 92% monounsaturated fat. Olive oil is about 75% monounsaturated fat. The high-oleic variety of sunflower oil contains at least 70% monounsaturated fat. Canola oil and cashews are both about 58% monounsaturated fat. Tallow (beef fat) is about 50% monounsaturated fat, and lard is about 40% monounsaturated fat. Other sources include hazelnut, avocado oil, macadamia nut oil, grapeseed oil, groundnut oil (peanut oil), sesame oil, corn oil, popcorn, whole grain wheat, cereal, oatmeal, almond oil, sunflower oil, hemp oil, and tea-oil camellia.
See also
High density lipoprotein
Fatty acid synthesis
References
External links
Fats (Mayo Clinic)
The Chemistry of Unsaturated Fats
Food science
Lipids
Nutrition | Monounsaturated fat | [
"Chemistry"
] | 896 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Lipids"
] |
1,051,526 | https://en.wikipedia.org/wiki/Chemical%20hazard | Chemical hazards are hazards present in hazardous chemicals and hazardous materials. Exposure to certain chemicals can cause acute or long-term adverse health effects. Chemical hazards are usually classified separately from biological hazards (biohazards). Chemical hazards are classified into groups that include asphyxiants, corrosives, irritants, sensitizers, carcinogens, mutagens, teratogens, reactants, and flammables. In the workplace, exposure to chemical hazards is a type of occupational hazard. The use of personal protective equipment may substantially reduce the risk of adverse health effects from contact with hazardous materials.
Long-term exposure to chemical hazards such as silica dust, engine exhausts, tobacco smoke, and lead (among others) have been shown to increase risk of heart disease, stroke, and high blood pressure.
Types of chemical hazard
Routes of exposure
The most common exposure route to chemicals in the work environment is through inhalation. Gas, vapour, mist, dust, fumes, and smoke can all be inhaled. Those with occupations involving physical work may inhale higher levels of chemicals if working in an area with contaminated air. This is because workers who do physical work will exchange over 10,000 litres of air over an 8-hour day, while workers who do not do physical work will exchange only 2,800 litres. If the air is contaminated in the workplace, more air exchange will lead to the inhalation of higher amounts of chemicals.
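As a rough illustration of why this difference in air exchange matters, the sketch below converts the two volumes into an inhaled dose at a uniform airborne concentration; the concentration value is hypothetical and chosen only for illustration.

```python
# Hypothetical uniform contaminant concentration in workplace air.
concentration_mg_per_m3 = 0.5

# Air exchanged over an 8-hour day, from the figures above.
for label, litres in (("physical work", 10_000), ("sedentary work", 2_800)):
    dose_mg = concentration_mg_per_m3 * (litres / 1000)  # litres -> m^3
    print(f"{label}: {dose_mg:.2f} mg inhaled")
# physical work: 5.00 mg; sedentary work: 1.40 mg -- roughly 3.6x the dose
```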
Chemicals may be ingested when food or drink is contaminated by unwashed hands, by clothing, or through poor handling practices. Ingestion of a chemical hazard occurs when the chemical is absorbed in the digestive tract, and contamination can be indirect or direct. If food or drink is brought into an environment where harmful chemicals are unsealed, chemical vapors or particles may contaminate it. More rarely, a chemical may be consumed directly: if chemical containers have little or no labeling and are not properly secured, an accident can occur in which someone mistakes the chemical for something it is not.
Chemical exposure to the skin is a common workplace injury and may also occur in domestic situations with chemicals such as bleach or drain-cleaners. The exposure of chemicals to the skin most often results in local irritation to the exposed area. In some exposures, the chemical will be absorbed through the skin and will result in poisoning. The eyes have a strong sensitivity to chemicals, and are consequently an area of high concern for chemical exposure. Chemical exposure to the eyes results in irritation and may result in burns and vision loss.
Injection is an uncommon method of chemical exposure in the workplace. Chemicals can be injected into the skin when a worker is punctured by a sharp object, such as a needle. Chemical exposure through injection may result in the chemical entering directly into the bloodstream.
Symbols of chemical hazards
Hazard pictograms are a type of labeling system that alerts people at a glance that there are hazardous chemicals present. The symbols help identify whether the chemicals that are going to be in use may potentially cause physical harm, or harm to the environment. The 9 symbols are:
Explosive (exploding bomb)
Flammable (flame)
Oxidizing (flame above a circle)
Corrosive (corrosion of table and hand)
Acute toxicity (skull and crossbones)
Hazardous to environment (dead tree and fish)
Health hazard/hazardous to the ozone layer (exclamation mark)
Serious health hazard (cross on a human silhouette)
Gas under pressure (gas cylinder)
These pictograms are also subdivided into classes and categories for each classification; the assignment for each chemical depends on its type and severity. The standard set of nine hazard pictograms was published and distributed as a regulatory requirement through the efforts of the United Nations via the Globally Harmonized System of Classification and Labelling of Chemicals.
Controlling chemical exposure
Elimination and substitution
Chemical exposure is estimated to have caused approximately 190,000 illnesses and 50,000 deaths of workers annually. The link between a chemical exposure and any subsequent illness or death often goes unrecognized; the majority of these illnesses and deaths are therefore thought to result from a lack of knowledge or awareness of the dangers of chemicals. The best method of controlling chemical exposure within the workplace is the elimination or substitution of all chemicals that are thought or known to cause illness or death.
Engineering controls
Although elimination and substitution of harmful chemicals is the best known method for controlling chemical exposure, there are other methods that can be implemented to diminish exposure. The implementation of engineering controls is an example of another method for controlling chemical exposures. When engineering controls are implemented, there is a physical change made to the work environment that will eliminate or reduce the risk to chemical exposure. An example of engineering controls is the enclosure or isolation of the process that creates the chemical hazard.
Administrative controls and safe work practices
If the process that creates the chemical hazard cannot be enclosed or isolated, the next best method is the implementation of administrative controls and work practices controls. This is the establishment of administrative and work practices that will reduce the amount of time and how often the workers will be exposed to the chemical hazard. An example of administrative and work practices controls is the establishment of work schedules in which workers have rotating job assignments. This will ensure that all workers have limited exposure to chemical hazards.
Personal protective equipment
Employers should provide personal protective equipment (PPE) to protect their workers from chemicals used within the workplace. The use of PPE prevents workers from being exposed to chemicals through the routes of exposure—inhalation, absorption through skin or eyes, ingestion, and injection. One example of how PPE usage can prevent chemical exposure concerns respirators. If workers wear respirators, they will prevent the exposure of chemicals through inhalation.
First aid
In case of an emergency, it is recommended to understand first aid procedures in order to minimize damage. Different types of chemicals cause different kinds of harm. Most sources agree that it is best to immediately rinse any skin or eye that has been in contact with a chemical using water. There is currently insufficient evidence on how long rinsing should continue, as the severity of effects varies by substance, particularly for corrosive chemicals.
Transporting the affected person to a health care facility may be important, depending on condition. If the victim needs to be transported before the recommended flush time, then flushing should be done during the transportation process. Some chemical manufacturers may state the specific type of cleansing agent that is recommended.
Long-term risks
Cancers
Cardiovascular disease
A 2017 SBU report found evidence that workplace exposure to silica dust, engine exhaust or welding fumes is associated with heart disease. Associations exist for exposure to arsenic, benzopyrenes, lead, dynamite, carbon disulfide, carbon monoxide, metalworking fluids and occupational exposure to tobacco smoke. Working with the electrolytic production of aluminium, or the production of paper when the sulfate pulping process is used, is associated with heart disease. An association was found between heart disease and exposure to compounds which are no longer permitted in certain work environments, such as phenoxy acids containing TCDD (dioxin) or asbestos.
Workplace exposure to silica dust or asbestos is also associated with pulmonary heart disease. There is evidence that workplace exposure to lead, carbon disulfide, or phenoxy acids containing TCDD, as well as working in an environment where aluminium is being electrolytically produced, are associated with stroke.
Reproductive and developmental disorders
Pesticides and carbon disulfide, amongst many other chemical species, have been linked to disruptions of endocrine balances in the brain and ovaries. Contact with harmful chemicals during the first few months of pregnancy, or even afterwards, has been connected to some miscarriages and can disrupt the menstrual cycle to the point of blocking ovulation. Chemicals that induce health issues during pregnancy may also affect infants or fetuses.
See also
Health hazard – Hazards that would affect the health of exposed persons.
Process safety – Discipline dealing with the study and management of fires, explosions and toxic gas clouds from hazardous materials in process plants.
References | Chemical hazard | [
"Chemistry"
] | 1,714 | [
"Chemical hazards"
] |
1,051,589 | https://en.wikipedia.org/wiki/Pyraminx | The Pyraminx () is a regular tetrahedron puzzle in the style of Rubik's Cube. It was made and patented by Uwe Mèffert after the original 3 layered Rubik's Cube by Ernő Rubik, and introduced by Tomy Toys of Japan (then the 3rd largest toy company in the world) in 1981.
Optimal solutions
The maximum number of twists required to solve the Pyraminx is 11. There are 933,120 different positions (disregarding the trivial rotation of the tips), a number that is sufficiently small to allow a computer search for optimal solutions. The table below summarizes the result of such a search, stating the number p of positions that require n twists to solve the Pyraminx:
n | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
p | 1 | 8 | 48 | 288 | 1728 | 9896 | 51808 | 220111 | 480467 | 166276 | 2457 | 32
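The table admits two independent cross-checks: the counts should sum to the total number of positions, and that total can be derived from the puzzle's structure. A minimal sketch in Python, assuming the standard counting argument (four axial centers with three orientations each, an even permutation of the six edges, and five independent edge flips):

```python
import math

# Positions requiring n twists, copied from the table above (n = 0..11).
positions = [1, 8, 48, 288, 1728, 9896, 51808, 220111, 480467, 166276, 2457, 32]
print(sum(positions))  # 933120

# The same total from first principles, disregarding the trivial tips:
# 3**4 axial-center orientations, 6!/2 even edge permutations,
# and 2**5 independent edge flips (the last flip is determined by parity).
print(3**4 * (math.factorial(6) // 2) * 2**5)  # 933120
```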
Records
The world record single solve is 0.73 seconds, set by Simon Kellum of the United States at Middleton Meetup Thursday 2023. The world record average of five solves (excluding fastest and slowest) is 1.27 seconds, set by Sebastian Lee of Australia at Maitland Spring 2024.
Top 5 solvers by single solve
Top 5 solvers by Olympic average of 5 solves
Methods
There are many methods for solving a Pyraminx. They can be split up into two main groups.
1) V First Methods - In these methods, two or three edges are solved first, and a set of algorithms, also called LL (last layer) algorithms, are used to solve the remainder of the puzzle.
2) Top First Methods - In these methods, three edges around a center piece are solved first, and the remainder of the puzzle is solved using a set of algorithms.
Common V first methods-
a) Layer by Layer - In this method, a face with all edges permuted is solved, and then the remaining puzzle is solved by a single algorithm from a set of 5.
b) Algorithmic L4E and Intuitive L4E - L4E or last 4 edges is somewhat similar to Layer by Layer. The only difference is that only two edges are solved around three centers. Both of these methods solve the last four edges in the same step, hence the name. The difference is that Intuitive L4E requires a lot of visualization and "intuition" to solve the last four edges while algorithmic L4E uses algorithms. Algorithmic L4E is generally used more at higher levels, although there are very fast Intuitive L4E users. It is also easy to transition between Intuitive L4E and Algorithmic L4E.
Common top first methods-
a) One Flip - This method solves two edges around one center, leaving the third edge flipped. There are a total of six cases after this step, for which algorithms are memorized and executed. The third step uses a common set of algorithms shared by all top first methods, also called Keyhole last layer, which involves 5 algorithms, four of them being mirrors of each other.
b) Keyhole - This method uses two edges in the right place around one center, and the third edge placed elsewhere on the puzzle. The centers of the fourth color are then solved using the slot formed by the non-permuted edge. The last step is solved using Keyhole last layer algorithms.
c) OKA - In this method, one edge is oriented around two edges in the wrong place, but one of the edges that is in the wrong place belongs to the block itself. The last edge is found on the bottom layer, and a very simple algorithm is executed to get it in the right place, followed by keyhole last layer algorithms.
Some other common top first methods are WO and Nutella.
Many top Pyraminx speedsolvers use only V-first methods, as top-first methods are widely regarded as clunky and outdated given modern hardware.
Variations
There are several variations of the puzzle. The simplest, Tetraminx, is equivalent to the (3x) Pyraminx but without the tips (see photo), resembling a truncated tetrahedron. There also exist "higher-order" versions, such as the 4x Master Pyraminx (see photos) and the 5x Professor's Pyraminx.
The Master Pyraminx has 4 layers and 16 triangles-per-face (compared to 3 layers and 9 triangles-per-face of the original), and is based on the Skewb Diamond mechanism. This version has about 2.6817 × 10¹⁵ combinations. The Master Pyraminx has
4 "tips" (same as the original Pyraminx)
4 "middle axials" (same as the original Pyraminx)
4 "centers" (similar to Rubik's Cube, none in the original Pyraminx)
6 "inner edges" (similar to Rubik's Cube, none in the original Pyraminx)
12 "outer edges" (2-times more than the 6 of the original Pyraminx)
In summary, the Master Pyraminx has 30 "manipulable" pieces. However, like the original, 8 of the pieces (the tips and middle axials) are fixed in position (relative to each other) and can only be rotated in place. Also, the 4 centers are fixed in position and can only rotate (like the Rubik's Cube). So there are only 18 (30-8-4) "truly movable" pieces; since this is 10% fewer than the 20 "truly movable" pieces of the Rubik's Cube, it should be no surprise that the Master Pyraminx has about 10,000-times fewer combinations than a Rubik's Cube (43 quintillion in the short scale or 43 trillion in the long scale). The Master Pyraminx can be solved in numerous ways: one is layer by layer, like the original; another is reducing it to a Jing's pyraminx.
Reviews
Games
See also
Pyraminx Duo
Pyramorphix and Master Pyramorphix, two regular tetrahedron puzzles which resemble the Pyraminx but are mechanically very different from it
Pocket Cube
Rubik's Cube
Rubik's Revenge
Rubik's Triamid
Professor's Cube
V-Cube 6
V-Cube 7
V-Cube 8
Skewb
Skewb Diamond
Megaminx
Dogic
Combination puzzles
Tower Cube
References
External links
Jaap's Pyraminx and related puzzles page, with solution
Pyraminx solution from PuzzleSolver
Pyraminx - ruwix.com (how to solve)
A solution to the Pyraminx by Jonathan Bowen
An efficient and easy to follow solution favoured by speed solvers
Patterns A collection of pretty patterns for the Pyraminx
1980s toys
Mechanical puzzles
Combination puzzles
Rubik's Cube
Tetrahedra | Pyraminx | [
"Mathematics"
] | 1,502 | [
"Recreational mathematics",
"Mechanical puzzles"
] |
1,051,627 | https://en.wikipedia.org/wiki/Szemer%C3%A9di%E2%80%93Trotter%20theorem | The Szemerédi–Trotter theorem is a mathematical result in the field of Discrete geometry. It asserts that given points and lines in the Euclidean plane, the number of incidences (i.e., the number of point-line pairs, such that the point lies on the line) is
This bound cannot be improved, except in terms of the implicit constants in its big O notation. An equivalent formulation of the theorem is the following. Given points and an integer , the number of lines which pass through at least of the points is
The original proof of Endre Szemerédi and William T. Trotter was somewhat complicated, using a combinatorial technique known as cell decomposition. Later, László Székely discovered a much simpler proof using the crossing number inequality for graphs. This method has been used to produce the explicit upper bound on the number of incidences. Subsequent research has lowered the constant, coming from the crossing lemma, from 2.5 to 2.44. On the other hand, this bound would not remain valid if one replaces the coefficient 2.44 with 0.42.
The Szemerédi–Trotter theorem has a number of consequences, including Beck's theorem in incidence geometry and the Erdős-Szemerédi sum-product problem in additive combinatorics.
Proof of the first formulation
We may discard the lines which contain two or fewer of the points, as they can contribute at most $2m$ incidences to the total number. Thus we may assume that every line contains at least three of the points.
If a line contains $k$ points, then it will contain $k-1$ line segments which connect two consecutive points along the line. Because $k \geq 3$ after discarding the two-point lines, it follows that $k - 1 \geq k/2$, so the number of these line segments on each line is at least half the number of incidences on that line. Summing over all of the lines, the number of these line segments is again at least half the total number of incidences. Thus if $e$ denotes the number of such line segments, it will suffice to show that $e = O(n^{2/3}m^{2/3} + n + m)$.
Now consider the graph formed by using the $n$ points as vertices, and the $e$ line segments as edges. Since each line segment lies on one of $m$ lines, and any two lines intersect in at most one point, the crossing number of this graph is at most the number of points where two lines intersect, which is at most $m(m-1)/2$. The crossing number inequality implies that either $e \leq 4n$, or that $m(m-1)/2 \geq e^3 / (64n^2)$. In either case $e = O(n + n^{2/3}m^{2/3})$, giving the desired bound.
Proof of the second formulation
Since every pair of points can be connected by at most one line, there can be at most $n(n-1)/2$ lines which can connect $k$ or more points, since $k \geq 2$. This bound will prove the theorem when $k$ is small (e.g. if $k \leq C$ for some absolute constant $C$). Thus, we need only consider the case when $k$ is large, say $k \geq C$.
Suppose that there are $m$ lines that each contain at least $k$ points. These lines generate at least $mk$ incidences, and so by the first formulation of the Szemerédi–Trotter theorem, we have
$mk = O\left(n^{2/3}m^{2/3} + n + m\right),$
and so at least one of the statements $mk = O(n^{2/3}m^{2/3})$, $mk = O(n)$, or $mk = O(m)$ is true. The third possibility is ruled out since $k$ was assumed to be large, so we are left with the first two. But in either of these two cases, some elementary algebra gives the bound $m = O(n^2/k^3 + n/k)$ as desired: the first statement yields $m^{1/3}k = O(n^{2/3})$, i.e. $m = O(n^2/k^3)$, and the second yields $m = O(n/k)$.
Optimality
Except for its constant, the Szemerédi–Trotter incidence bound cannot be improved. To see this, consider for any positive integer $N$ a set of points on the integer lattice
$P = \left\{(a, b) \in \mathbb{Z}^2 : 1 \leq a \leq N,\ 1 \leq b \leq 2N^2\right\},$
and a set of lines
$L = \left\{(x, y) : y = cx + d,\ c, d \in \mathbb{Z},\ 1 \leq c \leq N,\ 1 \leq d \leq N^2\right\}.$
Clearly, $|P| = 2N^3$ and $|L| = N^3$. Since each line is incident to $N$ points of $P$ (one for each $x \in \{1, \dots, N\}$), the number of incidences is $N^4$, which matches the upper bound $O(|P|^{2/3}|L|^{2/3}) = O(N^4)$.
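The construction is easy to verify numerically for small $N$. The brute-force sketch below is illustrative only; the function and variable names are arbitrary:

```python
def incidences(N):
    """Count incidences for P = {1..N} x {1..2N^2} and
    L = {y = c*x + d : 1 <= c <= N, 1 <= d <= N^2} by brute force."""
    count = 0
    for c in range(1, N + 1):
        for d in range(1, N * N + 1):
            for x in range(1, N + 1):
                y = c * x + d
                if 1 <= y <= 2 * N * N:  # is (x, y) a point of P?
                    count += 1
    return count

for N in (2, 3, 4):
    # Every line meets exactly N points, so the count equals N**4,
    # within a constant factor of |P|**(2/3) * |L|**(2/3) = 2**(2/3) * N**4.
    print(N, incidences(N), N**4)
```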
Generalization to $\mathbb{R}^d$
One generalization of this result to arbitrary dimension, $\mathbb{R}^d$, was found by Agarwal and Aronov. Given a set $S$ of $n$ points and a set $A$ of $m$ hyperplanes, each of which is spanned by $S$, the number of incidences between $S$ and $A$ is bounded above by
$O\left(m^{2/3}n^{d/3} + n^{d-1}\right),$
provided $m = \Omega(n^{d-2})$. Equivalently, the number of hyperplanes in $A$ containing $k$ or more points is bounded above by
$O\left(\frac{n^d}{k^3} + \frac{n^{d-1}}{k}\right).$
A construction due to Edelsbrunner shows this bound to be asymptotically optimal.
József Solymosi and Terence Tao obtained near sharp upper bounds for the number of incidences between points and algebraic varieties in higher dimensions, when the points and varieties satisfy "certain pseudo-line type axioms". Their proof uses the Polynomial Ham Sandwich Theorem.
In $\mathbb{C}^2$
Many proofs of the Szemerédi–Trotter theorem over $\mathbb{R}$ rely in a crucial way on the topology of Euclidean space, so they do not extend easily to other fields. For example, the original proof of Szemerédi and Trotter, the polynomial partitioning proof, and the crossing number proof do not extend to the complex plane.
Tóth successfully generalized the original proof of Szemerédi and Trotter to the complex plane by introducing additional ideas. This result was also obtained independently and through a different method by Zahl. The implicit constant in the bound is not the same in the complex numbers: in Tóth's proof the constant is explicit, while in Zahl's proof it is not.
When the point set is a Cartesian product, Solymosi and Tardos show that the Szemerédi-Trotter bound holds using a much simpler argument.
In finite fields
Let $\mathbb{F}$ be a field.
A Szemerédi–Trotter bound is impossible in general due to the following example, stated here in $\mathbb{F}_p^2$: let $P$ be the set of all $p^2$ points and let $L$ be the set of all $p^2 + p$ lines in the plane. Since each line contains $p$ points, there are on the order of $p^3$ incidences. On the other hand, a Szemerédi–Trotter bound would give $O((p^2)^{2/3}(p^2)^{2/3}) = O(p^{8/3})$ incidences. This example shows that the trivial, combinatorial incidence bound $I(P, L) \leq |P||L|^{1/2} + |L|$ (a consequence of the Cauchy–Schwarz inequality) is tight.
Bourgain, Katz and Tao show that if this example is excluded, then an incidence bound that is an improvement on the trivial bound can be attained.
Incidence bounds over finite fields are of two types: (i) when at least one of the set of points or lines is "large" in terms of the characteristic of the field; (ii) both the set of points and the set of lines are "small" in terms of the characteristic.
Large set incidence bounds
Let $q$ be an odd prime power. Then Vinh showed that the number of incidences between $n$ points and $m$ lines in $\mathbb{F}_q^2$ is at most
$\frac{nm}{q} + \sqrt{q\,nm}.$
Note that there is no implicit constant in this bound.
Small set incidence bounds
Let $\mathbb{F}$ be a field of characteristic $p$. Stevens and de Zeeuw show that the number of incidences between $m$ points and $n$ lines in $\mathbb{F}^2$ is
$O\left(m^{11/15}n^{11/15}\right)$
under a suitable condition relating $m$, $n$, and $p$ in positive characteristic. (In a field of characteristic zero, this condition is not necessary.) This bound is better than the trivial incidence estimate when $m$ and $n$ are comparable in size.
If the point set is a Cartesian product, then they show an improved incidence bound: let $P = A \times B$ be a finite set of points with $|A| \leq |B|$ and let $L$ be a set of lines in the plane. Under suitable conditions on the sizes of $A$, $B$, and $L$ (and, in positive characteristic, on $p$), the number of incidences between $P$ and $L$ improves on the general bound above, and the resulting bound is optimal. Note that by point-line duality in the plane, this incidence bound can be rephrased for an arbitrary point set and a set of lines having a Cartesian product structure.
In both the reals and arbitrary fields, Rudnev and Shkredov show an incidence bound for the case when both the point set and the line set have a Cartesian product structure. This is sometimes better than the above bounds.
See also
Hopcroft's problem, the algorithmic problem of detecting a point-line incidence
References
Euclidean plane geometry
Theorems in discrete geometry
Theorems in combinatorics
Articles containing proofs | Szemerédi–Trotter theorem | [
"Mathematics"
] | 1,557 | [
"Theorems in combinatorics",
"Theorems in discrete geometry",
"Euclidean plane geometry",
"Theorems in discrete mathematics",
"Combinatorics",
"Theorems in geometry",
"Articles containing proofs",
"Planes (geometry)"
] |
1,051,892 | https://en.wikipedia.org/wiki/Io%20%28moon%29 | Io (), or Jupiter I, is the innermost and second-smallest of the four Galilean moons of the planet Jupiter. Slightly larger than Earth's moon, Io is the fourth-largest moon in the Solar System, has the highest density of any moon, the strongest surface gravity of any moon, and the lowest amount of water by atomic ratio of any known astronomical object in the Solar System. It was discovered in 1610 by Galileo Galilei and was named after the mythological character Io, a priestess of Hera who became one of Zeus's lovers.
With over 400 active volcanoes, Io is the most geologically active object in the Solar System. This extreme geologic activity is the result of tidal heating from friction generated within Io's interior as it is pulled between Jupiter and the other Galilean moons—Europa, Ganymede and Callisto. Several volcanoes produce plumes of sulfur and sulfur dioxide that climb as high as 500 km (300 mi) above the surface. Io's surface is also dotted with more than 100 mountains that have been uplifted by extensive compression at the base of Io's silicate crust. Some of these peaks are taller than Mount Everest, the highest point on Earth's surface. Unlike most moons in the outer Solar System, which are mostly composed of water ice, Io is primarily composed of silicate rock surrounding a molten iron or iron sulfide core. Most of Io's surface is composed of extensive plains with a frosty coating of sulfur and sulfur dioxide.
Io's volcanism is responsible for many of its unique features. Its volcanic plumes and lava flows produce large surface changes and paint the surface in various subtle shades of yellow, red, white, black, and green, largely due to allotropes and compounds of sulfur. Numerous extensive lava flows, several more than 500 km (300 mi) in length, also mark the surface. The materials produced by this volcanism make up Io's thin, patchy atmosphere, and they also greatly affect the nature and radiation levels of Jupiter's extensive magnetosphere. Io's volcanic ejecta also produce a large, intense plasma torus around Jupiter, creating a hostile radiation environment on and around the moon.
Io played a significant role in the development of astronomy in the 17th and 18th centuries; discovered in January 1610 by Galileo Galilei, along with the other Galilean satellites, this discovery furthered the adoption of the Copernican model of the Solar System, the development of Kepler's laws of motion, and the first measurement of the speed of light. In 1979, the two Voyager spacecraft revealed Io to be a geologically active world, with numerous volcanic features, large mountains, and a young surface with no obvious impact craters. The Galileo spacecraft performed several close flybys in the 1990s and early 2000s, obtaining data about Io's interior structure and surface composition. These spacecraft also revealed the relationship between Io and Jupiter's magnetosphere and the existence of a belt of high-energy radiation centered on Io's orbit. Further observations have been made by Cassini–Huygens in 2000, New Horizons in 2007, and Juno since 2017, as well as from Earth-based telescopes and the Hubble Space Telescope.
Nomenclature
Although Simon Marius is not credited with the sole discovery of the Galilean satellites, his names for the moons were adopted. In his 1614 publication Mundus Iovialis anno M.DC.IX Detectus Ope Perspicilli Belgici, he proposed several alternative names for the innermost of the large moons of Jupiter, including "The Mercury of Jupiter" and "The First of the Jovian Planets". Based on a suggestion from Johannes Kepler in October 1613, he also devised a naming scheme whereby each moon was named for a lover of the Greek god Zeus or his Roman equivalent, Jupiter. He named the innermost large moon of Jupiter after the Greek Io.
Marius's names were not widely adopted until centuries later (mid-20th century). In much of the earlier astronomical literature, Io was generally referred to by its Roman numeral designation (a system introduced by Galileo) as "Jupiter I", or as "the first satellite of Jupiter".
The customary English pronunciation of the name is "EYE-oh", though sometimes people attempt a more 'authentic' pronunciation, "EE-oh". The name has two competing stems in Latin: Īō and (rarely) Īōn. The latter is the basis of the English adjectival form, Ionian.
Features on Io are named after characters and places from the Io myth, as well as deities of fire, volcanoes, the Sun, and thunder from various myths, and characters and places from Dante's Inferno: names appropriate to the volcanic nature of the surface. Since the surface was first seen up close by Voyager 1, the International Astronomical Union has approved 249 names for Io's volcanoes, mountains, plateaus, and large albedo features. The approved feature categories used for Io for different types of volcanic features include patera ('saucer'; volcanic depression), fluctus ('flow'; lava flow), vallis ('valley'; lava channel), and active eruptive center (location where plume activity was the first sign of volcanic activity at a particular volcano). Named mountains, plateaus, layered terrain, and shield volcanoes include the terms mons, mensa ('table'), planum, and tholus ('rotunda'), respectively. Named, bright albedo regions use the term regio. Examples of named features are Prometheus, Pan Mensa, Tvashtar Paterae, and Tsũi Goab Fluctus.
Observational history
The first reported observation of Io was made by Galileo Galilei on 7 January 1610 using a 20x-power, refracting telescope at the University of Padua. However, in that observation, Galileo could not separate Io and Europa due to the low power of his telescope, so the two were recorded as a single point of light. Io and Europa were seen for the first time as separate bodies during Galileo's observations of the Jovian system the following day, 8 January 1610 (used as the discovery date for Io by the IAU). The discovery of Io and the other Galilean satellites of Jupiter was published in Galileo's Sidereus Nuncius in March 1610. In his Mundus Jovialis, published in 1614, Simon Marius claimed to have discovered Io and the other moons of Jupiter in 1609, one week before Galileo's discovery. Galileo doubted this claim and dismissed the work of Marius as plagiarism. Regardless, Marius's first recorded observation came from 29 December 1609 in the Julian calendar, which equates to 8 January 1610 in the Gregorian calendar, which Galileo used. Given that Galileo published his work before Marius, Galileo is credited with the discovery.
For the next two and a half centuries, Io remained an unresolved, 5th-magnitude point of light in astronomers' telescopes. During the 17th century, Io and the other Galilean satellites served a variety of purposes, including early methods to determine longitude, validating Kepler's third law of planetary motion, and determining the time required for light to travel between Jupiter and Earth. Based on ephemerides produced by astronomer Giovanni Cassini and others, Pierre-Simon Laplace created a mathematical theory to explain the resonant orbits of Io, Europa, and Ganymede. This resonance was later found to have a profound effect on the geologies of the three moons.
Improved telescope technology in the late 19th and 20th centuries allowed astronomers to resolve (that is, see as distinct objects) large-scale surface features on Io. In the 1890s, Edward E. Barnard was the first to observe variations in Io's brightness between its equatorial and polar regions, correctly determining that this was due to differences in color and albedo between the two regions and not due to Io being egg-shaped, as proposed at the time by fellow astronomer William Pickering, or two separate objects, as initially proposed by Barnard. Later telescopic observations confirmed Io's distinct reddish-brown polar regions and yellow-white equatorial band.
Telescopic observations in the mid-20th century began to hint at Io's unusual nature. Spectroscopic observations suggested that Io's surface was devoid of water ice (a substance found to be plentiful on the other Galilean satellites). The same observations suggested a surface dominated by evaporites composed of sodium salts and sulfur. Radiotelescopic observations revealed Io's influence on the Jovian magnetosphere, as demonstrated by decametric wavelength bursts tied to the orbital period of Io.
Pioneer
The first spacecraft to pass by Io were the Pioneer 10 and 11 probes on 3 December 1973 and 2 December 1974, respectively. Radio tracking provided an improved estimate of Io's mass, which, along with the best available information of its size, suggested it had the highest density of the Galilean satellites, and was composed primarily of silicate rock rather than water ice. The Pioneers also revealed the presence of a thin atmosphere and intense radiation belts near the orbit of Io. The camera on board Pioneer 11 took the only good image of the moon obtained by either spacecraft, showing its north polar region and its yellow tint. Close-up images were planned during Pioneer 10's encounter, but those were lost because of the high-radiation environment.
Voyager
When the twin probes Voyager 1 and Voyager 2 passed by Io in 1979, their more advanced imaging systems allowed for far more detailed images. Voyager 1 flew past Io on 5 March 1979 from a distance of 20,600 km (12,800 mi). The images returned during the approach revealed a strange, multi-colored landscape devoid of impact craters. The highest-resolution images showed a relatively young surface punctuated by oddly shaped pits, mountains taller than Mount Everest, and features resembling volcanic lava flows.
Shortly after the encounter, Voyager navigation engineer Linda A. Morabito noticed a plume emanating from the surface in one of the images. Analysis of other Voyager 1 images showed nine such plumes scattered across the surface, proving that Io was volcanically active. This conclusion was predicted in a paper published shortly before the Voyager 1 encounter by Stan Peale, Patrick Cassen, and R. T. Reynolds. The authors calculated that Io's interior must experience significant tidal heating caused by its orbital resonance with Europa and Ganymede (see the "Tidal heating" section for a more detailed explanation of the process). Data from this flyby showed that the surface of Io is dominated by sulfur and sulfur dioxide frosts. These compounds also dominate its thin atmosphere and the torus of plasma centered on Io's orbit (also discovered by Voyager).
Voyager 2 passed Io on 9 July 1979 at a distance of 1,130,000 km (700,000 mi). Though it did not approach nearly as close as Voyager 1, comparisons between images taken by the two spacecraft showed several surface changes that had occurred in the four months between the encounters. In addition, observations of Io as a crescent as Voyager 2 departed the Jovian system revealed that seven of the nine plumes observed in March were still active in July 1979, with only the volcano Pele shutting down between flybys.
Galileo
The Galileo spacecraft arrived at Jupiter in 1995 after a six-year journey from Earth to follow up on the discoveries of the two Voyager probes and the ground-based observations made in the intervening years. Io's location within one of Jupiter's most intense radiation belts precluded a prolonged close flyby, but Galileo did pass close by shortly before entering orbit for its two-year, primary mission studying the Jovian system. Although no images were taken during the close flyby on 7 December 1995, the encounter did yield significant results, such as the discovery of a large iron core, similar to that found on the rocky planets of the inner Solar System.
Despite the lack of close-up imaging and mechanical problems that greatly restricted the amount of data returned, several significant discoveries were made during Galileo's primary mission. Galileo observed the effects of a major eruption at Pillan Patera and confirmed that volcanic eruptions are composed of silicate magmas with magnesium-rich mafic and ultramafic compositions. Distant imaging of Io was acquired for almost every orbit during the primary mission, revealing large numbers of active volcanoes (both thermal emission from cooling magma on the surface and volcanic plumes), numerous mountains with widely varying morphologies, and several surface changes that had taken place both between the Voyager and Galileo eras and between Galileo orbits.
The Galileo mission was twice extended, in 1997 and 2000. During these extended missions, the probe flew by Io three times in late 1999 and early 2000, and three times in late 2001 and early 2002. Observations during these encounters revealed the geologic processes occurring at Io's volcanoes and mountains, excluded the presence of a magnetic field, and demonstrated the extent of volcanic activity.
Cassini
In December 2000, the Cassini spacecraft had a distant and brief encounter with the Jovian system en route to Saturn, allowing for joint observations with Galileo. These observations revealed a new plume at Tvashtar Paterae and provided insights into Io's aurorae.
New Horizons
The New Horizons spacecraft, en route to Pluto and the Kuiper belt, flew by the Jovian system and Io on 28 February 2007. During the encounter, numerous distant observations of Io were obtained. These included images of a large plume at Tvashtar, providing the first detailed observations of the largest class of Ionian volcanic plume since observations of Pele's plume in 1979. New Horizons also captured images of a volcano near Girru Patera in the early stages of an eruption, and several volcanic eruptions that have occurred since Galileo.
Juno
The Juno spacecraft was launched in 2011 and entered orbit around Jupiter on 5 July 2016. Juno's mission is primarily focused on improving our understanding of Jupiter's interior, magnetic field, aurorae, and polar atmosphere. Juno's 54-day orbit is highly inclined and highly eccentric in order to better characterize Jupiter's polar regions and to limit its exposure to the planet's harsh inner radiation belts, limiting close encounters with Jupiter's moons. The closest approach to Io during the initial, prime mission occurred in February 2020 at a distance of 195,000 kilometers. Juno's extended mission, begun in June 2021, allowed for closer encounters with Jupiter's Galilean satellites due to Juno's orbital precession. After a series of increasingly closer encounters with Io in 2022 and 2023, Juno performed a pair of close flybys on 30 December 2023, and 3 February 2024, both with altitudes of 1,500 kilometers. The primary goals of these encounters were to improve our understanding of Io's gravity field using Doppler tracking and to image Io's surface to look for surface changes since Io was last seen up-close in 2007.
During several orbits, Juno has observed Io from a distance using JunoCam, a wide-angle, visible-light camera, to look for volcanic plumes and JIRAM, a near-infrared spectrometer and imager, to monitor thermal emission from Io's volcanoes. JIRAM near-infrared spectroscopy has so far allowed for the coarse mapping of sulfur dioxide frost across Io's surface as well as mapping minor surface components weakly absorbing sunlight at 2.1 and 2.65 μm.
Future missions
There are two forthcoming missions planned for the Jovian system. The Jupiter Icy Moon Explorer (JUICE) is a planned European Space Agency mission to the Jovian system that is intended to end up in Ganymede orbit. JUICE launched in April 2023, with arrival at Jupiter planned for July 2031. JUICE will not fly by Io, but it will use its instruments, such as a narrow-angle camera, to monitor Io's volcanic activity and measure its surface composition during the two-year Jupiter-tour phase of the mission prior to Ganymede orbit insertion. Europa Clipper is a planned NASA mission to the Jovian system focused on Jupiter's moon Europa. Like JUICE, Europa Clipper will not perform any flybys of Io, but distant volcano monitoring is likely. Europa Clipper launched in October 2024, with an arrival at Jupiter in 2030.
The Io Volcano Observer (IVO) was a proposal to NASA for a low-cost, Discovery-class mission selected for a Phase A study along with three other missions in 2020. IVO would have launched in January 2029 and performed ten flybys of Io while in orbit around Jupiter beginning in the early 2030s. However, the Venus missions DAVINCI+ and VERITAS were selected instead.
Orbit and rotation
Io orbits Jupiter at a distance of 421,700 km (262,000 mi) from Jupiter's center and 350,000 km (217,000 mi) from its cloudtops. It is the innermost of the Galilean satellites of Jupiter, its orbit lying between those of Thebe and Europa. Including Jupiter's inner satellites, Io is the fifth moon out from Jupiter. It takes Io about 42.5 hours (1.77 days) to complete one orbit around Jupiter (fast enough for its motion to be observed over a single night of observation). Io is in a 2:1 mean-motion orbital resonance with Europa and a 4:1 mean-motion orbital resonance with Ganymede, completing two orbits of Jupiter for every one orbit completed by Europa, and four orbits for every one completed by Ganymede. This resonance helps maintain Io's orbital eccentricity (0.0041), which in turn provides the primary heating source for its geologic activity. Without this forced eccentricity, Io's orbit would circularize through tidal dissipation, leading to a less geologically active world.
Like the other Galilean satellites and the Moon, Io rotates synchronously with its orbital period, keeping one face nearly pointed toward Jupiter. This synchrony provides the definition for Io's longitude system. Io's prime meridian intersects the equator at the sub-Jovian point. The side of Io that always faces Jupiter is known as the subjovian hemisphere, whereas the side that always faces away is known as the antijovian hemisphere. The side of Io that always faces in the direction that Io travels in its orbit is known as the leading hemisphere, whereas the side that always faces in the opposite direction is known as the trailing hemisphere.
From the surface of Io, Jupiter would subtend an arc of 19.5°, making Jupiter appear 39 times the apparent diameter of Earth's Moon.
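The resonance ratios and the quoted angular size can be reproduced from standard published values; the sketch below assumes rounded sidereal periods (1.769, 3.551, and 7.155 days) and Jupiter's equatorial radius of 71,492 km, so it is a rough check rather than a precise ephemeris calculation.

```python
import math

# Laplace-resonance period ratios (sidereal periods in days, rounded).
io, europa, ganymede = 1.769, 3.551, 7.155
print(f"{europa / io:.3f}, {ganymede / io:.3f}")  # ~2.007 and ~4.045

# Apparent angular size of Jupiter as seen from Io's orbital distance.
r_jupiter_km = 71_492.0
orbit_km = 421_700.0
angle_deg = 2 * math.degrees(math.asin(r_jupiter_km / orbit_km))
print(f"{angle_deg:.1f} degrees")  # ~19.5, matching the figure above
```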
Interaction with Jupiter's magnetosphere
Io plays a significant role in shaping Jupiter's magnetic field, acting as an electric generator that can develop 400,000 volts across itself and create an electric current of 3 million amperes, releasing ions that give Jupiter a magnetic field inflated to more than twice the size it would otherwise have. The magnetosphere of Jupiter sweeps up gases and dust from Io's thin atmosphere at a rate of 1 tonne per second. This material is mostly composed of ionized and atomic sulfur, oxygen and chlorine; atomic sodium and potassium; molecular sulfur dioxide and sulfur; and sodium chloride dust. These materials originate from Io's volcanic activity, with the material that escapes to Jupiter's magnetic field and into interplanetary space coming directly from Io's atmosphere. These materials, depending on their ionized state and composition, end up in various neutral (non-ionized) clouds and radiation belts in Jupiter's magnetosphere and, in some cases, are eventually ejected from the Jovian system.
Surrounding Io (at a distance of up to six Io radii from its surface) is a cloud of neutral sulfur, oxygen, sodium, and potassium atoms. These particles originate in Io's upper atmosphere and are excited by collisions with ions in the plasma torus (discussed below) and by other processes into filling Io's Hill sphere, which is the region where Io's gravity is dominant over Jupiter's. Some of this material escapes Io's gravitational pull and goes into orbit around Jupiter. Over a 20-hour period, these particles spread out from Io to form a banana-shaped, neutral cloud that can reach as far as six Jovian radii from Io, either inside Io's orbit and ahead of it or outside Io's orbit and behind it. The collision process that excites these particles also occasionally provides sodium ions in the plasma torus with an electron, removing those new "fast" neutrals from the torus. These particles retain their velocity (70 km/s, compared to the 17 km/s orbital velocity at Io), and are thus ejected in jets leading away from Io.
Io orbits within a belt of intense radiation known as the Io plasma torus. The plasma in this doughnut-shaped ring of ionized sulfur, oxygen, sodium, and chlorine originates when neutral atoms in the "cloud" surrounding Io are ionized and carried along by the Jovian magnetosphere. Unlike the particles in the neutral cloud, these particles co-rotate with Jupiter's magnetosphere, revolving around Jupiter at 74 km/s. Like the rest of Jupiter's magnetic field, the plasma torus is tilted with respect to Jupiter's equator (and Io's orbital plane), so that Io is at times below and at other times above the core of the plasma torus. As noted above, these ions' higher velocity and energy levels are partly responsible for the removal of neutral atoms and molecules from Io's atmosphere and more extended neutral clouds. The torus is composed of three sections: an outer, "warm" torus that resides just outside Io's orbit; a vertically extended region known as the "ribbon", composed of the neutral source region and cooling plasma, located at around Io's distance from Jupiter; and an inner, "cold" torus, composed of particles that are slowly spiraling in toward Jupiter. After residing an average of 40 days in the torus, particles in the "warm" torus escape and are partially responsible for Jupiter's unusually large magnetosphere, their outward pressure inflating it from within. Particles from Io, detected as variations in magnetospheric plasma, have been detected far into the long magnetotail by New Horizons. To study similar variations within the plasma torus, researchers measured the ultraviolet light it emits. Although such variations have not been definitively linked to variations in Io's volcanic activity (the ultimate source for material in the plasma torus), this link has been established in the neutral sodium cloud.
During an encounter with Jupiter in 1992, the Ulysses spacecraft detected a stream of dust-sized particles being ejected from the Jovian system. The dust in these discrete streams travels away from Jupiter at speeds upwards of several hundred kilometers per second, has an average particle size of 10 μm, and consists primarily of sodium chloride. Dust measurements by Galileo showed that these dust streams originated on Io, but exactly how these form, whether from Io's volcanic activity or material removed from the surface, is unknown.
Jupiter's magnetic field, which Io crosses, couples Io's atmosphere and neutral cloud to Jupiter's polar upper atmosphere by generating an electric current known as the Io flux tube. This current produces an auroral glow in Jupiter's polar regions known as the Io footprint, as well as aurorae in Io's atmosphere. Particles from this auroral interaction darken the Jovian polar regions at visible wavelengths. The location of Io and its auroral footprint with respect to Earth and Jupiter has a strong influence on Jovian radio emissions from our vantage point: when Io is visible, radio signals from Jupiter increase considerably. The Juno mission, currently in orbit around Jupiter, should help shed light on these processes. The Jovian magnetic field lines that do get past Io's ionosphere also induce an electric current, which in turn creates an induced magnetic field within Io's interior. Io's induced magnetic field is thought to be generated within a partially molten, silicate magma ocean 50 kilometers beneath Io's surface. Similar induced fields were found at the other Galilean satellites by Galileo, possibly generated within liquid water oceans in the interiors of those moons.
According to an international study published in the journal Nature in 2024, Io may lack a magma ocean entirely, despite its large number of volcanoes and its tidal interaction with Jupiter, contrary to what historical data from the Galileo probe had suggested. Using data from two recent flybys by the Juno probe, the scientists argued that an almost solid mantle, rather than an ocean of magma, exists beneath Io's surface.
Geology
Io is slightly larger than Earth's Moon. It has a mean radius of 1,821.6 km (about 5% greater than the Moon's) and a mass of 8.9319 × 10²² kg (about 21% greater than the Moon's). It is a slight ellipsoid in shape, with its longest axis directed toward Jupiter. Among the Galilean satellites, in both mass and volume, Io ranks behind Ganymede and Callisto but ahead of Europa.
Interior
Composed primarily of silicate rock and iron, Io and Europa are closer in bulk composition to terrestrial planets than to other satellites in the outer Solar System, which are mostly composed of a mix of water ice and silicates. Io has a density of 3.528 g/cm³, the highest of any regular moon in the Solar System; significantly higher than the other Galilean satellites (Ganymede and Callisto in particular, whose densities are around 1.9 g/cm³) and slightly higher (~5.5%) than the Moon's 3.344 g/cm³ and Europa's 3.013 g/cm³. Models based on the Voyager and Galileo measurements of Io's mass, radius, and quadrupole gravitational coefficients (numerical values related to how mass is distributed within an object) suggest that its interior is differentiated between a silicate-rich crust and mantle and an iron- or iron-sulfide-rich core. Io's metallic core makes up approximately 20% of its mass. Depending on the amount of sulfur in the core, the core has a radius between 350 and 650 km if it is composed almost entirely of iron, or between 550 and 900 km for a core consisting of a mix of iron and sulfur. Galileo's magnetometer failed to detect an internal, intrinsic magnetic field at Io, suggesting that the core is not convecting.
Modeling of Io's interior composition suggests that the mantle is composed of at least 75% of the magnesium-rich mineral forsterite, and has a bulk composition similar to that of L-chondrite and LL-chondrite meteorites, with higher iron content (compared to silicon) than the Moon or Earth, but lower than Mars. To support the heat flow observed on Io, 10–20% of Io's mantle may be molten, though regions where high-temperature volcanism has been observed may have higher melt fractions. However, re-analysis of Galileo magnetometer data in 2009 revealed the presence of an induced magnetic field at Io, requiring a magma ocean below the surface. Further analysis published in 2011 provided direct evidence of such an ocean. This layer is estimated to be 50 km thick and to make up about 10% of Io's mantle. It is estimated that the temperature in the magma ocean reaches 1,200 °C. It is not known if the 10–20% partial melting percentage for Io's mantle is consistent with the requirement for a significant amount of molten silicates in this possible magma ocean. The lithosphere of Io, composed of basalt and sulfur deposited by Io's extensive volcanism, is at least 12 km thick, and likely less than 40 km thick.
Tidal heating
Unlike Earth and the Moon, Io's main source of internal heat comes from tidal dissipation rather than radioactive isotope decay, the result of Io's orbital resonance with Europa and Ganymede. Such heating is dependent on Io's distance from Jupiter, its orbital eccentricity, the composition of its interior, and its physical state. Its Laplace resonance with Europa and Ganymede maintains Io's eccentricity and prevents tidal dissipation within Io from circularizing its orbit. The resonant orbit also helps to maintain Io's distance from Jupiter; otherwise tides raised on Jupiter would cause Io to slowly spiral outward from its parent planet. The tidal forces experienced by Io are about 20,000 times stronger than the tidal forces Earth experiences due to the Moon, and the vertical differences in its tidal bulge, between the times Io is at periapsis and apoapsis in its orbit, could be as much as 100 m. The friction or tidal dissipation produced in Io's interior due to this varying tidal pull, which, without the resonant orbit, would have gone into circularizing Io's orbit instead, creates significant tidal heating within Io's interior, melting a significant amount of Io's mantle and core. The amount of energy produced is up to 200 times greater than that produced solely from radioactive decay. This heat is released in the form of volcanic activity, generating its observed high heat flow (global total: 0.6 to 1.6 × 10¹⁴ W). Models of its orbit suggest that the amount of tidal heating within Io changes with time; however, the current amount of tidal dissipation is consistent with the observed heat flow. Models of tidal heating and convection have not found consistent planetary viscosity profiles that simultaneously match tidal energy dissipation and mantle convection of heat to the surface.
Although there is general agreement that the origin of the heat manifested in Io's many volcanoes is tidal heating from the pull of gravity from Jupiter and its moon Europa, the volcanoes are not in the positions predicted by tidal heating. They are shifted 30 to 60 degrees to the east. A study published by Tyler et al. (2015) suggests that this eastern shift may be caused by an ocean of molten rock under the surface. The movement of this magma would generate extra heat through friction owing to its viscosity. The study's authors believe that this subsurface ocean is a mixture of molten and solid rock.
Other moons in the Solar System are also tidally heated, and they too may generate additional heat through the friction of subsurface magma or water oceans. This ability to generate heat in a subsurface ocean increases the chance of life on bodies like Europa and Enceladus.
Surface
Based on their experience with the ancient surfaces of the Moon, Mars, and Mercury, scientists expected to see numerous impact craters in Voyager 1's first images of Io. The density of impact craters across Io's surface would have given clues to Io's age. However, they were surprised to discover that the surface was almost completely lacking in impact craters, and was instead covered in smooth plains dotted with tall mountains, pits of various shapes and sizes, and volcanic lava flows. Compared to most worlds observed to that point, Io's surface was covered in a variety of colorful materials (leading Io to be compared to a rotten orange or to a pizza) from various sulfurous compounds. The lack of impact craters indicated that Io's surface is geologically young, like the terrestrial surface; volcanic materials continuously bury craters as they are produced. This result was spectacularly confirmed as at least nine active volcanoes were observed by Voyager 1.
Surface composition
Io's colorful appearance is the result of materials deposited by its extensive volcanism, including silicates (such as orthopyroxene), sulfur, and sulfur dioxide. Sulfur dioxide frost is ubiquitous across the surface of Io, forming large regions covered in white or grey materials. Sulfur is also seen in many places across Io, forming yellow to yellow-green regions. Sulfur deposited in the mid-latitude and polar regions is often damaged by radiation, which breaks up the normally stable cyclic, eight-atom form of sulfur (S8). This radiation damage produces Io's red-brown polar regions.
Explosive volcanism, often taking the form of umbrella-shaped plumes, paints the surface with sulfurous and silicate materials. Plume deposits on Io are often colored red or white, depending on the amount of sulfur and sulfur dioxide in the plume. Generally, plumes formed at volcanic vents from degassing lava contain a greater amount of S2, producing a red "fan" deposit or, in extreme cases, large red rings, often reaching beyond 450 km from the central vent. A prominent example of a red-ring plume deposit is located at Pele. These red deposits consist primarily of sulfur (generally 3- and 4-chain molecular sulfur), sulfur dioxide, and perhaps sulfuryl chloride. Plumes formed at the margins of silicate lava flows (through the interaction of lava and pre-existing deposits of sulfur and sulfur dioxide) produce white or gray deposits.
Compositional mapping and Io's high density suggest that Io contains little to no water, though small pockets of water ice or hydrated minerals have been tentatively identified, most notably on the northwest flank of the mountain Gish Bar Mons. Io has the least amount of water of any known body in the Solar System. This lack of water is likely due to Jupiter being hot enough early in the evolution of the Solar System to drive off volatile materials like water in the vicinity of Io, but not hot enough to do so farther out.
Volcanism
The tidal heating produced by Io's forced orbital eccentricity has made it the most volcanically active world in the Solar System, with hundreds of volcanic centers and extensive lava flows. During a major eruption, lava flows tens or even hundreds of kilometers long can be produced, consisting mostly of basaltic silicate lavas with either mafic or ultramafic (magnesium-rich) compositions. As a by-product of this activity, sulfur, sulfur dioxide gas and silicate pyroclastic material (like ash) are blown up to 200 km into space, producing large, umbrella-shaped plumes, painting the surrounding terrain in red, black, and white, and providing material for Io's patchy atmosphere and Jupiter's extensive magnetosphere.
Io's surface is dotted with volcanic depressions known as paterae, which generally have flat floors bounded by steep walls. These features resemble terrestrial calderas, but it is unknown if they are produced through collapse over an emptied lava chamber like their terrestrial cousins. One hypothesis suggests that these features are produced through the exhumation of volcanic sills, with the overlying material either blasted out or integrated into the sill. Examples of paterae in various stages of exhumation have been mapped using Galileo images of the Chaac-Camaxtli region. Unlike similar features on Earth and Mars, these depressions generally do not lie at the peak of shield volcanoes and are normally larger, with an average diameter of 41 km, the largest being Loki Patera at 202 km. Loki is also consistently the strongest volcano on Io, contributing on average 25% of Io's global heat output. Whatever the formation mechanism, the morphology and distribution of many paterae suggest that these features are structurally controlled, with at least half bounded by faults or mountains. These features are often the sites of volcanic eruptions, either from lava flows spreading across the floors of the paterae, as at an eruption at Gish Bar Patera in 2001, or in the form of a lava lake. Lava lakes on Io either have a continuously overturning lava crust, such as at Pele, or an episodically overturning crust, such as at Loki.
Lava flows represent another major volcanic terrain on Io. Magma erupts onto the surface from vents on the floor of paterae or on the plains from fissures, producing inflated, compound lava flows similar to those seen at Kilauea in Hawaii. Images from the Galileo spacecraft revealed that many of Io's major lava flows, like those at Prometheus and Amirani, are produced by the build-up of small breakouts of lava flows on top of older flows. Larger outbreaks of lava have also been observed on Io. For example, the leading edge of the Prometheus flow moved 75 to 95 km between Voyager in 1979 and the first Galileo observations in 1996. A major eruption in 1997 produced more than 3,500 km² of fresh lava and flooded the floor of the adjacent Pillan Patera.
Analysis of the Voyager images led scientists to believe that these flows were composed mostly of various compounds of molten sulfur. However, subsequent Earth-based infrared studies and measurements from the Galileo spacecraft indicate that these flows are composed of basaltic lava with mafic to ultramafic compositions. This hypothesis is based on temperature measurements of Io's "hotspots", or thermal-emission locations, which suggest temperatures of at least 1,300 K and some as high as 1,600 K. Initial estimates suggesting eruption temperatures approaching 2,000 K have since proven to be overestimates because the wrong thermal models were used.
The discovery of plumes at the volcanoes Pele and Loki was the first sign that Io is geologically active. Generally, these plumes are formed when volatiles like sulfur and sulfur dioxide are ejected skyward from Io's volcanoes at speeds reaching 1 km/s, creating umbrella-shaped clouds of gas and dust. Additional material that might be found in these volcanic plumes includes sodium, potassium, and chlorine. These plumes appear to be formed in one of two ways. Io's largest plumes, such as those emitted by Pele, are created when dissolved sulfur and sulfur dioxide gas are released from erupting magma at volcanic vents or lava lakes, often dragging silicate pyroclastic material with them. These plumes form red (from the short-chain sulfur) and black (from the silicate pyroclastics) deposits on the surface. Plumes formed in this manner are among the largest observed at Io, forming red rings more than 1,000 km in diameter. Examples of this plume type include Pele, Tvashtar, and Dazhbog. Another type of plume is produced when encroaching lava flows vaporize underlying sulfur dioxide frost, sending the gas skyward. This type of plume often forms bright circular deposits consisting of sulfur dioxide. These plumes are often less than 100 km tall, and are among the most long-lived plumes on Io. Examples include Prometheus, Amirani, and Masubi. The erupted sulfurous compounds are concentrated in the upper crust from a decrease in sulfur solubility at greater depths in Io's lithosphere, and can be a determinant for the eruption style of a hot spot.
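The heights of these plumes follow almost directly from ballistics: with a negligible atmosphere, material launched at speed v rises to roughly v²/2g. A minimal sketch in Python, assuming Io's surface gravity of about 1.80 m/s² and the 1 km/s ejection speed quoted above:

```python
# Ballistic rise height of plume material on Io, ignoring drag: h = v**2 / (2 * g).
g_io = 1.80   # Io's surface gravity, m/s^2 (commonly cited value)
v = 1000.0    # ejection speed, m/s (upper end quoted for the largest plumes)

h = v**2 / (2 * g_io)        # maximum rise height, m
print(f"{h / 1000:.0f} km")  # ~278 km, comparable to observed large-plume heights
```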
Mountains
Io has 100 to 150 mountains. These structures average 6 km in height and reach a maximum of 17.5 km at South Boösaule Montes. Mountains often appear as large (the average mountain is 157 km long), isolated structures with no apparent global tectonic pattern, in contrast to the case on Earth. To support the tremendous topography observed at these mountains requires compositions consisting mostly of silicate rock, as opposed to sulfur.
Despite the extensive volcanism that gives Io its distinctive appearance, nearly all of its mountains are tectonic structures, and are not produced by volcanoes. Instead, most Ionian mountains form as the result of compressive stresses on the base of the lithosphere, which uplift and often tilt chunks of Io's crust through thrust faulting. The compressive stresses leading to mountain formation are the result of subsidence from the continuous burial of volcanic materials. The global distribution of mountains appears to be opposite that of volcanic structures; mountains dominate areas with fewer volcanoes and vice versa. This suggests large-scale regions in Io's lithosphere where compression (supportive of mountain formation) and extension (supportive of patera formation) dominate. Locally, however, mountains and paterae often abut one another, suggesting that magma often exploits faults formed during mountain formation to reach the surface.
Mountains on Io (generally, structures rising above the surrounding plains) have a variety of morphologies. Plateaus are most common. These structures resemble large, flat-topped mesas with rugged surfaces. Other mountains appear to be tilted crustal blocks, with a shallow slope from the formerly flat surface and a steep slope consisting of formerly sub-surface materials uplifted by compressive stresses. Both types of mountains often have steep scarps along one or more margins. Only a handful of mountains on Io appear to have a volcanic origin. These mountains resemble small shield volcanoes, with steep slopes (6–7°) near a small, central caldera and shallow slopes along their margins. These volcanic mountains are often smaller than the average mountain on Io, averaging only 1 to 2 km in height and 40 to 60 km wide. Other shield volcanoes with much shallower slopes are inferred from the morphology of several of Io's volcanoes, where thin flows radiate out from a central patera, such as at Ra Patera.
Nearly all mountains appear to be in some stage of degradation. Large landslide deposits are common at the base of Ionian mountains, suggesting that mass wasting is the primary form of degradation. Scalloped margins are common among Io's mesas and plateaus, the result of sulfur dioxide sapping from Io's crust, producing zones of weakness along mountain margins.
Atmosphere
Io has an extremely thin atmosphere consisting mainly of sulfur dioxide (SO2), with minor constituents including sulfur monoxide (SO), sodium chloride (NaCl), and atomic sulfur and oxygen. The atmosphere has significant variations in density and temperature with time of day, latitude, volcanic activity, and surface frost abundance. The maximum atmospheric pressure on Io ranges from 3.3 × 10⁻⁵ to 3 × 10⁻⁴ pascals (Pa), or 0.3 to 3 nbar, found spatially on Io's anti-Jupiter hemisphere and along the equator, and temporally in the early afternoon when the temperature of surface frost peaks. Localized peaks at volcanic plumes have also been seen, with pressures of 5 × 10⁻⁴ to 4 × 10⁻³ Pa (5 to 40 nbar). Io's atmospheric pressure is lowest on Io's night side, where the pressure dips to 1 × 10⁻⁸ to 1 × 10⁻⁷ Pa (0.0001 to 0.001 nbar). Io's atmospheric temperature ranges from the temperature of the surface at low altitudes, where sulfur dioxide is in vapor pressure equilibrium with frost on the surface, to 1,800 K at higher altitudes where the lower atmospheric density permits heating from plasma in the Io plasma torus and from Joule heating from the Io flux tube. The low pressure limits the atmosphere's effect on the surface, except for temporarily redistributing sulfur dioxide from frost-rich to frost-poor areas, and for expanding the size of plume deposit rings when plume material re-enters the thicker dayside atmosphere.
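To see why the atmospheric density swings so strongly between day and night, note that a gas in vapor pressure equilibrium with its frost follows a roughly exponential (Arrhenius-type) law in temperature. The Python sketch below illustrates that sensitivity over a plausible range of Io surface temperatures; the constants p0 and L_over_R are placeholder assumptions chosen for illustration, not measured properties of SO2 ice:

```python
# Illustrative frost vapor-pressure law: p(T) = p0 * exp(-L_over_R / T).
# p0 and L_over_R are assumed, illustrative constants, not fitted SO2 values.
import math

p0 = 1.5e13        # pre-exponential factor, Pa (assumption)
L_over_R = 4510.0  # latent heat over gas constant, K (assumption)

for T in (90.0, 110.0, 130.0):  # rough night / terminator / day temperatures, K
    p = p0 * math.exp(-L_over_R / T)
    print(f"T = {T:5.1f} K  ->  p ~ {p:.1e} Pa")
```

Even with made-up constants, the output spans many orders of magnitude between 90 K and 130 K, which is the qualitative point behind the partial nighttime and eclipse collapse described below.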
Gas in Io's atmosphere is stripped by Jupiter's magnetosphere, escaping to either the neutral cloud that surrounds Io, or the Io plasma torus, a ring of ionized particles that shares Io's orbit but co-rotates with the magnetosphere of Jupiter. Approximately one ton of material is removed from the atmosphere every second through this process, so that it must be constantly replenished. The most dramatic sources of SO2 are volcanic plumes, which pump 10⁴ kg of sulfur dioxide per second into Io's atmosphere on average, though most of this condenses back onto the surface. Much of the sulfur dioxide in Io's atmosphere is sustained by sunlight-driven sublimation of SO2 frozen on the surface. The day-side atmosphere is largely confined to within 40° of the equator, where the surface is warmest and most active volcanic plumes reside. A sublimation-driven atmosphere is also consistent with observations that Io's atmosphere is densest over the anti-Jupiter hemisphere, where SO2 frost is most abundant, and is densest when Io is closer to the Sun. However, some contributions from volcanic plumes are required, as the highest observed densities have been seen near volcanic vents. Because the density of sulfur dioxide in the atmosphere is tied directly to surface temperature, Io's atmosphere partially collapses at night, or when Io is in the shadow of Jupiter (with an ~80% drop in column density). The collapse during eclipse is limited somewhat by the formation of a diffusion layer of sulfur monoxide in the lowest portion of the atmosphere, but the pressure of Io's nightside atmosphere is two to four orders of magnitude less than at its peak just past noon. The minor constituents of Io's atmosphere, such as NaCl, SO, O, and S, derive either from direct volcanic outgassing; from photodissociation (chemical breakdown caused by solar ultraviolet radiation) of SO2; or from the sputtering of surface deposits by charged particles from Jupiter's magnetosphere.
Various researchers have proposed that the atmosphere of Io freezes onto the surface when it passes into the shadow of Jupiter. Evidence for this is a "post-eclipse brightening", where the moon sometimes appears a bit brighter, as if covered with frost, immediately after eclipse. After about 15 minutes the brightness returns to normal, presumably because the frost has disappeared through sublimation. Besides being seen through ground-based telescopes, post-eclipse brightening was found in near-infrared wavelengths using an instrument aboard the Cassini spacecraft. Further support for this idea came in 2013, when the Gemini Observatory was used to directly measure the collapse of Io's atmosphere during, and its reformation after, eclipse by Jupiter.
High-resolution images of Io acquired while Io is experiencing an eclipse reveal an aurora-like glow. As on Earth, this is due to particle radiation hitting the atmosphere, though in this case the charged particles come from Jupiter's magnetic field rather than the solar wind. Aurorae usually occur near the magnetic poles of planets, but Io's are brightest near its equator. Io lacks an intrinsic magnetic field of its own; therefore, electrons traveling along Jupiter's magnetic field near Io directly impact Io's atmosphere. More electrons collide with its atmosphere where the field lines are tangent to Io (i.e. near the equator), producing the brightest aurora there, because the column of gas they pass through is longest. Aurorae associated with these tangent points on Io are observed to rock with the changing orientation of Jupiter's tilted magnetic dipole. Fainter aurorae have also been observed: red glows from oxygen atoms along the limb of Io, and green glows from sodium atoms on Io's night side.
See also
Atmosphere of Io
Exploration of Io
Jupiter
Moons of Jupiter
Galilean moons (the four biggest moons of Jupiter)
Jupiter's moons in fiction
List of natural satellites
Planetary geology
References
External links
General information
Io profile at NASA's Solar System Exploration site
Bill Arnett's Io webpage from The Nine Planets website
Io overview from the University of Michigan's Windows to the Universe
Calvin Hamilton's Io page from the Views of the Solar System website
Movies
Paul Schenk's 3D images and flyover videos of Io and other outer solar system satellites
High resolution video simulation of rotating Io by Seán Doran
Images
Catalog of NASA images of Io
Galileo images of Io
New Horizons images of Io
New Horizons LORRI Raw Images, includes numerous Io images
Io through Different New Horizons Imagers
Maps
Io global basemaps at the USGS Astrogeology Science Center based on Galileo and Voyager images
Io nomenclature and map with feature names from the USGS planetary nomenclature page
Interactive map of Io by Google Maps
Additional references
Io dynamo from educational website The Exploration of the Earth's Magnetosphere
NASA's Stunning Discoveries on Jupiter's Largest Moons | Our Solar System's Moons
The Conundrum Posed by Io's Minimum Surface Temperatures
Io Mountain Database
Cassini Observations of Io's Visible Aurorae at the USGS Astrogeology Science Center
The Gish Bar Times, Jason Perry's Io-related blog
Articles containing video clips
Discoveries by Galileo Galilei
Moons of Jupiter
Moons with a prograde orbit
Solar System | Io (moon) | [
"Astronomy"
] | 9,763 | [
"Outer space",
"Solar System"
] |
1,051,970 | https://en.wikipedia.org/wiki/Friedrich%20Tiedemann | Friedrich Tiedemann FRS HFRSE (23 August 1781 – 22 January 1861) was a German anatomist and physiologist. He was an expert on the anatomy of the brain.
Tiedemann spent most of his career as professor of anatomy and physiology at Heidelberg University, a position to which he was appointed in 1816, after having filled the chair of anatomy and zoology for ten years at Landshut. He was elected member of the Royal Swedish Academy of Sciences in 1827. In 1836, he was elected Honorary Fellow of the Royal College of Surgeons in Ireland.
Life
Tiedemann was born at Cassel (modern Kassel, in central Germany), the eldest son of Dietrich Tiedemann (1748–1803), a philosopher and psychologist of considerable repute.
Friedrich studied medicine at Marburg, Bamberg and Würzburg Universities from 1798 and graduated in 1802. After a period of practical experience, he gained his doctorate (MD) from Marburg in 1804, but soon abandoned medical practice.
In 1804, he became a Docent, lecturing in Physiology and Comparative Osteology at Marburg University. The following year, at only 24 years of age, he became Professor of Zoology, Human Anatomy and Comparative Anatomy at Landshut University. In 1816, he moved to Heidelberg University as Professor of Physiology and Anatomy and remained there until his retirement in 1849.
He was elected a Foreign Fellow of the Royal Society of London in 1832 and an Honorary Fellow of the Royal Society of Edinburgh in 1838.
He died in Munich on 22 January 1861. He is buried in the Alter Südfriedhof in Munich (Old South Cemetery).
Viewpoints
Tiedemann devoted himself to the study of natural science, and upon moving to Paris, became an ardent follower of Georges Cuvier. On his return to Germany, he advocated for anatomical research and aligned himself with the emerging field of experimental natural science. His staunch empiricism placed him at odds with contemporary adherents of romantic Naturphilosophie, such as the philosopher Friedrich Wilhelm Joseph von Schelling and the naturalist Lorenz Oken.
Tiedemann was among the first to scientifically contest racism. In his 1836 article "On the Brain of the Negro, compared with that of the European and the Orang-outang," he compared the brain weight and cranial capacity of European and black human specimens with that of apes and concluded that, contrary to the consensus among his naturalist colleagues, the two racial groups exhibited "absolutely no difference whatsoever" in brain size or structure. He further contested the notion that "there is any innate difference in the intellectual faculties of these two varieties of the human race" and attributed the perceived inferiority of black people to the deleterious effects of slavery and colonialism.
In 1827, he became a correspondent of the Royal Institute of the Netherlands, and when that became the Royal Netherlands Academy of Arts and Sciences in 1851, he joined as a foreign member. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1849.
Tiedemann was influenced by Jean-Baptiste Lamarck and accepted the transmutation of species. Science historian Robert J. Richards has written that Tiedemann "joined the basic notion of species evolution, of a Lamarckian flavor, with the proposition that higher animals in their embryological development recapitulated the morphological stages of those lower in the scale." Writing in 1913, Hans Gadow noted that Tiedemann in 1814 had identified a basic function of sexual selection in preventing less fit males from propagating, and fossils as showing gradual metamorphosis of species over geological time.
In an 1854 medical-historical tract on tobacco, Tiedemann identified several adverse health effects of tobacco consumption, including cancers of the tongue brought on by smoking.
Family
In 1807, he married Fräulein von Holzing. He was later married to Charlotte Hecker.
He had a daughter Elise.
One of Tiedemann's sons, Gustav, was a casualty of the 1848 uprisings.
His son Heinrich emigrated to the United States and became a physician at Philadelphia's Germantown Hospital. Perhaps influenced by his father's work, he objected to the Darwinian contention of a continuity between humans and apes.
Legacy
In 2007, Brazilian geneticist Sergio Pena called Tiedemann an "anti-racist ahead of his time".
Works (translated)
References
External links
Neurotree: Friedrich Tiedemann Details
The Great Physiologist of Heidelberg – Friedrich Tiedemann by Stephen Jay Gould
1781 births
1861 deaths
Fellows of the American Academy of Arts and Sciences
Foreign members of the Royal Society
German physiologists
19th-century German zoologists
Lamarckism
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Royal Swedish Academy of Sciences
Recipients of the Pour le Mérite (civil class)
Proto-evolutionary biologists | Friedrich Tiedemann | [
"Biology"
] | 967 | [
"Obsolete biology theories",
"Lamarckism",
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
1,051,985 | https://en.wikipedia.org/wiki/Shift%20work | Shift work is an employment practice designed to keep a service or production line operational at all times. The practice typically sees the day divided into shifts, set periods of time during which different groups of workers perform their duties. The term "shift work" includes both long-term night shifts and work schedules in which employees change or rotate shifts.
In medicine and epidemiology, shift work is considered a risk factor for some health problems in some individuals, as disruption to circadian rhythms may increase the probability of developing cardiovascular disease, cognitive impairment, diabetes, altered body composition and obesity, among other conditions.
History
The shift work system in modern industrial manufacturing originated in the late 18th century.
In 1867, Karl Marx wrote on the shift work system in Capital, Volume 1:
Capitalist production therefore drives, by its inherent nature, towards the appropriation of labour throughout the whole of the 24 hours in the day. But since it is physically impossible to exploit the same individual labour-power constantly, during the night as well as the day, capital has to overcome this physical obstacle. An alternation becomes necessary, between the labour-powers used up by day and those used up by night ... It is well known that this shift-system, this alternation of two sets of workers, predominated in the full-blooded springtime of the English cotton industry, and that at the present time it still flourishes, among other places, in the cotton-spinning factories of the Moscow gubernia. This 24-hour process of production exists today as a system in many of the as yet 'free' branches of industry in Great Britain, in the blast-furnaces, forges, rolling mills and other metallurgical establishments of England, Wales and Scotland.
The Cromford Mill, starting from 1772, ran day and night with two twelve-hour shifts.
Health effects
Shift work increases the risk for the development of many disorders. Shift work sleep disorder is a circadian rhythm sleep disorder characterized by insomnia, excessive sleepiness, or both. Shift work is considered essential for the diagnosis. The risk of diabetes mellitus type 2 is increased in shift workers, especially men. People working rotating shifts are more vulnerable than others.
Women whose work involves night shifts have a 48% increased risk of developing breast cancer. This may be due to alterations in circadian rhythm: melatonin, a known tumor suppressor, is generally produced at night, and late shifts may disrupt its production. The WHO's International Agency for Research on Cancer listed "shift work that involves circadian disruption" as probably carcinogenic. Shift work may also increase the risk of other types of cancer. Working rotating shifts regularly during a two-year interval has been associated with a 9% increase in the risk of early menopause compared to working no rotating shifts. The increased risk among rotating night shift workers was 25% among women predisposed to earlier menopause. Early menopause can lead to a host of other problems later in life. One study found that among women who worked rotating night shifts for more than six years, eleven percent experienced a shortened lifespan. Women who worked rotating night shifts for more than 15 years also experienced a 25 percent higher risk of death due to lung cancer.
Shift work also increases the risk of developing cluster headaches, heart attacks, fatigue, stress, sexual dysfunction, depression, dementia, obesity, metabolic disorders, gastrointestinal disorders, musculoskeletal disorders, and reproductive disorders.
Shift work also can worsen chronic diseases, including sleep disorders, digestive diseases, heart disease, hypertension, epilepsy, mental disorders, substance abuse, asthma, and any health conditions that are treated with medications affected by the circadian cycle. Artificial lighting may additionally contribute to disturbed homeostasis. Shift work may also increase a person's risk of smoking.
The health consequences of shift work may depend on chronotype, that is, being a day person or a night person, and what shift a worker is assigned to. When individual chronotype is opposite of shift timing (day person working night shift), there is a greater risk of circadian rhythms disruption. Nighttime workers sleep an average of one–four hours less than daytime workers.
Different shift schedules will have different impacts on the health of a shift worker. The way the shift pattern is designed affects how shift workers sleep, eat and take holidays. Some shift patterns can exacerbate fatigue by limiting rest, increasing stress, overworking staff or disrupting their time off.
Muscle health is also compromised by shift work: altered sleep and eating times, changes to appetite-regulating hormones and total energy expenditure, increased snacking and binge drinking, and reduced protein intake can contribute to negative protein balance, increases in insulin resistance and increases in body fat, resulting in weight gain and more long-term health challenges.
Compared with the day shift, injuries and accidents have been estimated to increase by 15% on evening shifts and 28% on night shifts. Longer shifts are also associated with more injuries and accidents: 10-hour shifts had 13% more and 12-hour shifts had 28% more than 8-hour shifts. Other studies have shown a link between fatigue and workplace injuries and accidents. Workers with sleep deprivation are far more likely to be injured or involved in an accident. Breaks reduce accident risks.
One study suggests that, for those working a night shift (such as 23:00 to 07:00), it may be advantageous to sleep in the evening (14:00 to 22:00) rather than the morning (08:00 to 16:00). The study's evening sleep subjects had 37% fewer episodes of attentional impairment than the morning sleepers.
There are four major determinants of cognitive performance and alertness in healthy shift-workers: circadian phase, sleep inertia, acute sleep deprivation and chronic sleep deficit.
The circadian phase is relatively fixed in humans; attempting to shift it so that an individual is alert during the circadian bathyphase is difficult. Sleep during the day is shorter and less consolidated than night-time sleep. Before a night shift, workers generally sleep less than before a day shift.
The effects of sleep inertia wear off after two–four hours of wakefulness, such that most workers who wake up in the morning and go to work suffer some degree of sleep inertia at the beginning of their shift. The relative effects of sleep inertia vs. the other factors are hard to quantify; however, the benefits of napping appear to outweigh the cost associated with sleep inertia.
Acute sleep deprivation occurs during long shifts with no breaks, as well as during night shifts when the worker sleeps in the morning and is awake during the afternoon, prior to the work shift. A night shift worker with poor daytime sleep may be awake for more than 18 hours by the end of his shift. The effects of acute sleep deprivation can be compared to impairment due to alcohol intoxication, with 19 hours of wakefulness corresponding to a BAC of 0.05%, and 24 hours of wakefulness corresponding to a BAC of 0.10%. Much of the effect of acute sleep deprivation can be countered by napping, with longer naps giving more benefit than shorter naps. Some industries, specifically the fire service, have traditionally allowed workers to sleep while on duty, between calls for service. In one study of EMS providers, 24-hour shifts were not associated with a higher frequency of negative safety outcomes when compared to shorter shifts.
Chronic sleep deficit occurs when a worker sleeps for fewer hours than is necessary over multiple days or weeks. The loss of two hours of nightly sleep for a week causes an impairment similar to those seen after 24 hours of wakefulness. After two weeks of such deficit, the lapses in performance are similar to those seen after 48 hours of continual wakefulness. The number of shifts worked in a month by EMS providers was positively correlated with the frequency of reported errors and adverse events.
Sleep assessment during shift work
A cross-sectional study investigated the relationship between several sleep assessment criteria and different shift work schedules (3-day, 6-day, 9-day and 21-day shifts) and a control group of day shift work in Korean firefighters. The results found that all shift work groups exhibited significantly decreased total sleep time (TST) and decreased sleep efficiency on night shifts, with efficiency increasing on rest days. Between-group analysis of the different shift work groups revealed that day shift sleep efficiency was significantly higher in the 6-day shift, while night shift sleep efficiency was significantly lower in the 21-day shift in comparison to other shift groups (p < 0.05). Overall, night shift sleep quality was worse in shift workers than in those who worked only the day shift, whereas the 6-day shift provided better sleep quality compared to the 21-day shift.
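For reference, the sleep efficiency reported in such studies is conventionally defined as the fraction of time in bed actually spent asleep:

$$\text{sleep efficiency} = \frac{\text{total sleep time (TST)}}{\text{total time in bed}} \times 100\%$$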
Safety and regulation
Shift work has been shown to negatively affect workers, and has been classified as a specific disorder (shift work sleep disorder). Circadian disruption by working at night causes symptoms like excessive sleepiness at work and sleep disturbances. Shift work sleep disorder also creates a greater risk for human error at work. Shift work disrupts cognitive ability and flexibility and impairs attention, motivation, decision making, speech, vigilance, and overall performance.
To mitigate the negative effects of shift work on safety and health, many countries have enacted regulations on shift work. The European Union, in its directive 2003/88/EC, has established a 48-hour limit on working time (including overtime) per week; a minimum rest period of 11 consecutive hours per 24-hour period; and a minimum uninterrupted rest period of 24 hours of mandated rest per week (which is in addition to the 11 hours of daily rest). The EU directive also limits night work involving "special hazards or heavy physical or mental strain" to an average of eight hours in any 24-hour period. The EU directive allows for limited derogations from the regulation, and special provisions allow longer working hours for transportation and offshore workers, fishing vessel workers, and doctors in training (see also medical resident work hours).
Aircraft traffic flight controllers and pilots
For fewer operational errors, the FAA goal calls for Flight Controllers to be on duty for 5 to 6 hours per shift, with the remaining shift time devoted to meals and breaks. For aircraft pilots, the actual time at the controls (flight time) is limited to 8 or 9 hours, depending on the time of day.
Industrial disasters
Fatigue due to shift work has contributed to several industrial disasters, including the Three Mile Island accident, the Space Shuttle Challenger disaster and the Chernobyl disaster. The Alaska Oil Spill Commission's final report on the Exxon Valdez oil spill disaster found that it was "conceivable" that excessive work hours contributed to crew fatigue, which in turn contributed to the vessel's running aground.
Prevention
Management practices
The practices and policies put in place by managers of round-the-clock or 24/7 operations can significantly influence shift worker alertness (and hence safety) and performance.
Air traffic controllers typically work an 8-hour day, 5 days per week. Research has shown that when controllers remain "in position" for more than two hours, even at low traffic levels, performance can deteriorate rapidly, so they are typically placed "in position" for 30-minute intervals (with 30 minutes between intervals).
These practices and policies can include selecting an appropriate shift schedule or rota and using an employee scheduling software to maintain it, setting the length of shifts, managing overtime, increasing lighting levels, providing shift worker lifestyle training, retirement compensation based on salary in the last few years of employment (which can encourage excessive overtime among older workers who may be less able to obtain adequate sleep), or screening and hiring of new shift workers that assesses adaptability to a shift work schedule. Mandating a minimum of 10 hours between shifts is an effective strategy to encourage adequate sleep for workers. Allowing frequent breaks and scheduling 8- or 10-hour shifts instead of 12-hour shifts can also minimize fatigue and help to mitigate the negative health effects of shift work.
Multiple factors need to be considered when developing optimal shift work schedules, including shift timing, length, frequency and length of breaks during shifts, shift succession, worker commute time, as well as the mental and physical stress of the job. Even though studies indicate that 12-hour shifts are associated with increased occupational injuries and accidents (with rates rising over successive shifts), a synthesis of the evidence stresses the importance of weighing all of these factors when considering the safety of a shift.
Shift work was once characteristic primarily of the manufacturing industry, where it has a clear effect of increasing the use that can be made of capital equipment and allows for up to three times the production compared to just a day shift. It contrasts with the use of overtime to increase production at the margin. Both approaches incur higher wage costs. Although 2nd-shift worker efficiency levels are typically 3–5% below 1st shift, and 3rd shift 4–6% below 2nd shift, the productivity level, i.e. cost per employee, is often 25% to 40% lower on 2nd and 3rd shifts due to fixed costs which are "paid" by the first shift.
Shift system
The 42-hour work-week allows for the most even distribution of work time. A 3:1 ratio of work days to days off is most effective for eight-hour shifts, and a 2:2 ratio of work days to days off is most effective for twelve-hour shifts. Eight-hour shifts and twelve-hour shifts are common in manufacturing and health care. Twelve-hour shifts are also used with a very slow rotation in the petroleum industry. Twenty-four-hour shifts are common in health care and emergency services.
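The 42-hour figure is simply the weekly average that both of these patterns converge to, as a one-line check shows (Python; the ratios are those stated above):

```python
# Average weekly hours = hours per shift * (work days / cycle length) * 7 days.
eight_hour_week = 8 * (3 / (3 + 1)) * 7    # 3 days on, 1 day off -> 42.0 h
twelve_hour_week = 12 * (2 / (2 + 2)) * 7  # 2 days on, 2 days off -> 42.0 h
print(eight_hour_week, twelve_hour_week)   # both print 42.0
```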
Shift schedule and shift plan
The shift plan or rota is the central component of a shift schedule. The schedule includes considerations of shift overlap, shift change times and alignment with the clock, vacation, training, shift differentials, holidays, etc., whereas the shift plan determines the sequence of work and free days within a shift system.
Rotation of shifts can be fast, in which a worker changes shifts more than once a week, or slow, in which a worker changes shifts less than once a week. Rotation can also be forward, when a subsequent shift starts later, or backward, when a subsequent shift starts earlier. Evidence suggests that forward-rotating shifts are better suited to shift workers' circadian physiology.
One main concern of shift workers is that they often do not know their schedules more than two weeks at a time. Shift work is stressful: when on a rotating or ever-changing shift, workers have to worry about daycare, personal appointments, and running their households, and many already work more than an eight-hour shift. Some evidence suggests that giving employees their schedules more than a month in advance would provide proper notice, allow planning, and reduce their stress levels.
Management
Though shift work itself remains necessary in many occupations, employers can alleviate some of the negative health consequences of shift work. The United States National Institute for Occupational Safety and Health recommends employers avoid quick shift changes and any rotating shift schedules should rotate forward. Employers should also attempt to minimize the number of consecutive night shifts, long work shifts and overtime work. A poor work environment can exacerbate the strain of shift work. Adequate lighting, clean air, proper heat and air conditioning, and reduced noise can all make shift work more bearable for workers.
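Several of these recommendations (a minimum rest period between shifts, limits on quick shift changes) are mechanical enough to check automatically. The sketch below is a minimal, hypothetical roster check in Python; the 10-hour threshold echoes the guidance above, and the shift data and rule are illustrative, not drawn from any specific regulation:

```python
# Minimal roster check: flag rest periods shorter than a 10-hour minimum.
from datetime import datetime

shifts = [  # (start, end) pairs for one worker; illustrative data only
    (datetime(2024, 1, 1, 22), datetime(2024, 1, 2, 6)),
    (datetime(2024, 1, 2, 22), datetime(2024, 1, 3, 6)),
    (datetime(2024, 1, 3, 14), datetime(2024, 1, 3, 22)),  # a "quick change"
]

MIN_REST_HOURS = 10
for (_, prev_end), (next_start, _) in zip(shifts, shifts[1:]):
    rest = (next_start - prev_end).total_seconds() / 3600
    if rest < MIN_REST_HOURS:
        print(f"Only {rest:.0f} h rest before {next_start}; minimum is {MIN_REST_HOURS} h")
```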
Good sleep hygiene is recommended. This includes blocking out noise and light during sleep, maintaining a regular, predictable sleep routine, avoiding heavy foods and alcohol before sleep, and sleeping in a comfortable, cool environment. Alcohol consumption, caffeine consumption and heavy meals in the few hours before sleep can worsen shift work sleep disorders. Exercise in the three hours before sleep can make it difficult to fall asleep.
Free online training programs are available to educate workers and managers about the risks associated with shift work and strategies they can use to prevent these.
Scheduling
Algorithmic scheduling of shift work can lead to what has been colloquially termed "clopening", where the shift worker has to work the closing shift of one day and the opening shift of the next day back-to-back, resulting in short rest periods between shifts and fatigue. Co-opting employees to fill the shift roster helps to ensure that the human costs are taken into account in a way that is hard for an algorithm, since doing so would require knowing the constraints and considerations of each individual shift worker and assigning a cost metric to each of those factors. Shift-based hiring, a recruitment concept in which people are hired for individual shifts rather than being hired first and then scheduled into shifts, enables shift workers to indicate their preferences and availability for unfilled shifts through a shift-bidding mechanism. Through this process, the shift hours are evened out by a human-driven market mechanism rather than an algorithmic process. This openness can lead to work hours that are tailored to an individual's lifestyle and schedule while ensuring that shifts are optimally filled, in contrast to the generally poor human outcomes of fatigue, stress, estrangement from friends and family, and health problems that have been reported with algorithm-based scheduling of work shifts.
Mental (cognitive) fatigue due to inadequate sleep and/or disturbances of circadian rhythms is a common contributor to accidents and untoward incidents. While this risk cannot be eliminated, it can be managed through personal and administrative controls. This type of management is conducted through a Fatigue Risk Management System (FRMS). One method used within an FRMS is objective fatigue modeling to predict periods of high risk within a 24-hour shift plan.
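Fatigue models of this kind typically combine a homeostatic sleep-pressure term with a circadian term (the "two-process" family of models). The sketch below is a toy Python version with made-up coefficients, intended only to show the shape of such a prediction, not any validated FRMS model:

```python
# Toy two-process alertness model: a homeostatic term that decays with time
# awake, plus a circadian oscillation peaking in late afternoon.
# All coefficients are illustrative assumptions, not fitted values.
import math

def alertness(hours_awake: float, clock_hour: float) -> float:
    homeostatic = math.exp(-hours_awake / 18.0)  # falls as time awake grows
    circadian = 0.25 * math.cos(2 * math.pi * (clock_hour - 16) / 24)  # peak ~16:00
    return homeostatic + circadian

print(f"{alertness(2, 9):.2f}")   # early in a day shift: relatively high
print(f"{alertness(20, 4):.2f}")  # pre-dawn end of a night shift: near the trough
```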
Lost income is also a significant concern for shift workers. Several companies run twenty-four-hour operations, but most of the work is done during the day. When the work dries up, it is usually the second- and third-shift workers who pay the price: they are told to punch out early, or to use paid time off if they have any, to make up the difference in their paychecks. That practice costs the average worker $92.00 a month.
Medications
Melatonin may increase sleep length during both daytime and nighttime sleep in people who work night shifts. Zopiclone has also been investigated as a potential treatment, but it is unclear if it is effective in increasing daytime sleep time in shift workers. There are, however, no reports of adverse effects.
Modafinil and R-modafinil are useful to improve alertness and reduce sleepiness in shift workers. Modafinil has a low risk of abuse compared to other similar agents. However, 10% more participants reported adverse effects (nausea and headache) while taking modafinil. In post-marketing surveillance, modafinil was associated with Stevens–Johnson syndrome. The European Medicines Agency withdrew the license for modafinil for shift workers for the European market because it judged that the benefits did not outweigh the adverse effects.
Using caffeine and naps before night shifts can decrease sleepiness. Caffeine has also been shown to reduce errors made by shift workers.
Epidemiology
According to data from the National Health Interview Survey and the Occupational Health Supplement, 27% of all U.S. workers in 2015 worked an alternative shift (not a regular day shift) and 7% frequently worked a night shift. Prevalence rates were higher for workers aged 18–29 compared to other ages. Those with an education level beyond high school had a lower prevalence rate of alternative shifts compared to workers with less education. Among all occupations, protective service occupations had the highest prevalence of working an alternative shift (54%).
One of the ways in which working alternative shifts can impair health is through decreasing sleep opportunities. Among all workers, those who usually worked the night shift had a much higher prevalence of short sleep duration (44.0%, representing approximately 2.2 million night shift workers) than those who worked the day shift (28.8%, representing approximately 28.3 million day shift workers). An especially high prevalence of short sleep duration was reported by night shift workers in the transportation and warehousing (69.7%) and health-care and social assistance (52.3%) industries.
Adoption
It is estimated that 15–20% of workers in industrialized countries are employed in shift work. Shift work is common in the transportation sector as well. Some of the earliest instances appeared with the railroads, where freight trains have clear tracks to run on at night.
Shift work is also the norm in fields related to public protection and healthcare, such as law enforcement, emergency medical services, firefighting, security and hospitals. Shift work is a contributing factor in many cases of medical errors. Shift work has often been common in the armed forces. Military personnel, pilots, and others that regularly change time zones while performing shift work experience jet lag and consequently suffer sleep disorders.
Those in the field of meteorology, such as the National Weather Service and private forecasting companies, also use shift work, as constant monitoring of the weather is necessary. Much of the Internet services and telecommunication industry relies on shift work to maintain worldwide operations and uptime.
Service industries now increasingly operate on some shift system; for example a restaurant or convenience store will normally be open on most days for much longer than a working day.
There are many industries requiring 24/7 coverage that employ workers on a shift basis, including:
Caregiver
Direct support professional
Customer service, including call centers
Data center and IT operations
Death care (medical examiner or coroner)
Emergency services
Police
Firefighting
Emergency medical services
Entertainment
Casino workers
Health care
Funeral workers
Hospitality
Logistics and transportation
Railways
Ship crew
Manufacturing
Flight testing
Military
Mining
Public utilities
Nuclear power
Fossil fuel
Solar, wind, and hydro power
Retail
Telecommunications
Television
Radio broadcasting
Security
Weather
See also
Effects of overtime
Fatigue Avoidance Scheduling Tool
Gantt chart
Occupational cancer
Sleep
Split shift
References
Further reading
Pati, A.K., Chandrawanshi, A. & Reinberg, A. (2001) 'Shift work: consequences and management'. Current Science, 81(1), 32–52.
Burr, Douglas Scott (2009) 'The Schedule Book'.
External links
Shift work and health, Issue Briefing, Institute for Work & Health, April 2010.
Scientific Symposium on the Health Effects of Shift Work , Toronto, 12 April 2010, hosted by the Occupational Cancer Research Centre and the Institute for Work & Health (IWH).
CDC – Work Schedules: Shift Work and Long Work Hours – NIOSH Workplace Safety and Health Topic
Three-hour night shift system, For a crew of three on a small boat at sea
Working Time Society, a global research society addressing questions of working time and shift-work with biannual symposia.
Consensus papers regarding Health, ... and Shiftwork (2019) of the ICOH-Scientific Committee on Shiftwork and Working Time and the Working Time Society
Employment classifications
Working time
Circadian rhythm
Occupational safety and health
IARC Group 2A carcinogens | Shift work | [
"Biology"
] | 4,688 | [
"Behavior",
"Sleep",
"Circadian rhythm"
] |
1,052,019 | https://en.wikipedia.org/wiki/Mundane%20astrology | Mundane astrology, also known as political astrology, is the branch of astrology dealing with politics, the government, and the laws governing a particular nation, state, or city. The name derives from the Latin term mundus, 'world'.
Certain countries have astrological charts (or horoscopes) just as a person is said to in astrology; for example, the chart for the United States is widely cast for July 4, 1776, the day the Declaration of Independence was signed and made fully official, thus marking the "birth" of the United States as a nation. Indeed, July 4 is a major national holiday in America and is unequivocally thought of as the "birthday" of the entire nation.
History
Mundane astrology is widely believed by astrological historians to be the most ancient branch of astrology. Early Babylonian astrology was exclusively concerned with mundane astrology, being geographically oriented, specifically applied to countries, cities and nations, and almost wholly concerned with the welfare of the state and the king as the governing head of the nation. Astrological practices of divination and planetary interpretation have been used for millennia to answer political questions, but only with the gradual emergence of horoscopic astrology, from the sixth century BC, did astrology develop into the two distinct branches of mundane astrology and natal astrology.
Techniques and principles
Astrologically, the affairs of a nation are judged from the horoscope set up at the time of its official inauguration or the birth chart of its leader, or various phenomena such as eclipses, lunations, great conjunctions, planetary stations, comets and ingresses.
The techniques of the subject were discussed in detail in the 2nd century work of the Alexandrian astronomer Ptolemy, who outlined its principles in the second book of his Tetrabiblos. Ptolemy set this topic before his discussion of individual birth charts because he argued that the astrological assessment of any 'particular' individual must rest upon prior knowledge of the 'general' temperament of their ethnic type; and that the circumstances of individual lives are subsumed, to some extent, within the fate of their community. The third chapter of his work offers an association between planets, zodiac signs and the national characteristics of 73 nations. It concludes with three assertions which act as core principles of mundane astrology:
Each of the fixed stars has familiarity with the countries attributed to the sign of its ecliptic rising.
The time of the first founding of a city (or nation) can be used in a similar way to an individual horoscope, to astrologically establish the characteristics and experiences of that city. The most significant considerations are the regions of the zodiac which mark the place of the Sun and Moon, and the four angles of the chart – in particular the ascendant.
If the time of the foundation of the city or nation is not known, a similar use can be made of the horoscope of whoever holds office or is king at the time, with particular attention given to the midheaven of that chart.
Practice
The first English astrologer for whom we have evidence of astrological practice is Richard Trewythian, whose notebook is largely concerned with mundane astrology. He constructed horoscopes for the Sun's ingress into Aries over thirty years, and recorded general predictions for twelve of those years between 1430 and 1458. His notebooks demonstrate how he recorded the logic for his conclusions.
He also made several predictions concerning the king (Henry VI), such as one he made in 1433 where he noted: "it seems that the king will be sick this year because Saturn is lord of the tenth house".
Notes
References
Works cited
External links
17th Century study in the Ancient Art of Mundane Astrology hosted by Skyscript (accessed 1 July 2012). The complete fourth book of William Ramesey's Astrologiae Restaurata, 'Astrology Restored' (London, 1653), edited and annotated by Steven Birchfield (1.43MB). The Fourth book is entitled Astrologia Munda, 'Mundane Astrology' - said by Birchfield to be the closest thing we have to an accessible textbook on traditional mundane astrology.
Astrology | Mundane astrology | [
"Astronomy"
] | 868 | [
"Astrology",
"History of astronomy"
] |
1,052,023 | https://en.wikipedia.org/wiki/Book%20of%20Moses | The Book of Moses, dictated by Joseph Smith, is part of the scriptural canon for some denominations in the Latter Day Saint movement. The book begins with the "Visions of Moses", a prologue to the story of the creation and the fall of man (Moses chapter 1), and continues with material corresponding to the Joseph Smith Translation of the Bible's (JST) first six chapters of the Book of Genesis (Moses chapters 2–5, 8), interrupted by two chapters of "extracts from the prophecy of Enoch" (Moses chapters 6–7).
The Book of Moses begins with Moses speaking with God "face to face" and seeing a vision of all existence. Moses is initially overwhelmed by the immensity of the cosmos and humanity's smallness in comparison, but God then explains that he made the earth and heavens to bring humans to eternal life. The book subsequently provides an enlarged account of the Genesis creation narrative which describes God having a corporeal body, followed by a rendering of the fall of Adam and Eve in celebratory terms which emphasize eating the forbidden fruit as part of a process of gaining knowledge and becoming more like God. The Book of Moses also expands the story of Enoch, described in the Bible as being an ancestor of Noah. In the expanded narrative, Enoch has a theophany in which he discovers that God is capable of sorrow, and that human sin and suffering cause him to grieve. Enoch then receives a prophetic calling, and he eventually builds a city of Zion so righteous that it is taken to heaven. Enoch's example inspired Smith's own hopes to establish the nascent Church of Christ as a Zion community. The book also elaborates some passages that (to Christians) foreshadowed the coming of Christ into explicit Christian knowledge of and faith in Jesus as a Savior, in effect Christianizing the Old Testament.
Portions of the Book of Moses were originally published separately by the Church of Jesus Christ of Latter-day Saints (LDS Church) in 1851, but later combined and published as the Book of Moses in the Pearl of Great Price, one of the four books of its scriptural canon. The same material is published by the Community of Christ as parts of its Doctrine and Covenants and Inspired Version of the Bible.
Origin
In June 1830, Joseph Smith began a new translation of the Bible into English that was intended to restore "many important points touching the salvation of men, [that] had been taken from the Bible, or lost before it was compiled." The chapters that now make up the Book of Moses were first published in the church newspapers Evening and Morning Star and Times and Seasons in the 1830s and 1840s.
Publication by the Church of Jesus Christ of Latter-day Saints
The Book of Moses is considered part of the Standard Works, which constitute the scriptural canon of the Church of Jesus Christ of Latter-day Saints (LDS Church). The eight chapters of the Book of Moses were included as a separate book within the Pearl of Great Price through a series of events subsequent to Smith's death. Franklin D. Richards, who published the first edition of the Pearl of Great Price in 1851, only had access to the early versions of the JST found in church newspapers along with another incomplete handwritten part of JST Genesis, not the original manuscripts. For this reason the Book of Moses ended abruptly in the middle of the story of Noah. Richards published everything he had at the time, and what is now the Book of Moses was later added by Orson Pratt in the 1878 edition of the Pearl of Great Price. The Pearl of Great Price, including the Book of Moses, was officially canonized by the LDS Church in 1880.
Publication by the Community of Christ
The Community of Christ, formerly known as the Reorganized Church of Jesus Christ of Latter Day Saints (RLDS Church), began publishing portions of the Book of Moses in its canonical Doctrine and Covenants (D&C) in 1864. Section 22 of the D&C contains Moses chapter 1, and section 36 contains Moses chapter 7. The inclusion of these excerpts in the Doctrine and Covenants was officially approved by the RLDS Church in 1970.
The RLDS Church began publishing the complete Joseph Smith Translation of the Bible in 1867 (giving it the name "The Holy Scriptures" and more commonly known as the "Inspired Version"); the portions of the Book of Moses that are not contained in the church's D&C are contained within this larger translation.
Synopsis and ancient parallels
Moses 1
Moses 1: The events described in Moses 1 are portrayed as taking place sometime after Jehovah spoke to Moses out of the burning bush but before Moses had returned to Egypt to deliver the children of Israel (See Exodus 4:27). The details of Moses' experience in chapter 1 place it squarely in the tradition of ancient "heavenly ascent" literature (e.g., the pseudepigraphal Apocalypse of Abraham) and its relationship to temple theology, rites, and ordinances. Following a brief prologue, Moses is given a description of God's majesty and a confirmation of the work to which he had previously been foreordained as a "son of God." He is then shown the "world upon which he was created" and "all the children of men which are, and which were created." Then, having gone out of the presence of God and no longer being clothed with His glory, Moses falls to the earth. He is then left to himself to be tested in a dramatic encounter with Satan. Having banished Satan through the power of the Only Begotten, Moses is "filled with the Holy Ghost." He "calls upon the name of God" in prayer, and is answered by a voice enumerating specific blessings. While "the voice is still speaking," Moses beholds every particle of the earth and all of its inhabitants. The culminating sequence begins in verse 31 when Moses, having continued to inquire of the Lord, returns to his presence. God then speaks with Moses face to face, describing his purposes for this earth and its inhabitants ("this is my work and my glory: to bring to pass the immortality and eternal life of man" Moses 1:39). Finally, the chapter closes with an allusion referring to Smith's restoration of the lost words of scripture (echoing a similar prophecy in the pseudepigraphal 2 Enoch 35:1–2), and stating that these words are to be shown only to those that believe (paralleling the pseudepigraphal 4 Ezra 14:6, 45–47). Then follows a vision outlining the creation, the fall of man, and subsequent events in the lives of Adam and Eve and their descendants. This is consistent with ancient Jewish sources which affirm that Moses saw these events in vision.
Moses 2–8
Moses 2–8 generally follow the first chapters of the Book of Genesis, but often provide alternative interpretations of the text or significant additional detail not found in the Bible. Among the notable differences are the following:
Moses 2 (cf. Genesis 1): A brief prologue affirming that the account derives from the words of God directly to Moses is added in verse 1. The repetition of the phrase "I, God" throughout the chapter also emphasizes the purported firsthand nature of the account. The idea that all things were created "by mine Only Begotten" (i.e., Jesus Christ, in his premortal state) is made clear, as is the Son's identity as the co-creator at the time when God said "Let us make man." Otherwise, the structure and basic premises of the Genesis account of the Creation are left intact. While following generally similar schemas, the two later versions of the creation story given in the Book of Abraham and in the temple endowment are replete with additional changes—some subtle and others stunning—that give new perspectives on the events portrayed.
Moses 3 (cf. Genesis 2): The Book of Moses explains the meaning of verse 5 in terms of the LDS idea of a spiritual creation. God explains that He "created all things … spiritually, before they were naturally upon the face of the earth. For I, the Lord God, had not caused it to rain upon the face of the earth. And I, the Lord God, had created all the children of men; and not yet a man to till the ground; for in heaven created I them; and there was not yet flesh upon the earth, neither in the water, neither in the air" (additions italicized). Consistent with this concept, some ancient sources assert that the heavenly hosts—variously described as including the angels, the sons of God, and/or the souls of humanity—were part of the light that appeared on day one of creation. Verse 17 is expanded in a way that reinforces the LDS teaching that Adam and Eve were placed in a situation where they were required to exercise freedom of choice in order to continue their progression through the experience of earth life. As in the Quran, the transgression of Adam and Eve that led to their coming to earth is seen as a positive and necessary step that would provide the preparatory schooling they needed for an eventual glorious return to heaven.
Moses 4 (cf. Genesis 3): Four verses are added to the beginning of the Genesis version of this chapter, interrupting the flow of the story to give an account of heavenly councils where the nature and purposes of creation were discussed and decided. These verses echo stories in Jewish midrash recording that God "took counsel with the souls of the righteous before creating the world." A summary of the story of Satan's fall from heaven is also given. Like the Quran, and in contrast to Genesis, the corresponding accounts of Satan's rebellion and Adam and Eve's fall form a "single, continuous story."
Moses 5 (cf. Genesis 4): The Book of Moses adds fifteen verses to the beginning of the Genesis account. Verses 1–6 highlight the obedience of Adam and Eve by enumerating their faithfulness to each of the commandments they had been given. Adam and Eve began to "till the earth, and to have dominion over all the beasts of the field, and to eat his bread by the sweat of his brow." Likewise, Eve fulfilled the commission she had received in the Garden of Eden and "bare … sons and daughters, and they began to replenish the earth." Moreover, "Adam was obedient to the commandments of the Lord" to "offer the firstlings of their flocks" for "many days," despite the fact that he did not yet fully understand the reason why he had been thus commanded. The period of testing for Adam involving "many days" mentioned in the Book of Moses corresponds to the "testing" of the first couple described in pseudepigraphal accounts such as the Life of Adam and Eve. Also recalling parallels in these ancient stories is the Book of Moses account of how Adam and Eve's enduring obedience is rewarded by the announcement of their redemption through the eventual sacrifice of the son of God (vv. 6–13). In light of this extended prologue extolling the virtue of obedience and the promise of redemption, the Book of Moses' expanded story of Cain's rebellion and murder of his brother Abel appears in even starker relief. Cain's murderous pact with Satan is portrayed as the foundation of "secret combinations" that later flourish among the wicked, and it provides a plausible context for the more fragmentary Genesis account of Lamech's slaying of his rival. The chapter ends with the declaration that "all things were confirmed unto Adam, by an holy ordinance, and the Gospel preached, and a decree sent forth, that it should be in the world, until the end thereof."
Moses 6 (cf. Genesis 5): Expansions in the early part of the chapter further describe the story of the righteous Seth. The "genealogy" of his descendants is said to be kept in a "book of remembrance." Jewish and Islamic sources describe a similar book, intended to preserve "the primordial wisdom of paradise for Adam and his generations" and also "the genealogy of the entire human race". Moses chapter 6 contains the story of the call and preaching of Enoch. Though the biblical account of Enoch's life occupies only two verses, his story fills most of chapter 6 and all of chapter 7 of the Book of Moses. Extended accounts of the experiences of Enoch, which contain surprising parallels with the Book of Moses (particularly in Qumran's Enochic Book of Giants), also circulated widely in Second Temple Judaism and early Christianity. Some of the most significant resemblances in Moses chapter 6 are found not in 1 Enoch, but in related pseudepigrapha published after the death of Joseph Smith, such as the Second Book of Enoch (first published at the end of the 19th century) and 3 Enoch (whose first widely circulated translation was by Odeberg in 1928), but especially in the intriguing elaborations of the Qumranic Book of Giants (discovered in 1948). As an example of parallels with the Second Book of Enoch and 3 Enoch, Moses 6:31 calls the 65-year-old Enoch a "lad" (the only use of this term in LDS scripture), corresponding to the somewhat puzzling use of this term to describe Enoch/Metatron in, e.g., 2 Enoch 10:4 and 3 Enoch 3:2, 4:2, and 4:10. Speaking of a reference to "lad" in the Second Book of Enoch, non-Mormon scholar Gary Anderson writes: "The acclamation of Enoch as 'lad' is curious. … It is worth noting that of all the names given Enoch, the title 'lad' is singled out as being particularly apt and fitting by the heavenly host." With regard to the Book of Giants, the parallels with the Enoch chapters in the Book of Moses are concentrated in a scant three pages of Qumran fragments. These resemblances range from general themes in the story line (secret works, murders, visions, earthly and heavenly books of remembrance that evoke fear and trembling, moral corruption, hope held out for repentance, and the eventual defeat of Enoch's adversaries in battle, ending with their utter destruction and imprisonment) to specific occurrences of rare expressions in corresponding contexts (the reference to the "wild man," the name and parallel role of Mahijah/Mahujah, and the "roar of the wild beasts").
Moses 7: This chapter continues the story of Enoch's preaching, including a vision of the "Son of Man," a favorite motif in the pseudepigraphal Book of Parables in 1 Enoch that also appears in marked density throughout the Book of Moses vision of Enoch, together with the related titles "Chosen One," "Anointed One," and "Righteous One," which appear prominently both in 1 Enoch and the LDS Enoch story. After considering the sometimes contentious debate among scholars about the single or multiple referent(s) of these titles and their relationship to other texts, Nickelsburg and VanderKam conclude that the author of 1 Enoch (like the author of the Book of Moses) "saw the … traditional figures as having a single referent and applied the various designations and characteristics as seemed appropriate to him." Consistent with texts found at Nag Hammadi, Smith's Enoch straightforwardly equates the filial relationship between God and His Only Begotten Son in the New Testament to the Enochic notion of the perfect Man and the Son of Man: "Man of Holiness is [God's] name, and the name of his Only Begotten is the Son of Man, even Jesus Christ, a righteous Judge, who shall come in the meridian of time" (Moses 6:57). The single specific description of the role of the Son of Man given in this verse from the Book of Moses as a "righteous judge" is highly characteristic of the Book of the Parables within 1 Enoch, where the primary role of the Son of Man is also that of a judge (e.g., 1 Enoch 69:27; cf. John 5:27). In a vision of Enoch found in the Book of Moses, three distinct parties weep for the wickedness of mankind: God (Moses 7:28; cf. v. 29), the heavens (Moses 7:28, 37), and Enoch himself (Moses 7:41, 49). In addition, the earth mourns for her children (Moses 7:48–49). This chorus of weeping is consistent with the ancient Enoch literature. Moses chapter 7 concludes with the story of how Enoch gathered the righteous into a city he called Zion that was taken to heaven, a story whose ancient parallels have been explored by David J. Larsen.
Moses 8 (cf. Genesis 5–6): Additional details are given about the story of Methuselah and the preaching of Noah, again stressing the coming of Jesus Christ and the necessity of baptism. The term "sons of God," as it occurs in the enigmatic episode of mismatched marriages in the Bible (Genesis 6:1) and in related passages in 1 Enoch 6–7 about the "Watchers," has been the source of controversy among scholars. Contradicting traditions that depict these husbands as fallen angels, the Book of Moses (Moses 8:13–15) is consistent with early Christian traditions that portray them as mere mortals who laid claim to the title of "sons of God" by virtue of their priesthood (see Moses 6:64–68). The Book of Moses ends abruptly just before the flood of Noah, but the story continues in the remainder of the JST version of Genesis.
Scholarship
In contrast to numerous scholarly analyses of Smith's translations of the Book of Mormon and the Book of Abraham that began to appear in the 19th century, explorations of the textual foundations of the JST began in earnest only in the 1960s, with the pioneering work of the RLDS scholar Richard P. Howard and the LDS scholar Robert J. Matthews. A facsimile transcription of all the original manuscripts of the JST was at last published in 2004. Among other studies of the JST, Brigham Young University Professor Kent P. Jackson, a longtime student of these topics, prepared a detailed study of the text of the portions of the JST relating to the Book of Moses in 2005.
Although several brief studies of the teachings of the Book of Moses had previously appeared as part of apologetic and doctrinally focused LDS commentaries on the Pearl of Great Price, the first detailed verse-by-verse commentary—and the first to incorporate significant amounts of modern non-LDS Bible scholarship—was published by Richard D. Draper, S. Kent Brown, and Michael D. Rhodes in 2005.
In 2009, an 1100-page volume by Jeffrey M. Bradshaw was published, titled In God's Image and Likeness, which contains a comprehensive commentary on Moses 1–6:12, and incorporates a wide range of scholarly perspectives and citations from ancient texts. The book features an extensive annotated bibliography on ancient sources and over a hundred relevant illustrations with detailed captions.
In his master's thesis, Salvatore Cirillo cites and amplifies the arguments of D. Michael Quinn that the available evidence that Smith had access to published works related to 1 Enoch has moved "beyond probability—to fact." He concludes that there is no other explanation than this for the substantial similarities that he finds between the Book of Moses and the pseudepigraphal Enoch literature. However, reflecting on the "coincidence" of the appearance of the first English translation of 1 Enoch in 1821, just a few years before Smith received his Enoch revelations, Richard L. Bushman concludes: "It is scarcely conceivable that Joseph Smith knew of Laurence's Enoch translation." Perhaps even more significant is the fact that the principal themes of Laurence's 105 translated chapters "do not resemble Joseph Smith's Enoch in any obvious way." Apart from the shared prominence of the Son of Man motif in the 1 Enoch Book of the Parables and the Book of Moses and some common themes in Enoch's visions of Noah, the most striking resemblances to Smith's writings are found not in 1 Enoch, but in Enochic literature published after Smith's death. As an impressive example of such post-mortem resemblances, Cirillo cites (but does not explain the provenance of) the shared Mahujah/Mahijah character in the Qumran Book of Giants and the Book of Moses.
As an alternative explanation for the Mahujah/Mahijah name and role in the Book of Moses, Matthew Black formulated a hypothesis in a conversation reported by Mormon scholar Gordon C. Thomasson that "certain carefully clandestine groups had, up through the middle-ages, maintained, sub rosa, an esoteric religious tradition based in the writings of Enoch, at least into the time of and influencing Dante" and "that a member of one of the esoteric groups he had described previously must have survived into the 19th century, and hearing of Joseph Smith, must have brought the group’s Enoch texts to New York from Italy for the prophet to translate and publish."
John L. Brooke claims that Sidney Rigdon, among others, was a "conduit of Masonic lore during Joseph’s early years" and then goes on to make a set of claims connecting Mormonism and Masonry. These claims, including connections with the story of Enoch's pillars in Royal Arch Masonry, are disputed by Mormon scholars William J. Hamblin, et al. Non-Mormon scholar Stephen Webb agreed with Hamblin, et al., concluding that "actual evidence for any direct link between [Joseph Smith’s] theology and the hermetic tradition is tenuous at best, and given that scholars vigorously debate whether hermeticism even constitutes a coherent and organized tradition, Brooke’s book should be read with a fair amount of skepticism."
Some non-Mormon scholars have signaled their appreciation of the significance of Smith's translation efforts in light of ancient documents. Yale University critic of secular and sacred literature Harold Bloom, who classes the Book of Moses and the Book of Abraham among the "more surprising" and "neglected" works of LDS scripture, is intrigued by the fact that many of their themes are "strikingly akin to ancient suggestions" that essentially restate "the archaic or original Jewish religion, a Judaism that preceded even the Yahwist." While expressing "no judgment, one way or the other, upon the authenticity" of LDS scripture, he finds "enormous validity" in the way these writings "recapture … crucial elements in the archaic Jewish religion … that had ceased to be available either to normative Judaism or to Christianity, and that survived only in esoteric traditions unlikely to have touched Smith directly." With respect to any possibility that Smith could have drawn from ancient manuscripts in his writings, Bloom concludes: "I hardly think that written sources were necessary." Stephen Webb concludes that Smith "knew more about theology and philosophy than it was reasonable for anyone in his position to know, as if he were dipping into the deep, collective unconsciousness of Christianity with a very long pen."
Genealogy
The Book of Moses contains a detailed account of Adam's descendants, presented as a genealogical chart that also draws on the Book of Abraham (chart not reproduced here). In the chart, bold denotes individuals not found in Genesis. The names Egyptus and Pharaoh are not present in the Book of Moses, but they are mentioned in the Book of Abraham, another work of LDS scripture.
See also
Book of Jubilees
Scrolls of Moses
Footnotes
References
External links
Images of Old Testament revision manuscript (including section canonized as the Book of Moses) from the Joseph Smith Papers Project website. Originals housed at Community of Christ Library-Archives.
1830s books
1851 non-fiction books
1851 in Christianity
Cain and Abel
Creation myths
Enoch (ancestor of Noah)
Works by Joseph Smith
Noah
Pearl of Great Price (Mormonism)
Satan
Texts attributed to Moses
Works in the style of the King James Version
Mormonism and the Bible
Adam and Eve in Mormonism
Works based on the Book of Genesis | Book of Moses | [
"Astronomy"
] | 4,953 | [
"Cosmogony",
"Creation myths"
] |
1,052,083 | https://en.wikipedia.org/wiki/Real%20World%20Records | Real World Records is a British record label specializing in world music. It was founded in 1989 by English musician Peter Gabriel and original members of WOMAD. A majority of the works released on Real World Records feature music recorded at Real World Studios, in Box, Wiltshire, England.
History
The goal of its founding in 1989 was to give talented musicians from around the world access to state-of-the-art recording facilities and to audiences beyond their geographic region. The musical relationships formed at WOMAD festivals were also intended to lead to new recordings. As a result, the label is known for bringing together musicians from different traditions who share a common interest in music, creating new recording methods and new meeting places in the process.
By 1999, the label had sold over 3 million records worldwide and released 90 albums; by 2015, it had released over 200 albums.
Many of the released recordings continue to be made at Real World Studios, also founded in 1989, whose facilities support the goals of Real World Records.
In 2011, EMI Music Publishing renewed the distribution deal for the Real World catalogue outside of the United Kingdom, thereby also covering the United States for the first time.
Artists
Afro Celt Sound System
Ashkhabad
Ayub Ogada
Bernard Kabanda
Blind Boys of Alabama
Charlie Winston
Creole Choir of Cuba
Dengue Fever
Farafina
Fatala
Geoffrey Oryema
Guo Brothers
Hoba Hoba Spirit
Jasdeep Singh Degun
Johnny Kalsi
Joi
Joseph Arthur
Les Amazones d'Afrique
Little Axe
Mamer
Maryam Mursal
Nusrat Fateh Ali Khan
Ozomatli
Paban Das Baul
Pan-African Orchestra
Papa Wemba
Peter Gabriel
Portico Quartet
Rupert Hine
Samuel Yirga
Sheila Chandra
Sevara Nazarkhan
Spiro
The Imagined Village
U. Srinivas
Värttinä
Yungchen Lhamo
The Zawose Queens
Partial discography
ABoneCroneDrone, Sheila Chandra, 1996
Among Brothers, Abderrahmane Abdelli, 2003
And I'll Scratch Yours, various artists, 2013
Atom Bomb, The Blind Boys of Alabama, 2005
Beat the Border, Geoffrey Oryema, 1993
Big Blue Ball, various artists, 2008 (recorded 1991, 1992, 1995)
Big City Secrets, Joseph Arthur, 1997
Black Rock, Djivan Gasparyan & Michael Brook, 1998
Coming Home, Yungchen Lhamo, 1998
Djabote, Doudou Ndiaye Rose, 1992
Emotion, Papa Wemba, 1995
En Mana Kuoyo, Ayub Ogada, 1993
Espace, Tama, 2002
Go Tell It on the Mountain, Blind Boys of Alabama, 2003
Higher Ground, The Blind Boys of Alabama, with Robert Randolph and the Family Band, and special guest Ben Harper, 2002
In Your Hands, Charlie Winston, 2009
Le Voyageur, Papa Wemba
My Songs and a Poem, Estrella Morente, 2001
Mustt Mustt, Nusrat Fateh Ali Khan & Michael Brook, 1990
New Blood, Peter Gabriel, 2011
Night Song, Nusrat Fateh Ali Khan & Michael Brook, 1995
Night to Night, Geoffrey Oryema, 1996
Passion: Music for The Last Temptation of Christ, Peter Gabriel, 1989
Pod, Afro Celt Sound System, 2004
Plus from US, various artists, 1993
Quick Look, Pina, 2002
Rama Sreerama, U. Srinivas, 1994
Real Sugar, Paban Das Baul & Sam Mills, 1997
Sampradaya, Pandit Shiv Kumar Sharma, with Rahul Sharma, Shafaat Ahmed Khan & Manorama Sharma, 1999
Scratch My Back, Peter Gabriel, 2010
Serious Tam, Telek, 2000
Sezoni, Mara! with Martenitsa Choir, 1999 (original release on Rufus Records, 1997)
Songs for the Poor Man, Remmy Ongala, 1989
The Journey, Maryam Mursal, 1998
The Last Prophet, Nusrat Fateh Ali Khan & Party, 1994
The Truth (Ny Marina), The Justin Vali Trio, 1995
The Zen Kiss, Sheila Chandra, 1994
Tibet, Tibet, Yungchen Lhamo, 1996
Trance, Hassan Hakmoun and Zahar, 1993
Untold Things, Jocelyn Pook, 2001
Up, Peter Gabriel, 2002
Us, Peter Gabriel, 1992
Volume 2: Release, Afro Celt Sound System, 1999
Volume 3: Further in Time, Afro Celt Sound System, 2001
Weaving My Ancestor's Voices, Sheila Chandra, 1992
Yo‘l Bo‘lsin, Sevara Nazarkhan, 2003
References
External links
Website Real World Studios (retrieved on 24 March 2023)
Website Real World Records (retrieved on 24 March 2023)
1989 establishments in England
Record labels established in 1989
British record labels
World music record labels
Progressive rock record labels
Peter Gabriel
Virgin Records
Multimedia
Companies based in Wiltshire
British independent record labels | Real World Records | [
"Technology"
] | 989 | [
"Multimedia"
] |
1,052,096 | https://en.wikipedia.org/wiki/Logarithmic%20form | In algebraic geometry and the theory of complex manifolds, a logarithmic differential form is a differential form with poles of a certain kind. The concept was introduced by Pierre Deligne. In short, logarithmic differentials have the mildest possible singularities needed in order to give information about an open submanifold (the complement of the divisor of poles). (This idea is made precise by several versions of de Rham's theorem discussed below.)
Let X be a complex manifold, D ⊂ X a reduced divisor (a sum of distinct codimension-1 complex subspaces), and ω a holomorphic p-form on X−D. If both ω and dω have a pole of order at most 1 along D, then ω is said to have a logarithmic pole along D. ω is also known as a logarithmic p-form. The p-forms with log poles along D form a subsheaf of the meromorphic p-forms on X, denoted $\Omega^p_X(\log D)$.
The name comes from the fact that in complex analysis, $d(\log z) = dz/z$; here $dz/z$ is a typical example of a 1-form on the complex numbers C with a logarithmic pole at the origin. Differential forms such as $dz/z$ make sense in a purely algebraic context, where there is no analog of the logarithm function.
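As a quick check (a standard computation, spelled out here for illustration), $dz/z$ satisfies both conditions in the definition:
$$\omega = \frac{dz}{z}, \qquad d\omega = -\frac{dz \wedge dz}{z^2} = 0,$$
so both $\omega$ and $d\omega$ have poles of order at most 1 along $\{z = 0\}$, making $\omega$ a logarithmic 1-form there.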
Logarithmic de Rham complex
Let X be a complex manifold and D a reduced divisor on X. By definition of $\Omega^p_X(\log D)$ and the fact that the exterior derivative d satisfies $d^2 = 0$, one has
$$d\,\Omega^p_X(\log D)(U) \subseteq \Omega^{p+1}_X(\log D)(U)$$
for every open subset U of X. Thus the logarithmic differentials form a complex of sheaves $\Omega^{\bullet}_X(\log D)$, known as the logarithmic de Rham complex associated to the divisor D. This is a subcomplex of the direct image $j_*\Omega^{\bullet}_{X-D}$, where $j: X-D \to X$ is the inclusion and $\Omega^{\bullet}_{X-D}$ is the complex of sheaves of holomorphic forms on X−D.
Of special interest is the case where D has normal crossings: that is, D is locally a sum of codimension-1 complex submanifolds that intersect transversely. In this case, the sheaf of logarithmic differential forms is the subalgebra of the meromorphic forms with poles along D generated by the holomorphic differential forms together with the 1-forms $df/f$ for holomorphic functions $f$ that are nonzero outside D. Note that
$$\frac{d(fg)}{fg} = \frac{df}{f} + \frac{dg}{g}.$$
Concretely, if D is a divisor with normal crossings on a complex manifold X, then each point x has an open neighborhood U on which there are holomorphic coordinate functions $z_1, \ldots, z_n$ such that x is the origin and D is defined by the equation $z_1 \cdots z_k = 0$ for some $0 \leq k \leq n$. On the open set U, sections of $\Omega^1_X(\log D)$ are given by
$$\Omega^1_X(\log D) = \mathcal{O}_U\,\frac{dz_1}{z_1} \oplus \cdots \oplus \mathcal{O}_U\,\frac{dz_k}{z_k} \oplus \mathcal{O}_U\,dz_{k+1} \oplus \cdots \oplus \mathcal{O}_U\,dz_n.$$
This describes the holomorphic vector bundle $\Omega^1_X(\log D)$ on X. Then, for any $k \geq 0$, the vector bundle $\Omega^k_X(\log D)$ is the kth exterior power,
$$\Omega^k_X(\log D) = \bigwedge^k \Omega^1_X(\log D).$$
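For instance (a standard special case, written out here for illustration), with $n = 2$ coordinates $(z_1, z_2)$ and $D = \{z_1 = 0\}$:
$$\Omega^1_X(\log D) = \mathcal{O}\,\frac{dz_1}{z_1} \oplus \mathcal{O}\,dz_2, \qquad \Omega^2_X(\log D) = \mathcal{O}\,\frac{dz_1}{z_1}\wedge dz_2,$$
so a logarithmic 2-form here is allowed at worst a first-order pole along $z_1 = 0$ and none elsewhere.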
The logarithmic tangent bundle $T_X(-\log D)$ means the dual vector bundle to $\Omega^1_X(\log D)$. Explicitly, a section of $T_X(-\log D)$ is a holomorphic vector field on X that is tangent to D at all smooth points of D.
Logarithmic differentials and singular cohomology
Let X be a complex manifold and D a divisor with normal crossings on X. Deligne proved a holomorphic analog of de Rham's theorem in terms of logarithmic differentials. Namely,
$$H^k(X, \Omega^{\bullet}_X(\log D)) \cong H^k(X - D, \mathbf{C}),$$
where the left side denotes the cohomology of X with coefficients in a complex of sheaves, sometimes called hypercohomology. This follows from the natural inclusion of complexes of sheaves
$$\Omega^{\bullet}_X(\log D) \to j_*\Omega^{\bullet}_{X-D}$$
being a quasi-isomorphism.
Logarithmic differentials in algebraic geometry
In algebraic geometry, the vector bundle of logarithmic differential p-forms $\Omega^p_X(\log D)$ on a smooth scheme X over a field, with respect to a divisor $D = \sum D_j$ with simple normal crossings, is defined as above: sections of $\Omega^p_X(\log D)$ are (algebraic) differential forms ω on $X - D$ such that both ω and dω have a pole of order at most one along D. Explicitly, for a closed point x that lies in $D_j$ for $1 \leq j \leq k$ and not in $D_j$ for $j > k$, let $u_1, \ldots, u_n$ be regular functions on some open neighborhood U of x such that $D_j$ is the closed subscheme defined by $u_j = 0$ inside U for $1 \leq j \leq k$, and x is the closed subscheme of U defined by $u_1 = \cdots = u_n = 0$. Then a basis of sections of $\Omega^1_X(\log D)$ on U is given by:
$$\frac{du_1}{u_1}, \ldots, \frac{du_k}{u_k},\ du_{k+1}, \ldots, du_n.$$
This describes the vector bundle $\Omega^1_X(\log D)$ on X, and then $\Omega^p_X(\log D)$ is the pth exterior power of $\Omega^1_X(\log D)$.
There is an exact sequence of coherent sheaves on X:
$$0 \to \Omega^1_X \to \Omega^1_X(\log D) \overset{\beta}{\to} \bigoplus_j (i_j)_*\mathcal{O}_{D_j} \to 0,$$
where $i_j: D_j \to X$ is the inclusion of an irreducible component of D. Here β is called the residue map; so this sequence says that a 1-form with log poles along D is regular (that is, has no poles) if and only if its residues are zero. More generally, for any p ≥ 0, there is an exact sequence of coherent sheaves on X:
$$0 \to \Omega^p_X \to \Omega^p_X(\log D) \overset{\beta}{\to} \bigoplus_j (i_j)_*\Omega^{p-1}_{D_j} \overset{\beta}{\to} \bigoplus_{j < j'} (i_{jj'})_*\Omega^{p-2}_{D_j \cap D_{j'}} \to \cdots \to 0,$$
where the sums run over all irreducible components of given dimension of intersections of the divisors $D_j$. Here again, β is called the residue map.
Explicitly, on an open subset of X that only meets one component $D_j$ of D, with $D_j$ locally defined by $f = 0$, the residue of a logarithmic p-form ω along $D_j$ is determined by: the residue of a regular p-form is zero, whereas
$$\operatorname{Res}_{D_j}\left(\frac{df}{f}\wedge\alpha\right) = \alpha|_{D_j}$$
for any regular $(p-1)$-form α. Some authors define the residue by saying that $\alpha \wedge (df/f)$ has residue $\alpha|_{D_j}$, which differs from the definition here by the sign $(-1)^{p-1}$.
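For example (a direct application of the formula above, with hypothetical names), take $X = \mathbf{C}^2$, $D_1 = \{z_1 = 0\}$, and a regular function $g$:
$$\operatorname{Res}_{D_1}\left(\frac{dz_1}{z_1}\wedge g(z_1, z_2)\,dz_2\right) = g(0, z_2)\,dz_2,$$
the restriction of the coefficient form to the divisor.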
Example of the residue
Over the complex numbers, the residue of a differential form with log poles along a divisor can be viewed as the result of integration over loops in around . In this context, the residue may be called the Poincaré residue.
For an explicit example, consider an elliptic curve D in the complex projective plane $\mathbf{P}^2$, defined in affine coordinates by the equation $g(x,y) = y^2 - f(x) = 0$, where $f(x) = x(x-1)(x-\lambda)$ and $\lambda \neq 0, 1$ is a complex number. Then D is a smooth hypersurface of degree 3 in $\mathbf{P}^2$ and, in particular, a divisor with simple normal crossings. There is a meromorphic 2-form on $\mathbf{P}^2$ given in affine coordinates by
$$\omega = \frac{dx \wedge dy}{g(x,y)},$$
which has log poles along D. Because the canonical bundle $K_{\mathbf{P}^2}$ is isomorphic to the line bundle $\mathcal{O}(-3)$, the divisor of poles of $\omega$ must have degree 3. So the divisor of poles of $\omega$ consists only of D (in particular, $\omega$ does not have a pole along the line at infinity). The residue of ω along D is given by the holomorphic 1-form
$$\operatorname{Res}_D(\omega) = \frac{dx}{2y}\Big|_D.$$
It follows that $dx/(2y)$ extends to a holomorphic one-form on the projective curve D in $\mathbf{P}^2$, an elliptic curve.
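The residue can be computed directly (a standard manipulation, included here for clarity). With $g = y^2 - x(x-1)(x-\lambda)$, one has $dg = g_x\,dx + g_y\,dy$ and $g_y = 2y$, so $dx \wedge dg = g_y\,dx \wedge dy$ and
$$\omega = \frac{dx \wedge dy}{g} = \frac{dx \wedge dg}{g\,g_y} = \frac{dg}{g} \wedge \left(-\frac{dx}{g_y}\right),$$
so that, up to the sign convention discussed above, $\operatorname{Res}_D(\omega) = \pm\,dx/(2y)|_D$.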
The residue map considered here is part of a linear map $H^k(X - D, \mathbf{C}) \to H^{k-1}(D, \mathbf{C})$, which may be called the "Gysin map". This is part of the Gysin sequence associated to any smooth divisor D in a complex manifold X:
$$\cdots \to H^{k-2}(D) \to H^k(X) \to H^k(X - D) \to H^{k-1}(D) \to H^{k+1}(X) \to \cdots$$
Historical terminology
In the 19th-century theory of elliptic functions, 1-forms with logarithmic poles were sometimes called integrals of the second kind (and, with an unfortunate inconsistency, sometimes differentials of the third kind). For example, the Weierstrass zeta function associated to a lattice $\Lambda$ in C was called an "integral of the second kind" to mean that it could be written
$$\zeta(z) = \frac{\sigma'(z)}{\sigma(z)}.$$
In modern terms, it follows that $\zeta(z)\,dz = d\log\sigma(z)$ is a 1-form on C with logarithmic poles on $\Lambda$, since $\Lambda$ is the zero set of the Weierstrass sigma function $\sigma(z)$.
Mixed Hodge theory for smooth varieties
Over the complex numbers, Deligne proved a strengthening of Alexander Grothendieck's algebraic de Rham theorem, relating coherent sheaf cohomology with singular cohomology. Namely, for any smooth scheme X over C with a divisor with simple normal crossings D, there is a natural isomorphism
$$H^k(X, \Omega^{\bullet}_X(\log D)) \cong H^k(X^{an} - D^{an}, \mathbf{C})$$
for each integer k, where the groups on the left are defined using the Zariski topology and the groups on the right use the classical (Euclidean) topology.
Moreover, when X is smooth and proper over C, the resulting spectral sequence
$$E_1^{pq} = H^q(X, \Omega^p_X(\log D)) \Rightarrow H^{p+q}(X - D, \mathbf{C})$$
degenerates at $E_1$. So the cohomology of $X - D$ with complex coefficients has a decreasing filtration, the Hodge filtration, whose associated graded vector spaces are the algebraically defined groups $H^q(X, \Omega^p_X(\log D))$.
This is part of the mixed Hodge structure which Deligne defined on the cohomology of any complex algebraic variety. In particular, there is also a weight filtration on the rational cohomology of $X - D$. The resulting filtration on $H^*(X - D, \mathbf{C})$ can be constructed using the logarithmic de Rham complex. Namely, define an increasing filtration $W_{\bullet}\Omega^p_X(\log D)$ by
$$W_m\Omega^p_X(\log D) = \begin{cases} 0 & m < 0 \\ \Omega^{p-m}_X \wedge \Omega^m_X(\log D) & 0 \leq m \leq p \\ \Omega^p_X(\log D) & m \geq p. \end{cases}$$
The resulting filtration on cohomology is the weight filtration:
$$W_m H^k(X - D, \mathbf{C}) = \operatorname{Im}\big(H^k(X, W_{m-k}\Omega^{\bullet}_X(\log D)) \to H^k(X - D, \mathbf{C})\big).$$
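A minimal example (standard, included here for illustration): take $X = \mathbf{P}^1$ and $D = \{0, \infty\}$, so $X - D = \mathbf{C}^*$. Then $H^1(\mathbf{C}^*, \mathbf{C})$ is one-dimensional, spanned by the class of $dz/z$, which lies in $W_1\Omega^1_X(\log D)$ but not in $W_0 = \Omega^1_X$; with $k = 1$ and $m - k = 1$ this places the class in weight $m = 2$, so the mixed Hodge structure on $H^1(\mathbf{C}^*)$ is pure of type $(1,1)$.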
Building on these results, Hélène Esnault and Eckart Viehweg generalized the Kodaira–Akizuki–Nakano vanishing theorem in terms of logarithmic differentials. Namely, let X be a smooth complex projective variety of dimension n, D a divisor with simple normal crossings on X, and L an ample line bundle on X. Then
$$H^q(X, \Omega^p_X(\log D) \otimes L) = 0 \quad \text{for } p + q > n,$$
and
$$H^q(X, \Omega^p_X(\log D) \otimes L^{-1}) = 0$$
for all $p + q < n$.
See also
Adjunction formula
Borel–Moore homology
Differential of the first kind
Log structure
Mixed Hodge structure
Residue theorem
Poincaré residue
Notes
References
External links
Aise Johan de Jong, Algebraic de Rham cohomology.
Complex analysis
Algebraic geometry | Logarithmic form | [
"Mathematics"
] | 1,847 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
1,052,135 | https://en.wikipedia.org/wiki/Human%20action%20cycle | The human action cycle is a psychological model which describes the steps humans take when they interact with computer systems. The model was proposed by Donald A. Norman, a scholar in the discipline of human–computer interaction. The model can be used to help evaluate the efficiency of a user interface (UI). Understanding the cycle requires an understanding of the user interface design principles of affordance, feedback, visibility and tolerance.
The human action cycle describes how humans may form goals and then develop a series of steps required to achieve that goal, using the computer system. The user then executes the steps, thus the model includes both cognitive activities and physical activities.
The three stages of the human action cycle
The model is divided into three stages comprising seven steps in total, and is (approximately) as follows; a short illustrative sketch of the steps in code follows the list:
Goal formation stage
1. Goal formation.
Execution stage
2. Translation of the goal into a set of unordered tasks required to achieve the goal.
3. Sequencing the tasks to create the action sequence.
4. Executing the action sequence.
Evaluation stage
5. Perceiving the results after having executed the action sequence.
6. Interpreting the actual outcomes based on the expected outcomes.
7. Comparing what happened with what the user wished to happen.
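To make the flow of the model concrete, the following sketch (illustrative only, not part of Norman's model; all names are invented for this example) represents the seven steps and their grouping into the three stages as Python data structures:

from enum import Enum

class Stage(Enum):
    GOAL_FORMATION = "Goal formation"
    EXECUTION = "Execution"
    EVALUATION = "Evaluation"

# The seven steps of the human action cycle, in order, each paired
# with the stage it belongs to.
HUMAN_ACTION_CYCLE = [
    (1, Stage.GOAL_FORMATION, "Form the goal"),
    (2, Stage.EXECUTION, "Translate the goal into unordered tasks"),
    (3, Stage.EXECUTION, "Sequence the tasks into an action sequence"),
    (4, Stage.EXECUTION, "Execute the action sequence"),
    (5, Stage.EVALUATION, "Perceive the results"),
    (6, Stage.EVALUATION, "Interpret the outcome against expectations"),
    (7, Stage.EVALUATION, "Compare what happened with what was intended"),
]

def steps_in_stage(stage):
    """Return the ordered step descriptions belonging to one stage."""
    return [description for _, s, description in HUMAN_ACTION_CYCLE if s is stage]

print(steps_in_stage(Stage.EVALUATION))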
Use in evaluation of user interfaces
Typically, an evaluator of the user interface will pose a series of questions for each of the cycle's steps; evaluating the answers provides useful information about where the user interface may be inadequate or unsuitable. These questions might be as follows (a sketch of such a checklist as a data structure follows the list):
Step 1, Forming a goal:
Do the users have sufficient domain and task knowledge and sufficient understanding of their work to form goals?
Does the UI help the users form these goals?
Step 2, Translating the goal into a task or a set of tasks:
Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the tasks?
Does the UI help the users formulate these tasks?
Step 3, Planning an action sequence:
Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the action sequence?
Does the UI help the users formulate the action sequence?
Step 4, Executing the action sequence:
Can typical users easily learn and use the UI?
Do the actions provided by the system match those required by the users?
Are the affordance and visibility of the actions good?
Do the users have an accurate mental model of the system?
Does the system support the development of an accurate mental model?
Step 5, Perceiving what happened:
Can the users perceive the system’s state?
Does the UI provide the users with sufficient feedback about the effects of their actions?
Step 6, Interpreting the outcome according to the users’ expectations:
Are the users able to make sense of the feedback?
Does the UI provide enough feedback for this interpretation?
Step 7, Evaluating what happened against what was intended:
Can the users compare what happened with what they were hoping to achieve?
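The question set above can be organized as a simple checklist keyed by step number; the sketch below (hypothetical tooling, not from the source, with only a few of the questions included) collects the questions an evaluator has not yet answered:

# A few of the evaluation questions, keyed by the cycle's step number
# (abridged; a real checklist would include all seven steps).
CHECKLIST = {
    1: ["Does the UI help the users form their goals?"],
    4: ["Can typical users easily learn and use the UI?",
        "Do the actions provided by the system match those required?"],
    5: ["Does the UI give sufficient feedback about the effects of actions?"],
}

def open_questions(answers):
    """Return (step, question) pairs not yet answered.

    `answers` maps (step, question) -> the evaluator's notes.
    """
    return [(step, question)
            for step, questions in CHECKLIST.items()
            for question in questions
            if (step, question) not in answers]

# Example: only the first question has been answered so far.
notes = {(1, "Does the UI help the users form their goals?"): "Yes, via templates."}
print(open_questions(notes))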
Further reading
Norman, D. A. (1988). The Design of Everyday Things. New York, Doubleday/Currency Ed.
Related terms
Gulf of evaluation exists when the user has trouble performing the evaluation stage of the human action cycle (steps 5 to 7).
Gulf of execution exists when the user has trouble performing the execution stage of the human action cycle (steps 2 to 4).
OODA Loop is an equivalent in military strategy.
Human–computer interaction
Motor control
Psychological models | Human action cycle | [
"Engineering",
"Biology"
] | 685 | [
"Human–computer interaction",
"Behavior",
"Human–machine interaction",
"Motor control"
] |
1,052,154 | https://en.wikipedia.org/wiki/Interpersonal%20attraction | Interpersonal attraction, as a part of social psychology, is the study of the attraction between people which leads to the development of platonic or romantic relationships. It is distinct from perceptions such as physical attractiveness, and involves views of what is and what is not considered beautiful or attractive.
Within the study of social psychology, interpersonal attraction is related to how much one likes or dislikes another person. It can be viewed as a force acting between two people that tends to draw them together and to resist their separation. When measuring interpersonal attraction, one must refer to the qualities of the attracted and those of the attractor to achieve predictive accuracy. It is suggested that to determine attraction, both the personalities and the situation must be taken into account.
Measurement
In social psychology, interpersonal attraction is most-frequently measured using the Interpersonal Attraction Judgment Scale developed by Donn Byrne. It is a scale in which a subject rates another person on factors such as intelligence, knowledge of current events, morality, adjustment, likability, and desirability as a work partner. This scale seems to be directly related with other measures of social attraction such as social choice, feelings of desire for a date, sexual partner or spouse, voluntary physical proximity, frequency of eye contact, etc.
Kiesler and Goldberg analyzed a variety of response measures that were typically utilized as measures of attraction and extracted two factors: the first, characterized as primarily socioemotional, included variables such as liking, the desirability of the person's inclusion in social clubs and parties, seating choices, and lunching together. The second factor included variables such as voting for, admiration and respect for, and also seeking the opinion of the target. Another widely used measurement technique scales verbal responses expressed as subjective ratings or judgments of the person of interest.
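As a rough illustration of how such rating-scale measures can be aggregated, the sketch below (hypothetical; the item names echo the factors listed above, but the 1–7 format and equal weighting are assumptions, not Byrne's actual scoring procedure) averages item ratings into one composite score:

# Hypothetical items modeled on the dimensions mentioned above,
# each rated on an assumed 1-7 scale.
ITEMS = ["intelligence", "knowledge_of_current_events", "morality",
         "adjustment", "likability", "desirability_as_work_partner"]

def attraction_score(ratings):
    """Average the item ratings into a single composite score.

    `ratings` maps item name -> integer 1..7; incomplete or out-of-range
    questionnaires raise an error so they are caught early.
    """
    for item in ITEMS:
        value = ratings[item]
        if not 1 <= value <= 7:
            raise ValueError(f"{item} rating {value} is outside 1..7")
    return sum(ratings[item] for item in ITEMS) / len(ITEMS)

print(attraction_score({
    "intelligence": 6, "knowledge_of_current_events": 5, "morality": 6,
    "adjustment": 5, "likability": 7, "desirability_as_work_partner": 6,
}))  # 5.833...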
Causes and effects
There are factors that lead to interpersonal attraction. Studies suggest that all factors involve social reinforcement. The most frequently studied include physical attractiveness, propinquity (frequency of interaction), familiarity, similarity, complementarity, reciprocal liking, and reinforcement. The impact of familiarity, for example, is shown in the way physical proximity and interaction enhance cohesiveness, a social concept that facilitates communication and positive attitudes towards a particular individual on account of similarities or the ability to satisfy important goals. Similarity is believed to be more likely to lead to liking and attraction than differences are. Numerous studies have focused on the role of physical attractiveness in personal attraction. One finding was that people tend to attribute positive qualities such as intelligence, competence, and warmth to individuals who have a pleasing physical appearance.
Physical attractiveness
Physical attractiveness is the perception of the physical traits of an individual human person as pleasing or beautiful. It can include various implications, such as sexual attractiveness, cuteness, similarity and physique.
Judgment of attractiveness of physical traits is partly universal to all human cultures, partly dependent on culture or society or time period, partly biological, and partly subjective and individual.
According to a study determining the golden ratio for facial beauty, the most attractive face is one with average distances between facial features, and an average length and width of the face itself. Facial attractiveness, or beauty, can also be determined by symmetry. If a face is asymmetrical, this can indicate unhealthy genetic information; therefore, a symmetrical face (see facial symmetry) implies healthy genetic information. People judge potential mates based on the physical expression of their genetic health, which is their apparent attractiveness. This supports the good genes theory, which holds that attractiveness is seen as a way to ensure that offspring will have the healthiest genes and therefore the best chance of survival. Certain traits that indicate good genes (such as clear skin or facial symmetry) are seen as desirable when choosing a partner.
Personality
Studies have reported mixed findings on whether or not similarity in personality traits between people in interpersonal relationships (romantic, friendship, etc.) is necessary or essential for relationship satisfaction; the disagreement has been attributed to the different research methodologies used to reach conclusions. It is argued that the previous lack of evidence that congruence in personality traits between two people is an important predictor of relationship satisfaction has been due to individuals making judgements of each other at a salient level (local group) rather than through a global group comparison (the reference-group effect).
A 2014 study suggested that people who tend to portray positive personality traits such as kindness are typically seen as more attractive than people who portray negative personality traits.
Similarity attraction effect
The proverb "birds of a feather flock together" has been used to illustrate that similarity is a crucial determinant of interpersonal attraction. Studies about attraction indicate that people are strongly attracted to lookalikes in physical and social appearance. This similarity is in the broadest sense: similarity in bone-structure, characteristics, life goals and physical appearance. The more these points match, the happier, satisfied and prosperous people are in these relationships.
The lookalike effect plays the role of self-affirmation. A person typically enjoys receiving confirmation of aspects of his or her life, ideas, attitudes and personal characteristics, and people seem to look for an image of themselves to spend their life with. A basic principle of interpersonal attraction is the rule of similarity: similarity is attractive — an underlying principle that applies to both friendships and romantic relationships. The proportion of attitudes shared correlates well with the degree of interpersonal attraction. Cheerful people like to be around other cheerful people and negative people would rather be around other negative people. A 2004 study, based on indirect evidence, concluded that humans choose mates based partly on facial resemblance to themselves.
According to Morry's attraction-similarity model (2007), there is a lay belief that people with actual similarity produce initial attraction. The perceived similarity is either self-serving, as in a friendship, or relationship-serving, as in a romantic relationship. In a 1963 study, Theodore Newcomb pointed out that people tend to change perceived similarity to obtain balance in a relationship. Additionally, perceived but not actual similarity was found to predict interpersonal attraction during a face-to-face initial romantic encounter.
In a 1988 study, Lydon, Jamieson & Zanna suggest that interpersonal similarity and attraction are multidimensional constructs in which people are attracted to people similar to themselves in demographics, physical appearance, attitudes, interpersonal style, social and cultural background, personality, preferred interests and activities, and communication and social skills. Newcomb's earlier 1961 study on college-dorm roommates also suggested that individuals with shared backgrounds, academic achievements, attitudes, values, and political views typically became friends.
Physical appearance
The matching hypothesis proposed by sociologist Erving Goffman suggests that people are more likely to form long-standing relationships with those who are equally matched in social attributes, like physical attractiveness. The study by researchers Walster and Walster supported the matching hypothesis by showing that partners who were similar in terms of physical attractiveness expressed the most liking for each other. Another study also found evidence that supported the matching hypothesis: photos of dating and engaged couples were rated in terms of attractiveness, and a definite tendency was found for couples of similar attractiveness to date or become engaged. Several studies support this evidence of matching based on facial attractiveness. Penton-Voak, Perrett and Peirce (1999) found that subjects rated pictures with their own face morphed into them as more attractive. DeBruine (2002) demonstrated in her research how subjects entrusted more money to their opponents in a game when the opponents were presented as similar to them. Little, Burt & Perrett (2006) examined similarity in married couples and found that the partners within a couple were assessed as being of similar age and level of attractiveness.
A speed-dating experiment done on graduate students from Columbia University showed that although physical attractiveness is preferred in a potential partner, men show a greater preference for it than women. However, more recent work suggests that sex differences in stated ideal partner-preferences for physical attractiveness disappear when examining actual preferences for real-life potential partners. For example, Eastwick and Finkel (2008) failed to find sex differences in the association between initial ratings of physical attractiveness and romantic interest in potential partners during a speed dating paradigm.
Quality of voice
In addition to physical looks, quality of voice has also been shown to enhance interpersonal attraction. Oguchi and Kikuchi (1997) had 25 female students from one university rank the level of vocal attraction, physical attraction, and overall interpersonal attraction of 4 male students from another university. Vocal and physical attractiveness had independent effects on overall interpersonal attraction. In a second part of the same study, these results were replicated in a larger sample of students for both genders (62 subjects, 20 males and 42 females with 16 target students, 8 males and 8 females). Similarly, Zuckerman, Miyake and Hodgins (1991) found that both vocal and physical attractiveness contributed significantly to observers' ratings of targets for general attractiveness. These results suggest that when people evaluate one's voice as attractive, they also tend to evaluate that person as physically attractive.
Attitudes
Based on cognitive consistency theories, difference in attitudes and interests can lead to dislike and avoidance whereas similarity in attitudes promotes social attraction. Miller (1972) pointed out that attitude similarity activates the perceived attractiveness and favorability information from each other, whereas dissimilarity would reduce the impact of these cues.
The studies by Jamieson, Lydon and Zanna (1987–88) showed that attitude similarity could predict how people evaluate their respect for each other, and also predict social and intellectual first impressions – the former by activity preference similarity and the latter by value-based attitude similarity. In intergroup comparisons, high attitude-similarity would lead to homogeneity among in-group members whereas low attitude-similarity would lead to diversity among in-group members, promoting social attraction and achieving high group performance in different tasks.
Although attitude similarity and attraction are linearly related, attraction may not contribute significantly to attitude change.
Other social and cultural aspects
Byrne, Clore and Worchel (1966) suggested that people with similar economic status are likely to be attracted to each other. Buss & Barnes (1986) also found that people prefer their romantic partners to be similar in certain demographic characteristics, including religious background, political orientation and socio-economic status.
Researchers have shown that interpersonal attraction was positively correlated to personality similarity. People are inclined to desire romantic partners who are similar to themselves on agreeableness, conscientiousness, extroversion, emotional stability, openness to experience, and attachment style.
Activity similarity was especially predictive of liking judgments, which affects the judgments of attraction. According to the post-conversation measures of social attraction, tactical similarity was positively correlated with partner satisfaction and global competence ratings, but was uncorrelated with the opinion change and perceived persuasiveness measures.
When rated on comparable variables, couples were also seen as more similar on a number of personality characteristics. This study found that the length of the average relationship was related to perceptions of similarity; the couples who were together longer were seen as more alike. This effect can be attributed to the fact that, as time passes, couples become more alike through shared experiences, or that couples that are alike stay together longer.
Similarity affects the beginning of a relationship through initial attraction in getting to know each other. High attitude similarity has been shown to result in a significant increase in initial attraction to the target person, and high attitude dissimilarity in a decrease of initial attraction. Similarity also promotes relationship commitment. A study of heterosexual dating couples found that similarity in the intrinsic values of the couple was linked to relationship commitment and stability.
Social homogamy refers to "passive, indirect effects on spousal similarity". Research showed that age and education level are crucial in affecting mate preference. Because people of similar age study and interact more within the same schools, the propinquity effect (i.e., the tendency of people to meet and spend time with those who share common characteristics) has a significant impact on spousal similarity. Convergence refers to an increasing similarity over time. Although previous research showed that there is a greater effect on attitudes and values than on personality traits, it has been found that initial assortment (i.e., similarity within couples at the beginning of marriage), rather than convergence, plays a crucial role in explaining spousal similarity.
Active assortment refers to direct effects of choosing someone similar to oneself in mating preferences. The data showed that there is a greater effect on political and religious attitudes than on personality traits. A follow-up question concerned the reason for this finding, framed in terms of idiosyncratic (i.e., different individuals have different mate preferences) versus consensual (i.e., a consensus of preference for some prospective mates over others) mate preferences. The data showed that mate preference on political and religious bases tends to be idiosyncratic; for example, a Catholic would be more likely to choose a mate who is also a Catholic, as opposed to a Buddhist. Such idiosyncratic preferences produce a high level of active assortment, which plays a vital role in affecting spousal similarity. In summary, active assortment plays a large role, whereas convergence has little evidence supporting such an effect.
Propinquity effect
The propinquity effect relies on the observation that: "The more we see and interact with a person, the more likely he or she is to become our friend or sexual partner." This effect is very similar to the mere exposure effect in that the more a person is exposed to a stimulus, the more the person likes it; however, there are exceptions. Familiarity can also occur without physical exposure. Recent studies show that relationships formed over the Internet resemble those developed face-to-face, in terms of perceived quality and depth.
Exposure effect
The exposure effect, also known as the familiarity principle, states that the more a person is exposed to something, the more they come to like it. This applies equally to both objects and people. A clear illustration is in a 1992 study: the researchers had four women of similar appearance attend a large college course over a semester such that each woman attended a different number of sessions (0, 5, 10, or 15). Students then rated the women for perceived familiarity, attractiveness and similarity at the end of the term. Results indicated a strong effect of exposure on attraction that was mediated by the effect of exposure on familiarity. However, exposure does not always increase attraction. For example, the social allergy effect can occur when a person grows increasingly annoyed by and hypersensitive to another's repeated behaviors instead of growing more fond of his or her idiosyncrasies over time.
Pheromones
Certain pheromones secreted by animals, including humans, can attract others; this is experienced as attraction to smell. Human sex pheromones may play a role in human attraction, although it is unclear how well humans can actually sense the pheromones of another.
Types of attraction
The split attraction model describes different types of attraction, separating the different aspects of the experiences people may have. They can roughly be grouped into physical and non-physical: physical types include sexual, sensual, and aesthetic attraction, while non-physical types may include emotional, mental (intellectual), and spiritual attraction.
Sensual attraction is a type of physical attraction to another person involving all the senses, although usually the sense of touch is considered first of all. Sensual attraction is defined as the drive or desire to have non-sexual forms of touch, such as sensual cuddling, kissing, holding hands, hugging, or massage, with a particular person, and to engage in other sensual activities like experiencing their voice, odor, or taste. Asensual (sometimes shortened to asen) is an identity on the asensual spectrum (asen-spec) defined by a lack of sensual attraction. For non-asensual (also known as allosensual) people, sensual attraction is involuntary, and may even occur when someone does not know the other person (though one might not act on it). Asensual people do not have this innate desire to have sensual experiences with any specific person.
Asensuality refers to the way sensual attraction is experienced, not to how it is acted upon. How asensual people feel about touching others and/or being touched by others, and about other sensual activities, can vary widely. They may feel disconnected from the idea of engaging in sensual activities or even be repulsed by the concept of sensuality. Terms like touch-averse/repulsed, touch-indifferent, touch-favorable, or touch-ambivalent can be used to describe some of these feelings. Some asensual people do engage in sensual activities involving other people. This could be for any reason, such as satisfying an overall sensual drive not directed toward a particular person, or meeting their own sensory needs or those of a friend or partner(s). They may also meet their sensory needs by using a weighted or heated blanket, or by cuddling with a stuffed animal or pet. Being asensual does not mean that one is unable to experience other types of attraction, such as sexual, aesthetic, or emotional attraction, and asensual people may very much enjoy sexual activities. It is also important to remember that one can receive the pleasure usually associated with a form of attraction without actually feeling that form of attraction. The term "asensual" can also be used as an umbrella term to describe someone on the asensual spectrum.
Some identities on the asensual spectrum are Asenflux, Greysensual (Grey Asen), Demisensual, Aegosensual, Cupiosensual, Homosensual etc.
Chemistry
In the context of relationships, chemistry is a simple emotion that two people get when they share a special connection. It is very early in one's relationship that they can intuitively work out whether they have positive or negative chemistry.
Some people describe chemistry in metaphorical terms, such as "like peanut butter and jelly", or "like a performance". It can be described in the terms of mutual feelings — "a connection, a bond or common feeling between two people", or as a chemical process — "[it] stimulates love or sexual attraction...brain chemicals are definitely involved". A common misconception is that chemistry is an unconscious decision, informed by a complex blend of criteria.
Some of the core components of chemistry are: "non-judgment, similarity, mystery, attraction, mutual trust, and effortless communication". Chemistry can be described as the combination of "love, lust, infatuation, and a desire to be involved intimately with someone".
Research suggests that "not everyone experiences chemistry", and that "chemistry occurred most often between people who are down-to-earth and sincere". This is because "if a person is comfortable with themselves, they are better able to express their true self to the world, which makes it easier to get to know them...even if perspectives on important matters differed." Sharing similarities is also deemed essential to chemistry as "feeling understood is essential to forming relational bonds."
There are various psychological, physical and emotional symptoms of having good chemistry with another person. It has been described as a "combination of basic psychological arousal combined with a feeling of pleasure". The nervous system becomes aroused, causing a release of adrenaline that takes the form of "rapid heartbeat, shortness of breath, and sensations of excitement that are often similar to sensations associated with danger". Other physical symptoms include "blood pressure go[ing] up a little, the skin...flush[ing], the face and ears...turn[ing] red and...[a] feeling of weakness in the knees". However, all these symptoms vary on an individual basis, and not all individuals may experience the same symptoms. One can feel a sense of obsession over the other person, longing for "the day [when they return] to that person". One can also uncontrollably smile whenever thinking about the other person.
There is some debate over whether one can artificially create chemistry if they are "not initially feeling it". While some people hold that it is something that you "can't learn and can't teach...[and you] either have...or you don't", others hold that chemistry is a process rather than a moment, "build[ing] up and adds up and eventually you get this kind of chemical bonding". Some people, while believing it is possible to artificially create chemistry, think that it is better to let chemistry hit them spontaneously.
In Western society, chemistry is generally considered the "igniter [and] catalyst for the relationship", i.e., without this chemistry, there can be no relationship. Having chemistry "can be the difference between a relationship being romantic or platonic". Chemistry "can cause people to act sexually impulsively or unwisely". It can also be the difference between someone remaining faithful in their relationship, and seeking one night stands and affairs.
Dating coach Evan Marc Katz suggests that "chemistry is one of the most misleading indicators of a future relationship. Chemistry predicts nothing but chemistry." This is because chemistry can make people blind to actual incompatibilities or warning signs. Psychologist Laurie Betito notes that arranged marriages actually do quite well in terms of relationship satisfaction, and this is because "a spark can build based on what you have in common. You can grow into love, but you grow out of lust." Neil Clark Warren argues that physical chemistry is important because "couples who don't share strong chemistry may have additional problems during the ups and downs of a life together." Like Betito, he suggests not ruling someone out on the first date due to lack of chemistry. "But", he adds, "if by the second or third date you don't feel a strong inclination to kiss the other person, be near him, or hold his hand, you're probably never going to feel it." (Warren's phrasing assumes a male partner, but the advice applies regardless of gender.) April Masini likewise says that chemistry is a strong predictor of relationship success. She suggests that chemistry comes and goes, and it is important to actively cultivate it because it can help couples deal with future conflicts.
Complementarity theory
The model of complementarity explains whether "birds of a feather flock together" or "opposites attract."
Studies show that complementary interaction between two partners increases their attractiveness to each other. Complementary partners preferred a closer interpersonal relationship. Couples who reported the highest levels of loving and harmonious relationships were more dissimilar in dominance than couples who scored lower in relationship quality.
Mathes and Moore (1985) found that people were more attracted to peers approximating to their ideal self than to those who did not. Specifically, low self-esteem individuals appeared more likely to desire a complementary relationship than high self-esteem people. We are attracted to people who complement us because this allows us to maintain our preferred style of behavior, and interaction with someone who complements our own behavior likely confers a sense of self-validation and security.
Similarity or complementarity
Principles of similarity and complementarity seem to be contradictory on the surface. In fact, they agree on the dimension of warmth. Both principles state that friendly people would prefer friendly partners.
The importance of similarity and complementarity may depend on the stage of the relationship. Similarity seems to carry considerable weight in initial attraction, while complementarity assumes importance as the relationship develops over time. Markey (2007) found that people would be more satisfied with their relationship if their partners differed from them, at least in terms of dominance, as two dominant persons may experience conflicts while two submissive individuals may feel frustration, since neither takes the initiative.
Perception and actual behavior might not be congruent with each other. There were cases in which dominant people perceived their partners to be similarly dominant, yet to independent observers the actual behavior of their partner was submissive, i.e. complementary to them. Why people perceive their romantic partners to be similar to them despite evidence to the contrary remains unclear.
Evolutionary theories
The evolutionary theory of human interpersonal attraction states that opposite-sex attraction most often occurs when someone has physical features indicating that he or she is very fertile. Considering that one primary purpose of conjugal/romantic relationships is reproduction, it would follow that people invest in partners who appear very fertile, increasing the chance of their genes being passed down to the next generation.
Evolutionary theory also suggests that people whose physical features suggest they are healthy are seen as more attractive. The theory suggests that a healthy mate is more likely to possess genetic traits related to health that would be passed on to offspring (known as indirect benefits), and also that a healthier mate may be able to provide better resources and parental investment than less healthy mates (known as direct benefits). People's tendency to consider people with facial symmetry more attractive than those with less symmetrical faces is one example. However, a test was conducted that found that perfectly symmetrical faces were less attractive than normal faces. According to this study, the exact ratio of symmetric to asymmetric facial features that produces the highest attraction is still undetermined.
It has also been suggested that people are attracted to faces similar to their own as these features serve as cues of kinship. This preference for facial-resemblance is thought to vary across contexts. For example, a study by DeBruine et al. (2008) found that individuals rated faces which had been manipulated to be similar to their own as having more prosocial attributes, but were less likely to find them sexually attractive. These results support "inclusive fitness theory", which predicts that organisms will help closely related kin over more distant relatives. Results further suggest inherent mate-selective mechanisms that consider costs of inbreeding to offspring health.
Increased female attraction to men in relationships
A 2009 study by Melissa Burkley and Jessica Parker found that 59% of the women tested were interested in pursuing a relationship with an "ideal" single man (who was, unknown to the women, fictitious). When they believed the "ideal" man was already in a romantic relationship, 90% of the women expressed interest in pursuing him.
Breaking up
There are several reasons that a relationship, whether friendly or romantic, may come to an end (break up). One reason derives from equity theory: if a person in the relationship feels that the personal costs of being in the relationship outweigh the rewards, there is a strong chance that this person will end the relationship.
See also
Bad boy (archetype)
Beer goggles
Dating
Human bonding
Interpersonal compatibility
Love (scientific views)
Platonic love
Popularity
Pratfall effect
Puppy love
Romantic attraction
Seduction
Sexual attraction
Socionics
Social connection
Vulnerability and care theory of love
Inertia
Notes
References
External links
Interpersonal relationships
Love
Dating | Interpersonal attraction | [
"Biology"
] | 5,518 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
1,052,176 | https://en.wikipedia.org/wiki/Euler%20system | In mathematics, an Euler system is a collection of compatible elements of Galois cohomology groups indexed by fields. They were introduced by Kolyvagin in his work on Heegner points on modular elliptic curves, which was motivated by his earlier paper and the work of Thaine. Euler systems are named after Leonhard Euler because the factors relating different elements of an Euler system resemble the Euler factors of an Euler product.
Euler systems can be used to construct annihilators of ideal class groups or Selmer groups, thus giving bounds on their orders, which in turn has led to deep theorems such as the finiteness of some Tate-Shafarevich groups. This led to Karl Rubin's new proof of the main conjecture of Iwasawa theory, considered simpler than the original proof due to Barry Mazur and Andrew Wiles.
Definition
Although there are several definitions of special sorts of Euler system, there seems to be no published definition of an Euler system that covers all known cases. But it is possible to say roughly what an Euler system is, as follows:
An Euler system is given by a collection of elements cF. These elements are often indexed by certain number fields F containing some fixed number field K, or by something closely related such as square-free integers. The elements cF are typically elements of some Galois cohomology group such as H1(F, T) where T is a p-adic representation of the absolute Galois group of K.
The most important condition is that the elements cF and cG for two different fields F ⊆ G are related by a simple formula, such as

\[ \mathrm{cor}_{G/F}(c_G) \;=\; \Big( \prod_{q} P(\mathrm{Fr}_q^{-1} \mid B;\, \mathrm{Fr}_q^{-1}) \Big)\, c_F, \]

where the product runs over the primes q of K that ramify in G but not in F. Here the "Euler factor" P(τ|B;x) is defined to be the element det(1−τx|B) considered as an element of O[x], which, when x happens to act on B, is not the same as det(1−τx|B) considered as an element of O.
There may be other conditions that the cF have to satisfy, such as congruence conditions.
Kazuya Kato refers to the elements in an Euler system as "arithmetic incarnations of zeta" and describes the property of being an Euler system as "an arithmetic reflection of the fact that these incarnations are related to special values of Euler products".
Examples
Cyclotomic units
For every square-free positive integer n pick an n-th root ζn of 1, with ζmn = ζmζn for m,n coprime. Then the cyclotomic Euler system is the set of numbers αn = 1 − ζn. These satisfy the relations

\[ N_{\mathbf{Q}(\zeta_{nl})/\mathbf{Q}(\zeta_n)}(\alpha_{nl}) \;=\; \alpha_n^{\,1 - F_l^{-1}} \]

\[ \alpha_{nl} \equiv \alpha_n \quad \text{modulo all primes above } l \]

where l is a prime not dividing n and F_l is a Frobenius automorphism with F_l(ζn) = ζn^l.
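The norm relation above can be checked numerically with floating-point roots of unity. The following is an illustrative sketch only (an exact verification would use algebraic number arithmetic; pow(l, -1, n) requires Python 3.8+):

```python
import cmath
from math import gcd

def zeta(m, k=1):
    """The m-th root of unity exp(2*pi*i*k/m)."""
    return cmath.exp(2j * cmath.pi * k / m)

def norm_relation_gap(n, l):
    """|LHS - RHS| for N(alpha_nl) = alpha_n^(1 - F_l^{-1})."""
    assert gcd(n, l) == 1
    # Left side: product of 1 - zeta_{nl}^a over the Galois conjugates
    # fixing Q(zeta_n), i.e. exponents a with a = 1 (mod n), gcd(a, nl) = 1.
    lhs = 1
    for a in range(1, n * l + 1):
        if a % n == 1 and gcd(a, n * l) == 1:
            lhs *= 1 - zeta(n * l, a)
    # Right side: (1 - zeta_n) / (1 - zeta_n^(l^{-1} mod n)).
    rhs = (1 - zeta(n)) / (1 - zeta(n, pow(l, -1, n)))
    return abs(lhs - rhs)

print(norm_relation_gap(7, 3))    # prints a value near zero (~1e-15)
print(norm_relation_gap(15, 7))   # likewise
```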
Kolyvagin used this Euler system to give an elementary proof of the Gras conjecture.
Gauss sums
Elliptic units
Heegner points
Kolyvagin constructed an Euler system from the Heegner points of an elliptic curve, and used this to show that in some cases the Tate-Shafarevich group is finite.
Kato's Euler system
Kato's Euler system consists of certain elements occurring in the algebraic K-theory of modular curves. These elements—named Beilinson elements after Alexander Beilinson, who introduced them—were used by Kazuya Kato to prove one divisibility in Barry Mazur's main conjecture of Iwasawa theory for elliptic curves.
Notes
References
External links
Several papers on Kolyvagin systems are available at Barry Mazur's web page (as of July 2005).
Algebraic number theory | Euler system | [
"Mathematics"
] | 767 | [
"Algebraic number theory",
"Number theory"
] |
2,265,038 | https://en.wikipedia.org/wiki/Caret%20notation | Caret notation is a notation for control characters in ASCII. The notation assigns ^A to control-code 1, sequentially through the alphabet to ^Z, assigned to control-code 26 (0x1A). For the control-codes outside of the range 1–26, the notation extends to the adjacent, non-alphabetic ASCII characters.
Often a control character can be typed on a keyboard by holding down the Ctrl key and typing the character shown after the caret. The notation is often used to describe keyboard shortcuts even though the control character is not actually used (as in "type ^X to cut the text").
The meaning or interpretation of, or response to, the individual control-codes is not prescribed by the caret notation.
Description
The notation consists of a caret (^) followed by a single character (usually a capital letter). The character has the ASCII code equal to the control code with the bit representing 0x40 reversed. A useful mnemonic, this has the effect of rendering the control codes 1 through 26 as ^A through ^Z. Seven ASCII control characters map outside the upper-case alphabet: 0 (NUL) is ^@, 27 (ESC) is ^[, 28 is ^\, 29 is ^], 30 is ^^, 31 is ^_, and 127 (DEL) is ^?.
Examples are "^M^J" for the Windows CR, LF newline pair, and describing the ANSI escape sequence to clear the screen as "^[[2J".
Only the use of characters in the range of 63–95 ("?" through "_") is specifically allowed in the notation, but use of lower-case alphabetic characters entered at the keyboard is nearly always allowed – they are treated as equivalent to upper-case letters. When converting to a control character, except for '?', masking with 0x1F will produce the same result and also turn lower-case into the same control character as upper-case.
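Since the correspondence is a single bit flip, it is easy to express in code. The following is a minimal sketch (the function names are illustrative, not from any standard library):

```python
def to_caret(ch):
    """Render a C0 control character or DEL in caret notation;
    any other character passes through unchanged."""
    code = ord(ch)
    if code < 0x20 or code == 0x7F:
        return "^" + chr(code ^ 0x40)   # flip the 0x40 bit
    return ch

def from_caret(letter):
    """Map the letter written after '^' back to a control character.
    Lower-case input is folded to upper-case, as most software allows."""
    return chr(ord(letter.upper()) ^ 0x40)

assert to_caret("\x03") == "^C"    # ETX, i.e. the character sent by Ctrl-C
assert to_caret("\x7f") == "^?"    # DEL
assert from_caret("m") == "\r"     # ^M is carriage return
```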
There is no corresponding version of the caret notation for control-codes with more than 7 bits such as the C1 control characters from 128–159 (0x80–0x9F). Some programs that produce caret notation show these as backslash and octal ("\200" through "\237"). Also see the bar notation used by Acorn Computers, below.
History
The convention dates back to at least the PDP-6 (1964). A manual for the PDP-6 describes typing control-C as printing ↑C, i.e., a small superscript upwards arrow before the C. In the change from 1961 ASCII to 1968 ASCII, the up arrow became a caret.
Use in software
Many computer systems allow the user to enter a control character by holding down Ctrl and pressing the letter used in the caret notation. This is practical, because many control characters (e.g., EOT) cannot be entered directly from a keyboard. Although there are many ways to represent control characters, this correspondence between notation and typing makes the caret notation suitable for many applications.
Usually, the need to hold down Shift is avoided; for instance, lower-case letters work just like upper-case ones. On a US keyboard layout, Ctrl+? produces DEL and Ctrl+@ produces ^@. It is also common for Ctrl+Space to produce ^@.
Caret notation is used to describe control characters in output by many programs, particularly Unix terminal drivers and text file viewers such as the more and less commands. Although the use of control-codes is somewhat standard, some uses differ from operating system to operating system, or even from program to program. The actual meaning or interpretation of the individual control-codes is not prescribed by the caret notation, and although the ASCII specification does give names to the control-codes, it does not prescribe how software should respond to them.
Alternate notations
The GSTrans string processing API on the operating systems for the Acorn Atom and the BBC Micro, and on RISC OS for the Acorn Archimedes and later machines, use the vertical bar character in place of the caret. For example, |M (pronounced "control em", the same as for the ^M notation) is the carriage return character, ASCII 13. || is the vertical bar character, code 124, |? is character 127 as above, and |! adds 128 to the code of the character that follows it, so |!|? is character code 255.
See also
C0 and C1 control codes, which shows the caret notation for all C0 control codes as well as DEL
Control key
References
Control characters
Character sets | Caret notation | [
"Technology"
] | 905 | [
"Computing stubs",
"Digital typography stubs"
] |
2,265,048 | https://en.wikipedia.org/wiki/Comparison%20of%20project%20management%20software | The following is a comparison of project management software.
General information
Features
Monetary features
See also
Kanban (development)
Project management software
Project planning
Comparison of scrum software
Comparison of development estimation software
Comparison of source-code-hosting facilities
Comparison of CRM systems
Notes
References
Project management software | Comparison of project management software | [
"Technology"
] | 57 | [
"Software comparisons",
"Computing comparisons"
] |
2,265,144 | https://en.wikipedia.org/wiki/Syneresis%20%28chemistry%29 | Syneresis (also spelled 'synæresis' or 'synaeresis'), in chemistry, is the extraction or expulsion of a liquid from a gel, such as when serum drains from a contracting clot of blood. Another example of syneresis is the collection of whey on the surface of yogurt. Syneresis can also be observed when the amount of diluent in a swollen polymer exceeds the solubility limit as the temperature changes. A household example of this is the counterintuitive expulsion of water from dry gelatin when the temperature increases. Syneresis has also been proposed as the mechanism of formation for the amorphous silica composing the frustule of diatoms.
Examples
In the processing of dairy milk, for example during cheese making, syneresis is the formation of the curd due to the sudden removal of the hydrophilic macropeptides, which causes an imbalance in intermolecular forces. Bonds between hydrophobic sites start to develop and are enforced by calcium bonds, which form as the water molecules in the micelles start to leave the structure. This process is usually referred to as the phase of coagulation and syneresis. The splitting of the bond between residues 105 and 106 in the κ-casein molecule is often called the primary phase of the rennet action, while the phase of coagulation and syneresis is referred to as the secondary phase.
In cooking, syneresis is the sudden release of moisture contained within protein molecules, usually caused by excessive heat, which over-hardens the protective shell. Moisture inside expands upon heating. The hard protein shell pops, expelling the moisture.
This process is responsible for transforming juicy rare steak into dry steak when cooked thoroughly. It creates weeping in scrambled eggs, with dry protein curd swimming in the released moisture. It also causes emulsified sauces, such as hollandaise, to "break" ("split"). Additionally, it creates unsightly moisture pockets within baked custard dishes, such as flan or crème brûlée.
Gels formed from agarose are prone to syneresis, and the degree of syneresis is inversely proportional to the concentration of the agarose in the gels.
In dentistry, syneresis is the expulsion of water or other liquid molecules from dental impression materials (for instance, alginate) after an impression has been taken. Due to this process, the impression shrinks a little and therefore its size is no longer accurate. For this reason, many dental impression companies strongly recommend pouring the dental cast as soon as possible to prevent distortion of the dimensions of the teeth and objects in the impression.
The opposite process of syneresis is imbibition, which is the process of a material absorbing water molecules from the surroundings. Alginate also demonstrates imbibition because it will absorb water if soaked in it.
See also
Coagulation
Flocculation
References
Chemical processes
Chemical mixtures
Colloidal chemistry | Syneresis (chemistry) | [
"Physics",
"Chemistry",
"Materials_science"
] | 624 | [
"Colloidal chemistry",
"Colloids",
"Surface science",
"Chemical processes",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Chemical process engineering",
"Chemical process stubs"
] |
2,265,169 | https://en.wikipedia.org/wiki/List%20of%20tautonyms | The following is a list of tautonyms: zoological names of species consisting of two identical words (the generic name and the specific name have the same spelling). Such names are allowed in zoology, but not in botany, where the two parts of the name of a species must differ (though differences as small as one letter are permitted, as in cumin, Cuminum cyminum).
Mammals
Alces alces (Linnaeus, 1758) — Eurasian elk, moose
Axis axis (Erxleben, 1777) — chital, axis deer
Bison bison (Linnaeus, 1758) — American bison, buffalo
Capreolus capreolus (Linnaeus, 1758) — European roe deer, roe deer
Caracal caracal (Schreber, 1776) — caracal
Chinchilla chinchilla (Lichtenstein, 1829) — short-tailed chinchilla
Chiropotes chiropotes (Humboldt, 1811) — red-backed bearded saki
Cricetus cricetus (Linnaeus, 1758) — common hamster, European hamster
Crocuta crocuta (Erxleben, 1777) — spotted hyena
Dama dama (Linnaeus, 1758) — European fallow deer
Feroculus feroculus (Kelaart, 1850) — Kelaart's long-clawed shrew
Gazella gazella (Pallas, 1766) — mountain gazelle
Genetta genetta (Linnaeus, 1758) — common genet
Gerbillus gerbillus (Olivier, 1801) — lesser Egyptian gerbil
Giraffa giraffa (von Schreber, 1784) — southern giraffe
Glis glis (Linnaeus, 1766) — European edible dormouse, European fat dormouse
Gorilla gorilla (Savage, 1847) — western gorilla
Gulo gulo (Linnaeus, 1758) — wolverine
Hoolock hoolock (Harlan, 1834) — western hoolock gibbon
Hyaena hyaena (Linnaeus, 1758) — striped hyena
Indri indri (Gmelin, 1788) — indri
Jaculus jaculus (Linnaeus, 1758) — lesser Egyptian jerboa
Lagurus lagurus (Pallas, 1773) — steppe vole, steppe lemming
Lemmus lemmus (Linnaeus, 1758) — Norway lemming
Lutra lutra (Linnaeus, 1758) — European otter
Lynx lynx (Linnaeus, 1758) — Eurasian lynx
Macrophyllum macrophyllum (Schinz, 1821) — long-legged bat
Marmota marmota (Linnaeus, 1758) — Alpine marmot
Martes martes (Linnaeus, 1758) — European pine marten, pine marten
Meles meles (Linnaeus, 1758) — European badger, Eurasian badger
Mephitis mephitis (Schreber, 1776) — striped skunk
Molossus molossus (Pallas, 1766) — Pallas's mastiff bat
Monachus monachus (Hermann, 1779) — Mediterranean monk seal
Mops mops (de Blainville, 1840) — Malayan free-tailed bat
Myospalax myospalax (Laxmann, 1773) — Siberian zokor
Myotis myotis (Borkhausen, 1797) — mouse-eared myotis, greater mouse-eared bat
Nasua nasua (Linnaeus, 1766) — South American coati, coatimundi
Niviventer niviventer (Hodgson, 1836) — Himalayan niviventer, white-bellied rat
Nombe nombe (Flannery et al., 1983)
Oreotragus oreotragus (Zimmermann, 1783) — klipspringer
Papio papio (Desmarest, 1820) — Guinea baboon
Petaurista petaurista (Pallas, 1766) — red giant flying squirrel
Phocoena phocoena (Linnaeus, 1758) — harbor porpoise, harbour porpoise
Pipistrellus pipistrellus (Schreber, 1774) — common pipistrelle
Pithecia pithecia (Linnaeus, 1766) — white-faced saki
Rattus rattus (Linnaeus, 1758) — black rat, roof rat
Redunca redunca (Pallas, 1767) — Bohor reedbuck
Rupicapra rupicapra (Linnaeus, 1758) — chamois, Alpine chamois
Saccolaimus saccolaimus (Temminck, 1838) — naked-rumped pouched bat
Vulpes vulpes (Linnaeus, 1758) — red fox
Birds
Alle alle (Linnaeus, 1758) — little auk, dovekie
Amandava amandava (Linnaeus, 1758) — red avadavat
Anhinga anhinga (Linnaeus, 1766) — anhinga
Anser anser (Linnaeus, 1758) — greylag goose
Antigone antigone (Linnaeus, 1758) — sarus crane
Apus apus (Linnaeus, 1758) — common swift
Bubo bubo (Linnaeus, 1758) — Eurasian eagle-owl
Buteo buteo (Linnaeus, 1758) — common buzzard
Calliope calliope (Pallas, 1776) — Siberian rubythroat
Cardinalis cardinalis (Linnaeus, 1758) — northern cardinal
Carduelis carduelis (Linnaeus, 1758) — European goldfinch
Casuarius casuarius (Linnaeus, 1758) — southern cassowary
Chloris chloris (Linnaeus, 1758) — European greenfinch
Ciconia ciconia (Linnaeus, 1758) — white stork
Cinclus cinclus (Linnaeus, 1758) — white-throated dipper
Clanga clanga (Pallas, 1811) — greater spotted eagle
Coccothraustes coccothraustes (Linnaeus, 1758) — hawfinch
Cochlearius cochlearius (Linnaeus, 1766) — boat-billed heron
Coeligena coeligena (Lesson, 1833) — bronzy inca
Colius colius (Linnaeus, 1766) — white-backed mousebird
Coscoroba coscoroba (Molina, 1782) — coscoroba swan
Cotinga cotinga (Linnaeus, 1766) — purple-breasted cotinga
Coturnix coturnix (Linnaeus, 1758) — common quail
Crex crex (Linnaeus, 1758) — corn crake, corncrake
Crossoptilon crossoptilon (Hodgson, 1838) — white eared pheasant
Curaeus curaeus (Molina, 1782) — austral blackbird
Curruca curruca (Linnaeus, 1758) — lesser whitethroat
Cyanicterus cyanicterus (Vieillot, 1819) — blue-backed tanager
Cygnus cygnus (Linnaeus, 1758) — whooper swan
Diuca diuca (Molina, 1782) — diuca finch
Dives dives (Deppe, 1830) — melodious blackbird
Ensifera ensifera (Boissonneau, 1840) — sword-billed hummingbird
Erythrogenys erythrogenys (Vigors, 1831) — rusty-cheeked scimitar babbler
Falcipennis falcipennis (Hartlaub, 1855) — Siberian grouse
Francolinus francolinus (Linnaeus, 1766) — black francolin
Galbula galbula (Linnaeus, 1766) — green-tailed jacamar
Gallinago gallinago (Linnaeus, 1758) — common snipe
Gallus gallus (Linnaeus, 1758) — red junglefowl
Granatina granatina (Linnaeus, 1766) — violet-eared waxbill
Grus grus (Linnaeus, 1758) — common crane
Guira guira (Gmelin, 1788) — guira cuckoo
Himantopus himantopus (Linnaeus, 1758) — black-winged stilt
Histrionicus histrionicus (Linnaeus, 1758) — harlequin duck
Ichthyaetus ichthyaetus (Pallas, 1773) — Pallas's gull
Icterus icterus (Linnaeus, 1766) — Venezuelan troupial
Incana incana (Sclater & Hartlaub, 1881) — Socotra warbler
Indicator indicator (Sparrman, 1777) — greater honeyguide
Jacana jacana (Linnaeus, 1766) — wattled jacana
Lagopus lagopus (Linnaeus, 1758) — willow ptarmigan
Lerwa lerwa (Hodgson, 1833) — snow partridge
Leucogeranus leucogeranus (Pallas, 1773) — Siberian crane
Limosa limosa (Linnaeus, 1758) — black-tailed godwit
Luscinia luscinia (Linnaeus, 1758) — thrush nightingale
Manacus manacus (Linnaeus, 1766) — white-bearded manakin
Mascarinus mascarinus (Linnaeus, 1771) — Mascarene parrot or mascarin
Melanodera melanodera (Quoy & Gaimard, 1824) — white-bridled finch
Milvus milvus (Linnaeus, 1758) — red kite
Mitu mitu (Linnaeus, 1766) — Alagoas curassow
Nycticorax nycticorax (Linnaeus, 1758) — black-crowned night heron
Oenanthe oenanthe (Linnaeus, 1758) — northern wheatear
Oriolus oriolus (Linnaeus, 1758) — Eurasian golden oriole
Pampa pampa (Lesson, 1832) — wedge-tailed sabrewing
Pauxi pauxi (Linnaeus, 1766) — helmeted curassow
Perdix perdix (Linnaeus, 1758) — grey partridge
Petronia petronia (Linnaeus, 1766) — rock sparrow
Phoenicurus phoenicurus (Linnaeus, 1758) — common redstart
Pica pica (Linnaeus, 1758) — Eurasian magpie
Pipile pipile (Jacquin, 1784) — Trinidad piping guan
Poliocephalus poliocephalus (Jardine & Selby, 1827) — hoary-headed grebe
Porphyrio porphyrio (Linnaeus, 1758) — western swamphen
Porphyrolaema porphyrolaema (Deville & Sclater, 1852) — purple-throated cotinga
Porzana porzana (Linnaeus, 1766) — spotted crake
Puffinus puffinus (Brünnich, 1764) — Manx shearwater
Pyrilia pyrilia (Bonaparte, 1853) — saffron-headed parrot
Pyrope pyrope (von Kittlitz, 1830) — fire-eyed diucon
Pyrrhocorax pyrrhocorax (Linnaeus, 1758) — red-billed chough
Pyrrhula pyrrhula (Linnaeus, 1758) — Eurasian bullfinch
Quelea quelea (Linnaeus, 1758) — red-billed quelea
Radjah radjah (Garnot & Lesson, 1828) — radjah shelduck
Regulus regulus (Linnaeus, 1758) — goldcrest
Riparia riparia (Linnaeus, 1758) — sand martin, bank swallow
Rupicola rupicola (Linnaeus, 1766) — Guianan cock-of-the-rock
Serinus serinus (Linnaeus, 1766) — European serin
Spinus spinus (Linnaeus, 1758) — Eurasian siskin
Suiriri suiriri (Vieillot, 1818) — suiriri flycatcher
Sula sula (Linnaeus, 1766) — red-footed booby
Tadorna tadorna (Linnaeus, 1758) — common shelduck
Tchagra tchagra (Vieillot, 1816) — southern tchagra
Temnurus temnurus (Temminck, 1825) — ratchet-tailed treepie
Tetrax tetrax (Linnaeus, 1758) — little bustard
Todus todus (Linnaeus, 1758) — Jamaican tody
Troglodytes troglodytes (Linnaeus, 1758) — Eurasian wren
Tyrannus tyrannus (Linnaeus, 1758) — eastern kingbird
Urile urile (Gmelin, 1789) — red-faced cormorant
Vanellus vanellus (Linnaeus, 1758) — northern lapwing
Xanthocephalus xanthocephalus (Bonaparte, 1826) — yellow-headed blackbird
Xenopirostris xenopirostris (Lafresnaye, 1850) — Lafresnaye's vanga
Reptiles
Agama agama (Linnaeus, 1758) — rainbow agama
Ameiva ameiva (Linnaeus, 1758) — giant ameiva
Basiliscus basiliscus (Linnaeus, 1758) — common basilisk
Calotes calotes (Linnaeus, 1758) — common green forest lizard
Caretta caretta (Linnaeus, 1758) — loggerhead sea turtle
Cerastes cerastes (Linnaeus, 1758) — desert horned viper
Chalcides chalcides (Linnaeus, 1758) — Italian three-toed skink
Clelia clelia (Daudin, 1803) — mussurana
Cordylus cordylus (Linnaeus, 1758) — Cape girdled lizard
Enhydris enhydris (Schneider, 1799) — rainbow water snake
Hypnale hypnale (Merrem, 1820) — hump-nosed viper
Iguana iguana (Linnaeus, 1758) — green iguana, common iguana
Naja naja (Linnaeus, 1758) — Indian cobra
Natrix natrix (Linnaeus, 1758) — grass snake
Ophioscincus ophioscincus (Boulenger, 1887) — yolk-bellied snake-skink
Plica plica (Linnaeus, 1758) — collared treerunner
Scincus scincus (Linnaeus, 1758) — sandfish
Suta suta (Peters, 1863) — curl snake
Tetradactylus tetradactylus (Daudin, 1802) — long-toed seps
Amphibians
Bombina bombina (Linnaeus, 1761) — European fire-bellied toad
Bufo bufo (Linnaeus, 1758) — common toad
Pipa pipa (Linnaeus, 1758) — Suriname toad
Salamandra salamandra (Linnaeus, 1758) — fire salamander
Fish
Alburnus alburnus — bleak
Alosa alosa — allis shad
Anableps anableps — largescale foureyes
Anguilla anguilla — European eel
Anostomus anostomus — striped headstander
Anthias anthias — swallowtail seaperch
Aspredo aspredo — a species of banjo catfish
Badis badis — blue perch, blue badis
Bagarius bagarius — devil catfish, goonch
Bagre bagre — coco sea catfish
Banjos banjos — banjofish
Barbatula barbatula — stone loach
Barbus barbus — barbel
Batasio batasio — a species of naked catfish
Belobranchus belobranchus — throat-spine gudgeon
Belone belone — garfish
Bidyanus bidyanus — silver perch
Boops boops — bogue
Brama brama — Atlantic pomfret
Brosme brosme — cusk
Butis butis — duckbill sleeper, crazy fish
Calamus calamus — saucereye porgy
Callichthys callichthys — cascarudo, armoured catfish
Capoeta capoeta — Caucasian scraper, Sevan khramulya
Carassius carassius — crucian carp
Catla catla — catla
Catostomus catostomus — longnose sucker
Chaca chaca — frogmouth catfish
Chandramara chandramara — a species of naked catfish
Chanos chanos — milkfish
Chitala chitala — Indian featherback
Chromis chromis — Mediterranean damselfish, Mediterranean chromis
Conger conger — European conger
Conta conta — conta catfish
Cynoglossus cynoglossus — Bengal tonguesole
Dactylopus dactylopus — Fingered dragonet
Dario dario — scarlet badis, scarlet dario
Decorus decorus — a species of Chinese carp/minnow
Dentex dentex — common dentex
Devario devario — Bengal danio
Erythrinus erythrinus — red wolffish
Gagata gagata — a species of sisorid catfish
Glyphis glyphis — speartooth shark
Gobio gobio — gudgeon
Gonorynchus gonorynchus — mousefish, beaked sandfish, beaked salmon
Hara hara — a species of South Asian river catfish
Hemilepidotus hemilepidotus — red Irish lord
Hippocampus hippocampus — short-snouted seahorse
Hippoglossus hippoglossus — Atlantic halibut
Histrio histrio — sargassum fish
Hucho hucho — Danube salmon, huchen
Huso huso — beluga sturgeon
Lactarius lactarius — false trevally
Lagocephalus lagocephalus — oceanic puffer
Lepadogaster lepadogaster — shore clingfish
Leuciscus leuciscus — common dace
Limanda limanda — common dab
Liparis liparis — common seasnail
Lithognathus lithognathus — white steenbras
Lota lota — burbot
Lutjanus lutjanus — bigeye snapper
Menidia menidia — Atlantic silverside
Merluccius merluccius — European hake
Microstoma microstoma — slender argentine
Mola mola — ocean sunfish
Molva molva — common ling
Mustelus mustelus — common smooth-hound
Myaka myaka — myaka
Nangra nangra — a species of sisorid catfish
Notopterus notopterus — bronze featherback
Oplopomus oplopomus — spinecheek goby
Pagrus pagrus — red porgy
Pangasius pangasius — pangas catfish
Phoxinus phoxinus — Eurasian minnow
Phycis phycis — forkbeard
Pinjalo pinjalo — pinjalo
Pollachius pollachius — pollack
Pristis pristis — largetooth sawfish
Pseudobagarius pseudobagarius — a species of stream catfish
Pungitius pungitius — ninespine stickleback
Rama rama — a species of naked catfish
Rasbora rasbora — Gangetic scissortail rasbora
Remora remora — common remora
Retropinna retropinna — New Zealand smelt
Rhinobatos rhinobatos — common guitarfish
Rita rita — rita
Rubicundus rubicundus — a species of hagfish
Rutilus rutilus — common roach
Sarda sarda — Atlantic bonito
Solea solea — common sole
Sphyraena sphyraena — European barracuda
Spinachia spinachia — sea stickleback
Sprattus sprattus — European sprat
Squatina squatina — angelshark
Synodus synodus — diamond lizardfish
Tandanus tandanus — eel-tailed catfish
Thymallus thymallus — grayling
Tinca tinca — tench
Torpedo torpedo — common torpedo
Trachurus trachurus — Atlantic horse mackerel
Trachycorystes trachycorystes — black catfish
Tropheops tropheops — golden tropheops
Vimba vimba — vimba bream
Zebrus zebrus — zebra goby
Zingel zingel — zingel
Zungaro zungaro — gilded catfish
Arthropods
Aniculus aniculus — a hermit crab
Anthrax anthrax — a bee fly
Appia appia — appia skipper
Ariadne ariadne — angled castor (a brush-footed butterfly)
Arita arita — arita skipper
Aroma aroma — aroma skipper
Aspitha aspitha — aspitha firetip (a skipper)
Astacus astacus — European crayfish
Avicularia avicularia — pinktoe tarantula
Balanus balanus — a barnacle
Bruna bruna — a skipper
Bucayana bucayana — a cranaid harvestman
Cactus cactus — a water flea
Calappa calappa — smooth box crab
Caleta caleta — angled Pierrot (a gossamer-winged butterfly)
Cephise cephise — a skipper
Clibanarius clibanarius — a hermit crab
Corticea corticea — redundant skipper
Cossus cossus — goat moth
Crangon crangon — brown shrimp
Cressida cressida — big greasy or clearwing swallowtail
Cumbre cumbre — a skipper
Cynea cynea — cynea skipper
Danis danis — large green-banded blue (a gossamer-winged butterfly)
Decinea decinea — decinea or Huastecan skipper
Ebusus ebusus — ebusus skipper
Ephippiger ephippiger — saddle-backed bush cricket
Erina erina — small dusky-blue (a gossamer-winged butterfly)
Flandria flandria — small flandria (a skipper)
Furcula furcula — sallow kitten (a notodontid moth)
Gesta gesta — impostor duskywing (a skipper)
Grapsus grapsus — red rock crab
Gryllotalpa gryllotalpa — European mole cricket
Idea idea — Linnaeus's idea (a brush-footed butterfly)
Joanna joanna — Joanna's skipper
Lamponia lamponia — a skipper
Lento lento — a skipper
Levina levina — a skipper
Librita librita — librita skipper
Ludens ludens — ludens skipper
Mashuna mashuna — Mashuna ringlet (a brush-footed butterfly)
Megacephala megacephala — big-headed tiger beetle
Melolontha melolontha — common cockchafer
Menander menander — menander metalmark
Meza meza — common missile (a skipper)
Misius misius — misius skipper
Moeros moeros — a skipper
Molla molla — a skipper
Mortola mortola — a solifuge
Narcosius narcosius — a skipper
Neita neita — Neita brown (a brush-footed butterfly)
Nyctelius nyctelius — violet-banded or nyctelius skipper
Orthos orthos — orthos skipper
Pamba pamba — a skipper
Passova passova — passova firetip (a skipper)
Pilosa pilosa — a zalmoxid harvestman
Plumbago plumbago — a skipper
Pollicipes pollicipes — a goose barnacle
Polyctor polyctor — polyctor tufted-skipper
Pompeius pompeius — pompeius skipper
Propertius propertius — propertius skipper
Protesilaus protesilaus — great kite-swallowtail
Punta punta — a skipper
Racta racta — racta skipper
Ranina ranina — red frog crab
Repens repens — a skipper
Ridens ridens — frosted skipper
Roche roche – an ochyroceratid spider
Sacrator sacrator — a skipper
Salatis salatis — variable scarlet-eye (a skipper)
Saturnus saturnus — a skipper
Scalpellum scalpellum — a goose barnacle
Scolytus scolytus — large elm bark beetle
Sodreana sodreana — a gonyleptid harvestman
Speculum speculum — hidden mirror skipper
Sucova sucova — sucova skipper
Tosta tosta — a skipper
Tromba tromba — a skipper
Turmada turmada — a skipper
Vermileo vermileo — a wormlion
Vidius vidius — a skipper
Xanthopygus xanthopygus — a rove beetle
Xanthostigma xanthostigma — a snakefly
Zera zera — zera skipper
Zingha zingha — a brush-footed butterfly
Zoma zoma — a ray spider
Zonia zonia — zonia skipper
Zygoneura zygoneura — a dark-winged fungus gnat
Molluscs
Achatina achatina — African giant snail, giant tiger land snail
Agagus agagus
Arcinella arcinella — Caribbean spiny jewel box
Belonimorphis belonimorphis
Columella columella
Concholepas concholepas — loco or Chilean abalone
Cymbium cymbium — false elephant's snout volute
Dolabrifera dolabrifera
Ensis ensis — razor clam
Extra extra
Faustina faustina
Ficus ficus — paper fig shell
Fragum fragum — white strawberry cockle
Gemma gemma — amethyst gem clam
Gibberulus gibberulus — humpbacked conch
Glycymeris glycymeris — dog cockle
Harpa harpa — true harp
Haustellum haustellum
Hippopus hippopus — bear paw clam or horse's hoof clam
Irus irus — irus clam
Janthina janthina — violet snail
Koilofera koilofera
Lambis lambis — a spider conch
Lima lima — spiny fileclam
Lithophaga lithophaga — date mussel
Lutraria lutraria — otter shell
Mandarina mandarina
Margaritifera margaritifera — freshwater pearl mussel
Melo melo — Indian volute or melon shell
Melongena melongena - Caribbean crown conch
Mercenaria mercenaria — northern quahog, hard clam
Meretrix meretrix
Mitra mitra — Episcopal miter shell
Modiolus modiolus — northern horse mussel
Modulus modulus
Neocrassa neocrassa
Ogasawarana ogasawarana
Oliva oliva — olive shell
Perna perna — brown mussel
Planorbis planorbis
Quadriplicata quadriplicata
Quadrula quadrula — mapleleaf mussel
Rapa rapa — bubble turnip
Spirula spirula — ram's horn squid
Staphylaea staphylaea — stippled cowry
Sultana sultana
Telescopium telescopium — telescope snail
Terebellum terebellum — terebellum conch
Tergipes tergipes
Tricornis tricornis — three-cornered conch
Umbraculum umbraculum — umbrella slug
Velutina velutina — velvet shell
Villosa villosa — a freshwater mussel
Viviparus viviparus — a European freshwater snail
Volva volva — shuttlecock volva
Other
Aaptos aaptos — a sponge
Acanthogyrus acanthogyrus — a parasitic worm in the family Quadrigyridae
Cephea cephea — crown jellyfish, cauliflower jellyfish
Chaos chaos — an amoeba
Cidaris cidaris — long-spine slate pen sea urchin
Convoluta convoluta — a flatworm
Crambe crambe — oyster sponge
Crassicauda crassicauda — a nematode
Echiurus echiurus — a spoon worm
Hamigera hamigera — a sponge
Heterophyes heterophyes — an intestinal fluke
Loa loa — a nematode
Mediocris mediocris — a foraminiferan
Moniliformis moniliformis — an acanthocephalan worm
Ophiura ophiura – serpent star (a brittle star)
Periphylla periphylla — helmet jellyfish
Porites porites — hump coral, finger coral
Porpita porpita — blue button (a siphonophore)
Spirorbis spirorbis — an annelid
Thalassema thalassema — a spoon worm
Tubifex tubifex — sludge worm
Turgida turgida — a nematode
Velella velella — by-the-wind-sailor
Yukonensis yukonensis — an archaeocyathan
Plant near-tautonyms
Abutilon abutiloides — shrubby Indian mallow
Araucaria araucana — monkey puzzle tree
Arctostaphylos uva-ursi — kinnikinnick (does not look like a tautonym, but Arctostaphylos means "bear grape" in Greek and uva-ursi means "bear grape" in Latin)
Bituminaria bituminosa — Arabian pea
Cajanus cajan — pigeon pea
Canarina canariensis — Canary Island bellflower
Cuminum cyminum — cumin
Elymus elymoides — squirreltail
Elymus hystrix var. hystrix (formerly Hystrix patula) — eastern bottlebrush grass
Hypericum hypericoides — St. Andrew's cross
Inga ingoides — ice cream bean
Larix laricina — tamarack
Luzula luzuloides — white wood-rush
Madia madioides — woodland madia
Medinilla medinilliana
Mielichhoferia mielichhoferiana — Mielichhof's copper moss
Mycoporum mycoporoides
Micranthemum micranthemoides — Nuttall's mudflower
Phleum phleoides — Boehmer's cat's-tail
Pinus pinea — stone pine
Pyrus pyrifolia — Asian pear
Raphanus raphanistrum — wild radish
Sagittaria sagittifolia — arrowhead
Salacca zalacca — salak
Selaginella selaginoides — common spikemoss
Silaum silaus — pepper saxifrage
Soleirolia soleirolii — mother of thousands
Spartina spartinae — gulf cordgrass
Spiranthes spiralis — autumn lady's-tresses
Tetragonia tetragonioides — New Zealand spinach
Thalictrum thalictroides — rue-anemone
Uncinia uncinata — Hawai'i bird-catching sedge
Zinnia zinnioides — Zinnia-like Zinnia
Ziziphus zizyphus — jujube (note that the currently accepted name is Ziziphus jujuba)
Fungal near-tautonyms
Alternaria alternata
Aspergillus asper
Bovista bovistoides
Flagelloscypha flagellata
Hydropus hydrophoroides
Laccaria laccata — lackluster laccaria
Melanoleuca melaleuca
Roridomyces roridus — dripping bonnet
Sclerotinia sclerotiorum — plant pathogen causing white mold
Scutellinia scutellata — eyelash pixie cup
Volvariella volvacea — straw mushroom
See also
List of triple tautonyms
Bilingual tautological expressions
Zoological nomenclature
Lists of animals | List of tautonyms | [
"Biology"
] | 6,450 | [
"Zoological nomenclature",
"Animals",
"Lists of biota",
"Biological nomenclature",
"Lists of animals"
] |
2,265,316 | https://en.wikipedia.org/wiki/Queen%27s%20metal | Queen's Metal, an alloy of nine parts tin and one each of antimony, lead, and bismuth, is intermediate in hardness between pewter and britannia metal. It was developed by English pewtersmiths in the 16th century; the recipe was initially a secret and was reserved for pieces made for the English royal family.
References
Fusible alloys
Tin alloys
Lead alloys
Antimony alloys
Bismuth alloys | Queen's metal | [
"Chemistry",
"Materials_science"
] | 85 | [
"Lead alloys",
"Alloy stubs",
"Metallurgy",
"Bismuth alloys",
"Fusible alloys",
"Tin alloys",
"Alloys",
"Antimony alloys"
] |
2,265,351 | https://en.wikipedia.org/wiki/Spite%20%28game%20theory%29 | In fair division problems, spite is a phenomenon that occurs when a player's value of an allocation decreases as one or more other players' valuations increase. Thus, other things being equal, a player exhibiting spite will prefer allocations in which other players receive less rather than more (if more of the good is desirable).
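One way to make the definition concrete is to give the spiteful player a utility that subtracts a weighted sum of the other players' valuations; the parametrization below is an illustration, not a standard model from the fair-division literature:

\[ u_i(A) \;=\; v_i(A_i) \;-\; \sigma \sum_{j \neq i} v_j(A_j), \qquad \sigma > 0. \]

Holding player i's own share A_i fixed, u_i then strictly decreases whenever another player's valuation v_j(A_j) increases, which is exactly the property described above.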
In this language, spite is difficult to analyze because one has to assess two sets of preferences. For example, in the divide and choose method, a spiteful player would have to make a trade-off between depriving his opponent of cake, and getting more himself.
Within the field of sociobiology, spite is used to describe those social behaviors that have a negative impact on both the actor and recipient(s). Spite can be favored by kin selection when: (a) it leads to an indirect benefit to some third party that is sufficiently related to the actor (Wilsonian spite); or (b) when it is directed primarily at negatively related individuals (Hamiltonian spite). Negative relatedness occurs when two individuals are less related than average.
In game theory
The iterated prisoner's dilemma provides an example where players may "punish" each other for failing to cooperate in previous rounds, even if doing so would cause negative consequences for both players. For example, the simple "tit for tat" strategy has been shown to be effective in round-robin tournaments of iterated prisoner's dilemma.
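The dynamic is straightforward to simulate. In the sketch below, the payoff numbers (3/5/1/0) are the conventional textbook values and the strategies are standard ones; the surrounding code is an illustrative assumption, not taken from a specific source:

```python
# Payoffs (row player, column player); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, other):
    return other[-1] if other else "C"      # mirror the opponent's last move

def grim_trigger(own, other):
    return "D" if "D" in other else "C"     # punish forever after one defection

def suspicious_tft(own, other):
    return "D" if not own else other[-1]    # defect once, then mirror

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, totals = [], [], [0, 0]
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        totals[0] += pa; totals[1] += pb
    return totals

print(play(tit_for_tat, tit_for_tat))       # [30, 30]: sustained cooperation
print(play(grim_trigger, suspicious_tft))   # [13, 13]: one defection hurts both
```

The grim strategy's permanent retaliation is the spiteful element: once triggered, it forgoes two points per round relative to restored cooperation purely to hold the opponent's payoff down.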
In industrial relations
There is always difficulty in fairly dividing the proceeds of a business between the business owners and the employees.
When a trade union decides to call a strike, both employer and the union members lose money (and may damage the national economy). The unionists hope that the employer will give in to their demands before such losses have destroyed the business.
In the reverse direction, an employer may terminate the employment of certain productive workers who are agitating for higher wages or organising a trade union. Losing productive workers is a setback to both the business and the employees but this can serve as an example to others and thus maximise employer power.
In behavioral ecology
Polyembryonic wasps, including C. floridanum, exhibit spite through instances of precocious larval development. Spite provides an explanation for how natural selection can favor harmful behaviors that are costly to both the actor and the recipient; spite is typically considered a form of altruism that benefits a secondary recipient. Two criteria demonstrate that spite is truly occurring: (i) the behavior is truly costly to the actor and does not provide a long-term direct benefit; and (ii) harming behaviors are directed toward relatively unrelated individuals.
See also
Appeal to spite
Hamilton's rule
Spite (sentiment)
References
Fair division
Game theory | Spite (game theory) | [
"Mathematics"
] | 634 | [
"Recreational mathematics",
"Game theory",
"Fair division"
] |
2,265,373 | https://en.wikipedia.org/wiki/De%20Arte%20Combinatoria | The Dissertatio de arte combinatoria ("Dissertation on the Art of Combinations" or "On the Combinatorial Art") is an early work by Gottfried Leibniz published in 1666 in Leipzig. It is an extended version of his first doctoral dissertation, written before the author had seriously undertaken the study of mathematics. The booklet was reissued without Leibniz' consent in 1690, which prompted him to publish a brief explanatory notice in the Acta Eruditorum. During the following years he repeatedly expressed regrets about its being circulated as he considered it immature. Nevertheless it was a very original work and it provided the author the first glimpse of fame among the scholars of his time.
Summary
The main idea behind the text is that of an alphabet of human thought, which is attributed to Descartes. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters. All truths may be expressed as appropriate combinations of concepts, which can in turn be decomposed into simple ideas, rendering the analysis much easier. Therefore, this alphabet would provide a logic of invention, opposed to that of demonstration which was known so far. Since all sentences are composed of a subject and a predicate, one might
Find all the predicates which are appropriate to a given subject, or
Find all the subjects which are convenient to a given predicate.
For this, Leibniz was inspired in the Ars Magna of Ramon Llull, although he criticized this author because of the arbitrariness of his categories indexing.
Leibniz discusses in this work some combinatorial concepts. He had read Clavius' comments on Sacrobosco's De sphaera mundi, and some other contemporary works. He introduced the term variationes ordinis for the permutations, combinationes for the combinations of two elements, con3nationes (shorthand for conternationes) for those of three elements, etc. His general term for combinations was complexions. He found a general formula for counting them, which he thought was original.
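For orientation, in modern notation (which Leibniz did not use; this restatement is only a gloss on his terminology) the counts involved are

\[ \binom{n}{k} \;=\; \frac{n!}{k!\,(n-k)!} \]

for the complexions of k elements chosen from n, and \( \sum_{k=1}^{n} \binom{n}{k} = 2^{n}-1 \) for the complexions of all orders taken together.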
The first examples of use of his ars combinatoria are taken from law, the musical registry of an organ, and the Aristotelian theory of generation of elements from the four primary qualities. But philosophical applications are of greater importance. He cites the idea of Thomas Hobbes that all reasoning is just a computation.
The most careful example is taken from geometry, from where we shall give some definitions. He introduces the Class I concepts, which are primitive.
Class I 1 point, 2 space, 3 included, [...] 9 parts, 10 total, [...] 14 number, 15 various [...]
Class II contains simple combinations.
Class II.1 Quantity is 14 των 9
Where των means "of the" (from the Greek genitive plural article τῶν). Thus, "Quantity" is the number of the parts. Class III contains the con3nationes:
Class III.1 Interval is 2.3.10
Thus, "Interval" is the space included in total. Of course, concepts deriving from former classes may also be defined.
Class IV.1 Line is 1/3 των 2
Where 1/3 means the first concept of class III. Thus, a "line" is the interval of (between) points.
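Leibniz's scheme amounts to a table of numbered primitives plus references into earlier classes, much like a small lookup structure. A toy sketch (the encoding below is my own illustration, not Leibniz's notation):

```python
# Class I: primitive concepts, keyed by Leibniz's numbers.
CLASS_I = {1: "point", 2: "space", 3: "included",
           9: "parts", 10: "total", 14: "number"}

# Higher-class concepts refer back to earlier entries by number.
QUANTITY = ("of", 14, 9)   # "Quantity is 14 των 9": the number of the parts
INTERVAL = (2, 3, 10)      # "Interval is 2.3.10": the space included in total

def spell_out(concept):
    """Expand numeric references into the primitive concept words."""
    return [CLASS_I.get(i, i) if isinstance(i, int) else i for i in concept]

print(spell_out(QUANTITY))  # ['of', 'number', 'parts']
print(spell_out(INTERVAL))  # ['space', 'included', 'total']
```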
Leibniz compares his system to the Chinese and Egyptian languages, although he did not really understand them at this point. For him, this is a first step towards the Characteristica Universalis, the perfect language which would provide a direct representation of ideas along with a calculus for the philosophical reasoning.
As a preface, the work begins with a proof of the existence of God, cast in geometrical form, and based on the argument from motion.
Notes
References
External links
De Arte Combinatoria, the original Latin-language text
Partial English translation
Partial German translation
Combinatorics
Philosophy of language literature
1666 books
Books by Gottfried Wilhelm Leibniz
17th-century books in Latin | De Arte Combinatoria | [
"Mathematics"
] | 840 | [
"Discrete mathematics",
"Combinatorics"
] |
2,265,435 | https://en.wikipedia.org/wiki/Arabic%20chat%20alphabet | The Arabic chat alphabet, Arabizi, Arabeezi, Arabish, Franco-Arabic or simply Franco refer to the romanized alphabets for informal Arabic dialects in which Arabic script is transcribed or encoded into a combination of Latin script and Arabic numerals. These informal chat alphabets were originally used primarily by youth in the Arab world in very informal settings—especially for communicating over the Internet or for sending messages via cellular phones—though use is not necessarily restricted by age anymore and these chat alphabets have been used in other media such as advertising.
These chat alphabets differ from more formal and academic Arabic transliteration systems, in that they use numerals and multigraphs instead of diacritics for letters such as ṭāʾ (ط) or ḍād (ض) that do not exist in the basic Latin script (ASCII), and in that what is being transcribed is an informal dialect and not Standard Arabic. These Arabic chat alphabets also differ from each other, as each is influenced by the particular phonology of the Arabic dialect being transcribed and the orthography of the dominant European language in the area—typically the language of the former colonists, and typically either French or English.
Because of their widespread use, including in public advertisements by large multinational companies, large players in the online industry like Google and Microsoft have introduced tools that convert text written in Arabish to Arabic (Google Translate and Microsoft Translator). Add-ons for Mozilla Firefox and Chrome also exist (Panlatin and ARABEASY Keyboard). The Arabic chat alphabet is never used in formal settings and is rarely, if ever, used for long communications.
History
During the last decades of the 20th century, Western text-based communication technologies, such as mobile phone text messaging, the World Wide Web, email, bulletin board systems, IRC, and instant messaging became increasingly prevalent in the Arab world. Most of these technologies originally permitted the use of the Latin script only, and some still lack support for displaying Arabic script. As a result, Arabic-speaking users frequently transliterate Arabic text into Latin script when using these technologies to communicate.
To handle those Arabic letters that do not have an approximate phonetic equivalent in the Latin script, numerals and other characters were appropriated, a practice known as "code switching". For example, the numeral "3" is used to represent the Arabic letter ʿayn (ع)—note the choice of a visually similar character, with the numeral resembling a mirrored version of the Arabic letter. Many users of mobile phones and computers use Arabish even though their system is capable of displaying Arabic script. This may be due to a lack of an appropriate keyboard layout for Arabic, or because users are already more familiar with the QWERTY or AZERTY keyboard layout.
Online communication systems, such as IRC, bulletin board systems, and blogs, are often run on systems or over protocols that do not support code pages or alternate character sets. Thus, the Arabic chat alphabet has become commonplace. It can be seen even in domain names, like Qal3ah.
According to one 2020 paper based on a survey done in and around Nazareth, there is now "a high degree of normativization or standardisation in Arabizi orthography."
Comparison table
Because of the informal nature of this system, there is no single "correct" or "official" usage. There may be some overlap in the way various letters are transliterated.
Most of the characters in the system make use of the Latin character (as used in English and French) that best approximates phonetically the Arabic letter that one would otherwise use (for example, ب corresponds to b). Regional variations in the pronunciation of an Arabic letter can also produce some variation in its transliteration (e.g. ج might be transliterated as j by a speaker of the Levantine dialect, or as g by a speaker of the Egyptian dialect).
Those letters that do not have a close phonetic approximation in the Latin script are often expressed using numerals or other characters, so that the numeral graphically approximates the Arabic letter that one would otherwise use (e.g. ع is represented using the numeral 3 because the latter looks like a vertical reflection of the former).
Since many letters are distinguished from others solely by a dot above or below the main portion of the character, the transliterations of these letters frequently use the same letter or number with an apostrophe added before or after (e.g. '3 is used to represent غ).
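A minimal decoder for the digit conventions just described might look like the sketch below (the table covers only widely shared values, regional usage varies, and the function name is illustrative):

```python
# Commonly shared Arabizi digit values (regional variants exist).
DIGIT_TO_ARABIC = {
    "2": "ء",  # hamza (glottal stop)
    "3": "ع",  # ayn
    "5": "خ",  # kha
    "6": "ط",  # emphatic t
    "7": "ح",  # pharyngeal h
    "9": "ص",  # emphatic s (also qaf in some regions)
}

def decode_digits(text):
    # Handle the apostrophe-for-dot convention for ghayn first.
    text = text.replace("'3", "غ").replace("3'", "غ")
    return "".join(DIGIT_TO_ARABIC.get(ch, ch) for ch in text)

print(decode_digits("3arabi"))  # -> "عarabi": digits decoded, Latin letters kept
```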
é, è, ch, and dj are most likely to be used in regions where French is the primary non-Arabic language. dj is especially used in Algerian Arabic.
Mainly in the Nile Valley, the final form is always ى (without dots), representing both final ي and ى. It is the more traditional way of spelling the letter for both cases.
In Iraq, and sometimes in the Persian Gulf, this may be used to transcribe [tʃ]. However, it is most often transcribed as if it were تش. In Egypt, it is instead used for transcribing [ʒ] (which can be a reduction of [dʒ]). In Israel, it is used to transcribe [ɡ], as in "ﺭﻣﺎت ﭼﺎﻥ" (Ramat Gan) or "چيميل يافيت" (Gimel Yafit).
Only used in Morocco to transliterate Spanish ⟨ch⟩.
Depending on the region, different letters may be used for the same phoneme.
The dollar sign is only used in Jordan.
This use for h is also found in Morocco.
Capitalized D and T may be used in Lebanon.
The number 8 is used for ق only in Lebanon.
Less common forms for .
The letters t and d are used for the pronunciations [t] and [d], respectively.
Used in a Palestinian dialect where the letter ك is sometimes pronounced [tʃ].
rarely spelled ⟨a⟩ as names are commonly transcribed in official documents.
Used in Morocco.
Examples
Each of the different varieties of Arabic chat alphabets is influenced by the particular phonology of the Arabic dialect being transcribed and the orthography of the dominant European language in the area—typically the language of the former colonists. Below are some examples of Arabic chat alphabet varieties.
Egyptian Arabic
The frequent use of y and w to represent ى and و demonstrates the influence of English orthography on the romanization of Egyptian Arabic.
Additionally, the letter qāf (ق) is usually pronounced as a glottal stop, like a hamza (ء) in Metropolitan (Cairene) Egyptian Arabic—unlike Standard Arabic in which it represents a voiceless uvular stop. Therefore, in Egyptian Arabizi, the numeral 2 can represent either a Hamza or a qāf pronounced as a glottal stop.
Levantine Arabic
Moroccan Arabic
The use of ch to represent ش demonstrates the influence of French orthography on the romanization of Moroccan Arabic or Darija. French became the primary European language in Morocco as a result of French colonialism.
One of the characteristics of Franco-Arabic as it is used to transcribe Darija is the presence of long consonant clusters that are typically unorthodox in other languages. These clusters represent the deletion of short vowels and the syllabification of medial consonants in the phonology of Darija, a feature shared with and derived from Amazigh languages.
Gulf Arabic
Spoken along the Persian Gulf coasts of Kuwait, Bahrain, Qatar, Oman, UAE and eastern Saudi Arabia
Iraqi Arabic
Baghdadi Arabic
Palestinian Bedouin/Triangle Region Arabic
The use of ch to represent ك (kāf) indicates one of the Palestinian Arabic variant pronunciations of the letter in one of its subdialects, in which it is sometimes palatalized to [tʃ] (as in English "chip"). Where this palatalization appears in other dialects, the Arabic letter is typically respelled to reflect the pronunciation.
Sudanese Arabic
Chadian Arabic
Shuwa Arabic spoken in N'Djamena, Chad.
Criticism
The phenomenon of writing Arabic with these improvised chat alphabets has drawn sharp rebuke from a number of different segments of Arabic-speaking communities. While educators and members of the intelligentsia mourn the deterioration and degradation of the standard, literary, academic language, conservative Muslims, as well as Pan-Arabists and some Arab nationalists, view the Arabic Chat Alphabet as a detrimental form of Westernization. Arabic chat alphabets emerged amid a growing trend among Arab youth, from Morocco to Iraq, to incorporate former colonial languages—especially English and French—into Arabic through code switching or as a form of slang. These improvised chat alphabets are used to replace Arabic script, and this raises concerns regarding the preservation of the quality of the language.
See also
Arablish
Arabic alphabet
Varieties of Arabic
Arabic phonology
Arabic transliteration
Romanization of Syriac
Arabist
Fingilish, the same idea with Persian
l33t
Yamli, a tool for real time Arabic transliteration
Greeklish, a similar phenomenon in Greek
Maltese, a related standardized Semitic language written in Latin script
References
ASCII
Chat alphabet
Instant messaging
Nonstandard spelling
Chat alphabet
Internet slang | Arabic chat alphabet | [
"Technology"
] | 1,837 | [
"Instant messaging"
] |
2,265,543 | https://en.wikipedia.org/wiki/Virtual%20globe | A virtual globe is a three-dimensional (3D) software model or representation of Earth or another world. A virtual globe provides the user with the ability to freely move around in the virtual environment by changing the viewing angle and position. Compared to a conventional globe, virtual globes have the additional capability of representing many different views of the surface of Earth. These views may be of geographical features, man-made features such as roads and buildings, or abstract representations of demographic quantities such as population.
On November 20, 1997, Microsoft released an offline virtual globe in the form of Encarta Virtual Globe 98, followed by Cosmi's 3D World Atlas in 1999. The first widely publicized online virtual globes were NASA WorldWind (released in mid-2004) and Google Earth (mid-2005).
Types
Virtual globes may be used for study or navigation (by connecting to a GPS device) and their design varies considerably according to their purpose. Those wishing to portray a visually accurate representation of the Earth often use satellite image servers and are capable not only of rotation but also zooming and sometimes horizon tilting. Very often such virtual globes aim to provide as true a representation of the world as is possible, with worldwide coverage up to a very detailed level. When this is the case, the interface often has the option of providing simplified graphical overlays to highlight man-made features, since these are not necessarily obvious from a photographic aerial view. The other issue raised by such detail available is that of security, with some governments having raised concerns about the ease of access to detailed views of sensitive locations such as airports and military bases.
Another type of virtual globe exists whose aim is not the accurate representation of the planet, but instead a simplified graphical depiction. Most early computerized atlases were of this type and, while displaying less detail, these simplified interfaces are still widespread since they are faster to use because of the reduced graphics content and the speed with which the user can understand the display.
List of virtual globe software
As more and more high-resolution satellite imagery and aerial photography become accessible for free, many of the latest online virtual globes are built to fetch and display these images. They include:
ArcGIS Explorer, a lightweight client for ArcGIS Server, supports WMS and many other GIS file formats. Retired as of Oct 1, 2017.
ArcGIS Earth, a 3D application for viewing, editing and sharing GIS data. Supports WMS, KML, Shapefile, and other GIS file formats.
Bing Maps, 3D interface runs inside Internet Explorer and Firefox, and uses NASA Blue Marble: Next Generation.
Bhuvan is an India-specific virtual globe.
Earth3D, a program that visualizes the Earth in a real-time 3D view. It uses data from NASA, USGS, the CIA and the city of Osnabrück. Earth3D is free software (GPL).
EarthBrowser, an Adobe Flash/AIR-based virtual globe with real-time weather forecasts, earthquakes, volcanoes, and webcams.
Google Earth, satellite and aerial photos dataset (including commercial DigitalGlobe images) with international road dataset, the first popular virtual globe along with NASA World Wind.
MapJack is a Flash-based map covering areas in Canada, France, Latvia, Macau, Malaysia, Puerto Rico, Singapore, Sweden, Thailand, and the United States.
Marble, part of KDE, with data provided by OpenStreetMap, as well as NASA Blue Marble: Next Generation and others. Marble is free and open-source software (LGPL).
NASA World Wind, USGS topographic maps and several satellite and aerial image datasets, the first popular virtual globe along with Google Earth. World Wind is open-source software (NOSA).
NORC is a street view web service for Central and Eastern Europe.
OpenWebGlobe, a virtual globe SDK written in JavaScript using WebGL. OpenWebGlobe is free and open-source software (MIT).
WorldWide Telescope features an Earth mode with emphasis on data import/export, time-series support and a powerful tour authoring environment.
As well as the availability of satellite imagery, online public domain factual databases such as the CIA World Factbook have been incorporated into virtual globes.
History
In 1993, the German company ART+COM developed a first interactive virtual globe, the project Terravision, supported by Deutsche Post as a "networked virtual representation of the Earth based on satellite images, aerial shots, altitude data and architectural data".
The use of virtual globe software was widely popularized by (and may have been first described in) Neal Stephenson's famous science fiction novel Snow Crash. In the metaverse in Snow Crash, there is a piece of software called Earth made by the Central Intelligence Corporation (CIC). The CIC uses their virtual globe as a user interface for keeping track of all their geospatial data, including maps, architectural plans, weather data, and data from real-time satellite surveillance.
Virtual globes (along with all hypermedia and virtual reality software) are distant descendants of the Aspen Movie Map project, which pioneered the concept of using computers to simulate distant physical environments (though the Movie Map's scope was limited to the city of Aspen, Colorado).
Many of the functions of virtual globes were envisioned by Buckminster Fuller, who in 1962 proposed the creation of a Geoscope: a giant globe connected by computers to various databases. This would be used as an educational tool to display large-scale global patterns related to topics such as economics, geology, natural resource use, etc.
See also
Digital Earth
Geovisualization
Geoweb
Macroscope (science concept)
Orbiter
Planetarium software
Science On a Sphere
Terravision (computer program)
Terragen
References
External links
VirtualGlobes@Benneten – screenshots of many virtual globes
Atlases
Map types
Virtual reality
Geodesy | Virtual globe | [
"Mathematics"
] | 1,209 | [
"Applied mathematics",
"Geodesy"
] |
2,266,150 | https://en.wikipedia.org/wiki/Vinorelbine | Vinorelbine (NVB), sold under the brand name Navelbine among others, is a chemotherapy medication used to treat a number of types of cancer. This includes breast cancer and non-small cell lung cancer. It is given by injection into a vein or by mouth.
Common side effects include bone marrow suppression, pain at the site of injection, vomiting, feeling tired, numbness, and diarrhea. Other serious side effects include shortness of breath. Use during pregnancy may harm the baby. Vinorelbine is in the vinca alkaloid family of medications. It is believed to work by disrupting the normal function of microtubules and thereby stopping cell division.
Vinorelbine was approved for medical use in the United States in 1994. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Vinorelbine is approved for the treatment of non-small-cell lung cancer. It is used off-label for other cancers such as metastatic breast cancer and for aggressive fibromatosis (desmoid tumor). It is also active in rhabdomyosarcoma.
Side effects
Vinorelbine has a number of side-effects that can limit its use:
Chemotherapy-induced peripheral neuropathy (a progressive, enduring and often irreversible tingling numbness, intense pain, and hypersensitivity to cold, beginning in the hands and feet and sometimes involving the arms and legs), lowered resistance to infection, bruising or bleeding, anaemia, constipation, vomiting, diarrhea, nausea, tiredness and a general feeling of weakness (asthenia), and inflammation of the vein into which it was injected (phlebitis). Rarely, severe hyponatremia is seen.
Less common effects are hair loss and allergic reaction.
Pharmacology
The antitumor activity is due to inhibition of mitosis through interaction with tubulin.
History
Vinorelbine was invented by the pharmacist Pierre Potier and his team from the CNRS in France in the 1980s and was licensed to the oncology department of the Pierre Fabre Group. The drug was approved in France in 1989 under the brand name Navelbine for the treatment of non-small cell lung cancer. It gained approval to treat metastatic breast cancer in 1991. Vinorelbine received approval by the United States Food and Drug Administration (FDA) in December 1994 sponsored by Burroughs Wellcome Company. Pierre Fabre Group now markets Navelbine in the U.S., where the drug went generic in February 2003.
In most European countries, vinorelbine is approved to treat non-small cell lung cancer and breast cancer. In the United States it is approved only for non-small cell lung cancer.
Sources
The Madagascan periwinkle Catharanthus roseus L. is the source for a number of important natural products, including catharanthine and vindoline and the vinca alkaloids it produces from them: leurosine and the chemotherapy agents vinblastine and vincristine, all of which can be obtained from the plant. The newer semi-synthetic chemotherapeutic agent vinorelbine, which is used in the treatment of non-small-cell lung cancer, is not known to occur naturally. However, it can be prepared either from vindoline and catharanthine or from leurosine, in both cases via synthesis of anhydrovinblastine. The leurosine pathway uses the Nugent–RajanBabu reagent in a highly chemoselective de-oxygenation of leurosine. Anhydrovinblastine is then reacted sequentially with N-bromosuccinimide and trifluoroacetic acid followed by silver tetrafluoroborate to yield vinorelbine.
Oral formulation
An oral formulation has been marketed and registered in most European countries. It has similar efficacy to the intravenous formulation, but it avoids the venous toxicities of an infusion and is easier to take. The oral form is not approved in the United States or Australia.
References
External links
Mitotic inhibitors
Acetate esters
Cancer treatments
World Health Organization essential medicines
Spiro compounds
Wikipedia medicine articles ready to translate | Vinorelbine | [
"Chemistry"
] | 890 | [
"Organic compounds",
"Spiro compounds",
"Harmful chemical substances",
"Mitotic inhibitors"
] |
2,266,155 | https://en.wikipedia.org/wiki/Telescopic%20handler | A telescopic handler, also called a lull, telehandler, teleporter, reach forklift, or zoom boom, is a machine widely used in agriculture and industry. It is somewhat like a forklift but has a boom (telescopic cylinder), making it more a crane than a forklift, with the increased versatility of a single telescopic boom that can extend forwards and upwards from the vehicle. The boom can be fitted with different attachments, such as a bucket, pallet forks, muck grab, or winch.
History
The first telescopic handler was believed to have been manufactured by French company Sambron in 1957.
In 1971, Liner Construction Equipment of Hull launched the Giraffe 4WD, 4WS telehandler based on a design by Matbro, who had created a similar machine derived from their articulated forestry machines.
JCB launched their 2WD, rear-steer Loadall in October 1977. The JCB 520 was originally aimed at construction sites; the potential for agricultural uses soon followed. JCB sold 100,000 units by
Uses
In industry, the most common attachment for a telehandler is pallet forks and the most common application is to move loads to and from places unreachable for a conventional forklift. For example, telehandlers have the ability to remove palletised cargo from within a trailer and to place loads on rooftops and other high places. The latter application would otherwise require a crane, which is not always practical or time-efficient.
In agriculture the most common attachment for a telehandler are buckets or bucket grabs; again the most common application is to move loads to and from places unreachable for a 'conventional machine' which in this case is a wheeled loader or backhoe loader. For example, telehandlers have the ability to reach directly into a high-sided trailer or hopper. The latter application would otherwise require a loading ramp, conveyor, or something similar.
The telehandler can also work with a crane jib for lifting loads. Attachments on the market include dirt buckets, grain buckets, rotators, and power booms. Agricultural models can also be fitted with three-point linkage and power take-off.
The advantage of the telehandler is also its biggest limitation: as the boom extends or raises while bearing a load, it acts as a lever and causes the vehicle to become increasingly unstable, despite counterweights in the rear. This means that the lifting capacity quickly decreases as the working radius (the distance between the front of the wheels and the centre of the load) increases. When used as a loader, the single boom (rather than twin arms) is very highly loaded, and even with careful design it is a weakness. A machine may be able to safely lift only a small fraction of its rated capacity with the boom fully extended at a low boom angle, while the same machine may support most of its rated capacity with the boom raised to 70°. The operator is equipped with a load chart which helps determine whether a given task is possible, taking into account weight, boom angle and height. Failing this, most telehandlers now utilize a computer which uses sensors to monitor the vehicle and will warn the operator and/or cut off further control input if the limits of the vehicle are exceeded, the latter being a legal requirement in Europe controlled by EN 15000. Machines can also be equipped with front stabilizers which extend the lifting capability of the equipment while stationary. Machines that are fully stabilised with a rotary joint between upper and lower frames can be called mobile cranes; they can typically still use a bucket, are often referred to as 'Roto' machines, and may be considered a hybrid between a telehandler and a small crane.
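The trade-off between working radius and safe load can be illustrated with a toy lever model. The sketch below is a simplified illustration only; the rated-moment figure and the minimum-radius floor are invented, and real machines rely on manufacturer load charts and EN 15000-style load moment limiters rather than a formula like this.

```python
import math

# Toy lever model of telehandler capacity: the safe load is limited by the
# tipping moment about the front axle. All figures are illustrative only.

RATED_MOMENT_KG_M = 10_000  # hypothetical tipping-moment limit (kg * m)

def safe_load_kg(boom_length_m: float, boom_angle_deg: float) -> float:
    """Maximum safe load for a given boom extension and boom angle."""
    # Working radius: horizontal distance from the front wheels to the load.
    radius = boom_length_m * math.cos(math.radians(boom_angle_deg))
    radius = max(radius, 0.5)  # floor to avoid blow-up near a vertical boom
    return RATED_MOMENT_KG_M / radius

print(round(safe_load_kg(4.0, 10)))   # boom retracted and low: large load
print(round(safe_load_kg(12.0, 10)))  # boom extended and low: small load
print(round(safe_load_kg(12.0, 70)))  # boom extended but raised: larger again
```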
Operator licensing
In some jurisdictions, a license is required in order to operate a telehandler under law or regulations of a national or other jurisdictional authority.
For example, in Australia, a Gold Card can be obtained for telehandlers with a capacity of three tonnes or less with standard attachments, where the machine is operated from below. The Gold Card is issued by the Telescopic Handler Association of Australia (TSHA). It is not a legally required qualification; however, verbal instruction alone is not considered an appropriate training method, as it lacks evidence of competency. Competency training with evidence of learning and a written assessment is legally required in Australia.
In Victoria, Australia, a WorkSafe CN licence is a legally required licence for machines with a capacity of over three tonnes with standard attachments where the machine is operated from below.
Telehandlers that are fitted with elevated work platform attachments and operated from the basket are classified as elevated work platforms and require elevated work platform licences, such as the EWPA Yellow Card or WorkSafe WP Licence.
A WorkSafe C2 licence or higher may apply when using slewing-type telehandlers.
See also
Reach stacker
References
External links
Agricultural machinery
Engineering vehicles
Mobile cranes | Telescopic handler | [
"Engineering"
] | 1,034 | [
"Engineering vehicles"
] |
2,266,487 | https://en.wikipedia.org/wiki/Problem%20solving%20environment | A problem solving environment (PSE) is a completed, integrated and specialised computer software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding the problem resolution. A PSE may also assist users in formulating problem resolution. A PSE may also assist users in formulating problems, selecting algorithm, simulating numerical value and viewing and analysing results.
Purpose of PSE
Many PSEs were introduced in the 1990s. They use the language of the respective field and often employ modern graphical user interfaces. The goal is to make the software easy to use for specialists in fields other than computer science. PSEs are available for generic problems like data visualization or large systems of equations and for narrow fields of science or engineering like gas turbine design.
History
The first problem solving environments were released a few years after Fortran and Algol 60. At the time, people thought that such high-level systems would eliminate the need for professional programmers; surprisingly, however, PSEs were accepted, and scientists used them to write their own programs.
The Problem Solving Environment for Parallel Scientific Computation was introduced in 1960 as the first organised collection, with minor standardisation. In 1970, PSE research initially focused on providing a higher-level programming language than Fortran, and plotting-package libraries appeared. Library development continued, with the emergence of computational packages and graphical data-visualisation systems. By the 1990s, hypertext and point-and-click interaction had moved the field towards interoperability, and a "software parts" industry finally came into existence.
Over the past few decades, many PSEs have been developed to solve problems and to support users from different categories, including education, general programming, CSE software learning, job execution, and Grid/Cloud computing.
Examples of PSE
Grid-Based Numerical Optimisation
The shell software GOSPEL is an example of how a PSE can be designed for elastohydrodynamic lubrication (EHL) modelling using a Grid resource. With the PSE, one can visualise the optimisation progress as well as interact with other simulations.
The PSE parallelises and embeds many individual numerical calculations in an industrial serial optimisation code. It is built in NAG's IRIS Explorer package to solve EHL and parallelism problems, and it can use the gViz libraries to run all the communication between the PSE and the simulation. It also uses MPI (part of the NAG libraries), which gives significantly quicker and better solutions by combining the maximum levels of continuation.
Moreover, the system is designed to allow users to steer simulations using the visualised output, for example by homing in on local minima or layering additional detail around a region of interest while the simulation runs; the information produced at any stage can be visualised while still allowing the user to steer the simulation.
Grid-based PSEs for mobile devices
PSEs require a large amount of resources that strain even the most powerful computers of today. Translating PSEs into software that can be used on mobile devices is an important challenge facing programmers today.
Grid computing is seen as a solution to the resource issues of PSEs for mobile devices. This is made possible through a "brokering service". The service is started by an initiating device that sends the information the PSE needs to resolve the task. The brokering service then breaks this down into subtasks and distributes them to various subordinate devices that perform the work. The brokering necessitates an Active Agent Repository (AAR) and a Task Allocation Table (TAT), which both work to manage the subtasks. A Keep-Alive Server handles communication between the brokering service and the subordinate devices, relying on a lightweight client application installed on the participating mobile devices.
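As a concrete illustration of this brokering pattern, the sketch below registers subordinate devices and splits a workload across them. All class, method and field names here are hypothetical; the article does not specify the actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Broker:
    active_agents: list = field(default_factory=list)  # Active Agent Repository (AAR)
    task_table: dict = field(default_factory=dict)     # Task Allocation Table (TAT)

    def register(self, device_id: str) -> None:
        """A subordinate device announces itself via the keep-alive client."""
        self.active_agents.append(device_id)

    def submit(self, task_id: str, workload: list) -> None:
        """Split the workload round-robin across registered devices."""
        if not self.active_agents:
            raise RuntimeError("no subordinate devices registered")
        for i, subtask in enumerate(workload):
            device = self.active_agents[i % len(self.active_agents)]
            self.task_table.setdefault(device, []).append((task_id, subtask))

broker = Broker()
for dev in ("phone-a", "phone-b", "tablet-c"):
    broker.register(dev)
broker.submit("render-job", workload=list(range(7)))
print(broker.task_table)  # subtasks distributed across the three devices
```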
Security, transparency and dependability are issues that may arise when using the grid for mobile device-based PSEs.
Education Support
Network-based learning and e-learning have brought a revolution to education, but it is very difficult to collect education data and records of student activity. TSUNA-TASTE, developed by T. Teramoto, is a PSE that supports education and learning processes. The system may enable a new approach to e-learning by supporting teachers and students in computer-related education. It consists of four parts: student agents, an education support server, a database system, and a Web server. It makes e-learning more convenient, as information is easier to store and collect for students and teachers.
P-NCAS
P-NCAS, a computer-assisted parallel program generation support system, is a PSE that offers a new way to reduce the burden of parallel programming. Such support can reduce the chance that large computer software breaks down, limiting uncertainty and major accidents in society. Moreover, partial differential equation (PDE) problems can be solved by parallel programs generated with P-NCAS's support. P-NCAS employs the Single Program Multiple Data (SPMD) model and uses a decomposition method for the parallelisation. This enables users of P-NCAS to input problems described by PDEs, an algorithm, a discretisation scheme, etc., and to view and edit all details through the visualisation and editing windows. Finally, P-NCAS outputs a parallel program in C, together with documents recording everything that was input at the beginning.
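To illustrate the SPMD decomposition approach, the sketch below partitions a 1-D grid into contiguous blocks, one per process, as a generated parallel PDE code might. This is a schematic example under assumed process counts and grid sizes, not actual P-NCAS output.

```python
# Illustrative SPMD-style decomposition of a 1-D grid: every process runs
# the same program on its own block, exchanging one ghost point with each
# neighbour per time step.

def partition(n_points: int, n_procs: int):
    """Split n_points grid points into contiguous blocks, one per process."""
    base, extra = divmod(n_points, n_procs)
    blocks, start = [], 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)
        blocks.append((rank, start, start + size))  # [start, stop) owned by rank
        start += size
    return blocks

for rank, start, stop in partition(n_points=100, n_procs=4):
    print(f"rank {rank}: points {start}..{stop - 1}")
```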
Future Improvement
At first, solving 2-D EHL problems was difficult because of the expense and the limited computing power available. The development of parallel 2-D EHL codes and faster computers has now paved the way for 2-D EHL problem solving. Friction and lubricant data need a higher level of security given their sensitivity. Accounting for simulations may be difficult because they are run rapidly and in the thousands; this can be solved by a registration system or a 'directory'. Collaborative PSEs with multiple users will encounter difficulties tracking changes, especially which specific changes were made and when; this may also be solved with a directory of changes made.
Secondly, as a future improvement of the Grid-based PSEs for mobile devices, the group aims to generate new scenarios through manipulation of the available control variables. By changing those control variables, the simulation software can create scenarios that differ from each other, allowing closer scrutiny of the conditions in each scenario. Manipulation of three variables is expected to generate twelve different scenarios.
The variables of interest are network stability and device mobility, as these are expected to have the greatest impact on grid performance. The study measures performance using task completion time as the primary outcome.
PSE Park
As PSEs grow more complex, their need for computing resources has risen dramatically. Conversely, with PSE applications venturing into fields and environments of growing complexity, the creation of PSEs has become tedious and difficult.
Hirumichi Kobashi and his colleagues have designed a PSE meant to create other PSEs, dubbed a 'meta-PSE'. This is how PSE Park was born.
The Framework
The architecture of PSE Park emphasises flexibility and extensibility. These characteristics make it an attractive platform for varied levels of expertise, from entry-level users to developers.
PSE Park provides these through its repository of functions. The repository contains the modules required to build PSEs. Some of the most basic modules, called Cores, are used as the foundation of PSEs, while more complex modules are available for use by programmers. Users access PSE Park through a console; once a user is registered, he or she has access to the repository. A PIPE server is used as the mediator between the user and PSE Park: it grants access to modules and assembles the selected functions into a PSE.
Developers can develop functions, or even whole PSEs, for inclusion in the repository, and entry-level and expert users alike can access these pre-made PSEs for their own purposes. Given this architecture, PSE Park requires a cloud computing environment to support the enormous data sharing that occurs during PSE use and development.
The PIPE Server
The PIPE server differs from other servers in how it handles intermediate results. Since the PIPE server acts as the mediator in a meta-PSE, any results or variables generated by a core module are retrieved as global variables to be used by the next core, with the sequence or hierarchy defined by the user. In this way, same-name variables are revised to the new set of values.
Another important characteristic of the PIPE server is that it executes each module or core independently, which means the language of each module does not have to be the same as the others in the PSE. Modules are executed according to the defined hierarchy. This brings enormous flexibility for developers and users with varied programming backgrounds, and the modular format also means that existing PSEs can be extended and modified easily.
Cores
In order to be registered, a core must be fully defined. The input and output definitions allow the PIPE server to determine compatibility with other cores and modules; any lack of definition is flagged by the PIPE server as an incompatibility.
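As an illustration of how declared input and output definitions could drive such compatibility checks, consider the sketch below. The Core structure and field names are assumptions made for the example, not PSE Park's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    inputs: set   # names of values this core requires
    outputs: set  # names of values this core produces

def compatible(upstream: Core, downstream: Core) -> bool:
    """A downstream core is usable only if all its inputs are produced upstream."""
    return downstream.inputs <= upstream.outputs

mesher = Core("mesher", inputs={"geometry"}, outputs={"mesh"})
solver = Core("solver", inputs={"mesh"}, outputs={"field"})
viewer = Core("viewer", inputs={"field", "mesh"}, outputs=set())

print(compatible(mesher, solver))  # True: the solver's inputs are covered
print(compatible(solver, viewer))  # False: "mesh" is missing, so it is flagged
```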
Registration Engine and Console
The registration engine keeps track of all cores that may be used in PSE Park, and a history of use is also created. A core map may be developed to help users understand a core or module better. The console is the users' main interface with PSE Park; it is highly visual and diagrammatic, allowing users to better understand the linkages between the modules and cores of the PSEs they are working on.
See also
Virginia Tech
Prototype
Grid computing
Cloud computing
Mathematical optimisation
External links
PSE research
References
Computer systems
Problem solving | Problem solving environment | [
"Technology",
"Engineering"
] | 1,997 | [
"Computer science",
"Computers",
"Computer engineering",
"Computer systems"
] |
2,266,631 | https://en.wikipedia.org/wiki/Finger%20binary | Finger binary is a system for counting and displaying binary numbers on the fingers of either or both hands. Each finger represents one binary digit or bit. This allows counting from zero to 31 using the fingers of one hand, or 1023 using both: that is, up to 25−1 or 210−1 respectively.
Modern computers typically store values as some whole number of 8-bit bytes, making the fingers of both hands together equivalent to 1¼ bytes of storage—in contrast to less than half a byte when using ten fingers to count up to 10.
Mechanics
In the binary number system, each numerical digit has two possible states (0 or 1) and each successive digit represents an increasing power of two.
Note: what follows is only one of several possible schemes for assigning the values 1, 2, 4, 8, 16, etc. to the fingers, and not necessarily the best (see the illustrations below). The rightmost digit represents two to the zeroth power (i.e., it is the "ones digit"); the digit to its left represents two to the first power (the "twos digit"); the next digit to the left represents two to the second power (the "fours digit"); and so on. (The decimal number system is essentially the same, except that powers of ten are used: "ones digit", "tens digit", "hundreds digit", etc.)
It is possible to use anatomical digits to represent numerical digits by using a raised finger to represent a binary digit in the "1" state and a lowered finger to represent it in the "0" state. Each successive finger represents a higher power of two.
[Tables omitted: finger-to-value assignments with palms oriented toward the counter's face for the right hand only, the left hand only, and both hands, together with the alternate assignments with palms oriented away from the counter.]
The values of each raised finger are added together to arrive at a total number. In the one-handed version, all fingers raised is thus 31 (16 + 8 + 4 + 2 + 1), and all fingers lowered (a fist) is 0. In the two-handed system, all fingers raised is 1,023 (512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1) and two fists (no fingers raised) represents 0.
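A short sketch of this addition rule, assuming fingers are listed most-significant-first (one of several possible orderings, as noted above):

```python
# Value shown in finger binary: fingers are listed most-significant-first,
# with 1 for a raised finger and 0 for a lowered one.

def finger_value(fingers: list[int]) -> int:
    value = 0
    for bit in fingers:
        value = value * 2 + bit  # shift left one place, then add the new bit
    return value

print(finger_value([1, 1, 1, 1, 1]))   # one hand, all raised: 31
print(finger_value([1] * 10))          # two hands, all raised: 1023
print(finger_value([0, 0, 1, 0, 1]))   # 4 + 1 = 5
```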
It is also possible to have each hand represent an independent number between 0 and 31; this can be used to represent various types of paired numbers, such as month and day, X-Y coordinates, or sports scores (such as for table tennis or baseball). Showing the time as hours and minutes is possible using 10 fingers in 12-hour form, with the hour using four fingers (0–15) and the minutes using six fingers (0–59); representing the hour in 24-hour form (0–23) would require a fifth finger.
Examples
[Illustrations omitted: worked examples for the right hand, the left hand, and the left hand used in addition to the right.]
Negative numbers and non-integers
Just as fractional and negative numbers can be represented in binary, they can be represented in finger binary.
Negative numbers
Representing negative numbers is extremely simple, by using the leftmost finger as a sign bit: raised means the number is negative, in a sign-magnitude system. Anywhere between −511 and +511 can be represented this way, using two hands. Note that, in this system, both a positive and a negative zero may be represented.
If a convention were reached on palm up/palm down or fingers pointing up/down representing positive/negative, you could maintain 2¹⁰ − 1 in both positive and negative numbers (−1023 to +1023, with positive and negative zero still represented).
Fractions
Dyadic fractions
Fractions can be stored natively in a binary format by having each finger represent a fractional power of two: 1/2, 1/4, 1/8, 1/16, … (These are known as dyadic fractions.)
[Tables omitted: fractional finger-value assignments for the left hand only (1/2 down to 1/32) and for two hands (1/2 down to 1/1024).]
The total is calculated by adding all the values in the same way as regular (non-fractional) finger binary, then dividing by the largest fractional power being used (32 for one-handed fractional binary, 1024 for two-handed), and simplifying the fraction as necessary.
For example, with thumb and index finger raised on the left hand and no fingers raised on the right hand, this is (512 + 256)/1024 = 768/1024 = 3/4. If using only one hand (left or right), it would be (16 + 8)/32 = 24/32 = 3/4 also.
The simplification process can itself be greatly simplified by performing a bit shift operation: all digits to the right of the rightmost raised finger (i.e., all trailing zeros) are discarded and the rightmost raised finger is treated as the ones digit. The digits are added together using their now-shifted values to determine the numerator and the rightmost finger's original value is used to determine the denominator.
For instance, if the thumb and index finger on the left hand are the only raised digits, the rightmost raised finger (the index finger) becomes "1". The thumb, to its immediate left, is now the 2s digit; added together, they equal 3. The index finger's original value (1/4) determines the denominator: the result is 3/4.
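The shift-based simplification is equivalent to reducing the fraction numerator/2ⁿ to lowest terms, which Python's Fraction type does automatically. A minimal sketch, where the first listed finger represents 1/2:

```python
from fractions import Fraction

# Dyadic finger fractions: the first listed finger is 1/2, the next 1/4,
# and so on down to 1/32 for one hand. Fraction reduces automatically,
# matching the bit-shift simplification described above.

def finger_fraction(fingers: list[int]) -> Fraction:
    numerator = 0
    for bit in fingers:
        numerator = numerator * 2 + bit
    return Fraction(numerator, 2 ** len(fingers))

# Thumb and index raised (1/2 + 1/4) on one hand:
print(finger_fraction([1, 1, 0, 0, 0]))  # 3/4, matching the worked example
```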
Rational numbers
Combined integer and fractional values (i.e., rational numbers) can be represented by setting a radix point somewhere between two fingers (for instance, between the left and right pinkies). All digits to the left of the radix point are integers; those to the right are fractional.
Decimal fractions and vulgar fractions
Dyadic fractions, explained above, have limited use in a society based around decimal figures. A simple non-dyadic fraction such as 1/3 can be approximated as 341/1024 (0.3330078125), but the conversion between dyadic and decimal (0.333) or vulgar (1/3) forms is complicated.
Instead, either decimal or vulgar fractions can be represented natively in finger binary. Decimal fractions can be represented by using regular integer binary methods and dividing the result by 10, 100, 1000, or some other power of ten. Numbers between 0 and 102.3, 10.23, 1.023, etc. can be represented this way, in increments of 0.1, 0.01, 0.001, etc.
Vulgar fractions can be represented by using one hand to represent the numerator and one hand to represent the denominator; a spectrum of rational numbers can be represented this way, ranging from 1/31 to 31/1 (as well as 0).
Finger ternary
In theory, it is possible to use other positions of the fingers to represent more than two states (0 and 1); for instance, a ternary numeral system (base 3) could be used by having a fully raised finger represent 2, a fully lowered finger represent 0, and a "curled" (half-lowered) finger represent 1. This would make it possible to count up to 242 (3⁵ − 1) on one hand or 59,048 (3¹⁰ − 1) on two hands. In practice, however, many people find it difficult to hold all fingers independently (especially the middle and ring fingers) in more than two distinct positions.
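The same positional computation carries over to base 3. A small sketch, encoding lowered as 0, curled as 1, and fully raised as 2:

```python
# Finger ternary: each finger holds one of three states, so n fingers can
# count from 0 up to 3**n - 1.

def ternary_value(fingers: list[int]) -> int:
    value = 0
    for state in fingers:
        if state not in (0, 1, 2):
            raise ValueError("each finger must be 0 (down), 1 (curled) or 2 (up)")
        value = value * 3 + state
    return value

print(ternary_value([2] * 5))   # one hand, all raised: 242
print(ternary_value([2] * 10))  # two hands, all raised: 59048
```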
See also
Chisanbop
References
External links
Binary Counting
Finger-counting
Elementary arithmetic
Binary arithmetic | Finger binary | [
"Mathematics"
] | 1,537 | [
"Elementary arithmetic",
"Elementary mathematics",
"Numeral systems",
"Arithmetic",
"Finger-counting",
"Binary arithmetic"
] |
2,266,648 | https://en.wikipedia.org/wiki/Complementary%20code%20keying | Complementary code keying (CCK) is a modulation scheme used with wireless networks (WLANs) that employ the IEEE 802.11b specification. In 1999, CCK was adopted to supplement the Barker code in wireless digital networks to achieve data rate higher than 2 Mbit/s at the expense of shorter distance. This is due to the shorter chipping sequence in CCK (8 bits versus 11 bits in Barker code) that means less spreading to obtain higher data rate but more susceptible to narrowband interference resulting in shorter radio transmission range. Beside shorter chipping sequence, CCK also has more chipping sequences to encode more bits (4 chipping sequences at 5.5 Mbit/s and 8 chipping sequences at 11 Mbit/s) increasing the data rate even further. The Barker code, however, only has a single chipping sequence.
The complementary codes first discussed by Golay were pairs of binary complementary codes. He noted that when the elements of a code of length N are either −1 or 1, it follows immediately from their definition that the sum of their respective autocorrelation sequences is zero at all points except for the zero shift, where it equals K×N (K being the number of code words in the set).
CCK is a variation and improvement on M-ary Orthogonal Keying and uses 'polyphase complementary codes'. They were developed by Lucent Technologies and Harris Semiconductor and were adopted by the 802.11 working group in 1998. CCK is the form of modulation used when 802.11b operates at either 5.5 or 11 Mbit/s. CCK was selected over competing modulation techniques as it used approximately the same bandwidth and could use the same preamble and header as pre-existing 1 and 2 Mbit/s wireless networks and thus facilitated interoperability.
Polyphase complementary codes, first proposed by Sivaswamy in 1978, are codes in which each element is a complex number of unit magnitude and arbitrary phase; for 802.11b specifically, each element is one of [1, −1, j, −j].
Networks using the 802.11g specification employ CCK when operating at 802.11b speeds.
Mathematical description
The CCK modulation used by 802.11b transmits data in symbols of eight chips, where each chip is a complex QPSK bit-pair at a chip rate of 11 Mchip/s. In the 5.5 Mbit/s and 11 Mbit/s modes, respectively, 4 and 8 bits are modulated onto the eight chips of the symbol

(c_0, \ldots, c_7) = \left( e^{j(\varphi_1+\varphi_2+\varphi_3+\varphi_4)},\; e^{j(\varphi_1+\varphi_3+\varphi_4)},\; e^{j(\varphi_1+\varphi_2+\varphi_4)},\; -e^{j(\varphi_1+\varphi_4)},\; e^{j(\varphi_1+\varphi_2+\varphi_3)},\; e^{j(\varphi_1+\varphi_3)},\; -e^{j(\varphi_1+\varphi_2)},\; e^{j\varphi_1} \right),

where \varphi_1, \varphi_2, \varphi_3 and \varphi_4 are determined by the bits being modulated.

In other words, the phase change \varphi_1 is applied to every chip, \varphi_2 is applied to all even code chips (starting with c_0), \varphi_3 is applied to the first two of every four chips, and \varphi_4 is applied to the first four of the eight chips. Therefore, it can also be viewed as a form of generalized Hadamard transform encoding.
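The chip construction can be written out directly. In the sketch below the four phases are passed in as radians; in practice they are chosen from QPSK angles according to the data bits, and the example phase set here is arbitrary.

```python
import cmath
from math import pi

def cck_codeword(p1, p2, p3, p4):
    """Return the eight CCK chips for phases p1..p4 (radians)."""
    e = lambda phase: cmath.exp(1j * phase)
    return [
        e(p1 + p2 + p3 + p4),  # c0
        e(p1 + p3 + p4),       # c1
        e(p1 + p2 + p4),       # c2
        -e(p1 + p4),           # c3: sign inversion per the codeword definition
        e(p1 + p2 + p3),       # c4
        e(p1 + p3),            # c5
        -e(p1 + p2),           # c6: sign inversion
        e(p1),                 # c7
    ]

chips = cck_codeword(0, pi / 2, pi, 3 * pi / 2)  # arbitrary QPSK phase set
print([complex(round(c.real, 3), round(c.imag, 3)) for c in chips])
```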
References
IEEE Std 802.11b-1999, §18.4.6.5
Quantized radio modulation modes
Wireless networking
IEEE 802.11 | Complementary code keying | [
"Technology",
"Engineering"
] | 646 | [
"Wireless networking",
"Computer networks engineering"
] |
2,267,331 | https://en.wikipedia.org/wiki/Perfect%20mixing | Perfect mixing is a term heavily used in relation to the definition of models that predict the behavior of chemical reactors. Perfect mixing assumes that there are no spatial gradients in a given physical envelope, such as:
concentration (with respect to any chemical species)
temperature
chemical potential
catalytic activity
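As a worked illustration (standard textbook material, not drawn from the article itself), the perfect-mixing assumption is what reduces the species balance of a continuous stirred-tank reactor (CSTR) to a single ordinary differential equation in the uniform concentration C:

```latex
% CSTR species balance under perfect mixing:
% accumulation = inflow - outflow + generation by reaction
V \frac{\mathrm{d}C}{\mathrm{d}t} = Q\,C_{\mathrm{in}} - Q\,C + r(C)\,V
% Because the contents are uniform, the outlet concentration equals the
% in-vessel concentration C, and no spatial coordinates appear.
```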
Physical chemistry | Perfect mixing | [
"Physics",
"Chemistry"
] | 59 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"Physical chemistry stubs",
"nan"
] |
2,267,488 | https://en.wikipedia.org/wiki/Mule%20%28smuggling%29 | A mule or courier is someone who personally smuggles contraband across a border (as opposed to sending by mail, etc.) for a smuggling organization. The organizers employ mules to reduce the risk of getting caught themselves. Methods of smuggling include hiding the goods in vehicles or carried items, attaching them to one's body, or using the body as a container.
In the case of transporting illegal drugs, the term drug mule applies. Other slang terms include Kinder Surprise and Easter Egg. Small-scale operations, in which one courier carries one piece or a very small quantity, are sometimes called the ant trade.
Techniques
Concealment
Methods of smuggling include hiding the goods in a large vehicle, luggage, or clothes. In a vehicle, the contraband is hidden in secret compartments.
Sometimes the goods are hidden in the bag or vehicle of an innocent person, who does not know about the contraband, for the purpose of retrieving the goods elsewhere.
Some contraband is legal to possess but is subject to taxes or other import restrictions, such as second-hand clothes and computers, and the purpose of the smuggling is to get around these restrictions. In this case, smuggling may be done in plain sight, in smaller quantities, so that a suitcase full of used clothes or a new computer can be passed off as a personal possession rather than an importing business.
Body packing
The practice of transporting goods outside or inside of the body is called body packing. This is done by a person usually called a mule or bait. The contraband is attached to the outside of the body using adhesive tape, glue, or straps, often in such places as between the cheeks of the buttocks or between rolls of fat. Other inconspicuous places, like the soles of cut out shoes, inside belts, or the rim of a hat, were used more often prior to the early 1990s. Due to increased airport security the "body packing" method is rarely used any more.
Some narcotics-trafficking organizations, such as the Mexican cartels, will purposely send one or two people with drugs on the outside of their body to be caught, so that the authorities are preoccupied while dozens of mules pass by undetected with drugs inside their body. However, even these diversionary tactics are becoming less prevalent as airport security increases.
Swallowing
This is often done using a mule's gastrointestinal tract or other body cavities as containers. Swallowing has been used for the transportation of heroin, cocaine, and MDMA (ecstasy). A swallower typically fills tiny balloons with small quantities of a drug. The balloons may be made with multilayered condoms, fingers of latex gloves, or more sophisticated hollow pellets. One smuggling method involves swallowing the balloons, which are recovered later from the excreted feces. Alternatively, the balloons may be hidden in other natural or artificial body cavities – such as rectum, colostomy, vagina, and mouth – although this method is far more vulnerable to body cavity searches. A drug mule may swallow dozens upon dozens of balloons. The swallower then attempts to cross international borders, excrete the balloons, and sell the drugs.
It is most common for the swallower to be making the trip on behalf of a drug lord or drug dealer. Swallowers are often impoverished and agree to transport the drugs in exchange for money or other favors. In fewer cases, the drug dealers can attempt extortion against people by threatening physical harm against friends or family, but the more common practice is for swallowers to willingly accept the job in exchange for big payoffs. As reported in Lost Rights by James Bovard: "Nigerian drug lords have employed an army of 'swallowers', those who will swallow as many as 150 balloons and smuggle drugs into the United States. Given the per capita yearly income of Nigeria is $2,100, Nigerians can collect as much as $15,000 per trip." Swallowers have been apprehended from a variety of age groups, including adults, teens, and children.
Detection and medical treatment
Routine detection of the smuggled packets is extremely difficult, and many cases come to light because a packet has ruptured or because of intestinal obstruction. Unruptured packets may sometimes be detected by rectal or vaginal examination, but the only reliable way is by X-ray of the abdomen. Hashish appears denser than stool, cocaine is approximately the same density as stool, while heroin looks like air.
An increasingly popular type of swallowing involves having the drug in the form of liquid-filled balloons or condoms/packages. These are impossible to detect unless the airport has high-sensitivity X-Ray equipment, as a liquid mixture of water and the drug will most likely not be detected using a standard X-Ray machine. Most of the major airports in Europe, Canada, and the US have the more sensitive machines.
In most cases, it is only necessary to wait for the packets to pass normally, but if a packet ruptures or if there is intestinal obstruction, then it may be necessary to operate and surgically remove the packets. Oil-based laxatives should never be used, as they can weaken the latex of condoms and cause packets to rupture. Emetics like syrup of ipecac, enemas, and endoscopic retrieval all carry a risk of packet rupture and should not be used. Repeat imaging is only necessary if the mule does not know the packet count.
Ruptured packets can be fatal and often require treatment as for a drug overdose and may require admission to an intensive care unit. Body packers are not always reliable sources of information about the contents of the packages (either because of fears about information being passed on to law enforcement agencies or because the mule genuinely does not know). Urine toxicology may be necessary to determine what drugs are being carried and what antidotes are needed.
International incidents
China
Some mobile phones and electronics are available for less in Hong Kong, one of China's Special Administrative Regions where the tax laws are relaxed. Mules employed by smugglers have been found with devices strapped to their bodies in an effort to smuggle them across the border from Hong Kong to Shenzhen. According to Customs Law of China and Smuggling Penalties, a person shall be subject to a criminal charge if found smuggling small quantities of goods three times in one year. The maximum jail sentence is three years.
United States
The U.S. Supreme Court dealt with body packing in United States v. Montoya De Hernandez. In Hernandez, a woman attempted to smuggle 88 balloons of cocaine in her gastrointestinal tract. She had been detained for over 16 hours by customs inspectors before she finally passed some of the balloons. She was being held because her abdomen was noticeably swollen (she claimed to be pregnant), and a search of her body had revealed that she was wearing two pairs of elastic underpants and had lined her crotch area with paper towels. This is done because balloon swallowing makes bowel movements difficult to control. The woman claimed her Fourth Amendment rights had been violated, but the court found in favor of the border authorities.
With regard to traffic from South America to the US, the US Drug Enforcement Administration reports: "Unlike cocaine, heroin is often smuggled by people who swallow large numbers of small capsules (50–90), allowing them to transport up to 1.5 kilograms of heroin."
United Kingdom
In 2003, over 50% of foreign female prisoners in UK jails were drug mules from Jamaica. Nigerian women make a large contribution to the remaining figure.
In all, around 18% of the UK's female jail population are foreigners, 60% of whom are serving sentences for drug-related offences – most of them drug mules.
See also
Drug Enforcement Administration
Illegal drug trade in Colombia
Money mule
U.S. Immigration and Customs Enforcement (ICE)
United States Border Patrol
War on Drugs
References
Smuggling
Drug control law
Illegal drug trade techniques | Mule (smuggling) | [
"Chemistry"
] | 1,623 | [
"Drug control law",
"Regulation of chemicals"
] |
2,267,622 | https://en.wikipedia.org/wiki/Victor%20Veselago | Victor Georgievich Veselago (; 13 June 1929 – 15 September 2018) was a Soviet Russian physicist, doctor of physical and mathematical sciences, and a university professor. In 1967, he was the first to publish a theoretical analysis of materials with negative permittivity, ε, and permeability, μ.
He published his seminal work in a paper entitled "The Electrodynamics of Substances with Simultaneously Negative Values of ε and μ".
It was first published in Russian (1967), and was later translated into English (1968). His published paper was key to the advancement of physics research in electrodynamics and optics; as of February 2024 it had been cited 4,118 times according to Crossref and 15,378 times according to Google Scholar.
He received awards and continued to contribute to electrodynamics throughout his career.
Background
In his senior years of high school he was an avid ham radio amateur. This hobby sparked an interest in the workings of electricity and, more generally, in physics. Veselago enrolled in the Physico-Technical Department of M.V. Lomonosov Moscow State University, which had only recently opened at the time. He studied there for four years; these university years were the happiest time of his life.
Professor Mark Yefremovich Zhabotinsky supervised Veselago's project for his graduation diploma and helped him build a foundation in radio electronics and electrodynamics. Reading the popular book "What is radio?" had drawn Veselago into amateur ham radio, and he went on to study under the book's author, Professor Semen Emmanuilovich Khaikin, for three summers at the P.N. Lebedev FIAN Radioastronomy Station in Crimea. He also studied under Professor Sergei Mikhailovich Rytov, corresponding member of the USSR Academy of Sciences, who lectured on the theory of oscillations. These three professors had a notable impact on Veselago.
It appears that the most significant event of his career, and the most important moment in his life, was when he realized that materials with both negative permittivity and negative permeability are possible.
He was also on the advisory board of the peer reviewed journal Metamaterials, along with a number of other notable board members who have significantly contributed to metamaterial research. The journal was first published in March, 2007.
In 2009, Victor Veselago won the C.E.K. Mees Medal from the Optical Society of America (OSA). The recipient is awarded this medal because he or she "exemplifies the thought that 'optics transcends all boundaries,' interdisciplinary and international alike." He was also a Fellow of OSA.
Results and importance of first published work
His first paper was "The Electrodynamics of Substances with Simultaneously Negative Values of ε and μ". Up to this point, the refractive index had traditionally been regarded as taking only positive values. In this paper he was able to show that the refractive index may also be negative, hypothesizing that negative refraction can occur if both the (electric) permittivity and the magnetic permeability of a material are negative. This prediction was confirmed 33 years later, when David Smith et al. created a composite material with a negative refractive index. Veselago also predicted that a flat plate of such a material would exhibit the focusing properties of a curved lens.
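The sign argument behind this hypothesis can be stated compactly: the refractive index satisfies n² = εᵣμᵣ, so a real index exists when both relative parameters are positive or both are negative, and the standard causality argument selects the negative root in the latter case:

```latex
n^2 = \varepsilon_r \mu_r ,
\qquad
n = -\sqrt{\varepsilon_r \mu_r}
\quad \text{for } \varepsilon_r < 0 \text{ and } \mu_r < 0 .
```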
Sir John Pendry demonstrated this prediction in the lab and noted greatly improved optical resolution. This has been named the Veselago lens.
After Smith's and Pendry's accomplishments with metamaterials, Veselago realized that the most important contribution of his original paper is not that a composite material can be designed to produce a negative refraction, but that a composite material can be designed to produce any value for permittivity and permeability. At least a part of his research goals was then to critically reconsider all formulas of classical electrodynamics that involve permittivity, permeability or refractive index. The fact that prior research is based on positive values for these parameters leads to erroneous solutions when negative values are considered or researched. He stated that many of these formulas need to be corrected.
Veselago perceived that the next big breakthrough with metamaterials will be the fabrication of transparent low-absorption metamaterials with negative refraction in the visible spectrum range.
Education
Victor Veselago attended Moscow University and graduated from there in 1952. He received his PhD in 1959 for investigation of molecular spectra with radiospectropy. He later received a Doctor of Science degree in Solid State Physics in 1974 for his investigation of solid states in magnetic fields. His doctorate and Doctor of Science degree were both achieved at the P.N. Lebedev Physical Institute, where he worked (see Career section below).
Career
After graduating Moscow University in 1952, he went to work with the P.N. Lebedev Physical Institute of the Russian Academy of Sciences in Moscow. He was there from 1952 to 1983. In 1983 he became Head of Laboratory of Magnetic Materials in the Lebedev Physical Institute.
In 1980 he became a professor of applied physics for the Moscow Institute of Physics and Technology. Moreover, he was noted to have been mainly interested in the sciences of magnetism, solid-state physics, and electrodynamics.
Besides his notable work establishing and publishing the theory of negative refraction in electrodynamics from 1966 to 1972, he was a winner of the State Prize for Science of the USSR (1976), a winner of the academician V.A. Fock prize (2004), and an Honored Scientist of the Russian Federation (2002). In 2011, Professor Victor G. Veselago was nominated for the Nobel Prize. In 2007, he was actively involved as an expert for the Russian Foundation for Fundamental Research and the Russian Foundation for Humanitarian Research, and was the vice-chairman of the physics department of the Supreme Attestation Committee of Russia (VAK). He was a founder and vice-editor of the electronic Russian scientific journal "Исследовано в России" ("Investigated in Russia").
V. Veselago was married and had three daughters and one son. His favorite animal was a female cat named Fifa. His notable hobby was real railways, rather than model sets.
See also
Metamaterials
Negative index metamaterials
Photonic crystal
C.E.K. Mees Medal (2009 OSA recipient)
References
Further reading
"Electrodynamics of materials with negative index of refraction''" by V.G. Veselago. 2003.
Moscow Institute of Physics and Technology alumni
Academic staff of the Moscow Institute of Physics and Technology
1929 births
2018 deaths
Metamaterials scientists
Soviet physicists
20th-century Russian physicists
21st-century Russian physicists
Russian scientists | Victor Veselago | [
"Materials_science"
] | 1,456 | [
"Metamaterials scientists",
"Metamaterials"
] |
2,267,894 | https://en.wikipedia.org/wiki/Nemko | Norges Elektriske Materiellkontroll (NEMKO) is a Norwegian private organization that supervises safety testing for electrical equipment manufacturing.
The Nemko Group offers testing, inspection and certification services concerning products, machinery, installations and systems worldwide.
History
The original NEMKO was established in 1933 as an institution for mandatory safety testing and national approval of electrical equipment marketed and sold in Norway for connection to the public utility network.
Later, testing of radio interference requirements became another part of the approval regime.
In 1990, as Norway entered into the European Economic Area agreement, European Community Directives for product safety were adopted, and the traditional mandatory approval scheme was abandoned. At this stage, NEMKO was transformed into an independent, self-owned foundation, with a council of representatives from different interest groups (industry and trade organizations, consumer associations, utility companies, etc.) as the highest level of supervision. At the same time, the foundation established and became the sole owner of Nemko AS, which constitutes the central operating company and is responsible for what is today known as the Nemko Group.
Since 1992, both the scope of services and the global presence of Nemko have been greatly expanded.
See also
Certification mark
References
External links
A brief history of Nemko
Business organisations based in Norway
Certification marks
Electrical safety standards organizations
Product-testing organizations
Product certification | Nemko | [
"Mathematics"
] | 277 | [
"Symbols",
"Certification marks"
] |
2,267,976 | https://en.wikipedia.org/wiki/ETL%20SEMKO | ETL SEMKO (formerly Electrical Testing Laboratory) is a division of Intertek Group plc (LSE: ITRK) which is based in London. It specializes in electrical product safety testing, EMC testing, and benchmark performance testing. ETL SEMKO operates more than 30 offices and laboratories on six continents. SEMKO (Svenska Elektriska Materielkontrollanstalten "The Swedish Electric Equipment Control Office") was, until 1990, the body responsible for testing and certifying electric appliances in Sweden. The "S" mark was mandatory for products sold in Sweden until the common European CE mark was adopted prior to Sweden's accession to the European Union.
See also
Product certification
Canadian Standards Association
CE mark
Certification mark
Underwriters Laboratories
References
External links
Intertek - Electrical Safety Testing
Certification marks
Electrical safety standards organizations
Product-testing organizations | ETL SEMKO | [
"Mathematics"
] | 178 | [
"Symbols",
"Certification marks"
] |
2,267,996 | https://en.wikipedia.org/wiki/PLGA | PLGA, PLG, or poly(lactic-co-glycolic) acid (CAS: ) is a copolymer which is used in a host of Food and Drug Administration (FDA) approved therapeutic devices, owing to its biodegradability and biocompatibility. PLGA is synthesized by means of ring-opening co-polymerization of two different monomers, the cyclic dimers (1,4-dioxane-2,5-diones) of glycolic acid and lactic acid. Polymers can be synthesized as either random or block copolymers thereby imparting additional polymer properties. Common catalysts used in the preparation of this polymer include tin(II) 2-ethylhexanoate, tin(II) alkoxides, or aluminum isopropoxide. During polymerization, successive monomeric units (of glycolic or lactic acid) are linked together in PLGA by ester linkages, thus yielding a linear, aliphatic polyester as a product.
Copolymer
Depending on the ratio of lactide to glycolide used for the polymerization, different forms of PLGA can be obtained: these are usually identified in regard to the molar ratio of the monomers used (e.g. PLGA 75:25 identifies a copolymer whose composition is 75% lactic acid and 25% glycolic acid). The crystallinity of PLGAs will vary from fully amorphous to fully crystalline depending on block structure and molar ratio. PLGAs typically show a glass transition temperature in the range of 40-60 °C. PLGA can be dissolved by a wide range of solvents, depending on composition. Higher lactide polymers can be dissolved using chlorinated solvents whereas higher glycolide materials will require the use of fluorinated solvents such as HFIP.
PLGA degrades by hydrolysis of its ester linkages in the presence of water. It has been shown that the time required for degradation of PLGA is related to the monomer ratio used in production: the higher the content of glycolide units, the shorter the time required for degradation, compared with predominantly lactide materials. An exception to this rule is the copolymer with a 50:50 monomer ratio, which exhibits the fastest degradation (about two months). In addition, polymers that are end-capped with esters (as opposed to the free carboxylic acid) demonstrate longer degradation half-lives. This flexibility in degradation has made it convenient for the fabrication of many medical devices, such as grafts, sutures, implants, prosthetic devices, surgical sealant films, and micro- and nanoparticles.
PLGA undergoes hydrolysis in the body to produce the original monomers: lactic acid and glycolic acid. Under normal physiological conditions, these two monomers are by-products of various metabolic pathways in the body. Lactic acid is metabolized in the tricarboxylic acid cycle and eliminated as carbon dioxide and water; glycolic acid is metabolized in the same way and is also excreted through the kidney. Metabolism of glycolic acid produces small amounts of toxic oxalic acid, though the amounts produced in typical applications are minuscule, and there is minimal systemic toxicity associated with using PLGA for biomaterial applications. However, it has been reported that the acidic degradation of PLGA reduces the local pH low enough to create an autocatalytic environment; the pH inside a microsphere has been shown to become as acidic as pH 1.5.
Biocompatibility
Generally, PLGA is considered to be quite biocompatible. Its high biocompatibility derives from its composition: lactic and glycolic acid can be produced by fermentation of sugars, making them eco-friendly and minimally reactive in the body. PLGA also degrades into non-toxic, non-reactive products, which makes it quite useful for various medical and pharmaceutical applications.
The biocompatibility of PLGA has been tested both in vivo and in vitro. The biocompatibility of this polymer is generally determined by the products it degrades into, as well as by the rate of degradation into those products. PLGA is degraded by the enzyme esterase into lactic acid and glycolic acid, which then enter the Krebs cycle and are degraded into carbon dioxide (CO2) and water (H2O). These byproducts are then removed from the body through cellular respiration and the digestive process.
While the byproducts usually do not accumulate in the body, there are instances where these byproducts (lactic and glycolic acid) can be dangerous to the body when accumulated in high local concentrations. There can also be small pieces of the polymers as the polymer degrades, causing an immune response by macrophages. These adverse effects can be reduced by using lower concentrations of the polymer, so that it gets naturally released throughout the body.
Another consideration regarding PLGA biocompatibility is the location at which the polymer is implanted or placed in the body, since the immune response can differ depending on the site. For example, in drug delivery systems (DDS), PLGA and PLA implants with a high surface area and a low injection volume can increase the chance of an immune response as the polymers degrade in the body.
Biodegradability
The biodegradability of PLGA makes it useful for many medical applications. PLGA undergoes bulk degradation, in which water penetrates throughout the polymer matrix, allowing degradation to occur uniformly throughout the whole polymer. A 75:25 lactide-to-glycolide PLGA can be made into microspheres that degrade via this bulk erosion.
Another injectable form of PLGA was developed as an eroding system; this form is used in Lupron Depot. To achieve this, PLGA is mixed with an organic, water-miscible solvent approved by the Food and Drug Administration (FDA) and with the drug of choice to create a homogeneous solution or suspension. When this mixture is injected, the solvent is replaced by water and the PLGA solidifies due to its water insolubility; the drug is then slowly released from the depot. A problem that may occur during the initial injection is that the drug may be released in a quick burst instead of gradually.
Examples
Specific examples of PLGA's use include:
Synthetic Barrier Membrane by Powerbone: This device is a resorbable synthetic membrane that acts as an alternative to polytetrafluoroethylene (PTFE), a synthetic polymer often used in dental implants and many other applications. The synthetic barrier membrane is used specifically in dental implants and for guided tissue regeneration (GTR) as well as guided bone regeneration (GBR). Some such membranes are biodegradable while others are not; non-biodegradable membranes are typically associated with more surgical complications. In general, these membranes are important for providing biocompatibility, biosafety, barrier function, and mechanical support to the implant. They are also typically bioactive, promoting the regeneration of tissues around the site of implantation.
Lupron Depot: This is a drug delivery device that helps treat prostate cancer and has been used to treat other types of similar cancers. It is also known as leuprorelin or leuprolide. PLGA is used as a key component in this drug, in the form of microparticles to deliver the drug into the body over a period of 1 week to 6 months. This drug is typically used as an alternative to radiation therapy, and is considered to be quite effective as it reduces the levels of testosterone in the body, slowing the effects of the cancer. There are many side effects of this drug, including muscle loss, hot flashes, fatigue, osteoporosis, growth of breast tissue, and many others.
Prophylactic delivery: This refers to preventative healthcare that is meant to prevent infections or other illnesses. One case of prophylactic delivery involving PLGA is for the antibiotic vancomycin, which is typically injected after brain surgery to prevent infections from bacteria including Staphylococcus aureus.
See also
Polycaprolactone
Polyglycolide
Polymer-drug conjugates
Polylactic acid
Poly-3-hydroxybutyrate
References
External links
Copolymers
Synthetic fibers
Biodegradable plastics
Polyesters | PLGA | [
"Chemistry"
] | 1,817 | [
"Synthetic materials",
"Synthetic fibers"
] |
2,268,024 | https://en.wikipedia.org/wiki/Smart%20message | Smart message is a communications protocol designed by Intel and Nokia by which various software upgrades—including ringtones—can be made "over the air", through the wireless connection.
Smart Messaging is essentially a special type of short message, with its own prefixes and codes, that allows the phone to recognize the message not as a text message for the user's attention but as a "functional" message to be treated as a ringtone, a screen logo, or in some cases even a business card or group graphic that can be used to identify who is calling.
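To make the mechanism concrete, here is a minimal Python sketch (not part of the original protocol documentation) of how such a "functional" message is tagged: implementations place a 16-bit application port in the SMS User Data Header (UDH), as defined in 3GPP TS 23.040, and the receiving phone routes the payload by destination port. The port numbers below are the values commonly cited for Nokia Smart Messaging and should be treated as assumptions here.

# Destination ports commonly cited for Nokia Smart Messaging payload types
# (assumed values, for illustration only).
SMART_MESSAGING_PORTS = {
    5505: "ringing tone",
    5506: "operator logo",
    5507: "group graphic (caller-group icon)",
}

def make_udh(dest_port: int, src_port: int = 0) -> bytes:
    """Build a UDH with Information Element 0x05 (16-bit application port addressing)."""
    ie = bytes([
        0x05, 0x04,                        # IEI = application port, IE length = 4
        dest_port >> 8, dest_port & 0xFF,  # destination port (big-endian)
        src_port >> 8, src_port & 0xFF,    # source port (big-endian)
    ])
    return bytes([len(ie)]) + ie           # leading UDHL (header length) byte

udh = make_udh(5505)   # tag the message payload as a ringing tone
print(udh.hex())       # 06050415810000

A phone that understands these ports treats the message body as binary content (for example, a ringtone in Nokia's binary format) rather than as visible text.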
Mobile telecommunication services | Smart message | [
"Technology"
] | 123 | [
"Mobile telecommunications",
"Mobile telecommunication services"
] |
2,268,029 | https://en.wikipedia.org/wiki/Phenocopy | In phenomics, a phenocopy is a variation in phenotype (generally referring to a single trait) which is caused by environmental conditions (often, but not necessarily, during the organism's development), such that the organism's phenotype matches a phenotype which is determined by genetic factors. It is not a type of mutation, as it is non-hereditary.
The term was coined by German geneticist Richard Goldschmidt in 1935. He used it to refer to forms, produced by some experimental procedure, whose appearance duplicates or copies the phenotype of some mutant or combination of mutants.
Examples
Butterflies of the genus Vanessa can change phenotype based on the local temperature: individuals introduced to Lapland come to resemble the butterflies native to that area, while those introduced to Syria come to resemble the local Syrian forms.
The larvae of Drosophila melanogaster have been found to be particularly vulnerable to environmental factors which produce phenocopies of known mutations; these factors include temperature, shock, radiation, and various chemical compounds. In this species, the normal body colour is brownish gray with black margins. A hereditary mutant with a yellow body colour was discovered by T. H. Morgan in 1910. In both the normal and mutant flies, body colour is a genotypic character that remains constant in all environments. However, in 1939, Rapoport discovered that if larvae of normal flies are fed silver salts, they develop into yellow-bodied flies irrespective of their genotype. Such yellow-bodied flies, though genetically brownish gray, are phenocopies of the original yellow-bodied mutant.
Phenocopy can also be observed in Himalayan rabbits. When raised at moderate temperatures, Himalayan rabbits are white with a black tail, nose, and ears, making them phenotypically distinguishable from genetically black rabbits. However, when raised in cold temperatures, Himalayan rabbits develop black coats, resembling genetically black rabbits. The cold-reared Himalayan rabbit is thus a phenocopy of the genetically black rabbit.
Reversible and/or cosmetic modifications such as the use of hair bleach are not considered to be phenocopy, as they are not inherent traits.
See also
Genocopy
References
Genetics | Phenocopy | [
"Biology"
] | 464 | [
"Genetics"
] |
2,268,547 | https://en.wikipedia.org/wiki/Plasmalogen | Plasmalogens are a class of glycerophospholipid with a plasmenyl group linked to a lipid at the sn-1 position of the glycerol backbone. Plasmalogens are found in multiple domains of life, including mammals, invertebrates, protozoa, and anaerobic bacteria. They are commonly found in cell membranes in the nervous, immune, and cardiovascular systems. In humans, lower levels of plasmalogens are studied in relation to some diseases. Plasmalogens are also associated with adaptations to extreme environments in non-human organisms.
Structure
Glycerophospholipids of biochemical relevance are divided into three subclasses based on the substitution present at the sn-1 position of the glycerol backbone: acyl, alkyl and alkenyl. Of these, the alkyl and alkenyl moiety in each case form an ether bond, which makes for two types of ether phospholipids, plasmanyl (alkyl moiety at sn-1), and plasmenyl (alkenyl moiety with vinyl ether linkage at sn-1). Plasmalogens are plasmenyls with an ester (acyl group) linked lipid at the sn-2 position of the glycerol backbone, chemically designated 1-O-(1Z-alkenyl)-2-acyl-glycerophospholipids. The lipid attached to the vinyl ether at sn-1 can be C16:0, C18:0, or C18:1 (saturated or monounsaturated), and the lipid attached to the acyl group at sn-2 can be C22:6 ω-3 (docosahexaenoic acid) or C20:4 ω-6 (arachidonic acid), both polyunsaturated fatty acids. Plasmalogens are classified according to their head group, mainly as PC plasmalogens (plasmenylcholines) and PE plasmalogens (plasmenylethanolamines). Plasmalogens should not be confused with plasmanyls.
Functions
Plasmalogens are found in numerous human tissues, with particular enrichment in the nervous, immune, and cardiovascular systems. In human heart tissue, nearly 30–40% of choline glycerophospholipids are plasmalogens. Even more striking, 32% of the glycerophospholipids in the adult human heart, 20% of those in the brain, and up to 70% of myelin sheath ethanolamine glycerophospholipids are plasmalogens.
Although the functions of plasmalogens have not yet been fully elucidated, it has been demonstrated that they can protect mammalian cells against the damaging effects of reactive oxygen species. In addition, they have been implicated as being signaling molecules and modulators of membrane dynamics.
History
Plasmalogens were first described by Feulgen and Voit in 1924 based on studies of tissue sections. They treated these tissue sections with acid or mercuric chloride as part of a method to stain the nucleus. This resulted in the breakage of the plasmalogen vinyl-ether bond to yield aldehydes. In turn, the latter reacted with a fuchsine-sulfurous acid stain used in this nuclear staining method and gave rise to colored compounds inside the cytoplasm of the cells. Plasmalogens were named based on the fact that these colored compounds were present in the "plasmal" or inside of the cell.
Biosynthesis
Biosynthesis of plasmalogens begins with the association of the peroxisomal matrix enzymes GNPAT (glycerone phosphate acyltransferase) and AGPS (alkyl-glycerone phosphate synthase) on the luminal side of the peroxisomal membrane.
These two enzymes can interact with each other to increase efficiency. Therefore, fibroblasts without AGPS activity have a reduced GNPAT level and activity.
The first step of the biosynthesis is catalyzed by GNPAT. This enzyme acylates dihydroxyacetone phosphate at the sn-1 position. This is followed by the exchange of the acyl group for an alkyl group by AGPS.
The 1-alkyl-dihydroxyacetone phosphate (1-alkyl-DHAP) is then reduced to 1-O-alkyl-2-hydroxy-sn-glycerophosphate (GPA) by an acyl/alkyl-dihydroxyacetone phosphate reductase located in both the peroxisomal and endoplasmic reticulum membranes.
All other modifications occur in the endoplasmic reticulum. There, an acyl group is placed at the sn-2 position by an alkyl/acyl GPA acyltransferase, and the phosphate group is removed by a phosphatidic acid phosphatase to form 1-O-alkyl-2-acyl-sn-glycerol.
Using CDP-ethanolamine, a phosphotransferase forms 1-O-alkyl-2-acyl-sn-GPEtn. After dehydrogenation at the 1- and 2-positions of the alkyl group by an electron transport system and plasmanylethanolamine desaturase, the vinyl ether bond of plasmalogens is finally formed. The protein corresponding to plasmanylethanolamine desaturase has been identified and is called CarF in bacteria and PEDS1 (TMEM189) in humans (and animals).
Plasmenylcholine is formed from 1-O-alkyl-2-acyl-sn-glycerol by choline phosphotransferase. As there is no plasmenylcholine desaturase, choline plasmalogens can be formed only after hydrolysis of ethanolamine plasmalogens to 1-O-(1Z-alkenyl)-2-acyl-sn-glycerol, which can then be modified by choline phosphotransferase and CDP-choline.
Pathology
Peroxisome biogenesis disorders are autosomal recessive disorders often characterized by impaired plasmalogen biosynthesis. In these cases, the peroxisomal enzyme GNPAT, necessary for the initial steps of plasmalogen biosynthesis, is mislocalized to the cytoplasm where it is inactive. In addition, genetic mutations in the GNPAT or AGPS genes can result in plasmalogen deficiencies, which lead to the development of rhizomelic chondrodysplasia punctata (RCDP) type 2 or 3, respectively. In such cases, both copies of the GNPAT or AGPS gene must be mutated in order for disease to manifest. Unlike the peroxisome biogenesis disorders, other aspects of peroxisome assembly in RCDP2 and RCDP3 patients are normal as is their ability to metabolize very long chain fatty acids. Individuals with severe plasmalogen deficiencies frequently show abnormal neurological development, skeletal malformation, impaired respiration, and cataracts.
Deficits in plasmalogen levels contribute to pathology in Zellweger syndrome.
Plasmalogen-knockout mice show similar alterations, such as arrest of spermatogenesis, cataract development, and defects in central nervous system myelination.
Plasmalogen alkyl chains have been shown to promote or inhibit cell death from ferroptosis, depending on their degree of saturation.
During inflammation
During inflammation, neutrophil-derived myeloperoxidase produces hypochlorous acid, which causes oxidative chlorination of plasmalogens at the sn-1 chain by reacting with the vinyl ether bond. Several researchers are currently investigating the impact of chlorinated lipids on pathology.
Possible disease links
The lack of good methods to assay plasmalogens has made it difficult for scientists to assess how plasmalogens might be involved in human diseases other than RCDP and the Zellweger spectrum, in which the involvement is certain. There is some evidence in humans that low plasmalogens are involved in the pathology of bronchopulmonary dysplasia, which is an important complication of premature birth. One study showed that plasmalogen levels are reduced in people with COPD who smoked compared with non-smokers.
There is some evidence from humans and animals of reduced plasmalogen levels in the brain in neurodegenerative disorders including Alzheimer's disease, Parkinson's disease, Niemann–Pick disease type C, Down syndrome, and multiple sclerosis, though it is not clear whether this is causal or merely correlative. A study with mice concluded that plasmalogens can eliminate aging-associated synaptic defects.
More recently, population studies have also associated lower circulating plasmalogen levels with cardiometabolic disease. Animal studies have also demonstrated lower cardiac plasmalogen levels under settings of dilated cardiomyopathy and myocardial infarction.
Evolution
In addition to mammals, plasmalogens are also found in invertebrates and in single-celled organisms such as protozoans. Among bacteria they have been found in many anaerobic species including Clostridia, Megasphaera, and Veillonella. Among aerobic bacteria, plasmalogens occur in myxobacteria, and their plasmanylethanolamine desaturase (CarF), required to generate the vinyl ether bond and hence plasmalogen, is conserved as TMEM189 in humans (and animals). Plasmalogens have been shown to have a complex evolutionary history, as their biosynthetic pathways differ in aerobic and anaerobic organisms.
Recently, it has been demonstrated that the red blood cells of humans and great apes (chimpanzees, gorillas and orangutans) differ in their plasmalogen composition. Total RBC plasmalogen levels were found to be lower in humans than in chimpanzees or gorillas, but higher than in orangutans. Gene expression data from all these species led the authors to speculate that other human and great ape cells and tissues also differ in plasmalogen levels. Although the consequences of these potential differences are unknown, cross-species differences in tissue plasmalogens could influence organ functions and multiple biological processes.
Plasmalogens form a major component in the cell membranes of deep-sea animals like the comb jelly, enhancing molecular resistance to high pressure.
See also
Ether lipid
References
External links | Plasmalogen | [
"Chemistry"
] | 2,254 | [
"Phospholipids",
"Signal transduction"
] |
2,268,600 | https://en.wikipedia.org/wiki/Ether%20lipid | In biochemistry, an ether lipid refers to any lipid in which the lipid "tail" group is attached to the glycerol backbone via an ether bond at any position. In contrast, conventional glycerophospholipids and triglycerides are triesters. Structural types include:
Ether phospholipids: phospholipids are known to have ether-linked "tails" instead of the usual ester linkage.
Ether on sn-1, ester on sn-2: "ether lipids" in the context of bacteria and eukaryotes refer to this class of lipids. Compared to the usual 1,2-diacyl-sn-glycerol (DAG), the sn-1 ester linkage is replaced with an ether bond.
Based on whether the sn-1 lipid is unsaturated next to the ether linkage, they can be further divided into alkenyl-acylphospholipids ("plasmenylphospholipids", 1-O-alk-1'-enyl-2-acyl-sn-glycerols) and alkyl-acylphospholipids ("plasmanylphospholipids"). This class of lipids has important roles in human cell signaling and structure.
Ether on sn-2 and sn-3: this class, with flipped chirality of the phosphate connection, is called an "archaeal ether lipid". With few (if any) exceptions, it is only found among archaea. The part excluding the phosphate group is known as archaeol.
Ether analogues of triglycerides: 1-alkyldiacyl-sn-glycerols (alkyldiacylglycerols) are found in significant proportions in marine animals.
Other ether lipids: a number of other lipids not belonging to any of the classes above contain the ether linkage. For example, seminolipid, a vital component of the testes and sperm cells, has an ether linkage.
The term "plasmalogen" can refer to any ether lipid with a vinyl ether linkage, i.e. ones with a carbon-carbon double bond next to the ether linkage. Without specification it generally refers to alkenyl-acylphospholipids, but "neutral plasmalogens" (alkenyldiacylglycerols) and "diplasmalogens" (dialkenylphospholipids) also exist. The prototypical plasmalogen is platelet-activating factor.
In eukaryotes
Biosynthesis
The formation of the ether bond in mammals requires two enzymes, dihydroxyacetonephosphate acyltransferase (DHAPAT) and alkyldihydroxyacetonephosphate synthase (ADAPS), that reside in the peroxisome. Accordingly, peroxisomal defects often lead to impairment of ether-lipid production.
Monoalkylglycerol ethers (MAGEs) are also generated from 2-acetyl MAGEs (precursors of PAF) by KIAA1363.
Functions
Structural
Plasmalogens as well as some 1-O-alkyl lipids are ubiquitous and sometimes major parts of the cell membranes in mammals. The glycosylphosphatidylinositol anchor of mammalian proteins generally consist of an 1-O-alkyl lipid.
Second messenger
Differences between the catabolism of ether glycerophospholipids by specific phospholipase enzymes might be involved in the generation of lipid second messenger systems such as prostaglandins and arachidonic acid that are important in signal transduction. Ether lipids can also act directly in cell signaling, as the platelet-activating factor is an ether lipid signaling molecule that is involved in leukocyte function in the mammalian immune system.
Antioxidant
Another possible function of the plasmalogen ether lipids is as antioxidants, as protective effects against oxidative stress have been demonstrated in cell culture and these lipids might therefore play a role in serum lipoprotein metabolism. This antioxidant activity comes from the enol ether double bond being targeted by a variety of reactive oxygen species.
Synthetic ether lipid analogs
Synthetic ether lipid analogs have cytostatic and cytotoxic properties, probably by disrupting membrane structure and acting as inhibitors of enzymes within signal transmission pathways, such as protein kinase C and phospholipase C.
A toxic ether lipid analogue, miltefosine, has recently been introduced as an oral treatment for the tropical disease leishmaniasis, which is caused by Leishmania, a protozoal parasite with a particularly high ether lipid content in its membranes.
In archaea
The cell membrane of archaea consist mostly of ether phospholipids. These lipids have a flipped chirality compared to bacterial and eukaryotic membranes, a conundrum known as the "lipid divide". The "tail" groups are also not simply n-alkyl groups, but highly methylated chains made up of saturated isoprenoid units (e.g. phytanyl).
Among different groups of archaea, diverse modifications on the basic archaeol backbone have emerged.
The two tails can be linked together, forming a macrocyclic lipid.
Bipolar macrocyclic tetraether lipids (caldarchaeol), with two glycerol units connected by two C40 "tail" chains, form covalently linked 'bilayers'.
Some such covalent bilayers feature crosslinks between the two chains, giving an H-shaped molecule.
Crenarchaeol is a tetraether backbone with cyclopentane and cyclohexane rings on the cross-linked "tail"s.
Some lipids replace the glycerol backbone with four-carbon polyols (tetriols).
In bacteria
Ether phospholipids are major parts of the cell membrane in anaerobic bacteria. These lipids can be variously 1-O-alkyl, 2-O-alkyl, or 1,2-O-dialkyl. Some groups have, like archaea, evolved tetraether lipids.
In marine animals
Some ether lipids found in marine animals are S-batyl alcohol, S-chimyl alcohol, and S-selachyl alcohol.
See also
Membrane lipid
Glycerol dialkyl glycerol tetraether
References
External links
Lipids | Ether lipid | [
"Chemistry"
] | 1,432 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Lipids"
] |
2,268,669 | https://en.wikipedia.org/wiki/Resource%20holding%20potential | In biology, resource holding potential (RHP) is the ability of an animal to win an all-out fight if one were to take place. The term was coined by Geoff Parker to disambiguate physical fighting ability from the motivation to persevere in a fight (Parker, 1974). Originally the term used was 'resource holding power', but 'resource holding potential' has come to be preferred. The latter emphasis on 'potential' serves as a reminder that the individual with greater RHP does not always prevail.
An individual with more RHP may lose a fight if, for example, it is less motivated (has less to gain by winning) than its opponent. Mathematical models of RHP and motivation (resource value, or V) have traditionally been based on the hawk-dove game (e.g. Hammerstein, 1981), in which subjective resource value is represented by the variable 'V'. In addition to RHP and V, George Barlow (Barlow et al., 1986) proposed that a third variable, which he termed 'daring', played a role in determining fight outcome. Daring (a.k.a. aggressiveness) represents an individual's tendency to initiate or escalate a contest independent of the effects of RHP and V.
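As a point of reference, the hawk-dove payoff structure mentioned above can be written out in a few lines. The following Python sketch is illustrative only: it uses the textbook payoffs with resource value V and fight cost C, the numbers are arbitrary, and real models add RHP asymmetries, for example by biasing the probability of winning away from one half.

def hawk_dove_payoffs(V: float, C: float) -> dict:
    """Expected payoff to the focal individual for each strategy pairing."""
    return {
        ("hawk", "hawk"): (V - C) / 2,  # escalate; win or lose with equal odds
        ("hawk", "dove"): V,            # opponent retreats, take the resource
        ("dove", "hawk"): 0.0,          # retreat and concede the resource
        ("dove", "dove"): V / 2,        # share (or win the display half the time)
    }

# When V > C, playing hawk is the evolutionarily stable strategy (ESS);
# when V < C, the mixed ESS plays hawk with probability V/C.
payoffs = hawk_dove_payoffs(V=4.0, C=10.0)
print(payoffs[("hawk", "hawk")], "mixed ESS hawk frequency:", 4.0 / 10.0)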
It is instinctive for animals to behave in ways that promote their fitness (Parker 1974). Animals will do what they can to improve their fitness and therefore survive long enough to produce offspring. However, when resources are not abundant, this can be challenging, and animals will begin to compete for them. Competition for resources can be dangerous and, for some animals, deadly. Some animals have developed adaptive traits that increase their chances of survival when competing for resources; one such trait is resource holding potential (RHP) (Parker 1974). Resource holding potential, or resource holding power, denotes the motivation an individual has to continue to fight, work, or endure through situations in which others may give up. Animals that rely on RHP often evaluate the conditions of the danger they face, and they have the ability to assess the RHP of an opponent in relation to their own (Francesca Gherardi 2006). Generally, the animal with the higher RHP survives and wins the disputes it encounters (Lindström and Pampoulie 2005). How the higher RHP is determined can vary: in some cases the robust size of the animal establishes its dominance, but RHP can also be influenced by prior residency and knowledge of resource quality (Lindström and Pampoulie 2005). In the latter case, RHP is not about the direct dangers that come with standing one's ground; sometimes an animal will use RHP to determine whether its current living situation is worth protecting. That said, RHP does not so much concern the physical ability of the individual to fight as its motivation, and RHP does not always determine whether the individual will prevail (Hurd 2006). RHP, along with other variables including the value of the resource and the aggressiveness (or daring) of the individual, helps to determine how likely it is that an individual will initiate and prevail in a fight.
Recent studies
Male sand gobies (a ray-finned fish) must build large nests in order to attract a mate and to house numerous eggs. If the male is small and not very attractive but has a large nest, he is at risk of a larger, more attractive male coming by and attempting to "steal" the nest. On the other hand, if the male is larger in size but lives in a smaller nest, he has a lesser chance of finding a mate and less space to house his offspring. In either case, the male sand goby must use RHP to determine whether it is more advantageous for him to stay or move on (Lindström and Pampoulie 2005).
In Aegus chelifer chelifer, a small tropical beetle species, head width serves as an index of resource holding potential: researchers discovered that body size, rather than mandible size, had the bigger effect on the outcome of fights between the beetles (Songvorawit et al. 2018).
In the sea anemone Actinia equina, morphological traits appear to determine resource holding potential. A. equina performs a "self-assessment" of its RHP when fighting nearby anemones. Body size appears to be the main determinant of RHP unless a peel occurs due to contact with another anemone and toxin is released; in that case, nematocyst length is the main factor in RHP (Rudin and Briffa 2012).
The topic of resource holding power shares some characteristics with the behavior of conditional migration. The thought process of "what benefit do I receive from this action?" is common to the two. If an all-out fight has only two outcomes, death or winning the competition for resources, then individuals will be less likely to interact with one another and instigate a fight because the outcomes would be so severe. Similar reasoning applies to conditional migration: subordinate males will be less likely to migrate because of the severe outcomes that come with migration. If subordinates migrate with dominant males to a place where resources will be limited, their likelihood of surviving is greatly reduced. What benefit could they receive, knowing that they would most likely lose resources?
Conditional strategy - socially dominant individuals will be in a position to select the best option relative to their fitness.
Examples of the term in use
"... RHP is a measure of the absolute fighting ability of the individual" (Parker, 1974).
"Assuming the RHP of the combatants to be equal, there are many instances of fitness pay-off imbalances between holder and attacker which should weight the dispute outcome in favour of one or other opponent by allowing it a greater expendable fitness budget. Usually the weighting favours the holder; the attacker therefore needs a correspondingly higher RHP before it may be expected to win." (Parker, 1974).
"Each combatant assesses relative RHP; this correlates with an absolute probability of winning the next bout ()." (Parker, 1974).
"The essential point is to distinguish two cases (i) information about 'motivation' or 'intentions' [...] (ii) information about 'Resource Holding Power', or RHP (Parker, 1974b); RHP is a measure of the size, strength, weapons, etc. which would enable an animal to win an escalated contest" (Maynard Smith 1982).
"In practice, however, the two opponents are rarely equal in fighting ability, or resource holding potential" (Bradbury & Vehrencamp, 1998).
"Motivational and physical components are assumed to be separable. ... The motivation depends upon V, the value of the resource, and the perceived prowess and motivation of the opponent. ... but there is an additional component. It is the readiness of the individual to risk an encounter, to dare to escalate, measured when the contest is otherwise symmetrical. It differs from V in that daring appears to be an inherent property of the individual rather than a variable motivational state that is tuned to the value of the resource" (Barlow et al. 1986).
See also
Aggression
Game theory
Social defeat
References
Evolutionary biology terminology | Resource holding potential | [
"Biology"
] | 1,549 | [
"Evolutionary biology terminology"
] |
2,268,823 | https://en.wikipedia.org/wiki/Sandarac | Sandarac (or sandarach) is a resin obtained from the small cypress-like tree Tetraclinis articulata. The tree is native to the northwest of Africa with a notable presence in the Southern Morocco part of the Atlas Mountains. The resin exudes naturally on the stems of the tree. It is also obtained by making cuts on the bark. It solidifies when exposed to the air. It comes to commerce in the form of small solid chips, translucent, and having a delicate yellow tinge. Morocco has been the main place of origin of sandarac. A similar resin is obtained in southern Australia from some species of the Australian cypress-like trees Callitris, but the resin has not been systematically collected in Australia.
Historically, especially in the Late Medieval and Renaissance eras, sandarac was used to make varnish. When "varnish" was spoken of in Renaissance Italy (Italian vernice), it usually meant sandarac. Copal and other resins later displaced it as equally good, less expensive varnishing materials. Nevertheless, sandarac varnish is still valued today for use as a protective coating on paintings and antiques. It gives a coat which is hard, lustrous and durable. The varnish is made by melting the resin and mixing it with (e.g.) linseed oil. Sandarac resin melts at about 150 °C to a colourless or slightly yellow liquid. Its specific gravity is about 1.04.
In mid-to-late 19th century photography, a varnish was applied as a preservative to photographic negatives and positives. Sandarac resin was preferred by some photographers for this purpose.
Although it is not very strongly aromatic, sandarac resin was and is also used as an incense. The aroma has been compared to balsam.
Besides the resin and the varnish, the word sandarac may refer to the tree that produces the resin. Entirely separately from that, the ancient Greeks and Romans used the word sandarac to refer to arsenic sulfide, particularly red arsenic sulfide. In Medieval Latin, the term sandaraca meant red lead as well as red arsenic sulfide. The word's resin/varnish meaning came to Europe from Arabic in the early 16th century. To distinguish this meaning from the Greek and medieval Latin meaning, it was occasionally called "Arabian sandarac" or "sandaracha Arabum" in Neo-Latin writings. The name in Pakistan and India is چندرس and, in Arabic, was and is سندروس sandarūs.
In Mandaeism
In Mandaean texts, the Mandaic term sindirka, which is typically translated as 'date palm' by E. S. Drower and others, is instead identified by Carlos Gelbert (2023) as a cognate of sandarac and thus with Tetraclinis articulata.
References
Resins
Varnishes
Incense material | Sandarac | [
"Physics",
"Chemistry"
] | 612 | [
"Varnishes",
"Resins",
"Coatings",
"Unsolved problems in physics",
"Incense material",
"Materials",
"Amorphous solids",
"Matter"
] |
2,269,396 | https://en.wikipedia.org/wiki/Galactic%20disc | A galactic disc (or galactic disk) is a component of disc galaxies, such as spiral galaxies like the Milky Way and lenticular galaxies. Galactic discs consist of a stellar component (composed of most of the galaxy's stars) and a gaseous component (mostly composed of cool gas and dust). The stellar population of galactic discs tend to exhibit very little random motion with most of its stars undergoing nearly circular orbits about the galactic center. Discs can be fairly thin because the disc material's motion lies predominantly on the plane of the disc (very little vertical motion). The Milky Way's disc, for example, is approximately 1 kly thick, but thickness can vary for discs in other galaxies.
Stellar component
Exponential surface brightness profiles
Galactic discs have surface brightness profiles that very closely follow exponential functions in both the radial and vertical directions.
Radial profile
The surface brightness radial profile of the galactic disc of a typical disc galaxy (viewed face-on) roughly follows an exponential function:

I(R) = I_0 e^{-R/h_R},

where I_0 is the galaxy's central brightness and h_R is the scale length. The scale length is the radius at which the galaxy is a factor of e (≈2.7) less bright than it is at its center. Due to the diversity in the shapes and sizes of galaxies, not all galactic discs follow this simple exponential form in their brightness profiles. Some galaxies have been found to have discs with profiles that become truncated in the outermost regions.
Vertical profile
When viewed edge-on, the vertical surface brightness profiles of galactic discs follow a very similar exponential profile, proportional to the disc's radial profile:

I(R, z) = I_0 e^{-R/h_R} e^{-|z|/h_z},

where h_z is the scale height. Although exponential profiles serve as a useful first approximation, vertical surface brightness profiles can be more complicated. For example, the scale height h_z, although assumed to be a constant above, can in some cases increase with radius.
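As a quick numerical illustration of the profiles above, the following Python sketch evaluates the exponential disc law; the scale values are illustrative, roughly Milky Way-like numbers (h_R ≈ 3 kpc, h_z ≈ 0.3 kpc), not taken from the article.

import math

def disc_surface_brightness(R: float, z: float, I0: float = 1.0,
                            hR: float = 3.0, hz: float = 0.3) -> float:
    """Exponential disc profile I(R, z) = I0 * exp(-R/hR) * exp(-|z|/hz)."""
    return I0 * math.exp(-R / hR) * math.exp(-abs(z) / hz)

# At one scale length in the midplane, the brightness has dropped by a factor of e:
print(disc_surface_brightness(R=3.0, z=0.0))  # ~0.368 * I0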
Gaseous component
Most of a disc galaxy's gas lies within the disc. Cool atomic hydrogen (HI) and molecular hydrogen (H2) make up most of the disc's gaseous component, and this gas serves as the fuel for the formation of new stars in the disc. Although the distribution of gas in the disc is not as well defined as that of the stellar component, it is understood (from 21 cm emission) that atomic hydrogen is distributed fairly uniformly throughout the disc. 21 cm emission from HI also reveals that the gaseous component can flare out at the outer regions of the galaxy. The abundance of molecular hydrogen makes it a good tracer of the dynamics within the disc. Like the stars within the disc, clumps or clouds of gas follow approximately circular orbits about the galactic center. The circular velocity of the gas in the disc is strongly correlated with the luminosity of the galaxy (see Tully–Fisher relation); this relationship becomes stronger when the stellar mass is also taken into consideration.
Structure of the Milky Way disc
Three stellar components with varying scale heights can be distinguished within the disc of the Milky Way (MW): the young thin disc, the old thin disc, and the thick disc. The young thin disc is a region in which star formation is taking place; it contains the MW's youngest stars and most of its gas and dust. The scale height of this component is roughly 100 pc, while the old thin disc has a scale height of approximately 325 pc and the thick disc a scale height of 1.5 kpc. Although stars move primarily within the disc, they exhibit enough random motion in the direction perpendicular to the disc to produce these various scale heights for the different disc components. Stars in the MW's thin disc tend to have higher metallicities than the stars in the thick disc: the metal-rich stars in the thin disc have metallicities close to that of the Sun and are referred to as population I (pop I) stars, while the stars that populate the thick disc are more metal-poor and are referred to as population II (pop II) stars (see stellar population). These distinct ages and metallicities in the different stellar components of the disc point to a strong relationship between the metallicities and ages of stars.
See also
Thick disk
Thin disk
References
Galaxies | Galactic disc | [
"Astronomy"
] | 846 | [
"Galaxies",
"Astronomical objects"
] |
2,269,546 | https://en.wikipedia.org/wiki/Architectural%20mythology | Architectural mythology means the symbolism in real-world architecture, as well as the architecture described in mythological stories. In addition to language, a myth could be represented by a painting, a sculpture, or a building. It is about the overall story of an architectural work, often revealed through art.Mythology and symbolism has been a channel for architects to inject a deeper meaning for an indissoluble amount of time. The power of ancient myths and symbols is controlled to create a bridge between the past and the future. Mythology in architecture is a deliberate strategy, they try to design something timeless and universally relatable. The value of a built environment, therefore, is a conglomerate of its actual physical existence and the historical memories and myths people attach to it, bring to it, and project on it.
Not all stories surrounding an architectural work incorporate a level of myth. These stories can also be well hidden from the casual viewer and are often built into the conceptual design of the architectural statement.
Ancient Greek architecture
Before 600 BC worship was done in the open, but when the Greeks began to represent their gods with large statues, it was necessary to provide buildings to house them, which led to the development of temples. The Greek god of architecture was Hephaestus (god of fire, metalworking, craftsmen, sculpture, metallurgy and volcanoes), and the Greek goddess associated with architecture was Hestia (goddess of architecture, the hearth, and domesticity). Temples were intended for worship, to celebrate the god and receive comfort, but ancient Greek temples were also meant to serve as homes for the gods and goddesses of the community. Their homes were the finest and came with a staff of servants.
Ancient Greek temples were often enhanced with mythological decoration from the columns to the roof. The architectural functions of the temple concentrated mainly on the cella with its cult statue, and the architectural elaboration served to stress the dignity of the cella. These statues of the god or goddess were usually represented standing or seated in the central space of the temple. The early statues were made of wood; later ones were made of stone or cast bronze. Two of the finest temple statues were those of Zeus at Olympia and Athena at the Parthenon, both combinations of gold and ivory, and the statue of Zeus was considered one of the Seven Wonders of the Ancient World.
The Parthenon is a Greek temple in Athens built in dedication to Athena, the Greek goddess of wisdom, war, handicraft, and practical reason. The Parthenon was a symbol of the Athenians' devotion and gratitude to her. At a time when the Athenians wanted to showcase their strength, civilization, and heroism to the world, the Parthenon's sculptural reliefs reinforced these ideals. The south, west, and north sides of the Parthenon frieze show a procession of human figures, while the east side contains Greek gods in various positions. The gods on the left side of the frieze tend to have stronger associations with the underworld, while the gods on the right preside over spheres of fertility and optimism, creating a story of life and death across the east frieze.
Ancient Egyptian Architecture
The great pyramids are an architectural feat constructed to house the remains of ancient Egyptian rulers. Inscribed on the interior pyramid walls are hieroglyphic texts describing the afterlife and ancient Egyptian mythology; there are as many as 900 individual compositions in each pyramid. The pyramid's smooth, angled sides symbolized the rays of the sun and were designed to help the king's soul ascend to heaven and join the gods, particularly the sun god Ra. Ancient Egyptians believed that when a king died, part of his spirit remained with his body, which was therefore mummified to care for the spirit, and the pyramid became known as the royal burial ground.
The sphinx was a mythical creature with the body of a lion and the head of a man wearing a pharaoh's headdress. The lion symbolizes strength, power, and protection, and the sphinx is considered a powerful guardian of the sacred and royal realms, while the human head symbolizes wisdom and intelligence. The position of the sphinx facing east towards the rising sun symbolizes the pharaoh's role as the mediator between the gods and the people, and his connection to the sun god Ra. Sphinx statues were commonly found in or near ancient Egyptian temples and tombs, where they were thought to guard the ancient rulers of Egypt. These sphinxes, like the pyramids, bore inscriptions on their bases and bodies, with references to Egyptian gods such as Horus, Nekhbet, Wadjet and many others:
- Horus : Egyptian deity and pharaoh who represented the sky, sun, kingship, healing, and protection
- Nekhbet : Egyptian goddess who protected the pharaohs, queens, children, pregnant people, and the dead
- Wadjet : Goddess of serpents, the Nile Delta, the land of the living, and protector of Egyptian kings
Ancient Roman Architecture
Many ancient Roman temples were constructed for religious purposes, and the most influential example is the Pantheon. Pantheon is a Greek adjective meaning "honor all gods"; in fact, the building was first erected as a temple to all the gods. According to Roman legend, the original Pantheon was constructed on the very site where Romulus, Rome's mythological founder, ascended to heaven. However, most historians attribute the first Pantheon, built in 27 BC, to Agrippa, a close associate of Emperor Augustus. The Pantheon serves as the final resting place for the famed artist Raphael, as well as several Italian kings and poets.
While there is very little surviving written information about the building, the historian Cassius Dio remarked:
See also
Folly
References
Books
Giedion, S.: The Beginnings of Architecture: The Eternal Present: A Contribution on Constancy and Change, New Jersey: Princeton University Press, 1981
Lethaby, William Richard: Architecture, Mysticism and Myth Cosimo (first published 1892), English, 288 pages, (Online PDF)
Mann, A.: Sacred Architecture, Shaftesbury: Element, 1993
Donald E. Strong, The Classical World, Paul Hamlyn, London (1965)
External links
Bruno Queysanne: Architecture and Mythology (Southern California Institute of Architecture: Media Archive)
Architectural history
Mythography | Architectural mythology | [
"Engineering"
] | 1,363 | [
"Architectural history",
"Architecture"
] |
2,269,568 | https://en.wikipedia.org/wiki/Rainband | A rainband is a cloud and precipitation structure associated with an area of rainfall which is significantly elongated. Rainbands in tropical cyclones can be either stratiform or convective and are curved in shape. They consist of showers and thunderstorms, and along with the eyewall and the eye, they make up a tropical cyclone. The extent of rainbands around a tropical cyclone can help determine the cyclone's intensity.
Rainbands spawned near and ahead of cold fronts can be squall lines which are able to produce tornadoes. Rainbands associated with cold fronts can be warped by mountain barriers perpendicular to the front's orientation due to the formation of a low-level barrier jet. Bands of thunderstorms can form with sea breeze and land breeze boundaries, if enough moisture is present. If sea breeze rainbands become active enough just ahead of a cold front, they can mask the location of the cold front itself. Banding within the comma head precipitation pattern of an extratropical cyclone can yield significant amounts of rain or snow. Behind extratropical cyclones, rainbands can form downwind of relative warm bodies of water such as the Great Lakes. If the atmosphere is cold enough, these rainbands can yield heavy snow.
Extratropical cyclones
Rainbands in advance of warm occluded fronts and warm fronts are associated with weak upward motion, and tend to be wide and stratiform in nature. In an atmosphere with rich low-level moisture and vertical wind shear, narrow, convective rainbands known as squall lines generally form in the cyclone's warm sector, ahead of strong cold fronts associated with extratropical cyclones. Wider rainbands can occur behind cold fronts, which tend to have more stratiform, and less convective, precipitation. Within the cold sector north to northwest of a cyclone center, in colder cyclones, small-scale, or mesoscale, bands of heavy snow can occur within a cyclone's comma head precipitation pattern. These bands in the comma head are associated with areas of frontogenesis, or zones of strengthening temperature contrast. Southwest of extratropical cyclones, curved flow bringing cold air across the relatively warm Great Lakes can lead to narrow lake-effect snow bands which bring significant localized snowfall.
Narrow cold-frontal rainband
A narrow cold-frontal rainband (NCFR) is a characteristic of particularly sharp cold frontal boundaries. These can usually be seen very easily on satellite photos. NCFRs are typically accompanied by strong gusty winds and brief but intense rainfall. Convection may or may not occur depending on the stability of the air mass being lifted by the front. Such fronts usually are also marked by a sharp wind shift and temperature drop.
Tropical cyclones
Rainbands exist in the periphery of tropical cyclones and point towards the cyclone's center of low pressure. Rainbands within tropical cyclones require ample moisture and a low-level pool of cooler air. Bands located well away from a cyclone's center migrate outward. They are capable of producing heavy rains and squalls of wind, as well as tornadoes, particularly in the storm's right-front quadrant.
Some rainbands move closer to the center, forming a secondary, or outer, eyewall within intense hurricanes. Spiral rainbands are such a basic structure to a tropical cyclone that in most tropical cyclone basins, use of the satellite-based Dvorak technique is the primary method used to determine a tropical cyclone's maximum sustained winds. Within this method, the extent of spiral banding and difference in temperature between the eye and eyewall is used to assign a maximum sustained wind and a central pressure. Central pressure values for their centers of low pressure derived from this technique are approximate.
Different programs have been studying these rainbands, including the Hurricane Rainband and Intensity Change Experiment.
Forced by geography
Convective rainbands can form parallel to terrain on its windward side, due to lee waves triggered by hills just upstream of the cloud's formation, and are normally regularly spaced. When bands of precipitation near frontal zones approach steep topography, a low-level barrier jet forms parallel to the mountain ridge just upstream of it, which slows the frontal rainband's approach to the mountain barrier. If enough moisture is present, sea breeze and land breeze fronts can form convective rainbands. Sea breeze front thunderstorm lines can become strong enough to mask the location of an approaching cold front by evening. The edge of ocean currents can lead to the development of thunderstorm bands due to the heat differential at this interface. Downwind of islands, bands of showers and thunderstorms can develop due to low-level wind convergence downwind of the island edges. Offshore California, this has been noted in the wake of cold fronts.
References
External links
Precipitation
Extratropical cyclones
Storm
Weather hazards
Tropical cyclone meteorology
Mesoscale meteorology | Rainband | [
"Physics"
] | 995 | [
"Weather",
"Physical phenomena",
"Weather hazards"
] |
5,674,131 | https://en.wikipedia.org/wiki/Color-tagged%20structure | A color-tagged structure is a structure which has been classified by a color to represent the severity of damage or the overall condition of the building. The exact definition for each color may be different in different countries and jurisdictions.
A "red-tagged" structure has been severely damaged to the degree that the structure is too dangerous to inhabit. Similarly, a structure is "yellow-tagged" if it has been moderately damaged to the degree that its habitability is limited (only during the day, for example). A "green-tagged" structure may mean the building is either undamaged or has suffered slight damage, although differences exist at local levels when to use a green tag.
Tagging is performed by government building officials, or, occasionally during disasters, by engineers deputized by the building official. Natural disasters such as earthquakes, floods and mudslides are among the most common causes of a building being red-, yellow- or green-tagged. Usually, after such incidents, the local government body responsible for enforcing the building safety code examines the affected structures and tags them as appropriate.
In some areas of the United States, buildings are marked with a rectangular sign that is red with a white border and a white "X". Such signs provide the same information as "red-tagging" a building. Tagging structures in these ways can warn firefighters and others about hazardous buildings before the buildings are entered.
References
Building engineering
Structural engineering
Disaster management tools | Color-tagged structure | [
"Engineering"
] | 295 | [
"Structural engineering",
"Building engineering",
"Construction",
"Civil engineering",
"Architecture"
] |
5,674,351 | https://en.wikipedia.org/wiki/Stream%20capacity | The capacity of a stream or river is the total amount of sediment a stream is able to transport. This measurement usually corresponds to the stream power and the width-integrated bed shear stress across section along a stream profile. Note that capacity is greater than the load, which is the amount of sediment carried by the stream. Load is generally limited by the sediment available upstream.
Stream capacity is often mistaken for stream competency, which is a measure of the maximum size of the particles that the stream can transport, or for the total load, which is the load that a stream actually carries.
The sediment transported by the stream depends upon the intensity of rainfall and land characteristics.
See also
Bed load
Sediment transport
Suspended load
Wash load
Hydrology
Sedimentology | Stream capacity | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 148 | [
"Hydrology",
"Hydrology stubs",
"Environmental engineering"
] |
5,674,401 | https://en.wikipedia.org/wiki/Americas%20Conference%20on%20Information%20Systems | The Americas Conference on Information Systems (AMCIS) is an annual conference for information systems and information technology academics and professionals sponsored by the Association for Information Systems. AMCIS is widely considered to be one of the most prestigious conferences for IS/IT in the Western Hemisphere, and provides a platform for panel discussions and the presentation of peer-reviewed information systems research papers. The conference attracts over 600 submissions each year, and those that are selected for presentation appear in the AMCIS Proceedings, which are distributed to hundreds of libraries throughout the world.
The first AMCIS conference took place in 1995 in Pittsburgh and is notable for being the first IS/IT conference to utilize electronic paper submissions. Since that time, AMCIS has been held every August in different cities and attracts between 800 and 1,200 registered delegates every year. In 2006, AMCIS was held in Acapulco, Mexico, thereby marking a major milestone for the conference insofar as it was the first time AMCIS has been held outside of the United States. In 2008, AMCIS was held in Toronto, Canada, and, in 2012, in Lima, Peru. A Portuguese-language track was added in 2008. This continued in 2009 at the San Francisco conference and was a large component of the 2010 Lima conference.
AMCIS venues
External links
AmCIS proceedings
Information systems conferences
Academic conferences
Association for Information Systems conferences | Americas Conference on Information Systems | [
"Technology"
] | 276 | [
"Computing stubs",
"Computer conference stubs"
] |
5,674,591 | https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%203 | Mothers against decapentaplegic homolog 3 also known as SMAD family member 3 or SMAD3 is a protein that in humans is encoded by the SMAD3 gene.
SMAD3 is a member of the SMAD family of proteins. It acts as a mediator of the signals initiated by the transforming growth factor beta (TGF-β) superfamily of cytokines, which regulate cell proliferation, differentiation and death. Based on its essential role in the TGF-β signaling pathway, SMAD3 has been linked to tumor growth in cancer development.
Gene
The human SMAD3 gene is located on chromosome 15 on the cytogenic band at 15q22.33. The gene is composed of 9 exons over 129,339 base pairs. It is one of several human homologues of a gene that was originally discovered in the fruit fly Drosophila melanogaster.
The expression of SMAD3 has been related to the mitogen-activated protein kinase (MAPK/ERK pathway), particularly to the activity of mitogen-activated protein kinase kinase-1 (MEK1). Studies have demonstrated that inhibition of MEK1 activity also inhibits SMAD3 expression in epithelial cells and smooth muscle cells, two cell types highly responsive to TGF-β1.
Protein
SMAD3 is a polypeptide with a molecular weight of 48,080 Da. It belongs to the SMAD family of proteins. SMAD3 is recruited by SARA (SMAD Anchor for Receptor Activation) to the membrane, where the TGF-β receptor is located. The receptors for TGF-β (including nodal, activin, myostatin and other family members) are membrane serine/threonine kinases that preferentially phosphorylate and activate SMAD2 and SMAD3.
Once SMAD3 is phosphorylated at the C-terminus, it dissociates from SARA and forms a heterodimeric complex with SMAD4, which is required for the transcriptional regulation of many target genes.
The complex of two SMAD3 (or of two SMAD2) and one SMAD4 binds directly to DNA through interactions of the MH1 domain. These complexes are recruited to sites throughout the genome by cell lineage-defining transcription factors (LDTFs) that determine the context-dependent nature of TGF-β action. The DNA binding sites in promoters and enhancers are known as SMAD-binding elements (SBEs). These sites contain the CAG(AC)|(CC) and GGC(GC)|(CG) consensus sequences, the latter also known as 5GC sites. The 5GC motifs are highly represented, as clusters of sites, in SMAD-bound regions genome-wide. These clusters can also contain CAG(AC)|(CC) sites.
SMAD3/SMAD4 complex also binds to the TPA-responsive gene promoter elements, which have the sequence motif TGAGTCAG.
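To make the consensus notation above concrete, here is a short Python sketch (not from the article) that reads the motifs as regular expressions. It assumes "CAG(AC)|(CC)" means CAGAC or CAGCC and "GGC(GC)|(CG)" means GGCGC or GGCCG (the 5GC sites); TGAGTCAG is the TPA-responsive element motif quoted above.

import re

MOTIFS = {
    "SBE": re.compile(r"CAG(?:AC|CC)"),  # SMAD-binding element
    "5GC": re.compile(r"GGC(?:GC|CG)"),  # GC-rich 5GC site
    "TRE": re.compile(r"TGAGTCAG"),      # TPA-responsive element
}

def find_sites(seq: str) -> list:
    """Return (motif name, 0-based start, matched sequence) for each hit."""
    seq = seq.upper()
    return [(name, m.start(), m.group())
            for name, rx in MOTIFS.items() for m in rx.finditer(seq)]

print(find_sites("ttGGCGCaaCAGACggTGAGTCAGcc"))
# [('SBE', 9, 'CAGAC'), ('5GC', 2, 'GGCGC'), ('TRE', 16, 'TGAGTCAG')]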
Transcriptional coregulators, such as WWTR1 (TAZ), interact with SMAD3 to promote their function.
Structure
MH1 domain
The X-ray structures of the SMAD3 MH1 domain bound to the GTCT DNA reveal characteristic features of the fold. The MH1 structure consists of four helices and three sets of antiparallel β-hairpins, one of which is used to interact with DNA. It also revealed the presence of a bound Zn2+ ion, coordinated by the His126, Cys64, Cys109 and Cys121 residues. The main DNA-binding region of the MH1 domain comprises the loop following the β1 strand and the β2-β3 hairpin. In the complex with a member of the 5GC DNAs, the GGCGC motif, the convex face of the DNA-binding hairpin dives into the concave major groove of the duplex DNA over five base pairs (GGCGC/GCGCC). In addition, the three residues strictly conserved in all R-SMADs and in SMAD4 (Arg74 and Gln76, located in β2, and Lys81, in β3, of SMAD3) participate in a network of specific hydrogen bonds with the dsDNA. Several tightly bound water molecules at the protein-DNA interface that contribute to the stabilization of the interactions have also been detected. The SMAD3 complex with the GGCGC site reveals that the protein-DNA interface is highly complementary and that one MH1 protein covers a DNA-binding site of six base pairs.
MH2 domain
The MH2 domain mediates the interaction of R-SMADs with activated TGF-β receptors, and with SMAD4 after receptor-mediated phosphorylation of the Ser-X-Ser motif present in R-SMADs. The MH2 domain is also a binding platform for cytoplasmic anchors, DNA-binding cofactors, histone modifiers, chromatin readers, and nucleosome-positioning factors.
The structure of the complex of SMAD3 and SMAD4 MH2 domains has been determined. The MH2 fold is defined by two sets of antiparallel β-strands (six and five strands respectively) arranged as a β-sandwich flanked by a triple-helical bundle on one side and by a set of large loops and a helix on the other.
Functions and interactions
TGF-β/SMAD signaling pathway
SMAD3 functions as a transcriptional modulator, binding the TRE (TPA responsive element) in the promoter region of many genes that are regulated by TGF-β. SMAD3 and SMAD4 can also form a complex with c-Fos and c-jun at the AP-1/SMAD site to regulate TGF-β-inducible transcription. The genes regulated by SMAD3-mediated TGFβ signaling affect differentiation, growth and death.
TGF-β/SMAD signaling pathway has been shown to have a critical role in the expression of genes controlling differentiation of embryonic stem cells. Some of the developmental genes regulated by this pathway include FGF1, NGF, and WNT11 as well as stem/progenitor cell associated genes CD34 and CXCR4. The activity of this pathway as a regulator of pluripotent cell states requires the TRIM33-SMAD2/3 chromatin reading complex.
TGF-β/SMAD3-induced repression
Besides the activity of TGF-β in the up-regulation of genes, this signaling molecule also induces the repression of target genes containing the TGF-β inhibitory element (TIE). SMAD3 also plays a critical role in TGF-β-induced repression of target genes; specifically, it is required for the repression of c-myc. The transcriptional repression of c-myc is dependent on direct SMAD3 binding to a repressive SMAD binding element (RSBE) within the TIE of the c-myc promoter. The c-myc TIE is a composite element, composed of an overlapping RSBE and a consensus E2F site, which is capable of binding at least SMAD3, SMAD4, E2F4, and p107.
Clinical significance
Diseases
Increased SMAD3 activity has been implicated in the pathogenesis of scleroderma.
SMAD3 is also a multifaceted regulator in adipose physiology and in the pathogenesis of obesity and type 2 diabetes. SMAD3-knockout mice have diminished adiposity, with improved glucose tolerance and insulin sensitivity. Despite their reduced physical activity arising from muscle atrophy, these SMAD3-knockout mice are resistant to high-fat-diet-induced obesity. The SMAD3-knockout mouse is an established animal model of human aneurysms-osteoarthritis syndrome (AOS), also named Loeys-Dietz syndrome (type 3). SMAD3 deficiency promotes inflammatory aortic aneurysms in angiotensin II-infused mice via the activation of iNOS. Macrophage depletion and inhibition of iNOS activity prevent aortic aneurysms related to SMAD3 gene mutation.
Role in cancer
The role of SMAD3 in the regulation of genes important for cell fate, such as differentiation, growth, and death, implies that alterations in or repression of its activity can lead to the formation or development of cancer. Several studies have also demonstrated the bifunctional tumor suppressor/oncogene role of the TGF-β signaling pathway in carcinogenesis.
One way in which SMAD3 transcriptional activator function is repressed, is by the activity of EVI-1. EVI-1 encodes a zinc-finger protein that may be involved in leukaemic transformation of haematopoietic cells. The zinc-finger domain of EVI-1 interacts with SMAD3, thereby suppressing the transcriptional activity of SMAD3. EVI-1 is thought to be able to promote growth and to block differentiation in some cell types by repressing TGF-β signalling and antagonizing the growth-inhibitory effects of TGF-β.
Prostate
The activity of SMAD3 in prostate cancer is related to the regulation of the expression of angiogenic molecules in tumor vascularization and of cell-cycle inhibitors in tumor growth. The progressive growth of primary tumors and metastases in prostate cancer depends on an adequate blood supply provided by tumor angiogenesis. Studies analyzing SMAD3 expression levels in prostate cancer cell lines found that the two androgen-independent and androgen receptor-negative cell lines (PC-3MM2 and DU145) have high expression levels of SMAD3. Analysis of the relation between SMAD3 and the regulation of angiogenic molecules suggests that SMAD3 may be one of the key components repressing the critical angiogenesis switch in prostate cancer.
The pituitary tumor-transforming gene 1 (PTTG1) also has an impact on SMAD3-mediated TGFβ signaling. PTTG1 has been associated with various cancer cells, including prostate cancer cells. Studies showed that overexpression of PTTG1 induces a decrease in SMAD3 expression, promoting the proliferation of prostate cancer cells via the inhibition of SMAD3.
Colorectal
In mice, mutation of SMAD3 has been linked to colorectal adenocarcinoma,[3] increased systemic inflammation, and accelerated wound healing.[4] Studies have shown that mutations in the SMAD3 gene promote colorectal cancer in mice. The altered activity of SMAD3 was linked to chronic inflammation and somatic mutations that contribute to chronic colitis and the development of colorectal cancer.
The results generated in mice helped identify SMAD3 as a possible player in human colorectal cancer. The impact of SMAD3 has also been analyzed in human colorectal cancer cell lines, using single-nucleotide polymorphism (SNP) microarray analysis. The results showed reductions in SMAD3 transcriptional activity and SMAD2-SMAD4 complex formation, underlining the critical roles of these three proteins within the TGF-β signaling pathway and the impact of this pathway on colorectal cancer development.
Breast
TGF-β-induced SMAD3 transcriptional regulation has been associated with breast cancer bone metastasis through its effects on tumor angiogenesis and epithelial-mesenchymal transition (EMT). Diverse molecules that act on the TGF-β/SMAD signaling pathway, primarily affecting the SMAD2/3 complex, have been identified and associated with the development of breast cancer.
FOXM1 (forkhead box M1) is a molecule that binds SMAD3 to sustain activation of the SMAD3/SMAD4 complex in the nucleus. Research on FOXM1 suggests that it prevents the E3 ubiquitin-protein ligase transcriptional intermediary factor 1 γ (TIF1γ) from binding SMAD3 and monoubiquitinating SMAD4, which stabilizes the SMAD3/SMAD4 complex. FOXM1 is a key player in the activity of the SMAD3/SMAD4 complex, promoting SMAD3's transcription-modulating activity, and also plays an important role in the turnover of the activity of the SMAD3/SMAD4 complex. Given the importance of this molecule, studies have found that FOXM1 is overexpressed in highly aggressive human breast cancer tissues. These studies also found that the FOXM1/SMAD3 interaction was required for TGF-β-induced breast cancer invasion, which was the result of SMAD3/SMAD4-dependent upregulation of the transcription factor SLUG.
MED15 is a mediator molecule that promotes the activity of TGF-β/SMAD signaling. Deficiency of this molecule attenuates the activity of the TGF-β/SMAD signaling pathway on the genes required for induction of epithelial-mesenchymal transition. The action of MED15 is related to the phosphorylation of the SMAD2/3 complex: knockdown of MED15 reduces the amount of phosphorylated SMAD3, thereby reducing its activity as a transcription modulator. In cancer, MED15 is also highly expressed in clinical breast cancer tissues, correlated with hyperactive TGF-β signaling as indicated by SMAD3 phosphorylation. The studies suggest that MED15 increases the metastatic potential of a breast cancer cell line by increasing TGF-β-induced epithelial–mesenchymal transition.
Kidney
SMAD3 activation plays a role in the pathogenesis of renal fibrosis, probably by inducing activation of bone marrow-derived fibroblasts.
Nomenclature
The SMAD proteins are homologs of both the Drosophila protein "mothers against decapentaplegic" (MAD) and the C. elegans protein SMA. The name is a combination of the two. During Drosophila research, it was found that a mutation in the gene MAD in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was inspired by organizations formed by mothers to oppose social problems, such as Mothers Against Drunk Driving (MADD); and based on a tradition of such unusual naming within the gene research community.
A reference assembly of SMAD3 is available.
References
Further reading
External links
Developmental genes and proteins
MH1 domain
MH2 domain
R-SMAD
Transcription factors
Human proteins | Mothers against decapentaplegic homolog 3 | [
"Chemistry",
"Biology"
] | 3,067 | [
"Transcription factors",
"Gene expression",
"Signal transduction",
"Developmental genes and proteins",
"Induced stem cells"
] |
5,674,962 | https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic%20homolog%204 | SMAD4, also called SMAD family member 4, Mothers against decapentaplegic homolog 4, or DPC4 (Deleted in Pancreatic Cancer-4) is a highly conserved protein present in all metazoans. It belongs to the SMAD family of transcription factor proteins, which act as mediators of TGF-β signal transduction. The TGFβ family of cytokines regulates critical processes during the lifecycle of metazoans, with important roles during embryo development, tissue homeostasis, regeneration, and immune regulation.
SMAD4 belongs to the co-SMAD group (common mediator SMAD), the second class of the SMAD family. SMAD4 is the only known co-SMAD in most metazoans. It also belongs to the dwarfin family of proteins that modulate members of the TGFβ protein superfamily, a family of proteins that all play a role in the regulation of cellular responses. Mammalian SMAD4 is a homolog of the Drosophila "Mothers against decapentaplegic"-related protein named Medea.
SMAD4 interacts with R-Smads, such as SMAD2, SMAD3, SMAD1, SMAD5 and SMAD9 (also called SMAD8) to form heterotrimeric complexes. Transcriptional coregulators, such as WWTR1 (TAZ) interact with SMADs to promote their function. Once in the nucleus, the complex of SMAD4 and two R-SMADS binds to DNA and regulates the expression of different genes depending on the cellular context. Intracellular reactions involving SMAD4 are triggered by the binding, on the surface of the cells, of growth factors from the TGFβ family. The sequence of intracellular reactions involving SMADS is called the SMAD pathway or the transforming growth factor beta (TGF-β) pathway since the sequence starts with the recognition of TGF-β by cells.
Gene
In mammals, SMAD4 is coded by a gene located on chromosome 18. In humans, the SMAD4 gene contains 54,829 base pairs and is located from base pair 51,030,212 to base pair 51,085,041 in region 21.1 of chromosome 18.
Protein
SMAD4 is a 552 amino-acid polypeptide with a molecular weight of 60,439 Da. SMAD4 has two functional domains, known as MH1 and MH2.
The complex of two SMAD3 (or of two SMAD2) and one SMAD4 binds directly to DNA through interactions of their MH1 domains. These complexes are recruited to sites throughout the genome by cell lineage-defining transcription factors (LDTFs) that determine the context-dependent nature of TGF-β action. Early insights into the DNA binding specificity of Smad proteins came from oligonucleotide binding screens, which identified the palindromic duplex 5'–GTCTAGAC–3' as a high-affinity binding sequence for SMAD3 and SMAD4 MH1 domains. Other motifs have also been identified in promoters and enhancers. These additional sites contain the CAGCC motif and the GGC(GC)|(CG) consensus sequences, the latter also known as 5GC sites. The 5GC motifs are highly represented as clusters of sites in SMAD-bound regions genome-wide. These clusters can also contain CAG(AC)|(CC) sites. The SMAD3/SMAD4 complex also binds to the TPA-responsive gene promoter elements, which have the sequence motif TGAGTCAG.
Structures
MH1 domain complexes with DNA motifs
The first structure of SMAD4 bound to DNA was the complex with the palindromic GTCTAGAC motif. Recently, the structures of the SMAD4 MH1 domain bound to several 5GC motifs have also been determined. In all complexes, the interaction with the DNA involves a conserved β-hairpin present in the MH1 domain. The hairpin is partially flexible in solution, and its high degree of conformational flexibility allows recognition of the different 5-bp sequences. Efficient interactions with GC-sites occur only if a G nucleotide is located deep in the major groove, where it establishes hydrogen bonds with the guanidinium group of Arg81. This interaction facilitates a complementary surface contact between the Smad DNA-binding hairpin and the major groove of the DNA. Other direct interactions involve Lys88 and Gln83. The X-ray crystal structure of the Trichoplax adhaerens SMAD4 MH1 domain bound to the GGCGC motif indicates a high conservation of this interaction in metazoans.
MH2 domain complexes
The MH2 domain, corresponding to the C-terminus, is responsible for receptor recognition and association with other SMADs. It interacts with the R-SMADS MH2 domain and forms heterodimers and heterotrimers. Some tumor mutations detected in SMAD4 enhance interactions between the MH1 and MH2 domains.
Nomenclature and origin of name
SMADs are highly conserved across species, especially in the N terminal MH1 domain and the C terminal MH2 domain.
The SMAD proteins are homologs of both the Drosophila protein MAD and the C. elegans protein SMA. The name is a combination of the two. During Drosophila research, it was found that a mutation in the gene MAD in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added, since mothers often form organizations opposing various issues, e.g. Mothers Against Drunk Driving (MADD), reflecting "the maternal-effect enhancement of dpp"; and based on a tradition of unusual naming within the research community. SMAD4 is also known as DPC4, JIP or MADH4.
Function and action mechanism
SMAD4 is a protein defined as an essential effector in the SMAD pathway. SMAD4 serves as a mediator between extracellular growth factors from the TGFβ family and genes inside the cell nucleus. The abbreviation co in co-SMAD stands for common mediator. SMAD4 is also defined as a signal transducer.
In the TGF-β pathway, TGF-β dimers are recognized by a transmembrane receptor known as the type II receptor. Once the type II receptor is activated by the binding of TGF-β, it phosphorylates a type I receptor, which is also a cell surface receptor. This receptor then phosphorylates intracellular receptor-regulated SMADs (R-SMADs) such as SMAD2 or SMAD3. The phosphorylated R-SMADs then bind to SMAD4, forming a heteromeric complex. This complex then moves from the cytoplasm into the nucleus, a process known as translocation. SMAD4 may form heterotrimeric, heterohexameric or heterodimeric complexes with R-SMADs.
SMAD4 is a substrate of the Erk/MAPK kinase and GSK3. The FGF (Fibroblast Growth Factor) pathway stimulation leads to Smad4 phosphorylation by Erk of the canonical MAPK site located at Threonine 277. This phosphorylation event has a dual effect on Smad4 activity. First, it allows Smad4 to reach its peak of transcriptional activity by activating a growth factor-regulated transcription activation domain located in the Smad4 linker region, SAD (Smad-Activation Domain). Second, MAPK primes Smad4 for GSK3-mediated phosphorylations that cause transcriptional inhibition and also generate a phosphodegron used as a docking site by the ubiquitin E3 ligase Beta-transducin Repeat Containing (beta-TrCP) that polyubiquitinates Smad4 and targets it for degradation in the proteasome. Smad4 GSK3 phosphorylations have been proposed to regulate the protein stability during pancreatic and colon cancer progression.
In the nucleus the heteromeric complex binds promoters and interacts with transcriptional activators. SMAD3/SMAD4 complexes can directly bind the SBE. These associations are weak and require additional transcription factors, such as members of the AP-1 family, TFE3 and FoxG1, to regulate gene expression.
Many TGFβ ligands use this pathway and subsequently SMAD4 is involved in many cell functions such as differentiation, apoptosis, gastrulation, embryonic development and the cell cycle.
Clinical significance
Genetic experiments such as gene knockout (KO), which consist of modifying or inactivating a gene, can be carried out in order to see the effects of a dysfunctional SMAD4 on the study organism. Experiments are often conducted in the house mouse (Mus musculus).
It has been shown that, in mouse KO of SMAD4, the granulosa cells, which secrete hormones and growth factors during the oocyte development, undergo premature luteinization and express lower levels of follicle-stimulating hormone receptors (FSHR) and higher levels of luteinizing hormone receptors (LHR). This may be due in part to impairment of bone morphogenetic protein-7 effects as BMP-7 uses the SMAD4 signaling pathway.
Deletions in the genes coding for SMAD1 and SMAD5 have also been linked to metastatic granulosa cell tumors in mice.
SMAD4 is often found mutated in many cancers. The mutation can be inherited or acquired during an individual's lifetime.
If inherited, the mutation affects both somatic cells and cells of the reproductive organs. If the SMAD4 mutation is acquired, it will only exist in certain somatic cells. Indeed, SMAD4 is not synthesized by all cells: the protein is present in skin, pancreatic, colon, uterus and epithelial cells, and is also produced by fibroblasts.
Functional SMAD4 participates in the regulation of the TGF-β signal transduction pathway, which negatively regulates growth of epithelial cells and the extracellular matrix (ECM). When the structure of SMAD4 is altered, expression of the genes involved in cell growth is no longer regulated, and cell proliferation can go on without any inhibition. The large number of cell divisions leads to the formation of tumors and then to multiploid colorectal cancer and pancreatic carcinoma. SMAD4 is found inactivated in at least 50% of pancreatic cancers.
Somatic mutations of the SMAD4 MH1 domain found in human cancers have been shown to inhibit the DNA-binding function of this domain.
SMAD4 is also found mutated in the autosomal dominant disease juvenile polyposis syndrome (JPS). JPS is characterized by hamartomatous polyps in the gastrointestinal (GI) tract. These polyps are usually benign; however, affected individuals are at greater risk of developing gastrointestinal cancers, in particular colon cancer.
Around 60 mutations causing JPS have been identified. They have been linked to the production of a smaller SMAD4 protein with missing domains that prevent the protein from binding to R-SMADs and forming heteromeric complexes.
Mutations in SMAD4 (mostly substitutions) can cause Myhre syndrome, a rare inherited disorder characterized by mental disabilities, short stature, unusual facial features, and various bone abnormalities.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Hereditary Hemorrhagic Telangiectasia
GeneReviews/NCBI/NIH/UW entry on Juvenile Polyposis Syndrome
SMAD4 gene variant database
Developmental genes and proteins
MH1 domain
MH2 domain
Transcription factors
Human proteins
Genes on human chromosome 18 | Mothers against decapentaplegic homolog 4 | [
"Chemistry",
"Biology"
] | 2,506 | [
"Transcription factors",
"Gene expression",
"Signal transduction",
"Developmental genes and proteins",
"Induced stem cells"
] |
5,675,516 | https://en.wikipedia.org/wiki/Oil%2C%20Chemical%20and%20Atomic%20Workers%20International%20Union | The Oil, Chemical and Atomic Workers Union (OCAW) was a trade union in the United States which existed between 1917 and 1999. At the time of its dissolution and merger, the International represented 80,000 workers and was affiliated with the AFL–CIO.
History
Oil Workers International (OWIU)
The union was originally established as the International Association of Oil Field, Gas Well, and Refinery Workers of America in 1918, after a major workers' strike in the Texas oil fields in late 1917. It affiliated with the American Federation of Labor (AFL) when the AFL recognized the local unions of oil workers at a convention held in El Paso, Texas, and officially chartered the international union for oil workers in 1918. Beginning with only 25 members, the newly established union enjoyed considerable success in its first few years: it was soon organizing and negotiating carefully drafted contracts that affected thousands of oil workers in three states: California, Texas, and Oklahoma.
Its membership grew to 30,000 as the oil industry expanded rapidly in the United States, reaching a first peak in 1921, but the Great Depression reduced its ranks to just 350 by the beginning of 1933. Of the several local unions that had been established, only one local – LB Local 128 – managed not to miss a single meeting. The union began to increase in size and activity again once the NRA was passed in 1933; under the New Deal, the NRA guaranteed the right of workers to organize. At the end of 1933, even through the Depression, several thousand oil workers joined or rejoined the union, spread across several dozen locals. By this point, union membership had become genuinely important in the oil industry.
In 1937, the union changed its name to the Oil Workers International Union (OWIU). The union was one of the first to affiliate with the Committee for Industrial Organization in early 1938, and AFL President William Green revoked the union's AFL charter.
The CIO helped the union grow significantly between 1940 and 1946. Membership increased as large strategic groups were brought into the union, but growth began to decline slowly after 1946.
As its membership and the union itself expanded, the OWIU extended into Canada in 1948 in order to improve wages and working conditions there. After 1948, Canadian oil workers reaped the benefits of these efforts and soon began to receive wages close behind those in the US; their gains far surpassed wages in other Canadian industries.
United Gas Coke and Chemical Workers of America (UGCCWA)
Like the OWIU, the UGCCWA grew out of an existing union: the United Mine Workers of America (UMW). Its main purpose was to unite workers in industries related to coke and artificial gas production, which used coke as a fuel.
Unhappy with the service the AFL was providing, the UMW eventually broke away from the AFL and created its own organization, separate from the AFL, called "District 50". District 50 became a branch of the UMW whose main purpose was to cover "gas, coke and chemical products" made from coal.
Eventually John L. Lewis, president of both the UMW and the CIO, resigned as president of the CIO, thereby removing the UMW from the CIO as well. Once the UMW was no longer part of either the AFL or the CIO, Lewis strengthened District 50, transforming it into a catch-all branch of the Mine Workers: all miscellaneous groups related to gas, coke, and chemicals became part of District 50, making gas, coke, and chemical workers simply a small division within it.
Because of the impact this action had on the workers of these companies, several division leaders from District 50 met with the CIO executive board in June 1942. They wanted to break away from District 50 and rejoin the CIO, and discussed the possibility of creating an international union for their industry alone. The union broke away from the United Mine Workers of America in September 1942 and won a charter from the Congress of Industrial Organizations (CIO). When the charter was finally granted, the union officially took the name United Gas, Coke and Chemical Workers of America.
The international union under the CIO got off to a slow start, and its first meeting represented only around 5,000 workers. Within just a few months, however, the union grew in size as numerous other groups left District 50 and joined the UGCCWA.
In 1948, Lee Pressman of New York and Joseph Forer of Washington, DC, represented Charles A. Doyle of the Gas, Coke and Chemical Workers Union, along with Gerhard Eisler (publicly thought to be the top Soviet spy in America), Irving Potash, vice president of the Fur and Leather Workers Union, Ferdinand C. Smith, secretary of the National Maritime Union, and John Williamson, labor secretary of the CPUSA. On May 5, 1946, Pressman and Forer received a preliminary injunction so that their defendants might have hearings with examiners unconnected with the investigations and prosecutions by examiners of the Immigration and Naturalization Service.
Over the next several years membership increased slowly but steadily, then grew quickly to a peak in 1950. By the time the UGCCWA merged with the OWIU, it represented almost 100,000 workers in the gas, coke, and chemical industry.
Oil Chemical and Atomic Workers International Union (OCAW)
Oil Workers International Union (OWIU) and the United Gas, Coke, and Chemical Workers of America (UGCCWA) merged on March 4, 1955, to form the Oil, Chemical, and Atomic Workers Union (OCAW). When the AFL and CIO merged in 1955, so did the two oil workers' unions. In 1956, after only one year of the merger, OCAW represented approximately 210,000 workers. During this time, it represented more workers than any other union in the oil and chemical field.
The OCAW had one main objective: the improvement of living conditions for those who work in oil, chemical, and related industries. It pursued this through collective bargaining and through participation in community activities, political action, and educational work. Collective bargaining focused on seeking better wages and better working conditions, with the union bargaining with employers according to established procedures on how to improve these conditions. Through community activities, political action, and educational work, the union aimed to gain first-hand experience and to develop ways to improve government, schools, housing, recreational facilities, and other aspects of the community as a whole.
In the 1970s, OCAW's Canadian locals broke off to form their own union. OCAW tried to absorb the United Rubber Workers several times in the 1970s and 1980s, but the talks collapsed due to internal union politics within the Rubber Workers and no merger ever occurred.
OCAW lost approximately 50 percent of its membership between 1980 and 1995, primarily because oil companies closed nearly half the refineries in the US. OCAW sought a merger with larger unions in an attempt to survive. A planned merger with the United Mine Workers of America was rejected on February 24, 1988, just two hours before the unions planned to announce the merger agreement. OCAW finally merged with the 250,000-member United Paperworkers International Union on January 4, 1999, to form the Paper, Allied-Industrial, Chemical and Energy Workers International Union (PACE).
OCAW gained a final victory as an independent union seven months after the merger, when the federal government acknowledged for the first time that nuclear weapons production during the Cold War likely caused the illness and even deaths of thousands of atomic mining, refining, and production workers. The government agreed to seek legislation to compensate workers and their survivors for their medical care and lost wages. The admission of complicity and legislative relief had long been sought by OCAW.
PACE merged with the United Steelworkers in 2005 to form the United Steel, Paper and Forestry, Rubber, Manufacturing, Energy, Allied-Industrial and Service Workers International Union (although the merged union is still more commonly known as the United Steelworkers). OCAW members are scattered throughout several "bargaining conferences", the industry divisions internal to the United Steelworkers. These include the Chemical Industry, Energy and Utilities, Manufacturing, Mining, and Pharmacies and Pharmaceuticals conferences. Robert Wages, president of OCAW from 1991 to 1999, is currently retired. Kip Phillips, a former vice president, is now a vice president at large with the USW.
Presidents
1955: Jack Knight
1965: Alvin F. Grospiron
1979: Robert F. Goss
1983: Joseph Misbrener
1991: Bob Wages
Other notable members
Stanley Aronowitz
Tony Mazzocchi
Mike Ricigliano
Sam Nahem (1915–2004), Major League Baseball pitcher
Karen Silkwood
References
Further reading
Rothbaum, Murray. The Government of the Oil, Chemical, and Atomic Workers Union. New York: John Wiley and Sons, 1962.
Weber, Arnold R. "Competitive Unionism in the Chemical Industry." Industrial and Labor Relations Review. 13:1 (October 1959).
United Steelworkers
Defunct trade unions in the United States
1918 establishments in the United States
Trade unions disestablished in 1999
Chemical industry trade unions
Energy industry trade unions
Trade unions established in 1918 | Oil, Chemical and Atomic Workers International Union | [
"Chemistry"
] | 1,956 | [
"Chemical industry trade unions"
] |
5,676,385 | https://en.wikipedia.org/wiki/Catskill%20Aqueduct | The Catskill Aqueduct, part of the New York City water supply system, brings water from the Catskill Mountains to Yonkers where it connects to other parts of the system.
History
Construction began in 1907. The aqueduct proper was completed in 1916, and the entire Catskill Aqueduct system, including three dams and 67 shafts, was completed in 1924. The total cost of the aqueduct system was $177 million.
Specifications
The aqueduct consists of stretches of cut-and-cover aqueduct, grade tunnel, and pressure tunnel, together with nine miles (10 km) of steel siphon. The 67 shafts sunk for various purposes on the aqueduct and City Tunnel vary in depth, the shallowest being 174 feet. Water flows by gravity through the aqueduct.
The Catskill Aqueduct's operational capacity north of the Kensico Reservoir in Valhalla, New York, differs from that of the section south of Kensico Reservoir to the Hillview Reservoir in Yonkers, New York. The aqueduct normally operates well below capacity, with daily averages around 350 million US gallons of water per day. About 40% of New York City's water supply flows through the Catskill Aqueduct.
Geography
The Catskill Aqueduct begins at the Ashokan Reservoir in Olivebridge, New York, located in Ulster County. From the Ashokan Reservoir, the aqueduct traverses in a southeasterly direction through Ulster, Orange, and Putnam counties. It tunnels first beneath the Rondout Valley and Rondout Creek in the town of Marbletown, then beneath the Wallkill River in the town of Gardiner in Ulster County before flowing toward Orange County, New York. It crosses below the Hudson River bed at Storm King Mountain in Orange County before reaching Putnam County on the east side of the river at Breakneck Mountain. The aqueduct transports water from Ashokan as well as the Schoharie Reservoir, which feeds into Ashokan.
The aqueduct then enters Westchester County, New York, and flows to the Kensico Reservoir, which also receives water from the city's Delaware Aqueduct. It continues from the Kensico reservoir and terminates at the Hillview Reservoir in Yonkers. The Hillview Reservoir then feeds City Tunnels 1 and 2, which bring water to New York City. If necessary, water can be made to bypass both reservoirs.
References
See also
Delaware Aqueduct
New York City Water Supply System
Frank E. Winsor, the engineer in charge of construction of part of the Aqueduct.
Water infrastructure of New York City
Landmarks in New York (state)
Aqueducts in New York (state)
Interbasin transfer | Catskill Aqueduct | [
"Environmental_science"
] | 521 | [
"Hydrology",
"Interbasin transfer"
] |
5,676,427 | https://en.wikipedia.org/wiki/Estimation%20lemma | In mathematics the estimation lemma, also known as the inequality, gives an upper bound for a contour integral. If is a complex-valued, continuous function on the contour and if its absolute value is bounded by a constant for all on , then
where is the arc length of . In particular, we may take the maximum
as upper bound. Intuitively, the lemma is very simple to understand. If a contour is thought of as many smaller contour segments connected together, then there will be a maximum for each segment. Out of all the maximum s for the segments, there will be an overall largest one. Hence, if the overall largest is summed over the entire path then the integral of over the path must be less than or equal to it.
Formally, the inequality can be shown to hold using the definition of contour integral, the absolute value inequality for integrals and the formula for the length of a curve as follows:
The estimation lemma is most commonly used as part of the methods of contour integration, with the intent to show that the integral over part of a contour goes to zero as the radius of that part goes to infinity. An example of such a case is shown below.
Example
Problem.
Find an upper bound for

$$\left| \int_\Gamma \frac{1}{(z^2 + 1)^2} \, \mathrm{d}z \right|,$$

where $\Gamma$ is the upper half-circle $|z| = a$ with radius $a > 1$, traversed once in the counterclockwise direction.
Solution.

First observe that the length of the path of integration is half the circumference of a circle with radius $a$, hence

$$l(\Gamma) = \pi a.$$

Next we seek an upper bound $M$ for the integrand when $|z| = a$. By the triangle inequality we see that

$$|z|^2 = |z^2| = |z^2 + 1 - 1| \le |z^2 + 1| + 1,$$

therefore

$$|z^2 + 1| \ge |z|^2 - 1 = a^2 - 1 > 0$$

because $|z| = a > 1$ on $\Gamma$. Hence

$$\left| \frac{1}{(z^2 + 1)^2} \right| \le \frac{1}{(a^2 - 1)^2}.$$

Therefore, we apply the estimation lemma with $M = \frac{1}{(a^2 - 1)^2}$. The resulting bound is

$$\left| \int_\Gamma \frac{1}{(z^2 + 1)^2} \, \mathrm{d}z \right| \le \frac{\pi a}{(a^2 - 1)^2}.$$
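The bound just derived can be checked numerically. The sketch below (an illustration, not part of the original article) parametrizes the semicircle as $z = a e^{i\theta}$, evaluates the contour integral by the trapezoidal rule, and confirms that its modulus stays below $\pi a/(a^2-1)^2$.

```python
import numpy as np

def contour_integral(f, a, n=20001):
    """Integrate f along the upper half-circle z = a*exp(i*theta), 0 <= theta <= pi."""
    theta = np.linspace(0.0, np.pi, n)
    z = a * np.exp(1j * theta)
    dz_dtheta = 1j * z  # derivative of the parametrization
    return np.trapz(f(z) * dz_dtheta, theta)

f = lambda z: 1.0 / (z**2 + 1.0)**2

for a in (2.0, 5.0, 50.0):
    value = abs(contour_integral(f, a))
    bound = np.pi * a / (a**2 - 1.0)**2
    assert value <= bound
    print(f"a = {a:5.1f}: |integral| = {value:.6e} <= ML bound = {bound:.6e}")
```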
See also
Jordan's lemma
References
Theorems in complex analysis
Lemmas in analysis | Estimation lemma | [
"Mathematics"
] | 354 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis",
"Lemmas",
"Lemmas in mathematical analysis"
] |
5,676,639 | https://en.wikipedia.org/wiki/Information%20Systems%20Research | Information Systems Research is a quarterly peer-reviewed academic journal that covers research in the areas of information systems and information technology, including cognitive psychology, economics, computer science, operations research, design science, organization theory and behavior, sociology, and strategic management. It is published by the Institute for Operations Research and the Management Sciences and was in 2007 ranked as one of the most prestigious journals in the information systems discipline. In 2008 it was selected as one of the top 20 professional/academic journals by Bloomberg Businessweek. The current editor-in-chief is Suprateek Sarker, who was preceded by Alok Gupta (University of Minnesota), Ritu Agarwal (2011–2016; University of Maryland, College Park), Vallabh Sambamurthy (2005–2010; Michigan State University), Chris F. Kemerer (2002–2004), Izak Benbasat (1999–2001), John Leslie King (1993–1998), and E. Burton Swanson (1990–1992). According to the Journal Citation Reports, the journal has a 2018 impact factor of 2.457. The journal is member of the Senior Scholar's 'Basket of Eight'.
References
External links
Academic journals established in 1990
Quarterly journals
Information systems journals
English-language journals
INFORMS academic journals | Information Systems Research | [
"Technology"
] | 265 | [
"Information systems journals",
"Information systems"
] |
5,676,771 | https://en.wikipedia.org/wiki/Management%20Information%20Systems%20Quarterly | Management Information Systems Quarterly, referred to as MIS Quarterly, is an online-only quarterly peer-reviewed academic journal that covers research in management information systems and information technology. It was established in 1977 and is considered a major periodical in the information systems industry. An official journal of the Association for Information Systems, it is published by the Management Information Systems Research Center at the University of Minnesota. The current editor-in-chief is Andrew Burton-Jones, University of Queensland.
The journal had the highest impact factor (4.978) of all peer-reviewed academic journals in the field of business from 1992 to 2005. According to the Journal Citation Reports, the journal has a 2015 impact factor of 5.384.
Editors-in-chief
Past editors-in-chief in order of succession have been:
See also
Information Systems Research
Journal of Management Information Systems
Journal of Information Technology
References
External links
Business and management journals
Quarterly journals
Information systems journals
English-language journals
Academic journals established in 1977
University of Minnesota
Academic journals associated with learned and professional societies
Academic journals published by universities and colleges of the United States | Management Information Systems Quarterly | [
"Technology"
] | 219 | [
"Information systems journals",
"Information systems"
] |
5,676,819 | https://en.wikipedia.org/wiki/Lotus%20Dev.%20Corp.%20v.%20Borland%20Int%27l%2C%20Inc. | Lotus Dev. Corp. v. Borland Int'l, Inc., 516 U.S. 233 (1996), is a United States Supreme Court case that tested the extent of software copyright. The lower court had held that copyright does not extend to the user interface of a computer program, such as the text and layout of menus. Due to the recusal of one justice, the Supreme Court decided the case with an eight-member bench split evenly, leaving the lower court's decision affirmed but setting no national precedent.
Background information
Borland released a spreadsheet product, Quattro Pro, with a compatibility mode in which its menu imitated Lotus 1-2-3, a competing product. None of the source code or machine code that generated the menus was copied, but the names of the commands and the organization of those commands into a hierarchy were virtually identical.
Quattro Pro also contained a "Key Reader" feature, which allowed it to execute Lotus 1-2-3 keyboard macros. To support this feature, Quattro Pro's code contained a copy of Lotus's menu hierarchy in which each command was represented by its first letter instead of its entire name.
Borland CEO Philippe Kahn took the case to the software development community arguing that Lotus's position would stifle innovation and damage the future of software development. The vast majority of the software development community supported Borland's position.
District Court case
Lotus filed suit in the United States District Court for the District of Massachusetts on July 2, 1990, claiming that the structure of the menus was copyrighted by Lotus. The district court ruled that Borland had infringed Lotus's copyright. The ruling was based in part on the fact that an alternative satisfactory menu structure could be designed. For example, the "Quit" command could be changed to "Exit".
Borland immediately removed the Lotus-based menu system from Quattro Pro, but retained support for its "Key Reader" feature, and Lotus filed a supplemental claim against this feature. A district court held that this also constituted copyright infringement.
Circuit Court case
Borland appealed the decision of the district court arguing that the menu hierarchy is a "method of operation", which is not copyrightable according to 17 U.S.C. § 102(b).
The United States Court of Appeals for the First Circuit reversed the district court's decision, agreeing with Borland's legal theory that considered the menu hierarchy a "method of operation". The court agreed with the district court that an alternative menu hierarchy could be devised, but argued that despite this, the menu hierarchy is an uncopyrightable "method of operation":

"We hold that the Lotus menu command hierarchy is an uncopyrightable 'method of operation.' The Lotus menu command hierarchy provides the means by which users control and operate Lotus 1–2–3. If users wish to copy material, for example, they use the 'Copy' command. If users wish to print material, they use the 'Print' command. Users must use the command terms to tell the computer what to do. Without the menu command hierarchy, users would not be able to access and control, or indeed make use of, Lotus 1–2–3's functional capabilities."

The court made an analogy between the menu hierarchy and the arrangement of buttons on a VCR. The buttons are used to control the playback of a video tape, just as the menu commands are used to control the operations of Lotus 1-2-3. Since the buttons are essential to operating the VCR, their layout cannot be copyrighted. Likewise, the menu commands, including the textual labels and the hierarchical layout, are essential to operating Lotus 1-2-3.
The court also considered the impact of their decision on users of software. If menu hierarchies were copyrightable, users would be required to learn how to perform the same operation in a different way for every program, which the court finds "absurd". Additionally, all macros would have to be re-written for each different program, which places an undue burden on users.
Concurring opinion
Judge Michael Boudin wrote a concurring opinion for this case. In this opinion, he discusses the costs and benefits of copyright protection, as well as the potential similarity of software copyright protection to patent protection. He argues that software is different from creative works, which makes it difficult to apply copyright law to software.
His opinion also considers the theory that Borland's use of the Lotus menu is "privileged". That is, because Borland copied the menu for a legitimate purpose of compatibility, its use should be allowed. This decision, if issued by the majority of the court, would have been narrower in scope than the "method of operations" decision. Copying a menu hierarchy would be allowed in some circumstances, and disallowed in others.
Supreme Court case
Lotus petitioned the United States Supreme Court for a writ of certiorari. In a per curiam opinion, the Supreme Court affirmed the circuit court's judgment due to an evenly divided court, with Justice Stevens recusing. Because the Court split evenly, it affirmed the First Circuit's decision without discussion and did not establish any national precedent on the copyright issue. Lotus's petition for a rehearing by the full court was denied. By the time the lawsuit ended, Borland had sold Quattro Pro to Novell, and Microsoft's Excel spreadsheet had emerged as the main challenger to Lotus 1-2-3.
Impact
The Lotus decision establishes a distinction in copyright law between the interface of a software product and its implementation. The implementation is subject to copyright. The public interface may also be subject to copyright to the extent that it contains expression (for example, the appearance of an icon). However, the set of available operations and the mechanics of how they are activated are not copyrightable. This standard allows software developers to create competing versions of copyrighted software products without infringing the copyright. See software clone for infringement and compliance cases.
Lotus v. Borland has been used as a lens through which to view the controversial case Oracle America, Inc. v. Google, Inc., which deals with the copyrightability of software application programming interfaces (APIs) and the interoperability of software. Software APIs are designed to allow developers to ensure compatibility. Should APIs be found to be copyrightable, software development could be drastically affected: the threat of litigation for building interoperability (a core feature of computing as it has developed over decades of worldwide use) would have a chilling effect and could coerce the establishment of walled gardens around islands of mutually incompatible software ecosystems. Millions of man-hours would be lost to re-implementation and quality-assurance testing of the same software across multiple concurrent systems, leading to divergent software development paths and a drastically increased attack surface for potential illicit exploitation.
See also
List of United States Supreme Court cases, volume 516
List of United States Supreme Court cases
Lists of United States Supreme Court cases by volume
List of United States Supreme Court cases by the Rehnquist Court
References
External links
17 U.S.C. § 102(b)
Perspective: Lotus Development Corp. v. Borland International, Massachusetts Lawyers Weekly, April 1995
United States Supreme Court cases
United States copyright case law
1996 in United States case law
United States computer case law
Borland
IBM
Spreadsheet software
United States Supreme Court cases of the Rehnquist Court
Tie votes of the United States Supreme Court
Copyrightability case law | Lotus Dev. Corp. v. Borland Int'l, Inc. | [
"Mathematics"
] | 1,550 | [
"Spreadsheet software",
"Mathematical software"
] |
5,677,430 | https://en.wikipedia.org/wiki/First%20Step%20to%20Nobel%20Prize%20in%20Physics | The First Step to Nobel Prize in Physics is an annual international competition in research projects in physics. It originated and is based in Poland.
Participants
All secondary school students, regardless of country, type of school, sex, nationality, etc., are eligible for the competition. The only conditions are that the school cannot be considered a university college and that the age of the participants should not exceed 20 years on March 31 (every year, March 31 is the deadline for submitting the competition papers). There are no restrictions concerning the subject matter of the papers, their level, methods applied, etc. All these are left to the participants' choice. The papers, however, have to have a research character and deal with physics topics or topics directly related to physics. The papers are evaluated by the Evaluating Committee, which is nominated by the Organizing Committee. The competition was recently won by David Rosengarten.
History
In the first two competitions, only Polish physicists served on the Evaluating Committee. In the third competition, one non-Polish judge took part in evaluating the papers. In the fourth competition, the number of physicists from other countries was 10, with 14 present in the fifth competition. Plans are in place to increase the number of physicists involved from other countries in future competitions. An International Advisory Committee (IAC) was also established. At present, it consists of 25 physicists from different countries.
Competition and evaluation
The materials on the competition are disseminated to all the countries via diplomatic channels. The competition is also advertised in different physics magazines for pupils and teachers (every year about 30 articles on the First Step are published in different countries). Different private channels are used as well. Pupils from 67 countries participated in the first eight competitions.
The criteria used when evaluating the papers submitted are geared towards an adult standard; no special consideration is given for the younger age of the participants. There are no prizes such as would be seen in other school-based competitions (cameras, electronics, financial rewards, etc.). Instead, the winners are invited to the Institute for one month for research stays (usually in November). During the stays, they are involved into real research works going on in the Institute. Each year the proceedings with all of the awarded papers are published.
Goals
The competition has several aims and interests, which include:
Promotion of scientific interests among young pupils.
Selection of outstanding pupils (this point is especially important in the case of pupils from countries or regions in which access to science is difficult) and their promotion (very often winners are sent to better universities and receive appropriate financial help from the local authorities).
Stimulation of schools, parents, local educational centers, etc. toward greater activity in work with pupils interested in research (in some countries, some regions, and even some schools, a preliminary local selection is organized; sometimes such selections involve great numbers of participants).
Establishing friendly relations between young physicists (in recent competitions, all winners were invited to the Institute at the same time, were accommodated in the same place, and cooperated with each other).
Winners
In 2007, the winner was an American student. In 2009, the prize went to Mor Tzaban, a high school student from Netivot, Israel. In 2012, the first prize winner was another Israeli teenager, Yuval Katzenelson of Kiryat Gat, who presented a paper entitled "Kinetic energy of inert gas in a regenerative system of activated carbon." The Israeli delegation won 14 more prizes in the competition: 9 Israeli students won second prize, one won third prize, and one won fourth prize.
See also
List of physics awards
References
External links
Official Website of the First Step to Nobel Prize in Physics
Physics awards
Polish awards
Science and technology in Poland | First Step to Nobel Prize in Physics | [
"Technology"
] | 758 | [
"Science and technology awards",
"Physics awards"
] |
5,677,733 | https://en.wikipedia.org/wiki/Total%20variation%20diminishing | In numerical methods, total variation diminishing (TVD) is a property of certain discretization schemes used to solve hyperbolic partial differential equations. The most notable application of this method is in computational fluid dynamics. The concept of TVD was introduced by Ami Harten.
Model equation
In systems described by partial differential equations, such as the following hyperbolic advection equation,

$$\frac{\partial u}{\partial t} + a \frac{\partial u}{\partial x} = 0,$$

the total variation (TV) is given by

$$TV(u(\cdot, t)) = \int \left| \frac{\partial u}{\partial x} \right| \mathrm{d}x,$$

and the total variation for the discrete case is,

$$TV(u^n) = \sum_j \left| u_{j+1}^n - u_j^n \right|,$$

where $u_j^n = u(x_j, t^n)$.

A numerical method is said to be total variation diminishing (TVD) if,

$$TV(u^{n+1}) \le TV(u^n).$$
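As a concrete illustration of these definitions (a sketch added here, not from the original article), the code below computes the discrete total variation on a periodic grid and verifies the TVD property for the first-order upwind scheme applied to the advection equation with $a > 0$.

```python
import numpy as np

def total_variation(u):
    """Discrete total variation on a periodic grid (includes the wrap-around jump)."""
    return np.sum(np.abs(np.diff(u))) + abs(u[0] - u[-1])

def upwind_step(u, cfl):
    """One step of first-order upwind for u_t + a u_x = 0, a > 0; cfl = a*dt/dx."""
    return u - cfl * (u - np.roll(u, 1))  # periodic boundary conditions

u = np.where(np.arange(100) < 50, 1.0, 0.0)  # step profile, TV = 2
cfl = 0.5
for _ in range(200):
    tv_before = total_variation(u)
    u = upwind_step(u, cfl)
    assert total_variation(u) <= tv_before + 1e-12  # TVD property holds
print("TV after 200 steps:", total_variation(u))
```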
Characteristics
A numerical scheme is said to be monotonicity preserving if the following properties are maintained:
If $u^n$ is monotonically increasing (or decreasing) in space, then so is $u^{n+1}$.
Harten (1983) proved the following properties for a numerical scheme:
A monotone scheme is TVD, and
A TVD scheme is monotonicity preserving.
Application in CFD
In computational fluid dynamics, TVD schemes are employed to capture sharper shock predictions without any misleading oscillations when the variation of the field variable $\phi$ is discontinuous.
To capture the variation, fine grids (with $\Delta x$ very small) are needed, and the computation becomes heavy and therefore uneconomic. The use of coarse grids with the central difference scheme, upwind scheme, hybrid difference scheme, or power law scheme gives false shock predictions. A TVD scheme enables sharper shock predictions on coarse grids, saving computation time, and as the scheme preserves monotonicity there are no spurious oscillations in the solution.
Discretisation
Consider the steady-state one-dimensional convection–diffusion equation,

$$\nabla \cdot (\rho \mathbf{u} \phi) = \nabla \cdot (\Gamma \nabla \phi) + S_\phi,$$

where $\rho$ is the density, $\mathbf{u}$ is the velocity vector, $\phi$ is the property being transported, $\Gamma$ is the coefficient of diffusion and $S_\phi$ is the source term responsible for generation of the property $\phi$.

Making the flux balance of this property about a control volume we get,

$$\int_A \mathbf{n} \cdot (\rho \mathbf{u} \phi) \, \mathrm{d}A = \int_A \mathbf{n} \cdot (\Gamma \nabla \phi) \, \mathrm{d}A + \int_{CV} S_\phi \, \mathrm{d}V.$$

Here $\mathbf{n}$ is the normal to the surface of the control volume.

Ignoring the source term, the equation further reduces to:

$$(\rho u \phi A)_e - (\rho u \phi A)_w = \left( \Gamma A \frac{\partial \phi}{\partial x} \right)_e - \left( \Gamma A \frac{\partial \phi}{\partial x} \right)_w,$$

where the subscripts $w$ and $e$ denote the west and east faces of the control volume centred on node $P$, whose neighbouring nodes are $W$ and $E$.

Assuming

$$F = \rho u A$$

and

$$D = \frac{\Gamma A}{\delta x},$$

the equation reduces to

$$F_e \phi_e - F_w \phi_w = D_e (\phi_E - \phi_P) - D_w (\phi_P - \phi_W).$$

The continuity equation also has to be satisfied in one of its equivalent forms for this problem:

$$F_e = F_w = F.$$

Assuming diffusivity is a homogeneous property and the grid spacing is equal, we can say

$$D_w = D_e = D,$$

and we get

$$F (\phi_e - \phi_w) = D (\phi_E - 2 \phi_P + \phi_W).$$

The equation above can be written as

$$P (\phi_e - \phi_w) = \phi_E - 2 \phi_P + \phi_W,$$

where $P = F/D$ is the Péclet number.
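The false predictions of central differencing mentioned earlier can be reproduced directly from this discretized equation. The following sketch (illustrative, not from the original) uses the standard central-differencing face values $\phi_e = (\phi_P + \phi_E)/2$ and $\phi_w = (\phi_W + \phi_P)/2$, assembles the resulting tridiagonal system with Dirichlet boundary values, and shows that oscillations appear once the cell Péclet number exceeds 2.

```python
import numpy as np

def solve_central(P, n=10, phi0=1.0, phiL=0.0):
    """Steady 1-D convection-diffusion, central differencing.
    P = F/D is the cell Peclet number; n interior nodes; Dirichlet BCs."""
    F, D = P, 1.0                      # only the ratio F/D matters
    aW, aE = D + F / 2.0, D - F / 2.0  # neighbour coefficients
    aP = aW + aE
    A = (np.diag(np.full(n, aP))
         - np.diag(np.full(n - 1, aW), -1)
         - np.diag(np.full(n - 1, aE), 1))
    b = np.zeros(n)
    b[0], b[-1] = aW * phi0, aE * phiL
    return np.linalg.solve(A, b)

for P in (1.0, 4.0):  # below and above the critical cell Peclet number of 2
    phi = solve_central(P)
    oscillatory = bool(np.any(np.diff(phi) > 0))  # exact profile decreases monotonically
    print(f"P = {P}: oscillatory = {oscillatory}")
```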
TVD scheme
The total variation diminishing scheme makes an assumption for the values of $\phi_e$ and $\phi_w$ to be substituted in the discretized equation, evaluating each face value $\phi_f$ with an upwind-biased formula of the form

$$\phi_f = \phi_U + \frac{1}{2} \psi(r) \, (\phi_D - \phi_U),$$

where the upstream and downstream designations follow the sign of the Péclet number $P$, and $\psi$ is the weighing function to be determined from

$$r = \frac{\phi_U - \phi_{UU}}{\phi_D - \phi_U},$$

where $U$ refers to the upstream node, $UU$ refers to the node upstream of $U$, and $D$ refers to the downstream node.

Note that $\psi^{+}$ denotes the weighing function used when the flow is in the positive direction (i.e., from left to right) and $\psi^{-}$ the weighing function used when the flow is in the negative direction, from right to left.

If the flow is in the positive direction, the Péclet number $P$ is positive and the term involving $\psi^{-}$ vanishes, so $\psi^{-}$ plays no role in the assumption of $\phi_e$ and $\phi_w$. Likewise, when the flow is in the negative direction, $P$ is negative and $\psi^{+}$ plays no role.

The scheme therefore takes into account the values of the property $\phi$ according to the direction of flow and, through the weighing functions, tries to achieve monotonicity in the solution, thereby producing results with no spurious shocks.
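The weighing function $\psi$ is what the flux-limiter literature calls a limiter. The sketch below (illustrative; the limiter choices are standard ones rather than anything prescribed by this article) implements two classic limiters and the upwind-biased face value for flow in the positive direction.

```python
import numpy as np

def minmod(r):
    """Minmod limiter: psi(r) = max(0, min(1, r))."""
    return np.maximum(0.0, np.minimum(1.0, r))

def van_leer(r):
    """Van Leer limiter: psi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def face_value(phi_UU, phi_U, phi_D, psi=van_leer, eps=1e-12):
    """TVD face value for flow from node U toward node D:
    phi_f = phi_U + 0.5 * psi(r) * (phi_D - phi_U)."""
    r = (phi_U - phi_UU) / (phi_D - phi_U + eps)  # ratio of consecutive gradients
    return phi_U + 0.5 * psi(r) * (phi_D - phi_U)

print(face_value(0.0, 1.0, 2.0))  # smooth monotone data: 1.5 (central-like weighting)
print(face_value(2.0, 1.0, 2.0))  # local extremum (r < 0): 1.0 (falls back to upwind)
```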
Limitations
Monotone schemes are attractive for solving engineering and scientific problems because they do not produce non-physical solutions. Godunov's theorem proves that linear schemes which preserve monotonicity are, at most, only first order accurate. Higher order linear schemes, although more accurate for smooth solutions, are not TVD and tend to introduce spurious oscillations (wiggles) where discontinuities or shocks arise. To overcome these drawbacks, various high-resolution, non-linear techniques have been developed, often using flux/slope limiters.
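The trade-off described here is easy to demonstrate. The sketch below (an illustration added here, not from the original) advects a step profile with the monotone first-order upwind scheme and with the second-order Lax-Wendroff scheme; the former stays within the initial bounds while the latter overshoots near the discontinuity.

```python
import numpy as np

def advect(u0, cfl, steps, scheme):
    """Advance u_t + a u_x = 0 (a > 0) on a periodic grid; cfl = a*dt/dx."""
    u = u0.copy()
    for _ in range(steps):
        um, up = np.roll(u, 1), np.roll(u, -1)  # periodic neighbours
        if scheme == "upwind":          # first order, monotone
            u = u - cfl * (u - um)
        else:                           # Lax-Wendroff: second order, not monotone
            u = u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2.0 * u + um)
    return u

u0 = np.where(np.arange(200) < 100, 1.0, 0.0)  # step profile in [0, 1]
for scheme in ("upwind", "lax-wendroff"):
    u = advect(u0, cfl=0.5, steps=50, scheme=scheme)
    print(f"{scheme:13s}: min = {u.min():+.3f}, max = {u.max():+.3f}")
# upwind stays within [0, 1]; Lax-Wendroff wiggles (max > 1 and/or min < 0)
```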
See also
Flux limiters
Godunov's theorem
High-resolution scheme
MUSCL scheme
Sergei K. Godunov
Total variation
References
Further reading
Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Vol 2, Wiley.
Laney, C. B. (1998), Computational Gas Dynamics, Cambridge University Press.
Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
Tannehill, J. C., Anderson, D. A., and Pletcher, R. H. (1997), Computational Fluid Mechanics and Heat Transfer, 2nd Ed., Taylor & Francis.
Wesseling, P. (2001), Principles of Computational Fluid Dynamics, Springer-Verlag.
Anil W. Date Introduction to Computational Fluid Dynamics, Cambridge University Press.
Numerical differential equations
Computational fluid dynamics | Total variation diminishing | [
"Physics",
"Chemistry"
] | 967 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
5,678,036 | https://en.wikipedia.org/wiki/Yellow%20goods%20%28construction%20and%20agriculture%29 | Yellow goods are material for construction and earth-moving equipment, quarrying equipment, and fork-lift trucks. The term is also used to encompass agricultural equipment, such as tractors. The term "yellow goods" originated from the distinctive yellow colour commonly used on these types of machinery.
Construction and earth-moving equipment
These yellow goods machines include excavators, bulldozers, backhoes, loaders, and dump trucks. They are used in construction projects and designed to handle heavy loads, operate in rugged terrain, and perform tasks like digging, grading, and hauling. Manufacturers include Case, Caterpillar, Fiat, Komatsu, Liebherr and Shantui.
Quarrying equipment
This equipment is designed for use in the quarrying industry, which involves the extraction of minerals and other materials from the earth. Common types of quarrying yellow goods include rock drills and stone crushers. These machines are used to extract materials like limestone, granite, and sand.
Fork-lift trucks
Fork-lift trucks are also considered yellow goods. They are mostly used in warehousing and manufacturing. They are designed to lift and move heavy loads and are used for material handling and logistics operations.
Agricultural equipment
Agricultural yellow goods such as tractors are used in the cultivation and harvesting of crops in the farming industry. They are designed to handle a variety of tasks, from tilling and planting to harvesting and transporting.
See also
White goods
Brown goods
References
Goods (economics) | Yellow goods (construction and agriculture) | [
"Physics"
] | 296 | [
"Materials",
"Goods (economics)",
"Matter"
] |
5,678,057 | https://en.wikipedia.org/wiki/Godunov%27s%20theorem | In numerical analysis and computational fluid dynamics, Godunov's theorem — also known as Godunov's order barrier theorem — is a mathematical theorem important in the development of the theory of high-resolution schemes for the numerical solution of partial differential equations.
The theorem states that linear numerical schemes for solving partial differential equations (PDEs), having the property of not generating new extrema (monotone schemes), can be at most first-order accurate.
Professor Sergei Godunov originally proved the theorem as a Ph.D. student at Moscow State University. It is his most influential work in the area of applied and numerical mathematics and has had a major impact on science and engineering, particularly in the development of methods used in computational fluid dynamics (CFD) and other computational fields. One of his major contributions was to prove the theorem (Godunov, 1954; Godunov, 1959) that bears his name.
The theorem
We generally follow Wesseling (2001).
Aside
Assume a continuum problem described by a PDE is to be computed using a numerical scheme based upon a uniform computational grid and a one-step, constant step-size, M grid point, integration algorithm, either implicit or explicit. Then if $x_j = j\,\Delta x$ and $t^n = n\,\Delta t$, such a scheme can be described by

$$\sum_{m=1}^{M} \beta_m \varphi_{j+m}^{n+1} = \sum_{m=1}^{M} \alpha_m \varphi_{j+m}^{n}. \qquad (1)$$

In other words, the solution $\varphi_j^{n+1}$ at time $n+1$ and location $j$ is a linear function of the solution at the previous time step $n$. We assume that equation (1) determines $\varphi_j^{n+1}$ uniquely. Now, since the above equation represents a linear relationship between $\varphi^{n+1}$ and $\varphi^{n}$, we can perform a linear transformation to obtain the following equivalent form,

$$\varphi_j^{n+1} = \sum_{m} \gamma_m \varphi_{j+m}^{n}. \qquad (2)$$
Theorem 1: Monotonicity preserving
The above scheme of equation (2) is monotonicity preserving if and only if

$$\gamma_m \ge 0, \quad \forall m. \qquad (3)$$
Proof - Godunov (1959)
Case 1: (sufficient condition)
Assume (3) applies and that $\varphi_j^n$ is monotonically increasing with $j$.
Then, because $\varphi_j^n \le \varphi_{j+1}^n \le \cdots$, it therefore follows that $\varphi_j^{n+1} \le \varphi_{j+1}^{n+1} \le \cdots$, because

$$\varphi_{j+1}^{n+1} - \varphi_j^{n+1} = \sum_{m} \gamma_m \left( \varphi_{j+m+1}^{n} - \varphi_{j+m}^{n} \right) \ge 0.$$
This means that monotonicity is preserved for this case.
Case 2: (necessary condition)
We prove the necessary condition by contradiction. Assume that $\gamma_p < 0$ for some $p$, and choose the following monotonically increasing $\varphi_j^n$,

$$\varphi_i^n = 0, \quad i < k; \qquad \varphi_i^n = 1, \quad i \ge k.$$

Then from equation (2) we get

$$\varphi_j^{n+1} - \varphi_{j-1}^{n+1} = \sum_{m} \gamma_m \left( \varphi_{j+m}^{n} - \varphi_{j+m-1}^{n} \right) = \gamma_{k-j},$$

since the only non-zero difference in the chosen data is the jump at $i = k$. Now choose $j = k - p$, to give

$$\varphi_{k-p}^{n+1} - \varphi_{k-p-1}^{n+1} = \gamma_p < 0,$$

which implies that $\varphi_j^{n+1}$ is NOT increasing, and we have a contradiction. Thus, monotonicity is NOT preserved for $\gamma_p < 0$, which completes the proof.
Theorem 2: Godunov’s Order Barrier Theorem
Linear one-step second-order accurate numerical schemes for the convection equation

$$\frac{\partial \varphi}{\partial t} + c \frac{\partial \varphi}{\partial x} = 0, \qquad c > 0,$$

cannot be monotonicity preserving unless

$$\sigma \in \mathbb{N},$$

where $\sigma = c\,\Delta t / \Delta x$ is the signed Courant–Friedrichs–Lewy (CFL) number.
Proof - Godunov (1959)
Assume a numerical scheme of the form described by equation (2) and choose initial data which at the grid points coincide with a quadratic,

$$\varphi_j^0 = \left( j + \tfrac{1}{2} \right)^2 - \tfrac{1}{4} = j \left( j + 1 \right) \ge 0, \quad \forall j \in \mathbb{Z}.$$

The exact solution is

$$\varphi \left( t, x \right) = \varphi \left( 0, x - ct \right).$$

If we assume the scheme to be at least second-order accurate, it should produce the following solution exactly, since a second-order scheme reproduces quadratic data without error,

$$\varphi_j^1 = \left( j - \sigma \right) \left( j - \sigma + 1 \right).$$

Substituting into equation (2) gives:

$$\left( j - \sigma \right) \left( j - \sigma + 1 \right) = \sum_{m} \gamma_m \left( j + m \right) \left( j + m + 1 \right). \qquad (15)$$
Suppose that the scheme IS monotonicity preserving, then according to the theorem 1 above, $\gamma_m \ge 0$.
Now, it is clear from equation (15) that, since each $\gamma_m \ge 0$ and each product $\left( j + m \right)\left( j + m + 1 \right)$ of consecutive integers is non-negative,

$$\left( j - \sigma \right) \left( j - \sigma + 1 \right) \ge 0, \quad \forall j. \qquad (16)$$
Assume $\sigma > 0$, $\sigma \notin \mathbb{N}$, and choose $j$ such that $j < \sigma < j + 1$. This implies that $j - \sigma \in \left( -1, 0 \right)$ and $j - \sigma + 1 \in \left( 0, 1 \right)$.
It therefore follows that,

$$\left( j - \sigma \right) \left( j - \sigma + 1 \right) < 0,$$

which contradicts equation (16) and completes the proof.
The exceptional situation whereby $\sigma = c\,\Delta t / \Delta x \in \mathbb{N}$ is only of theoretical interest, since this cannot be realised with variable coefficients. Also, integer CFL numbers greater than unity would not be feasible for practical problems.
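The practical force of the theorem is easy to demonstrate numerically. The sketch below is an illustration, not part of the original references: it advects a step with the first-order upwind scheme (all coefficients $\gamma_m \ge 0$) and with the linear second-order Lax–Wendroff scheme (negative coefficient on $\varphi_{j+1}^n$ for $0 < \sigma < 1$); the grid size, CFL number and step count are arbitrary choices.

```python
# Illustrative sketch (not from Godunov's papers): the monotone
# first-order upwind scheme versus the linear second-order
# Lax-Wendroff scheme on step initial data, periodic grid.
import numpy as np

N, sigma, steps = 200, 0.5, 100          # cells, CFL number, time steps
u0 = np.where(np.arange(N) < N // 2, 1.0, 0.0)

def upwind(u):
    # coefficients (1 - sigma) and sigma are both >= 0 for 0 <= sigma <= 1,
    # so theorem 1 guarantees monotonicity preservation
    return u - sigma * (u - np.roll(u, 1))

def lax_wendroff(u):
    # second-order linear scheme; the coefficient of u_{j+1} is
    # sigma*(sigma - 1)/2 < 0 for 0 < sigma < 1, violating condition (3)
    return (u - 0.5 * sigma * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * sigma**2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

u_up, u_lw = u0.copy(), u0.copy()
for _ in range(steps):
    u_up, u_lw = upwind(u_up), lax_wendroff(u_lw)

print("upwind       min/max:", u_up.min(), u_up.max())  # stays within [0, 1]
print("Lax-Wendroff min/max:", u_lw.min(), u_lw.max())  # over/undershoots
```

The upwind result remains bounded by the initial extrema but is heavily smeared; the Lax–Wendroff result is sharper yet oscillates near the discontinuities, which is precisely the trade-off the order barrier formalises.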
See also
Finite volume method
Flux limiter
Total variation diminishing
References
Godunov, Sergei K. (1954), Ph.D. Dissertation: Different Methods for Shock Waves, Moscow State University.
Godunov, Sergei K. (1959), A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations, Mat. Sbornik, 47, 271-306, translated US Joint Publ. Res. Service, JPRS 7226, 1969.
Further reading
Numerical differential equations
Theorems in analysis
Computational fluid dynamics | Godunov's theorem | [
"Physics",
"Chemistry",
"Mathematics"
] | 734 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Computational fluid dynamics",
"Computational physics",
"Mathematical problems",
"Fluid dynamics"
] |
5,678,101 | https://en.wikipedia.org/wiki/Local%20time%20%28mathematics%29 | In the mathematical theory of stochastic processes, local time is a stochastic process associated with semimartingale processes such as Brownian motion, that characterizes the amount of time a particle has spent at a given level. Local time appears in various stochastic integration formulas, such as Tanaka's formula, if the integrand is not sufficiently smooth. It is also studied in statistical mechanics in the context of random fields.
Formal definition
For a continuous real-valued semimartingale $(B_s)_{s \ge 0}$, the local time of $B$ at the point $x$ is the stochastic process $L^x = (L^x(t))_{t \ge 0}$, which is informally defined by

$$L^x(t) = \int_0^t \delta_x(B_s) \, d[B]_s,$$

where $\delta_x$ is the Dirac delta function and $[B]$ is the quadratic variation. It is a notion invented by Paul Lévy. The basic idea is that $L^x(t)$ is an (appropriately rescaled and time-parametrized) measure of how much time $B$ has spent at $x$ up to time $t$. More rigorously, it may be written as the almost sure limit

$$L^x(t) = \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^t \mathbf{1}\left\{ \left| B_s - x \right| < \varepsilon \right\} \, d[B]_s,$$

which may be shown to always exist. Note that in the special case of Brownian motion (or more generally a real-valued diffusion of the form $dB = b(t, B)\,dt + dW$ where $W$ is a Brownian motion), the term $d[B]_s$ simply reduces to $ds$, which explains why it is called the local time of $B$ at $x$. For a discrete state-space process $(X_s)_{s \ge 0}$, the local time can be expressed more simply as

$$L^x(t) = \int_0^t \mathbf{1}\left\{ X_s = x \right\} \, ds.$$
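The almost-sure limit above suggests a straightforward numerical approximation. The following Python sketch is illustrative only: it fixes a small but finite $\varepsilon$ instead of taking the limit, and the path resolution, level and random seed are arbitrary choices.

```python
# Rough Monte Carlo estimate of Brownian local time at a level x,
# via the epsilon-neighbourhood occupation formula with fixed epsilon.
import numpy as np

rng = np.random.default_rng(0)
n, T, x, eps = 1_000_000, 1.0, 0.0, 1e-3
dt = T / n

# one Brownian path sampled on a fine grid
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

# for Brownian motion d[B]_s = ds, so each grid point in the band
# (x - eps, x + eps) contributes dt of occupation time
occupation = dt * np.count_nonzero(np.abs(B - x) < eps)
print("estimated L^0(1):", occupation / (2 * eps))
```

For Brownian motion started at 0, Tanaka's formula gives $\mathbb{E}\,L^0(1) = \mathbb{E}|B_1| = \sqrt{2/\pi} \approx 0.80$, so averaging the estimate over many paths should concentrate near that value.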
Tanaka's formula
Tanaka's formula also provides a definition of local time for an arbitrary continuous semimartingale $X$ on $\mathbb{R}$:

$$L^x(t) = \left| X_t - x \right| - \left| X_0 - x \right| - \int_0^t \operatorname{sgn}\left( X_s - x \right) \, dX_s, \qquad t \ge 0.$$

A more general form was proven independently by Meyer and Wang; the formula extends Itô's lemma for twice differentiable functions to a more general class of functions. If $F$ is absolutely continuous with derivative $F'$ which is of bounded variation, then

$$F(X_t) = F(X_0) + \int_0^t F'_{-}(X_s) \, dX_s + \frac{1}{2} \int_{-\infty}^{\infty} L^x(t) \, dF'(x),$$

where $F'_{-}$ is the left derivative.
If $B$ is a Brownian motion, then for any $\alpha \in (0, 1/2)$ the field of local times $L = (L^x(t))_{x \in \mathbb{R},\, t \ge 0}$ has a modification which is a.s. Hölder continuous in $x$ with exponent $\alpha$, uniformly for bounded $x$ and $t$. In general, $L$ has a modification that is a.s. continuous in $t$ and càdlàg in $x$.
Tanaka's formula provides the explicit Doob–Meyer decomposition for the one-dimensional reflecting Brownian motion, $|B_t| = W_t + L^0(t)$, where $W_t = \int_0^t \operatorname{sgn}(B_s)\,dB_s$ is another standard Brownian motion.
Ray–Knight theorems
The field of local times associated to a stochastic process on a space is a well studied topic in the area of random fields. Ray–Knight type theorems relate the field Lt to an associated Gaussian process.
In general Ray–Knight type theorems of the first kind consider the field Lt at a hitting time of the underlying process, whilst theorems of the second kind are in terms of a stopping time at which the field of local times first exceeds a given value.
First Ray–Knight theorem
Let (Bt)t ≥ 0 be a one-dimensional Brownian motion started from B0 = a > 0, and (Wt)t≥0 be a standard two-dimensional Brownian motion started from W0 = 0 ∈ R2. Define the stopping time at which B first hits the origin,

$$T = \inf \left\{ t \ge 0 : B_t = 0 \right\}.$$

Ray and Knight (independently) showed that

$$\left( L^x(T) \right)_{x \in [0, a]} \stackrel{d}{=} \left( \left| W_x \right|^2 \right)_{x \in [0, a]},$$

where (Lt)t ≥ 0 is the field of local times of (Bt)t ≥ 0, and equality is in distribution on C[0, a]. The process |Wx|2 is known as the squared Bessel process.
Second Ray–Knight theorem
Let (Bt)t ≥ 0 be a standard one-dimensional Brownian motion B0 = 0 ∈ R, and let (Lt)t ≥ 0 be the associated field of local times. Let Ta be the first time at which the local time at zero exceeds a > 0,

$$T_a = \inf \left\{ t \ge 0 : L^0(t) > a \right\}.$$

Let (Wt)t ≥ 0 be an independent one-dimensional Brownian motion started from W0 = 0; then

$$\left( L^x(T_a) + W_x^2 \right)_{x \ge 0} \stackrel{d}{=} \left( \left( W_x + \sqrt{a} \right)^2 \right)_{x \ge 0}.$$

Equivalently, the process $\left( L^x(T_a) \right)_{x \ge 0}$ (which is a process in the spatial variable $x$) is equal in distribution to the square of a 0-dimensional Bessel process started at $\sqrt{a}$, and as such is Markovian.
Generalized Ray–Knight theorems
Results of Ray–Knight type for more general stochastic processes have been intensively studied, and analogue statements of both of the above Ray–Knight theorems are known for strongly symmetric Markov processes.
See also
Tanaka's formula
Brownian motion
Random field
Notes
References
K. L. Chung and R. J. Williams, Introduction to Stochastic Integration, 2nd edition, 1990, Birkhäuser, .
M. Marcus and J. Rosen, Markov Processes, Gaussian Processes, and Local Times, 1st edition, 2006, Cambridge University Press
P. Mörters and Y. Peres, Brownian Motion, 1st edition, 2010, Cambridge University Press, .
Stochastic processes
Statistical mechanics | Local time (mathematics) | [
"Physics"
] | 917 | [
"Statistical mechanics"
] |
5,678,338 | https://en.wikipedia.org/wiki/Flux%20limiter | Flux limiters are used in high resolution schemes – numerical schemes used to solve problems in science and engineering, particularly fluid dynamics, described by partial differential equations (PDEs). They are used in high resolution schemes, such as the MUSCL scheme, to avoid the spurious oscillations (wiggles) that would otherwise occur with high order spatial discretization schemes due to shocks, discontinuities or sharp changes in the solution domain. Use of flux limiters, together with an appropriate high resolution scheme, makes the solutions total variation diminishing (TVD).
Note that flux limiters are also referred to as slope limiters because they both have the same mathematical form, and both have the effect of limiting the solution gradient near shocks or discontinuities. In general, the term flux limiter is used when the limiter acts on system fluxes, and slope limiter is used when the limiter acts on system states (like pressure, velocity etc.).
How they work
The main idea behind the construction of flux limiter schemes is to limit the spatial derivatives to realistic values – for scientific and engineering problems this usually means physically realisable and meaningful values. They are used in high resolution schemes for solving problems described by PDEs and only come into operation when sharp wave fronts are present. For smoothly changing waves, the flux limiters do not operate and the spatial derivatives can be represented by higher order approximations without introducing spurious oscillations. Consider the 1D semi-discrete scheme below,

$$\frac{d u_i}{d t} + \frac{1}{\Delta x_i} \left[ F \left( u_{i + 1/2} \right) - F \left( u_{i - 1/2} \right) \right] = 0,$$

where $F \left( u_{i + 1/2} \right)$ and $F \left( u_{i - 1/2} \right)$ represent edge fluxes for the i-th cell. If these edge fluxes can be represented by low and high resolution schemes, then a flux limiter can switch between these schemes depending upon the gradients close to the particular cell, as follows,

$$F \left( u_{i + 1/2} \right) = f^{low}_{i + 1/2} - \phi\left( r_i \right) \left( f^{low}_{i + 1/2} - f^{high}_{i + 1/2} \right),$$

where
$f^{low}$ is the low resolution flux,
$f^{high}$ is the high resolution flux,
$\phi(r)$ is the flux limiter function, and
$r$ represents the ratio of successive gradients on the solution mesh, i.e.,

$$r_i = \frac{u_i - u_{i-1}}{u_{i+1} - u_i}.$$

The limiter function is constrained to be greater than or equal to zero, i.e., $\phi(r) \ge 0$. Therefore, when the limiter is equal to zero (sharp gradient, opposite slopes or zero gradient), the flux is represented by a low resolution scheme. Similarly, when the limiter is equal to 1 (smooth solution), it is represented by a high resolution scheme. The various limiters have differing switching characteristics and are selected according to the particular problem and solution scheme. No particular limiter has been found to work well for all problems, and a particular choice is usually made on a trial and error basis.
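To make the switching concrete, here is an illustrative Python sketch for linear advection with $F(u) = a\,u$, $a > 0$; the van Leer limiter, the Lax–Wendroff-type high-resolution flux and the small divisor guard are choices made for this example, not prescriptions from any particular reference.

```python
# Sketch: flux-limited edge flux F(u_{i+1/2}) on a periodic grid for
# linear advection F(u) = a*u, following
#   F = f_low - phi(r) * (f_low - f_high).
import numpy as np

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def limited_edge_flux(u, a, sigma):
    """Return F(u_{i+1/2}) for every cell i; sigma = a*dt/dx."""
    du_up = u - np.roll(u, 1)                 # u_i - u_{i-1}
    du_dn = np.roll(u, -1) - u                # u_{i+1} - u_i
    r = du_up / (du_dn + 1e-30)               # ratio of successive gradients
    f_low = a * u                             # first-order upwind flux
    f_high = a * (u + 0.5 * (1.0 - sigma) * du_dn)  # second-order flux
    return f_low - van_leer(r) * (f_low - f_high)
```

Near a shock the gradient ratio r is negative or extreme, the limiter returns 0, and the scheme falls back to the dissipative upwind flux; in smooth regions r is near 1, the limiter is near 1, and the second-order flux dominates.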
Limiter functions
The following are common forms of flux/slope limiter function, $\phi(r)$:
CHARM [not 2nd order TVD]: $\phi_{cm}(r) = \begin{cases} \dfrac{r \left( 3r + 1 \right)}{\left( r + 1 \right)^2}, & r > 0 \\ 0, & r \le 0 \end{cases}$
HCUS [not 2nd order TVD]: $\phi_{hc}(r) = \dfrac{1.5 \left( r + \left| r \right| \right)}{r + 2}$
HQUICK [not 2nd order TVD]: $\phi_{hq}(r) = \dfrac{2 \left( r + \left| r \right| \right)}{r + 3}$
Koren – third-order accurate for sufficiently smooth data: $\phi_{kn}(r) = \max \left[ 0, \min \left( 2r, \dfrac{1 + 2r}{3}, 2 \right) \right]$
minmod – symmetric: $\phi_{mm}(r) = \max \left[ 0, \min \left( 1, r \right) \right]$
monotonized central (MC) – symmetric: $\phi_{mc}(r) = \max \left[ 0, \min \left( 2r, \dfrac{1 + r}{2}, 2 \right) \right]$
Osher: $\phi_{os}(r) = \max \left[ 0, \min \left( r, \beta \right) \right], \quad 1 \le \beta \le 2$
ospre – symmetric: $\phi_{op}(r) = \dfrac{1.5 \left( r^2 + r \right)}{r^2 + r + 1}$
smart [not 2nd order TVD]: $\phi_{sm}(r) = \max \left[ 0, \min \left( 2r, 0.25 + 0.75r, 4 \right) \right]$
superbee – symmetric: $\phi_{sb}(r) = \max \left[ 0, \min \left( 2r, 1 \right), \min \left( r, 2 \right) \right]$
Sweby – symmetric: $\phi_{sw}(r) = \max \left[ 0, \min \left( \beta r, 1 \right), \min \left( r, \beta \right) \right], \quad 1 \le \beta \le 2$
UMIST – symmetric: $\phi_{um}(r) = \max \left[ 0, \min \left( 2r, 0.25 + 0.75r, 0.75 + 0.25r, 2 \right) \right]$
van Albada 1 – symmetric: $\phi_{va1}(r) = \dfrac{r^2 + r}{r^2 + 1}$
van Albada 2 – alternative form [not 2nd order TVD] used on high spatial order schemes: $\phi_{va2}(r) = \dfrac{2r}{r^2 + 1}$
van Leer – symmetric: $\phi_{vl}(r) = \dfrac{r + \left| r \right|}{1 + \left| r \right|}$
All the above limiters indicated as being symmetric exhibit the following symmetry property,

$$\frac{\phi(r)}{r} = \phi\left( \frac{1}{r} \right).$$
This is a desirable property as it ensures that the limiting actions for forward and backward gradients operate in the same way.
Unless indicated to the contrary, the above limiter functions are second order TVD. This means that they are designed such that they pass through a certain region of the solution, known as the TVD region, in order to guarantee stability of the scheme. Second-order, TVD limiters satisfy at least the following criteria:
$r \le \phi(r) \le 2r$ when $0 \le r \le 1$,
$1 \le \phi(r) \le r$ when $1 \le r \le 2$,
$1 \le \phi(r) \le 2$ when $r > 2$,
$\phi(1) = 1$.
The admissible limiter region for second-order TVD schemes is shown in the Sweby Diagram opposite, and plots showing limiter functions overlaid onto the TVD region are shown below. In this image, plots for the Osher and Sweby limiters have been generated using $\beta = 1.5$.
Generalised minmod limiter
An additional limiter that has an interesting form is van Leer's one-parameter family of minmod limiters. It is defined as follows

$$\phi_{mg}(r, \theta) = \max \left[ 0, \min \left( \theta r, \frac{1 + r}{2}, \theta \right) \right], \quad \theta \in \left[ 1, 2 \right].$$

Note: $\phi_{mg}$ is most dissipative for $\theta = 1$, when it reduces to $\phi_{mm}(r)$, and is least dissipative for $\theta = 2$.
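A minimal NumPy rendering of this family, assuming the $\max\left[0, \min\left(\theta r, \tfrac{1+r}{2}, \theta\right)\right]$ form given above (the vectorised style is an illustrative choice):

```python
# van Leer's one-parameter minmod family: theta = 1 recovers the
# classic minmod limiter (most dissipative); theta = 2 is the least
# dissipative member of the family.
import numpy as np

def minmod_theta(r, theta=1.0):
    assert 1.0 <= theta <= 2.0
    r = np.asarray(r, dtype=float)
    return np.maximum(0.0, np.minimum.reduce(
        [theta * r, 0.5 * (1.0 + r), np.full_like(r, theta)]))
```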
See also
Godunov's theorem
High resolution scheme
MUSCL scheme
Sergei K. Godunov
Total variation diminishing
Notes
References
Further reading
Computational fluid dynamics
Numerical differential equations | Flux limiter | [
"Physics",
"Chemistry"
] | 923 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
5,679,051 | https://en.wikipedia.org/wiki/List%20of%20aging%20processes | Accumulation of lipofuscin
Aging brain
Calorie restriction
Cross-link
Crosslinking of DNA
Degenerative disease
DNA damage theory of aging
Exposure to ultraviolet light
Free-radical damage
Glycation
Life expectancy
Longevity
Maximum life span
Senescence
Stem cell theory of aging
See also
Index of topics related to life extension
References
Aging processes | List of aging processes | [
"Chemistry",
"Biology"
] | 68 | [
"Senescence",
"Ageing processes",
"Metabolism",
"Cellular processes"
] |
5,679,227 | https://en.wikipedia.org/wiki/EPAS | EPAS (Electronic Protocols Application Software) was a European non-commercial cooperation initiative which developed a series of data protocols to be applied in a point of sale (POS) environment. This included the protocols used by payment terminals and card payment systems. The project focused on three protocols: a terminal management protocol, a retailer application protocol and an acquirer protocol.
EPAS was later merged with two other organisations to create nexo standards.
History
The organization was launched in 2005, and took over some of the work done on payment terminal standards in Germany by the Open Payment Initiative.
The initiative was structured along three following main phases:
Phase I : development of technical specifications and issuance of standards (2006 - mid-2007)
Phase II : development of software and provision of test tools (2007 – 2008)
Phase III : construction of demonstrators (2008)
In 2014, it was merged with the OSCar consortium and the CIR SEPA-Fast technical working group to create a global standards organisation called nexo standards.
Participants
The EPAS Consortium was composed of the following organisations:
Ingenico (FR)
VeriFone (US)
The Logic Group (UK)
Amadis (CA)
ELITT (FR)
MoneyLine (FR)
Lyra Network (FR)
Atos Worldline (DE)
Wincor Nixdorf (ES)
GIE – Groupement des Cartes Bancaires "CB" (FR) (Co-ordinator)
Desjardins (CA)
Atos Worldline (BE)
Security Research and Consulting (SRC) GmbH (DE)
Equens SE (NL)
Sermepa (ES)
Cetrel (LU)
Total (FR)
Quercia (IT)
University of Applied Sciences, Cologne (DE)
Integri (BE)
PAN Nordic Card Association (PNC) (SE)
GALITT (FR)
BP (GB)
RSC Commercial Services (DE)
Europay Austria Zahlungsverkehrssysteme GmbH (AT)
SIBS (PT)
Thales e-Transactions España (ES)
See also
EFTPOS
Open Payment Initiative
Wire transfer
Electronic funds transfer
ERIDANE
References
Sources
“Standardisierungsarbeiten im europäischen Zahlungsverkehr - Chancen für SEPA” SRC - Security Research & Consulting GmbH, Bonn - Wiesbaden, Germany, 2006, p. 5, 11 (PDF-transparencies)
William Vanobberghen, „Le Projet EPAS – Sécurité, protection des personnes et des données: de nouvelles technologies et des standards pour fiabiliser le contrôle et l'identification", Groupement des Cartes Bancaires, 27 June 2006 (PPT transparencies)
Hans-Rainer Frank, „SEPA aus Sicht eines europäischen Tankstellenbetreibers", Arbeitskreis ePayment, Brussels, 11 May 2006, p. 11 (PDF transparencies)
GROUPEMENT DES CARTES BANCAIRES, „EUROPEAN STANDARDISATION FOR ELECTRONIC PAYMENTS“,Used to be at: https://web.archive.org/web/20070927174537/http://www.cartes-bancaires.com/en/dossiers/standard.html (dead link as of Okt 2011)
"EPC Card Fraud Prevention & Security Activities", Cédric Sarazin – Chairman Card Fraud Prevention TF 19. December 2007, FPEG Meeting - Brussels, https://web.archive.org/web/20121024081807/http://ec.europa.eu/internal_market/fpeg/docs/sarazin_en.ppt
"EPAS Members", https://web.archive.org/web/20161220082713/http://nexo-standards.org/members
External links
Official Web site
Banking terms
Consortia in Europe
Payment systems
Retail point of sale systems
Technology consortia | EPAS | [
"Technology"
] | 839 | [
"Retail point of sale systems",
"Information systems"
] |
5,679,286 | https://en.wikipedia.org/wiki/Panaeolus%20tropicalis | Panaeolus tropicalis is a species of psilocybin producing mushroom in the family Bolbitiaceae. It is also known as Copelandia tropicalis.
Description
The cap is 1.5–2 (2.5) cm and hemispheric to convex to campanulate. The margin is incurved when young; the cap is clay-colored, often reddish brown towards the disc, hygrophanous, smooth, and grayish to greenish, translucent-striate at the margin when wet. It becomes blue when bruised.
The gills are adnexed, distinctly mottled, and dull grayish with blackish spots.
The stipe is 5–12 cm long, 2–3 mm thick, hollow, and vertically striate. It is blackish towards the base, greyish towards the apex, and pallid to whitish fibrils run the length of the stipe. The stipe is equal to slightly swollen at the base and lacks a partial veil.
Panaeolus tropicalis spores are dark violet to jet black, ellipsoid, and 10.5–12.0 x 7–9 μm. The basidia each produce two spores.
Like many other hallucinogenic mushrooms, this fungus readily bruises blue where it is handled. It can be differentiated from Panaeolus cyanescens by microscopic characteristics.
Distribution and habitat
Panaeolus tropicalis is a mushroom that grows on dung. It is most often found in Hawaii, Central Africa, and Cambodia; it can also be found in Mexico, Tanzania, the Philippines, Florida, and Japan.
See also
List of Panaeolus species
External links
Photo of Panaeolus tropicalis (on Fondazione Museo Civico di Rovereto)
Entheogens
tropicalis
Psychoactive fungi
Psychedelic tryptamine carriers
Fungi of North America
Fungi of South America
Fungi of Asia
Fungi of Africa
Fungi of Hawaii
Fungi without expected TNC conservation status
Fungus species | Panaeolus tropicalis | [
"Biology"
] | 405 | [
"Fungi",
"Fungus species"
] |
5,679,472 | https://en.wikipedia.org/wiki/Crotonaldehyde | Crotonaldehyde is a chemical compound with the formula CH3CH=CHCHO. The compound is usually sold as a mixture of the E- and Z-isomers, which differ with respect to the relative position of the methyl and formyl groups. The E-isomer is more common. This lachrymatory liquid is moderately soluble in water and miscible in organic solvents. As an unsaturated aldehyde, crotonaldehyde is a versatile intermediate in organic synthesis. It occurs in a variety of foodstuffs, e.g. soybean oils.
Production and reactivity
Crotonaldehyde is produced by the aldol condensation of acetaldehyde:
2 CH3CHO → CH3CH=CHCHO + H2O
Crotonaldehyde is a multifunctional molecule that exhibits diverse reactivity. It is a prochiral dienophile. It is a Michael acceptor. Addition of methylmagnesium chloride produces 3-penten-2-ol.
Uses
It is a precursor to many fine chemicals. A prominent industrial example is the crossed aldol condensation with diethyl ketone to give trimethylcyclohexenone, which can easily be converted to trimethylhydroquinone, a precursor to vitamin E. Other derivatives include crotonic acid, 3-methoxybutanol and the food preservative sorbic acid. Condensation with two equivalents of urea gives a pyrimidine derivative that is employed as a controlled-release fertilizer.
Safety
Crotonaldehyde is a potent irritant even at ppm levels. It is not very toxic, with an LD50 of 174 mg/kg (rats, oral).
See also
Crotyl
Crotonic acid
Crotyl alcohol
Methacrolein
References
External links
Hazardous Substance Fact Sheet
CDC - NIOSH Pocket Guide to Chemical Hazards
Alkenals
Lachrymatory agents
Hazardous materials
IARC Group 2B carcinogens | Crotonaldehyde | [
"Physics",
"Chemistry",
"Technology"
] | 440 | [
"Chemical weapons",
"Materials",
"Lachrymatory agents",
"Hazardous materials",
"Matter"
] |
5,679,539 | https://en.wikipedia.org/wiki/Butenoic%20acid | Butenoic acid is any of three monocarboxylic acids with an unbranched 4-carbon chain containing three single bonds and one double bond; that is, with the structural formula CH3–CH=CH–COOH (2-butenoic) or CH2=CH–CH2–COOH (3-butenoic). All have the chemical formula C4H6O2.
These compounds are technically mono-unsaturated fatty acids, although some authors may exclude them for being too short. The three isomers are:
crotonic acid (trans-2-butenoic or (2E)-but-2-enoic acid)
isocrotonic acid (cis-2-butenoic or (2Z)-but-2-enoic acid)
3-butenoic acid (but-3-enoic acid).
See also
Methacrylic acid, also but branched like isobutene; a.k.a. isobutenoic acid
Butyric acid, ; a.k.a. butanoic acid
References
Carboxylic acids | Butenoic acid | [
"Chemistry"
] | 222 | [
"Carboxylic acids",
"Functional groups"
] |
5,679,554 | https://en.wikipedia.org/wiki/High-resolution%20scheme | High-resolution schemes are used in the numerical solution of partial differential equations where high accuracy is required in the presence of shocks or discontinuities. They have the following properties:
Second- or higher-order spatial accuracy is obtained in smooth parts of the solution.
Solutions are free from spurious oscillations or wiggles.
High accuracy is obtained around shocks and discontinuities.
The number of mesh points containing the wave is small compared with a first-order scheme with similar accuracy.
General methods are often not adequate for accurate resolution of steep gradient phenomena; they usually introduce non-physical effects such as smearing of the solution or spurious oscillations. Since publication of Godunov's order barrier theorem, which proved that linear methods cannot provide non-oscillatory solutions higher than first order, these difficulties have attracted much attention and a number of techniques have been developed that largely overcome these problems. To avoid spurious or non-physical oscillations where shocks are present, schemes that exhibit a Total Variation Diminishing (TVD) characteristic are especially attractive. Two techniques that are proving to be particularly effective are MUSCL (Monotone Upstream-Centered Schemes for Conservation Laws), a flux/slope limiter method, and the WENO (Weighted Essentially Non-Oscillatory) method. Both methods are usually referred to as high resolution schemes.
MUSCL methods are generally second-order accurate in smooth regions (although they can be formulated for higher orders) and provide good resolution, monotonic solutions around discontinuities. They are straightforward to implement and are computationally efficient.
For problems comprising both shocks and complex smooth solution structure, WENO schemes can provide higher accuracy than second-order schemes along with good resolution around discontinuities. Most applications tend to use a fifth order accurate WENO scheme, whilst higher order schemes can be used where the problem demands improved accuracy in smooth regions.
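As an illustration of the flavour of these schemes, the sketch below implements the classic fifth-order WENO reconstruction of Jiang and Shu for the left-biased interface value; the periodic indexing and the value of the regularisation constant are conventional but arbitrary choices here, and this is a sketch rather than a production implementation.

```python
# Sketch: classic fifth-order WENO (Jiang-Shu) reconstruction of the
# left-biased interface value u_{i+1/2} on a periodic grid.
import numpy as np

def weno5(u, eps=1e-6):
    um2, um1, u0 = np.roll(u, 2), np.roll(u, 1), u
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    # three third-order candidate reconstructions
    q0 = (2*um2 - 7*um1 + 11*u0) / 6.0
    q1 = (-um1 + 5*u0 + 2*up1) / 6.0
    q2 = (2*u0 + 5*up1 - up2) / 6.0
    # smoothness indicators for each candidate stencil
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 0.25*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 0.25*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 0.25*(3*u0 - 4*up1 + up2)**2
    # nonlinear weights built from the linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s
```

In smooth regions the nonlinear weights approach the linear values and the reconstruction is fifth-order accurate; near a discontinuity the large smoothness indicator suppresses the stencil that crosses it, which is what keeps the scheme essentially non-oscillatory.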
The method of holistic discretisation systematically analyses subgrid-scale dynamics to algebraically construct closures for numerical discretisations that are both accurate to any specified order of error in smooth regions and automatically adapt to cater for rapid grid variations through the algebraic learning of subgrid structures. A web service is available that analyses any submitted PDE within a supported class.
See also
Godunov's theorem
Sergei K. Godunov
Total variation diminishing
Shock capturing method
References
Godunov, Sergei K. (1959), A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations, Mat. Sbornik, 47, 271-306, translated US Joint Publ. Res. Service, JPRS 7226, 1969.
Numerical differential equations
Computational fluid dynamics | High-resolution scheme | [
"Physics",
"Chemistry"
] | 533 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
5,679,766 | https://en.wikipedia.org/wiki/Stopped-flow | Stopped-flow is an experimental technique for studying chemical reactions with a half time of the order of 1 ms, introduced by Britton Chance and extended by Quentin Gibson. (Other techniques, such as the temperature-jump method, are available for much faster processes.)
Description of the method
Summary
Stopped-flow spectrometry allows chemical kinetics of fast reactions (with half times of the order of milliseconds) to be studied in solution. It was first used primarily to study enzyme-catalyzed reactions. The stopped-flow then rapidly found its place in almost all biochemistry, biophysics, and chemistry laboratories with a need to follow chemical reactions on the millisecond time scale.
In its simplest form, a stopped-flow mixes two solutions. Small volumes of solutions are rapidly and continuously driven into a high-efficiency mixer. This mixing process then initiates an extremely fast reaction. The newly mixed solution travels to the observation cell and pushes out the contents of the cell (the solution remaining from the previous experiment or from necessary washing steps). The time required for this solution to pass from the mixing point to the observation point is known as dead time. The minimum injection volume will depend on the volume of the mixing cell. Once enough solution has been injected to completely remove the previous solution, the instrument reaches a stationary state and the flow can be stopped. Depending on the syringe drive technology, the flow stop is achieved by using a stop valve called the hard-stop or by using a stop syringe. The stopped-flow also sends a ‘start signal’ to the detector called the trigger so the reaction can be observed. The timing of the trigger is usually software controlled so the user can trigger at the same time the flow stops or a few milliseconds before the stop to check the stationary state has been reached.
Reactant syringes
Two syringes are filled with solutions that do not undergo a chemical reaction until mixed together. These have pistons that are driven by a single drive piston or by independent stepping motors, so that they are coupled together and their contents are forced out simultaneously into a mixing device.
Mixing chamber
Once the two solutions are forced out of their syringes they enter a mixing system that has baffles to ensure complete mixing, with turbulent flow rather than laminar flow. (Laminar flow would allow the two solutions to flow side by side with incomplete mixing.)
Dead time
The dead time is the time taken for the solution to go from the mixing point to the observation point; it is the part of the kinetics that cannot be observed, so the shorter the dead time, the more information the user can get. In older instruments this could be of the order of 1 ms, but improvements now allow a dead time of about 0.3 ms.
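In data analysis, the dead time manifests itself as signal amplitude lost before observation begins. The sketch below is purely illustrative: the trace is synthetic, the rate constant, amplitude, noise level and dead time are invented numbers, and a single-exponential model is an assumption.

```python
# Synthetic stopped-flow trace: the first 'dead_time' seconds of the
# reaction are over before observation starts, so the fitted amplitude
# is attenuated by exp(-k * dead_time) while k_obs itself is unaffected.
import numpy as np
from scipy.optimize import curve_fit

k_true, dead_time = 250.0, 0.3e-3              # s^-1 and s, invented values
t = np.linspace(0.0, 20e-3, 400)               # time after the flow stops
trace = 0.8 * np.exp(-k_true * (t + dead_time))
trace += np.random.default_rng(1).normal(0.0, 0.004, t.size)

def single_exp(t, A, k, c):
    return A * np.exp(-k * t) + c

(A, k_obs, c), _ = curve_fit(single_exp, t, trace, p0=(1.0, 100.0, 0.0))
print(f"k_obs = {k_obs:.0f} s^-1, observed amplitude = {A:.3f}")
```

The recovered rate constant is unaffected by the dead time, but the observed amplitude underestimates the true amplitude by the factor exp(-k × dead time), which is why fast reactions demand instruments with short dead times.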
Observation cell
The mixed reactants pass an observation cell that allows the reaction to be followed spectrophotometrically, typically by ultraviolet spectroscopy, fluorescence spectroscopy, circular dichroism or light scattering, and it is now common to combine several of these.
Observation cuvettes with a short light path (0.75 to 1.5 mm) are usually preferred for fluorescence measurements to reduce self-absorption effects. Observation cuvettes with a longer light path (0.5 to 1 cm) are preferred for absorbance measurements. Modern stopped-flow instruments can accommodate different models of cells, and it is possible to change the cuvette between two experiments.
For stopped-flow X-ray measurements, a quartz capillary with a thin wall is used to minimize quartz absorption. Simultaneous X-ray and absorbance measurements are possible in the same capillary.
Stopping
Once through the observation cell the mixture enters a third syringe that contains a piston that is driven by the flow to activate a switch to stop the flow and activate the observation.
Continuous flow
The stopped-flow method is a development of the continuous-flow method used by Hamilton Hartridge and Francis Roughton to study the binding of O2 to hemoglobin. In the absence of any stopping system, the reaction mixture passed along a long tube, past an observation system (consisting in 1923 of a simple colorimeter), to waste. By moving the colorimeter along the tube, and knowing the flow rate, Hartridge and Roughton could measure the process after a known time.
In its time this was a revolutionary advance, showing that an apparently intractable problem (studying a process taking milliseconds with equipment that required seconds for each measurement) could be solved with simple equipment. However, in practice it was limited to reactants available in large quantities: for proteins this effectively limited it to reactions of hemoglobin. For practical purposes this approach is obsolete.
Quenched flow
The stopped-flow method depends on the existence of spectroscopic properties that can be used for following the reaction. When that is not the case quenched flow provides an alternative that uses conventional chemical methods for analysis. Instead of a mechanical stopping system the reaction is stopped by quenching, the products being delivered to a recipient that stops the reaction immediately, either by instantaneous freezing or by denaturing the enzyme with a chemical denaturant or exposing the sample to a denaturing light source. As in the continuous-flow method, the time between mixing and quenching can be varied by varying the length of the tube.
The pulsed quenched flow method introduced by Alan Fersht and Ross Jakes overcomes the need for a long tube. The reaction is initiated exactly as in a stopped-flow experiment, but there is a third syringe that brings about quenching a definite and preset time after the initiation.
Quenched flow has both advantages and disadvantages with respect to stopped flow. On the one hand, chemical analysis makes it clear what process is being measured, whereas it may not always be obvious what process a spectroscopic signal represents. On the other hand, quenched flow is much more laborious, as each point along the time course must be determined separately. The image at left for catalysis by nitrogenase from Klebsiella pneumoniae illustrates both of these points: the agreement in half times indicates that the absorbance at 420 nm measured the release of Pi, but the quenched-flow experiment required 11 data points.
References
Further reading
Chemical kinetics
Biophysics | Stopped-flow | [
"Physics",
"Chemistry",
"Biology"
] | 1,286 | [
"Chemical kinetics",
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"Biophysics"
] |
5,679,875 | https://en.wikipedia.org/wiki/Mu%20Cancri | Mu Cancri (μ Cancri, μ Cnc, Mu Cnc) is a Bayer designation that could refer to two stars in the constellation Cancer:
Mu2 Cancri (10 Cancri)
Mu1 Cancri (BL Cancri, 9 Cancri)
The designation Mu Cancri is also sometimes used for Mu2 Cancri alone.
Cancri, Mu
Cancer (constellation) | Mu Cancri | [
"Astronomy"
] | 86 | [
"Cancer (constellation)",
"Constellations"
] |
5,679,969 | https://en.wikipedia.org/wiki/Block%20and%20bleed%20manifold | A Block and bleed manifold is a hydraulic manifold that combines one or more block/isolate valves, usually ball valves, and one or more bleed/vent valves, usually ball or needle valves, into one component for interface with other components (pressure measurement transmitters, gauges, switches, etc.) of a hydraulic (fluid) system. The purpose of the block and bleed manifold is to isolate or block the flow of fluid in the system so the fluid from upstream of the manifold does not reach other components of the system that are downstream. Then they bleed off or vent the remaining fluid from the system on the downstream side of the manifold. For example, a block and bleed manifold would be used to stop the flow of fluids to some component, then vent the fluid from that component’s side of the manifold, in order to effect some kind of work (maintenance/repair/replacement) on that component.
Types of valves
Block and Bleed
A block and bleed manifold with one block valve and one bleed valve is also known as an isolation valve or block and bleed valve; a block and bleed manifold with multiple valves is also known as an isolation manifold. This valve is used in combustible gas trains in many industrial applications. Block and bleed needle valves are used in hydraulic and pneumatic systems because the needle valve allows for precise flow regulation when there is low flow in a non-hazardous environment.
Double Block and Bleed (DBB Valves)
These valves replace existing traditional techniques employed by pipeline engineers to generate a double block and bleed configuration in the pipeline. Two block valves and a bleed valve are combined as a unit, or manifold, to be installed for positive isolation. Used for critical process service, DBB valves suit high-pressure systems and toxic or hazardous fluid processes. Applications that use DBB valves include instrument drain, chemical injection connection, chemical seal isolation, and gauge isolation. DBB valves do the work of three separate valves (two isolation valves and one drain), require less space and weigh less.
Cartridge Type Standard Length DBB
This type of double block and bleed valve has a patented design which incorporates two ball valves and a bleed valve into one compact cartridge-type unit with ANSI B16.5 tapped flanged connections. The major benefit of this design configuration is that the valve has the same face-to-face dimension as a single block ball valve (as specified in API 6D and ANSI B16.10), which means the valve can easily be installed into an existing pipeline without the need for any pipeline re-working.
Three Piece Non Standard Length DBB
This type of double block and bleed valve features the traditional flange-by-flange construction and is available with ANSI B16.5 flanges, hub connections and welded ends to suit the pipeline system in which it is to be installed. It features all the benefits of the single-unit DBB valve, with the added benefit of a bespoke face-to-face dimension if required.
Single Unit DBB
This design also has operational advantages: there are significantly fewer potential leak paths within the double block and bleed section of the pipeline. Because the valves are full bore with an uninterrupted flow orifice, they have a negligible pressure drop across the unit. Pipelines in which these valves are installed can also be pigged without any problems.
There are several advantages in using a Double Block and Bleed Valve. Significantly, because all the valve components are housed in a single unit, the space required for the installation is dramatically reduced thus freeing up room for other pieces of essential equipment.
Considering the operations and procedures executed before an operator can intervene, the double block and bleed manifold offers further advantages over the traditional hook-up. Because the volume of the cavity between the two balls is so small, the operator can evacuate this space efficiently, thereby quickly establishing a safe working environment.
References
Fluid mechanics
Hydraulics
Mechanical engineering | Block and bleed manifold | [
"Physics",
"Chemistry",
"Engineering"
] | 800 | [
"Applied and interdisciplinary physics",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Mechanical engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
5,680,091 | https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20sup%C3%A9rieure%20d%27ing%C3%A9nieurs%20de%20constructions%20a%C3%A9ronautiques | The École nationale supérieure d'ingénieurs de constructions aéronautiques (; meaning "National Higher School of aeronautical constructions"), or ENSICA, was a French engineering school founded in 1945. It was located in Toulouse. In 2007, Ensica merged with Supaéro to form the Institut supérieur de l'aéronautique et de l'espace (ISAE).
Ensica recruited its students through the French "Concours des Grandes Écoles", a competitive examination which requires study at the "classes préparatoires". Classes préparatoires last two years, during which students work intensively on mathematics and physics.
Studies at Ensica lasted three years, after which students received a master's degree in aeronautics.
Areas of study cover all the fundamentals of aeronautics, including aerodynamics, structures, fluid dynamics, thermal power, electronics, control theory, airframe systems and IT.
Students are also trained in management, manufacturing, certification, and foreign languages.
Main employers are Airbus, Thales, Dassault, Safran (Sagem, Snecma), Rolls-Royce, Astrium, Eurocopter.
History
The decree giving birth to the "Ecole Nationale des Travaux Aéronautiques" (ENTA) was signed in 1945. The text was then ratified by Charles de Gaulle, president of the temporary government, and by René Pleven, Finance Minister. There were 25 students in the first class and 24 of them joined the "Ingénieurs Militaires des Travaux de l'Air" (IMTA).
In 1957, the school changed its name to the "Ecole Nationale d'Ingénieurs des Constructions Aéronautiques" (ENICA). The course was extended to three years and the school embarked on its new civil vocation, welcoming a higher proportion of civil students.
In 1961, ENICA was transferred to Toulouse, the director at that time being Emile Blouin. It then took on a new dimension and established its identity. In 1969, the school joined the competitive entrance examination system organised by the Ecoles Nationales Supérieures d'Ingénieurs (ENSI). It thus increased its recruitment standards to become one of the leading French schools. This excellence was rewarded in 1979 when it received the Médaille de l'Aéronautique from Général Georges Bousquet: ENICA then became ENSICA, Ecole Nationale Supérieure d'Ingénieurs de Constructions Aéronautiques.
The eighties were marked by a profound diversification in the training courses offered: opening of a "Mastère" degree and an Advanced Studies degree (DEA) in automatic control and mechanics, specialisations in aircraft maintenance and helicopter techniques. ENSICA became the top-listed school for students with pass marks in ENSI competitive entrance examinations and continuously increased the part set aside for research. It also internationalised its training by implementing exchange programmes with English, American and German institutes and universities. In 1994, ENSICA became a public establishment and can now sign, in its own name, agreements and conventions with other organisations and receive research contracts.
Today, ENSICA has a staff of 150 people, including 25 scientific directors, supported by almost 700 part-time lecturers. The school can accommodate more than 400 students on the initial training courses and the same number of persons doing further training. The 50th class recently graduated: it included a total of 98 graduates, 11 of whom did their third year of studies in a foreign university (USA, Great Britain, Germany and Sweden), and a high number of students carried out their end-of-study projects abroad.
Missions
A public establishment under the auspices of the Ministry of Defence, ENSICA gives technological teaching courses for civil and military engineering students and offers a range of training:
"Diplôme d'Ingenieur" (engineer's diploma) course;
training for and through scientific research;
a set of "Mastère Spécialisé" courses;
further education courses;
research.
The engineer's course lasts three years.
Departments of Ensica
At ENSICA, research and training are integrated into the four training and research departments: avionics and systems, mechanical engineering, fluid mechanics, applied mathematics and computer science.
Each department has a scientific staff composed of lecturer-researchers with Ph.D.s, lecturers and senior lecturers from universities, and full professors. The staff are responsible for research work and pedagogical engineering, as well as coordination of the lecturers' teams. In this way, they actively participate in international actions and in industrial relations.
One third of the lecturers come from the university and research world, one fourth from industry, and one fourth from the DGA.
Human, economics, social, linguistics and multi-cultural training is under the responsibility of three departments: human and social sciences, sports and languages.
Main departments are Avionics, Mechanical Engineering, Fluid Dynamics and Mathematics
Avionics
The Avionics & Systems Department develops:
- In the first year, a basic training in signal processing, automatic systems and electrical engineering.
- In the third year, two advanced itineraries in the field:
Signals - Communications
Control - Avionics
The department also offers these multidisciplinary itineraries:
Aircraft system
Space systems
Control - Guidance
Radar - Telecommunications
Preparation for the post-graduate DEA diplomas (Advanced Studies Diploma):
Signals - images - acoustics
Automatic systems
These two itineraries allow, respectively, preparation for the postgraduate diplomas in signals-images-acoustics and in automatic systems.
Taught subjects
Functional approach to electronics and electrical engineering
Strong theoretical bases of signal processing, allowing use in image processing, radar and telecommunications.
Optics and optronics bases.
Antenna and radar theory and applications in the aeronautical and space domains.
Approach to real-time systems based on a concrete system built on a microcontroller.
Finally, control: from modelling and control of simple processes to advanced methods applied in the aeronautical domain.
Mechanical engineering
The aim of the Mechanical engineering Department's curriculum is to provide the students with basic knowledge in mechanics indispensable for their future jobs as engineers and this within a multidisciplinary aerospace training framework.
The Mechanical Engineering course lasts three years and includes:
- basic training covering fundamental knowledge, mainly concerning the calculation of structures and technological knowledge of mechanisms, manufacturing and materials;
- training applied to aeronautics and space, this part increasing progressively throughout the three years.
This common core is complemented, within the scope of the third-year optional modules, by courses given at ENSICA for the Mechanical Engineering advanced studies degree and more specialised courses related to aeronautics and space.
The Mechanical Engineering Department also coordinates the school's space activities: this specific space training corresponds to around 250 hours and development is oriented both towards ultralight systems and crewed flight engineering.
Fluid Dynamics
The courses given by the Fluid Mechanics Department concern the thermodynamics of irreversible processes and continuum mechanics. The courses in these two disciplines are given in the first year and are completed by a basic fluid mechanics course (general equations of the movement of a Newtonian fluid and inviscid fluid movements). In the second year, the studies concern the flow of incompressible viscous fluids and compressible inviscid fluids, dealing with boundary layer, shock wave and turbulence phenomena, with complements on unsteady and hypersonic fluid mechanical phenomena.
From these theoretical bases, aeronautical applications are introduced in the second year. They mainly concern:
external aerodynamics plus flight mechanics and handling qualities.
aeronautical turbine engines.
Mathematics
The goals of CS training are:
(1) to study the methods for developing programs (specification methods, object-oriented design, structured programming algorithms, testing);
(2) to learn the basics of algorithmics;
(3) in-depth study of object programming, and learning an object-oriented methodology that uses UML as modeling notation;
(4) to study the specific features of "Real-Time" applications and systems and of new-generation network architectures in close association with the research work carried out in the department. Practical implementations of theoretical concepts are based on Java language;
ENSICA is co-accredited for issuing the Toulouse Systems Postgraduate School's Computer-based Systems DEAs (Advanced Studies Degrees) in cooperation with UPS science university, INSA and SUPAERO engineering schools, and the Toulouse CS and Telecommunications Postgraduate School's Networks and Telecommunications DEAs in cooperation with INPT engineering school, UPS science university, SUPAERO, INSA, ENST and ENAC engineering schools.
Training periods and international perspectives
During the 3 years, students of Ensica have the opportunity of studying for one semester or one year abroad, or make a one-year additional training period in a company.
Foreign partnerships include:
Australia
University of Technology Sydney
Belgium
Vrije Universiteit Brussel
Katholieke Universiteit Leuven
Université catholique de Louvain
Canada
Université de Sherbrooke
Ecole Polytechnique de Montréal
China
Nanjing University
Germany
Technische Universität München
Universität Stuttgart
Rheinisch-Westfälische Technische Hochschule Aachen
Technische Universität Braunschweig
Italy
Politecnico di Torino
Politecnico di Milano
Mexico
Instituto Politécnico Nacional
Netherlands
Delft University of Technology
Poland
Warsaw University of Technology
Lublin University of Technology
Romania
Polytechnic University of Bucharest
Military Technical Academy
Russia
Samara State Aerospace University
St. Petersburg State University
Singapore
National University of Singapore
Nanyang Technological University
Spain
Universidad Politécnica de Madrid (CETSEI)
Universitat Politècnica de Catalunya (ETSEIB - ETSEIAT)
Universidad de Sevilla
Sweden
Kungl Tekniska Högskolan
United Kingdom
Cranfield University
Imperial College
University of Bristol
University of Southampton
University of Glasgow
USA
State University of New York at Buffalo
Louisiana State University
University of Wisconsin Madison
University of Maryland at College Park
Syracuse University
Aerospace engineering organizations
Aviation schools in France
Educational institutions established in 1945
1945 establishments in France | École nationale supérieure d'ingénieurs de constructions aéronautiques | [
"Engineering"
] | 2,067 | [
"Aeronautics organizations",
"Aerospace engineering organizations",
"Aerospace engineering"
] |
5,680,797 | https://en.wikipedia.org/wiki/Sengstaken%E2%80%93Blakemore%20tube | A Sengstaken–Blakemore tube is a medical device inserted through the nose or mouth and used occasionally in the management of upper gastrointestinal hemorrhage due to esophageal varices (distended and fragile veins in the esophageal wall, usually a result of cirrhosis). The use of the tube was originally described in 1950, although similar approaches to bleeding varices were described by Westphal in 1930. With the advent of modern endoscopic techniques which can rapidly and definitively control variceal bleeding, Sengstaken–Blakemore tubes are rarely used at present.
Device
The device consists of a flexible plastic tube containing several internal channels and two inflatable balloons. Apart from the balloons, the tube has an opening at the bottom (gastric tip) of the device. More modern models also have an opening near the upper esophagus; such devices are properly termed Minnesota tubes. The tube is passed down into the esophagus and the gastric balloon is inflated inside the stomach. A traction of 1 kg is applied to the tube so that the gastric balloon will compress the gastroesophageal junction and reduce the blood flow to esophageal varices. If the use of traction alone cannot stop the bleeding, the esophageal balloon is also inflated to help stop the bleeding. The esophageal balloon should not remain inflated for more than six hours, to avoid necrosis. The gastric lumen is used to aspirate stomach contents.
Generally, Sengstaken–Blakemore tubes and Minnesota tubes are used only in emergencies where bleeding from presumed varices is impossible to control with medication alone. The tube may be difficult to position, particularly in an unwell patient, and may inadvertently be inserted in the trachea, hence endotracheal intubation before the procedure is strongly advised to secure the airway. The tube is often kept in the refrigerator in the hospital's emergency department, intensive care unit and gastroenterology ward. It is a temporary measure: ulceration and rupture of the esophagus and stomach are recognized complications.
A related device with a larger gastric balloon capacity (about 500 ml), the Linton–Nachlas tube, is used for bleeding gastric varices. It does not have an esophageal balloon.
Eponym
It is named after Robert William Sengstaken Sr. (1923–1978), an American neurosurgeon, and Arthur Blakemore (1897–1970), an American vascular surgeon. They conceptualized and invented the tube in the early 1950s.
References
External links
GP notebook
MedAU
Gastroenterology
Medical equipment | Sengstaken–Blakemore tube | [
"Biology"
] | 566 | [
"Medical equipment",
"Medical technology"
] |
5,680,834 | https://en.wikipedia.org/wiki/Gonadal%20dysgenesis | Gonadal dysgenesis is classified as any congenital developmental disorder of the reproductive system characterized by a progressive loss of primordial germ cells on the developing gonads of an embryo. One type of gonadal dysgenesis is the development of functionless, fibrous tissue, termed streak gonads, instead of reproductive tissue. Streak gonads are a form of aplasia, resulting in hormonal failure that manifests as sexual infantism and infertility, with no initiation of puberty and secondary sex characteristics.
Gonadal development is a process primarily controlled genetically by the chromosomal sex (XX or XY), which directs the formation of the gonad (ovary or testicle).
Differentiation of the gonads requires a tightly regulated cascade of genetic, molecular and morphogenic events.
At the formation of the developed gonad, steroid production influences local and distant receptors for continued morphological and biochemical changes.
This results in the phenotype corresponding to the karyotype (46,XX for females and 46,XY for males).
Gonadal dysgenesis arises from a difference in signalling in this tightly regulated process during early foetal development.
Manifestations of gonadal dysgenesis are dependent on the aetiology and severity of the underlying causes.
Causes
Pure gonadal dysgenesis 46,XX also known as XX gonadal dysgenesis
Pure gonadal dysgenesis 46,XY also known as XY gonadal dysgenesis
Mixed gonadal dysgenesis also known as partial gonadal dysgenesis, and 45,X/46,XY mosaicism
Turner syndrome also known as 45,X or 45,X0
Endocrine disruptions
Pathogenesis
46,XX gonadal dysgenesis
46,XX gonadal dysgenesis is characteristic of female hypogonadism with a karyotype of 46,XX. Streak ovaries are present with non-functional tissues unable to produce the required sex steroid oestrogen.
Low levels of oestrogen affect the HPG axis, with no feedback to the anterior pituitary to inhibit the secretion of FSH and LH.
FSH and LH are secreted at elevated levels. Increased levels of these hormones will cause the body to not start puberty, not undergo menarche, and not develop secondary sex characteristics. If ovarian tissue is present and produces some amount of hormones, limited menstrual cycles can occur.
46,XX gonadal dysgenesis can manifest from a variety of causes. Interruption during ovarian development in embryogenesis can cause 46,XX gonadal dysgenesis with cases of changes in the FSH receptor and mutations in steroidogenic acute regulatory protein (StAR protein) which regulates steroid hormone production.
46,XY gonadal dysgenesis
46,XY gonadal dysgenesis is characteristic of male hypogonadism with karyotype 46,XY. In embryogenesis, the development of the male gonads is primarily controlled by the testis determining factor located on the sex-determining region of the Y chromosome (SRY).
The male gonad is dependent on SRY and the signalling pathways initiated to several other genes to facilitate testis development.
The aetiology of 46,XY gonadal dysgenesis can be mutations in the genes involved in testis development, such as SRY, SOX9, WT1, SF1, and DHH. If a single one or a combination of these genes is mutated or deleted, downstream signalling is disrupted, leading to atypical development of the penis and scrotum.
Genital undermasculinization is the technical term for partially or completely undifferentiated genitalia in individuals with an SRY gene. In utero, all fetuses are initially anatomically undifferentiated and are then differentiated via androgens and SRY activation.
Full undermasculinization results in a fully developed vulva with testicles inside the body where the ovaries usually are, which is caused by conditions such as complete androgen insensitivity syndrome. In 5α-reductase 2 deficiency, individuals are born with normal female genitalia; however, during puberty, male differentiation and spermatogenesis occur. Partial genital undermasculinization can occur if the body has a partial resistance to androgens or if genital development is blocked; undermasculinization can also be induced by certain drugs and hormones. The overall intensity of undermasculinization can manifest itself in hypospadias. The surgical assignment of newborns with ambiguous genitalia to a binary sex for cosmetic purposes is considered a human rights violation.
SRY acts on gene SOX9 which drives Sertoli cell formation and testis differentiation. An absence in SRY causes SOX9 to not be expressed at the usual time or concentration, leading to a decreased testosterone and anti-Müllerian hormone production.
Lowered levels of testosterone and anti-Müllerian hormone disrupts the development of Wolffian ducts and internal genitalia that are key to male reproductive tract development. The absence of the steroid hormones commonly associated with males drives Müllerian duct development and promotes the development of female genitalia, if anti-Müllerian hormone is suppressed or the body is insensitive, persistent Müllerian duct syndrome occurs when the individual has partial female reproductive, and partial male reproductive organs.
Gonadal streaks can replace the tissues of testes, resembling ovarian stroma absent of follicles. 46,XY gonadal dysgenesis can remain unsuspected until delayed pubertal development is observed. Approximately 15% of cases of 46,XY gonadal dysgenesis carry de novo mutations in the SRY gene, with an unknown causation for the remaining portion of 46,XY gonadal dysgenesis persons.
Mixed gonadal dysgenesis
Mixed gonadal dysgenesis, also known as X0/XY mosaicism or partial gonadal dysgenesis, is a sex development difference associated with sex chromosome aneuploidy and mosaicism of the Y chromosome. Mixed gonadal dysgenesis is the presence of two or more germ line cells.
The degree of development of the male reproductive tract is determined by the ratio of germ line cells expressing the XY genotype.
Manifestations of mixed gonadal dysgenesis are highly variable with asymmetry in gonadal development of testis and streak gonad, accounted for by the percentage of cells expressing XY genotype.
The dysgenic testis can have an amount of functional tissue which can produce a level of testosterone, which causes masculinisation.
Mixed gonadal dysgenesis is poorly understood at the molecular level. The loss of the Y chromosome can occur through deletions, translocations, or migration differences of paired chromosomes during cell division. The chromosomal loss results in partial expression of the SRY gene, giving rise to atypical development of the reproductive tract and altered hormone levels.
Turner syndrome
Turner syndrome, also known as 45,X or 45,X0, is a chromosomal abnormality characterised by a partial or completely missing second X chromosome, giving a chromosomal count of 45, instead of the typical count of 46 chromosomes.
Dysregulation in meiosis signalling to germ cells during embryogenesis may result in nondisjunction and monosomy X, arising from failed separation of chromosomes either in the parental gamete or during early embryonic divisions.
The aetiology of the Turner syndrome phenotype can be the result of haploinsufficiency, where a portion of critical genes are rendered inactive during embryogenesis. Normal ovarian development requires these vital regions of the X chromosome, which ordinarily escape inactivation. Clinical manifestations include primary amenorrhea, hypergonadotropic hypogonadism, streak gonads, infertility, and failure to develop secondary sex characteristics. Turner syndrome is usually not diagnosed until a delayed onset of puberty, with Müllerian structures found to be in an infantile stage. Physical phenotypic characteristics include short stature, dysmorphic features and lymphedema at birth. Comorbidities include heart defects, vision and hearing problems, diabetes, and low thyroid hormone production.
Endocrine disruptions
Endocrine disruptors interfere with the endocrine system and hormones. Hormones are critical for the common events in embryogenesis to occur. Foetal development relies on the proper timing of the delivery of hormones for cellular differentiation and maturation. Disruptions can cause sexual development disorders leading to gonadal dysgenesis.
Diagnosis
Management
History
Turner syndrome was first described independently by Otto Ulrich in 1930 and Henry Turner in 1938. 46,XX pure gonadal dysgenesis was first reported in 1960. 46,XY pure gonadal dysgenesis, also known as Swyer syndrome, was first described by Gim Swyer in 1955.
See also
(DoDI) 6130.03, 2018, section 5, 13f and 14m
Ovotestis
46 XX
References
External links
Congenital disorders of endocrine system
Congenital disorders of genital organs
Intersex topics
Intersex healthcare
Intersex variations
Rare diseases
Sex differences in humans | Gonadal dysgenesis | [
"Biology"
] | 1,938 | [
"Intersex topics",
"Sex"
] |
5,681,094 | https://en.wikipedia.org/wiki/British%20Association%20for%20Behavioural%20and%20Cognitive%20Psychotherapies | The British Association for Behavioural and Cognitive Psychotherapies (BABCP) is a British-based multi-disciplinary interest group for people involved in the practice and theory of cognitive behaviour therapy.
History
Initially founded as the British Association for Behavioural Psychotherapy in 1972 by a small group including Isaac Marks, the organisation changed name in 1992 to incorporate cognitive therapies.
Organisation aims and activity
Based in Bury, the BABCP works to promote cognitive behavioural psychotherapies, disseminate information, set standards, and support local interest groups. An annual conference has been held in July every year since 1975, with additional training seminars. The peer-reviewed journal Behavioural and Cognitive Psychotherapy, with Paul M Salkovskis as the current Editor-in-Chief, is free to members. Members can also apply for accreditation as CBT practitioners, with the qualification used as a formal recognition of CBT training and as guidance in a United Kingdom government initiative to improve access to psychological treatments (Improving Access to Psychological Therapies).
Executive group and membership
The organisation is supported by a BABCP Board of Directors (President, Honorary Secretary, Treasurer, approximately six elected members), 14 National Committee Forum staff and office management staff. As of the end of 2011 there were 9,600 members.
See also
Mental health in the United Kingdom
References
External links
BABCP website
Annual Conference
Organizations established in 1972
Behavior therapy
Organisations based in Bury, Greater Manchester
Mental health organisations in the United Kingdom
1972 establishments in the United Kingdom | British Association for Behavioural and Cognitive Psychotherapies | [
"Biology"
] | 307 | [
"Behavior",
"Behavior therapy",
"Behaviorism"
] |
5,682,069 | https://en.wikipedia.org/wiki/XvYCC | xvYCC or extended-gamut YCbCr is a color space that can be used in the video electronics of television sets to support a gamut 1.8 times as large as that of the sRGB color space. xvYCC was proposed by Sony, specified by the IEC in October 2005 and published in January 2006 as IEC 61966-2-4. xvYCC extends the ITU-R BT.709 tone curve by defining over-ranged values.
xvYCC-encoded video retains the same color primaries and white point as BT.709, and uses either a BT.601 or BT.709 RGB-to-YCC conversion matrix and encoding. This allows it to travel through existing digital limited range YCC data paths, and any colors within the normal gamut will be compatible. It works by allowing negative RGB inputs and expanding the output chroma. These are used to encode more saturated colors by using a greater part of the RGB values that can be encoded in the YCbCr signal compared with those used in Broadcast Safe Level. The extra-gamut colors can then be displayed by a device whose underlying technology is not limited by the standard primaries.
In a paper published by the Society for Information Display in 2006, the authors mapped the 769 colors in the Munsell Color Cascade (the so-called Pointer's gamut, after Michael Pointer) to the BT.709 space and to the xvYCC space. About 55% of the Munsell colors could be mapped to the sRGB gamut, but 100% of those colors map to within the xvYCC gamut. Deeper hues can be created, for example a deeper cyan, by giving the opposing primary (red) a negative coefficient. The quantization range of the xvYCC601 and xvYCC709 colorimetry is always Limited Range.
Background
Camera and display technology is evolving with more distinct primaries, spaced farther apart per the CIE chromaticity diagram. Displays with more separated primaries permit a larger gamut of displayable colors, however, color data needs to be available to make use of the larger gamut color space. xvYCC is an extended gamut color space that is backwards compatible with the existing BT.709 YCbCr broadcast signal by making use of otherwise unused data portions of the signal.
The BT.709 YCbCr signal has unused code space, a limitation imposed for broadcasting purposes. In particular only 16-240 is used for the color Cb/Cr channels out of the 0-255 digital values available for 8 bit data encoding. xvYCC makes use of this portion of the signal to store extended gamut color data by using code values 1-15 and 241-254 in the Cb/Cr channels for gamut-extension.
Definition
xvYCC expands the chroma values to 1-254 (i.e. a raw value of -0.567–0.567) while keeping the luma (Y) value range at 16-235 (though Superwhite may be supported), the same as Rec. 709. First the OETF (Transfer Characteristics 11 per H.273 as originally specified by the first amendment to H.264) is expanded to allow negative R'G'B' inputs such that:

$E' = 1.099\,V^{0.45} - 0.099$ for $V \ge \beta$; $E' = 4.5\,V$ for $-\beta < V < \beta$; $E' = -1.099\,(-V)^{0.45} + 0.099$ for $V \le -\beta$

Here the number 1.099 has the value 1 + 5.5β = 1.099296826809442... and β has the value 0.018053968510807..., while 0.099 is 1.099 - 1.
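A minimal Python sketch of this extended transfer function (the function name is ours; the constants are those quoted above):

def xvycc_oetf(v):
    """Extended BT.709 OETF as used by xvYCC (illustrative sketch).

    Negative linear inputs are allowed; the curve is mirrored through the
    origin so that out-of-gamut (negative) RGB components survive encoding.
    """
    beta = 0.018053968510807  # threshold between linear and power-law segments
    if v >= beta:
        return 1.099 * v ** 0.45 - 0.099
    elif v > -beta:
        return 4.5 * v
    else:
        return -1.099 * (-v) ** 0.45 + 0.099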
The YCC encoding matrix is unchanged, and can follow either Rec. 709 or Rec. 601 (Matrix Coefficients 1 and 5).
The possible range for non-linear R'G'B'601 is between -1.0732 and 2.0835, and for R'G'B'709 between -1.1206 and 2.1305. These extremes are reached in the B' component when the YCC values are "1, 1, any" and "254, 254, any".
xvYCC709 covers 37.19% of CIE 1976 u'v', while BT.709 only 33.24%.
The last step encodes the values to a binary number (quantization). It is basically unchanged, except that a bit-depth n of more than 8 bits can be selected. Following the BT.709 convention, $D_Y = \operatorname{Int}\left[(219\,E'_Y + 16)\cdot 2^{n-8}\right]$ and $D_C = \operatorname{Int}\left[(224\,E'_C + 128)\cdot 2^{n-8}\right]$.
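A hedged sketch of this quantization step in Python (the helper name is ours; offsets and scales follow the BT.709-style levels given above):

def quantize_component(e, n=8, chroma=False):
    """Map a normalised component value to an n-bit integer code.

    Luma uses scale 219 and offset 16; chroma uses scale 224 and offset 128.
    xvYCC's wider chroma excursions simply land in the otherwise unused
    1-15 and 241-254 code bands of the 8-bit range.
    """
    scale, offset = (224, 128) if chroma else (219, 16)
    return round((scale * e + offset) * 2 ** (n - 8))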
Example
With negative primary amounts allowed, a cyan that lies outside the basic gamut of the primaries can be encoded as "green plus blue minus red". Since the 16-255 Y range is used (the 255 value is reserved in the HDMI standard for synchronization, but may occur in files) and since the values of Cb and Cr are only slightly restricted, many highly saturated colors outside the 0–255 RGB space can be encoded. For example, if YCbCr is 255, 128, 128 in the case of a full-level YCbCr encoding (0–255), then the corresponding R'G'B' is 255, 255, 255, which is the maximum encodable luminance value in this color space. But if Y=255 and Cr and/or Cb are not 128, this codes for the maximum luminance with an added color: one primary must necessarily be above 255 and cannot be converted to R'G'B'. Adapted software and hardware must be used during production to avoid clipping video data levels that are above the sRGB space. This is almost never the case for software working with an RGB core.
The more complex example is YCbCr BT.709 values 139, 151, 24 (that is RGB -21, 182, 181). That is out-of-gamut for BT.709, but not for sYCC and xvYCC709; to display such values you would convert to XYZ (0.27018, 0.40327, 0.54109) and then to the display's gamut.
The XYZ matrix is as specified in Nvidia docs.
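The arithmetic can be sketched in Python using the standard BT.709 conversion constants; the sample input below is an illustrative saturated cyan, not a value from the text:

def ycbcr709_to_rgb(y, cb, cr):
    """Limited-range BT.709 YCbCr -> non-linear R'G'B' (normalised, 0..1 in-gamut)."""
    yn = (y - 16) / 219.0     # normalised luma
    cbn = (cb - 128) / 112.0  # normalised blue-difference chroma
    crn = (cr - 128) / 112.0  # normalised red-difference chroma
    r = yn + 1.5748 * crn
    g = yn - 0.1873 * cbn - 0.4681 * crn
    b = yn + 1.8556 * cbn
    return (r, g, b)

# A deep cyan: R' comes out negative and B' above 1.0, values that plain
# BT.709 would have to clip but that xvYCC can represent.
print(ycbcr709_to_rgb(150, 200, 60))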
Adoption
A mechanism for signaling xvYCC support and transmitting the gamut boundary definition for xvYCC has been defined in the HDMI 1.3 Specification. No new mechanism is required for transmitting the xvYCC data itself, as it is compatible with HDMI's existing YCbCr formats, but the display needs to signal its readiness to accept the extra-gamut xvYCC values (in Colorimetry block of EDID, flags xvYCC709 and xvYCC601), and the source needs to signal the actual gamut in use in AVI InfoFrame and use gamut metadata packets to help the display to intelligently adapt extreme colors to its own gamut limitations.
This should not be confused with HDMI 1.3's other new color feature, deep color. This is a separate feature that increases the precision of brightness and color information, and is independent of xvYCC.
xvYCC is not supported by DVD-Video but is supported by the high-definition recording format AVCHD, the PlayStation 3, and Blu-ray. It is also supported by some cameras, like the Sony HDR-CX405, which tag the video as xvYCC with BT.709 inside Sony's XAVC. Most of Sony's mirrorless cameras intended mainly for still photography tag this on recorded videos as well.
History
On January 7, 2013, Sony announced that it would release "Mastered in 4K" Blu-ray Disc titles which are sourced at 4K and encoded at 1080p. "Mastered in 4K" 1080p Blu-ray Disc titles can be played on existing Blu-ray Disc players and will support a larger color space using xvYCC.
On May 30, 2013, Eye IO announced that their encoding technology was licensed by Sony Pictures Entertainment to deliver 4K Ultra HD video with their "Sony 4K Video Unlimited Service". Eye IO encodes their video assets at 3840 x 2160 and includes support for the xvYCC color space.
Hardware support
The following graphics hardware support xvYCC color space when connected to a display device supporting xvYCC:
AMD Mobility Radeon HD 4000 series and newer models
AMD Radeon HD 5000 series and newer models
AMD 785G, 880G and 890GX chipsets with integrated graphics
Intel HD Graphics integrated on some CPUs (except Pentium G6950 and Celeron G1101)
nVidia GeForce 200 series and newer models
References
External links
IEC Web Store for IEC 61966-2-4
Color space
Electronics standards
Ultra-high-definition television | XvYCC | [
"Mathematics"
] | 1,812 | [
"Color space",
"Space (mathematics)",
"Metric spaces"
] |
14,740,426 | https://en.wikipedia.org/wiki/Neolocal%20residence | Neolocal residence is a type of post-marital residence in which a newly married couple resides separately from both the husband's natal household and the wife's natal household. Neolocal residence forms the basis of most developed nations, especially in the West, and is also found among some nomadic communities.
Upon marriage, each partner is expected to move out of their parents' household and establish a new residence, thus forming the core of an independent nuclear family. Neolocal residence involves the creation of a new household when a child marries, or even when they reach adulthood and become socially and economically active. Neolocal residence and nuclear family domestic structures are found in societies where geographical mobility is important. In Western societies, they are consistent with the frequent moves that are necessary due to choices and changes within a supply- and demand-regulated labor market. They are also prevalent in hunting and gathering economies, where nomadic movements are intrinsic to the subsistence strategy.
In Western countries, employment in large corporations or the military often calls for frequent relocations, making it nearly impossible for extended families to remain together and hence creating new generations of independent families.
Description
In neolocal residence, newly formed couples form their own separate household units, and create what is considered a nuclear family. This contrasts with other forms of post-marital residence, such as patrilocal residence and matrilocal residence, in which the couple resides with or near the husband's family (patrilocal residence) or the wife's family (matrilocal residence).
Neolocality first appeared in Northwestern Europe. It was from there brought to British colonies in the Americas. As American colonists expanded westward, this form of residence remained. Although some believe neolocal residence came as a result of industrialization, there is evidence of neolocality in England from before industrialization. Whatever the relationship between neolocality and economic development is, what is clear is that the two seem to coincide. Countries that experience economic development tend to also experience declines in multi-generational households, and increases in nuclear, neolocal forms of residence. A reason often cited for the high coincidence of neolocality in developed countries is the higher mobility of nuclear families, which becomes more important in modern economies. The decline of dependency on agricultural subsistence, which results in a weakening of extended family ties, is seen as another cause of nuclear, neolocal household creation. A particular case study of the relationship between economic development and neolocal residence patterns is the community of Navajo Mountain, which showed a positive correlation between the two.
Currently, neolocal residence is more commonly found in the west, and is becoming more common in countries that have experienced economic development, like Japan.
Notes
Further reading
Korotayev, Andrey. 2001. An Apologia of George Peter Murdock. Division of Labor by Gender and Postmarital Residence in Cross-Cultural Perspective: A Reconsideration. World Cultures 12(2): 179-203.
Living arrangements
Marriage
Sociobiology
Cultural anthropology | Neolocal residence | [
"Biology"
] | 618 | [
"Behavioural sciences",
"Behavior",
"Sociobiology"
] |
14,740,555 | https://en.wikipedia.org/wiki/Simplified%20sewerage | Simplified sewerage, also called small-bore sewerage, is a sewer system that collects all household wastewater (blackwater and greywater) in small-diameter pipes laid at fairly flat gradients. Simplified sewers are laid in the front yard or under the pavement (sidewalk) or - if feasible - inside the back yard, rather than in the centre of the road as with conventional sewerage. It is suitable for existing unplanned low-income areas, as well as new housing estates with a regular layout. It allows for a more flexible design. With simplified sewerage it is crucial to have management arrangements in place to remove blockages, which are more frequent than with conventional sewers. It has been estimated that simplified sewerage reduces investment costs by up to 50% compared to conventional sewerage.
Simplified sewerage is sometimes also referred to as conventional sewerage with appropriate standards, implying that most conventional sewers are overdesigned.
The concept of simplified sewerage emerged in parallel in Natal, Brazil and Karachi, Pakistan in the early 1980s without any interaction or communication.
In both cases particular emphasis was given to community mobilization, an essential element for the success of simplified sewerage. In Latin America, and particularly in Brazil, simplified sewerage is also known as condominial sewerage, a term that underscores the importance of community participation in planning and maintenance at the level of a housing block (known as condominio in the Spanish and Portuguese use of the term).
Background
In developing countries, connection to sewer systems is often costly for poor households, despite typically low monthly sewer tariffs. This apparent paradox is explained by the high costs of in-plot and in-house sanitary installations that have to be paid entirely by the user, by sometimes high sewer connection fees levied by utilities, and by a lack of community consultation. As a result, in many cities in developing countries conventional sewers are laid at high costs under a street, while many users on that street do not connect to them. In Brazil, in some cities connection rates in the early 1990s were less than 40% of the intended beneficiary population.
Application
Simplified sewerage is most widely used in Brazil. It is estimated that in Brazil some 5 million people in over 200 towns and cities are served with simplified sewerage - or condominial sewerage. This corresponds to about 3% of the population of Brazil and about 6% of the population connected to sewers. They serve poor and rich alike.
Simplified sewerage has also been used in
Bolivia, beginning with a pilot project in El Alto;
Honduras, primarily in marginal areas of Tegucigalpa where simplified sewerage has been introduced in 20 communities with 24,000 inhabitants;
Peru, primarily in marginal areas of Lima;
in South Africa, where pilot projects were carried out in Johannesburg and Durban;
in Sri Lanka, where the National Housing Development Authority implemented over 20 schemes in the 1980s and 90s.
In Pakistan, beginning with the Orangi Pilot Project in Karachi, a variation of simplified sewerage using larger diameter pipes has been used.
Community participation
Community participation in the planning of any sewer system is a fundamental requirement to achieve higher household connection rates and to increase the likelihood of proper maintenance of in-block sewers. In addition, it can motivate users to assume parts of the costs of the sewer system that they are able to assume, such as contribution of labor for construction and/or maintenance.
Typically, in the planning process for a simplified sewerage system, meetings are carried out at the housing block (condominio) level for information, discussions and clarifications required for a joint group decision on network design, community contributions during construction and maintenance responsibilities. Users might finance and implement in-house sanitary installations and household connections and would agree on a suitable type of condominial branch. They are asked to comply with agreements established for construction and operation of the condominial branch, as well as payment of tariffs. In turn, the service provider agrees to fulfill his responsibilities as established in the “Terms of Connection” between the parties.
The community participation process also provides a good opportunity for complementary actions like hygiene promotion, which can have a significant impact on public health at a relatively limited cost.
Design and construction
Simplified sewers are usually laid in the front yard or under the pavement (sidewalk). In some rare cases it is possible to lay them in the back yard. Sidewalk branches are usually preferred in regular urbanizations, while the front and back yard branches are particularly suited to neighborhoods with challenging topography or urbanization patterns. However, in some cases neither of these options is possible. For example, in South Asia, in many cities there is no sidewalk or front yard, so pipes have to be laid in the middle of the street as with conventional sewers.
In Latin America typical simplified sewer diameters are 100 mm, laid at a gradient of 1 in 200 (0.5 percent). Such a sewer will serve around 200 households of 5 people with a wastewater flow of 80 litres per person per day. In Pakistan, however, there are no rigorous standards for sewer diameters. In a small pilot as part of the Orangi Pilot Project pipes with a diameter of 150mm were used.
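As a rough sanity check of these figures, the sketch below compares the quoted design flow with the Manning full-bore capacity of a 100 mm pipe at 1 in 200. The Manning roughness, return factor and peak factor are illustrative assumptions, not values from the text:

import math

households, people, lpcd = 200, 5, 80    # figures quoted above
return_factor = 0.85                     # assumed wastewater/water-use ratio
peak_factor = 1.8                        # assumed daily peaking factor
n_manning = 0.013                        # assumed pipe roughness

avg_flow = households * people * lpcd * return_factor / 86400.0  # L/s
peak_flow = avg_flow * peak_factor

d, slope = 0.100, 0.005                  # 100 mm pipe, 1 in 200 gradient
area = math.pi * d ** 2 / 4.0            # flow area of a full pipe, m^2
r_hyd = d / 4.0                          # hydraulic radius of a full pipe, m
q_full = (1.0 / n_manning) * area * r_hyd ** (2.0 / 3.0) * slope ** 0.5 * 1000

# Prints roughly "peak flow 1.42 L/s vs full-bore capacity 3.65 L/s",
# i.e. the quoted loading sits comfortably within the pipe's capacity.
print("peak flow %.2f L/s vs full-bore capacity %.2f L/s" % (peak_flow, q_full))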
Laying small diameter pipes at fairly flat gradients requires careful construction techniques. Plastic pipes are best used as they are more easily jointed correctly. This reduces wastewater leakage from the sewer and groundwater infiltration into it. With simplified sewerage there is no need to have the large expensive manholes of the type used for conventional sewerage — simple brick or plastic junction chambers are used instead.
Construction can be carried out by contractors or by trained and properly supervised community members. Training and proper supervision are actually needed in both cases, since contractors in many cities are not familiar with simplified sewerage.
Investment cost comparison
The cost of sewerage - conventional or simplified - are always site-specific, and estimates are subject to controversies. Construction costs of simplified sewerage are up to half the costs of conventional sewerage. Investment cost savings come from various design features that may or may not be present in a particular simplified sewerage system. Cost-saving features of any simplified sewerage system are a smaller diameter of pipes, smaller and shallower trenches and simplified manholes. The two latter features are estimated to account for most of the cost savings. Other features that could further reduce costs may only be present in some systems, such as:
shorter networks;
avoidance of the need to damage pavements and sidewalks (if they already exist and if pipes are laid in front or back yards);
decentralized, small-scale wastewater treatment, and consequently elimination of main collectors and sewage pumping stations.
An element that may slightly increase costs compared to conventional sewerage is the introduction of grease traps or of interceptor chambers for the settlement of sludge. The latter are more common in South Asia and are not used in the condominial model. A 2006 study of four countries showed cost savings of 31-57% from the use of simplified sewerage compared to conventional sewerage with unit costs varying from US$119 per connection in a neighborhood in Bolivia and to US$759 per connection in a small town in Paraguay. A detailed estimate gives the costs of simplified sewerage in Lima as at least US$700 per household (US$120–140 per person), including in-house sanitary facilities (US$100 per household) and including design, supervision and social intermediation costs (US$126 per household, which are common costs shared with water infrastructure), but excluding taxes.
In general, at higher population densities sewer systems are cheaper than on-site sanitation (such as septic tanks). The switching value at which sewerage becomes less costly is largely determined by the type of sewerage, conventional or simplified. A 1983 study in Natal showed that the investment costs for simplified sewerage were lower than for on-site systems at the quite low population density of about 160 people per hectare. Conventional sewerage, however, was cheaper only at densities above 400 people per hectare.
Operation and maintenance
Good operation and maintenance (O&M) is essential for the long-term sustainability of any sewerage system, but particularly for simplified sewerage, since the small diameter of pipes and low gradients make the system highly vulnerable to clogging. Solids can readily block the small-diameter piping, and the shallow grade of the pipe alignment prevents the sewage flow from reaching scouring velocity, meaning that solids fall out of suspension and deposit within the low-gradient pipe before reaching the downstream receiving body.
The original concept of householders being responsible for O&M of in-block condominial sewers has not worked well in the long term. A study of simplified sewerage systems in Brazil has shown that effective maintenance of sewers by utilities has often been the result of community pressure by neighborhood associations. Without such pressure maintenance by utilities has often been inadequate, and community maintenance has not come about either.
Situations where simplified sewers can be left to run without active management are rare. Alternative management systems therefore had to be developed to mitigate the maintenance problems of simplified sewers, and a few examples are provided below:
In rural Ceará a villager is employed by the Residents’ Association to maintain the sewers and the wastewater treatment plant (typically, a single facultative waste stabilization pond). He is also responsible for the water supply.
In parts of Recife in northeast Brazil the state water and sewerage company employs local contracting firms for O&M. Usually this is done by a small team comprising a technician engineer and two laborers who work in a low-income area served by simplified sewerage and to whom residents report any problems.
In Brasília the water and sewerage company, which has over 1,200 km of condominial sewers, uses van-mounted water jet units to clear any blockages.
Concerning maintenance costs, available information indicates similar costs and requirements for the simplified and the conventional system under the same conditions. Simplified systems typically require more interventions, but the cost per intervention is lower. Comparative analytical studies are not yet available, however.
Constraints for application
According to Jose Carlos Melo, who is considered to be the "father" of condominial sewers in Brazil, some important constraints for the application of simplified sewerage are:
Lack of information on fundamentals and techniques of the approach or lack of experience in its application,
Resistance to change: Institutional, technical and operational changes required by the service provider for implementing the condominial approach usually provoke resistance and can hinder the application.
Normative and legal restrictions: Existing conservative design and construction standards linked to conventional systems can be an essential constraint in the introduction and dissemination of the systems.
Over the last years, countries like Bolivia and Peru reviewed and modernized technical standards according to methods and criteria established and accepted in Brazil in the 1980s, thus overcoming the latter constraint.
See also
Effluent sewer or solids-free sewer
References
External links
Simplified Sewerage Design, Microsoft Producer presentations and supporting Material, Duncan Mara, Leeds University
CONDOMINIAL SYSTEMS - BRAZILIAN PANORAMA AND CONCEPTUAL ELEMENTS, Leeds University
PC-Based Simplified Sewerage Design Program, Leeds University
Sewerage
Environmental engineering | Simplified sewerage | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,267 | [
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering"
] |
14,740,623 | https://en.wikipedia.org/wiki/Class%20II%20bacteriocin | Class II bacteriocins are a class of small peptides that inhibit the growth of various bacteria.
Many Gram-positive bacteria produce ribosomally synthesized antimicrobial peptides, termed bacteriocins.
Bacteriocins for which disulfide bonds are the only modification to the peptide are Class II bacteriocins.
Class IIa
One important and well studied class of bacteriocins is the class IIa or pediocin-like bacteriocins produced by lactic acid bacteria. All class IIa bacteriocins are produced by food-associated strains, isolated from a variety of food products of industrial and natural origins, including meat products, dairy products and vegetables. Class IIa bacteriocins are all cationic, display anti-Listeria activity, and kill target cells by permeabilizing the cell membrane.
Class IIa bacteriocins contain between 37 and 48 residues. Based on their primary structures, the peptide chains of class IIa bacteriocins may be divided roughly into two regions: a hydrophilic, cationic and highly conserved N-terminal region, and a less conserved hydrophobic/amphiphilic C-terminal region. The N-terminal region contains the conserved Y-G-N-G-V/L 'pediocin box' motif and two conserved cysteine residues joined by a disulfide bridge. It forms a three-stranded antiparallel beta-sheet supported by the conserved disulfide bridge. This cationic N-terminal beta-sheet domain mediates binding of the class IIa bacteriocin to the target cell membrane. The C-terminal region forms a hairpin-like domain that penetrates into the hydrophobic part of the target cell membrane, thereby mediating leakage through the membrane. The two domains are joined by a hinge, which enables movement of the domains relative to each other.
Some proteins known to belong to the class IIa bacteriocin family are listed below:
Pediococcus acidilactici pediocin PA-1.
Leuconostoc mesenteroides mesentericin Y105.
Carnobacterium piscicola carnobacteriocin B2.
Lactobacillus sakei sakacin P.
Enterococcus faecium enterocin A.
Enterococcus faecium enterocin P.
Leuconostoc gelidum leucocin A.
Lactobacillus curvatus curvacin A.
Listeria innocua listeriocin 743A.
Class IIb
The class IIb bacteriocins (two-peptide bacteriocins) require two different peptides for activity. The class includes the alpha enterocins and lactococcin G peptides. These peptides have some antimicrobial properties; they inhibit the growth of Enterococcus spp. and a few other Gram-positive bacteria. These peptides act as pore-forming toxins that create cell membrane channels through a barrel-stave mechanism and thus produce an ionic imbalance in the cell.
Class IIc
Other class II bacteriocins can be grouped together as Class IIc (circular bacteriocins). These have a wide range of effects on membrane permeability, cell wall formation and pheromone actions of target cells. In particular, Bacteriocin AS-48 is a cyclic peptide antibiotic produced by the eubacteria Enterococcus faecalis (Streptococcus faecalis) that shows a broad antimicrobial spectrum against both Gram-positive and Gram-negative bacteria. Bacteriocin AS-48 is encoded by the pheromone-responsive plasmid pMB2, and acts on the plasma membrane in which it opens pores leading to ion leakage and cell death. The globular structure of bacteriocin AS-48 consists of five alpha helices enclosing a hydrophobic core. The mammalian NK-lysin effector protein of T and natural killer cells has a similar structure, though it lacks sequence homology with bacteriocins AS-48.
References
Further reading
External links
Class II bacteriocin and related families are variously recorded in Pfam and InterPro; the naming is inconsistent at times.
Protein domains
Protein families
Peripheral membrane proteins | Class II bacteriocin | [
"Biology"
] | 924 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,740,796 | https://en.wikipedia.org/wiki/Isotopically%20pure%20diamond | An isotopical pure diamond is a type of diamond that is composed entirely of one isotope of carbon. Isotopically pure diamonds have been manufactured from either the more common carbon isotope with mass number 12 (abbreviated as 12C) or the less common 13C isotope. Compared to natural diamonds that are composed of a mixture of 12C and 13C isotopes, isotopically pure diamonds possess improved characteristics such as increased thermal conductivity. Thermal conductivity of diamonds is at a minimum when 12C and 13C are in a ratio of 1:1 and reaches a maximum when the composition is 100% 12C or 100% 13C.
Manufacture
The isotopes of carbon can be separated in the form of carbon dioxide gas by cascaded chemical exchange reactions with amine carbamate. Such CO2 can be converted to methane and from there to isotopically pure synthetic diamonds. Isotopically enriched diamonds have been synthesized by application of chemical vapor deposition followed by high pressure.
Types
Carbon 12
Isotopically pure 12C diamond (in practice, a roughly 15-fold enrichment of 12C over 13C) gives a 50% higher thermal conductivity than the already high value of 900-2000 W/(m·K) for normal diamond, which contains the natural isotopic mixture of 98.9% 12C and 1.1% 13C. This is useful for heat sinks for the semiconductor industry.
Carbon 13
Isotopically pure 13C diamond layers 20 micrometers thick are used as stress sensors due to the advantageous Raman spectroscopy properties of 13C.
References
Synthetic diamond
Isotopes of carbon | Isotopically pure diamond | [
"Chemistry"
] | 333 | [
"Isotopes of carbon",
"Isotopes"
] |
14,740,927 | https://en.wikipedia.org/wiki/Euphyllophyte | The euphyllophytes are a clade of plants within the tracheophytes (the vascular plants). The group may be treated as an unranked clade, a division under the name Euphyllophyta or a subdivision under the name Euphyllophytina. The euphyllophytes are characterized by the possession of true leaves ("megaphylls"), and comprise one of two major lineages of extant vascular plants. As shown in the cladogram below, the euphyllophytes have a sister relationship to the lycopodiophytes or lycopsids. Unlike the lycopodiophytes, which consist of relatively few presently living or extant taxa, the euphyllophytes comprise the vast majority of vascular plant lineages that have evolved since both groups shared a common ancestor more than 400 million years ago. The euphyllophytes consist of two lineages, the spermatophytes or seed plants such as flowering plants (angiosperms) and gymnosperms (conifers and related groups), and the Polypodiophytes or ferns, as well as a number of extinct fossil groups.
The division of the extant tracheophytes into three monophyletic lineages is supported in multiple molecular studies. Other researchers argue that phylogenies based solely on molecular data without the inclusion of carefully evaluated fossil data based on whole plant reconstructions, do not necessarily completely and accurately resolve the evolutionary history of groups like the euphyllophytes.
The following cladogram shows a 2004 view of the evolutionary relationships among the taxa described above.
An updated phylogeny of both living and extinct Euphyllophytes with plant taxon authors from Anderson, Anderson & Cleal 2007.
References
Plants | Euphyllophyte | [
"Biology"
] | 370 | [
"Plants"
] |
14,742,347 | https://en.wikipedia.org/wiki/Power%20screed | A power concrete screed is a tool used to smooth and level freshly poured concrete surfaces. It can be used in place of a man-powered screed bar to strike off excess concrete. A power screed works by consolidating and/or vibrating the wet concrete mixture. The screed moves back and forth, as friction screeds or "roller" screeds level the concrete, filling holes and lowering any high spots. Power screeds can be powered by gas, electricity or hydraulics.
Before the mix dries, the concrete should be smoothed out over the desired surface. The compaction performance of a power concrete screed is mainly determined by the centrifugal force of the vibration and only to a minor extent by its static weight.
References
Concrete | Power screed | [
"Engineering"
] | 168 | [
"Structural engineering",
"Concrete"
] |
14,743,159 | https://en.wikipedia.org/wiki/City%20Solar | City Solar AG is a producer of large-scale photovoltaic power plants, taking care of all aspects of production. This includes site location, planning, construction, and management. The company was started in 2002 in Bad Kreuznach, Germany, but now has offices in Saarbrücken, Berlin, Chemnitz, Augsburg, and Madrid.
City Solar has produced over a dozen power stations, including the world's largest photovoltaic power plant, located in Beneixama, Spain. The Beneixama photovoltaic power plant is a 10 MWp power station with 100,000 solar modules, encompassing an area of approximately 500,000 m². As of 2007, City Solar had 4 more plants under construction or in development.
See also
Photovoltaic power stations
List of photovoltaics companies
References
Solar energy companies of Germany
Photovoltaics manufacturers | City Solar | [
"Engineering"
] | 186 | [
"Photovoltaics manufacturers",
"Engineering companies"
] |
14,743,352 | https://en.wikipedia.org/wiki/Software%20incompatibility | Software incompatibility is a characteristic of software components or systems which cannot operate satisfactorily together on the same computer, or on different computers linked by a computer network. They may be components or systems which are intended to operate cooperatively or independently. Software compatibility is a characteristic of software components or systems which can operate satisfactorily together on the same computer, or on different computers linked by a computer network. It is possible that some software components or systems may be compatible in one environment and incompatible in another.
Examples
Deadlocks
Consider sequential programs of the form:
Request resource A
Request resource B
Perform action using A and B
Release resource B
Release resource A
A particular program might use a printer (resource A) and a file (resource B) in order to print the file.
If several such programs P1,P2,P3 ... operate at the same time, then the first one to execute will block the others until the resources are released, and the programs will execute in turn. There will be no problem. It makes no difference whether a uni-processor or a multi-processor system is used, as it's the allocation of the resources which determines the order of execution.
Note, however, that programmers are, in general, not constrained to write programs in a particular way, or even if there are guidelines, then some may differ from the guidelines. A variant of the previous program may be:
Request resource B
Request resource A
Perform action using A and B
Release resource A
Release resource B
The resources A and B are the same as in the previous example – not simply dummy variables, as otherwise the programs are identical.
As before, if there are several such programs, Q1,Q2,Q3 which run at the same time using resources as before, there will be no problem.
However, if several of the Ps are set to run at the same time as several of the Qs, then a deadlock condition can arise. Note that the deadlock need not arise, but may.
P: Request resource A
Q: Request resource B
Q: Request resource A (blocked by P)
P: Request resource B (blocked by Q)
...
Now neither P nor Q can proceed.
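A minimal runnable sketch of the same situation using Python threads (lock and function names are illustrative):

import threading

lock_a = threading.Lock()  # resource A, e.g. the printer
lock_b = threading.Lock()  # resource B, e.g. the file

def program_p():
    with lock_a:      # P requests A first...
        with lock_b:  # ...then B
            pass      # perform action using A and B

def program_q():
    with lock_b:      # Q requests B first...
        with lock_a:  # ...then A, the opposite order to P
            pass

# Any number of P's (or any number of Q's) run safely in turn, but starting
# a P and a Q together can interleave exactly as in the trace above, leaving
# each thread blocked on the lock the other already holds; actually starting
# both threads may therefore hang the program.
threads = [threading.Thread(target=program_p), threading.Thread(target=program_q)]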
This is one kind of example where programs may demonstrate incompatibility.
Interface incompatibility
Another example of a different kind would be where one software component provides service to another. The incompatibility could be as simple as a change in the order of parameters between the software component requesting service, and the component providing the service. This would be a kind of interface incompatibility. This might be considered a bug, but could be very hard to detect in some systems. Some interface incompatibilities can easily be detected during the build stage, particularly for strongly typed systems, others may be hard to find and may only be detected at run time, while others may be almost impossible to detect without a detailed program analysis.
Consider the following example:
Component P calls component Q with parameters x and y. For this example, y may be an integer.
Q returns f(x) which is desired and never zero, and ignores y.
A variant of Q, Q' has similar behaviour, with the following differences:
if y = 100, then Q' does not terminate.
If P never calls Q with y set to 100, then using Q' instead is a compatible computation.
However if P calls Q with y set to 100, then using Q' instead will lead to a non-terminating computation.
If we assume further that f(x) has a numeric value, then component Q'' defined as:
Q'' behaves as Q except that
if y = 100 then Q'' does not terminate
if y = 101 then Q'' returns 0.9 * f(x)
if y = 102 then Q'' returns a random value
if y = 103 then Q'' returns 0.
may cause problem behaviour. If P now calls Q'' with y = 101, then the results of the computation will be incorrect, but may not cause a program failure. If P calls Q'' with y = 102 then the results are unpredictable, and failure may arise, possibly due to divide by zero or other errors such as arithmetic overflow.
If P calls Q'' with y= 103 then in the event that P uses the result in a division operation, then a divide by zero failure may occur.
This example shows how one program P1 may be always compatible with another Q1, but that there can be constructed other programs Q1' and Q1'' such that P1 and Q1' are sometimes incompatible, and P1 and Q1'' are always incompatible.
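The behaviour described above can be sketched in Python; all names here are hypothetical stand-ins for the components P, Q and Q'':

import random

def f(x):
    return x * x + 1       # some desired, never-zero function of x

def q(x, y):
    return f(x)            # Q ignores y entirely

def q_double_prime(x, y):  # Q'': behaves like Q except for a few y values
    if y == 100:
        while True:        # non-terminating computation
            pass
    if y == 101:
        return 0.9 * f(x)  # silently incorrect result
    if y == 102:
        return random.random()  # unpredictable result
    if y == 103:
        return 0           # zero result: breaks a caller that divides by it
    return f(x)

try:
    print(100 / q_double_prime(3, 103))  # P divides by the result...
except ZeroDivisionError:
    print("P fails when paired with Q''")  # ...and fails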
Performance incompatibility
Sometimes programs P and Q can be running on the same computer, and the presence of one will inhibit the performance of the other. This can particularly happen where the computer uses virtual memory. The result may be that disk thrashing occurs, and one or both programs will have significantly reduced performance. This form of incompatibility can occur if P and Q are intended to cooperate, but can also occur if P and Q are completely unrelated but just happen to run at the same time. An example might be if P is a program which produces large output files, which happen to be stored in main memory, and Q is an anti-virus program which scans many files on the hard disk. If a memory cache is used for virtual memory, then it is possible for the two programs to interact adversely and the performance of each will be drastically reduced.
For some programs P and Q their performance compatibility may depend on the environment in which they are run. They may be substantially incompatible if they are run on a computer with limited main memory, yet it may be possible to run them satisfactorily on a machine with more memory. Some programs may be performance incompatible in almost any environment.
See also
Backward compatibility
Forward compatibility
References
C. M. Krishna, K. G. Shin, Real-Time Systems, McGraw-Hill, 1997
Incompatibility
Software | Software incompatibility | [
"Technology",
"Engineering"
] | 1,242 | [
"Telecommunications engineering",
"Software engineering",
"Computer science",
"nan",
"Interoperability",
"Software"
] |
14,743,376 | https://en.wikipedia.org/wiki/Hardy%E2%80%93Ramanujan%20theorem | In mathematics, the Hardy–Ramanujan theorem, proved by Ramanujan and checked by Hardy states that the normal order of the number of distinct prime factors of a number is .
Roughly speaking, this means that most numbers have about this number of distinct prime factors.
Precise statement
A more precise version states that for every real-valued function $\psi(n)$ that tends to infinity as $n$ tends to infinity,
$|\omega(n) - \log\log n| < \psi(n)\sqrt{\log\log n},$
or more traditionally
$|\omega(n) - \log\log n| < (\log\log n)^{1/2 + \varepsilon},$
for almost all (all but an infinitesimal proportion of) integers. That is, let $g(x)$ be the number of positive integers $n$ less than $x$ for which the above inequality fails: then $g(x)/x$ converges to zero as $x$ goes to infinity.
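As a quick numerical illustration (our own check, not part of the theorem's statement), the average of $\omega(n)$ up to $x$ stays close to $\log\log x$:

import math

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

x = 10 ** 5
mean = sum(omega(n) for n in range(2, x)) / (x - 2)
print(mean, math.log(math.log(x)))  # both come out near 2.4-2.7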
History
A simple proof of the result was given by Pál Turán, who used the Turán sieve to prove that
$\sum_{n \le x} |\omega(n) - \log\log n|^2 \ll x \log\log x.$
Generalizations
The same results are true of $\Omega(n)$, the number of prime factors of $n$ counted with multiplicity.
This theorem is generalized by the Erdős–Kac theorem, which shows that $\omega(n)$ is essentially normally distributed. There are many proofs of this, including the method of moments (Granville & Soundararajan) and Stein's method (Harper). It was shown by Durkan that a modified version of Turán's result allows one to prove the Hardy–Ramanujan theorem with any even moment.
See also
Almost prime
Turán–Kubilius inequality
References
Further reading
Theorems in analytic number theory
Theorems about prime numbers | Hardy–Ramanujan theorem | [
"Mathematics"
] | 278 | [
"Theorems in mathematical analysis",
"Theorems in number theory",
"Theorems in analytic number theory",
"Theorems about prime numbers"
] |
14,743,458 | https://en.wikipedia.org/wiki/Transcendent%20Man | Transcendent Man is a 2009 documentary film by American filmmaker Barry Ptolemy about inventor, futurist and author Ray Kurzweil and his predictions about the future of technology in his 2005 book, The Singularity is Near. In the film, Ptolemy follows Kurzweil around his world as he discusses his thoughts on the technological singularity, a proposed advancement that will occur sometime in the 21st century when progress in artificial intelligence, genetics, nanotechnology, and robotics will result in the creation of a human-machine civilization.
William Morris Endeavor distributed the film partnership with Ptolemaic Productions and Therapy Studios, using an original model involving a nationwide screening tour of the film (featuring Q&A sessions with Ptolemy and Kurzweil), as well as separate digital and DVD releases. The film was also released on iTunes and On-Demand on March 1, 2011, and on DVD on May 24, 2011.
The film debuted for the first public screening at the Time-Life Building in New York City on February 3, 2011. The same week, Time ran the Singularity cover story by Lev Grossman, with coverage about Kurzweil's ideas and the concepts, citing Transcendent Man. Kurzweil toured the film, appearing on Fox News Channel, CNN, MSNBC, Bloomberg News, and Charlie Rose. Additionally, Kurzweil went on to discuss the film on The Colbert Report, Jimmy Kimmel Live!, and Real Time with Bill Maher.
Synopsis
Raymond Kurzweil, noted inventor and futurist, is a man who refuses to accept the inevitability of physical death. He proposes that the Law of Accelerating Returns—the exponential increase in the growth of information technology—will result in a "singularity", a point where humanity and machines will merge, allowing one to transcend biological mortality: advances in genetics will provide the knowledge to reprogram biology, eliminate disease and stop the aging process; nanotechnology will keep humans healthy from the inside using robotic "red blood cells" and provide a human-computer interface within the brain; robotics, or artificial intelligence, will make superhuman intelligence possible, including the ability to back up the mind.
Much of the movie carries religious overtones, presenting technology as a means of attaining what would traditionally be considered god-like powers through interdependent connection. Kurzweil has been criticized as being a modern-day prophet; however, the film describes a detailed list of his inventions. Kurzweil's dedication to improving the quality of life of blind people is displayed in the climax of the film, with his miniature reading device for the blind. He speaks of emailing someone a blouse, or printing out a toaster using nanotechnology. Eventually, swarms of our nanotechnology will be sent into the universe to, as Kurzweil puts it, "wake up the universe".
Against this optimistic backdrop of human and machine evolution, concerns about Kurzweil's predictions are raised by technology experts, philosophers, and commentators. Physician William B. Hurlbut warns of tragedy and views Kurzweil's claims as lacking the more moderate approach necessitated by biological science. AI engineer Ben Goertzel champions the transhumanist vision, but acknowledges the possibility of a dystopian outcome. AI researcher Hugo de Garis warns of a coming "Artilect War", in which god-like artificial intellects, and those who want to build them, will fight against those who don't. Kevin Warwick, professor of Cybernetics at the University of Reading, advocates the benefits of the singularity but suggests a Terminator-style scenario could also occur, in which humans become subservient to machines and are kept as if on a farm; for Warwick, the singularity is the point where humans lose control to the intelligent machines, spelling doom for anyone who remains merely human afterwards. Dean Kamen observes that advances in technology have finally made immortality a reasonable goal. At the end of the film, Kurzweil states, "if I was asked if god exists, I would say not yet."
Cast
Tom Abate, Technology Reporter, San Francisco Chronicle.
Hugo De Garis, Professor of Computer Science and Mathematical Physics, Xiamen University.
Peter Diamandis, Chairman, X Prize Foundation.
Neil Gershenfeld, Director, Center for Bits and Atoms, MIT.
Ben Goertzel, Artificial Intelligence Engineer.
William Hurlbut, Consulting Professor in the Neuroscience Institute at Stanford University.
Kevin Kelly, Co-founder, Wired.
Aaron Kleiner, Kurzweil Technologies
Hannah Kurzweil, mother of Ray Kurzweil
Ray Kurzweil
Sonya R. Kurzweil, wife of Ray Kurzweil
Robert Metcalfe, co-inventor of Ethernet, founder of 3Com
Chuck Missler, Technologist/Koinonia Institute
Colin Powell, retired four-star General in the United States Army.
Steve Rabinowitz, college friend from MIT.
Philip Rosedale, creator of Second Life
William Shatner
Kevin Warwick, Professor of Cybernetics, University of Reading.
Stevie Wonder
Music
American composer Philip Glass scored the original soundtrack for the film. In addition to the Transcendent Man score, other music from Glass's collection was included in the soundtrack.
"A Brief History of Time"
"Koyaanisqatsi"
"Kyoko's House" (from Mishima)
"Religion" (from Naqoyqatsi)
"Satyagraha Act III" (Conclusion)
"Symphony No. 3"
"The Thin Blue Line"
"Tirol Concerto for Piano and Orchestra"
Release
The Transcendent Man tour visited five major cities in the U.S., as well as London. These screenings featured question and answer sessions with director Barry Ptolemy and Ray Kurzweil following the film, as well as V.I.P. receptions.
Ptolemaic Productions and Therapy Studios have pursued an alternative distribution strategy for Transcendent Man, going through the Global and Music departments of agency William Morris Endeavor to partner with iTunes and Media-on-Demand for a March 1, 2011 digital release and with New Media for a May 24, 2011 DVD release. Marketing made use of social media and emerging technologies like QR codes to appeal to a tech-savvy audience.
Film festivals
April 28, 2009 - Tribeca Film Festival, World Documentary Feature Competition.
November 5, 2009 - American Film Institute film festival, Los Angeles.
November 24, 2009 - International Documentary Film Festival Amsterdam (IDFA), Amsterdam, Netherlands, screened in competition.
March 2010 - Martha's Vineyard Film Festival.
Criticism
One common criticism of Kurzweil's final prediction is that he does not consider that new technologies are never universally and immediately adopted, due to the laws of economics. Start-up costs and economies of scale mean that transhumanist technology would initially be prohibitively expensive for most people, allowing wealthy early adopters of brain-enhancing technology to transcend the less fortunate. One response to this criticism uses the automobile as an example: even though a rich person might drive an expensive Rolls-Royce, cheaper alternatives are available that perform the same task. In other words, no matter how much two cars differ in price, their function is virtually identical. One important element of Kurzweil's singularity is that the cost will eventually come down to virtually nothing.
Kurzweil readily defends AI as being controllable against malicious behavior, which he accepts is a definite threat. He never, on the other hand, confronts the dangers of AI fusing with the first humans.
References
Further reading
Barker, A. (November 23, 2009). Transcendent Man. Variety. 417 (2), 34.
Gefter, A. (May 8, 2009). Film review: Transcendent Man. New Scientist. 202 (2707), 27.
Shermer, M. (April 1, 2011). The Immortalist. Science. 332 (6025), 40.
Tucker, P. (2009). The Cinematic Singularitarian. The Futurist. 43 (5), 60.
External links
Transcendent Man on YouTube
2009 films
Biographical documentary films
Documentary films about technology
Documentary films about death
Films scored by Philip Glass
American independent films
Transhumanism
Futurology documentaries
2000s English-language films
2000s American films
K's Choice compilation albums | Transcendent Man | [
"Technology",
"Engineering",
"Biology"
] | 1,740 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
14,744,729 | https://en.wikipedia.org/wiki/Joshua%20Hendy%20Iron%20Works | The Joshua Hendy Iron Works was an American engineering company that existed from the 1850s to the late 1940s. It was at one time a world leader in mining technology and its equipment was used in constructing the Panama Canal, amongst other major projects. The company went on to serve many different markets during the course of its existence, but is perhaps best remembered today for its contribution to the American shipbuilding industry during World War II.
Beginnings
The company was named after its founder Joshua Hendy. Born in Cornwall, England in 1822, Hendy at the age of 13 migrated with two brothers to South Carolina in the United States. Joshua married and became a blacksmith in Houston, Texas. After the death of his wife and two children from yellow fever, he sailed around Cape Horn to San Francisco in 1849 to participate in the California Gold Rush.
Hendy built California's first redwood lumber mill, the Benicia Sawmill (the region is now known as the Hendy Woods State Park). In 1856, he established the Joshua Hendy Iron Works in San Francisco to supply equipment to Gold Rush placer miners. The Hendy plant supplied various equipment to the mining industry.
Mining industry leader
By the 1890s, the Joshua Hendy Iron Works was a leader in the mining industry, supplying equipment to mining companies globally, including ore carts, ore crushers, stamp and ball mills and other equipment.
Many of the engineering innovations developed by Hendy became mining industry standards, employed as late as the 1970s, such as the hydraulic giant monitor, the tangential water wheel, the Hendy ore concentrator, the Challenge ore feeder, and the Hendy hydraulic gravel elevator. Hendy giant hydraulic crushers were used to excavate the Panama Canal.
After Joshua Hendy died in 1891, management of the company was taken over by his nephews Samuel and John. After the April 18, 1906 earthquake a fire devastated the original San Francisco factory, and the company was re-established in Sunnyvale, California after the local government enticed the company with free land. Samuel died after a short illness on March 14, 1906 and was succeeded as president of the company by his brother John.
World War I
During World War I, the Hendy plant gained its first experience building marine engines by supplying 11 triple expansion steam engines for cargo ships built by Western Pipe & Steel, for the U.S. Shipping Board. Each engine weighed about 137 tons and stood 24½ feet high. Although the first marine engines built by Hendy, they proved to be reliable, with most providing many years of service. Essentially the same engine design (with minor improvements) was used by the company for its mass production of US Liberty ship engines in World War II.
Interwar period
In the early 1920s, Hendy's hydraulic mining equipment was used in the regrading of Seattle, described as perhaps the largest such alteration of urban terrain in history.
With the onset of the Great Depression however, and hampered by indifferent management, the Hendy Iron Works - like many other heavy equipment manufacturers of the era - fell on hard times. The company adapted by finding new markets, for example by contracting for the building of giant gates and valves for the hydroelectric schemes of the Hoover and Grand Coulee dams. During this period it also produced equipment as diverse as crawler tractors, freight car wheel pullers, parts for internal combustion engines and standards for street lamps. Some of the ornate street lamps built by the company can still be seen in San Francisco's Chinatown district today.
World War II
By the late 1930s the company was in financial difficulties and had shrunk to a shadow of its former self, employing only 60 workers. The company was in the process of being taken over by the Bank of California in 1940 when businessman Charles E. Moore, with the financial support of the Six Companies, took a controlling interest. Moore soon managed to contract with the US Navy for the building of some torpedo tube mounts, and shortly thereafter he secured a contract for the building of twelve triple expansion marine steam engines.
By 1942, with the US government's wartime Emergency Shipbuilding Program getting under way, it became clear that a large number of new marine engines would be needed to power the new ships. Since there was a shortfall in capacity to produce modern steam turbines, it was realized that most of the new Liberty ships would have to be fitted with older and slower reciprocating steam engines instead. Admiral Vickery contacted Moore to ask if he could double the original order of 12 engines, to which Moore is reported to have responded that it would be as easy to tool up for a hundred as for a dozen. The company was then contracted to build 118 triple expansion steam engines for the Liberty ships.
As the war progressed and the emergency shipbuilding program continued to expand, so the orders for new engines also grew. Moore responded by streamlining production at the Joshua Hendy plant. He introduced more advanced assembly line techniques, standardizing on more production parts and enabling less skilled workers to accomplish tasks formerly carried out by skilled machinists. By 1943, the company had reduced the time required to manufacture a marine steam engine from 4,500 hours to 1,800 hours. The number of workers employed by the company also grew dramatically, reaching a peak of 11,500 during the war.
By the end of the war, the Joshua Hendy Iron Works had supplied the engines for 754 of America's 2,751 Liberty ships, or about 28% of the total, more than any other plant in the country. It also supplied the main engines (two per ship) for all ships of one class built on the West Coast: 18 by Consolidated Steel in Wilmington and 12 by Kaiser Shipyards in Richmond, plus 15 more built by Great Lakes shipyards and 7 in Rhode Island. In addition, the company in the late stages of the war produced 53 steam turbines and reduction gears for the more modern Victory ships.
Postwar developments
In 1947, the Joshua Hendy Iron Works was sold to the Westinghouse Corporation. In the postwar period, the plant continued to produce military equipment including missile launching and control systems for nuclear-powered submarines, and antiaircraft guns. It also produced pressure hulls for undersea vehicles, nuclear power plant equipment, wind tunnel compressors, large diameter radio telescopes, diesel engines and electrical equipment.
In 1996, Westinghouse sold the plant to Northrop Grumman, which renamed it Northrop Grumman Marine Systems.
As a legacy, the Big Thunder Mountain Railroad attraction at Walt Disney World features Joshua Hendy mining equipment in its queue.
References
Bibliography
External links
Charles E. Moore website.
Illustrations of a Joshua Hendy stamp mill, early 1900s - MS Book and Mineral Company website.
Iron Man Museum.
Working at the Joshua Hendy Iron Works - employee memoir from the Sunnyvale Public Library.
American companies established in 1856
Defunct engineering companies of the United States
History of the San Francisco Bay Area
Companies based in Sunnyvale, California
Mining equipment companies
Historic Mechanical Engineering Landmarks | Joshua Hendy Iron Works | [
"Engineering"
] | 1,406 | [
"Mining equipment",
"Mining equipment companies"
] |
14,744,934 | https://en.wikipedia.org/wiki/Nokia%202110 | The Nokia 2110 is a cellular phone made by the Finnish telecommunications firm Nokia, first announced and released in January 1994. It is the first Nokia phone with the famous Nokia tune ringtone. The phone can send and receive SMS messages and keeps lists of the last ten dialed, ten received and ten missed calls. At the time of its release it was smaller than comparably priced phones and had a bigger display, so it became very popular. It also features a "revolutionary" new user interface with two dynamic softkeys, which would later lead to the development of the Navi-key on its successor, the Nokia 6110, as well as the Series 20 interface.
A later version, the Nokia 2110i, released in 1996, comes with more memory and a protruding antenna knob.
A variant model, the Nokia 2140 (more popularly called the Nokia Orange), was the launch handset on the Orange network (now EE). It differed in being designed to work on the 1800 MHz frequency then utilised by Orange, and had a slightly less bulbous design.
A North American model, the Nokia 2190, was also available. It was among the earlier phones available on Pacific Bell Mobile Services' and Powertel's newly launched GSM 1900 networks in 1995. A version for Digital AMPS was produced as the Nokia 2120.
Another variant, the Nokia C6, was introduced in 1997 for Germany's analogue C-Netz.
See also
HP OmniGo 700LX, a palmtop PC with built-in Nokia 2110
References
External links
full phone specifications
A Nokia 2110 User Manual
2110
Mobile phones introduced in 1994 | Nokia 2110 | [
"Technology"
] | 343 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
14,745,018 | https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Science%20Education%20and%20Research%2C%20Mohali | Indian Institute of Science Education and Research, Mohali (IISER Mohali) is an autonomous public research institute established in 2007 at Mohali, Punjab, India. It is one of the seven Indian Institutes of Science Education and Research (IISERs), established by the Ministry of Human Resource Development, Government of India, to conduct research in frontier areas of science and to provide science education at the undergraduate and postgraduate levels. It was established after IISER Pune and IISER Kolkata and is recognized as an Institute of National Importance by the Government of India. The institute focuses on pure research as well as interdisciplinary research in various fields of science.
History
The institute was approved by the Planning Commission, New Delhi, in July 2006, and land was provided by the Punjab state government. The foundation stone of IISER Mohali was laid on 27 September 2006 by the then Prime Minister of India, Manmohan Singh. The Computing Facility of IISER Mohali was inaugurated on 3 September 2007 by T. Ramasami (Secretary, Department of Science and Technology). The ground-breaking ceremony for the IISER boundary wall was held on 29 December 2008 at the proposed campus site in Knowledge City, Sector 81, S.A.S. Nagar. The ceremony was performed by N. Sathyamurthy, the founding director of the institute.
C.N.R. Rao inaugurated the Chemistry Research Laboratory on 8 April 2009. The Central Analytical Facility of IISER Mohali was inaugurated in March 2010. Initially, the institute operated from a transit campus at the Mahatma Gandhi State Institute of Public Administration (MGSIPA), Chandigarh. In March 2010, the institute started shifting to its permanent campus in the Knowledge City at Sector 81 with the opening of the Central Analytical Facility (CAF), and it completed the move in May 2013 by shutting operations at the MGSIPA complex, Sector 26, Chandigarh.
Academics
Academic Programs
The institute offers the following programs:
Integrated Master's level (B.S.-M.S.): Admission to this program is after 10+2 years of school training and is done through the IISERs Joint Admissions Committee.
Integrated Doctoral Program (Int. Ph.D.): Integrated Ph.D. involves a master's degree (M.S.) followed by a doctorate (Ph.D.). Students after three years of undergraduate education can join the program.
Doctoral Program (Ph.D.): IISER Mohali has a separate doctoral program, in hard sciences or in the Humanities & Social Sciences Department, which requires a master's degree as qualification.
Admissions
Admissions to undergraduate (UG) courses in IISERs are made exclusively through the IISER Aptitude Test (IAT).
Reputation and Rankings
The National Institutional Ranking Framework (NIRF) ranked it 49 in research and 64 overall in India in 2024.
Organization and administration
Departments
IISER Mohali currently has six departments:
Department of Physical Sciences
Department of Chemical Sciences
Department of Mathematical Sciences
Department of Biological Sciences
Department of Earth Science
Department of Humanities and Social Sciences
Facilities
NMR Research Facility (NMR)
X-ray Facility - X-ray Diffraction Crystallograph
Cell Culture Facility
Animal house
Atomic Force microscope
Laser Raman and AFM Facility-Raman Infrared spectroscope
Circular Dichroic Spectrometer
Atmospheric Chemistry Facility
Computing Facility
Scanning Electron Microscopy
DC Sputtering
PLD Machine
Cryostat
Dilution refrigerator
Liquid Helium Facility
Liquid Nitrogen Facility
FemtoLaser facility
Proton Transfer Reaction Mass Spectrometer (PTR-MS)
Laser micro-Raman spectroscope
Single Crystal X-ray Diffractometer
Crystal Growth Laboratory
PPMS
SQUID
Tetra and mono arc furnace
Tube furnace
Conferences held
7th JNOST Conference: 15–18 December 2011
History of Chemistry in India, 2013
Conference on Nonlinear Systems and Dynamics, 2013
ICTS program: Knot theory and its Applications, 10–20 December 2013.
43rd National Seminar on Crystallography: 28–30 March 2014
32nd meeting of the Astronomical Society of India (ASI): 20–22 March 2014
International Workshop "Knots, Braids and Topology", 15–17 October 2014
International Workshop "ATMW: Lattices--Geometry and Dynamics", 17–22 December 2014
National Conference on Ethology and Evolution (30 October to 1 November 2015)
International Conference on Gravitation and Cosmology (ICGC) 2015
Conference on Nonlinear Systems and Dynamics, 2015
30th Annual Conference of the Ramanujan Mathematical Society, 15–17 May 2015.
GIAN course on "Quantum Criticality in Heavy Fermions: an Experimental Perspective", 22–28 March 2018
National Conference On Quantum Condensed Matter, 25–27 July 2018
9th International Conference on Gravitation and Cosmology, 10–13 December 2019
Student life
Amenities
Health Centre
Counseling Service
Accommodation & Transport including visitors hostel
World Class Library of 8 levels
Sports Complex complete with two courts each for basketball, tennis, and volleyball
Cricket cum Football ground in the stadium which has a seating capacity of 1000
Computer Centre with High-Performance Scientific Computing cluster
Various labs
Gym
National Science Day celebrations
National Science Day celebrations on 28 February are a regular feature at IISER Mohali each year. Invitations are sent to schools in Mohali, Chandigarh, Panchkula and nearby areas.
The focus of the day is on science and mathematics demonstrations prepared by IISER Mohali students and faculty members. A large number of schools send teams for the inter-school competitions held on this day, such as science quizzes, group discussions, treasure hunts, junkyard wars and poster presentations. Other non-competitive events, such as documentary screenings and anti-superstition demonstrations, are also held. The day usually ends with a panel discussion in which the school students put science-related questions to a panel of IISER Mohali faculty members.
Since 2015, the Science Day celebrations have been shifted to 27 September, IISER Mohali's Foundation Day, as this date is more convenient for school students in the region.
Opportunity Cell
The Opportunity Cell was first proposed by the Student Representative Council in October 2011 as a joint student-faculty body to provide guidance to students about research and job opportunities. In 2012-13, the cell established summer research and internship programmes with the National Centre for Biological Sciences (NCBS), Bangalore, Connexios Life Sciences and Lucid Software Limited (Lucid). It also organised seminars such as "Alternative Careers in Science" and "Research Opportunities at the University of St Andrews". Currently the cell disseminates information about summer research programmes, PhD positions and research-oriented jobs.
Magazine
Manthan, IISER Mohali's student magazine, was revived in the summer of 2018 after a long gap. Six editions, along with a lockdown "Life in Quarantine" edition, have been published since its revival.
Clubs
1. Phi@i - Physics Club
2. Biology Discussion Forum (BDF)
3. Infinity - Math Club
4. Turing Club- Computation Club
5. Curie Club - Chemistry Club
6. Robotics Club
7. Lumière - Photography Club
8. Itehad - Dance Club
9. Aria - Music Club
10. Ambient - Environment Club
11. Miles - Running Club
12. DarPan - Drama Club
13. Literary and Debating Society (LDS)
14. Rang - Art club
15. IISER Mohali Quiz Club (IMQC)
16. Astronomy Club
17. Movie club
18. IEC - Entrepreneurship Club
19. Adventure Sports Club - Trekking and other outdoor activities
20. Gaming club
21. IMLC - IISER Mohali LGBTQ collective.
Notable people
Current faculty
Inder Bir Singh Passi, Bhatnagar Prize winning Mathematician
Anand Kumar Bachhawat, Geneticist and Biochemist
Kausik Chattopadhyay, N-Bios laureate
Kapil Hari Paranjape, Bhatnagar Prize winning Mathematician
Sudeshna Sinha, Physicist
Anu Sabhlok, architect and a well-known geographer and feminist scholar
Somdatta Sinha, theoretical biologist
Debi Prasad Sarkar, Bhatnagar Prize winning biochemist
Former Faculty
Meera Nanda, Historian and Philosopher of Science
Narayanasami Sathyamurthy, Bhatnagar Prize winning Chemist and President of Chemical Research Society of India. He was the director of IISER Mohali from 2007 to 2017
References
External links
2007 establishments in Punjab, India
Mohali
Chemical research institutes
Research institutes established in 2007
Research institutes in Punjab, India
Education in Mohali | Indian Institute of Science Education and Research, Mohali | [
"Chemistry"
] | 1,745 | [
"Chemical research institutes"
] |
14,745,373 | https://en.wikipedia.org/wiki/Internet%20checksum | The Internet checksum, also called the IPv4 header checksum, is a checksum used in version 4 of the Internet Protocol (IPv4) to detect corruption in the header of IPv4 packets. It is carried in the IPv4 packet header and represents the 16-bit result of the summation of the header words.
The IPv6 protocol does not use header checksums. Its designers considered that the whole-packet link layer checksumming provided in protocols, such as PPP and Ethernet, combined with the use of checksums in upper layer protocols such as TCP and UDP, are sufficient. Thus, IPv6 routers are relieved of the task of recomputing the checksum whenever the packet changes, for instance by the lowering of the hop limit counter on every hop.
The Internet checksum is also mandatory in UDP packets carried over IPv6, and it is used to detect errors in ICMP packets; in both cases the checksum covers the data payload as well.
Computation
The checksum calculation is defined as follows:
The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero.
If there is no corruption, the result of summing the entire IP header, including checksum, and then taking its one's complement should be zero. At each hop, the checksum is verified. Packets with checksum mismatch are discarded. The router must adjust the checksum if it changes the IP header (such as when decrementing the TTL).
The procedure is explained in detail in RFC 1071 "Computing the Internet Checksum". Optimizations are presented in RFC 1624 "Computation of the Internet Checksum via Incremental Update", to cover the case in routers that need to recompute the header checksum during packet forwarding when only a single field has changed.
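As a sketch of the incremental method, the following Python fragment applies the RFC 1624 update equation, HC' = ~(~HC + ~m + m'), where m and m' are the old and new values of the changed 16-bit field. The function names and the TTL-decrement scenario (which reuses the example header from the next section) are illustrative only, not taken from the RFC.

```python
def ones_complement_fold(value):
    # Fold any carries above bit 15 back into the low 16 bits
    while value > 0xFFFF:
        value = (value & 0xFFFF) + (value >> 16)
    return value

def incremental_update(hc, old_word, new_word):
    """RFC 1624: HC' = ~(~HC + ~m + m'), in 16-bit one's complement."""
    total = (~hc & 0xFFFF) + (~old_word & 0xFFFF) + new_word
    return ~ones_complement_fold(total) & 0xFFFF

# The TTL/protocol word of the example header below changes from 0x4011
# (TTL 64) to 0x3f11 (TTL 63); the stored checksum 0xb861 becomes 0xb961.
print(hex(incremental_update(0xb861, 0x4011, 0x3f11)))  # 0xb961
```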
Examples
Calculating the IPv4 header checksum
Take the following truncated excerpt of an IPv4 packet. The header is shown in bold and the checksum is underlined.
4500 0073 0000 4000 4011 b861 c0a8 0001
c0a8 00c7 0035 e97c 005f 279f 1e4b 8180
For one's complement addition, each time a carry occurs, we must add a 1 to the sum. A carry check and correction can be performed with each addition or as a post-process after all additions. If another carry is generated by the correction, another 1 is added to the sum.
To calculate the checksum, we can first calculate the sum of each 16-bit value within the header, skipping only the checksum field itself. Note that these values are in hexadecimal notation.
Initial addition: 4500 + 0073 + 0000 + 4000 + 4011 + c0a8 + 0001 + c0a8 + 00c7 = 2479c
Carry addition is then made by adding the fifth hexadecimal digit to the first 4 digits: 2 + 479c = 479e
The checksum is then the one's complement (bitwise NOT) of this result: NOT 479e = b861
This checksum value is shown as underlined in the original IP packet header above.
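The calculation above can be reproduced in a few lines of Python. This is a minimal sketch (the function name is ours), not code from any particular network stack:

```python
def ipv4_header_checksum(words):
    """One's complement of the one's complement sum of 16-bit words.
    The checksum field itself must be passed as zero (or omitted)."""
    total = sum(words)
    while total > 0xFFFF:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

header = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011,
          0x0000,                              # checksum field, zeroed
          0xc0a8, 0x0001, 0xc0a8, 0x00c7]
print(hex(ipv4_header_checksum(header)))       # 0xb861
```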
Verifying the IPv4 header checksum
When verifying a checksum, the same procedure is used as above, except that the original header checksum is not omitted.
4500 + 0073 + 0000 + 4000 + 4011 + b861 + c0a8 + 0001 + c0a8 + 00c7 = 2fffd
Add the carry bits:
fffd + 2 = ffff
Taking the one's complement (flipping every bit) yields 0000, which indicates that no error is detected.
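The same folding routine verifies a received header: with the stored checksum included, the folded sum of an intact header is ffff. A minimal sketch:

```python
def header_is_valid(words):
    """Return True if the one's complement sum over the full header,
    including the stored checksum word, folds to 0xffff."""
    total = sum(words)
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF

header = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011,
          0xb861,                              # stored checksum
          0xc0a8, 0x0001, 0xc0a8, 0x00c7]
print(header_is_valid(header))                 # True
```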
The IP header checksum does not detect reordering of 16-bit words within the header, since one's complement addition is commutative.
See also
Frame check sequence
Header check sequence
References
External links
Header Checksum
Error detection and correction
Header Checksum | Internet checksum | [
"Engineering"
] | 853 | [
"Error detection and correction",
"Reliability engineering"
] |
14,745,714 | https://en.wikipedia.org/wiki/Pipe%20insulation | Pipe insulation is thermal or acoustic insulation used on pipework.
Applications
Condensation control
Where pipes operate at below-ambient temperatures, the potential exists for water vapour to condense on the pipe surface. Moisture is known to contribute towards many different types of corrosion, so preventing the formation of condensation on pipework is usually considered important.
Pipe insulation can prevent condensation forming, as the surface temperature of the insulation will vary from the surface temperature of the pipe. Condensation will not occur, provided that (a) the insulation surface is above the dewpoint temperature of the air; and (b) the insulation incorporates some form of water-vapour barrier or retarder that prevents water vapour from passing through the insulation to form on the pipe surface.
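As a rough illustration of condition (a), the dewpoint comparison can be sketched in a few lines of Python using the Magnus approximation, a standard engineering estimate of dewpoint. The coefficients, the function name and all numerical values below are assumptions for illustration only, not figures from any standard.

```python
import math

def dewpoint_c(temp_c, rel_humidity_pct):
    """Magnus approximation for dewpoint temperature (degrees C)."""
    a, b = 17.27, 237.7  # commonly quoted Magnus coefficients
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Assumed conditions: 25 degC room air at 60% RH around a chilled pipe
ambient_c, rh = 25.0, 60.0
insulation_surface_c = 18.0   # assumed insulation surface temperature

if insulation_surface_c > dewpoint_c(ambient_c, rh):   # dewpoint ~16.7 degC
    print("condition (a) met: no surface condensation expected")
else:
    print("surface below dewpoint: condensation will form")
```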
Pipe freezing
Since some water pipes are located either outside or in unheated areas where the ambient temperature may occasionally drop below the freezing point of water, any water in the pipework may potentially freeze. When water freezes it expands and this expansion can cause failure of a pipe system in any one of a number of ways.
Pipe insulation cannot prevent the freezing of standing water in pipework, but it can increase the time required for freezing to occur, thereby reducing the risk of the water in the pipes freezing. For this reason, it is recommended that pipework at risk of freezing be insulated, and local water-supply regulations may require that pipe insulation be applied to reduce the risk of pipes freezing.
For a given length, a smaller-bore pipe holds a smaller volume of water than a larger-bore pipe, and therefore water in a smaller-bore pipe will freeze more easily (and more quickly) than water in a larger-bore pipe (presuming equivalent environments). Since smaller-bore pipes present a greater risk of freezing, insulation is typically used in combination with alternative methods of freeze prevention (e.g., modulating trace heating cable, or ensuring a consistent flow of water through the pipe).
Energy saving
Since pipework can operate at temperatures far removed from the ambient temperature, and the rate of heat flow from a pipe is related to the temperature differential between the pipe and the surrounding ambient air, heat flow from pipework can be considerable. In many situations, this heat flow is undesirable. The application of thermal pipe insulation introduces thermal resistance and reduces the heat flow.
Thicknesses of thermal pipe insulation used for saving energy vary, but as a general rule, pipes operating at more-extreme temperatures exhibit a greater heat flow and larger thicknesses are applied due to the greater potential savings.
The location of pipework also influences the selection of insulation thickness. For instance, in some circumstances, heating pipework within a well-insulated building might not require insulation, as the heat that's "lost" (i.e., the heat that flows from the pipe to the surrounding air) may be considered “useful” for heating the building, as such "lost" heat would be effectively trapped by the structural insulation anyway. Conversely, such pipework may be insulated to prevent overheating or unnecessary cooling in the rooms through which it passes.
Protection against extreme temperatures
Where pipework is operating at extremely high or low temperatures, the potential exists for injury to occur should any person come into physical contact with the pipe surface. The threshold for human pain varies, but several international standards set recommended touch temperature limits.
Since the surface temperature of insulation varies from the temperature of the pipe surface, typically such that the insulation surface has a "less extreme" temperature, pipe insulation can be used to bring surface touch temperatures into a safe range.
Control of noise
Pipework can operate as a conduit for noise to travel from one part of a building to another (a typical example of this can be seen with waste-water pipework routed within a building). Acoustic insulation can prevent this noise transfer by acting to damp the pipe wall and performing an acoustic decoupling function wherever the pipe passes through a fixed wall or floor and wherever the pipe is mechanically fixed.
Pipework can also radiate mechanical noise. In such circumstances, the breakout of noise from the pipe wall can be achieved by acoustic insulation incorporating a high-density sound barrier.
Factors influencing performance
The relative performance of different pipe insulation on any given application can be influenced by many factors. The principal factors are:
Thermal conductivity ("k" or "λ" value)
Surface emissivity ("ε" value)
Water-vapour resistance ("μ" value)
Insulation thickness
Density
Other factors, such as the level of moisture content and the opening of joints, can influence the overall performance of pipe insulation. Many of these factors are listed in the international standard EN ISO 23993.
Materials
Pipe insulation materials come in a large variety of forms, but most materials fall into one of the following categories.
Mineral wool
Mineral wools, including rock and slag wools, are inorganic strands of mineral fibre bonded together using organic binders. Mineral wools are capable of operating at high temperatures and exhibit good fire performance ratings when tested.
Mineral wools are used on all types of pipework, particularly industrial pipework operating at higher temperatures.
Glass wool
Glass wool is a high-temperature fibrous insulation material, similar to mineral wool, where inorganic strands of glass fibre are bound together using a binder.
As with other forms of mineral wool, glass-wool insulation can be used for thermal and acoustic applications.
Flexible elastomeric foams
These are flexible, closed-cell, rubber foams based on NBR or EPDM rubber. Flexible elastomeric foams exhibit such a high resistance to the passage of water vapour that they do not generally require additional water-vapour barriers. Such high vapour resistance, combined with the high surface emissivity of rubber, allows flexible elastomeric foams to prevent surface condensation formation with comparatively small thicknesses.
As a result, flexible elastomeric foams are widely used on refrigeration and air-conditioning pipework. Flexible elastomeric foams are also used on heating and hot-water systems.
Rigid foam
Pipe insulation made from rigid Phenolic, PIR, or PUR foam insulation is common in some countries. Rigid-foam insulation has minimal acoustic performance but can exhibit low thermal-conductivity values of 0.021 W/(m·K) or lower, allowing energy-saving legislation to be met whilst using reduced insulation thicknesses.
Polyethylene
Polyethylene is a flexible plastic foamed insulation that is widely used to prevent freezing of domestic water supply pipes and to reduce heat loss from domestic heating pipes.
The fire performance of polyethylene pipe insulation typically meets the 25/50 flame-spread and smoke-developed limits of ASTM E84 up to 1" thickness.
Cellular Glass
Cellular glass insulation is composed of 100% glass, manufactured primarily from sand, limestone and soda ash.
Cellular insulations are composed of small individual cells, either interconnecting or sealed from each other, that form a cellular structure. Glass, plastics, and rubber may serve as the base material, and a variety of foaming agents are used. Cellular insulations are often further classified as either open cell (cells interconnecting) or closed cell (cells sealed from each other); generally, materials with greater than 90% closed-cell content are considered closed-cell materials.
Aerogel
Silica aerogel insulation has the lowest thermal conductivity of any commercially produced insulation. Although no manufacturer currently produces aerogel pipe sections, aerogel blanket can be wrapped around pipework, allowing it to function as pipe insulation.
The use of aerogel for pipe insulation is currently limited.
Heat flow calculations and R-value
Heat flow passing through pipe insulation can be calculated by following the equations set out in either the ASTM C 680 or EN ISO 12241 standards. Heat flux is given by the following equation:
$q = \dfrac{t_i - t_a}{R_t}$

Where:
$t_i$ is the internal pipe temperature,
$t_a$ is the external ambient temperature, and
$R_t$ is the sum total thermal resistance of all insulation layers and the internal- and external-surface heat-transfer resistances.
In order to calculate heat flow, it is first necessary to calculate the thermal resistance ("R-value") for each layer of insulation.
For pipe insulation, the R-value varies not only with the insulation thickness and thermal conductivity ("k-value") but also with the pipe outer diameter and the average material temperature. For this reason, it is more common to use the thermal conductivity value when comparing the effectiveness of pipe insulation, and R-values of pipe insulation are not covered by the US FTC R-value rule.
The thermal resistance of each insulation layer is calculated using the following equation:
$R = \dfrac{x \ln(D_o / D_i)}{2 \lambda}$

Where:
$D_o$ represents the insulation outer diameter,
$D_i$ represents the insulation inner diameter,
$\lambda$ represents the thermal conductivity ("k-value") at the average insulation temperature (for accurate results, iterative calculations are necessary), and
$x$ is $D_o$ if the heat loss calculation uses the outer surface area $A_o$, or $D_i$ if it uses the inner surface area $A_i$.
Calculating the heat transfer resistance of the inner- and outer-insulation surfaces is more complex and requires the calculation of the internal- and external-surface coefficients of heat transfer. Equations for calculating this are based on empirical results and vary from standard to standard (both ASTM C 680 and EN ISO 12241 contain equations for estimating surface coefficients of heat transfer).
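Putting these equations together, the heat loss from an insulated pipe can be estimated with a short script. The following Python sketch assumes a single insulation layer, neglects the internal surface resistance, and uses an assumed fixed external surface coefficient rather than computing one per ASTM C 680 or EN ISO 12241; all numerical values are illustrative only.

```python
import math

def layer_resistance(d_inner, d_outer, k, d_ref):
    """Thermal resistance of one cylindrical layer, R = x*ln(Do/Di)/(2*k),
    referenced to the surface with diameter d_ref (units: m2.K/W)."""
    return d_ref * math.log(d_outer / d_inner) / (2.0 * k)

# Illustrative values: 60 mm pipe at 80 degC in 20 degC ambient air,
# insulated with 30 mm of material with k = 0.035 W/(m.K)
d_pipe = 0.060               # pipe (insulation inner) diameter, m
d_ins = d_pipe + 2 * 0.030   # insulation outer diameter, m
k_ins = 0.035                # thermal conductivity, W/(m.K)
h_out = 10.0                 # assumed external surface coefficient, W/(m2.K)

r_total = layer_resistance(d_pipe, d_ins, k_ins, d_ref=d_ins) + 1.0 / h_out
q = (80.0 - 20.0) / r_total            # heat flux on the outer surface, W/m2
print(f"{q:.1f} W/m2, {q * math.pi * d_ins:.1f} W per metre of pipe")
```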
A number of organisations, such as the North American Insulation Manufacturers Association and Firo Insulation, offer free programs that allow the calculation of heat flow through pipe insulation.
References
External links
Mechanical Insulation Design Guide - National Insulation Association
R-Values by Insulation Material - InspectAPedia
Insulators
Heat transfer
Thermal protection | Pipe insulation | [
"Physics",
"Chemistry"
] | 1,949 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
14,747,033 | https://en.wikipedia.org/wiki/Aqua%20Sciences | Aqua Sciences is a Miami Beach-based company providing advanced water technologies, with a module capable of extracting up to 2,500 gallons of water from the moisture present in the air.
Module
The module is a modified 40-foot trailer, hauled as the trailer of an 18-wheeler, that extracts water from the moisture in the air. Options include additional storage tanks for keeping the water for extended periods of time. It can be powered by an internal diesel generator for a week without needing to refuel, or plugged into the electrical grid. An optional reverse osmosis module increases production to up to 8,000 gallons a day.
Advantages
The process produces no toxic or harmful byproducts. Its only requirement is 14% humidity in the air, so it can be used in deserts. The water provided is also very pure.
Applications
The United States Army has shown interest in the project, mainly because of the high cost of transporting water to its forces. Using the Aqua Sciences module, that cost drops to US$0.15 per gallon, which would provide large logistical savings for the military. It would also be practical for providing water after a natural disaster such as the 2004 Indian Ocean earthquake or Hurricane Katrina.
References
External links
Aqua Sciences Homepage (December 2007)
Making Water From Thin Air (December 2007)
Water technology | Aqua Sciences | [
"Chemistry"
] | 280 | [
"Water technology"
] |
14,747,754 | https://en.wikipedia.org/wiki/List%20of%20states%20and%20union%20territories%20of%20India%20by%20vaccination%20coverage | This is a list of the states of India ranked by the percentage of children between 12 and 23 months of age who received all recommended vaccines, including all required doses of the BCG, hepatitis B, polio, DPT and MMR vaccines. This information was compiled from the National Family Health Survey rounds 4 and 5, published by the International Institute for Population Sciences. Overall vaccination coverage in the country increased from 62.0% in 2015-16 to 76.6% in 2019-21 (urban: 63.9% to 75.5%; rural: 61.3% to 77.0%).
List
Union Territory by vaccination coverage
Notes
References
Vaccination coverage
Vaccination | List of states and union territories of India by vaccination coverage | [
"Biology"
] | 150 | [
"Vaccination"
] |