Dataset columns:
id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
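One way to work with a dump shaped like the schema above is via the Hugging Face datasets library; a minimal sketch, assuming the rows are published as a dataset (the path "user/wiki-dump" is a placeholder, not a real dataset name):

```python
# Minimal sketch: filter rows of a Wikipedia-style dump by token count.
# The dataset path "user/wiki-dump" is hypothetical; substitute the real one.
from datasets import load_dataset

ds = load_dataset("user/wiki-dump", split="train")  # columns: id, url, text, source, categories, token_count

# Keep only mid-length articles (between 100 and 5,000 tokens).
mid_length = ds.filter(lambda row: 100 <= row["token_count"] <= 5_000)
print(len(mid_length), "rows kept")
```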
30,951,390
https://en.wikipedia.org/wiki/Circle%20jerk
A circle jerk, also sometimes spelled circlejerk, is a sexual practice in which a group of men form a circle and masturbate or touch each other's genitals. In the metaphorical sense, the term is used to refer to self-congratulatory behavior or discussion among a group of people, usually in reference to a "boring or time-wasting meeting or other event". Circle jerks often feature a competitive element, with the "winner" being the participant able to ejaculate first, last, or farthest depending on the pre-established rules. They can serve as an introduction to sexual relations with other males, or as a sexual outlet at an age or situation when regular sexual activity with another person is not possible. While circle jerks feature a homoerotic element, some analysts interpret adolescent boys' group activities such as circle jerks as an effort to establish heterosexual, masculine dominance within the group. However, American sociologist Bernard Lefkowitz asserts that what actually motivates participation is the desire for friends to witness and acknowledge one's sexual prowess, helping to counter teenage feelings of inadequacy related to sexual activity. See also Bukkake Daisy chain Soggy biscuit References Group sex Sexuality and society Male masturbation Male homosexuality
Circle jerk
Biology
264
523,430
https://en.wikipedia.org/wiki/Constructible%20polygon
In mathematics, a constructible polygon is a regular polygon that can be constructed with compass and straightedge. For example, a regular pentagon is constructible with compass and straightedge while a regular heptagon is not. There are infinitely many constructible polygons, but only 31 with an odd number of sides are known. Conditions for constructibility Some regular polygons are easy to construct with compass and straightedge; others are not. The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides, and they knew how to construct a regular polygon with double the number of sides of a given regular polygon. This led to the question being posed: is it possible to construct all regular polygons with compass and straightedge? If not, which n-gons (that is, polygons with n edges) are constructible and which are not? Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons. Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem: A regular n-gon can be constructed with compass and straightedge if and only if n is the product of a power of 2 and any number of distinct (unequal) Fermat primes. Here, a power of 2 is a number of the form 2^m, where m ≥ 0 is an integer. A Fermat prime is a prime number of the form 2^(2^m) + 1, where m ≥ 0 is an integer. The number of Fermat primes involved can be 0, in which case n is a power of 2. In order to reduce a geometric problem to a problem of pure number theory, the proof uses the fact that a regular n-gon is constructible if and only if the cosine cos(2π/n) is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots. Equivalently, a regular n-gon is constructible if any root of the nth cyclotomic polynomial is constructible. Detailed results by Gauss's theory Restating the Gauss–Wantzel theorem: A regular n-gon is constructible with straightedge and compass if and only if n = 2^k · p1 · p2 · ... · pt, where k and t are non-negative integers, and the pi (when t > 0) are distinct Fermat primes. The five known Fermat primes are: F0 = 3, F1 = 5, F2 = 17, F3 = 257, and F4 = 65537. Since there are 31 nonempty subsets of the five known Fermat primes, there are 31 known constructible polygons with an odd number of sides. The next twenty-eight Fermat numbers, F5 through F32, are known to be composite. Thus a regular n-gon is constructible if n = 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, 272, 320, 340, 384, 408, 480, 510, 512, 514, 544, 640, 680, 768, 771, 816, 960, 1020, 1024, 1028, 1088, 1280, 1285, 1360, 1536, 1542, 1632, 1920, 2040, 2048, ... 
, while a regular n-gon is not constructible with compass and straightedge if n = 7, 9, 11, 13, 14, 18, 19, 21, 22, 23, 25, 26, 27, 28, 29, 31, 33, 35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 97, 98, 99, 100, 101, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 121, 122, 123, 124, 125, 126, 127, ... . Connection to Pascal's triangle Since there are five known Fermat primes, we know of 31 numbers that are products of distinct Fermat primes, and hence 31 constructible odd-sided regular polygons. These are 3, 5, 15, 17, 51, 85, 255, 257, 771, 1285, 3855, 4369, 13107, 21845, 65535, 65537, 196611, 327685, 983055, 1114129, 3342387, 5570645, 16711935, 16843009, 50529027, 84215045, 252645135, 286331153, 858993459, 1431655765, 4294967295. As John Conway commented in The Book of Numbers, these numbers, when written in binary, are equal to the first 32 rows of the modulo-2 Pascal's triangle, minus the top row, which corresponds to a monogon. (Because of this, the 1s in such a list form an approximation to the Sierpiński triangle.) The pattern breaks down beyond this point, as the next Fermat number is composite (4294967297 = 641 × 6700417), so the following rows do not correspond to constructible polygons. It is unknown whether any more Fermat primes exist, and it is therefore unknown how many odd-sided constructible regular polygons exist. In general, if there are q Fermat primes, then there are 2^q − 1 constructible regular polygons with an odd number of sides. General theory In the light of later work on Galois theory, the principles of these proofs have been clarified. It is straightforward to show from analytic geometry that constructible lengths must come from base lengths by the solution of some sequence of quadratic equations. In terms of field theory, such lengths must be contained in a field extension generated by a tower of quadratic extensions. It follows that a field generated by constructions will always have degree over the base field that is a power of two. In the specific case of a regular n-gon, the question reduces to the question of constructing the length cos(2π/n), which is a trigonometric number and hence an algebraic number. This number lies in the n-th cyclotomic field — and in fact in its real subfield, which is a totally real field and a rational vector space of dimension ½ φ(n), where φ(n) is Euler's totient function. Wantzel's result comes down to a calculation showing that φ(n) is a power of 2 precisely in the cases specified. As for the construction of Gauss, when the Galois group is a 2-group it follows that it has a sequence of subgroups of orders 1, 2, 4, 8, ... that are nested, each in the next (a composition series, in group theory terminology), something simple to prove by induction in this case of an abelian group. Therefore, there are subfields nested inside the cyclotomic field, each of degree 2 over the one before. Generators for each such field can be written down by Gaussian period theory. For example, for n = 17 there is a period that is a sum of eight roots of unity, one that is a sum of four roots of unity, and one that is the sum of two, which is 2 cos(2π/17). Each of those is a root of a quadratic equation in terms of the one before. 
Moreover, these equations have real rather than complex roots, so in principle can be solved by geometric construction: this is because the work all goes on inside a totally real field. In this way the result of Gauss can be understood in current terms; for actual calculation of the equations to be solved, the periods can be squared and compared with the 'lower' periods, in a quite feasible algorithm. Compass and straightedge constructions Compass and straightedge constructions are known for all known constructible polygons. If n = pq with p = 2 or p and q coprime, an n-gon can be constructed from a p-gon and a q-gon. If p = 2, draw a q-gon and bisect one of its central angles. From this, a 2q-gon can be constructed. If p > 2, inscribe a p-gon and a q-gon in the same circle in such a way that they share a vertex. Because p and q are coprime, there exist integers a and b such that ap + bq = 1. Then 2aπ/q + 2bπ/p = 2π/pq. From this, a pq-gon can be constructed. Thus one only has to find a compass and straightedge construction for n-gons where n is a Fermat prime. The construction for an equilateral triangle is simple and has been known since antiquity; see Equilateral triangle. Constructions for the regular pentagon were described both by Euclid (Elements, ca. 300 BC), and by Ptolemy (Almagest, ca. 150 AD). Although Gauss proved that the regular 17-gon is constructible, he did not actually show how to do it. The first construction is due to Erchinger, a few years after Gauss's work. The first explicit constructions of a regular 257-gon were given by Magnus Georg Paucker (1822) and Friedrich Julius Richelot (1832). A construction for a regular 65537-gon was first given by Johann Gustav Hermes (1894). The construction is very complex; Hermes spent 10 years completing the 200-page manuscript. Gallery From left to right, constructions of a 15-gon, 17-gon, 257-gon and 65537-gon. Only the first stage of the 65537-gon construction is shown; the constructions of the 15-gon, 17-gon, and 257-gon are given completely. Other constructions The concept of constructibility as discussed in this article applies specifically to compass and straightedge constructions. More constructions become possible if other tools are allowed. The so-called neusis constructions, for example, make use of a marked ruler. The constructions are a mathematical idealization and are assumed to be done exactly. A regular polygon with n sides can be constructed with ruler, compass, and angle trisector if and only if n = 2^r · 3^s · p1 · p2 · ... · pk, where r, s, k ≥ 0 and where the pi are distinct Pierpont primes greater than 3 (primes of the form 2^u · 3^v + 1). These polygons are exactly the regular polygons that can be constructed with conic sections, and the regular polygons that can be constructed with paper folding. The first numbers of sides of these polygons are: 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 24, 26, 27, 28, 30, 32, 34, 35, 36, 37, 38, 39, 40, 42, 45, 48, 51, 52, 54, 56, 57, 60, 63, 64, 65, 68, 70, 72, 73, 74, 76, 78, 80, 81, 84, 85, 90, 91, 95, 96, 97, 102, 104, 105, 108, 109, 111, 112, 114, 117, 119, 120, 126, 128, 130, 133, 135, 136, 140, 144, 146, 148, 152, 153, 156, 160, 162, 163, 168, 170, 171, 180, 182, 185, 189, 190, 192, 193, 194, 195, 204, 208, 210, 216, 218, 219, 221, 222, 224, 228, 234, 238, 240, 243, 247, 252, 255, 256, 257, 259, 260, 266, 270, 272, 273, 280, 285, 288, 291, 292, 296, ... See also Polygon Carlyle circle References External links Regular Polygon Formulas, Ask Dr. Math FAQ. 
Carl Schick: Weiche Primzahlen und das 257-Eck: eine analytische Lösung des 257-Ecks. Zürich: C. Schick, 2008. 65537-gon, exact construction for the 1st side, using the Quadratrix of Hippias and GeoGebra as additional aids, with brief description (German) Euclidean plane geometry Carl Friedrich Gauss
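The Gauss–Wantzel criterion above can be checked mechanically. A minimal sketch in Python, limited to the five known Fermat primes (the helper name is mine, not from the article):

```python
# Sketch of the Gauss-Wantzel test: an n-gon is constructible iff
# n = 2^k * (product of distinct known Fermat primes).
KNOWN_FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def is_constructible(n: int) -> bool:
    if n < 3:
        return False
    # Strip the power-of-2 factor.
    while n % 2 == 0:
        n //= 2
    # Divide out each Fermat prime at most once (they must be distinct).
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p
    return n == 1

# Reproduces the article's lists, e.g. 15 and 17 are constructible, 7 and 9 are not.
assert is_constructible(15) and is_constructible(17)
assert not is_constructible(7) and not is_constructible(9)
```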
Constructible polygon
Mathematics
2,861
250,237
https://en.wikipedia.org/wiki/Effective%20results%20in%20number%20theory
For historical reasons and in order to have application to the solution of Diophantine equations, results in number theory have been scrutinised more than in other branches of mathematics to see if their content is effectively computable. Where it is asserted that some list of integers is finite, the question is whether in principle the list could be printed out after a machine computation. Littlewood's result An early example of an ineffective result was J. E. Littlewood's theorem of 1914, that in the prime number theorem the differences of both ψ(x) and π(x) with their asymptotic estimates change sign infinitely often. In 1933 Stanley Skewes obtained an effective upper bound for the first sign change, now known as Skewes' number. In more detail, for a numerical sequence f(n), an effective result about its changing sign infinitely often would be a theorem including, for every value of N, a value M > N such that f(N) and f(M) have different signs, and such that M could be computed with specified resources. In practical terms, M would be computed by taking values of n from N onwards, and the question is 'how far must you go?' A special case is to find the first sign change. The interest of the question was that the numerical evidence known showed no change of sign: Littlewood's result guaranteed that this evidence was just a small-number effect, but 'small' here included values of n up to a billion. The requirement of computability is reflected in, and contrasts with, the approach used in analytic number theory to prove the results. For example, it brings into question any use of Landau notation and its implied constants: are assertions pure existence theorems for such constants, or can one recover a version in which 1000 (say) takes the place of the implied constant? In other words, if it were known that there was M > N with a change of sign and such that M = O(G(N)) for some explicit function G, say built up from powers, logarithms and exponentials, that means only M < A·G(N) for some absolute constant A. The value of A, the so-called implied constant, may also need to be made explicit, for computational purposes. One reason Landau notation is popular is that it hides exactly what A is. In some indirect forms of proof it may not be at all obvious that the implied constant can be made explicit. The 'Siegel period' Many of the principal results of analytic number theory that were proved in the period 1900–1950 were in fact ineffective. The main examples were: The Thue–Siegel–Roth theorem Siegel's theorem on integral points, from 1929 The 1934 theorem of Hans Heilbronn and Edward Linfoot on the class number 1 problem The 1935 result on the Siegel zero The Siegel–Walfisz theorem based on the Siegel zero. The concrete information that was left theoretically incomplete included lower bounds for class numbers (ideal class groups for some families of number fields grow); and bounds for the best rational approximations to algebraic numbers in terms of denominators. These latter could be read quite directly as results on Diophantine equations, after the work of Axel Thue. The result used for Liouville numbers in the proof is effective in the way it applies the mean value theorem: but improvements (to what is now the Thue–Siegel–Roth theorem) were not. Later work Later results, particularly of Alan Baker, changed the position. 
Qualitatively speaking, Baker's theorems look weaker, but they have explicit constants and can actually be applied, in conjunction with machine computation, to prove that lists of solutions (suspected to be complete) are actually the entire solution set. Theoretical issues The difficulties here were met by radically different proof techniques, taking much more care about proofs by contradiction. The logic involved is closer to proof theory than to that of computability theory and computable functions. It is rather loosely conjectured that the difficulties may lie in the realm of computational complexity theory. Ineffective results are still being proved in the shape A or B, where we have no way of telling which. References External links Analytic number theory Diophantine equations
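The remark about implied constants can be made explicit in symbols; the following restatement is a sketch of the article's point, not a quotation:

```latex
% Ineffective form: O-notation asserts only the existence of a constant A.
\[
  M = O\bigl(G(N)\bigr)
  \quad\Longleftrightarrow\quad
  \exists A > 0 : \; M \le A \cdot G(N).
\]
% Effective form: a concrete constant is supplied, e.g. A = 1000,
% so M can in principle be located by a finite computation:
\[
  M \le 1000 \cdot G(N).
\]
```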
Effective results in number theory
Mathematics
894
27,107,962
https://en.wikipedia.org/wiki/Sexual%20division%20of%20labour
Sexual division of labour (SDL) is the delegation of different tasks between the male and female members of a species. Among human hunter-gatherer societies, males and females are responsible for acquiring different types of food and share them with each other for a mutual or familial benefit. In some species, males and females eat slightly different foods, while in other species, males and females will routinely share food; but only in humans are these two attributes combined. The few remaining hunter-gatherer populations in the world serve as evolutionary models that can help explain the origin of the sexual division of labour. Many studies on the sexual division of labour have been conducted on hunter-gatherer populations, such as the Hadza, a hunter-gatherer population of Tanzania. In modern-day society, sex differences in occupation are seen across cultures, with the tendency that men do technical work while women tend to do work related to care. Behavioral ecological perspective Both men and women have the option of investing resources either to provision children or to have additional offspring. According to life history theory, males and females monitor costs and benefits of each alternative to maximize reproductive fitness; however, trade-off differences do exist between sexes. Females are likely to benefit most from parental care effort because they are certain which offspring are theirs and have relatively few reproductive opportunities, each of which is relatively costly and risky. In contrast, males are less certain of paternity, but may have many more mating opportunities bearing relatively low costs and risks. Hunting vs gathering From the 1970s onward, the dominant paleontological perspective of gendered roles in hunter-gatherer societies was of a model termed "Man the Hunter, Woman the Gatherer"; coined by anthropologists Richard Borshay Lee and Irven DeVore in 1968, it argued, based on evidence now thought to be incomplete, that contemporary foragers displayed a clear division of labour between women and men. However, an attempted verification of this study found "that multiple methodological failures all bias their results in the same direction...their analysis does not contradict the wide body of empirical evidence for gendered divisions of labor in foraging societies". The sexual division of labour may have arisen to allow humans to acquire food and other resources more efficiently. More recent evidence compiled by researchers such as Sarah Lacy and Cara Ocobock has found a lack of conclusive preferences for their role among both modern hunter gatherers, where "79 percent of the 63 foraging societies with clear descriptions of their hunting strategies feature women hunters," and among prehistoric societies such as those in Peru. Archaeological research done in 2006 by the anthropologist and archaeologist Steven Kuhn from the University of Arizona suggests that the sexual division of labour did not exist prior to the Upper Paleolithic and developed relatively recently in human history. Notable hunter-gatherer groups in the recent or contemporary eras known to lack a distinct sexual division of labour include the Ainu, Agta, and Ju'/hoansi. Anthropologist Rebecca Bird argued that natural selection is more likely to favor male reproductive strategies that stress mating effort and female strategies that emphasize parental investment. 
As a result, women do the low-risk task of gathering vegetation and underground storage organs that are rich in energy to provide for themselves and offspring. In the book Catching Fire: How Cooking Made Us Human, British primatologist Richard Wrangham suggests that the origin of the division of labour between males and females may have originated with the invention of cooking, which is estimated to have happened simultaneously with humans gaining control of fire. A similar idea was proposed far earlier by Friedrich Engels in an unfinished essay from 1876. In modern human society Sexual division of labour is observed globally, and across most cultures. In many societies the breadwinner–homemaker model has been a stable characteristic. The division is more pronounced in some fields of work than in others: generally, outdoor work, dangerous work and work in highly technical disciplines (typically STEM jobs, with the exception of those related to health care) are more likely to be done by men, while work related to care and interpersonal relations is generally more likely to be done by women. The borders of the division are not generally stable, with some fields showing a reversal of the proportions, such as doctors. Some fields see increasing segregation, positively correlated with the level of egalitarian policies in a country, a phenomenon known as the gender-equality paradox. Hypotheses for evolutionary origins Provisioning household The traditional explanation of the sexual division of labour finds that males and females cooperate within pair bonds by targeting different foods so that everyone in the household benefits. Females may target foods that do not conflict with reproduction and child care, while males will target foods that females do not gather, which increases variance in daily consumption and provides a broader diet for the family. Foraging specialization in particular food groups should increase skill level and thus foraging success rates for targeted foods. Show-Off/Signaling hypothesis The signaling hypothesis proposes that men hunt to gain social attention and mating benefits by widely sharing game. This model proposes that hunting functions mainly to provide an honest signal of the underlying genetic quality of hunters, which later yields a mating advantage or social deference. Women tend to target the foods that are most reliable, while men tend to target difficult-to-acquire foods to "signal" their abilities and genetic quality. Hunting is thus viewed as a form of mating or male-male status competition, not familial provisioning. Recent studies on the Hadza have revealed that men hunt mainly to distribute food to their own families rather than sharing it with other members of the community. This conclusion suggests evidence against hunting for signaling purposes. The Victorian Period The Victorian era has been closely examined by Sally Shuttleworth. Women played dual roles and were expected to perform their duties with conviction both within and outside of the household. Shuttleworth states, "two traditional tropes are here combined: Victorian medical textbooks demonstrated not only woman's biological fitness and adaptation to the sacred role of homemaker, but also her terrifying subjection to the forces of the body. 
At once angel and demon, woman came to represent both the civilizing power that would cleanse the male from contamination in the brutal world of the economic market and also the rampant, uncontrolled excesses of the material economy." SDL and optimal foraging theory Optimal foraging theory (OFT) states that organisms forage in such a way as to maximize their energy intake per unit time. In other words, animals behave in such a way as to find, capture, and consume food containing the most calories while expending the least amount of time possible in doing so. The sexual division of labour provides an appropriate explanation as to why males forgo the opportunity to gather any items with caloric value, a strategy that would seem suboptimal from an energetic standpoint. The OFT suggests that the sexual division of labour is an adaptation that benefits the household; thus, foraging behavior of males will appear optimal at the level of the family. If a hunter-gatherer man does not rely on resources from others and passes up a food item with caloric value, it can be assumed that he is foraging at an optimal level. But if he passes up the opportunity because it is a food that women routinely gather, then as long as men and women share their spoils, it will be optimal for men to forgo the collection and continue searching for different resources to complement the resources gathered by women. Cooking and the sexual division of labour The emergence of cooking in early Homo may have created problems of food theft from women while food was being cooked. As a result, females would recruit male partners to protect them and their resources from others. This concept, known as the theft hypothesis, offers an explanation as to why the labour of cooking is strongly associated with the status of women. Women are forced to gather and cook foods because they will not acquire food otherwise, and access to resources is critical for their reproductive success. By contrast, men do not gather because their physical dominance allows them to scrounge cooked foods from women. Thus, women's foraging and food preparation efforts allow men to participate in the high-risk, high-reward activities of hunting. Females, in turn, become increasingly sexually attractive as a means to exploit male interest in investing in their protection. Evolution of sex differences Many studies investigating the spatial abilities of men and women have found no significant differences, though meta-studies show a male advantage in mental rotation and assessing horizontality and verticality, and a female advantage in spatial memory. The sexual division of labour has been proposed as an explanation for these cognitive differences. These differences disappear after short training, or when participants are given a favorable image of women's abilities. Furthermore, differences between individuals are greater than the average difference between the sexes; such differences are therefore not a valid predictor of any individual's cognitive ability. This hypothesis argues that males needed the ability to follow prey over long distances and to accurately target their game with projectile technology, and, as a result, male specialization in hunting prowess would have spurred the selection for increased spatial and navigational ability. Similarly, the ability to remember the locations of underground storage organs and other vegetation would have led to an increase in overall efficiency and decrease in total energy expenditure since the time spent searching for food would decrease. 
Natural selection based on behaviors that increase hunting success and energetic efficiency would have a positive influence on reproductive success. However, recent research suggests that the sexual division of labour developed relatively recently and that gender roles were not always the same in early-human cultures, contradicting the theory that each sex is naturally predisposed to different types of work. Sexual division of labour continues to be a debated topic within anthropology. Gerda Lerner quotes the philosopher Socrates to argue that the idea of defined gender roles is patriarchal. Her discussion also notes that men and women are capable of performing the same jobs except where anatomical differences, such as giving birth, are involved. "In Book V of the Republic, Plato—in the voice of Socrates—sets down the conditions for the training of the guardians, his elite leadership group. Socrates proposes that women should have the same opportunity as men to be trained as guardians. In support of this he offers a strong statement against making sex differences the basis for discrimination: if the difference [between men and women] consists only in women bearing and men begetting children, this does not amount to proof that a woman differs from a man in respect to the sort of education she should receive; and we shall therefore continue to maintain that our guardians and their wives ought to have the same pursuits." He adds that the same set of established resources, such as education, training and teaching, creates an atmosphere of equity which helps to further the cause of gender equality. "Socrates proposes the same education for boys and girls, freeing guardian women from housework and child-care. But this female equality of opportunity will serve a larger purpose: the destruction of the family. Plato's aim is to abolish private property, the private family, and with it self-interest in his leadership group, for he sees clearly that private property engenders class antagonism and disharmony. Therefore men and women are to have a common way of life. . . —common education, common children; and they are to watch over the citizens in common." Some researchers, such as Cordelia Fine, argue that available evidence does not support a biological basis for gender roles. Evolutionary perspective Based on the contemporary theories and research on the sexual division of labour, four critical aspects of hunter-gatherer socioecology led to the evolutionary origin of the SDL in humans: (1) long-term dependency on high-cost offspring, (2) an optimal dietary mix of mutually exclusive foods, (3) efficient foraging based on specialized skill, and (4) sex-differentiated comparative advantage in tasks. These combined conditions are rare in nonhuman vertebrates but common to currently existing populations of human foragers, which thus points to a potential factor in the evolutionary divergence of social behaviors in Homo. See also Division of labour Hunter-gatherer Evolution Compatibility with childcare Economy-of-effort theory Strength theory Adaptation Male expendability Natural selection Gender roles References Labor history Sociobiology Anthropology Labour economics Industrial history Manufacturing Production economics Gender roles
Sexual division of labour
Engineering,Biology
2,506
70,979,040
https://en.wikipedia.org/wiki/Chronology%20of%20early%20Christian%20monasticism
Christian monasticism first appeared in Egypt and Syria. This is a partial chronology of early Christian monasticism with its notable events listed. It covers 343 years. References Chronology Christian monasticism Early Christian Monasticism History of Catholic monasticism
Chronology of early Christian monasticism
Physics
47
75,129,597
https://en.wikipedia.org/wiki/283%20%28number%29
283 is the natural number following 282 and preceding 284. In mathematics 283 is an odd prime number, a twin prime with 281, and a super-prime, meaning that it is the nth prime where n is itself a prime number. 283 is a strictly non-palindromic number: it is not palindromic in any base b with 2 ≤ b ≤ n − 2, that is, in any base from 2 to 281. 283 is a number such that 4^283 − 3^283 is prime. 283 is equal to 2^5 + 8 + 3^5. References Integers
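The strictly non-palindromic claim above is easy to verify by brute force; a short sketch (helper names are mine, not from the article):

```python
# Check that 283 is not a palindrome in any base from 2 to 283 - 2.
def digits(n: int, base: int) -> list[int]:
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out[::-1]  # most significant digit first

n = 283
assert all(digits(n, b) != digits(n, b)[::-1] for b in range(2, n - 1))
# And the sum-of-powers identity: 2^5 + 8 + 3^5 = 32 + 8 + 243 = 283.
assert 2**5 + 8 + 3**5 == 283
```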
283 (number)
Mathematics
109
5,087,160
https://en.wikipedia.org/wiki/Mesosuchia
"Mesosuchia" is an obsolete name for a group of terrestrial, semi-aquatic, or fully aquatic crocodylomorph reptiles. Characteristics The marine crocodile Metriorhynchus had paddle-like forelimbs, Dakosaurus andiniensis had a skull that was adapted to eat large sea reptiles, and Shamosuchus was adapted for eating molluscs and gastropods. Shamosuchus also looked like modern crocodiles and was very closely related to their direct ancestor. History The "Mesosuchia" were formerly placed at Suborder rank as within Crocodylia. The "first" crocodiles were placed within their own suborder, Protosuchia; whilst extant species where placed within Suborder Eusuchia (meaning 'true crocodiles'). Mesosuchia were the crocodylians "in between".But it is no longer regarded as genuine because it belongs to a paraphyletic group. It is replaced by its phylogenetic equivalent Mesoeucrocodylia, which contains the taxa herein, the Crocodylia, and some allied forms more recently discovered. Classification The "Mesosuchia" was composed as: Family Hsisosuchidae Family Gobiosuchidae Infraorder Notosuchia Family Notosuchidae Family Sebecidae Family Baurusuchidae Infraorder Neosuchia Family Trematochampsidae Family Peirosauridae Genus Lomasuchus Genus Montealtosuchus Family Elosuchidae Family Atoposauridae Family Dyrosauridae Family Pholidosauridae Genus Sarcosuchus Infraorder Thalattosuchia - Sea "Crocodiles" Family Teleosauridae Family Metriorhynchidae Genus Dakosaurus Family Goniopholididae Family Paralligatoridae Genus Shamosuchus External links A teleosaurid (Crocodylia, Mesosuchia) from the Toarcian of Madagascar and its palaeobiogeographical significance Blue Nile Gorge Crocodyliformes Crocodyliforms Jurassic crocodylomorphs Cretaceous crocodylomorphs Paleocene crocodylomorphs Eocene crocodylomorphs Hettangian first appearances Eocene extinctions Paraphyletic groups Articles with quotation marks in the title
Mesosuchia
Biology
492
23,743,068
https://en.wikipedia.org/wiki/International%20Microwave%20Power%20Institute
The International Microwave Power Institute (IMPI) is an organization devoted to microwave energy and its usage. The organization has conducted surveys as well as educated the public to dispel microwave myths. Founded in Canada in 1966, it is now headquartered in Mechanicsville, Virginia. It was initially created for industrial and scientific purposes; however, in 1977 IMPI's purpose was expanded to deal with the evolution of the microwave oven for the home. The professional scientific journal of IMPI is the Journal of Microwave Power and Electromagnetic Energy. References External links Official website Engineering societies American engineering organizations Professional associations based in the United States Organizations based in Virginia Organizations established in 1966 1966 establishments in Canada Microwave technology
International Microwave Power Institute
Engineering
137
29,904,885
https://en.wikipedia.org/wiki/Double%20origin%20topology
In mathematics, more specifically general topology, the double origin topology is an example of a topology given to the plane R2 with an extra point, say 0*, added. In this case, the double origin topology gives a topology on the set X = R2 ∐ {0*}, where ∐ denotes the disjoint union. Construction Given a point x belonging to X, such that x ≠ 0 and x ≠ 0*, the neighbourhoods of x are those given by the standard metric topology on R2 − {0}. We define a countably infinite basis of neighbourhoods about the point 0 and about the additional point 0*. For the point 0, the basis, indexed by n, is defined to be: V_n(0) = {(x, y) : x^2 + y^2 < 1/n^2, y > 0} ∪ {0}. In a similar way, the basis of neighbourhoods of 0* is defined to be: V_n(0*) = {(x, y) : x^2 + y^2 < 1/n^2, y < 0} ∪ {0*}. Properties The space X, along with the double origin topology, is an example of a Hausdorff space, although it is not completely Hausdorff. In terms of compactness, the space X with the double origin topology fails to be compact, paracompact or locally compact; however, X is second countable. Finally, it is an example of an arc connected space. References General topology
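To make the basis concrete, here is a small membership test for the basic neighbourhoods. It encodes the half-disc form given above, which is the standard construction but reconstructed here rather than taken verbatim from the original text, so treat the exact formulas as an assumption; the function name is mine:

```python
# Basic neighbourhoods of the two origins in the double origin topology:
# V_n(0)  = upper open half-disc of radius 1/n, plus 0 itself;
# V_n(0*) = lower open half-disc of radius 1/n, plus 0*.
def in_basic_nbhd(point, n: int, origin: str) -> bool:
    if point == origin:          # the origins are represented as "0" and "0*"
        return True
    x, y = point                 # ordinary points are pairs (x, y)
    inside_disc = x * x + y * y < 1.0 / (n * n)
    return inside_disc and (y > 0 if origin == "0" else y < 0)

assert in_basic_nbhd((0.0, 0.001), 10, "0")       # in the upper half-disc around 0
assert not in_basic_nbhd((0.0, 0.001), 10, "0*")  # disjoint from V_n(0*)
```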
Double origin topology
Mathematics
226
78,740,295
https://en.wikipedia.org/wiki/449P/Leonard
449P/Leonard is a periodic comet that orbits the Sun once every 6.83 years. Studies in 2022 showed that 449P was a rediscovery of a previously lost comet that was spotted in 1987. Discovery and observations On 29 September 2020, Gregory J. Leonard discovered a new comet, with an apparent magnitude of about 21.5, in images taken with the telescope of the Mount Lemmon Observatory. Orbital calculations showed it had reached its most recent perihelion on 23 November 2020, and that it makes frequent close passes by Jupiter; a close approach to the giant planet in 1983 reduced its orbital period from 13.2 years to just 6.82 years. In 2022, Maik Meyer linked 449P with the previously lost comet X/1987 A2, which was discovered by Robert H. McNaught and Malcolm Hartley from the Siding Spring Observatory on 5 January 1987. A precovery image of the comet was not found until March 1987, hence precise follow-up observations were not possible at the time. Subsequently, scientists have also identified P/2013 Y6 as another previous apparition of the comet, which was observed from the Mauna Kea Observatory between 2013 and 2014. The comet will next return to the inner Solar System on 25 September 2027. References External links Periodic comets 0449 449P 449P 449P Recovered astronomical objects
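As a quick sanity check on the quoted period, Kepler's third law for a heliocentric orbit (a^3 = P^2, with a in AU and P in years) gives the comet's semi-major axis; this calculation is an illustration of mine, not from the article:

```python
# Semi-major axis from the orbital period via Kepler's third law:
# a [AU] = P [yr] ** (2/3) for a body orbiting the Sun.
period_years = 6.83
a_au = period_years ** (2 / 3)
print(f"semi-major axis ~ {a_au:.2f} AU")  # ~3.59 AU, typical of a Jupiter-family comet
```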
449P/Leonard
Astronomy
285
50,673,280
https://en.wikipedia.org/wiki/Sleep%20pod
A sleep pod, also known as nap pod, napping pod, or nap capsule, is a special type of structure or chair that allows people to nap. Users take private sleep breaks in the pods, often aided by technology and ambient features. Nap pods have emerged in corporate environments, hospitals, universities, airports and other public places. Their supposed efficacy is rooted in research that suggests that 20-minute naps could reduce signs of fatigue, boost energy levels, improve focus, boost productivity, improve mood, enhance learning, reduce stress and reduce the risk of cardiovascular disease. Origins Technological development of nap pods emerges from growing awareness of the health benefits of sleep and napping, including for productivity and cognitive function. The original sleep pod was designed by Kisho Kurokawa in 1979, in his design for the Capsule Inn Osaka. Workplace sleep culture Existing products and designs are being used particularly by professionals and commuters. By devising specialised furniture that encourages short, structured naps during the day, specialists like Dr James B. Maas, who coined the term ‘power nap’, aim to alter existing workplace culture in the West to improve focus and energy. This aligns with cultural practices such as the siesta in Spain, a mid-afternoon break where work and activity are halted. The Japanese practice of inemuri, sleeping at work, is culturally viewed as proof of dedication to the point of exhaustion, and has also influenced the use of nap pods around the world. A push for a workplace cultural shift that emphasises the necessity of sleep and rest has been heralded by Arianna Huffington. Her book The Sleep Revolution includes rhetoric that encourages normalisation of the need for rest in high-stress work environments, and was followed by the launch of Thrive Global, an organization which provides wellness training to corporations, including advice to encourage employees to take appropriate sleep breaks when needed. Huffington writes: "That idea that sleep is somehow a sign of weakness and that burnout and sleep deprivation are macho signs of strength is particularly destructive. So changing the way we talk about sleep is an important part of the culture shift." Other leading scientists encouraging a revision of existing cultural understandings of exhaustion include Matthew Walker, neuroscientist and author of Why We Sleep: The New Science of Sleep and Dreams, who labeled humanity as in “the midst of a global sleep loss pandemic”. He has publicly endorsed nap pods in offices “even if they just signal some degree of recognition of sleep’s importance in the workplace by people in senior positions.” Sleep specialist and psychiatrist Rita Aoud told The Guardian, in light of existing data, that “research shows that a nap of about 20 minutes in the afternoon has a positive effect on attention, vigilance, mood and alertness.” The actions of major corporations in establishing nap pod technology in their workplaces indicate that research and expert advice on the importance of sleep and the effectiveness of daytime napping are influencing company culture. Workplaces have also been criticised for installing nap pods. Diana Bradley commented in one article that in offering technologies such as these as perks for employees, companies can ignore more fundamental support in the form of management and policy. 
References in science fiction Nap pods are a prevalent technology in science fiction books, movies and television, often fitted with futuristic sleep technology. Cryosleep pods, which hold bodies frozen in suspended animation, appear in the films Alien, Avatar, 2001: A Space Odyssey, Passengers, and Event Horizon. In these instances compact bed ‘pods’ similar in construction to existing nap-pod designs are depicted, storing sleeping bodies during long-term space travel. The sleep pods in the 1979 film Alien are white capsules in clusters of eight, with glass shields across the top. The crew members inside are in suspended animation, unconscious until ‘activated’, un-aged and able to join the workforce. Suspended animation in pods is also seen in the space adventure TV series Lost in Space, Star Trek, and Futurama. In a 2015 Doctor Who episode, Sleep No More, the scientists and crew of a space lab forgo normal sleep patterns by using ‘Morpheus’ sleep pods, which can compress months of sleep into a two-minute nap. Confined within the pods, a human’s brain activity is altered to maximise productivity on board the ship. Notable locations Nap pod technology has been implemented and installed in a number of notable public and private spaces. Pods are available for travellers to use between and before flights at JFK Airport, Atlanta Airport, Berlin Airport, Munich Airport, Dubai Airport, and Istanbul Airport. Tech companies Google, Samsung and Facebook have installed nap pods across their headquarters and offices for employee use. Nike's headquarters in Portland, Oregon, has rooms on site in which employees can sleep or meditate. Ben & Jerry's has had a nap room at its headquarters since 2010. Universities including King's College London, Sydney University, Western Sydney University, The University of Miami, Wesleyan University, Stanford University and Washington State University have nap pods in campus libraries and student centers. The Sydney Swans AFL team installed two 'sleep chambers' for players to use between training and game sessions at the SCG. In the UK, the NHS has installed sleep pods in public hospitals for doctors, nurses and staff. References Chairs Sleep
Sleep pod
Biology
1,067
40,639,448
https://en.wikipedia.org/wiki/Game-theoretic%20rough%20sets
Game-theoretic rough sets use rough sets to induce three-way classification decisions. The positive, negative, and boundary regions can be interpreted as regions of acceptance, rejection, and deferment decisions, respectively. The probabilistic rough set model extends conventional rough sets by providing a more effective way of classifying objects. A main result of probabilistic rough sets is the interpretation of three-way decisions using a pair of probabilistic thresholds. The game-theoretic rough set model determines and interprets the required thresholds by utilizing a game-theoretic environment to analyze strategic situations between cooperative or conflicting decision-making criteria. The essential idea is to implement a game for investigating how the probabilistic thresholds may change in order to improve rough set-based decision-making. References Set theory
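The pair of probabilistic thresholds is conventionally written (α, β) with α > β. A minimal sketch of the induced three-way rule (the threshold values and names here are illustrative, not from the article):

```python
# Three-way decision with probabilistic thresholds (alpha, beta), alpha > beta:
# accept into the positive region, reject into the negative region,
# or defer into the boundary region.
def three_way_decision(prob: float, alpha: float = 0.75, beta: float = 0.25) -> str:
    if prob >= alpha:
        return "accept"    # positive region
    if prob <= beta:
        return "reject"    # negative region
    return "defer"         # boundary region

# A game between criteria (e.g. accuracy vs. coverage) would tune alpha and beta;
# here fixed thresholds are simply applied.
print([three_way_decision(p) for p in (0.9, 0.5, 0.1)])  # ['accept', 'defer', 'reject']
```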
Game-theoretic rough sets
Mathematics
177
62,988,782
https://en.wikipedia.org/wiki/Ekpoma%20virus
Ekpoma viruses, including Ekpoma 1 tibrovirus (EKV-1) and Ekpoma 2 tibrovirus (EKV-2), are orphan viruses not associated with any disease. They are negative-sense RNA viruses and members of the rhabdovirus family. Both viruses were discovered in 2015 in blood samples collected from two healthy women living in Ekpoma, Nigeria. EKV-2 appears to be widespread, and ~45% of people living in and around Ekpoma have been previously exposed. Both viruses have very broad cellular tropism and the ability to infect a wide range of human cancer cell lines. Neither virus has been isolated, hindering research. Discovery EKV-1 and EKV-2 were discovered in plasma samples from a 45-year-old woman and a 19-year-old woman, respectively. Neither woman presented with any indication of illness and, according to a 2015 report, the samples were collected as controls in a larger metagenomics study. The viruses were identified using next-generation sequencing. Clinical disease EKV-1 and EKV-2 are orphan viruses not associated with any disease. According to the 2015 report, the woman infected with EKV-1 could not recall any episode of illness in the weeks or months following the collection of her sample. The woman infected with EKV-2 recalled a fever that occurred several weeks after the sample collection. She was diagnosed with and treated for malaria. Viremia The titers of viremia observed in the women ranged from 45,000 RNA copies/mL plasma (EKV-2) to 4.5 million RNA copies/mL plasma (EKV-1). Prevalence Researchers used an enzyme-linked immunosorbent assay (ELISA) to detect antibodies that recognize the nucleocapsid protein of EKV-1/2. They reported that 5% of people living in and around Ekpoma had been exposed to EKV-1 and 45% to EKV-2. Transmission The natural reservoir and mode of transmission for EKV-1/2 are not known. Based on the natural reservoir and vector for other tibroviruses, researchers have hypothesized that biting midges may transmit the viruses to humans. Genome The published genomes of EKV-1 and EKV-2 are not complete. However, based on the sequence available, the genome contains the typical five open reading frames present in all rhabdoviruses (N, P, M, G, and L). The genomes also include three open reading frames of unknown function (U1, U2 and U3). U3 has been hypothesized to be a viroporin based on sequence similarity to other viroporins. Genetic divergence Although EKV-1 and EKV-2 were discovered in the same village in southwestern Nigeria, they share only 33% overall homology at the amino acid level. One notable difference between the two viruses is in the length of the phosphoprotein (P). The EKV-1 phosphoprotein contains 115 more amino acids than the EKV-2 phosphoprotein. Also notable are the differences in the envelope glycoprotein: the EKV-1 and EKV-2 envelope glycoproteins are only 27% identical at the amino acid level. Replication The EKV-1 and EKV-2 cellular receptors have not been identified. However, the tropism of EKV-1 and EKV-2 has been studied using recombinant vesicular stomatitis virus (VSV) that expresses the EKV-1 or EKV-2 glycoprotein. VSV particles expressing the EKV-1 or EKV-2 glycoprotein outperform particles with the native VSV glycoprotein and are able to enter a wide range of human and non-human cells. The steps in the replication lifecycle after particle entry have not been elucidated. References Rhabdoviridae Unaccepted virus taxa
Ekpoma virus
Biology
855
28,157,480
https://en.wikipedia.org/wiki/Mobile%20simulator
A mobile simulator is a software application for a personal computer which creates a virtual machine version of a mobile device, such as a mobile phone, iPhone, other smartphone, or calculator, on the computer. This may sometimes also be termed an emulator. The mobile simulator allows the user to use features and run applications on the virtual mobile device on their computer as though it were the actual device. A mobile simulator lets users test a website and determine how well it performs on various types of mobile devices. A good simulator tests mobile content quickly on multiple browsers and emulates several device profiles simultaneously. This allows analysis of mobile content in real time, locating errors in code, viewing rendering in an environment that simulates the mobile browser, and optimizing the site for performance. Mobile simulators may be developed using programming languages such as Java, .NET and JavaScript. See also Mobile application development Simulation for general information on simulation Web-based emulation Motion simulator - a mobile simulator in the entertainment world References External links Nokia Mobile Browser Simulator Mobile computers Emulation software Programming tools
Mobile simulator
Technology
218
1,191,041
https://en.wikipedia.org/wiki/Extended%20supersymmetry
In theoretical physics, extended supersymmetry is supersymmetry whose infinitesimal generators carry not only a spinor index α, but also an additional index i = 1, 2, ..., N, where N is an integer (such as 2 or 4). Extended supersymmetry is also called N = 2 supersymmetry, for example. Extended supersymmetry is very important for the analysis of mathematical properties of quantum field theory and superstring theory. The more extended the supersymmetry is, the more it constrains physical observables and parameters. See also Supersymmetry algebra Harmonic superspace Projective superspace References Supersymmetry
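For reference, the four-dimensional N-extended superalgebra with central charges is conventionally written as below; this standard form is supplied from the general literature, not from the article itself:

```latex
% N-extended supersymmetry algebra in four dimensions:
% supercharges Q_alpha^A with A = 1, ..., N and central charges Z^{AB}.
\begin{align}
  \{ Q_\alpha^{A}, \bar{Q}_{\dot{\beta} B} \}
    &= 2\,\sigma^{\mu}_{\alpha\dot{\beta}}\, P_\mu\, \delta^{A}{}_{B}, \\
  \{ Q_\alpha^{A}, Q_\beta^{B} \}
    &= \varepsilon_{\alpha\beta}\, Z^{AB},
    \qquad Z^{AB} = -Z^{BA}.
\end{align}
```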
Extended supersymmetry
Physics
126
31,170,034
https://en.wikipedia.org/wiki/Crystallography%20Reviews
Crystallography Reviews is a quarterly peer-reviewed scientific journal publishing review articles on all aspects of crystallography. It is published by Taylor & Francis. The editor-in-chief is Petra Bombicz. Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service/CASSI Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.467. References External links Quarterly journals Chemistry journals English-language journals Crystallography journals Academic journals established in 1987 Taylor & Francis academic journals
Crystallography Reviews
Chemistry,Materials_science
115
12,667,554
https://en.wikipedia.org/wiki/Nokia%20Prism
The Nokia Prism is a fashion mobile phone collection produced by Nokia. All models run the S40 5th Edition user interface. As of 2008, the collection included four handsets: Nokia 7070 Prism (low-end phone, released 4Q 2008) Nokia 7500 Prism (mid-range triband phone, released 3Q 2007) Nokia 7900 Prism (high-end quadband phone, released 3Q 2007) Nokia 7900 Crystal Prism (luxury version of the Nokia 7900 Prism, released 4Q 2007) References http://www.intomobile.com/2007/08/07/nokia-prism-collection-the-7900-and-7500-are-official.html Information source http://www.nokiaprismcollection.com/
Nokia Prism
Technology
160
338,082
https://en.wikipedia.org/wiki/Gastric%20intubation
Nasogastric intubation is a medical process involving the insertion of a plastic tube (nasogastric tube or NG tube) through the nose, down the esophagus, and into the stomach. Orogastric intubation is a similar process involving the insertion of a plastic tube (orogastric tube) through the mouth. Abraham Louis Levin invented the NG tube. The nasogastric tube is also known as Ryle's tube in Commonwealth countries, after John Alfred Ryle. Uses A nasogastric tube is used for feeding and administering drugs and other oral agents such as activated charcoal. For drugs and for minimal quantities of liquid, a syringe is used for injection into the tube. For continuous feeding, a gravity-based system is employed, with the solution placed higher than the patient's stomach. If closer supervision is required for the feeding, the tube is often connected to an electronic pump which can control and measure the patient's intake and signal any interruption in the feeding. Nasogastric tubes may also be used as an aid in the treatment of life-threatening eating disorders, especially if the patient is not compliant with eating. In such cases, a nasogastric tube may be inserted by force for feeding against the patient's will under restraint. Such a practice may be highly distressing for both patients and healthcare staff. Nasogastric aspiration (suction) is the process of draining the stomach's contents via the tube. Nasogastric aspiration is mainly used to remove gastrointestinal secretions and swallowed air in patients with gastrointestinal obstructions. Nasogastric aspiration can also be used in poisoning situations when a potentially toxic liquid has been ingested, for preparation before surgery under anesthesia, and to extract samples of gastric liquid for analysis. If the tube is to be used for continuous drainage, it is usually appended to a collector bag placed below the level of the patient's stomach; gravity empties the stomach's contents. It can also be appended to a suction system; however, this method is often restricted to emergency situations, as the constant suction can easily damage the stomach's lining. In non-emergency situations, intermittent suction is often applied, giving the benefits of suction without the untoward effects of damage to the stomach lining. Suction drainage is also used for patients who have undergone a pneumonectomy in order to prevent anesthesia-related vomiting and possible aspiration of any stomach contents. Such aspiration would represent a serious risk of complications to patients recovering from this surgery. Types Types of nasogastric tubes include: Levin catheter, which is a single-lumen, small-bore NG tube. It is more appropriate for administration of medication or nutrition. This type of catheter tends to be more prone to suctioning against the stomach lining, which can cause damage and interfere with future function of the tube. Salem Sump catheter, which is a large-bore NG tube with a double lumen. This allows aspiration through one lumen and venting through the other, reducing negative pressure and preventing gastric mucosa from being drawn into the catheter. Dobhoff tube, which is a small-bore NG tube with a weight at the end intended to pull it down by gravity during insertion. The name "Dobhoff" refers to its inventors, surgeons Dr. Robert Dobbie and Dr. James Hoffmeister, who invented the tube in 1975. Materials Nasogastric tubes are available in a variety of different materials, each with their own unique properties. Polypropylene - This material is the most common. 
It is less likely to kink, which can be beneficial for placement, but its rigidity makes it less suitable for long-term feeding. Latex - These tubes tend to be thicker and can be difficult to place without proper lubrication. Latex tends to break down at faster rates compared to other materials. Allergies to latex are relatively common, and latex tubes are more likely to be recognized as a foreign object by the body. Silicone - Especially useful in patients with known latex allergies. Silicone tubes tend to be thinner and more pliable. This can be useful in some situations but can also make them more prone to rupture under stress. Technique Before an NG tube is inserted, it must be measured from the tip of the patient's nose, looped around their ear and then down to roughly below the xiphoid process. The tube is then marked at this level to ensure that the tube has been inserted far enough into the patient's stomach. Many commercially available stomach and duodenal tubes have several standard depth markings from the distal end; infant feeding tubes often come with 1 cm depth markings. The end of a plastic tube is lubricated (a local anesthetic, such as 2% xylocaine gel, may be used; in addition, nasal vasoconstrictor and/or anesthetic spray may be applied before the insertion) and inserted into one of the patient's anterior nares. Treatment with 2.0 mg of IV midazolam greatly reduces patient stress. The tube should be directed straight towards the back of the patient as it moves through the nasal cavity and down into the throat. When the tube enters the oropharynx and glides down the posterior pharyngeal wall, the patient may gag; in this situation the patient, if awake and alert, is asked to mimic swallowing or is given some water to sip through a straw, and the tube continues to be inserted as the patient swallows. Once the tube is past the pharynx and enters the esophagus, it is easily inserted down into the stomach. The tube must then be secured in place to prevent it from moving. There are several ways to secure an NG tube in place. The least invasive method is tape, which is positioned and wrapped around the NG tube and onto the patient's nose to prevent dislodgement. Another securement device is a nasal bridle, a device that enters one nare, passes around the nasal septum, and exits the other nare, where it is secured in place around the nasogastric tube. There are two ways a bridle is put into place. One method, according to the Australian Journal of Otolaryngology, is performed by a physician, who pulls a material through the nares and ties it, with the ends shortened to prevent removal of the tube. The other method uses a device called the Applied Medical Technology, or AMT, bridle. This device uses a magnet inserted into both nares that connects at the nasal septum and is then pulled through to one side and tied. This technology allows nurses to safely apply bridles. Several studies have shown that the use of a nasal bridle prevents the loss of an NG placement that provides necessary nutrients or suctioning. A study conducted in the UK from 2014 through 2017 determined that 50% of feeding tubes secured with tape were lost inadvertently; the use of bridle securement decreased the percentage of NG tubes lost from 53% to 9%. Great care must be taken to ensure that the tube has not passed through the larynx into the trachea and down into the bronchi. A reliable method is to aspirate some fluid from the tube with a syringe. 
This fluid is then tested with pH paper (note: not litmus paper) to determine its acidity. If the pH is 4 or below, the tube is in the correct position. If this is not possible, correct verification of tube position is obtained with an X-ray of the chest/abdomen. This is the most reliable means of ensuring proper placement of an NG tube. The use of a chest X-ray to confirm position is the expected standard in the UK, with physician review and confirmation. Future techniques may include measuring the concentration of enzymes such as trypsin, pepsin, and bilirubin to confirm the correct placement of the NG tube. As enzyme testing becomes more practical, allowing measurements to be taken quickly and cheaply at the bedside, this technique may be used in combination with pH testing as an effective, less harmful replacement for X-ray confirmation. If the tube is to remain in place, a tube position check is recommended before each feed and at least once per day.
Only smaller-diameter (12 Fr or less in adults) nasogastric tubes are appropriate for long-term feeding, so as to avoid irritation and erosion of the nasal mucosa. These tubes often have guidewires to facilitate insertion. If feeding is required for a longer period of time, other options, such as placement of a PEG tube, should be considered.
The function of a properly placed NG tube used for suction is maintained by flushing. This may be done by flushing small amounts of saline and air using a syringe, or by flushing larger amounts of saline or water and air and then checking that the air circulates through one lumen of the tube, into the stomach, and out the other lumen. When these two techniques of flushing were compared, the latter was more effective.

Contraindications
The use of nasogastric intubation is contraindicated in patients with moderate-to-severe neck and facial fractures due to the increased risk of airway obstruction or improper tube placement. Special attention is necessary during insertion under these circumstances in order to avoid undue trauma to the esophagus. There is also a greater risk to patients with bleeding disorders, particularly those with esophageal varices (distended submucosal veins in the lower third of the esophagus), which may be easily ruptured due to their friability, and in gastroesophageal reflux disease (GERD). Alternative measures, such as orogastric intubation, should be considered under these circumstances, or if the patient will be incapable of meeting their nutritional and caloric needs for an extended time period (usually >24 hours).

Complications
Complications with nasogastric intubation can occur due to incorrect initial placement of the nasogastric tube or due to changes in tube position that go unrecognized. Nasogastric tubes mistakenly placed in the trachea or lungs can lead to aspiration of enteral feeds or medications administered through the NG tube. This can also lead to pneumothorax or pleural effusion, which often requires a chest tube to drain. Nasogastric tubes can also be mistakenly placed within the intracranial space; this is more likely to occur in patients who already have specific types of skull fractures. Other complications include clogged or nonfunctional tubes, premature removal of the tube, erosion of the nasal mucosa, esophageal perforation, esophageal reflux, nose bleeds, sinusitis, sore throat and gagging.
Fox News Digital reported that a voluntary field correction notice dated March 21, 2022, referenced 60 injuries and 23 deaths related to misplacement of a nasogastric tube. Following these reports, the FDA classified Avanos Medical's Cortrak2 EAS recall as a Class I recall. See also Force feeding Feeding tube References Medical equipment Enteral feeding Medical treatments
Gastric intubation
Biology
2,369
33,693,638
https://en.wikipedia.org/wiki/Battlefield%204
Battlefield 4 is a 2013 first-person shooter video game developed by DICE and published by Electronic Arts. The game was released in October and November for Microsoft Windows, PlayStation 3, Xbox 360, PlayStation 4, and Xbox One, and is the sequel to 2011's Battlefield 3, taking place six years later during the fictional "War of 2020". Battlefield 4 was met with positive reception for its multiplayer mode, gameplay and graphics, but was criticized for its single-player campaign and for numerous bugs and glitches in the multiplayer. It was a commercial success, selling over seven million copies. A successor, Battlefield Hardline, was released in March 2015, and a direct sequel, Battlefield 2042, was released in November 2021.

Gameplay
The game's heads-up display (HUD) is composed of two compact rectangles. The lower left-hand corner features a mini-map and compass for navigation, and a simplified objective notice above it; the lower right includes a compact ammo counter and health meter. The top right displays kill notifications of all players in-game. On the Windows version of the game, the top left features a chat window when in multiplayer. The mini-map, as well as the main game screen, shows symbols denoting three kinds of entities: blue for allies, green for squadmates, and orange for enemies; this applies to all interactivity on the battlefield. Battlefield 4's options also allow colour-blind players to change the on-screen colour indicators to settings for tritanomaly, deuteranomaly and protanomaly.
Weapon customisation is expansive and encouraged. Primary, secondary and melee weapons can all be customised with weapon attachments and camouflage 'skins'. Most weapons can also change between different firing modes (automatic, semi-automatic, and burst fire), allowing the player to adapt to the environment they find themselves in. Players can "spot" targets (marking their positions on the player's team's mini-maps) in the single-player campaign (a first in the Battlefield franchise) as well as in multiplayer. The game's bullet-drop system has been significantly enhanced, forcing the player to change the way they play medium- to long-distance combat. In addition, players have more combat capabilities, such as countering melee attacks from the front while standing or crouching, shooting with their sidearm while swimming, and diving underwater to avoid enemy detection. Standard combat abilities remain, including reloading whilst sprinting, unlimited sprint, going prone and vaulting.

Campaign
The single-player campaign has several differences from the main multiplayer component. For the most part, the player must traverse mini-sandbox-style levels, in some cases using vehicles, like tanks and boats, to traverse the environment. As the player character, Recker, the player can use two campaign-only functions: the Engage command and the tactical binocular. The Engage command directs Recker's squadmates, and occasionally other friendly units, to attack any hostiles in Recker's line of sight. The tactical binocular is similar to a laser designator, in the sense that it allows the player to identify friendly and enemy units, weapon stashes, explosives, and objectives in the field. By identifying enemies, the player can make them visible without using the visor, making them easier to mark for their teammates. At one point, Recker will briefly lose the tactical visor, forcing them to only use the Engage command to direct his squadmates on a limited number of enemies.
The campaign features assignments that require specific actions and unlock weapons for use in multiplayer upon completion. Collectible weapons return along with the introduction of collectible dog tags, which can be used in multiplayer. Weapon crates are found throughout all levels, allowing players to obtain ammo and switch weapons. While crates hold default weapons, collectible weapons may be used whenever they are acquired, and level-specific weapons may be used once a specific mission assignment has been completed by obtaining enough points in a level.

Multiplayer
Battlefield 4's multiplayer contains three playable factions (the United States, China, and Russia) fighting against each other in up to 64-player matches on PC, PlayStation 4, and Xbox One (24-player on Xbox 360 and PS3). A newly reintroduced "Commander Mode", last seen in Battlefield 2142, gives one player on each team a real-time strategy-like view of the entire map and the ability to give orders to teammates. The Commander can also observe the battle through the eyes of the players on the battlefield, deploy vehicle and weapon drops to "keep the war machinery going", and order missile strikes on hostile targets. A spectator mode is included, enabling players to spectate others in first or third person, as well as use a free camera to pan around the map from any angle.
On June 10, 2013, at E3, DICE featured the map "Siege of Shanghai", depicting the People's Liberation Army against the U.S. Marine Corps. The gameplay showcased Commander Mode; new weapons and vehicles; and the "Levolution" gameplay mechanic. The video displays the last of these at various points, including: a player destroying a support pillar to trap an enemy tank above it; and a large skyscraper (with an in-game objective on the top floor) collapsing in the center of the map, kicking up a massive dust cloud throughout the map and bringing the objective closer to ground level. Levolution also includes effects such as shooting a fire extinguisher to fill the room with obscuring clouds, car alarms going off when stepped on, metal detectors going off once passed through, or cutting the power in a room to reduce others' visibility.
The maps included in the main game are "Siege of Shanghai", "Paracel Storm", "Zavod 311", "Lancang Dam", "Flood Zone", "Rogue Transmission", "Hainan Resort", "Dawnbreaker", "Operation Locker" and "Golmud Railway". The game modes on offer include Battlefield's Conquest, Domination and Rush, along with two new game modes called Obliteration and Defuse and traditional modes such as Team Deathmatch and Squad Deathmatch.
The four kits from Battlefield 3 are present in Battlefield 4 with minor tweaks. The Assault kit must now wait for the defibrillator to recharge after reviving teammates in quick succession. The Engineer kit uses PDWs, and carbines are available to all kits. The Support kit has access to the new remote mortar and the XM25, allowing for indirect suppressive fire. The Recon kit is now more mobile and is able to equip carbines, designated marksman rifles (DMRs), and C-4. Sniping mechanics now include the ability to zero sights (set an aiming distance) and to equip more optics and accessories than in previous Battlefield games. The Recon kit is still able to utilize the MAV, T-UGS, and the Radio Beacon.
New vehicles have also been introduced. With the addition of the Chinese faction, new vehicles include the Type 99 MBT, the ZFB-05 armored car, and the Z-10W attack helicopter.
Jets have also been rebalanced and put into two classes, "attack" and "stealth". The attack jets' focus is mainly on air-to-ground capabilities, while stealth jets focus mainly on air-to-air combat. Also added in Battlefield 4 are the RCB and DV-15 Interceptor attack boats, which function as heavily armed aquatic assault craft.
Customization options have also been increased in Battlefield 4, with all-new camos available for every gun and vehicle. A new "adaptive" camo has been introduced that adapts to the map being played without the player having to change camos for every map. Camos can now be applied to jets, helicopters, tanks, transport vehicles, guns, and soldiers themselves. Camo options were previously introduced for parachutes but have been removed; emblems are now printed onto parachutes instead.

Synopsis
Setting and characters
Battlefield 4's single-player campaign takes place during the fictional "War of 2020", six years after the events of its predecessor. Tensions between Russia and the United States have been running at a record high, due to a conflict between the two countries that has been running for the last six years. On top of this, China is also on the brink of war, as Admiral Chang, the main antagonist, plans to overthrow China's current government. If he succeeds, Chang will have full support from the Russians, helping spark war between China and the United States.
The player controls Sgt. Daniel "Reck" Recker, second-in-command of a U.S. Marine Corps Force Recon squad callsigned "Tombstone". His squadmates include squad leader SSgt. William Dunn, Heavy Weapon Specialist SSgt. Kimble "Irish" Graves, and field medic Sgt. Clayton "Pac" Pakowski. Early in the campaign, Tombstone is joined by CIA operative Laszlo W. Kovic, originally known as "Agent W." from Battlefield 3's campaign, and Chinese Secret Service agent Huang "Hannah" Shuyi. The campaign also sees the return of Dimitri "Dima" Mayakovsky from Battlefield 3's campaign, still alive after the nuclear detonation in Paris six years ago and in the Chinese military's custody for unknown reasons.

Plot
Six years after the events of Battlefield 3, a squad of US Marines codenamed "Tombstone" (consisting of Dunn, the squad's leader; Sergeant Recker; Irish; and Pac) attempts to escape from Azerbaijan with vital intelligence about a potential military uprising in China. After being trapped underwater in a car while being pursued by Russian special forces, Dunn, critically wounded and trapped, sacrifices himself by ordering the squad to break the windshield and escape, leaving Recker in charge of Tombstone.
Reuniting with their commanding officer Captain Garrison, codenamed "Fortress", Tombstone learns that Admiral Chang, head of the Chinese army, has taken control of China with Russian support, and eliminated Chinese presidential candidate Jin Jié, a progressive politician seeking reforms within the Chinese government. The group finds itself sent to Shanghai with orders to rescue two VIPs: a woman named Hannah and her husband, with assistance from an intelligence agent named Kovic. Although the rescue is a success and Kovic takes the VIPs back to the USS Valkyrie, a Wasp-class amphibious assault ship, Tombstone becomes trapped in the city and is forced to rescue civilians against Pac's protests.
Shortly after returning to the Valkyrie, Garrison assigns Kovic as head of the squad and sends them to the USS Titan, a Nimitz-class aircraft carrier that has just been attacked, to recover its voyage data recorder before the wreckage sinks. Upon returning to the Valkyrie, Tombstone finds the ship under assault by Chinese marines. The squad rescues Garrison and the VIPs, but Kovic is fatally wounded and passes control of the squad to Recker.
Learning that China's air force is grounded due to a storm, Garrison assigns Tombstone to assist US forces planning to assault the Chinese-controlled Singapore airfield in order to weaken Chinese air superiority. Hannah volunteers to join Tombstone on their mission, much to Irish's chagrin. The airfield is destroyed by a missile strike; Pac is separated from Tombstone during the evacuation and is assumed killed in the blast. Hannah then ostensibly betrays Recker and Irish, allowing both to be captured by Chinese soldiers. Both men are imprisoned in the Kunlun Mountains for interrogation under Chang's orders. In his cell, Recker finds himself befriended by a Russian prisoner named "Dima" (a survivor of the Paris nuclear blast, now suffering from radiation poisoning). The two break out of their cell, start a mass prison riot and use the chaos to flee, with Recker rescuing Irish along the way. As the Chinese military arrives to quell the riot, Hannah prevents the group from being recaptured by a group of soldiers. Although Irish mistrusts her, Hannah explains that her earlier action was necessary for her mission, revealing that her husband is in fact Jin Jié, who survived Chang's assassination attempt. The group uses a tram to leave the mountains, only for it to be shot down by an enemy helicopter, killing Dima in the crash. Forced to make their way down on foot and enduring privation, the group eventually finds a jeep and drives towards the US-occupied city of Tashgar. During the journey, Hannah reveals how she lost her family to Chang's men after bringing Jin Jié to meet them, causing Irish to make amends with her for his behavior.
Upon reaching Tashgar, the squad finds US troops being besieged by both Chinese and Russian forces, and assists the US commander by destroying a nearby dam, flooding the area and eliminating the opposing forces. Learning the Valkyrie is within the region of the Suez Canal, Tombstone is airlifted to the ship and arrives to warn the vessel that it is blindly heading towards Chang's navy. Stopping Chinese forces from boarding the ship, the squad soon finds Jin Jié amongst other survivors, including Pac (who had survived the events in Singapore). As Chinese forces had been fighting under the assumption he was dead, Jin Jié convinces Recker to let him show his face and calm tensions between the three forces. The assault quickly ends, with Chinese soldiers beginning to spread the news of Jin Jié's return. Chang, wanting to prevent this and conceal the truth, proceeds to barrage the Valkyrie with his personal warship. Recker, Irish, and Hannah decide to board Chang's warship and destroy it with C-4 explosives. However, when the charges fail to detonate, Irish and Hannah each volunteer to manually replace them at the cost of their own life.
If the player does nothing, Chang destroys the Valkyrie, killing Pac, Garrison and Jin Jié; if the player chooses Irish or Hannah to rearm the explosives, he or she will be reported missing in action after the warship's destruction, while the survivor and Recker are recovered by the Valkyrie. During the credits, the player hears new dialogue between Irish and Hannah, discussing their pasts and how they have to keep moving forward with no regrets.

Development
Electronic Arts president Frank Gibeau confirmed the company's intention to release a sequel to Battlefield 3 during a keynote at the University of Southern California, where he said "There is going to be a Battlefield 4". Afterwards, an EA spokesperson told IGN: "Frank was speaking broadly about the Battlefield brand—a brand that EA is deeply passionate about and a fan community that EA is committed to." On the eve of Battlefield 3's launch, EA Digital Illusions CE told Eurogamer it was the Swedish studio's hope that it would one day get the opportunity to make Battlefield 4. "This feels like day one now," executive producer Patrick Bach said. "It's exciting. The whole Frostbite 2 thing has opened up a big landscape ahead of us so we can do whatever we want."
Battlefield 4 is built on the new Frostbite 3 engine, which enables more realistic environments with higher-resolution textures and particle effects. A new "networked water" system was also introduced, allowing all players in the game to see the same wave at the same time. Tessellation has also been overhauled.
An Alpha Trial commenced on June 17, 2013, with invitations randomly emailed to Battlefield 3 players the day prior. The trial ran for two weeks and featured the Siege of Shanghai map with all of its textures removed, essentially making it a "whitebox" test. Due to mixed reception of the two-player Co-op Mode in Battlefield 3, DICE decided to omit the mode from Battlefield 4 to focus on improving both the campaign and multiplayer components instead.
AMD and DICE partnered to bring AMD's Mantle API to Battlefield 4. The goal was to boost performance on AMD GCN Radeon graphics cards, providing a higher level of hardware-optimized performance than was previously possible with OpenGL or DirectX. Initial tests of AMD's Mantle showed it was an effective enhancement for slower processors.
DICE released an Open Beta for the game that was available on Windows (64-bit only), Xbox 360 and PlayStation 3. It featured the game modes Domination, Conquest and Obliteration, which were playable on the map Siege of Shanghai. The Open Beta started on October 4, 2013, and ended on October 15, 2013.

Technical issues and legal troubles
Upon release, Battlefield 4 was riddled with major technical bugs, glitches and crashes across all platforms. EA and DICE soon began releasing several patches for the game on all systems, and DICE later revealed that work on all of its future games (including Mirror's Edge, Star Wars: Battlefront and Battlefield 4 DLC) would be halted until Battlefield 4 was working properly. In December 2013, more than a month after the game's initial release, an EA representative said, "We know we still have a ways to go with fixing the game – it is absolutely our #1 priority. The team at DICE is working non-stop to update the game." EA President Peter Moore announced in January 2014 that the company did not see any negative impact to sales as a result of the myriad technical issues.
He said any negative impacts to sales were actually due to the transition from current-generation (PS3, Xbox 360) to next-generation consoles (PS4, Xbox One), and that other video game franchises like FIFA and Need for Speed were experiencing similar effects. As a reward for players who bought the game early and continued to play it despite all of the bugs and glitches, DICE gave players in February 2014 a full month of free multiplayer content, such as bronze and silver Battlepacks, XP boosts and events, camouflage skins, shortcut bundles for weapons, and additional content for Premium members.
Because of the widespread bugs and glitches, EA became the target of multiple law firms. The firm Holzer Holzer & Fistel, LLC launched an investigation into EA's public statements made between July 24 and December 4, 2013, to determine if the company intentionally misled its investors with information pertaining to "the development and sales of the Company's Battlefield 4 video game and the game's impact on EA's revenue and projects moving forward." Shortly thereafter, the law firm Robbins Geller Rudman & Dowd LLP similarly filed a class action lawsuit against EA for releasing false or misleading statements about the quality of Battlefield 4. A second class action lawsuit was announced only days later from the firm Bower Piven, which alleged that EA violated the Securities Exchange Act of 1934 by not properly informing its investors about the major bugs and glitches during development that may have prevented the investors from making an informed decision about Battlefield 4. Bower Piven sought out investors who lost more than US$200,000 to become the lead plaintiff. In October 2014, Judge Susan Illston dismissed the original case of one of the class action suits on the grounds that EA did not intentionally mislead investors; instead, its pre-release claims about Battlefield 4 were a "vague statement of corporate optimism", "an inactionable opinion" and "puffery".
Six months after the initial release of the game, in April 2014, DICE released a program called Community Test Environment (CTE), which let a limited number of PC gamers play a different version of Battlefield 4 that was designed to test new patches and updates before giving them a wide release. One of the major patches tested was an update to the game's netcode, specifically the "tickrate", which is how frequently the game and server update, measured in cycles per second. Because of the size of Battlefield 4 in terms of information, DICE initially chose a low tickrate. However, the low tickrate resulted in a number of issues, including problems with damage registration and "trade kills". The CTE program tested the game at a higher tickrate, along with fixes for other common problems, and began rolling out patches in mid-2014.
In October 2014, nearly a full year after the official release, with major updates still being put out, DICE LA producer David Sirland said the company acknowledged that the release of Battlefield 4 "absolutely" damaged the trust of the franchise's fanbase. Sirland said that the shaky release of Battlefield 4 caused the company to reevaluate its release model and plan on being more transparent and offering earlier beta tests with future installments, namely (at the time) with Battlefield Hardline (2015). Sirland also said: "We still probably have a lot of players who won't trust us to deliver a stable launch or a stable game. I don't want to say anything because I want to do.
I want them to look at what we're doing and what we are going to do and that would be my answer. I think we have to do things to get them to trust us, not say things to get them to trust us. Show by doing."

Marketing
In March 2013, Electronic Arts opened the Battlefield 4 website with three official teasers, entitled "Prepare 4 Battle". Together they hint at three kinds of battlespace: air, land and sea. EA then continued to release teaser trailers leading up to the unveiling of Battlefield 4 at the Game Developers Conference on March 26, 2013. The following day, Battlefield 4's first gameplay trailer, which doubled as a showcase for the Frostbite 3 engine, was released. Shortly thereafter, EA listed the game for pre-order on Origin for Microsoft Windows, PlayStation 3, and Xbox 360; however, EA excluded any mention of the next-generation consoles.
In July 2012, Battlefield 4 was first announced when EA advertised on its Origin client that those who pre-ordered Medal of Honor: Warfighter (either the Digital Deluxe or the limited edition) would receive early access to the Battlefield 4 beta; this was later expanded to include any Battlefield 3 Premium owners and any Origin users who pre-purchased the Battlefield 4 Digital Deluxe Edition. Players who qualified for access in more than one way were granted only one beta pass for their account, and passes were not transferable to other players. The "Exclusive" beta started on October 1, 2013, followed by the open beta, which went live on October 4. The beta ran on three platforms (PC, Xbox 360, and PlayStation 3) and featured the Siege of Shanghai map on the Conquest game mode.
DICE revealed more Battlefield 4 content, such as multiplayer modes, at the E3 2013 event on June 10, 2013, and allowed participants to play the game at the same event. More information was released at Gamescom 2013 in Cologne, Germany, such as the "Paracel Storm" multiplayer map and Battlefield 4 Premium. Battlefield 4 Premium includes five digital expansion packs featuring new maps and in-game content; two weeks' early access to all expansion packs; personalization options including camos, paints, emblems, dog tags and more; priority position in server queues; weekly updates with new content; double XP events; and 12 Battle Packs. Battle Packs are digital packages that contain a combination of new weapon accessories, dog tags, knives, XP boosts, and character customization items; three are included with all pre-orders of the Origin Digital Deluxe edition. The service also transfers a player's Premium membership from Xbox 360 to Xbox One or from PS3 to PS4. Premium membership pre-orders started the day the service was announced (August 21, 2013). DICE also announced that players who purchased the game for a then-current-generation system (PlayStation 3 or Xbox 360) would be able to trade it in for the PlayStation 4 or Xbox One version for as little as $10. Additionally, all PlayStation 3 and 4 copies include a code in the box to redeem a digital copy on the PlayStation Store.
An important element of DICE's marketing strategy for Battlefield 4 was the series of TV and web advertisements entitled Only in Battlefield 4. Each of these TV spots was narrated by a player of Battlefield 4 describing one of the unique experiences they encountered, along with a re-creation of the event using gameplay footage. These advertisements highlighted the free-form nature of the upcoming game, such as the destructibility of the environment and the dynamic nature of the game's combat engine.
These events included demonstrations of the new Levolution feature, upgrades to gameplay, and unscripted moments that cannot occur in other games' multiplayer modes.
Due to poor reception from gamers, on May 30, 2013, EA discontinued the online pass for all existing and future EA games, including Battlefield 4. A companion application was also released for iOS and Android.

Downloadable content
Battlefield 4 featured a total of five downloadable content (DLC) packs that included new maps and additions to gameplay. All five DLC packs were developed by DICE LA and were available two weeks prior to their scheduled release by players who had purchased Premium. Once support for Battlefield 4 Premium ended, DICE LA announced all future DLC would be free.

China Rising
On May 21, 2013, DICE unveiled Battlefield 4: China Rising in a Battlelog post and stated that it would include four new maps (Silk Road, Altai Range, Dragon Pass, and Guilin Peaks) on the Chinese mainland, ten new assignments, new vehicles, and the Air Superiority gametype. It was available at no extra cost to those who pre-ordered the game. It was released to Premium players on December 3, 2013, followed by a general release on December 17, 2013.

Second Assault
On June 10, 2013, DICE LA unveiled Battlefield 4: Second Assault during the Microsoft press conference at E3 2013. It was announced that it would be the first expansion pack to be released for Battlefield 4 and would debut on the Xbox One. It was released on November 22, 2013, the same day the Xbox One was launched. The expansion features the return of four fan-favorite maps from Battlefield 3 and introduces Capture the Flag as a new gametype. On February 18, 2014, Second Assault became available as a Premium exclusive for Xbox 360, PlayStation 3, PlayStation 4, and PC. It became available for non-Premium users on March 4, 2014. From January 29 to February 28, 2015, the expansion was free of charge to all EA Access subscribers.

Naval Strike
On August 20, 2013, DICE LA unveiled Battlefield 4: Naval Strike at Gamescom 2013. It involves dynamic combat on four new maps, Wave Breaker, Nansha Strike, Operation Mortar, and Lost Islands, which take place in the South China Sea, and features a new mode called "Carrier Assault" inspired by Battlefield 2142. The release was originally planned for March 25, 2014, for Premium members and April 8, 2014, for non-Premium members, but the Xbox One and PC versions were delayed several hours before release without a new release date being set. On March 26, 2014, Naval Strike was released for Premium members on PlayStation 3, PlayStation 4 and Xbox 360. The Xbox One version was released for Premium members on March 27, 2014, and the PC version was released on March 31, 2014.

Dragon's Teeth
At Gamescom 2013, DICE LA unveiled Battlefield 4: Dragon's Teeth. Its maps take place in war-torn cities locked down by the People's Liberation Army. Dragon's Teeth was released on July 15, 2014, for Battlefield 4 Premium members. For non-Premium members it was released two weeks later, on July 29, 2014. A new game mode included in the Dragon's Teeth DLC is called "Chain Link". Dragon's Teeth includes four new maps: Lumphini Garden, Pearl Market, Propaganda, and Sunken Dragon. There are 11 new assignments and a new assault drone called the "RAWR" that can be found on those four maps.

Final Stand
On August 20, 2013, DICE LA unveiled Battlefield 4: Final Stand at Gamescom 2013.
Final Stand focuses on the conclusion of the in-game War of 2020. It includes four new maps and "secret prototype weapons and vehicles". The four maps included are "Operation Whiteout", "Giants of Karelia", "Hammerhead" and "Hangar 21". New weapons include the Rorsch X1 handheld railgun and gadgets including the DS-3 Decoy and XD-1 Accipiter MKV, as well as a hovercraft tank based on the Levkov 1937 hovercraft MBT. It was released for Battlefield 4 Premium members at 00:01 on November 18, 2014, and for non-Premium Battlefield 4 players at 00:01 on December 2, 2014.

Weapons Crate
The Weapons Crate DLC was announced by DICE LA on March 30, 2015, as a free DLC. It added five weapons to the game: the Mare's Leg, AN-94, Groza-1, Groza-4 and the L86A2, along with the 'Gun Master' game mode from Battlefield 3 and many other stat changes. It first appeared in alpha form in the Community Test Environment and was released along with the Spring 2015 Patch on May 26, 2015.

Night Operations
In August 2015, DICE and DICE LA announced the expansion pack Night Operations, a free DLC pack. The first map to be released was Zavod: Graveyard Shift, a night-time version of the Battlefield 4 map Zavod 311; it was released with the Summer 2015 Patch. Two other night maps, night-time versions of Siege of Shanghai and Golmud Railway, were also in development; these maps were playable in the Battlefield 4 Community Test Environment but remained unreleased as further development on Battlefield 4 ended. All three maps were developed by DICE LA and tested in the Community Test Environment, with player feedback taken on board.

Community Operations
Community Operations, a free DLC pack, was released on October 27, 2015. Its map, Outbreak, is medium-sized, with much vegetation such as trees, shrubs, and grass for ambushing enemies. There are limited numbers of heavy vehicles such as tanks and LAVs, and no anti-air vehicles. The map does not include air vehicles such as stealth jets, scout helicopters and attack aircraft. This map was created by DICE Los Angeles and the Battlefield 4 gaming community. The update also contains major changes to weapons and vehicles.

Legacy Operations
Legacy Operations, a free DLC pack, was released on December 15, 2015. The map is an updated version of the Battlefield 2 map Dragon Valley. It was released alongside the Winter Patch content update.

Premium
Premium is a downloadable pass that offers all of the downloadable content for a discounted price. Premium offers a range of personalization options and items, such as exclusive dog tags or camos. It also offers select days on which special events take place only for Premium members.

Reception
Critical reception
Battlefield 4 received positive reviews from critics. Chris Watters of GameSpot praised Obliteration mode and the multiplayer elements but was otherwise unimpressed with the campaign. IGN's Mitch Dyer stated that "Battlefield 4 is a greatest hits album of DICE's multiplayer legacy". Evan Lahti of PC Gamer stated that although the game strongly resembles Battlefield 3, it still manages to remain "a visually and sonically satisfying, reliably intense FPS". Commander Mode and the diverse map selection within multiplayer were also praised as good additions to the game.
Joystiq's David Hinkle said that the game "drops players into a sandbox and unhooks all tethers, loosing scores of soldiers to squad up and take down the opposition however they choose". Hinkle praised the campaign elements, but found that the multiplayer did not hold any surprises. GameZone's Lance Liebl stated, "Your success in Battlefield is up to you and how well you work as a team. And it's one of the most rewarding games I've played. Battlelog needs some refinement, and there's still way too many crashes, but the multiplayer more than makes up for all of it." Machinima's Lawrence Sonntag praised the Levolution feature and the multiplayer mode.
However, several reviewers noted that the multiplayer part of the game had been released with a lot of game-breaking bugs on PC, PlayStation 4 and Xbox One, such as server crashes and lag. Polygon reviewed the game the day of its release and gave it a 7.5, then later downgraded the score to 4 after acknowledging that the game "was still barely playable for many players". DICE later acknowledged the issues with the multiplayer part of the game, said it was working to fix them, and promised not to work on expansions or future projects until the game's problems were resolved. Despite this promise, the game's second expansion was released while numerous recurring problems had yet to be resolved.

Ban in China
In late December 2013, shortly after the release of the "China Rising" DLC pack, China banned the sale of Battlefield 4, requesting stores and online vendors to remove the game and encouraging those who had already purchased the game to remove it from their consoles and/or PCs. The game was viewed as a national security risk in the form of a cultural invasion, as the DLC includes four maps on the Chinese mainland. An editorial from the China National Defense Newspaper (a subsidiary of the PLA Daily) published in December 2013 criticized the game for discrediting China's national sovereignty, and stated that while in the past the Soviet Union would often be used as an imaginary enemy in video games, such games have recently shifted to China.

Sales
During the first week of sales in the United Kingdom, Battlefield 4 became the second-best-selling game on all available formats, only behind Assassin's Creed IV: Black Flag. The game's sales were down 69% compared to 2011's Battlefield 3. EA blamed the fall in demand on uncertainty caused by the upcoming transition to eighth-generation consoles. The PlayStation 3 version of Battlefield 4 topped the Media Create sales charts in Japan during its first week of release, selling 121,699 copies and finishing ahead of Pokémon X and Y, which had topped the charts for the previous four weeks. Sales of the PlayStation 3 version in Japan, however, fell 84.148% to only 19,291 copies in its second week of release, losing the number-one spot to God Eater 2. According to NPD Group figures, Battlefield 4 was the second-best-selling game of November in the United States, only behind Call of Duty: Ghosts. In February 2014, EA announced that the Premium service for the game had sold more than 1.6 million copies. In May 2014, the game had sold more than 7 million copies.

Awards
According to EA, Battlefield 4 received awards from over 30 gaming publications prior to its release. Battlefield 4 appeared on several year-end lists of the best first-person shooter games of 2013, receiving wins from the 18th Satellite Awards and GamesRadar. During the 17th Annual D.I.C.E.
Awards, the Academy of Interactive Arts & Sciences nominated Battlefield 4 for "Action Game of the Year", "Online Game of the Year", "Outstanding Achievement in Sound Design", and "Outstanding Achievement in Visual Engineering". Notes References External links Battlefield 4 at MobyGames 2013 video games Video games about amputees Asymmetrical multiplayer video games Baku in fiction Fiction about discrimination Electronic Arts games Esports games Fiction about assassinations Fiction about the People's Liberation Army First-person shooters Frostbite (game engine) games Fiction about government Video game interquels Islands in fiction Martyrdom in fiction Multiplayer and single-player video games PlayStation 3 games PlayStation 4 games Fiction about sacrifices Trains in fiction Video game sequels Video games about the United States Marine Corps Video games developed in Sweden Video games set in 2020 Video games set in the future Video games set in abandoned buildings and structures Video games set in Azerbaijan Video games set in China Video games set in Egypt Video games set in Hong Kong Video games set in Iran Video games set in North Korea Video games set in France Video games set in Russia Video games set in Singapore Video games set in Thailand Video games set in Ukraine Video games that support Mantle (API) Video games using Havok Windows games Works banned in China World War III video games Xbox Cloud Gaming games Xbox 360 games Xbox One games
Battlefield 4
Physics
7,504
1,104,704
https://en.wikipedia.org/wiki/Covariance%20and%20contravariance%20%28computer%20science%29
Many programming language type systems support subtyping. For instance, if the type Cat is a subtype of Animal, then an expression of type Cat should be substitutable wherever an expression of type Animal is used. Variance is how subtyping between more complex types relates to subtyping between their components. For example, how should a list of Cats relate to a list of Animals? Or how should a function that returns Cat relate to a function that returns Animal? Depending on the variance of the type constructor, the subtyping relation of the simple types may be either preserved, reversed, or ignored for the respective complex types. In the OCaml programming language, for example, "list of Cat" is a subtype of "list of Animal" because the list type constructor is covariant. This means that the subtyping relation of the simple types is preserved for the complex types. On the other hand, "function from Animal to String" is a subtype of "function from Cat to String" because the function type constructor is contravariant in the parameter type. Here, the subtyping relation of the simple types is reversed for the complex types.
A programming language designer will consider variance when devising typing rules for language features such as arrays, inheritance, and generic datatypes. By making type constructors covariant or contravariant instead of invariant, more programs will be accepted as well-typed. On the other hand, programmers often find contravariance unintuitive, and accurately tracking variance to avoid runtime type errors can lead to complex typing rules. In order to keep the type system simple and allow useful programs, a language may treat a type constructor as invariant even if it would be safe to consider it variant, or treat it as covariant even though that could violate type safety.

Formal definition
Suppose A and B are types, and I<U> denotes application of a type constructor I with type argument U. Within the type system of a programming language, a typing rule for a type constructor I is:
covariant if it preserves the ordering of types (≤), which orders types from more specific to more generic: if A ≤ B, then I<A> ≤ I<B>;
contravariant if it reverses this ordering: if A ≤ B, then I<B> ≤ I<A>;
bivariant if both of these apply (i.e., if A ≤ B, then I<A> ≡ I<B>);
variant if covariant, contravariant or bivariant;
invariant or nonvariant if not variant.
The article considers how this applies to some common type constructors.

C# examples
For example, in C#, if Cat is a subtype of Animal, then:
IEnumerable<Cat> is a subtype of IEnumerable<Animal>. The subtyping is preserved because IEnumerable<T> is covariant on T.
Action<Animal> is a subtype of Action<Cat>. The subtyping is reversed because Action<T> is contravariant on T.
Neither IList<Cat> nor IList<Animal> is a subtype of the other, because IList<T> is invariant on T.
The variance of a C# generic interface is declared by placing the out (covariant) or in (contravariant) attribute on (zero or more of) its type parameters. The above interfaces are declared as IEnumerable<out T>, Action<in T>, and IList<T>. Types with more than one type parameter may specify different variances on each type parameter. For example, the delegate type Func<in T, out TResult> represents a function with a contravariant input parameter of type T and a covariant return value of type TResult. The compiler checks that all types are defined and used consistently with their annotations, and otherwise signals a compilation error.
The typing rules for interface variance ensure type safety. For example, an Action<Animal> represents a first-class function expecting an argument of type Animal, and a function that can handle any type of animal can always be used instead of one that can only handle cats.
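To make the three behaviors concrete, the following is a minimal C# sketch (the Animal and Cat classes are assumed here purely for illustration) showing which assignments the compiler accepts under these variance annotations:

using System;
using System.Collections.Generic;

class Animal { }
class Cat : Animal { }

static class VarianceDemo
{
    static void Main()
    {
        // Covariance: IEnumerable<out T> preserves subtyping, so a
        // sequence of Cats may be used as a sequence of Animals.
        IEnumerable<Cat> cats = new List<Cat> { new Cat() };
        IEnumerable<Animal> animals = cats; // accepted

        // Contravariance: Action<in T> reverses subtyping, so a handler
        // for any Animal may be used where a Cat handler is expected.
        Action<Animal> handleAnimal = a => Console.WriteLine(a.GetType().Name);
        Action<Cat> handleCat = handleAnimal; // accepted
        handleCat(new Cat()); // prints "Cat"

        // Invariance: IList<T> is neither covariant nor contravariant,
        // so both of the following would be compile-time errors:
        // IList<Animal> l1 = new List<Cat>();
        // IList<Cat> l2 = new List<Animal>();
    }
}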
Arrays
Read-only data types (sources) can be covariant; write-only data types (sinks) can be contravariant. Mutable data types which act as both sources and sinks should be invariant. To illustrate this general phenomenon, consider the array type. For the type Animal we can make the type Animal[], which is an "array of animals". For the purposes of this example, this array supports both reading and writing elements. We have the option to treat this as either:
covariant: a Cat[] is an Animal[];
contravariant: an Animal[] is a Cat[];
invariant: an Animal[] is not a Cat[] and a Cat[] is not an Animal[].
If we wish to avoid type errors, then only the third choice is safe. Clearly, not every Animal[] can be treated as if it were a Cat[], since a client reading from the array will expect a Cat, but an Animal[] may contain e.g. a Dog. So, the contravariant rule is not safe. Conversely, a Cat[] cannot be treated as an Animal[]. It should always be possible to put a Dog into an Animal[]. With covariant arrays this cannot be guaranteed to be safe, since the backing store might actually be an array of cats. So, the covariant rule is also not safe; the array constructor should be invariant. Note that this is only an issue for mutable arrays; the covariant rule is safe for immutable (read-only) arrays. Likewise, the contravariant rule would be safe for write-only arrays.

Covariant arrays in Java and C#
Early versions of Java and C# did not include generics, also termed parametric polymorphism. In such a setting, making arrays invariant rules out useful polymorphic programs. For example, consider writing a function to shuffle an array, or a function that tests two arrays for equality using the Object.equals method on the elements. The implementation does not depend on the exact type of element stored in the array, so it should be possible to write a single function that works on all types of arrays. It is easy to implement functions of type:

boolean equalArrays(Object[] a1, Object[] a2);
void shuffleArray(Object[] a);

However, if array types were treated as invariant, it would only be possible to call these functions on an array of exactly the type Object[]. One could not, for example, shuffle an array of strings. Therefore, both Java and C# treat array types covariantly. For instance, in Java String[] is a subtype of Object[], and in C# string[] is a subtype of object[].
As discussed above, covariant arrays lead to problems with writes into the array. Java and C# deal with this by marking each array object with a type when it is created. Each time a value is stored into an array, the execution environment will check that the run-time type of the value is equal to the run-time type of the array. If there is a mismatch, an ArrayStoreException (Java) or ArrayTypeMismatchException (C#) is thrown:

// a is a single-element array of String
String[] a = new String[1];

// b is an array of Object
Object[] b = a;

// Assign an Integer to b. This would be possible if b really were
// an array of Object, but since it really is an array of String,
// we will get a java.lang.ArrayStoreException.
b[0] = 1;

In the above example, one can read from the array (b) safely. It is only trying to write to the array that can lead to trouble. One drawback to this approach is that it leaves the possibility of a run-time error that a stricter type system could have caught at compile-time. Also, it hurts performance because each write into an array requires an additional run-time check. With the addition of generics, Java and C# now offer ways to write this kind of polymorphic function without relying on covariance.
The array comparison and shuffling functions can be given the parameterized types

<T> boolean equalArrays(T[] a1, T[] a2);
<T> void shuffleArray(T[] a);

Alternatively, to enforce that a C# method accesses a collection in a read-only way, one can use the interface IEnumerable<object> instead of passing it an array object[].

Function types
Languages with first-class functions have function types like "a function expecting a Cat and returning an Animal" (written Cat -> Animal in OCaml syntax or Func<Cat,Animal> in C# syntax). Those languages also need to specify when one function type is a subtype of another; that is, when it is safe to use a function of one type in a context that expects a function of a different type. It is safe to substitute a function f for a function g if f accepts a more general type of argument and returns a more specific type than g. For example, functions of type Animal -> Cat, Cat -> Cat, and Animal -> Animal can be used wherever a Cat -> Animal was expected. (One can compare this to the robustness principle of communication: "be liberal in what you accept and conservative in what you produce.") The general rule is:

S1 -> S2 ≤ T1 -> T2 if T1 ≤ S1 and S2 ≤ T2.

Using inference rule notation the same rule can be written as:

T1 ≤ S1    S2 ≤ T2
------------------------
S1 -> S2 ≤ T1 -> T2

In other words, the -> type constructor is contravariant in the parameter (input) type and covariant in the return (output) type. This rule was first stated formally by John C. Reynolds, and further popularized in a paper by Luca Cardelli. When dealing with functions that take functions as arguments, this rule can be applied several times. For example, by applying the rule twice, we see that (A -> B) -> B ≤ (A' -> B) -> B if A' ≤ A. In other words, the type (A -> B) -> B is covariant in the position of A. For complicated types it can be confusing to mentally trace why a given type specialization is or isn't type-safe, but it is easy to calculate which positions are co- and contravariant: a position is covariant if it is on the left side of an even number of arrows applying to it.

Inheritance in object-oriented languages
When a subclass overrides a method in a superclass, the compiler must check that the overriding method has the right type. While some languages require that the type exactly matches the type in the superclass (invariance), it is also type safe to allow the overriding method to have a "better" type. By the usual subtyping rule for function types, this means that the overriding method should return a more specific type (return type covariance) and accept a more general argument (parameter type contravariance). In UML notation, the possibilities can be diagrammed (where Class B is the subclass that extends Class A, which is the superclass).
For a concrete example, suppose we are writing a class to model an animal shelter. We assume that Cat is a subclass of Animal, and that we have a base class (using Java syntax)

class AnimalShelter {
    Animal getAnimalForAdoption() {
        // ...
    }

    void putAnimal(Animal animal) {
        // ...
    }
}

Now the question is: if we subclass AnimalShelter, what types are we allowed to give to getAnimalForAdoption and putAnimal?

Covariant method return type
In a language which allows covariant return types, a derived class can override the getAnimalForAdoption method to return a more specific type:

class CatShelter extends AnimalShelter {
    Cat getAnimalForAdoption() {
        return new Cat();
    }
}

Among mainstream OO languages, Java, C++ and C# (as of version 9.0) support covariant return types. Adding the covariant return type was one of the first modifications of the C++ language approved by the standards committee in 1998. Scala and D also support covariant return types.
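As a brief sketch of covariant return types in C# 9.0 (reusing the illustrative Animal, Cat and shelter names from the running example, not any library API):

using System;

class Animal { }
class Cat : Animal { }

class AnimalShelter
{
    public virtual Animal GetAnimalForAdoption() => new Animal();
}

class CatShelter : AnimalShelter
{
    // C# 9.0 permits the override to declare the more specific
    // return type Cat (return type covariance).
    public override Cat GetAnimalForAdoption() => new Cat();
}

static class ShelterDemo
{
    static void Main()
    {
        AnimalShelter shelter = new CatShelter();
        // Callers through the base type still receive an Animal,
        // so the covariant return is type safe.
        Animal adopted = shelter.GetAnimalForAdoption();
        Console.WriteLine(adopted is Cat); // True
    }
}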
Contravariant method parameter type
Similarly, it is type safe to allow an overriding method to accept a more general argument than the method in the base class:

class CatShelter extends AnimalShelter {
    void putAnimal(Object animal) {
        // ...
    }
}

Only a few object-oriented languages actually allow this (for example, Python when typechecked with mypy). C++, Java and most other languages that support overloading and/or shadowing would interpret this as a method with an overloaded or shadowed name. However, Sather supported both covariance and contravariance. Calling conventions for overridden methods are covariant with out parameters and return values, and contravariant with normal parameters (with the mode in).

Covariant method parameter type
A couple of mainstream languages, Eiffel and Dart, allow the parameters of an overriding method to have a more specific type than the method in the superclass (parameter type covariance). Thus, the following Dart code would type check, with putAnimal overriding the method in the base class:

class CatShelter extends AnimalShelter {
    void putAnimal(covariant Cat animal) {
        // ...
    }
}

This is not type safe. By up-casting a CatShelter to an AnimalShelter, one can try to place a dog in a cat shelter. That does not meet the parameter restrictions and will result in a runtime error. The lack of type safety (known as the "catcall problem" in the Eiffel community, where "cat" or "CAT" is a Changed Availability or Type) has been a long-standing issue. Over the years, various combinations of global static analysis, local static analysis, and new language features have been proposed to remedy it, and these have been implemented in some Eiffel compilers.
Despite the type safety problem, the Eiffel designers consider covariant parameter types crucial for modeling real world requirements. The cat shelter illustrates a common phenomenon: it is a kind of animal shelter but has additional restrictions, and it seems reasonable to use inheritance and restricted parameter types to model this. In proposing this use of inheritance, the Eiffel designers reject the Liskov substitution principle, which states that objects of subclasses should always be less restricted than objects of their superclass.
One other instance of a mainstream language allowing covariance in method parameters is PHP in regards to class constructors. In the following example, the __construct() method is accepted, despite the method parameter being covariant to the parent's method parameter. Were this method anything other than __construct(), an error would occur:

interface AnimalInterface {}
interface DogInterface extends AnimalInterface {}

class Dog implements DogInterface {}

class Pet
{
    public function __construct(AnimalInterface $animal) {}
}

class PetDog extends Pet
{
    public function __construct(DogInterface $dog)
    {
        parent::__construct($dog);
    }
}

Another example where covariant parameters seem helpful is so-called binary methods, i.e. methods where the parameter is expected to be of the same type as the object the method is called on. An example is the compareTo method: a.compareTo(b) checks whether a comes before or after b in some ordering, but the way to compare, say, two rational numbers will be different from the way to compare two strings. Other common examples of binary methods include equality tests, arithmetic operations, and set operations like subset and union.
In older versions of Java, the comparison method was specified as an interface Comparable:

interface Comparable {
    int compareTo(Object o);
}

The drawback of this is that the method is specified to take an argument of type Object. A typical implementation would first down-cast this argument (throwing an error if it is not of the expected type):

class RationalNumber implements Comparable {
    int numerator;
    int denominator;
    // ...

    public int compareTo(Object other) {
        RationalNumber otherNum = (RationalNumber)other;
        return Integer.compare(numerator * otherNum.denominator,
                               otherNum.numerator * denominator);
    }
}

In a language with covariant parameters, the argument to compareTo could be directly given the desired type RationalNumber, hiding the typecast. (Of course, this would still give a runtime error if compareTo was then called on e.g. a String.)

Avoiding the need for covariant parameter types
Other language features can provide the apparent benefits of covariant parameters while preserving Liskov substitutability. In a language with generics (a.k.a. parametric polymorphism) and bounded quantification, the previous examples can be written in a type-safe way. Instead of defining AnimalShelter, we define a parameterized class Shelter<T>. (One drawback of this is that the implementer of the base class needs to foresee which types will need to be specialized in the subclasses.)

class Shelter<T extends Animal> {
    T getAnimalForAdoption() {
        // ...
    }

    void putAnimal(T animal) {
        // ...
    }
}

class CatShelter extends Shelter<Cat> {
    Cat getAnimalForAdoption() {
        // ...
    }

    void putAnimal(Cat animal) {
        // ...
    }
}

Similarly, in recent versions of Java the Comparable interface has been parameterized, which allows the downcast to be omitted in a type-safe way:

class RationalNumber implements Comparable<RationalNumber> {
    int numerator;
    int denominator;
    // ...

    public int compareTo(RationalNumber otherNum) {
        return Integer.compare(numerator * otherNum.denominator,
                               otherNum.numerator * denominator);
    }
}

Another language feature that can help is multiple dispatch. One reason that binary methods are awkward to write is that in a call like a.compareTo(b), selecting the correct implementation of compareTo really depends on the runtime type of both a and b, but in a conventional OO language only the runtime type of a is taken into account. In a language with Common Lisp Object System (CLOS)-style multiple dispatch, the comparison method could be written as a generic function where both arguments are used for method selection.
Giuseppe Castagna observed that in a typed language with multiple dispatch, a generic function can have some parameters which control dispatch and some "left-over" parameters which do not. Because the method selection rule chooses the most specific applicable method, if a method overrides another method, then the overriding method will have more specific types for the controlling parameters. On the other hand, to ensure type safety the language still must require the left-over parameters to be at least as general. Using the previous terminology, types used for runtime method selection are covariant, while types not used for runtime method selection are contravariant. Conventional single-dispatch languages like Java also obey this rule: only one argument is used for method selection (the receiver object, passed along to a method as the hidden argument this), and indeed the type of this is more specialized inside overriding methods than in the superclass.
Castagna suggests that examples where covariant parameter types are superior (particularly, binary methods) should be handled using multiple dispatch, which is naturally covariant. However, most programming languages do not support multiple dispatch. Summary of variance and inheritance The following table summarizes the rules for overriding methods in the languages discussed above. Generic types In programming languages that support generics (a.k.a. parametric polymorphism), the programmer can extend the type system with new constructors. For example, a C# interface like IList<T> makes it possible to construct new types like IList<Animal> or IList<Cat>. The question then arises what the variance of these type constructors should be. There are two main approaches. In languages with declaration-site variance annotations (e.g., C#), the programmer annotates the definition of a generic type with the intended variance of its type parameters. With use-site variance annotations (e.g., Java), the programmer instead annotates the places where a generic type is instantiated. Declaration-site variance annotations The most popular languages with declaration-site variance annotations are C# and Kotlin (using the keywords out and in), and Scala and OCaml (using the annotations + and -). C# only allows variance annotations for interface types, while Kotlin, Scala and OCaml allow them for both interface types and concrete data types. Interfaces In C#, each type parameter of a generic interface can be marked covariant (out), contravariant (in), or invariant (no annotation). For example, we can define an interface IEnumerator<T> of read-only iterators, and declare it to be covariant (out) in its type parameter.

interface IEnumerator<out T>
{
    T Current { get; }
    bool MoveNext();
}

With this declaration, IEnumerator will be treated as covariant in its type parameter, e.g. IEnumerator<Cat> is a subtype of IEnumerator<Animal>. The type checker enforces that each method declaration in an interface only mentions the type parameters in a way consistent with the in/out annotations. That is, a parameter that was declared covariant must not occur in any contravariant positions (where a position is contravariant if it occurs under an odd number of contravariant type constructors). The precise rule is that the return types of all methods in the interface must be valid covariantly and all the method parameter types must be valid contravariantly, where valid S-ly is defined as follows: Non-generic types (classes, structs, enums, etc.) are valid both co- and contravariantly. A type parameter T is valid covariantly if it was not marked in, and valid contravariantly if it was not marked out. An array type A[] is valid S-ly if A is. (This is because C# has covariant arrays.) A generic type G<A1, ..., An> is valid S-ly if for each parameter Ai, Ai is valid S-ly and the ith parameter to G is declared covariant, or Ai is valid (not S)-ly and the ith parameter to G is declared contravariant, or Ai is valid both covariantly and contravariantly and the ith parameter to G is declared invariant. As an example of how these rules apply, consider the IList<T> interface.

interface IList<T>
{
    void Insert(int index, T item);
    IEnumerator<T> GetEnumerator();
}

The parameter type T of Insert must be valid contravariantly, i.e. the type parameter T must not be tagged out. Similarly, the result type IEnumerator<T> of GetEnumerator must be valid covariantly, i.e. (since IEnumerator is a covariant interface) the type parameter T must be valid covariantly, i.e. T must not be tagged in. This shows that the IList interface is not allowed to be marked either co- or contravariant.
In the common case of a generic data structure such as IList, these restrictions mean that an out parameter can only be used for methods getting data out of the structure, and an in parameter can only be used for methods putting data into the structure, hence the choice of keywords. Data C# allows variance annotations on the parameters of interfaces, but not the parameters of classes. Because fields in C# classes are always mutable, variantly parameterized classes in C# would not be very useful. But languages which emphasize immutable data can make good use of covariant data types. For example, in all of Scala, Kotlin and OCaml the immutable list type is covariant: List[Cat] is a subtype of List[Animal]. Scala's rules for checking variance annotations are essentially the same as C#'s. However, there are some idioms that apply to immutable data structures in particular. They are illustrated by the following excerpt from the definition of the List[A] class.

sealed abstract class List[+A] extends AbstractSeq[A] {
    def head: A
    def tail: List[A]

    /** Adds an element at the beginning of this list. */
    def ::[B >: A] (x: B): List[B] =
        new scala.collection.immutable.::(x, this)
    /** ... */
}

First, class members that have a variant type must be immutable. Here, head has the type A, which was declared covariant (+), and indeed head was declared as a method (def). Trying to declare it as a mutable field (var) would be rejected as a type error. Second, even if a data structure is immutable, it will often have methods where the parameter type occurs contravariantly. For example, consider the method :: which adds an element to the front of a list. (The implementation works by creating a new object of the similarly named class ::, the class of nonempty lists.) The most obvious type to give it would be

def :: (x: A): List[A]

However, this would be a type error, because the covariant parameter A appears in a contravariant position (as a function parameter). But there is a trick to get around this problem. We give :: a more general type, which allows adding an element of any type B as long as B is a supertype of A. Note that this relies on List being covariant, since this has type List[A] and we treat it as having type List[B]. At first glance it may not be obvious that the generalized type is sound, but if the programmer starts out with the simpler type declaration, the type errors will point out the place that needs to be generalized. Inferring variance It is possible to design a type system where the compiler automatically infers the best possible variance annotations for all datatype parameters. However, the analysis can get complex for several reasons. First, the analysis is nonlocal, since the variance of an interface depends on the variance of all the interfaces it mentions. Second, in order to get unique best solutions the type system must allow bivariant parameters (which are simultaneously co- and contravariant). And finally, the variance of type parameters should arguably be a deliberate choice by the designer of an interface, not something that just happens. For these reasons most languages do very little variance inference. C# and Scala do not infer any variance annotations at all. OCaml can infer the variance of parameterized concrete datatypes, but the programmer must explicitly specify the variance of abstract types (interfaces).
For example, consider an OCaml datatype which wraps a function:

type ('a, 'b) t = T of ('a -> 'b)

The compiler will automatically infer that t is contravariant in the first parameter and covariant in the second. The programmer can also provide explicit annotations, which the compiler will check are satisfied. Thus the following declaration is equivalent to the previous one:

type (-'a, +'b) t = T of ('a -> 'b)

Explicit annotations in OCaml become useful when specifying interfaces. For example, the standard library interface for association tables includes an annotation saying that the map type constructor is covariant in the result type.

module type S =
sig
    type key
    type (+'a) t
    val empty: 'a t
    val mem: key -> 'a t -> bool
    ...
end

This ensures that, e.g., a map producing cat values is a subtype of a map producing animal values. Use-site variance annotations (wildcards) One drawback of the declaration-site approach is that many interface types must be made invariant. For example, we saw above that IList needed to be invariant, because it contained both Insert and GetEnumerator. In order to expose more variance, the API designer could provide additional interfaces which provide subsets of the available methods (e.g. an "insert-only list" which only provides Insert). However this quickly becomes unwieldy. Use-site variance means the desired variance is indicated with an annotation at the specific site in the code where the type will be used. This gives users of a class more opportunities for subtyping without requiring the designer of the class to define multiple interfaces with different variance. Instead, at the point a generic type is instantiated to an actual parameterized type, the programmer can indicate that only a subset of its methods will be used. In effect, each definition of a generic class also makes available interfaces for the covariant and contravariant parts of that class. Java provides use-site variance annotations through wildcards, a restricted form of bounded existential types. A parameterized type can be instantiated by a wildcard together with an upper or lower bound, e.g. List<? extends Animal> or List<? super Animal>. An unbounded wildcard like List<?> is equivalent to List<? extends Object>. Such a type represents List<X> for some unknown type X which satisfies the bound. For example, if l has type List<? extends Animal>, then the type checker will accept

Animal a = l.get(3);

because the type X is known to be a subtype of Animal, but

l.add(new Animal());

will be rejected as a type error, since an Animal is not necessarily an X. In general, given some interface I<T>, a reference to an I<? extends T> forbids using methods from the interface where T occurs contravariantly in the type of the method. Conversely, if l had type List<? super Animal> one could call l.add(new Animal()), but l.get would only be known to return an Object. While non-wildcard parameterized types in Java are invariant (e.g. there is no subtyping relationship between List<Cat> and List<Animal>), wildcard types can be made more specific by specifying a tighter bound. For example, List<? extends Cat> is a subtype of List<? extends Animal>. This shows that wildcard types are covariant in their upper bounds (and also contravariant in their lower bounds). In total, given a wildcard type like Collection<? extends Animal>, there are three ways to form a subtype: by specializing the class Collection, by specifying a tighter bound than Animal, or by replacing the wildcard with a specific type. By applying two of the above three forms of subtyping, it becomes possible, for example, to pass an argument of type List<Cat> to a method expecting a Collection<? extends Animal>. This is the kind of expressiveness that results from covariant interface types. The type Collection<? extends Animal> acts as an interface type containing only the covariant methods of Collection, but the implementer of Collection did not have to define it ahead of time.
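The read/write asymmetry just described can be checked directly. Here is a minimal sketch, again assuming hypothetical Animal and Cat classes, showing which operations the type checker accepts for each kind of bounded wildcard:

import java.util.ArrayList;
import java.util.List;

class Animal {}
class Cat extends Animal {}

public class WildcardDemo {
    public static void main(String[] args) {
        List<Cat> cats = new ArrayList<>();
        cats.add(new Cat());

        // Upper-bounded wildcard: a covariant, read-only view of the list.
        List<? extends Animal> producer = cats;
        Animal a = producer.get(0);      // OK: every element is at least an Animal
        // producer.add(new Animal());   // compile error: the element type is some unknown subtype of Animal

        // Lower-bounded wildcard: a contravariant, write-oriented view.
        List<? super Cat> consumer = new ArrayList<Animal>();
        consumer.add(new Cat());         // OK: the list accepts at least Cats
        Object o = consumer.get(0);      // reads are only known to yield Object
    }
}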
In the common case of a generic data structure such as List, covariant parameters are used for methods getting data out of the structure, and contravariant parameters for methods putting data into the structure. The mnemonic Producer Extends, Consumer Super (PECS), from the book Effective Java by Joshua Bloch, gives an easy way to remember when to use covariance and contravariance (a sketch of the rule in code appears below). Wildcards are flexible, but there is a drawback. While use-site variance means that API designers need not consider the variance of type parameters to interfaces, they must often instead use more complicated method signatures. A common example involves the Comparable interface. Suppose we want to write a function that finds the biggest element in a collection. The elements need to implement the compareTo method, so a first try might be

<T extends Comparable<T>> T max(Collection<T> coll);

However, this type is not general enough: one can find the max of a Collection<Integer>, but not a Collection<GregorianCalendar>. The problem is that GregorianCalendar does not implement Comparable<GregorianCalendar>, but instead the (better) interface Comparable<Calendar>. In Java, unlike in C#, Comparable<Calendar> is not considered a subtype of Comparable<GregorianCalendar>. Instead the type of max has to be modified:

<T extends Comparable<? super T>> T max(Collection<T> coll);

The bounded wildcard ? super T conveys the information that max calls only contravariant methods from the Comparable interface. This particular example is frustrating because all the methods in Comparable are contravariant, so that condition is trivially true. A declaration-site system could handle this example with less clutter by annotating only the definition of Comparable. The max method can be changed even further by using an upper bounded wildcard for the method parameter:

<T extends Comparable<? super T>> T max(Collection<? extends T> coll);

Comparing declaration-site and use-site annotations Use-site variance annotations provide additional flexibility, allowing more programs to type check. However, they have been criticized for the complexity they add to the language, leading to complicated type signatures and error messages. One way to assess whether the extra flexibility is useful is to see if it is used in existing programs. A survey of a large set of Java libraries found that 39% of wildcard annotations could have been directly replaced by declaration-site annotations. Thus the remaining 61% is an indication of places where Java benefits from having the use-site system available. In a declaration-site language, libraries must either expose less variance, or define more interfaces. For example, the Scala Collections library defines three separate interfaces for classes which employ covariance: a covariant base interface containing common methods, an invariant mutable version which adds side-effecting methods, and a covariant immutable version which may specialize the inherited implementations to exploit structural sharing. This design works well with declaration-site annotations, but the large number of interfaces carries a complexity cost for clients of the library. And modifying the library interface may not be an option; in particular, one goal when adding generics to Java was to maintain binary backwards compatibility. On the other hand, Java wildcards are themselves complex. In a conference presentation Joshua Bloch criticized them as being too hard to understand and use, stating that when adding support for closures "we simply cannot afford another wildcards". Early versions of Scala used use-site variance annotations, but programmers found them difficult to use in practice, while declaration-site annotations were found to be very helpful when designing classes.
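Returning to the PECS mnemonic mentioned above, the rule can be rendered directly in code. The following hypothetical helper is modeled on the signature of java.util.Collections.copy: the source list only produces elements, so it takes an extends wildcard, while the destination only consumes them, so it takes a super wildcard.

import java.util.List;

public final class Pecs {
    // Producer Extends, Consumer Super: src only produces Ts, dst only consumes them.
    static <T> void copy(List<? extends T> src, List<? super T> dst) {
        for (T item : src) {
            dst.add(item);
        }
    }
}

Thanks to the two wildcards, a single type parameter T lets callers copy, say, a List<Cat> into a List<Animal>.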
Later versions of Scala added Java-style existential types and wildcards; however, according to Martin Odersky, if there were no need for interoperability with Java then these would probably not have been included. Ross Tate argues that part of the complexity of Java wildcards is due to the decision to encode use-site variance using a form of existential types. The original proposals used special-purpose syntax for variance annotations, writing List<+Animal> instead of Java's more verbose List<? extends Animal>. Since wildcards are a form of existential types they can be used for more things than just variance. A type like List<?> ("a list of unknown type") lets objects be passed to methods or stored in fields without exactly specifying their type parameters. This is particularly valuable for classes, such as Java's Class, where most of the methods do not mention the type parameter. However, type inference for existential types is a difficult problem. For the compiler implementer, Java wildcards raise issues with type checker termination, type argument inference, and ambiguous programs. In general it is undecidable whether a Java program using generics is well-typed or not, so any type checker will have to go into an infinite loop or time out for some programs. For the programmer, it leads to complicated type error messages. Java type checks wildcard types by replacing the wildcards with fresh type variables (so-called capture conversion). This can make error messages harder to read, because they refer to type variables that the programmer did not directly write. For example, trying to add a Cat to a List<? extends Animal> will give an error like

method List.add (capture#1) is not applicable
(actual argument Cat cannot be converted to capture#1 by method invocation conversion)
where capture#1 is a fresh type-variable:
capture#1 extends Animal from capture of ? extends Animal

Since both declaration-site and use-site annotations can be useful, some type systems provide both. Etymology These terms come from the notion of covariant and contravariant functors in category theory. Consider the category whose objects are types and whose morphisms represent the subtype relationship ≤. (This is an example of how any partially ordered set can be considered as a category.) Then for example the function type constructor takes two types p and r and creates a new type p → r; so it takes a pair of objects of the category to an object of the category. By the subtyping rule for function types this operation reverses ≤ for the first parameter and preserves it for the second, so it is a contravariant functor in the first parameter and a covariant functor in the second. See also Polymorphism (computer science) Inheritance (object-oriented programming) Liskov substitution principle References External links Fabulous Adventures in Coding: An article series about implementation concerns surrounding co/contravariance in C# Contra Vs Co Variance (note this article is not updated about C++) Closures for the Java 7 Programming Language (v0.5) The theory behind covariance and contravariance in C# 4 Object-oriented programming Type theory Polymorphism (computer science)
Covariance and contravariance (computer science)
Mathematics
7,760
1,729,542
https://en.wikipedia.org/wiki/Neural%20network%20%28biology%29
A neural network, also called a neuronal network, is an interconnected population of neurons (typically containing multiple neural circuits). Biological neural networks are studied to understand the organization and functioning of nervous systems. Closely related are artificial neural networks, machine learning models inspired by biological neural networks. They consist of artificial neurons, which are mathematical functions that are designed to be analogous to the mechanisms used by neural circuits. Overview A biological neural network is composed of a group of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from neurotransmitter diffusion. Artificial intelligence, cognitive modelling, and artificial neural networks are information processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots. Neural network theory has served to identify better how the neurons in the brain function and provide the basis for efforts to create artificial intelligence. History The preliminary theoretical base for contemporary neural networks was independently proposed by Alexander Bain (1873) and William James (1890). In their work, both thoughts and body activity resulted from interactions among neurons within the brain. For Bain, every activity led to the firing of a certain set of neurons. When activities were repeated, the connections between those neurons strengthened. According to his theory, this repetition was what led to the formation of memory. The general scientific community at the time was skeptical of Bain's theory because it required what appeared to be an inordinate number of neural connections within the brain. It is now apparent that the brain is exceedingly complex and that the same brain “wiring” can handle multiple problems and inputs. James' theory was similar to Bain's; however, he suggested that memories and actions resulted from electrical currents flowing among the neurons in the brain. His model, by focusing on the flow of electrical currents, did not require individual neural connections for each memory or action. C. S. Sherrington (1898) conducted experiments to test James' theory. He ran electrical currents down the spinal cords of rats. However, instead of demonstrating an increase in electrical current as projected by James, Sherrington found that the electrical current strength decreased as the testing continued over time. Importantly, this work led to the discovery of the concept of habituation. McCulloch and Pitts (1943) also created a computational model for neural networks based on mathematics and algorithms. They called this model threshold logic. These early models paved the way for neural network research to split into two distinct approaches. 
One approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence. The parallel distributed processing of the mid-1980s became popular under the name connectionism. The text by Rumelhart and McClelland (1986) provided a full exposition on the use of connectionism in computers to simulate neural processes. Artificial neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function. Neuroscience Theoretical and computational neuroscience is the field concerned with the analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling. The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (neural network models) and theory (statistical learning theory and information theory). Types of models Many models are used, defined at different levels of abstraction and modeling different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of the dynamics of neural circuitry arising from interactions between individual neurons, to models of behaviour arising from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level. Connectivity In August 2020 scientists reported that bi-directional connections, or appropriately added feedback connections, can accelerate and improve communication between, and within, modular neural networks of the brain's cerebral cortex, and can lower the threshold for their successful communication. They showed that adding feedback connections between a resonance pair can support successful propagation of a single pulse packet throughout the entire network. The connectivity of a neural network stems from its biological structures and is usually challenging to map out experimentally. Scientists have used a variety of statistical tools to infer the connectivity of a network based on observed neuronal activities, i.e., spike trains. Recent research has shown that statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances, providing deeper insights into the structure of neural circuits and their computational properties. Recent improvements While early research was concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin on behaviour and learning. Biophysical models, such as BCM theory, have been important in understanding mechanisms for synaptic plasticity, and have had applications in both computer science and neuroscience.
See also Adaptive resonance theory Biological cybernetics Cognitive architecture Cognitive science Connectomics Cultured neuronal networks Parallel constraint satisfaction processes Wood Wide Web References Biological Computational neuroscience Neuroanatomy
Neural network (biology)
Engineering
1,253
47,914,580
https://en.wikipedia.org/wiki/Cladosporium%20cladosporioides
Cladosporium cladosporioides is a darkly pigmented mold that occurs world-wide on a wide range of materials both outdoors and indoors. It is known for its role in the decomposition of organic matter and its presence in indoor and outdoor environments. This species is also notable for its potential impact on human health, particularly in individuals with respiratory conditions. It is one of the most common fungi in outdoor air, where its spores are important in seasonal allergic disease. While this species rarely causes invasive disease in animals, it is an important agent of plant disease, attacking both the leaves and fruits of many plants. This species produces asexual spores in delicate, branched chains that break apart readily and drift in the air. It is able to grow under low water conditions and at very low temperatures. History and classification Georg Fresenius first described Cladosporium cladosporioides in 1850, classifying it in the genus Penicillium as Penicillium cladosporioides. In 1880 Pier Andrea Saccardo renamed the species Hormodendrum cladosporioides. Other early names for this taxon included Cladosporium hypophyllum, Monilia humicola and Cladosporium pisicola. In 1952, Gerardus Albertus de Vries transferred the species to the genus Cladosporium, where it remains today. Growth and morphology Cladosporium cladosporioides reproduces asexually and, because no teleomorph has been identified, is considered an exclusively anamorphic species. Colonies are olive-green to olive-brown and appear velvety or powdery. On a potato dextrose agar (PDA) medium, colonies are olive-grey to dull green, velvety and tufted. The edges of the colony can be olive-grey to white, and feathery. The colonies are diffuse and the mycelia form mats and rarely grow upwards from the surface of the colony. On a malt extract agar (MEA) medium, colonies are olive-grey to olive or whitish due to the mycelia growing upwards, and seem velvety to tufted with olive-black or olive-brown edges. The mycelia can be diffuse to tufted and sometimes cover the whole colony. The mycelia appear felt-like, grow flat, and can be effused and furrowed. On oatmeal agar (OA) medium, colonies are olive-grey and there can be a gradient toward the edges of the colony from olive green to dull green, then olive-grey. The upward growth of mycelia can be sparse to abundant and tufted. The mycelia can be loose to dense and tend to grow flat. Cladosporium cladosporioides has sparse, unbranched or rarely branched, darkly-pigmented hyphae that are typically not constricted at the septa. Mature conidiophores are treelike and comprise many long, branched chains of conidia. Cladosporium cladosporioides produces brown to olive-brown coloured, solitary conidiophores that branch irregularly, forming many ramifications. Each branch tends to be 40–300 μm in length (exceptionally up to 350 μm) and 2–6 μm in width. The conidiophores are thin-walled and cylindrical and are formed at the end of ascending hyphae. The conidia are small, single-celled, lemon-shaped and smooth-walled. They form long, fragile chains up to 10 conidia in length, with distinctive darkened connective tissue between each spore. Physiology Cladosporium cladosporioides produces antifungal metabolites targeted toward plant pathogens. Compounds isolated from C. cladosporioides, including cladosporin and 5′-hydroxyasperentin, as well as a compound (5′,6-diacetyl cladosporin) synthesized from 5′-hydroxyasperentin, have antifungal properties.
As these compounds are effective against different types of fungi, C. cladosporioides is an important species for the potential treatment and control of various plant-infecting fungi. The inoculation of Venturia inaequalis, a fungus that causes apple scab on apple tree leaves, with C. cladosporioides led to decreased conidial production in V. inaequalis. As this effect is seen on both younger and older leaves, C. cladosporioides is effective in preventing and controlling infections of V. inaequalis in apple trees. This species also produces calphostins A, B, C, D and I, which are protein kinase C inhibitors. These calphostins have cytotoxic activity due to their ability to inhibit protein kinase C activity. Ecology Cladosporium cladosporioides is a common saprotroph occurring as a secondary infection on decaying or necrotic parts of plants. This fungus is xerophilic, growing well in low-water-activity environments (e.g., aW = 0.86–0.88). This species is also psychrophilic and can grow at very low temperatures. Cladosporium cladosporioides occurs in outdoor environments year-round, with peak airborne spore concentrations in summer, when levels can range from 2,000 up to 50,000 spores per cubic meter of air. It is among the most common of all outdoor airborne fungi, colonizing plant materials and soil. It has been found in a number of crops, such as wheat, grapes, strawberries, peas and spinach. This species also grows in indoor environments, where it is often associated with the growth of fungi including species of Penicillium, Aspergillus versicolor and Wallemia sebi. Cladosporium cladosporioides grows well on wet building materials, paint, wallpaper and textiles, as well as on paper, pulp, frescos, tiles, wet window sills and other indoor substrates including salty and sugary foods. Due to its tolerance of lower temperatures, C. cladosporioides can grow on refrigerated foods and colonize moist surfaces in refrigerators. Role in disease Plants Cladosporium cladosporioides and C. herbarum cause Cladosporium rot of red wine grapevines. The incidence of infection is much higher when the harvest of the grapes is delayed. Over 50% of grape clusters can be affected at harvest, which greatly reduces the yield and affects the wine quality. This delay is required in order for the phenolic compounds in the grapes to ripen and contribute to the aroma and flavour development in wine of optimum quality. Symptoms of Cladosporium rot are typically observed on mature grapes and are characterized by dehydration, a small, firm area of decay, and a layer of olive-green mould. Although leaf removal reduces the incidence of infection by many species of fungi, it leads to an increase in C. cladosporioides populations on grape clusters and an increase in rotten grapes at harvest. Removal of diseased leaves is therefore contraindicated in the control of this fungus. The only recommendation made to avoid severe Cladosporium infections of grape clusters is to limit periods of continuous exposure to sunlight. This species has also been involved in the rotting of strawberry blossoms. Infection of strawberry blossoms by C. cladosporioides has been associated with simultaneous infections by Xanthomonas fragariae (in California), and more recently C. tenuissimum (in Korea). C. cladosporioides infects the anthers, sepals, petals and pistils of the strawberry blossom and is typically observed on older flowers with dehisced anthers and signs of senescence.
From 1997 to 2000, there was a higher proportion of misshapen fruits due to C. cladosporioides infection, and their culling affected the strawberry industry in California. Infection leads to necrosis of the entire flower, or parts of it, as well as to the production of small and misshapen fruits and green-grey sporulation on the stigma. A higher occurrence of infection is observed in strawberry plants cultivated outdoors than in those cultivated in a greenhouse. Animals Cladosporium cladosporioides rarely causes infections in humans, although superficial infections have been reported. It can occasionally cause pulmonary and cutaneous phaeohyphomycosis, and it has been isolated from the cerebrospinal fluid of an immunocompromised patient. This species can trigger asthmatic reactions due to the presence of allergens and beta-glucans on its spore surface. In mice, living C. cladosporioides spores have induced hyperresponsiveness of the lungs, as well as an increase in eosinophils, which are white blood cells typically associated with asthmatic and allergic reactions. Cladosporium cladosporioides can also induce respiratory inflammation due to the up-regulation of macrophage inflammatory protein (MIP)-2 and keratinocyte chemoattractant (KC), which are cytokines involved in the mediation of inflammation. A case of mycotic encephalitis and nephritis due to C. cladosporioides has been described in a dog, resulting in altered behaviour, depression, abnormal reflexes in all four limbs and loss of vision. Post-mortem examination indicated posterior brainstem and cerebellar lesions, confirming the causative involvement of the agent. References cladosporioides Fungal plant pathogens and diseases Wheat diseases Fungi described in 1880 Fungus species
Cladosporium cladosporioides
Biology
2,027
62,467,440
https://en.wikipedia.org/wiki/Mathematica%20Applicanda
Mathematica Applicanda is a peer-reviewed scientific journal covering applied mathematics. It was established in 1973 by the Polish Mathematical Society as Series III of the Annales Societatis Mathematicae Polonae, under the name Matematyka Stosowana (ISSN 0137-2890). The first editor-in-chief was Marceli Stark. In 1999 the journal was renamed Matematyka Stosowana-Matematyka dla Społeczeństwa (ISSN 1730-2668). Since 2012 the journal has been published primarily in electronic form under the name Mathematica Applicanda, with ISSN 2299-4009. Former Editors-in-chief Marceli Stark (volume I[1973]) Robert Bartoszyński (volumes II[1974] - XXIX[1987]) Andrzej Kiełbasiński (volumes XXX[1987] - XLI[1999]) Witold Kosiński (volumes XLII[2000] - XLIV[2011]) Krzysztof J. Szajowski (volumes XL[2012] - XLVII[2019]) Krzysztof Burnecki (volume LXVIII[2020]) Jacek Miękisz (volumes XLIX[2021] - L[2022]) Agnieszka Wyłomańska (volume LI[2023- ]) Abstracting and indexing The journal is abstracted and indexed in MathSciNet Zentralblatt MATH CEON The Library of Science (Biblioteka Nauki) BazTech Scopus Index Copernicus See also List of mathematical physics journals List of probability journals List of statistics journals References External links Applied mathematics journals Academic journals established in 1973 English-language journals Biannual journals
Mathematica Applicanda
Mathematics
373
54,352,124
https://en.wikipedia.org/wiki/SSU%20rRNA
Small subunit ribosomal ribonucleic acid (SSU rRNA) is the smaller of the two major RNA components of the ribosome. Associated with a number of ribosomal proteins, the SSU rRNA forms the small subunit of the ribosome. It is encoded by SSU-rDNA. Characteristics Use in phylogenetics SSU rRNA sequences are widely used for determining evolutionary relationships among organisms, since they are of ancient origin and are found in all known forms of life. See also LSU rRNA: the large subunit ribosomal ribonucleic acid. References Ribosomal RNA Protein biosynthesis
SSU rRNA
Chemistry
130
71,344,906
https://en.wikipedia.org/wiki/Interactive%20application%20security%20testing
Interactive application security testing (abbreviated as IAST) is a security testing method that detects software vulnerabilities by interaction with the program coupled with observation and sensors. The tool was launched by several application security companies. It is distinct from static application security testing, which does not interact with the program, and dynamic application security testing, which considers the program as a black box. It may be considered a mix of both. References Security testing
Interactive application security testing
Technology
90
48,037,177
https://en.wikipedia.org/wiki/Communications%20in%20Algebra
Communications in Algebra is a monthly peer-reviewed scientific journal covering algebra, including commutative algebra, ring theory, module theory, non-associative algebra (including Lie algebras and Jordan algebras), group theory, and algebraic geometry. It was established in 1974 and is published by Taylor & Francis. The editor-in-chief is Scott Chapman (Sam Houston State University). Earl J. Taft (Rutgers University) was the founding editor. Abstracting and indexing The journal is abstracted and indexed in CompuMath Citation Index, Current Contents/Chemical, Earth, and Physical Sciences, Mathematical Reviews, MathSciNet, Science Citation Index Expanded (SCIE), and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7. References External links Academic journals established in 1974 Algebra journals English-language journals Monthly journals Taylor & Francis academic journals
Communications in Algebra
Mathematics
191
21,550,547
https://en.wikipedia.org/wiki/Cricotus
Cricotus is an extinct genus of Embolomeri. It was erected by Cope in 1875 on the basis of fragmentary, not clearly associated remains, including the caudal vertebrae on which the name was established (in fact, a single intercentrum), as well as a few other postcranial bones. It was little used in the subsequent literature, unlike Archeria, which appears to be a junior synonym of Cricotus. However, given that the type species of Cricotus (C. heteroclitus) is a nomen dubium, the name Cricotus is unavailable. This led Holmes to suggest using the name Archeria for this taxon, though he provided no evidence that he made a formal appeal to the International Commission on Zoological Nomenclature to that effect (and presumably did not do so). References Embolomeres Carboniferous sarcopterygians of North America Nomina dubia Taxa named by Edward Drinker Cope
Cricotus
Biology
202
15,685,558
https://en.wikipedia.org/wiki/Mithqal
Mithqāl is a unit of mass, equal to about 4.25 grams, which is mostly used for measuring precious metals, such as gold, and other commodities, like saffron. The name was also applied as an alternative term for the gold dinar, a coin that was used throughout much of the Islamic world from the 8th century onward and survived in parts of Africa until the 19th century. The name of Mozambique's currency since 1980, the metical, is derived from mithqāl. Etymology The word mithqāl (“weight, unit of weight”) comes from the Arabic thaqala, meaning “to weigh”. Other variants of the unit in English include miskal (from Persian or Urdu misqāl), mithkal, mitkal and mitqal. Indian mithqaal In India, the measurement is known as mithqaal. It contains 4 and 3½ (rata'ii; مثقال). It is equivalent to 4.25 grams when measuring gold, or 4.5 grams when measuring commodities. It may be more or less than this. Nikki mithqal A gold coin minted in Nikki, Benin, and known as the mithqal was in wide circulation in West Africa in the 18th century, particularly around the Niger bend. It was usable in the trans-Saharan trade and coexisted with the use of cowries as shell money. Conversion factors Nakhud is a Baháʼí unit of mass used by Bahá'u'lláh. The mithqāl had originally consisted of 24 nakhuds, but in the Bayán, the collective works of the Báb, this was reduced to 19. See also Nisab References Units of mass Measurement Ottoman units of measurement Islamic banking Arabic words and phrases Islamic banking and finance terminology
Mithqal
Physics,Mathematics
395
14,628,551
https://en.wikipedia.org/wiki/Imidazoline%20receptor
Imidazoline receptors are the primary receptors on which clonidine and other imidazolines act. There are three main classes of imidazoline receptor: I1 is involved in inhibition of the sympathetic nervous system to lower blood pressure, I2 has as yet uncertain functions but is implicated in several psychiatric conditions, and I3 regulates insulin secretion. Classes As of 2017, there are three known subtypes of imidazoline receptors: I1, I2, and I3. I1 receptor The I1 receptor appears to be a G protein-coupled receptor that is localized on the plasma membrane. It may be coupled to PLA2 signalling and thus prostaglandin synthesis. In addition, activation inhibits the sodium-hydrogen antiporter and enzymes of catecholamine synthesis are induced, suggesting that the I1 receptor may belong to the neurocytokine receptor family, since its signaling pathways are similar to those of interleukins. It is found in the neurons of the reticular formation, the dorsomedial medulla oblongata, adrenal medulla, renal epithelium, pancreatic islets, platelets, and the prostate. They are notably not expressed in the cerebral cortex or locus coeruleus. Animal research suggests that much of the antihypertensive action of imidazoline drugs such as clonidine is mediated by the I1 receptor. In addition, I1 receptor activation is used in ophthalmology to reduce intraocular pressure. Other putative functions include promoting Na+ excretion and promoting neural activity during hypoxia. I2 receptor The I2 receptor binding sites have been defined as being selective binding sites inhibited by the antagonist idazoxan that are not blocked by catecholamines. The major binding site is located on the outer mitochondrial membrane, and is proposed to be an allosteric site on monoamine oxidase, while another binding site has been found to be brain creatine kinase. Other known binding sites have yet to be characterized . Preliminary research in rodents suggests that I2 receptor agonists may be effective in chronic, but not acute pain, including fibromyalgia. I2 receptor activation has also been shown to decrease body temperature, potentially mediating neuroprotective effects seen in rats. The only known antagonist for the receptor is idazoxan, which is non-selective. I3 receptor The I3 receptor regulates insulin secretion from pancreatic beta cells. It may be associated with ATP-sensitive K+ (KATP) channels. Ligands I1 receptors Agonists AGN 192403 Moxonidine Antagonists I2 receptors Agonists CR-4056 Phenyzoline (2-(2-phenylethyl)-4,5-dihydro-1H-imidazole) RS 45041-90 Tracizoline Antagonists BU 224 (disputed) I3 receptors No selective ligands are known as of 2017. Nonselective ligands Agonists Agmatine (putative endogenous ligand at I1; also interacts with NMDA, nicotinic, and α2 adrenoceptors) Apraclonidine (α2 adrenoceptor agonist) 2-BFI (I2 agonist, NMDA antagonist) Cimetidine (I1 agonist, H2 receptor antagonist) Clonidine (I1 agonist, α2 adrenoceptor agonist) LNP-509 LNP-911 7-Me-marsanidine Dimethyltryptamine mCPP Moxonidine Oxymetazoline (I1 agonist, α1 adrenoceptor agonist, α2 partial agonist) Rilmenidine S-23515 S-23757 Tizanidine Antagonists BU99006 (alkylating agent, inactivates I2 receptors) Efaroxan (I1, α2 adrenoceptor antagonist) Idazoxan (I1, I2 antagonist, α2 adrenoceptor antagonist) See also Imidazoline References External links Receptors
Imidazoline receptor
Chemistry
881
37,104,805
https://en.wikipedia.org/wiki/Lepiota%20babruzalka
Lepiota babruzalka is an agaric mushroom of the genus Lepiota in the order Agaricales. Described as new to science in 2009, it is found in Kerala State, India, where it grows on the ground in litterfall around bamboo stems. Fruit bodies have caps that measure up to in diameter, and are covered with reddish-brown scales. The cap is supported by a long and slender stem up to long and thick. One of the distinguishing microscopic features of the species is the variably shaped cystidia found on the edges of the gills. Taxonomy The species was first described by Arun Kumar Thirovoth Kottuvetta and P. Manimohan in the journal Mycotaxon in 2009, in a survey of the genus Lepiota in Kerala State in southern India. The holotype collection was made in 2004 in Chelavur, located in the Kozhikode District; it is now kept in the herbarium of Kew Gardens. The specific epithet babruzalka derives from the Sanskrit word for "brown-scaled". Description The fruit bodies of Lepiota babruzalka have caps that start out roughly spherical, and as they expand become broadly convex, and eventually flat, with a blunt umbo. The cap attains a diameter of . Its whitish surface is covered with small, reddish-brown, pressed-down scales that are more numerous in the center. The margin is initially curved inward, but straightens out in age, and retains hanging remnants of the partial veil. The gills are white, and free from attachment to the stem. They are crowded together, with two or three tiers of interspersed lamellulae (short gills that do not extend fully from the cap edge to the stem). Viewed with a hand lens, the edges of the gills appear to be fringed. The stem is cylindrical with a bulbous base, initially solid before becoming hollow, and measures long by 1–1.5 mm thick. The stem surface is whitish, but will stain a light brown color if handled. In young fruit bodies, the stems have a whitish, membranous ring on the upper half, but the ring does not last long before disintegrating. The flesh is thin (up to 1 mm), whitish, and lacks any appreciable odor. Lepiota babruzalka produces a white spore print. Spores are roughly elliptical to somewhat cylindrical, hyaline (translucent), and measure 5.5–10.5 by 3.5–4.5 μm. They are thick-walled and contain a refractive oil droplet. The basidia (spore-bearing cells) are club-shaped, hyaline, and are one- to four-spored with sterigmata up to 8 μm long; the dimensions of the basidia are 15–20 by 7–8 μm. Cheilocystidia (cystidia on the edge of the gill) are plentiful, and can assume a number of shapes, including cylindrical to club-shaped, utriform (like a wineskin bottle), to ventricose-rostrate (where the basal and middle portions are swollen and the apex extends into a beak-like protrusion). The cheilocystidia are thin-walled, and measure 13–32 by 7–12 μm; there are no cystidia on the gill faces (pleurocystidia). The gill tissue is made of thin-walled hyphae containing a septum, which are hyaline to pale yellow, and measure 3–15 μm wide. The cap tissue comprises interwoven, inflated hyphae with widths between 2 and 25 μm. Neither the gill tissue nor the cap tissue show any color reaction when stained with Melzer's reagent. Clamp connections are rare in the hyphae of Lepiota babruzalka. Similar species According to the authors, the only Lepiota bearing a close resemblance to L. babruzalka is L. roseoalba, an edible mushroom described by Paul Christoph Hennings in 1891. Found in Africa and Iran, L. 
roseoalba lacks the reddish-brown scales on the cap, has radial grooves on the cap margin, and its stem is not as slender as those of L. babruzalka. Habitat and distribution Fruit bodies of Lepiota babruzalka grow singly or scattered on the ground among decaying leaf litter around the base of bamboo stands. The species has been documented only from Chelavur and Nilambur in the Kozhikode and Malappuram Districts of Kerala State. As of 2009, there are 22 Lepiota taxa (21 species and 1 variety) known from Kerala, which is recognized as a biodiversity hotspot. See also List of Lepiota species References External links babruka Fungi of India Fungi described in 2009 Fungus species
Lepiota babruzalka
Biology
1,037
3,946,504
https://en.wikipedia.org/wiki/Break%20junction
A break junction is an electronic device which consists of two metal wires separated by a very thin gap, on the order of the inter-atomic spacing (less than a nanometer). The gap can be formed by physically pulling the wires apart or through chemical etching or electromigration. As the wire breaks, the separation between the electrodes can be indirectly controlled by monitoring the electrical resistance of the junction. After the gap is formed, its width can often be controlled by bending the substrate that the metal contacts lie on. The gap can be controlled to a precision of picometers. A typical conductance-versus-time trace during the breaking process (conductance is simply current divided by applied voltage bias) shows two regimes. First is a regime where the break junction comprises a quantum point contact. In this regime conductance decreases in steps equal to the conductance quantum G0 = 2e²/h, which is expressed through the elementary charge e and the Planck constant h. The conductance quantum has a value of about 7.75×10⁻⁵ siemens, corresponding to a resistance of roughly 12.9 kΩ. These step decreases are interpreted as the result of a decrease, as the electrodes are pulled apart, in the number of single-atom-wide metal strands bridging the two electrodes, each strand having a conductance equal to the conductance quantum. As the wire is pulled, the neck becomes thinner, with fewer atomic strands in it. Each time the neck reconfigures, which happens abruptly, a step-like decrease of the conductance can be observed. This picture, inferred from the current measurement, has been confirmed by "in-situ" TEM imaging of the breaking process combined with current measurement. In a second regime, when the wire is pulled further apart, the conductance collapses to values less than the conductance quantum. This is known as the tunneling regime, where electrons tunnel through the vacuum gap between the electrodes. Use Break junctions are used to make electrical contacts to study single molecules. References Notes Electricity Molecular electronics Nanoelectronics
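A quick numerical check of the constants quoted above, written as a minimal sketch (variable names and output format are illustrative assumptions, not from the article):

public class ConductanceQuantum {
    public static void main(String[] args) {
        final double e = 1.602176634e-19; // elementary charge in coulombs (exact since the 2019 SI redefinition)
        final double h = 6.62607015e-34;  // Planck constant in joule-seconds (exact since the 2019 SI redefinition)

        double g0 = 2 * e * e / h;        // conductance quantum G0 = 2e^2/h
        System.out.printf("G0   = %.4e S%n", g0);        // prints ~7.7481e-05 S
        System.out.printf("1/G0 = %.1f ohm%n", 1 / g0);  // prints ~12906.4 ohm, i.e. roughly 12.9 kOhm
    }
}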
Break junction
Chemistry,Materials_science
412
17,235,045
https://en.wikipedia.org/wiki/Redevelopment%20of%20Norrmalm
The redevelopment of Norrmalm was a major revision of the city plan for the lower Norrmalm district in Stockholm, Sweden, which was principally decided by the Stockholm town council in 1945 and realised during the 1950s, 1960s, and 1970s. The renewal resulted in most of the old Klara quarters being replaced by a modern city centre, in line with rigorist central business district (CBD) planning ideas, while the Stockholm subway was routed through the city centre. As a result of the project, over 750 buildings were demolished to make way for new infrastructure and redevelopment. The renewal of Norrmalm was the largest Swedish urban development project to date and engaged a large part of Sweden's architectural élite. The Norrmalm renewal has been both criticised and admired throughout Sweden and internationally, and is regarded as one of the largest and most distinctive of all city renewals carried out in Europe in the aftermath of World War II, even when compared with cities that were severely damaged during the war. Key politicians behind the massive urban renewal project included Yngve Larsson and Hjalmar Mehr. See also Californication Manhattanization Brusselization References Notes Printed sources Literature History of Stockholm Planned communities in Sweden Architecture in Sweden 20th century in Stockholm Redevelopment
Redevelopment of Norrmalm
Engineering
251
171,905
https://en.wikipedia.org/wiki/Langlands%20program
In mathematics, the Langlands program is a set of conjectures about connections between number theory and geometry. It was proposed by Robert Langlands in the late 1960s. It seeks to relate Galois groups in algebraic number theory to automorphic forms and representation theory of algebraic groups over local fields and adeles. It was described by Edward Frenkel as a "grand unified theory of mathematics." As an explanation to a non-specialist: the program provides constructs for a generalised and somewhat unified framework to characterise the structures that underpin numbers and their abstractions, and thus the invariants on which they are based, through analytical methods. The Langlands program consists of theoretical abstractions which challenge even specialist mathematicians. Basically, the fundamental lemma of the project links the generalized fundamental representation of a finite field with its group extension to the automorphic forms under which it is invariant. This is accomplished through abstraction to higher-dimensional integration, by an equivalence to a certain analytical group as an absolute extension of its algebra. This allows an analytical functional construction of powerful invariance transformations for a number field to its own algebraic structure. The meaning of such a construction is nuanced, but its specific solutions and generalizations are far-reaching. The consequence of a proof of existence for such theoretical objects is an analytical method for constructing the categoric mapping of fundamental structures for virtually any number field. As an analogue to the possible exact distribution of primes, the Langlands program allows a potential general tool for the resolution of invariance at the level of generalized algebraic structures. This in turn permits a somewhat unified analysis of arithmetic objects through their automorphic functions. In short, the Langlands view allows a general analysis of the structuring of number-abstractions. This description is at once a reduction and an over-generalization of the program's proper theorems, although these mathematical concepts illustrate its basic ideas. Background The Langlands program is built on existing ideas: the philosophy of cusp forms formulated a few years earlier by Harish-Chandra and Gelfand, Harish-Chandra's work on semisimple Lie groups, and in technical terms the trace formula of Selberg and others. What was new in Langlands' work, besides technical depth, was the proposed connection to number theory, together with the rich organisational structure it hypothesised (so-called functoriality). Harish-Chandra's work exploited the principle that what can be done for one semisimple (or reductive) Lie group can be done for all. Therefore, once the role of some low-dimensional Lie groups such as GL(2) in the theory of modular forms had been recognised, and with hindsight GL(1) in class field theory, the way was open to speculation about GL(n) for general n > 2. The cusp form idea came out of the cusps on modular curves but also had a meaning visible in spectral theory as "discrete spectrum", contrasted with the "continuous spectrum" from Eisenstein series. It becomes much more technical for bigger Lie groups, because the parabolic subgroups are more numerous. In all these approaches technical methods were available, often inductive in nature and based on Levi decompositions amongst other matters, but the field remained demanding. From the perspective of modular forms, examples such as Hilbert modular forms, Siegel modular forms, and theta-series had been developed.
Objects The conjectures have evolved since Langlands first stated them. The Langlands conjectures can be stated for many different groups over many different fields, and each field offers several versions of the conjectures. Some versions are vague, or depend on objects such as Langlands groups, whose existence is unproven, or on the L-group, which has several non-equivalent definitions. Objects for which Langlands conjectures can be stated: Representations of reductive groups over local fields (with different subcases corresponding to archimedean local fields, p-adic local fields, and completions of function fields) Automorphic forms on reductive groups over global fields (with subcases corresponding to number fields or function fields). Analogues for finite fields. More general fields, such as function fields over the complex numbers. Conjectures The conjectures can be stated variously in ways that are closely related but not obviously equivalent. Reciprocity The starting point of the program was Emil Artin's reciprocity law, which generalizes quadratic reciprocity. The Artin reciprocity law applies to a Galois extension of an algebraic number field whose Galois group is abelian; it assigns L-functions to the one-dimensional representations of this Galois group, and states that these L-functions are identical to certain Dirichlet L-series or more general series (that is, certain analogues of the Riemann zeta function) constructed from Hecke characters. The precise correspondence between these different kinds of L-functions constitutes Artin's reciprocity law. For non-abelian Galois groups and higher-dimensional representations of them, L-functions can be defined in a natural way: Artin L-functions. Langlands' insight was to find the proper generalization of Dirichlet L-functions, which would allow the formulation of Artin's statement in Langlands' more general setting. Hecke had earlier related Dirichlet L-functions with automorphic forms (holomorphic functions on the upper half plane of the complex plane that satisfy certain functional equations). Langlands then generalized these to automorphic cuspidal representations, which are certain infinite-dimensional irreducible representations of the general linear group GL(n) over the adele ring of Q (the rational numbers). (This ring tracks all the completions of Q simultaneously; see p-adic numbers.) Langlands attached automorphic L-functions to these automorphic representations, and conjectured that every Artin L-function arising from a finite-dimensional representation of the Galois group of a number field is equal to one arising from an automorphic cuspidal representation. This is known as his reciprocity conjecture. Roughly speaking, this conjecture gives a correspondence between automorphic representations of a reductive group and homomorphisms from a Langlands group to an L-group. This offers numerous variations, in part because the definitions of Langlands group and L-group are not fixed. Over local fields this is expected to give a parameterization of L-packets of admissible irreducible representations of a reductive group over the local field. For example, over the real numbers, this correspondence is the Langlands classification of representations of real reductive groups. Over global fields, it should give a parameterization of automorphic forms.
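For orientation, the analytic objects at the GL(1) level are classical. A Dirichlet character χ gives rise to the Dirichlet L-series, whose standard definition (stated here for illustration; it is not spelled out in the article) is

L(s,\chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^{s}} = \prod_{p\ \mathrm{prime}} \left(1 - \chi(p)\, p^{-s}\right)^{-1}, \qquad \operatorname{Re}(s) > 1.

Artin's reciprocity law identifies the L-function of a one-dimensional Galois representation with such a series; the reciprocity conjecture asks for an analogous identification in higher dimensions, with automorphic L-functions on GL(n) playing the role of the Dirichlet series.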
Functoriality The functoriality conjecture states that a suitable homomorphism of L-groups is expected to give a correspondence between automorphic forms (in the global case) or representations (in the local case). Roughly speaking, the Langlands reciprocity conjecture is the special case of the functoriality conjecture when one of the reductive groups is trivial. Generalized functoriality Langlands generalized the idea of functoriality: instead of using the general linear group GL(n), other connected reductive groups can be used. Furthermore, given such a group G, Langlands constructs the Langlands dual group LG, and then, for every automorphic cuspidal representation of G and every finite-dimensional representation of LG, he defines an L-function. One of his conjectures states that these L-functions satisfy a certain functional equation generalizing those of other known L-functions. He then goes on to formulate a very general "Functoriality Principle". Given two reductive groups and a (well-behaved) morphism between their corresponding L-groups, this conjecture relates their automorphic representations in a way that is compatible with their L-functions. This functoriality conjecture implies all the other conjectures presented so far. It is of the nature of an induced representation construction—what in the more traditional theory of automorphic forms had been called a 'lifting', known in special cases, and so is covariant (whereas a restricted representation is contravariant). Attempts to specify a direct construction have only produced some conditional results. All these conjectures can be formulated for more general fields in place of Q: algebraic number fields (the original and most important case), local fields, and function fields (finite extensions of Fp(t), where p is a prime and Fp(t) is the field of rational functions over the finite field with p elements). Geometric conjectures The geometric Langlands program, suggested by Gérard Laumon following ideas of Vladimir Drinfeld, arises from a geometric reformulation of the usual Langlands program that attempts to relate more than just irreducible representations. In simple cases, it relates ℓ-adic representations of the étale fundamental group of an algebraic curve to objects of the derived category of ℓ-adic sheaves on the moduli stack of vector bundles over the curve. A 9-person collaborative project led by Dennis Gaitsgory announced a proof of the (categorical, unramified) geometric Langlands conjecture, leveraging Hecke eigensheaves as part of the proof. Status The Langlands conjectures for GL(1, K) follow from (and are essentially equivalent to) class field theory. Langlands proved the Langlands conjectures for groups over the archimedean local fields R (the real numbers) and C (the complex numbers) by giving the Langlands classification of their irreducible representations. Lusztig's classification of the irreducible representations of groups of Lie type over finite fields can be considered an analogue of the Langlands conjectures for finite fields. Andrew Wiles' proof of modularity of semistable elliptic curves over the rationals can be viewed as an instance of the Langlands reciprocity conjecture, since the main idea is to relate the Galois representations arising from elliptic curves to modular forms. Although Wiles' results have been substantially generalized, in many different directions, the full Langlands conjecture for GL(2, Q) remains unproved.
In 1998, Laurent Lafforgue proved Lafforgue's theorem, verifying the Langlands conjectures for the general linear group GL(n, K) for function fields K. This work continued earlier investigations by Drinfeld, who proved the case GL(2, K) in the 1980s. In 2018, Vincent Lafforgue established the global Langlands correspondence (the direction from automorphic forms to Galois representations) for connected reductive groups over global function fields. Local Langlands conjectures Kutzko (1980) proved the local Langlands conjectures for the general linear group GL(2, K) over local fields. Laumon, Rapoport, and Stuhler (1993) proved the local Langlands conjectures for the general linear group GL(n, K) for positive-characteristic local fields K. Their proof uses a global argument. Harris and Taylor (2001) proved the local Langlands conjectures for the general linear group GL(n, K) for characteristic-0 local fields K. Henniart (2000) gave another proof. Both proofs use a global argument. Scholze (2013) gave another proof. Fundamental lemma In 2008, Ngô Bảo Châu proved the "fundamental lemma", which was originally conjectured by Langlands and Shelstad in 1983 and is required in the proof of some important conjectures in the Langlands program. Implications To a lay reader or even a nonspecialist mathematician, abstractions within the Langlands program can be somewhat impenetrable. However, there are some strong and clear implications for proof or disproof of the fundamental Langlands conjectures. The program posits a powerful connection between analytic number theory and generalizations of algebraic geometry: functoriality between algebraic representations of number fields and their associated analytic constructions would yield functional tools for quantifying the distribution of primes exactly, and with them a capacity for classifying Diophantine equations and further abstractions of algebraic functions. Furthermore, if the reciprocity of such generalized algebras for the posited objects exists, and if their analytic functions can be shown to be well-defined, some very deep results in mathematics could come within reach of proof. Examples include: rational solutions of elliptic curves, the topological structure of algebraic varieties, and the famous Riemann hypothesis. Such proofs would be expected to use abstract solutions in objects of generalized analytic series, each of which relates to the invariance within structures of number fields. Additionally, some connections between the Langlands program and M-theory have been posited, as their dualities connect in nontrivial ways, providing potential exact solutions in superstring theory (as was similarly done in group theory through monstrous moonshine). Simply put, the Langlands program implies a deep and powerful framework of solutions touching the most fundamental areas of mathematics: high-order generalizations connecting exact solutions of algebraic equations with analytic functions embedded in geometric forms. It would allow a unification of many distant mathematical fields into a formalism of powerful analytic methods. See also Jacquet–Langlands correspondence Erlangen program Notes References External links The work of Robert Langlands Zeta and L-functions Representation theory of Lie groups Automorphic forms Conjectures History of mathematics
Langlands program
Mathematics
2,745
48,708,320
https://en.wikipedia.org/wiki/Caroxylon%20imbricatum
Caroxylon imbricatum, synonym Salsola imbricata, is a small species of shrub in the family Amaranthaceae. It grows in deserts and arid regions of North Africa, the Arabian Peninsula and southwestern Asia. Description Caroxylon imbricatum is a small, spreading shrub or sub-shrub growing up to tall. The grey or reddish stems are up to thick, and these and the lower leaves are densely hairy. In the upper parts of the plant the stems are creamy or pale grey and branch frequently, some branches growing vertically while others spread horizontally. Regularly arranged, catkin-like branchlets project from the branches. The leaves are tiny, succulent and linear or narrowly triangular. The inflorescence is spike-like, with bracts similar to the leaves and small flowers with five petals, five stamens and two styles. The fruiting perianth has silky wings. Taxonomy The species was first described in 1775 as Salsola imbricata by the Swedish naturalist Peter Forsskål. In 1849, Alfred Moquin-Tandon transferred it to the genus Caroxylon, making it Caroxylon imbricatum (Forssk.) Moq., but it was later mostly accepted in the genus Salsola. Following a 2007 phylogenetic analysis of the Salsoloideae by Akhani and colleagues, it has been proposed to return Salsola imbricata to Caroxylon, as Caroxylon imbricatum. This placement is accepted by GBIF and Plants of the World Online. Distribution and habitat This plant has a widespread distribution across the desert belt of Saharan Africa, the Arabian Peninsula, southern Iran, Pakistan, Afghanistan and northwestern India. It typically grows in disturbed areas such as runnels, washes, dry wadis, eroded slopes and coastal cliffs. It grows on various soil types and is a ruderal species, colonising fallow land and over-grazed pastures. Ecology Caroxylon imbricatum is a halophytic plant; under conditions of salt stress, the plant increases its water content (becomes more succulent) and decreases the surface area of its leaves. Tests on the germination rates of seeds show that Caroxylon imbricatum sprouts more quickly and consistently at 20 °C than at higher temperatures, and shows higher germination rates at lower salinity levels than at high ones. However, seeds treated at high salinity levels recovered their germination potential after immersion in unsalted water. The species has traditionally been used as a vermifuge and for treating certain skin disorders. Five triterpene glycosides have been isolated from the roots of Caroxylon imbricatum, two of them being new glycoside derivatives not previously known. References Amaranthaceae Desert flora Flora of the Arabian Peninsula Flora of North Africa
Caroxylon imbricatum
Biology
601
58,118,690
https://en.wikipedia.org/wiki/Enterprise%20and%20Data%20Center%20Standard%20Form%20Factor
The Enterprise and Data Center Standard Form Factor (EDSFF), previously known as the Enterprise and Data Center SSD Form Factor, is a family of solid-state drive (SSD) form factors for use in data center servers. Form factors EDSFF was developed by the Small Form Factor Technology Affiliate technical work group, which is itself under the organizational stewardship of the Storage Networking Industry Association. As a family of form factors, it defines specifications for the mechanical dimensions and electrical interfaces devices should have, to ensure compatibility between disparate hardware manufacturers. The standard is meant to replace the U.2 form factor for drives used in data centers. EDSFF provides a pure NVMe over PCIe interface. One common way to provide EDSFF connections on the motherboard is through MCIO connectors. EDSFF SSDs come in four form factors: E1.L (Long) and E1.S (Short), which fit vertically in a 1U server, and E3.L and E3.S, which fit vertically in a 2U server. E3.S is approximately the size of a 2.5-inch (U.2) drive. Samsung's NGSFF (also known as M.3 or NF1) form factor competes with EDSFF. See also M.2 ePCIe CXL References External links Solid State Drive Form Factors on SNIA's website E1 and E3 EDSFF to Take Over from M.2 and 2.5 in SSDs Data centers Computer-related introductions in 2017 Computer connectors
Enterprise and Data Center Standard Form Factor
Technology
320
68,023,606
https://en.wikipedia.org/wiki/Commercetools
Commercetools, stylized as commercetools, is a cloud-based headless commerce platform that provides APIs to power e-commerce sales and similar functions for large businesses. Both the company and the platform are called Commercetools. The company is headquartered in Munich, Germany, with additional offices in Berlin, Germany; Jena, Germany; Amsterdam, Netherlands; London, England; Durham, North Carolina; Zürich, Switzerland; Sydney, Australia; Shanghai, China; and Singapore. Through its investor REWE Group it is associated with providers of omnichannel order-fulfillment software and payment-transaction services. Its clients include Audi, Bang & Olufsen, Carhartt and Nuts.com. Commercetools is a founding member of the MACH Alliance. History Commercetools was founded by Dirk Hoerig and Denis Werner in 2006. It launched its platform in 2013. In 2014, Commercetools was wholly bought by REWE Digital, part of Germany's REWE Group. Hoerig is credited with coining the term "headless commerce". In 2018, Commercetools announced a $17 million investment to support its international expansion. It expanded into the U.K. market in 2019 with the opening of its London office. In 2020, Commercetools established a presence in Australia, with a team in Melbourne and a data center in Sydney. In 2019, Commercetools raised $145 million from venture capital firm Insight Partners. Insight Partners' managing directors Richard Wells and Matt Gatto joined Commercetools' board of directors as part of the deal. At the same time, Commercetools was spun out of REWE. REWE remains a significant shareholder. In January 2021, Commercetools partnered with car manufacturer Volkswagen Group to use the platform for its group brands, including Volkswagen, Bentley, Porsche and Audi. In May 2021, REWE Group announced additional investment into Commercetools to fund growth into the Chinese market. In September 2021, Commercetools raised $140 million in its Series C round, led by venture capital firm Accel, valuing the company at $1.9 billion. In November 2021, Commercetools acquired Frontastic for an undisclosed amount. References E-commerce E-commerce software German companies established in 2006 Content management systems
Commercetools
Technology
460
220,432
https://en.wikipedia.org/wiki/Xylitol
Xylitol is a chemical compound with the formula C5H12O5, or HO(CH2)(CHOH)3(CH2)OH; specifically, one particular stereoisomer with that structural formula. It is a colorless or white crystalline solid. It is classified as a polyalcohol and a sugar alcohol, specifically an alditol. Of the common sugar alcohols, only sorbitol is more soluble in water. The name derives from the Greek xyl[on] 'wood', with the suffix -itol used to denote it being a sugar alcohol. Xylitol is used as a food additive and sugar substitute. Its European Union code number is E967. Replacing sugar with xylitol in food products may promote better dental health, but evidence is lacking on whether xylitol itself prevents dental cavities. In the United States, xylitol is used as a common sugar substitute, and is considered to be safe for humans. Xylitol can be toxic to dogs. History Emil Fischer, a German chemist, and his assistant Rudolf Stahel isolated a new compound from beech wood chips in September 1890 and named it after the Greek word for wood. The following year, the French chemist M. G. Bertrand isolated xylitol syrup by processing wheat and oat straw. Sugar rationing during World War II led to an interest in sugar substitutes. Interest in xylitol and other polyols became intense, leading to their characterization and to manufacturing methods. Structure, production, commerce Xylitol is one of three 5-carbon sugar alcohols. The others are arabitol and ribitol. These three compounds differ in the stereochemistry of the three secondary alcohol groups. Xylitol occurs naturally in small amounts in plums, strawberries, cauliflower, and pumpkin; humans and many other animals make trace amounts during metabolism of carbohydrates. Unlike most sugar alcohols, xylitol is achiral. Most other isomers of pentane-1,2,3,4,5-pentol are chiral, but xylitol has a plane of symmetry. Industrial production starts with lignocellulosic biomass from which xylan is extracted; raw biomass materials include hardwoods, softwoods, and agricultural waste from processing maize, wheat, or rice. The mixture is hydrolyzed with acid to give xylose. The xylose is purified by chromatography. Purified xylose is catalytically hydrogenated into xylitol using a Raney nickel catalyst. The conversion changes the sugar (xylose, an aldehyde) into the primary alcohol, xylitol. Xylitol can also be obtained by industrial fermentation, but this methodology is not as economical as the acid hydrolysis/chromatography route described above. Fermentation is effected by bacteria, fungi, or yeast, especially Candida tropicalis. According to the US Department of Energy, xylitol production by fermentation from discarded biomass is one of the most valuable renewable chemicals for commerce, forecast to be a US $1.41 billion industry by 2025. Uses Xylitol is used as a sugar substitute in such manufactured products as drugs, dietary supplements, confections, toothpaste, and chewing gum, but is not a common household sweetener. Xylitol has negligible effects on blood sugar because its assimilation and metabolism are independent of insulin. It is approved as a food additive and sugar substitute in the United States. Xylitol is also found as an additive to saline solution for nasal irrigation and has been reported to be effective in improving symptoms of chronic sinusitis. Xylitol can also be incorporated into fabrics to produce a cooling fabric. When moisture, such as sweat, comes into contact with the xylitol embedded in the fabric, it produces a cooling sensation.
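The nutritional comparison developed in the sections below (about 2.4 kcal/g for xylitol versus 4.0 kcal/g for sucrose) implies the frequently quoted 40% calorie reduction for a 1:1 substitution by mass, since the two have similar sweetness. A minimal arithmetic check follows; the 100 g batch size is an arbitrary illustrative quantity, not a figure from the article:

```python
# Energy comparison for a 1:1 (by mass) substitution of xylitol for sucrose.
# The kcal/g figures are the ones cited in this article; the 100 g batch
# size is a hypothetical example quantity.
KCAL_PER_G = {"sucrose": 4.0, "xylitol": 2.4}

def calories(grams: float, sweetener: str) -> float:
    """Return food energy in kilocalories for a given mass of sweetener."""
    return grams * KCAL_PER_G[sweetener]

batch = 100.0  # grams of sweetener in a hypothetical recipe
saved = calories(batch, "sucrose") - calories(batch, "xylitol")
print(f"sucrose: {calories(batch, 'sucrose'):.0f} kcal")       # 400 kcal
print(f"xylitol: {calories(batch, 'xylitol'):.0f} kcal")       # 240 kcal
print(f"reduction: {saved / calories(batch, 'sucrose'):.0%}")  # 40%
```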
Food properties Nutrition, taste, and cooking Humans absorb xylitol more slowly than sucrose, and xylitol supplies 40% fewer calories than an equal mass of sucrose. Xylitol has about the same sweetness as sucrose, but is sweeter than similar compounds like sorbitol and mannitol. Xylitol is stable enough to be used in baking, but because xylitol and other polyols are more heat-stable than sugars, they do not caramelise as sugars do. When used in foods, they lower the freezing point of the mixture. Food risks No serious health risk exists in most humans for normal levels of consumption. The European Food Safety Authority has not set a limit on daily intake of xylitol. Due to the adverse laxative effect that all polyols have on the digestive system in high doses, xylitol is banned from soft drinks in the European Union. Similarly, due to a 1985 report by the E.U. Scientific Committee on Food which states that "ingesting 50 g a day of xylitol can cause diarrhea", tabletop sweeteners (as well as other products containing xylitol) are required to display the warning "Excessive consumption may induce laxative effects". Metabolism Xylitol has 2.4 kilocalories of food energy per gram (10 kilojoules per gram) according to U.S. and E.U. food-labeling regulations. The real value can vary, depending on metabolic factors. Primarily, the liver metabolizes absorbed xylitol. The main metabolic route in humans occurs in the cytoplasm, via a nonspecific NAD-dependent dehydrogenase (polyol dehydrogenase), which transforms xylitol to D-xylulose. A specific xylulokinase phosphorylates it to D-xylulose 5-phosphate. This then goes to the pentose phosphate pathway for further processing. About 50% of eaten xylitol is absorbed via the intestines. Of the 50% that is not absorbed by the intestines, in humans, 50–75% of the xylitol remaining in the gut is fermented by gut bacteria into short-chain organic acids and gases, which may produce flatulence. The remnant unabsorbed xylitol that escapes fermentation is excreted unchanged, mostly in feces; less than 2 g of xylitol out of every 100 g ingested is excreted via urine. Xylitol ingestion also increases motilin secretion, which may be related to xylitol's ability to cause diarrhea. The less-digestible but fermentable nature of xylitol also contributes to its constipation-relieving effects. Health effects Dental care A 2015 Cochrane review of ten studies between 1991 and 2014 suggested a positive effect in reducing tooth decay of xylitol-containing fluoride toothpastes when compared to fluoride-only toothpaste, but there was insufficient evidence to determine whether other xylitol-containing products can prevent tooth decay in infants, children or adults. Subsequent reviews support the belief that xylitol can suppress the growth of pathogenic Streptococcus in the mouth, thereby reducing dental caries and gingivitis, although there is concern that swallowed xylitol may cause intestinal dysbiosis. A 2022 review suggested that xylitol-containing chewing gum decreases plaque, but not xylitol-containing candy. Earache In 2011 EFSA "concluded that there was not enough evidence to support" the claim that xylitol-sweetened gum could prevent middle-ear infections, also known as acute otitis media (AOM). A 2016 review indicated that xylitol in chewing gum or a syrup may have a moderate effect in preventing AOM in healthy children.
It may be an alternative to conventional therapies (such as antibiotics) to lower risk of earache in healthy children – reducing risk of occurrence by 25% – although there is no definitive proof that it could be used as a therapy for earache. Diabetes In 2011, EFSA approved a marketing claim that foods or beverages containing xylitol or similar sugar replacers cause lower blood glucose and lower insulin responses compared to sugar-containing foods or drinks. Xylitol products are used as sucrose substitutes for weight control, as xylitol has 40% fewer calories than sucrose (2.4 kcal/g compared to 4.0 kcal/g for sucrose). The glycemic index (GI) of xylitol is only 7% of the GI for glucose. Adverse effects Humans When ingested at high doses, xylitol and other polyols may cause gastrointestinal discomfort, including flatulence, diarrhea, and irritable bowel syndrome (see Metabolism above); some people experience the adverse effects at lower doses. Xylitol has a lower laxation threshold than some sugar alcohols but is more easily tolerated than mannitol and sorbitol. Increased xylitol consumption can increase oxalate, calcium, and phosphate excretion to urine (termed oxaluria, calciuria, and phosphaturia, respectively). These are known risk factors for kidney stone disease, but despite that, xylitol has not been linked to kidney disease in humans. A 2024 study suggests that xylitol is prothrombotic (increases clotting) and is associated with cardiovascular risk when consumed at "typical dietary amounts". Dogs and other animals Xylitol is poisonous to dogs. Ingesting 100 milligrams of xylitol per kilogram of body weight (mg/kg bw) causes dogs to experience a dose-dependent insulin release; depending on the dose, it can result in life-threatening hypoglycemia. Hypoglycemic symptoms of xylitol toxicity may arise as quickly as 30 to 60 minutes after ingestion. Vomiting is a common first symptom, which can be followed by tiredness and ataxia. At doses above 500 mg/kg bw, liver failure is likely and may result in coagulopathies like disseminated intravascular coagulation. Xylitol is safe for rhesus macaques, horses, and rats. A 2018 study suggests that xylitol is safe for cats in doses of up to 1000 mg/kg; however, this study was performed on only six cats and should not be considered definitive. See also Aspartame Birch sap L-Xylulose reductase Xylonic acid References External links Chewing gum E-number additives Excipients Sugar alcohols Sugar substitutes Veterinary toxicology
Xylitol
Chemistry,Environmental_science
2,276
66,565,501
https://en.wikipedia.org/wiki/Ulnaria%20ulna
Ulnaria ulna is a species of diatom belonging to the family Ulnariaceae. Synonym: Synedra ulna (Nitzsch) Ehrenberg 1832 References Diatoms
Ulnaria ulna
Biology
41
1,192,077
https://en.wikipedia.org/wiki/Sth%C3%A8ne
The sthène (symbol sn), sometimes spelled (or misspelled) sthéne or sthene (from the Greek sthenos, 'strength'), is an obsolete unit of force or thrust in the metre–tonne–second system of units (mts), introduced in France in 1919. When proposed by the British Association in 1876, it was called the funal, but the name was changed by 1914. The mts system was abandoned in favour of the mks system and has now been superseded by the International System of Units. Conversions: 1 sthène = 1 kilonewton = 1000 newtons ≈ 101.97 kilograms-force ≈ 224.81 pounds-force. References Obsolete units of measurement Units of force Non-SI metric units Metre–tonne–second system of units
Sthène
Physics,Mathematics
171
24,753,581
https://en.wikipedia.org/wiki/Galactic%20Emission%20Mapping
The Galactic Emission Mapping survey (GEM) is an international project with the goal of making a precise map of the electromagnetic emission of our galaxy at low frequencies (radio and microwaves). Description of the project The GEM radio telescope measures the radio emission of our galaxy in five frequencies, between 408 MHz and 10 GHz, from different places on Earth. These data will be used to calibrate other telescopes, more specifically the Planck Surveyor, and will provide the means to filter the synchrotron radiation and the free–free radiation out of other maps, so that the only radiation left on the map is the cosmic microwave background. The telescope is under construction at Pampilhosa da Serra, Portugal, but the receiver has already made measurements in Cachoeira Paulista (Brazil), in Antarctica, in Bishop (U.S.), in Villa de Leyva (Colombia) and in Tenerife (Canary Islands). The main reflector is a paraboloid 5.5 m in diameter. The telescope was designed and is operated by an international collaboration coordinated by the University of California, Berkeley and by the Lawrence Berkeley National Laboratory, under the guidance of George Smoot, who was awarded the Nobel Prize in Physics in 2006. In Brazil, the radio telescope is under the responsibility of the Instituto Nacional de Pesquisas Espaciais (National Institute of Space Research) and includes the participation of the astrophysics group of the Universidade Federal de Itajubá (Itajubá Federal University). Portugal joined the project in 2005 through the Instituto de Telecomunicações of Aveiro (Telecommunications Institute of Aveiro), which is responsible for the planning and construction of the radio telescope. GEM in Portugal Scanning process In Portugal the radio telescope will perform scans by rotating on its base at a speed greater than one rotation per minute, thereby mitigating the error fluctuations caused by water vapour in the atmosphere. This scanning process will provide an important contribution to the data processing. Telescope A ground shield will be built to avoid signal contamination by thermal radiation from below the horizon, to reflect side lobes to the sky, and to reduce the noise that diffraction from the edges of the reflector directs into the receiver. It will consist of an aluminium grid surrounding the radio telescope, 10 meters wide but only 8 meters high because it will be inclined towards the exterior. The edges will be curved with a radius larger than ¼ of the wavelength so that diffraction is reduced. Localization The antenna is located at Pampilhosa da Serra at an altitude of 800 m above sea level. This location was chosen because it is surrounded by a mountain range which peaks at about 1000 m above sea level, giving natural "shielding" from the electromagnetic noise of the neighboring cities. The same features that made this location a good choice also created additional problems, since many of the necessary infrastructures had to be prepared and installed. The telescope foundations were studied by the Department of Civil Engineering of the Universidade de Aveiro, and the city hall of Pampilhosa da Serra offered 120 tons of concrete. A new connection to the electric grid was made taking into account the size of the transformer, to avoid noise at the observed frequencies; this was necessary because the wavelength of the observed radiation is close to the size of the transformer.
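The transformer-sizing concern can be made concrete with the free-space relation wavelength = c/f. Applying it to the two band edges quoted for the survey gives wavelengths between a few centimetres and roughly three-quarters of a metre; this is a back-of-the-envelope check, not a figure from the project's documentation:

```python
# Free-space wavelength (lambda = c / f) at the survey's quoted band edges.
C = 299_792_458.0  # speed of light, m/s

for label, f_hz in [("408 MHz", 408e6), ("10 GHz", 10e9)]:
    wavelength = C / f_hz
    print(f"{label}: wavelength = {wavelength:.3f} m")
# 408 MHz -> ~0.735 m; 10 GHz -> ~0.030 m, so structures of roughly
# transformer size sit near the long-wavelength end of the band.
```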
A small meteorological station was also installed to measure wind intensity and to help protect the telescope against wind damage. A second telescope, to study solar phenomena, is planned for the same site. References External links Página do Projeto GEM Radiotelescópio GEM no Brasil Radiotelescópio GEM em Portugal Instituto de Telecomunicações Movie about galactic emissions and GEM Astronomical surveys Astronomical observatories in Brazil
Galactic Emission Mapping
Astronomy
769
69,847,260
https://en.wikipedia.org/wiki/Lophodermium%20caricinum
Lophodermium caricinum is a species of fungus in the family Rhytismataceae. It is a decomposer known to live on dead tissues of Carex capillaris, Carex machlowiana, Eriophorum angustifolium and Kobresia myosuroides. References Fungi described in 1861 Leotiomycetes Fungus species
Lophodermium caricinum
Biology
78
72,936,237
https://en.wikipedia.org/wiki/Journal%20of%20Heat%20and%20Mass%20Transfer%20Research
The Journal of Heat and Mass Transfer Research is a semiannual peer-reviewed open-access scientific journal published by Semnan University. The editor-in-chief is Syfolah Saedodin (Semnan University). The journal covers all aspects of research on heat and mass transfer. It was established in 2014 and is indexed and abstracted in Scopus. References External links Energy and fuel journals Biannual journals Creative Commons Attribution-licensed journals English-language journals Academic journals established in 2014
Journal of Heat and Mass Transfer Research
Environmental_science
106
30,490,672
https://en.wikipedia.org/wiki/Independence%20Monument%20%28Albania%29
The Monument of Independence is a monument in Vlorë, Albania, dedicated to the Albanian Declaration of Independence and created by the Albanian sculptors Muntaz Dhrami and Kristaq Rama. It stands in the Flag's Plaza, near the building where the first Albanian government worked in 1913. At the center of the monument is a sculpture of Ismail Kemal, the leader of the Albanian national movement and founder of independent Albania. References External links View of the monument Buildings and structures in Vlorë Monuments and memorials in Albania National symbols of Albania Tourist attractions in Vlorë County National personifications Colossal statues 1972 sculptures
Independence Monument (Albania)
Physics,Mathematics
125
41,084,961
https://en.wikipedia.org/wiki/Paul%20Boston
Paul Boston (born 1952) is an Australian artist. Life and work Paul Boston was born in Melbourne in 1952. While at art school he developed an interest in Zen. After graduating, Boston travelled to Japan and South East Asia, where he spent time developing the Zen practice which informs much of his later work. Taking inspiration from Cubist and abstract art, Boston has explored the nature of paradox in his paintings and drawings and has shown an interest in the interchangeability of form and space. Drawing on his involvement with Zen practice, Boston is interested in creating a sense of the meditation experience for the viewer through his work, something he calls a contemplative presence, showing a careful consideration for tone and a refinement in the fabrication of forms, whereby his shapes come to mean different things to different people. Boston has produced a substantial body of work that has been shown in solo exhibitions throughout Australia and in group shows in Australia and overseas. He is the recipient of a number of prizes, and his work is included in the collections of all major Australian public galleries and a number of important private and corporate collections throughout the world. Boston lives and works in Melbourne. Exhibitions Throughout his 30-year career Boston has held 20 solo shows in Australia and has participated in over 45 group exhibitions in Australia and overseas. Since 1993 he has shown regularly at Niagara Galleries, Melbourne, and has represented them at the Korea International Art Fair in 2010, 2011 and 2012 and at the Auckland Art Fair in 2007 and 2011. He also participated in the Melbourne Art Fair in 2002, 2004, 2006, 2010 and 2012. Boston has shown extensively across Australia as well as internationally, participating in Group Show at the David McKee Gallery and A Survey of International Painting and Sculpture at the Museum of Modern Art in New York, as well as Contemporary Australian Art to China 1988–1989, which toured from Beijing to Guangzhou. His most recent shows have included the Paul Boston Survey Exhibition: 1980–2010 at Ray Hughes Gallery, Sydney and Abstraction 10 at Charles Nodrum Gallery, Melbourne in 2011. Collections Boston's work is represented extensively in public and private collections throughout Australia. He features in major state galleries such as the National Gallery of Australia, National Gallery of Victoria, Art Gallery of NSW, Art Gallery of Western Australia and Museum of Contemporary Art, South Brisbane, as well as regional galleries and corporate, university and private collections across Australia and overseas. Awards Boston has won several awards throughout his career, including the National Works on Paper Award, Mornington Peninsula Regional Gallery in 2004, the John McCaughey Memorial Art Prize, National Gallery of Victoria in 1991, and the inaugural Savage Drawing Prize, Melbourne in 1987. References 1952 births Living people Australian artists Draughtsmen
Paul Boston
Engineering
545
27,930,353
https://en.wikipedia.org/wiki/Living%20free-radical%20polymerization
Living free radical polymerization is a type of living polymerization where the active polymer chain end is a free radical. Several methods exist. IUPAC recommends using the term "reversible-deactivation radical polymerization" instead of "living free radical polymerization", though the two terms are not synonymous. Reversible-deactivation polymerization There is a mode of polymerization referred to as reversible-deactivation polymerization which is distinct from living polymerization, despite some common features. Living polymerization requires a complete absence of termination reactions, whereas reversible-deactivation polymerization may contain a similar fraction of termination as conventional polymerization with the same concentration of active species. Catalytic chain transfer and cobalt mediated radical polymerization Catalytic chain transfer polymerization is not a strictly living form of polymerization. Yet it figures significantly in the development of later forms of living free radical polymerization. In the late 1970s, researchers in the USSR found that cobalt porphyrins were able to reduce the molecular weight during polymerization of methacrylates. Later investigations showed that cobalt glyoxime complexes were as effective as the porphyrin catalysts and also less oxygen-sensitive. Due to their lower oxygen sensitivity these catalysts have been investigated much more thoroughly than the porphyrin catalysts. The major products of catalytic chain transfer polymerization are vinyl-terminated polymer chains. One of the major drawbacks of the process is that catalytic chain transfer polymerization does not produce macromonomers but instead produces addition–fragmentation agents. When a growing polymer chain reacts with the addition–fragmentation agent, the radical end-group attacks the vinyl bond and forms a bond. However, the resulting product is so hindered that the species undergoes fragmentation, leading eventually to telechelic species. These addition–fragmentation chain transfer agents do form graft copolymers with styrenic and acrylate species; however, they do so by first forming block copolymers and then incorporating these block copolymers into the main polymer backbone. While high yields of macromonomers are possible with methacrylate monomers, low yields are obtained when using catalytic chain transfer agents during the polymerization of acrylate and styrenic monomers. This has been seen to be due to the interaction of the radical centre with the catalyst during these polymerization reactions. The reversible reaction of the cobalt macrocycle with the growing radical is known as cobalt–carbon bonding and in some cases leads to living polymerization reactions. Iniferter polymerization An iniferter is a chemical compound that simultaneously acts as initiator, transfer agent, and terminator (hence the name ini-fer-ter) in controlled free radical iniferter polymerizations; the most common is the dithiocarbamate type. Stable free radical mediated polymerization The two options of SFRP are nitroxide-mediated polymerization (NMP) and verdazyl-mediated polymerization (VMP). SFRP was discovered while using a radical scavenger called TEMPO to investigate the rate of initiation during free radical polymerization.
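The control principle discussed next is usually summarised as a reversible capping equilibrium between the propagating radical and the stable radical. The scheme below is a standard textbook formulation (with TEMPO as the example nitroxide), given for illustration rather than taken from this article's sources:

```latex
% Reversible capping equilibrium in nitroxide-mediated polymerization,
% with T* the stable nitroxide radical (e.g. TEMPO):
\[
  \mathrm{P}_n\text{--}\mathrm{T}
    \;\underset{k_{\mathrm{c}}}{\overset{k_{\mathrm{d}}}{\rightleftharpoons}}\;
  \mathrm{P}_n^{\bullet} + \mathrm{T}^{\bullet},
  \qquad
  \mathrm{P}_n^{\bullet} + \mathrm{M} \;\xrightarrow{\;k_{\mathrm{p}}\;}\; \mathrm{P}_{n+1}^{\bullet},
\]
% where k_d and k_c are the dissociation and recombination rate constants.
% The equilibrium lies far to the dormant (capped) side, keeping the
% propagating-radical concentration low.
```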
When the coupling of the stable free radical with the polymeric radical is sufficiently reversible, termination is reversible, and the propagating radical concentration can be limited to levels that allow controlled polymerization. Similar to atom transfer radical polymerization (discussed below), the equilibrium between dormant chains (those reversibly terminated with the stable free radical) and active chains (those with a radical capable of adding to monomer) is designed to heavily favor the dormant state. Other stable free radicals have also been explored for this polymerization reaction, with lower efficiency. Atom transfer radical polymerization (ATRP) Among LRP methods, ATRP is the most studied. Since its development in 1995, a vast number of articles have been published on this topic. A review written by Matyjaszewski covers the developments in ATRP from 1995 to 2000. ATRP involves the chain initiation of free radical polymerization by a halogenated organic species in the presence of a metal halide. The metal can access a number of different oxidation states, which allows it to abstract a halide from the organohalide, creating a radical that then starts free radical polymerization. After initiation and propagation, the radical on the active chain terminus is reversibly terminated (with the halide) by reacting with the catalyst in its higher oxidation state. Thus, the redox process gives rise to an equilibrium between dormant (polymer–halide) and active (polymer–radical) chains; schematically, Pn–X + Cu(I)L ⇌ Pn• + X–Cu(II)L for a copper-based catalyst. The equilibrium is designed to heavily favor the dormant state, which effectively reduces the radical concentration to a sufficiently low level to limit bimolecular coupling. An obstacle associated with this type of reaction is the generally low solubility of the metal halide species, which results in limited availability of the catalyst. This is improved by the addition of a ligand, which significantly improves the solubility of the metal halide and thus the availability of the catalyst, but complicates subsequent catalyst removal from the polymer product. Reversible addition fragmentation chain transfer (RAFT) polymerization RAFT technology offers the benefit of being able to readily synthesize polymers with predetermined molecular weight and narrow molecular weight distributions over a wide range of monomers, with reactive terminal groups that can be purposely manipulated, including by further polymerization into complex architectures. Furthermore, RAFT can be used in all modes of free radical polymerization: solution, emulsion and suspension polymerizations. Implementing the RAFT technique can be as simple as introducing a suitable chain transfer agent (CTA), known as a RAFT agent, into a conventional free radical polymerization reaction (which must be devoid of oxygen, as oxygen terminates propagation). This CTA is the main species in RAFT polymerization. Generally it is a di- or tri-thiocarbonylthio compound, which produces the dormant form of the radical chains. Control in RAFT polymerization is achieved in a far more complicated manner than the homolytic bond formation–bond cleavage of SFRP and ATRP. The CTA for RAFT polymerization must be chosen cautiously because it affects polymer length, chemical composition, rate of the reaction and the number of side reactions that may occur. The mechanism of RAFT begins with a standard initiation step: homolytic bond cleavage of the initiator molecule yields a reactive free radical.
This free radical then reacts with a molecule of the monomer to form the active center, with additional molecules of monomer then adding in a sequential fashion to produce a growing polymer chain (Pn•). The propagating chain adds to the CTA to yield a radical intermediate. Fragmentation of this intermediate gives rise either to the original polymer chain (Pn•) or to a new radical (R•), which itself must be able to reinitiate polymerization. This free radical generates its own active center by reaction with the monomer, and eventually a new propagating chain (Pm•) is formed. Ultimately, chain equilibration occurs, in which there is a rapid equilibrium between the actively growing radicals and the dormant compounds, thereby allowing all of the chains to grow at the same rate. A limited amount of termination does occur; however, the effect of termination on polymerization kinetics is negligible. The calculation of molecular weight for a synthesized polymer is relatively easy, in spite of the complex mechanism of RAFT polymerization. As stated before, during the equilibration step all chains grow at equal rates; in other words, the molecular weight of the polymer increases linearly with conversion. Multiplying the ratio of monomer consumed to the concentration of CTA used by the molecular weight of the monomer (mM) gives a reliable estimate of the number-average molecular weight, i.e. Mn ≈ ([M]consumed/[CTA]) × mM. RAFT is a degenerative chain transfer process and is free radical in nature. RAFT agents contain di- or tri-thiocarbonyl groups, and it is the reaction with an initiator, usually AIBN, that creates a propagating chain or polymer radical. This polymer chain then adds to the C=S bond and leads to the formation of a stabilized radical intermediate. In an ideal system, these stabilized radical intermediates do not undergo termination reactions, but instead reintroduce a radical capable of reinitiation or propagation with monomer, while they themselves re-form their C=S bond. The cycle of addition to the C=S bond, followed by fragmentation of a radical, continues until all monomer or initiator is consumed. Termination is limited in this system by the low concentration of active radicals, and any termination that does occur is negligible. RAFT, invented by Rizzardo et al. at CSIRO, and a mechanistically identical process termed Macromolecular Design via Interchange of Xanthates (MADIX), invented by Zard et al. at Rhodia, were both first reported in 1998/early 1999. Iodine-transfer polymerization (ITP) Iodine-transfer polymerization (ITP, also called ITRP), developed by Tatemoto and coworkers in the 1970s, gives relatively low polydispersities for fluoroolefin polymers. While it has received relatively little academic attention, this chemistry has served as the basis for several industrial patents and products and may be the most commercially successful form of living free radical polymerization. It has primarily been used to incorporate iodine cure sites into fluoroelastomers. The mechanism of ITP involves thermal decomposition of the radical initiator (AIBN), generating the initiating radical In•. This radical adds to the monomer M to form the species P1•, which can propagate to Pm•. By exchange of iodine from the transfer agent R–I to the propagating radical Pm•, a new radical R• is formed and Pm• becomes dormant. This species can propagate with monomer M to Pn•.
During the polymerization, exchange between the different polymer chains and the transfer agent occurs, which is typical for a degenerative transfer process. Typically, iodine transfer polymerization uses a mono- or diiodo-perfluoroalkane as the initial chain transfer agent. This fluoroalkane may be partially substituted with hydrogen or chlorine. The energy of the iodine–perfluoroalkane bond is low and, in contrast to iodo-hydrocarbon bonds, its polarization is small. Therefore, the iodine is easily abstracted in the presence of free radicals. Upon encountering an iodoperfluoroalkane, a growing poly(fluoroolefin) chain will abstract the iodine and terminate, leaving the newly created perfluoroalkyl radical to add further monomer. But the iodine-terminated poly(fluoroolefin) itself acts as a chain transfer agent. As in RAFT processes, as long as the rate of initiation is kept low, the net result is the formation of a monodisperse molecular weight distribution. Use of conventional hydrocarbon monomers with iodoperfluoroalkane chain transfer agents has been described. The resulting molecular weight distributions have not been narrow, since the energetics of an iodine–hydrocarbon bond are considerably different from those of an iodine–fluorocarbon bond, and abstraction of the iodine from the terminated polymer is difficult. The use of hydrocarbon iodides has also been described, but again the resulting molecular weight distributions were not narrow. Preparation of block copolymers by iodine-transfer polymerization was also described by Tatemoto and coworkers in the 1970s. Although use of living free radical processes in emulsion polymerization has been characterized as difficult, all examples of iodine-transfer polymerization have involved emulsion polymerization. Extremely high molecular weights have been claimed. Listed below are some other less described but to some extent increasingly important living radical polymerization techniques. Selenium-centered radical-mediated polymerization Diphenyl diselenide and several benzylic selenides have been explored by Kwon et al. as photoiniferters in polymerization of styrene and methyl methacrylate. Their mechanism of control over polymerization is proposed to be similar to the dithiuram disulfide iniferters. However, their low transfer constants allow them to be used for block copolymer synthesis but give limited control over the molecular weight distribution. Telluride-mediated polymerization (TERP) Telluride-mediated polymerization, or TERP, first appeared to operate mainly under a reversible chain transfer mechanism by homolytic substitution under thermal initiation. However, in a kinetic study it was found that TERP predominantly proceeds by degenerative transfer rather than 'dissociation–combination'. Alkyl tellurides of the structure Z–X–R, where Z = methyl and R = a good free radical leaving group, give the better control for a wide range of monomers, phenyl tellurides (Z = phenyl) giving poor control. Polymerization of methyl methacrylates is only controlled by ditellurides. The importance of X to chain transfer increases in the series O<S<Se<Te, which makes alkyl tellurides effective in mediating control under thermally initiated conditions, and the alkyl selenides and sulfides effective only under photoinitiated polymerization. Stibine-mediated polymerization More recently, Yamago et al.
reported stibine-mediated polymerization, using an organostibine transfer agent with the general structure Z(Z')–Sb–R (where Z = activating group and R = free radical leaving group). A wide range of monomers (styrenics, (meth)acrylics and vinylics) can be controlled, giving narrow molecular weight distributions and predictable molecular weights under thermally initiated conditions. Yamago has also published a patent indicating that bismuth alkyls can control radical polymerizations via a similar mechanism. References Free radical reactions Polymerization reactions
Living free-radical polymerization
Chemistry,Materials_science
2,919
9,750,615
https://en.wikipedia.org/wiki/Marietta%20Blau
Marietta Blau (29 April 1894 – 27 January 1970) was an Austrian physicist credited with developing photographic nuclear emulsions that could usefully image and accurately measure high-energy nuclear particles and events, significantly advancing the field of particle physics in her time. For this, she was awarded the Lieben Prize by the Austrian Academy of Sciences. As a Jew, she was forced to flee Austria when Nazi Germany annexed it in 1938, eventually making her way to the United States. She was nominated for Nobel Prizes in both physics and chemistry for her work, but did not win. After her return to Austria, she won the Erwin Schrödinger Prize from the Austrian Academy of Sciences. Biography Blau was born on 29 April 1894 into a middle-class Jewish family, to Mayer (Markus) Blau, a court lawyer and music publisher, and his wife, Florentine Goldzweig. After obtaining the general certificate of education from the girls' high school run by the Association for the Extended Education of Women, she studied physics and mathematics at the University of Vienna from 1914 to 1918; her PhD, on the absorption of gamma rays, was awarded in March 1919. Blau is credited with developing (photographic) nuclear emulsions that could usefully image and accurately measure high-energy nuclear particles and events. Additionally, this established a method to accurately study reactions caused by cosmic ray events. Her nuclear emulsions significantly advanced the field of particle physics in her time. For her work, she was nominated several times, during the period 1950 to 1957, for the Nobel Prize in Physics and once for the Nobel Prize in Chemistry, by Erwin Schrödinger and Hans Thirring. Pre-World War II From 1919 to 1923, Blau held several positions in industrial and university research institutions in Austria and Germany; in 1921, she moved to Berlin to work at a manufacturer of X-ray tubes, a position she left in order to become an assistant at the Institute for Medical Physics at the University of Frankfurt am Main. From 1923 on, she worked as an unpaid scientist at the Institute for Radium Research of the Austrian Academy of Sciences in Vienna. A stipend from the Austrian Association of University Women made it possible for her to do research also in Göttingen and in Paris (1932/1933), at the Curie Institute. In her Vienna years, Blau's main interest was the development of the photographic method of particle detection. The methodical goals which she pursued were the identification of particles, in particular alpha particles and protons, and the determination of their energy based on the characteristics of the tracks they left in emulsions; there, she developed a photographic emulsion technique used in the study of cosmic rays, becoming the first scientist to use nuclear emulsions to detect neutrons. For this work, Blau and her former student Hertha Wambacher received the Lieben Prize of the Austrian Academy of Sciences in 1937. Her greatest success came, also in 1937, when she and Wambacher discovered "disintegration stars" in photographic plates that had been exposed to cosmic radiation at an altitude of 2,300 metres (≈7,500 feet) above sea level. These stars are the patterns of particle tracks from nuclear reactions (spallation events) of cosmic-ray particles with nuclei of the photographic emulsion. Because of her Jewish descent, Blau had to leave Austria in 1938 after the country's annexation by Nazi Germany, a fact which caused a severe break in her scientific career.
She first went to Oslo. Then, through the intercession of Albert Einstein, she obtained a teaching position at the Instituto Politécnico Nacional in Mexico City and later at Universidad Michoacana de San Nicolás de Hidalgo. Conditions in Mexico made research extremely difficult for her, and she seized an opportunity to move to the United States in 1944. Post-war In the United States, Blau worked in industry until 1948, afterwards (until 1960) at Columbia University, Brookhaven National Laboratory and the University of Miami. At these institutions, she was responsible for the application of the photographic method of particle detection in high-energy experiments at particle accelerators. In 1960, Blau returned to Austria and conducted scientific work at the Institute for Radium Research until 1964 – again without pay. She headed a working group analyzing particle-track photographs from experiments at CERN and supervised a dissertation in this field. In 1962, she received the Erwin Schrödinger Prize of the Austrian Academy of Sciences, but an attempt to make her also a corresponding member of the Academy was not successful. Death Marietta Blau died in Vienna from cancer on 27 January 1970. Her illness was related to her unprotected handling of radioactive substances as well as her cigarette smoking over many years. No obituary appeared in any scientific publication. Legacy In 1950, Cecil Powell received the Nobel Prize in Physics for the development of the photographic method for particle detection and the discovery of the pion by use of Blau's method. See also Timeline of women in science The Matilda effect References Literature Robert Rosner & Brigitte Strohmaier (eds.) (2003) Marietta Blau – Sterne der Zertrümmerung. Biographie einer Wegbereiterin der modernen Teilchenphysik. Böhlau, Vienna (in German) Brigitte Strohmaier & Robert Rosner (2006) Marietta Blau – Stars of Disintegration. Biography of a pioneer of particle physics. Ariadne, Riverside, California Leopold Halpern & Maurice Shapiro (2006) "Marietta Blau" in Out of the Shadows: Contributions of Twentieth-Century Women to Physics, Nina Byers and Gary Williams, ed., Cambridge University Press . External links "Marietta Blau" in CWP at UCLA Rentetzi, Maria "Marietta Blau", Jewish Women: A Comprehensive Historical Encyclopedia Sime, Ruth Lewin, "Marietta Blau: Pioneer of Photographic Nuclear Emulsions and Particle Physics", Physics in Perspective, 15 (2013) 3–32 20th-century Austrian physicists Austrian nuclear physicists 1894 births 1970 deaths Austrian women physicists 20th-century Austrian women scientists Schrödinger Prize recipients Jewish emigrants from Austria after the Anschluss to the United States Scientists from Vienna Deaths from cancer in Austria University of Vienna alumni Particle physicists Brookhaven National Laboratory staff Nuclear physicists Women nuclear physicists Academic staff of the Instituto Politécnico Nacional Academic staff of Universidad Michoacana de San Nicolás de Hidalgo People associated with CERN American women academics
Marietta Blau
Physics
1,348
19,277,881
https://en.wikipedia.org/wiki/Silver%20Shoes
The Silver Shoes are the magical shoes that appear in L. Frank Baum's 1900 novel The Wonderful Wizard of Oz as heroine Dorothy Gale's transport home. They are originally owned by the Wicked Witch of the East but pass to Dorothy when her house lands on the Witch. At the end of the story, Dorothy uses the shoes to transport herself back to her home in Kansas, but when she arrives at her destination she finds that the shoes have fallen off en route. Appearances in books The Wonderful Wizard of Oz The Wonderful Wizard of Oz (1900) is the only book in the original series to feature the Silver Shoes directly. They are the property of the Wicked Witch of the East until Dorothy's house lands on and kills her. They are then given to Dorothy by the Good Witch of the North, who tells Dorothy that "there is some charm connected with them; but what it is we never knew." When Dorothy is captured by the Wicked Witch of the West, the Witch tries to steal the shoes. She finally gets one by tricking Dorothy into tripping over an invisible iron bar. Dorothy then melts the Witch with a bucket of water and recovers the shoe. In the final chapters of the book, Glinda explains that the shoes can transport the wearer anywhere they wish. If the Silver Shoes have any other powers, they are never outlined in the books; however, the Witch of the West was obsessed with obtaining them, as they would give her much greater power than anything else she possessed, suggesting the shoes hold immense magic. After saying goodbye to her friends, Dorothy knocks her heels together three times and commands the shoes to carry her home. When Dorothy opens her eyes, she has arrived in Kansas. She finds that the shoes are gone, having fallen off during her flight and landed somewhere in the Deadly Desert. Though they are mentioned several times in sequels, they never appear again in the original series. The Wizard of the Emerald City In Alexander Melentyevich Volkov's The Wizard of the Emerald City (1939), the Silver Shoes, or Serebryaniye bashmachki as they are called in the manuscript, are the source of the protection of Elly (his version of Dorothy) instead of the good Witch's kiss. She is therefore attacked once by an Ogre when she removes them, and afterward wears them even when she sleeps. They are not taken from the Witch's body, but rather brought by Toto from her dwelling (a dark cave). This was possibly done to avert the problem that a person wearing the shoes would be impossible to harm, since in that book the hurricane is created by the Wicked Witch to destroy mankind, and redirected upon her by the Good Witch of the North, who suffers no ill effects for harming her. It is said the Witch only wore the shoes on very special occasions. They are lost just as in Baum's book. Dorothy of Oz In Roger S. Baum's Dorothy of Oz (1989), Glinda recovers the Silver Shoes and presents them to Dorothy. They have enough power remaining that Dorothy can travel once more to Oz and back to Kansas. Wicked: The Life and Times of the Wicked Witch of the West In Wicked: The Life and Times of the Wicked Witch of the West (1995), the silver shoes are a gift to Nessarose (the Wicked Witch of the East) before she and her sister, Elphaba (the Wicked Witch of the West), start college. They are made by her father using special glass beads another man (Turtle Heart, possibly her biological father) taught him to make, which make the shoes shiny and iridescent, not necessarily a true silver.
They are later enchanted by Glinda (the Good Witch of the North) to give Nessarose the necessary balance to walk. In the Broadway musical adaptation of the book, Elphaba is the one who enchants the shoes. Her spell makes the silver shoes burn red hot, turning them into the ruby slippers. Appearances in film Wizard of Oz (1925 film) The shoes were absent from the 1925 movie. The Wizard of Oz (1939 film) In the 1939 movie the shoes served the same purpose as in the book but were changed to red by its screenwriter Noel Langley, and they were given a notably different appearance than in Denslow's illustrations. In addition to the silver shoes' powers, the ruby slippers in the film were magically protected from being removed from the feet of the person wearing them unless said person is dead. The Wiz In The Wiz (1978), the shoes are silver high heels. This movie gives further insight into the shoes' magical protection: when Evillene (the Witch of the West) tries to obtain them magically, her fingers are bent painfully backwards. The Wizard of Oz (1982 film) In the anime movie, the shoes are once again ruby slippers, though they are never referred to by that name. They are heeled shoes with pointed, slightly curled toes, similar to their appearance in Denslow's illustrations. Unlike the book, the shoes are still on Dorothy's feet when she arrives in Kansas. Return to Oz In Return to Oz (1985), the Ruby Slippers are used once again. In this movie, the slippers have more power than simply transporting people. They allow the Nome King to conquer Oz and turn everyone in the Emerald City to stone. Dorothy later uses the shoes to reverse this process. This extra power is due to the fact that the slippers replace the Nome King's Magic Belt. In the original draft of the script, the Nome King had refashioned the slippers into the actual Magic Belt from the novels. Upon his death, they reverted to the form of slippers. This was cut from the final filming of the movie. The Muppets' Wizard of Oz In The Muppets' Wizard of Oz (2005), the Silver Shoes are portrayed as sparkling, bejeweled, glittery Manolo Blahnik high heels. The laws of ownership are again displayed in that the Witch of the West tries to cut off Dorothy's feet to obtain the shoes. Once again the shoes remain on Dorothy's feet when she arrives home. Appearances in television The Wonderful Wizard of Oz (1986 anime) When first seen on the feet of the Witch of the East in the 1986 anime version, they are brown peasant's shoes. When the Witch of the North then magically transfers them to Dorothy's feet, they take on the appearance of white slip-on shoes. When Dorothy is forced to give one of the shoes to the Witch of the West, it reverts to the peasant form. After the Witch is melted and Dorothy is shown wearing it again, it has returned to its white form. The shoes are used twice after they initially send Dorothy home. The first time, she is holding them in her hands when she clicks the heels and drops them. Consequently, Dorothy is transported to Oz and the shoes are left in Kansas (Glinda sends her home). The second time occurs while Dorothy is sleeping. Tik-Tok is emitting a distress signal and the shoes activate, transporting Dorothy to the Land of Ev in a beam of light. Her clothes are changed in mid-flight. The Wizard of Oz (TV series) In the 1990 The Wizard of Oz television series, the Ruby Slippers are used to transport Dorothy back to Oz. 
They are depicted as possessing other powerful magical capabilities that Dorothy does not fully understand, and as such they often serve as a form of deus ex machina in hopeless situations. They are no longer depicted as high heels. A unique concept proposed by this series is that the Ruby Slippers' magic is linked to the glow of their red coloration. Their powers only function while a dim glow of red light emanates from them, initiated by Dorothy clicking her heels, and the effects of their magic immediately cease once the shoes stop glowing. The Wicked Witch was once able to annul their abilities entirely by capturing a red Luminary (teardrop-shaped creatures who control all color in Oz) and forcing him to drain the red color from the slippers themselves. However, the slippers regained their powers after the Luminary escaped. This series also proposes that the slippers do not necessarily have to be on the user's feet for their powers to work, as Dorothy once used them by tapping the heels together while she held the shoes in her hands (since the ground's sandy surface prevented her from clicking the heels together). Also worth noting is that in a single episode Truckle, the series' lead Flying Monkey, was able to wear the Ruby Slippers and thus utilize their powerful magic for his own whims. Even with his generally dim wits and reckless disregard, the slippers gave him sufficient power to overwhelm the Wicked Witch of the West's magical attacks and temporarily reduce her to his servant. This once again demonstrates that the shoes' user need not be a skilled or knowledgeable spellcaster in order to gain great power. The Cowardly Lion also gets to wear them briefly. Charmed In a season five episode, the Halliwell sisters are jovially interfered with by fairy tale creatures because of the Wicked Queen of Snow White. Once Piper vanquishes the Queen, she is sent home from purgatory with Dorothy's slippers. They are depicted as ruby slippers, like those in the 1939 film. Once Upon a Time (TV series) In the Once Upon a Time television series, the Silver Slippers are first alluded to in "The Doctor", when Rumpelstiltskin sends the Mad Hatter to the Land of Oz in order to locate and retrieve the shoes so that he can travel to the Land without Magic to find his lost son Baelfire. In "It's Not Easy Being Green", set shortly before "The Doctor", Zelena, the woman who would eventually become the Wicked Witch of the West, goes to the Wizard in order to seek out her family. Upon discovering that she was abandoned by her mother, Cora, and that her half-sister, Regina, became Queen and was being trained by Rumpelstiltskin, the Wizard gives Zelena the shoes so that she can travel to the Enchanted Forest to try to replace Regina as Rumpelstiltskin's student. Upon being rejected by him, Zelena turns green with envy. She mocks Rumpelstiltskin with the power of the Silver Slippers, thus causing his later desire to obtain them. Using the Silver Slippers again, she returns to Oz and dethrones the Wizard. In "Kansas" the Slippers appear again, as Zelena, posing as the Wizard of Oz, gives Dorothy the Silver Slippers in order to send her back to Kansas, in the hope that this will keep Dorothy from becoming a powerful witch herself, and from defeating and replacing Zelena as the Witch of the West. In "Our Decay", an adult Dorothy has returned to Oz via the Slippers to face down Zelena and rescue the Scarecrow from her. 
In "Ruby Slippers", the Silver Slippers appear one last time, enabling travel between the Underworld and Oz so that Ruby (Red Riding Hood) can rescue Dorothy from a sleeping curse Zelena has placed her under. Zelena, who is trying to change her ways gives the Slippers over to the heroes so Ruby and Snow White can make their way back to Oz. Appearances in other media Dorothy of Oz The Dorothy of Oz series completely revamps the Silver Shoes. They are instead depicted as red boots created by Selluriah, the Witch of the East. When Mara (codename Dorothy) stomps the heels of said boots, she takes on the form and powers of a witch. This power is channeled (rather inexpertly) through the staff Thrysos. The transformation is rather embarrassing, as it involves Mara being momentarily nude and various men are always apt to spot her. It is yet to be revealed if these boots will help Mara return home. Fables The shoes are shown in the DC comics Vertigo series Fables. Dorothy, who's portrayed as a cold, merciless assassin, found that she enjoyed killing after being hired by the Wizard to kill the Witch of the West. However, she loses them on the way back to Kansas over the Deadly Desert, and goes to great lengths to get them back. She has several encounters with Fabletown spy Cinderella, which climaxes with them facing off in the mini-series Cinderella: Fables Are Forever. After deducing that they are actually too big to fit Dorothy, Cinderella takes them and pushes her out of an airship over the Deadly Desert to her apparent death, though her body is not seen. Cinderella then looks over the shoes and decides they're just the right size to fit her. References Fictional elements introduced in 1900 Fictional footwear Magic items Oz (franchise)
Silver Shoes
Physics
2,657
5,301,306
https://en.wikipedia.org/wiki/Portable%20water%20purification
Portable water purification devices are self-contained, easily transported units used to purify water from untreated sources (such as rivers, lakes, and wells) for drinking purposes. Their main function is to eliminate pathogens, and often also suspended solids and some unpalatable or toxic compounds. These units provide an autonomous supply of drinking water to people without access to clean water supply services, including inhabitants of developing countries and disaster areas, military personnel, campers, hikers, workers in the wilderness, and survivalists. They are also called point-of-use water treatment systems and field water disinfection techniques. Techniques include heat (including boiling), filtration, activated charcoal adsorption, chemical disinfection (e.g. chlorination, iodine, ozonation, etc.), ultraviolet purification (including sodis), distillation (including solar distillation), and flocculation. Often these are used in combination. Drinking water hazards Untreated water may contain potentially pathogenic agents, including protozoa, bacteria, viruses, and some larvae of higher-order parasites such as liver flukes and roundworms. Chemical pollutants such as pesticides, heavy metals and synthetic organics may be present. Other components may affect taste, odour and general aesthetic qualities, including turbidity from soil or clay, colour from humic acid or microscopic algae, odours from certain types of bacteria, particularly Actinomycetes, which produce geosmin, and saltiness from brackish or sea water. Common metallic contaminants such as copper and lead can be treated by increasing the pH using soda ash or lime, which precipitates such metals. Careful decanting of the clear water after settlement, or the use of filtration, provides acceptably low levels of metals. Water contaminated by aluminium or zinc cannot be treated in this way using a strong alkali, as higher pHs re-dissolve the metal salts. Salt is difficult to remove except by reverse osmosis or distillation. Most portable treatment processes focus on mitigating human pathogens for safety and removing particulate matter, tastes and odours. Significant pathogens commonly present in the developed world include Giardia, Cryptosporidium, Shigella, hepatitis A virus, Escherichia coli, and enterovirus. In less developed countries there may be risks from cholera and dysentery organisms and a range of tropical enteroparasites. Giardia lamblia and Cryptosporidium spp., both of which cause diarrhea (see giardiasis and cryptosporidiosis), are common pathogens. In backcountry areas of the United States and Canada they are sometimes present in sufficient quantity that water treatment is justified for backpackers, although this has created some controversy. (See wilderness acquired diarrhea.) In Hawaii and other tropical areas, Leptospira spp. are another possible problem. Less commonly seen in developed countries are organisms such as Vibrio cholerae, which causes cholera, and various strains of Salmonella, which cause typhoid and para-typhoid diseases. Pathogenic viruses may also be found in water. The larvae of flukes are particularly dangerous in areas frequented by sheep, deer, or cattle. If such microscopic larvae are ingested, they can form potentially life-threatening cysts in the brain or liver. This risk extends to plants grown in or near water, including the commonly eaten watercress. In general, the more human activity upstream (i.e. 
the larger the stream/river), the greater the potential for contamination from sewage effluent, surface runoff, or industrial pollutants. Groundwater pollution may occur from human activity (e.g. on-site sanitation systems or mining) or might be naturally occurring (e.g. from arsenic in some regions of India and Bangladesh). Water collected as far upstream as possible, above all known or anticipated risks of pollution, poses the lowest risk of contamination and is best suited to portable treatment methods. Techniques Not all techniques by themselves will mitigate all hazards. Although flocculation followed by filtration has been suggested as best practice, this is rarely practicable without the ability to carefully control pH and settling conditions. Ill-advised use of alum as a flocculant can lead to unacceptable levels of aluminium in the water so treated. If water is to be stored, halogens offer extended protection. Heat (boiling) Heat kills disease-causing micro-organisms, with higher temperatures or longer durations required for some pathogens. Sterilization of water (killing all living contaminants) is not necessary to make water safe to drink; one only needs to render enteric (intestinal) pathogens harmless. Boiling does not remove most pollutants and does not leave any residual protection. The WHO states that bringing water to a rolling boil and then letting it cool naturally is sufficient to inactivate pathogenic bacteria, viruses and protozoa. The CDC recommends a rolling boil for 1 minute. At high elevations, though, the boiling point of water drops. At altitudes greater than , boiling should continue for 3 minutes. All bacterial pathogens are quickly killed above , so although boiling is not necessary to make the water safe to drink, the time taken to heat the water to boiling is usually sufficient to reduce bacterial concentrations to safe levels. Encysted protozoan pathogens may require higher temperatures to remove any risk. Boiling is not always necessary, nor is it always sufficient. Pasteurization, in which enough pathogens are killed, typically occurs at 63 °C for 30 minutes or 72 °C for 15 seconds. Certain pathogens must be heated above boiling (e.g. the botulism agent Clostridium botulinum requires , most endospores require , and prions even higher). Higher temperatures may be achieved with a pressure cooker. Heat combined with ultraviolet light (UV), as in the sodis method, reduces the necessary temperature and duration. Filtration Portable pump filters are commercially available with ceramic filters that filter 5,000 to 50,000 litres per cartridge, removing pathogens down to the 0.2–0.3 micrometer (μm) range. Some also utilize activated charcoal filtering. Most filters of this kind remove most bacteria and protozoa, such as Cryptosporidium and Giardia lamblia, but not viruses, except for the very largest (0.3 μm and larger in diameter), so disinfection by chemicals or ultraviolet light is still required after filtration. It is worth noting that not all bacteria are removed by 0.2 μm pump filters; for example, strands of thread-like Leptospira spp. (which can cause leptospirosis) are thin enough to pass through a 0.2 μm filter. Effective chemical additives to address shortcomings in pump filters include chlorine, chlorine dioxide, iodine, and sodium hypochlorite (bleach). 
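A useful way to think about mechanical filtration limits is to compare an element's nominal pore size against the approximate sizes of the pathogen classes above. The sketch below encodes this size-exclusion reasoning only; the size figures are rough, illustrative orders of magnitude, and real filters can deviate from them (as the Leptospira example shows), so it is a teaching aid rather than a safety tool.

```python
# approximate smallest dimensions in micrometers; illustrative values only
APPROX_SIZE_UM = {
    "protozoan cysts (Giardia, Cryptosporidium)": 3.0,
    "typical bacteria": 0.5,
    "viruses": 0.02,
}

def size_exclusion_screen(pore_size_um):
    # which pathogen classes a filter of this nominal pore size
    # would retain on size grounds alone
    return {organism: size >= pore_size_um
            for organism, size in APPROX_SIZE_UM.items()}

for organism, retained in size_exclusion_screen(0.2).items():
    note = "retained" if retained else "NOT retained -> disinfect after filtering"
    print(f"{organism}: {note}")
```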
There have been polymer and ceramic filters on the market that incorporated iodine post-treatment in their filter elements to kill viruses and the smaller bacteria that cannot be filtered out, but most have disappeared due to the unpleasant taste imparted to the water, as well as possible adverse health effects when iodine is ingested over protracted periods. While the filtration elements may do an excellent job of removing most bacterial and fungal contaminants from drinking water when new, the elements themselves can become colonization sites. In recent years some filters have been enhanced by bonding silver metal nanoparticles to the ceramic element and/or to the activated charcoal to suppress growth of pathogens. Small, hand-pumped reverse osmosis filters were originally developed for the military in the late 1980s for use as survival equipment, for example, to be included with inflatable rafts on aircraft. Civilian versions are available. Instead of using the static pressure of a water supply line to force the water through the filter, pressure is provided by a hand-operated pump. These devices can generate drinkable water from seawater. The Portable Aqua Unit for Lifesaving (PAUL for short) is a portable ultrafiltration-based membrane water filter for humanitarian aid. It allows the decentralized supply of clean water in emergency and disaster situations for about 400 persons per unit per day. The filter is designed to function without chemicals, external energy, or trained personnel. Activated charcoal adsorption Granular activated carbon filtering utilizes a form of activated carbon with a high surface area that adsorbs many compounds, including many toxic compounds. Passing water through activated carbon is commonly used in concert with hand-pumped filters to address organic contamination, taste, or objectionable odors. Activated carbon filters are not usually used as the primary purification technique of portable water purification devices, but rather as a secondary means to complement another purification technique. They are most commonly implemented for pre- or post-filtering, in a step separate from ceramic filtering, in either case being implemented prior to the addition of chemical disinfectants used to control bacteria or viruses that filters cannot remove. Activated charcoal can remove chlorine from treated water, stripping out any residual protection against pathogens, and so should not, in general, be used after chemical disinfection treatments in portable water purification processing without careful thought. Ceramic/carbon-core filters with a 0.5 μm or smaller pore size are excellent for removing bacteria and cysts while also removing chemicals. Chemical disinfection with halogens Chemical disinfection with halogens, chiefly chlorine and iodine, results from oxidation of essential cellular structures and enzymes. The primary factors that determine the rate and proportion of microorganisms killed are the residual or available halogen concentration and the exposure time. Secondary factors are pathogen species, water temperature, pH, and organic contaminants. In field-water disinfection, use of concentrations of 1–16 mg/L for 10–60 min is generally effective. Of note, Cryptosporidium oocysts, likely Cyclospora species, and Ascaris eggs are extremely resistant to halogens, and field inactivation may not be practical with bleach and iodine. 
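The interplay of concentration and exposure time quoted above is often summarized as a "CT" (concentration × time) product: for a given disinfection target, halving the residual halogen concentration roughly doubles the required contact time. The helper below only restates that proportionality; the target CT value passed in is an assumed input, since the appropriate value depends on the pathogen, water temperature, pH, and organic load discussed above.

```python
def required_contact_time_min(target_ct, residual_mg_per_l):
    # contact time in minutes to reach a target CT (mg*min/L)
    # at a given residual halogen concentration (mg/L)
    if residual_mg_per_l <= 0:
        raise ValueError("residual concentration must be positive")
    return target_ct / residual_mg_per_l

# example with an assumed target CT of 30 mg*min/L
for concentration in (1.0, 4.0, 8.0):
    minutes = required_contact_time_min(30.0, concentration)
    print(f"{concentration} mg/L -> {minutes:.0f} min contact time")
```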
Iodine Iodine used for water purification is commonly added to water as a solution, in crystallized form, or in tablets containing tetraglycine hydroperiodide that release 8 mg of iodine per tablet. The iodine kills many, but not all, of the most common pathogens present in natural fresh water sources. Carrying iodine for water purification is an imperfect but lightweight solution for those in need of field purification of drinking water. Kits are available in camping stores that include an iodine pill and a second pill (vitamin C or ascorbic acid) that removes the iodine taste from the water after it has been disinfected. The addition of vitamin C, in the form of a pill or in flavored drink powders, precipitates much of the iodine out of the solution, so it should not be added until the iodine has had sufficient time to work. This time is 30 minutes in relatively clear, warm water, but is considerably longer if the water is turbid or cold. If the iodine has precipitated out of the solution, then the drinking water has less available iodine in it. Tetraglycine hydroperiodide maintains its effectiveness indefinitely before the container is opened; although some manufacturers suggest not using the tablets more than three months after the container has initially been opened, the shelf life is in fact very long provided that the container is resealed immediately after each time it is opened. Similarly to potassium iodide (KI), sufficient consumption of tetraglycine hydroperiodide tablets may protect the thyroid against uptake of radioactive iodine. A 1995 study found that daily consumption of water treated with 4 tablets containing tetraglycine hydroperiodide reduced the uptake of radioactive iodine in human subjects to a mean of 1.1 percent, from a baseline mean of 16 percent, after a week of treatment. At 90 days of daily treatment, uptake was further reduced to a mean of 0.5 percent. However, unlike KI, tetraglycine hydroperiodide is not recommended by the WHO for this purpose. Iodine should be allowed at least 30 minutes to kill Giardia. Iodine crystals A potentially lower-cost alternative to iodine-based water purification tablets is the use of iodine crystals, although there are serious risks of acute iodine toxicity if preparation and dilution are not measured with some accuracy. This method may not be adequate for killing Giardia cysts in cold water. An advantage of using iodine crystals is that only a small amount of iodine is dissolved from the crystals at each use, so this method can treat very large volumes of water. Unlike tetraglycine hydroperiodide tablets, iodine crystals have an unlimited shelf life as long as they are not exposed to air for long periods of time or are kept under water; the crystals will sublimate if exposed to air for long periods. The large quantity of water that can be purified with iodine crystals at low cost makes this technique especially cost-effective for point-of-use or emergency water purification methods intended for use longer than the shelf life of tetraglycine hydroperiodide. Halazone tablets Chlorine-based halazone tablets were formerly popularly used for portable water purification. Chlorine in water is more than three times as effective a disinfectant against Escherichia coli as iodine. Halazone tablets were thus commonly used during World War II by U.S. soldiers for portable water purification, even being included in accessory packs for C-rations until 1945. 
Sodium dichloroisocyanurate (NaDCC) has largely displaced halazone tablets among the few remaining chlorine-based water purification tablets available today. Bleach Common bleaches, including calcium hypochlorite (Ca[OCl]2) and sodium hypochlorite (NaOCl), are common, well-researched, low-cost oxidizers. Chlorine bleach tablets give a more stable platform for disinfecting the water than liquid bleach, as the liquid version tends to degrade with age and give inconsistent results unless assays are carried out, which may be impractical in the field. Still, liquid bleach may nonetheless safely be used for short-term emergency water disinfection. The EPA recommends mixing two drops of 8.25% sodium hypochlorite solution (regular, unscented chlorine bleach) per quart/liter of water and leaving it to stand covered for 30 to 60 minutes. Two drops of 5% solution also suffice. Double the amount of bleach if the water is cloudy, colored, or very cold. Afterwards, the water should have a slight chlorine odor; if not, repeat the dosage and let it stand for another 15 minutes before use. After this treatment, the water may be left open to reduce the chlorine smell and taste. The Centers for Disease Control and Prevention (CDC) and Population Services International (PSI) promote a similar product (a 0.5%–1.5% sodium hypochlorite solution) as part of their Safe Water System (SWS) strategy. The product is sold in developing countries under local brand names specifically for the purpose of disinfecting drinking water. Neither chlorine (e.g., bleach) nor iodine alone is considered completely effective against Cryptosporidium, although they are partially effective against Giardia, with chlorine considered slightly better against the latter. A more complete field solution that includes chemical disinfectants is to first filter the water using a 0.2 μm ceramic cartridge pumped filter, followed by treatment with iodine or chlorine, thereby filtering out Cryptosporidium, Giardia, and most bacteria, along with the larger viruses, while also using the chemical disinfectant to address the smaller viruses and bacteria that the filter cannot remove. This combination is in some cases potentially more effective than even portable electronic disinfection based on UV treatment. Chlorine dioxide Chlorine dioxide can come from tablets or be created by mixing two chemicals together. It is more effective than iodine or chlorine against Giardia, and although it has only low to moderate effectiveness against Cryptosporidium, iodine and chlorine are ineffective against this protozoan. The cost of chlorine dioxide treatment is higher than the cost of iodine treatment. Mixed oxidant A simple brine (salt + water) solution in an electrolytic reaction produces a powerful mixed-oxidant disinfectant (mostly chlorine in the form of hypochlorous acid (HOCl), plus some peroxide, ozone, and chlorine dioxide). Chlorine tablets Sodium dichloroisocyanurate, or troclosene sodium, more commonly shortened to NaDCC, is a form of chlorine used for disinfection. It is used by major non-governmental organizations such as UNICEF to treat water in emergencies. Sodium dichloroisocyanurate tablets are available in a range of concentrations to treat differing volumes of water to give the World Health Organization's recommended 5 ppm available chlorine. They are effervescent tablets, allowing the tablet to dissolve in a matter of minutes. 
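The emergency bleach dosing rule quoted above translates directly into a small helper; the sketch below merely restates the EPA rule of thumb from this section (two drops of 8.25% bleach per liter, doubled for cloudy, colored, or very cold water) and is no substitute for the official guidance.

```python
def bleach_drops(liters, cloudy_or_cold=False):
    # drops of regular, unscented 8.25% household bleach for a volume of
    # water, per the rule of thumb above; let stand 30-60 minutes after
    drops_per_liter = 2 * (2 if cloudy_or_cold else 1)
    return max(1, round(drops_per_liter * liters))

print(bleach_drops(1.0))                       # 2 drops for clear water
print(bleach_drops(4.0, cloudy_or_cold=True))  # 16 drops for cloudy water
```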
Other chemical disinfection additives Silver ion tablets An alternative to iodine-based preparations in some usage scenarios is silver ion/chlorine dioxide-based tablets or droplets. These solutions may disinfect water more effectively than iodine-based techniques while leaving hardly any noticeable taste in the water. Silver ion/chlorine dioxide-based disinfecting agents will kill Cryptosporidium and Giardia if utilized correctly. The primary disadvantage of silver ion/chlorine dioxide-based techniques is the long purification time (generally 30 minutes to 4 hours, depending on the formulation used). Another concern is the possible deposition and accumulation of silver compounds in various body tissues, leading to a rare condition called argyria that results in a permanent, disfiguring, bluish-gray pigmentation of the skin, eyes, and mucous membranes. Hydrogen peroxide One recent study found that wild Salmonella, which would otherwise reproduce quickly during subsequent dark storage of solar-disinfected water, could be controlled by the addition of just 10 parts per million of hydrogen peroxide. Ultraviolet purification Ultraviolet (UV) light induces the formation of covalent linkages on DNA and thereby prevents microbes from reproducing. Without reproduction, the microbes become far less dangerous. Germicidal UV-C light in the short wavelength range of 100–280 nm acts on thymine, one of the four base nucleotides in DNA. When a germicidal UV photon is absorbed by a thymine molecule that is adjacent to another thymine within the DNA strand, a covalent bond or dimer between the molecules is created. This thymine dimer prevents enzymes from "reading" the DNA and copying it, thus neutering the microbe. Prolonged exposure can also cause single- and double-stranded breaks in DNA, oxidation of membrane lipids, and denaturation of proteins, all of which are toxic to cells. Still, there are limits to this technology. Water turbidity (i.e., the amount of suspended and colloidal solids contained in the water to be treated) must be low, such that the water is clear, for UV purification to work well, so a pre-filter step might be necessary. A concern with UV portable water purification is that some pathogens are hundreds of times less sensitive to UV light than others. Protozoan cysts were once believed to be among the least sensitive; however, recent studies have proved otherwise, demonstrating that both Cryptosporidium and Giardia are deactivated by a UV dose of just 6 mJ/cm2. However, EPA regulations and other studies show that it is viruses that are the limiting factor of UV treatment, requiring a 10–30 times greater dose of UV light than Giardia or Cryptosporidium. Studies have shown that UV doses at the levels provided by common portable UV units are effective at killing Giardia and that there was no evidence of repair and reactivation of the cysts. Water treated with UV still has the microbes present in the water, only with their means of reproduction turned "off". In the event that such UV-treated water containing neutered microbes is exposed to visible light (specifically, wavelengths of light over 330–500 nm) for any significant period of time, a process known as photoreactivation can take place, in which the damage to the bacteria's reproductive DNA may be repaired, potentially rendering them once more capable of reproducing and causing disease. 
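The dose arithmetic behind these figures is simple to make explicit: the delivered germicidal dose is the lamp irradiance multiplied by the exposure time. The sketch below uses the 6 mJ/cm2 cyst figure and the 10–30× virus requirement quoted above; the irradiance and exposure values in the example are made-up inputs for illustration, not specifications of any real UV unit.

```python
def uv_dose_mj_per_cm2(irradiance_mw_per_cm2, seconds):
    # delivered UV dose: 1 mW/cm^2 sustained for 1 s delivers 1 mJ/cm^2
    return irradiance_mw_per_cm2 * seconds

CYST_DOSE = 6.0               # mJ/cm^2 for Giardia/Cryptosporidium (see above)
VIRUS_DOSE = 30 * CYST_DOSE   # conservative end of the 10-30x virus range

dose = uv_dose_mj_per_cm2(irradiance_mw_per_cm2=1.5, seconds=90)
print(f"dose = {dose:.0f} mJ/cm^2")
print("cysts:", "ok" if dose >= CYST_DOSE else "insufficient")
print("viruses:", "ok" if dose >= VIRUS_DOSE else "insufficient")
```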
Because of this photoreactivation risk, UV-treated water must not be exposed to visible light for any significant period of time after treatment and before consumption, to avoid ingesting reactivated and dangerous microbes. Recent developments in semiconductor technology allow the development of UV-C light-emitting diodes (LEDs). UV-C LED systems address the disadvantages of mercury-based technology, namely power-cycling penalties, high power needs, fragility, warm-up time, and mercury content. Solar water disinfection In solar water disinfection (often shortened to "sodis"), microbes are destroyed by temperature and UVA radiation provided by the sun. Water is placed in a transparent plastic PET bottle or plastic bag, oxygenated by shaking the partially filled, capped bottle prior to filling it all the way, and left in the sun for 6–24 hours atop a reflective surface. Solar distillation Solar distillation relies on sunlight to warm and evaporate the water to be purified, which then condenses and trickles into a container. In theory, a solar (condensation) still removes all pathogens, salts, metals, and most chemicals, but in field practice the lack of clean components, easy contact with dirt, improvised construction, and disturbances result in water that is cleaner, yet still contaminated. Homemade water filters Water filters can be made on-site using local materials such as sand and charcoal (e.g. from firewood burned in a special way). These filters are sometimes used by soldiers and outdoor enthusiasts. Due to their low cost they can be made and used by anyone. The reliability of such systems is highly variable. Such filters can do little, if anything, to mitigate germs and other harmful constituents and can give a false sense of security that the water so produced is potable. Water processed through an improvised filter should undergo secondary processing, such as boiling, to render it safe for consumption. Prevention of water contamination Human water-borne diseases usually come from other humans, so human-derived materials (feces, medical waste, wash water, lawn chemicals, gasoline engines, garbage, etc.) should be kept far away from water sources. For example, human excreta should be buried well away (>) from water sources to reduce contamination. In some wilderness areas it is recommended that all waste be packed up and carted out to a properly designated disposal point. See also Ceramic water filter Desalination Self-supply of water and sanitation Solar water disinfection Traveler's diarrhea Water quality Wilderness acquired diarrhea References External links Household Water Treatment Knowledge on CAWST website Water Camping Drinking water Hiking Waterborne diseases Wilderness Camping equipment Hiking equipment Emergency services Water treatment
Portable water purification
Chemistry,Engineering,Environmental_science
4,929
3,062,637
https://en.wikipedia.org/wiki/Estimation%20of%20distribution%20algorithm
Estimation of distribution algorithms (EDAs), sometimes called probabilistic model-building genetic algorithms (PMBGAs), are stochastic optimization methods that guide the search for the optimum by building and sampling explicit probabilistic models of promising candidate solutions. Optimization is viewed as a series of incremental updates of a probabilistic model, starting with the model encoding an uninformative prior over admissible solutions and ending with the model that generates only the global optima. EDAs belong to the class of evolutionary algorithms. The main difference between EDAs and most conventional evolutionary algorithms is that evolutionary algorithms generate new candidate solutions using an implicit distribution defined by one or more variation operators, whereas EDAs use an explicit probability distribution encoded by a Bayesian network, a multivariate normal distribution, or another model class. As with other evolutionary algorithms, EDAs can be used to solve optimization problems defined over a number of representations, from vectors to LISP-style S-expressions, and the quality of candidate solutions is often evaluated using one or more objective functions. The general procedure of an EDA is outlined in the following:

t := 0
initialize model M(0) to represent uniform distribution over admissible solutions
while (termination criteria not met) do
    P := generate N > 0 candidate solutions by sampling M(t)
    F := evaluate all candidate solutions in P
    M(t + 1) := adjust_model(P, F, M(t))
    t := t + 1

Using explicit probabilistic models in optimization allowed EDAs to feasibly solve optimization problems that were notoriously difficult for most conventional evolutionary algorithms and traditional optimization techniques, such as problems with high levels of epistasis. A further advantage of EDAs is that these algorithms provide an optimization practitioner with a series of probabilistic models that reveal a lot of information about the problem being solved. This information can in turn be used to design problem-specific neighborhood operators for local search, to bias future runs of EDAs on a similar problem, or to create an efficient computational model of the problem. For example, if the population is represented by bit strings of length 4, the EDA can represent the population of promising solutions using a single vector of four probabilities (p1, p2, p3, p4), where each component of p defines the probability of that position being a 1. Using this probability vector it is possible to create an arbitrary number of candidate solutions.
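As a concrete illustration of this scheme, the following is a minimal, self-contained sketch of such a probability-vector EDA solving OneMax (maximize the number of 1 bits). It follows the univariate model-update described below: sample a population from the probability vector, keep the best half, and refit the per-position marginals. All parameter values here are illustrative choices, not values prescribed by the algorithm.

```python
import random

def onemax(x):
    # fitness: number of 1 bits in the candidate solution
    return sum(x)

def probability_vector_eda(n_bits=20, pop_size=50, generations=30, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits  # M(0): uniform distribution over bit strings
    pop = []
    for _ in range(generations):
        # sample N candidate solutions from the current model M(t)
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        # truncation selection: keep the better half
        pop.sort(key=onemax, reverse=True)
        selected = pop[: pop_size // 2]
        # model building: marginal frequency of 1s at each position
        p = [sum(x[i] for x in selected) / len(selected)
             for i in range(n_bits)]
    return max(pop, key=onemax)

best = probability_vector_eda()
print(best, onemax(best))
```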
Estimation of distribution algorithms (EDAs) This section describes the models built by some well-known EDAs of different levels of complexity. Throughout, a population P(t) at generation t, a selection operator S, a model-building operator α, and a sampling operator β are assumed. Univariate factorizations The most simple EDAs assume that decision variables are independent, i.e. p(X_1, X_2, …, X_N) = p(X_1)·p(X_2)⋯p(X_N). Therefore, univariate EDAs rely only on univariate statistics, and multivariate distributions must be factorized as the product of N univariate probability distributions. Such factorizations are used in many different EDAs; next we describe some of them. Univariate marginal distribution algorithm (UMDA) The UMDA is a simple EDA that uses an operator α_UMDA to estimate marginal probabilities from a selected population S(P(t)). By assuming S(P(t)) contains λ elements, α_UMDA produces the probabilities p_{t+1}(x_i) = Σ_{x ∈ S(P(t))} x_i / λ, for i = 1, …, N. Every UMDA step can then be described as P(t+1) = β(α_UMDA(S(P(t)))). Population-based incremental learning (PBIL) The PBIL represents the population implicitly by its model, from which it samples new solutions and updates the model. At each generation, λ individuals are sampled and μ < λ are selected. Such individuals are then used to update the model as follows: p_{t+1}(x_i) = (1 − γ)·p_t(x_i) + (γ/μ)·Σ_{x ∈ S(P(t))} x_i, where γ ∈ (0, 1] is a parameter defining the learning rate; a small value determines that the previous model should be only slightly modified by the new solutions sampled. PBIL can be described as P(t+1) = β(α_PBIL(S(P(t)))). Compact genetic algorithm (cGA) The cGA also relies on the implicit populations defined by univariate distributions. At each generation t, two individuals are sampled and then sorted in decreasing order of fitness, with u being the best and v being the worst solution. The cGA estimates univariate probabilities as follows: p_{t+1}(x_i) = p_t(x_i) + γ·(u_i − v_i), where γ is a constant defining the learning rate, usually set to the reciprocal of the (virtual) population size being simulated. The cGA can be defined as P(t+1) = β(α_cGA(S(P(t)))). Bivariate factorizations Although univariate models can be computed efficiently, in many cases they are not representative enough to provide better performance than GAs. In order to overcome such a drawback, the use of bivariate factorizations was proposed in the EDA community, in which dependencies between pairs of variables could be modeled. A bivariate factorization can be defined as p(X_1, …, X_N) = ∏_{i=1}^{N} p(X_i | π_i), where π_i contains at most one variable on which X_i is dependent. Bivariate and multivariate distributions are usually represented as probabilistic graphical models (graphs), in which edges denote statistical dependencies (or conditional probabilities) and vertices denote variables. To learn the structure of a PGM from data, linkage learning is employed. Mutual information maximizing input clustering (MIMIC) The MIMIC factorizes the joint probability distribution in a chain-like model representing successive dependencies between variables. It finds a permutation of the decision variables, i_1, i_2, …, i_N, that minimizes the Kullback–Leibler divergence in relation to the true probability distribution, and models the distribution p_{t+1}(X) = p(X_{i_1} | X_{i_2})·p(X_{i_2} | X_{i_3})⋯p(X_{i_{N−1}} | X_{i_N})·p(X_{i_N}). New solutions are sampled from the leftmost to the rightmost variable; the first is generated independently and the others according to the conditional probabilities. Since the estimated distribution must be recomputed each generation, MIMIC uses concrete populations in the following way: P(t+1) = β(α_MIMIC(S(P(t)))). Bivariate marginal distribution algorithm (BMDA) The BMDA factorizes the joint probability distribution in bivariate distributions. First, a randomly chosen variable is added as a node in a graph; the variable most dependent on one of those in the graph is then chosen from among those not yet in the graph; this procedure is repeated until no remaining variable depends on any variable in the graph (verified according to a threshold value). The resulting model is a forest with multiple trees rooted at a set R of nodes. Considering the non-root variables, BMDA estimates a factorized distribution in which the root variables can be sampled independently, whereas all the others must be conditioned on their parent variable: p(X) = ∏_{X_r ∈ R} p(X_r) · ∏_{X_i ∉ R} p(X_i | π_i). Each step of BMDA is defined as P(t+1) = β(α_BMDA(S(P(t)))). Multivariate factorizations The next stage of EDA development was the use of multivariate factorizations. In this case, the joint probability distribution is usually factorized in a number of components of limited size. 
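Bivariate and multivariate EDAs all face the preliminary question of which variables should be modeled together. A common building block for such linkage learning is the empirical mutual information between pairs of variables, computed from the selected population; the sketch below is a generic textbook estimator, not code taken from MIMIC, BMDA, or any other specific algorithm.

```python
from math import log2
from collections import Counter

def mutual_information(population, i, j):
    # empirical mutual information (in bits) between positions i and j,
    # estimated from a selected population of bit strings
    n = len(population)
    pij = Counter((x[i], x[j]) for x in population)
    pi = Counter(x[i] for x in population)
    pj = Counter(x[j] for x in population)
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# toy selected population in which positions 0 and 1 are perfectly linked
selected = [[0, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]]
print(mutual_information(selected, 0, 1))  # 1.0 bit: strong linkage
print(mutual_information(selected, 0, 2))  # 0.0 bits: independent
```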
The learning of PGMs encoding multivariate distributions is a computationally expensive task; therefore, it is usual for EDAs to estimate multivariate statistics from bivariate statistics. Such relaxation allows PGMs to be built in polynomial time in the number of variables; however, it also limits the generality of such EDAs. Extended compact genetic algorithm (eCGA) The ECGA was one of the first EDAs to employ multivariate factorizations, in which high-order dependencies among decision variables can be modeled. Its approach factorizes the joint probability distribution in the product of multivariate marginal distributions. Assume T is a set of subsets, in which every τ ∈ T is a linkage set containing K_τ variables. The factorized joint probability distribution is then p(X_1, …, X_N) = ∏_{τ ∈ T} p(X_τ). The ECGA popularized the term "linkage-learning" as denoting procedures that identify linkage sets. Its linkage-learning procedure relies on two measures: (1) the Model Complexity (MC) and (2) the Compressed Population Complexity (CPC). The MC quantifies the model representation size in terms of the number of bits required to store all the marginal probabilities, MC = log_2(N + 1)·Σ_{τ ∈ T} (2^{K_τ} − 1). The CPC, on the other hand, quantifies the data compression in terms of the entropy of the marginal distribution over all partitions, CPC = N·Σ_{τ ∈ T} H(X_τ), where N is the selected population size, K_τ is the number of decision variables in the linkage set τ, and H(X_τ) is the joint entropy of the variables in X_τ. The linkage-learning in ECGA works as follows: (1) insert each variable into its own cluster, (2) compute CCC = MC + CPC of the current linkage sets, (3) verify the change in CCC provided by joining pairs of clusters, and (4) effectively join those clusters with the highest CCC improvement. This procedure is repeated until no CCC improvements are possible and produces a linkage model T. The ECGA works with concrete populations; therefore, using the factorized distribution modeled by ECGA, it can be described as P(t+1) = β(α_eCGA(S(P(t)))). Bayesian optimization algorithm (BOA) The BOA uses Bayesian networks to model and sample promising solutions. Bayesian networks are directed acyclic graphs, with nodes representing variables and edges representing conditional probabilities between pairs of variables. The value of a variable X_i can be conditioned on a maximum of K other variables, defined in π_i. BOA builds a PGM encoding a factorized joint distribution, p(X_1, …, X_N) = ∏_{i=1}^{N} p(X_i | π_i), in which the parameters of the network, i.e. the conditional probabilities, are estimated from the selected population using the maximum likelihood estimator. The Bayesian network structure, on the other hand, must be built iteratively (linkage-learning). It starts with a network without edges and, at each step, adds the edge which better improves some scoring metric (e.g. Bayesian information criterion (BIC) or Bayesian–Dirichlet metric with likelihood equivalence (BDe)). The scoring metric evaluates the network structure according to its accuracy in modeling the selected population. From the built network, BOA samples new promising solutions as follows: (1) it computes the ancestral ordering for each variable, each node being preceded by its parents; (2) each variable is sampled conditionally on its parents. Given such a scenario, every BOA step can be defined as P(t+1) = β(α_BOA(S(P(t)))). Linkage-tree Genetic Algorithm (LTGA) The LTGA differs from most EDAs in the sense that it does not explicitly model a probability distribution but only a linkage model, called a linkage-tree. A linkage-tree is a set of linkage sets with no probability distribution associated; therefore, there is no way to sample new solutions directly from it. 
The linkage model is a linkage-tree, produced and stored as a family of sets (FOS). The linkage-tree learning procedure is a hierarchical clustering algorithm, which works as follows. At each step the two closest clusters are merged; this procedure repeats until only one cluster remains, and each subtree is stored as a subset of the FOS. The LTGA uses the FOS to guide an "optimal mixing" procedure, which resembles a recombination operator but only accepts improving moves. We denote it as R(x, y)[F], where the notation x[F] ← y[F] indicates the transfer of the genetic material indexed by the subset F from solution y to solution x. The procedure can be outlined as follows:

Input: a family of subsets FOS and a population P(t)
Output: a population P(t+1)
for each solution x in P(t) do
    for each subset F in FOS do
        choose a random donor y from P(t)
        x' := x
        x'[F] := y[F]
        if f(x') is no worse than f(x) then x := x'
return P(t+1)

The LTGA does not implement typical selection operators; instead, selection is performed during recombination. Similar ideas have usually been applied in local-search heuristics and, in this sense, the LTGA can be seen as a hybrid method. In summary, one step of the LTGA consists of learning the linkage tree from the current population P(t) and then applying the optimal mixing procedure to every solution, producing the next population P(t+1). Other Probability collectives (PC) Hill climbing with learning (HCwL) Estimation of multivariate normal algorithm (EMNA) Estimation of Bayesian networks algorithm (EBNA) Stochastic hill climbing with learning by vectors of normal distributions (SHCLVND) Real-coded PBIL Selfish Gene Algorithm (SG) Compact Differential Evolution (cDE) and its variants Compact Particle Swarm Optimization (cPSO) Compact Bacterial Foraging Optimization (cBFO) Probabilistic incremental program evolution (PIPE) Estimation of Gaussian networks algorithm (EGNA) Estimation multivariate normal algorithm with thresheld convergence Dependency Structure Matrix Genetic Algorithm (DSMGA) Related CMA-ES Cross-entropy method Ant colony optimization algorithms References Evolutionary computation Stochastic optimization
Estimation of distribution algorithm
Biology
2,427
22,732,649
https://en.wikipedia.org/wiki/Pisces%20V
Pisces V is a type of crewed submersible ocean exploration device, powered by battery, and capable of operating to depths of , a depth that is optimum for use in the sea waters around the Hawaiian Islands. It is used by scientists to explore the deep sea around the underwater banks in the main Hawaiian Islands, as well as the underwater features and seamounts in the Northwestern Hawaiian Islands, specifically around Kamaʻehuakanaloa Seamount (formerly Loihi). In 1973, Pisces V took part in the rescue of Roger Mallinson and Roger Chapman, who were trapped on the seabed in Pisces V's sister submersible Pisces III. In August 2002, Pisces V and her sister Pisces IV discovered a World War II Japanese midget submarine outside of Pearl Harbor which had been sunk by the destroyer in the first American shots fired in World War II. In 2011, marine scientists from the Hawaii Undersea Research Laboratory (HURL) celebrated the 1,000 dives made by Pisces V and Pisces IV. Uses The advantage of having two submersibles is that it allows preparation for an emergency. While one of the submersibles is conducting its dive, the other remains at readiness so that, should there be an emergency, it can be loaded on ship and hurried to the site of the problem. Such an emergency could include the submersible becoming tangled in fishing nets or entrapped in rocks or debris on the ocean floor. In such cases, the second heads to the rescue. There are also research experiments where it is advantageous to use the two vessels together. In August 2002, Pisces V and her sister vessel Pisces IV discovered a Japanese midget submarine; sunk on December 7, 1941 by the destroyer in the first American shots fired in World War II, the submarine was hit by a 4"/50 caliber gun shot and depth charged shortly before the attack on Pearl Harbor began. The submarine was found in of water about off the mouth of Pearl Harbor. This was the culmination of a 61-year search for the vessel and has been called "the most significant modern marine archeological find ever in the Pacific, second only to the finding of Titanic in the Atlantic". In 2003, Pisces V visited the Japanese midget submarine it had found near Pearl Harbor the year before. The U.S. State Department worked in conjunction with the Japanese Foreign Ministry to determine Japanese wishes regarding the fate of the midget submarine. The submersibles are used by HURL as teaching devices. In 2008, two members of the Tampa Bay Chapter of SCUBAnauts were invited to team with HURL and to visit the historic wreck of the Japanese submarine. One SCUBAnaut said as he stepped on Pisces V that "it looked and felt as if I were in a space shuttle preparing for lift-off". A mock-up of the control panel of Pisces V can be visited by the public at the Mokupāpapa Discovery Center in Hilo, Hawaii. On March 5, 2009, scientists discovered seven new species of bamboo coral, six of which may represent a new genus, an extraordinary finding in a genus so broad. They were able to find these specimens through the use of Pisces V, which allowed them to reach depths beyond those attained by scuba divers. They also discovered a giant sponge approximately three feet tall and three feet wide that scientists named the "cauldron sponge". Notes Further reading External links Submarines of Canada Pisces-class deep submergence vehicles Ships built in North Vancouver 1973 ships Submarines of the United States Hydrology Physical geography
Pisces V
Chemistry,Engineering,Environmental_science
738
758,690
https://en.wikipedia.org/wiki/List%20of%20physical%20quantities
This article consists of tables outlining a number of physical quantities. The first table lists the fundamental quantities used in the International System of Units to define the physical dimension of physical quantities for dimensional analysis. The second table lists the derived physical quantities. Derived quantities can be expressed in terms of the base quantities. Note that neither the names nor the symbols used for the physical quantities are international standards. Some quantities are known by several different names, such as the magnetic B-field, which is known as the magnetic flux density, the magnetic induction, or simply as the magnetic field, depending on the context. Similarly, surface tension can be denoted by either σ, γ or T. The table usually lists only the one name and symbol that is most commonly used. The final column lists some special properties that some of the quantities have, such as their scaling behavior (i.e. whether the quantity is intensive or extensive), their transformation properties (i.e. whether the quantity is a scalar, vector, matrix or tensor), and whether the quantity is conserved. Fundamental Scalar Vector Tensor See also List of photometric quantities List of radiometric quantities List of dimensionless quantities Quantities list Physical quantities
List of physical quantities
Physics,Chemistry,Mathematics
236
4,297,956
https://en.wikipedia.org/wiki/Clockwork%20universe
The clockwork universe is a concept which compares the universe to a mechanical clock. It continues ticking along, as a perfect machine, with its gears governed by the laws of physics, making every aspect of the machine predictable. It evolved during the Enlightenment in parallel with the emergence of Newton's laws governing motion and gravity. History This idea was very popular among deists during the Enlightenment, when Isaac Newton derived his laws of motion, and showed that alongside the law of universal gravitation, they could predict the behaviour of both terrestrial objects and the Solar System. A similar concept goes back to John of Sacrobosco's early 13th-century introduction to astronomy: On the Sphere of the World. In this widely popular medieval text, Sacrobosco spoke of the universe as the machina mundi, the machine of the world, suggesting that the reported eclipse of the Sun at the crucifixion of Jesus was a disturbance of the order of that machine. Responding to Gottfried Leibniz, a prominent supporter of the theory, in the Leibniz–Clarke correspondence, Samuel Clarke wrote: "The Notion of the World's being a great Machine, going on without the Interposition of God, as a Clock continues to go without the Assistance of a Clockmaker; is the Notion of Materialism and Fate, and tends, (under pretence of making God a Supra-mundane Intelligence,) to exclude Providence and God's Government in reality out of the World." In 2009, artist Tim Wetherell created a wall piece for Questacon (The National Science and Technology centre in Canberra, Australia) representing the concept of the clockwork universe. This steel artwork contains moving gears, a working clock, and a movie of the lunar terminator. See also Mechanical philosophy Determinism Eternalism (philosophy of time) History of science Orrery Philosophy of space and time Superdeterminism References Further reading E. J. Dijksterhuis (1961) The Mechanization of the World Picture, Oxford University Press Dolnick, Edward (2011) The Clockwork Universe: Isaac Newton, the Royal Society, and the Birth of the Modern World, HarperCollins. David Brewster (1850) "A Short Scheme of the True Religion", manuscript quoted in Memoirs of the Life, Writings and Discoveries of Sir Isaac Newton, cited in Dolnick, page 65. Anneliese Maier (1938) Die Mechanisierung des Weltbildes im 17. Jahrhundert Webb, R.K. ed. Knud Haakonssen (1996) "The Emergence of Rational Dissent." Enlightenment and Religion: Rational Dissent in Eighteenth-Century Britain, Cambridge University Press page 19. Westfall, Richard S. Science and Religion in Seventeenth-Century England. p. 201. Riskins, Jessica (2016) The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick, University of Chicago Press. External links "The Clockwork Universe". The Physical World. Ed. John Bolton, Alan Durrant, Robert Lambourne, Joy Manners, Andrew Norton. History of physics Isaac Newton Astronomical hypotheses Anthropic principle Physical cosmology Determinism
Clockwork universe
Physics,Astronomy
667
57,672,434
https://en.wikipedia.org/wiki/Ubrogepant
Ubrogepant, sold under the brand name Ubrelvy, is a medication used for the acute (immediate) treatment of migraine with or without aura (a sensory phenomenon or visual disturbance) in adults. It is not indicated for the preventive treatment of migraine. Ubrogepant is a small-molecule calcitonin gene-related peptide receptor antagonist. It is the first drug in this class approved for the acute treatment of migraine. The most common side effects are nausea, tiredness and dry mouth. Ubrogepant is contraindicated for co-administration with strong CYP3A4 inhibitors. History Ubrogepant, also known as MK-1602, was discovered by scientists at Merck. The effectiveness of ubrogepant for the acute treatment of migraine was demonstrated in two randomized, double-blind, placebo-controlled trials. In these studies, 1,439 adult patients with a history of migraine, with and without aura, received the approved doses of ubrogepant to treat an ongoing migraine. In both studies, the percentages of patients achieving pain relief two hours after treatment (defined as a reduction in headache severity from moderate or severe pain to no pain) and whose most bothersome migraine symptom (nausea, light sensitivity or sound sensitivity) stopped two hours after treatment were significantly greater among patients receiving ubrogepant (19–21% depending on the dose) compared to those receiving placebo (12%). Patients were allowed to take their usual acute treatment of migraine at least two hours after taking ubrogepant. 23% of patients were taking a preventive medication for migraine. In December 2019, the US Food and Drug Administration approved Ubrelvy produced by Allergan USA, Inc. for treatment of migraine after onset. References Drugs developed by AbbVie Antimigraine drugs Calcitonin gene-related peptide receptor antagonists Carboxamides Trifluoromethyl compounds Spiro compounds Piperidinones Pyridines
Ubrogepant
Chemistry
432
11,124,301
https://en.wikipedia.org/wiki/Mercury%20probe
The mercury probe is an electrical probing device used to make rapid, non-destructive contact to a sample for electrical characterization. Its primary application is semiconductor measurements, where otherwise time-consuming metallizations or photolithographic processing are required to make contact to a sample. These processing steps usually take hours and have to be avoided where possible to reduce device processing times. The mercury probe applies mercury contacts of well-defined areas to a flat sample. The nature of the mercury-sample contacts and the instrumentation connected to the mercury probe define the application. If the mercury-sample contact is ohmic (non-rectifying), then current-voltage instrumentation can be used to measure resistance, leakage currents, or current-voltage characteristics. Resistance can be measured on bulk samples or on thin films. The thin films can be composed of any material that does not react with mercury. Metals, semiconductors, oxides, and chemical coatings have all been measured successfully. Applications The mercury probe is a versatile tool for investigating the parameters of conducting, insulating and semiconducting materials. One of the first successful mercury probe applications was the characterization of epitaxial layers grown on silicon. It is critical to device performance to monitor the doping level and thickness of an epitaxial layer. Prior to the mercury probe, a sample had to undergo a metallization process, which could take hours. A mercury probe connected to capacitance-voltage doping profile instrumentation could measure an epitaxial layer as soon as it came out of the epitaxial reactor. The mercury probe formed a Schottky barrier of well-defined area that could be measured as easily as a conventional metallized contact. Another mercury probe application, popular for its speed, is oxide characterization. The mercury probe forms a gate contact and enables measurement of the capacitance-voltage or current-voltage parameters of the mercury-oxide-semiconductor structure. Using this device, material parameters such as permittivity, doping, oxide charge, and dielectric strength may be evaluated. The contact area of a mercury droplet resting on a semiconductor can be modified by electrowetting, meaning that accurate parameter extraction may need to take this effect into account. A mercury probe with concentric dot and ring contacts as well as a back contact extends mercury probe applications to silicon-on-insulator (SOI) structures, where a pseudo-MOSFET device is formed. This Hg-FET can be used to study mobility, interface trap density, and transconductance. The same mercury-sample structures can be measured with capacitance-voltage instrumentation to monitor the permittivity and thickness of dielectric materials. These measurements are a convenient gauge for the development of novel dielectrics of both low-k and high-k types. If the mercury-sample contact is rectifying, then a diode has formed and offers other measurement possibilities. Current-voltage measurements of the diode can reveal properties of the semiconductor such as breakdown voltage and lifetime. Capacitance-voltage measurements allow computation of the semiconductor doping level and uniformity. These measurements are successfully made on many materials including SiC, GaAs, GaN, InP, CdS, and InSb. References Semiconductor fabrication equipment
Mercury probe
Engineering
656
8,206,431
https://en.wikipedia.org/wiki/American%20Ephemeris%20and%20Nautical%20Almanac
The American Ephemeris and Nautical Almanac was published for the years 1855 to 1980, containing information necessary for astronomers, surveyors, and navigators. It was based on the original British publication, The Nautical Almanac and Astronomical Ephemeris, with which it merged to form The Astronomical Almanac, published from the year 1981 to the present. History Authorized by Congress in 1849, the American Nautical Almanac Office was founded and attached to the Department of the Navy with Charles Henry Davis as the first superintendent. The American Ephemeris and Nautical Almanac was first published in 1852, containing data for the year 1855. Its data was originally calculated by human "computers", such as Chauncey Wright and Joseph Winlock. Between 1855 and 1881 it had two parts: the first, for the meridian of Greenwich, contained data on the Sun, Moon, lunar distances, Venus, Mars, Jupiter, and Saturn, and was published separately as The American Nautical Almanac. The second part contained data for the meridian of Washington on the Sun, Moon, planets, principal stars, eclipses, occultations, and other phenomena. Beginning in 1882, data for Mercury, Uranus, and Neptune was added to the first part, with eclipses, occultations, and other phenomena forming a separate third part. In 1916, The American Nautical Almanac ceased to be a reprint of the first part of the American Ephemeris and Nautical Almanac, becoming a separately prepared volume for the navigator. In 1937, the American Ephemeris and Nautical Almanac was divided into seven parts, with data for the meridian of Washington substantially reduced, then eliminated beginning in 1951. Data for Pluto was added in 1950. Beginning in 1960, all parts except for a few introductory pages were jointly calculated and typeset by the American Nautical Almanac Office and Her Majesty's Nautical Almanac Office but published separately within The American Ephemeris and Nautical Almanac and The Astronomical Ephemeris, a new name for the old British title The Nautical Almanac and Astronomical Ephemeris. Beginning in 1981, the title The American Ephemeris and Nautical Almanac and the British title The Astronomical Ephemeris were completely merged under the single title The Astronomical Almanac. See also Astronomical Almanac (specific title) Astronomical Ephemeris (generic article) Almanac (generic article) Nautical almanac (generic article) The Nautical Almanac (familiar name for a specific series of (official British) publications which appeared under a variety of different full titles for the period 1767 to 1959, as well as being a specific official title (jointly UK/US-published) for 1960 onwards) References External links History of The Astronomical Almanac 1852 establishments in Washington, D.C. American non-fiction books Publications established in 1852 United States Naval Observatory Astronomical almanacs Publications disestablished in 1980 1980 disestablishments in the United States
American Ephemeris and Nautical Almanac
Astronomy
595
43,349,181
https://en.wikipedia.org/wiki/Civic%20technology
Civic technology, or civic tech, enhances the relationship between the people and government with software for communications, decision-making, service delivery, and political process. It includes information and communications technology supporting government with software built by community-led teams of volunteers, nonprofits, consultants, and private companies as well as embedded tech teams working within government. Definition Civic technology refers to the use of technology to enhance the relationship between citizens and their government. There are four different types of e-government services, and civic technology falls within the category of government-to-citizen (G2C). The other categories include government-to-business (G2B), government-to-government (G2G), and government-to-employees (G2E). A 2013 report from the Knight Foundation, an American non-profit, attempts to map different focuses within the civic technology space. It broadly categorizes civic technology projects into two categories: open government and community action. Citizens are also now given access to their representatives through social media. They are able to express their concerns directly to government officials through sites like Twitter and Facebook. In some past cases, online voting has even been offered as a polling option in local elections and has seen vastly increased turnouts; an Arizona election in 2000, for example, saw a turnout double that of the previous election. It has been asserted, though, that civic technology in government serves well as a management technique but falls short of providing fair democratic representation. Social media is also becoming a growing aspect of government, furthering communication between the government and its citizenry and promoting greater transparency within governmental sectors. This innovation is facilitating a change towards a more progressive and open government, based on civic engagement and technology for the people. As a communication platform, social media enables the government to inform constituents and citizens about legislative processes and what is occurring in Congress, addressing citizens' concerns with government procedures. The definition of what constitutes civic technology is contested to a certain extent, especially with regards to companies engaged in the sharing economy, such as Uber, Lyft, and Airbnb. For example, Airbnb's ability to provide New York residents with housing during the aftermath of Superstorm Sandy could be considered a form of civic technology. However, Nathaniel Heller, managing director of the Research for Development Institute's Governance Program, contends that for-profit platforms fall definitively outside the scope of civic technology, saying that "while citizen-to-citizen sharing is indeed involved, the mission of these companies is focused on maximizing profit for their investors, not any sort of experiment in building social capital." From a goal perspective, civic technology can be understood as "the use of technology for the public good". Microsoft's Technology & Civic Engagement Team have attempted to produce a precise taxonomy of civic technology through a bottom-up approach. They inventoried the existing initiatives and classified them according to: their functions, the social processes they involved, their users and customers, the degree of change they sought, and the depth of the technology.
Microsoft's Civic Graph is guiding the developing network of civic innovators, expanding "its visualizations of funding, data usage, collaboration and even influence". It is a new tool that opens up access to track the world of civic technology, improving the credibility and progress of this sector. This graph will give governmental institutions and corporations more opportunities to discover these innovators and work with them to progress society towards the future of technology and civic engagement. To create an informed and insightful community, there needs to be a sense of civic engagement in that community, with information shared through civic technology platforms and applications. "Community engagement applied to public-interest technology requires that members of a community participate." Communal participation in civic tech platforms enables more informed residents to convene in a more engaged, unified community that seeks to share information, politically and socially, for the benefit of its citizenry and their concerns. This work resulted in the Civic Tech Field Guide, a free, crowdsourced collection of civic technology tools and projects. Individuals from over 100 countries have contributed to the documentation of technology, resources, funding and general information concerning "tech for social good". Technology that is designed to benefit the citizenry places governments under pressure "to change and innovate the way in which their bureaucracies relate to citizens". E-government initiatives have been established and supported in order to strengthen the democratic values of governmental institutions, which can include transparency in government, along with improving the efficiency of legislative processes to make the government more accountable and responsive to citizens' concerns. These initiatives further civic engagement within the political spectrum, supporting greater direct representation and a more democratic political system. Civic hacking refers to problem-solving by programmers, designers, data scientists, communicators, organizers, entrepreneurs, and government employees. A civic hacker may work autonomously and independently from the government but may still coordinate or collaborate with it. For example, in 2008, civic hacker William Entriken created an open-source web application that publicly displayed a comparison of the actual arrival times of Philadelphia’s local SEPTA trains to their scheduled times. It also automatically sent messages to SEPTA to recommend updates to the train schedule. SEPTA’s response indicated interest in coordinating with this civic hacker directly to improve the application. Some projects are led by nonprofits, such as Code for America and mySociety, often involving paid staff and contributions from volunteers. As the field of civic technology advances, it seems that apps and handheld devices will become a key focus for development as more companies and municipalities reach out to developers to help with specific issues. Apps are being used in conjunction with handheld devices to simplify tasks such as communication, data tracking, and safety. The most cost-effective way for citizens to get help and information is through neighbors and others around them. By linking people through apps and websites that foster conversation and promote civil service, cities have found an inexpensive way to provide services to their residents.
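The internals of the Entriken application described above are not documented here; the following is a hypothetical sketch of the core comparison such a schedule-monitoring tool performs, matching observed arrival times against the timetable and flagging consistently late services. All train identifiers and times are invented.

```python
# Hypothetical sketch of a schedule-vs-reality comparison like the one the
# SEPTA application performed. All identifiers and times are invented.
from datetime import datetime, timedelta

scheduled = {"Train 512": "08:15", "Train 514": "08:45"}
observed = {
    "Train 512": ["08:22", "08:19", "08:25"],
    "Train 514": ["08:46", "08:44", "08:47"],
}

def average_delay(sched: str, actuals: list) -> timedelta:
    """Mean difference between observed and scheduled arrival times."""
    s = datetime.strptime(sched, "%H:%M")
    deltas = [datetime.strptime(a, "%H:%M") - s for a in actuals]
    return sum(deltas, timedelta()) / len(deltas)

for train, sched in scheduled.items():
    minutes = average_delay(sched, observed[train]).total_seconds() / 60
    if minutes > 5:
        # A real tool might auto-draft a schedule-change recommendation here.
        print(f"{train}: avg {minutes:.1f} min late; recommend timetable update")
    else:
        print(f"{train}: avg {minutes:.1f} min late; within tolerance")
```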
Civic technology represents "just a piece of the $25.5 billion that government spends on external information technology (IT)," indicating that this sector will likely grow, fostering more innovation in both public and private sectors and furthering civic engagement within these platforms. Worldwide A worldwide organization that supports civic tech is the Open Government Partnership (OGP). It "is a multilateral initiative that aims to secure concrete commitments from governments to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance". Created in 2011 by eight founding governments (Brazil, Indonesia, Mexico, Norway, the Philippines, South Africa, the United Kingdom and the United States), the OGP gathers every year for a summit. Countries involved are located mainly in America (North and South), Europe and the Asia-Pacific region (Indonesia, Australia, South Korea). Only a few African countries are part of the OGP, though South Africa is one of the founding countries. Technological progress is widespread throughout the nations of the world, but countries differ in how rapidly they are progressing and in the techniques they use to adopt new technology. How well countries are able to use information depends on how committed they are to integrating technology into the lives of their citizens and businesses. Local and national governments are directing tens of billions of dollars towards information technology to improve how it serves both the people and the governments themselves. As more governments attain a grasp of these technologies, the way is being paved towards more progressive and democratic political systems that serve future society and the citizens of these nations. Africa Burkina Faso Government-led initiatives The government of Burkina Faso has a website portal offering citizens online information about the government structure, their constitution, and laws. Kenya Citizen-led initiatives Launched in Kenya in 2014, "MajiVoice" is a joint initiative by the Water Services Regulatory Board (WASREB, the water sector regulator in Kenya) and the World Bank's Water and Sanitation Program. As opposed to walk-in complaint centers, the initiative enables Kenyan citizens to report complaints regarding water services via multiple channels of technology. The platform allows for communication between citizens and water service providers with the intention of improving service delivery in impoverished areas and user satisfaction. Users are given four options to report their water complaints. They can dial a number and report a complaint, send a text message (SMS) through their cell phone, or log in to an online portal through a web browser on their phone or their laptop. One evaluation highlights the citizen engagement achieved after its implementation, with complaints rising from 400 a month to 4,000, and resolution rates from 46 percent to 94 percent. South Africa Government-led initiatives The South African government has a website portal for citizens called www.gov.za, created by the Centre for Public Service Innovation (CPSI) in partnership with the Department of Public Service and Administration and the State Information Technology Agency. The government portal allows citizens to interact with their government and provide feedback, request forms online, and access laws and contact information for lawmakers.
GovChat is the official citizen engagement platform for the South African Government. Accessible via WhatsApp, Facebook Messenger, SMS and USSD, it offers information to citizens about a wide array of services provided by the Government. Citizen-led initiatives Grassroot is a technology platform that supports community organizers in mobilizing citizens, built for low-bandwidth, low-data settings and allowing smart messaging through text message. Research by the MIT Governance Lab suggests that Grassroot can have important effects on the leadership capacity of community leaders, an effect that is most likely to be achieved through careful design, behavioral incentives, active coaching and iteration. Uganda Government-led initiatives The Ugandan government has a website portal created for citizens called Parliament Portal, which gives citizens online access to laws, their constitution, and election-related news. Citizen-led initiatives U-Report, a mobile platform introduced by UNICEF Uganda in 2011, is an initiative that runs large-scale polls with Ugandan youth on a wide range of issues, ranging from safety to access to education to inflation to early marriage. The goal of the initiative was to have Ugandan youth play a role in civic engagement within the context of local issues. U-Report is still active (as of April 2018), with over 240,000 users across Uganda. Support for the initiative came primarily from the government, NGOs, youth organizations, faith-based organizations, and private companies. Users sign up for the program for free by sending a text on their phone, then every week "U-Reporters" answer a question regarding a public issue. Poll results are published in public media outlets such as newspapers, radio, etc. UNICEF takes these responses and provides members of parliament (MPs) a weekly review of these results, acting as a bridge between government and Ugandan youth. Asia Taiwan Taiwan is highly ranked internationally for its technological innovations including open data, digital inclusivity, and widespread internet participation. As of 2019, approximately 87% of Taiwan's citizens over 12 years old had access to the internet. The widespread use of the internet has facilitated online political participation by giving citizens a platform to express their political opinions. Through the internet, Taiwanese citizens can directly contact political figures through online channels and publicly voice their political beliefs. New innovations have continued to be made in Taiwan that foster more political participation. The online platform called "Join," for example, was created in 2015 to give Taiwanese citizens a way to discuss, review, and propose governmental policy. Overall, the development of the internet and the emergence of new technologies in Taiwan have been shown to increase political participation among its citizens. Government-led initiatives Taiwan's Digital Minister Audrey Tang has made strides to increase communication and collaboration between the government and the general public. Networks of Participation Officers have been established in each ministry to jointly create new governmental policies between the public sector, citizens, and other government departments through collaborative meetings. Taiwan has taken on a collaborative approach to civic technology as a way to encourage increased participation from the public.
New governmental policies in Taiwan have helped foster technological advancement, such as the Financial Technology Development and Innovative Experimentation Act, passed in 2017, which created a Regulatory Sandbox platform to support the development of FinTech in Taiwan. This sandbox was created to support industry creativity by enabling entrepreneurs and companies to experiment freely with new technologies without legal constraints for a year. Citizen-led initiatives The g0v movement was created in 2012 with the goal of engaging more citizens in public affairs. It is a grassroots, decentralized civic tech community composed of coders, designers, NGO workers, civil servants and citizens, designed to increase the transparency of government information. All of g0v's projects are open-source and created by citizens. The g0v community has participated in a variety of social movements, including the Sunflower Student Movement, where it provided a crowdsourcing platform, and the Hong Kong Umbrella Movement, where it provided live broadcasting and a logistics system. The vTaiwan platform (v for virtual) was created initially by members of g0v and later developed in collaboration with Taiwan's government. It is a digital space where participants can discuss controversial topics. It uses a conversation tool called pol.is that leverages machine learning to scale online discussion. Civic technology in Taiwan was a key component of the country's successful response to the COVID-19 epidemic. Partnering with the Taiwanese government, the civic tech community used open data to create maps available to citizens that visualized the availability of masks, making the distribution of PPE more efficient. Big data analytics and QR code scanning were also used in Taiwan's response to the pandemic, enabling the government to send out real-time alerts during clinical visits and track citizens' travel history and health symptoms. The response to the COVID-19 pandemic in Taiwan is representative of the country's shift towards a 'techno-democratic statecraft' and positioned it as a new leader in the international sphere for digital infrastructure. Taiwan's handling of and early response to the epidemic earned it international praise, with the country having significantly fewer COVID-19 cases than its neighbors. Japan In Japan, the Civic Tech movement has been rapidly growing since around 2013. Japan's civic tech initiatives have been primarily citizen-led, but more recently, Japan has taken on government-led initiatives as well. Citizen-led initiatives The purpose of civic tech initiatives is to educate the population to use technology as a democratization tool and to access public information. Although the rapid growth of the civic tech movement in Japan started around 2013, the movement first came about in 2011 after the earthquake, tsunami and nuclear meltdowns that occurred in the Tōhoku region. After the Fukushima disaster, the citizen-led initiative Safecast, which allows citizens to collect and distribute radiation data, was created. The mission of the citizen-led initiative Code for All is to make data more accessible to the public and to encourage the use of technology for the democratization of governance. The Code for Japan chapter is one of several chapters started by Code for All. Although Code for Japan is a citizen-led initiative, it also works closely with the government; Naoki Ota, a policy advisor at the Japanese Ministry of Internal Affairs, is a promoter of Code for Japan's civic tech projects.
In light of the COVID-19 pandemic, Code for Japan also developed stopcovid19.metro.tokyo.lg.jp for the Tokyo Metropolitan Government, which informs the public about the number of coronavirus cases and reductions in metropolitan subway usage. A different citizen-led project, led by JP-Mirai, is working to release an app that allows migrant workers to file complaints and address issues regarding items like visas and taxation. The app currently remains unnamed. Government-led initiatives While civic technology initiatives in Japan had mostly been citizen-led, the onset of the coronavirus pandemic encouraged the Japanese government to transition to digitization, as formerly in-person practices moved to the digital space because of the coronavirus. The government plans to focus on the digitization of its functions: the implementation of more sophisticated systems in the central and local governments in order to increase the security of private and personal information, and the transition from the primary use of Hanko (a seal used in lieu of a signature on printed documents) to digital verifications and documents in order to increase efficiency. The Tokyo Metropolitan Government has also made strides in light of the pandemic. Through the use of Creative Commons licensing, a copyright framework that allows flexible content distribution, and the open-source development platform GitHub, the Tokyo Metropolitan Government has allowed other collaborators to add to the data and code of the project created by Code for Japan. Pakistan Pakistan's civic tech landscape is evolving rapidly, driven by both citizen-led and government-led initiatives. Civic technology in Pakistan is being used to address various socio-economic challenges, enhance governance, and improve public service delivery. The country is experiencing a growing trend of tech-driven solutions aimed at fostering transparency, accountability, and citizen engagement. Key areas of focus include open data initiatives, digital platforms for citizen services, and tools for civic participation. Citizen-led initiatives Code for Pakistan (CfP), founded in 2013, is a civic technology non-profit organization focused on bridging the gap between government and citizens by harnessing technology for civic and social good. CfP is an executive committee member of Code for All. CfP collaborates with government bodies to develop digital solutions to civic-facing problems, and it provides ways for people in Pakistan to be more civically engaged. Notable projects include Civic Innovation Fellowship Programs with the governments of Khyber Pakhtunkhwa and Gilgit-Baltistan to create human-centered technology solutions for public services, and various open data initiatives that promote transparency and public participation. This includes creating the Khyber Pakhtunkhwa Open Data Portal in partnership with the Khyber Pakhtunkhwa government, and publishing Pakistan's first Open Data Playbook. CfP regularly organizes civic hackathons to address civic issues within Pakistan with the help of community members. Shehri Pakistan is dedicated to promoting urban planning and civic awareness around environmental issues. It runs projects that focus on environmental and heritage conservation through public engagement and advocacy. Government-led initiatives The Pakistan Citizen's Portal (PCP) is a mobile application launched by the Government of Pakistan to facilitate citizen feedback and resolve public grievances.
It features a grievance redressal system that allows citizens to lodge complaints regarding various government services and a performance monitoring system to track the performance of government officials in addressing complaints. Code for Pakistan assisted the government in the development of this application. The Punjab Information Technology Board (PITB) is an autonomous body set up by the Government of Punjab to promote IT in governance. Its key projects include e-Rozgaar, which provides digital skills training to youth for freelance work, and the School Information System, which digitizes school records and improves education management. The Khyber Pakhtunkhwa Information Technology Board (KPITB) is dedicated to the development of the IT sector in Khyber Pakhtunkhwa. Its major projects include Durshal, a network of co-working spaces and innovation labs across KP to support tech entrepreneurs, and Citizen Facilitation Centers, which provide one-stop digital services to citizens. Pakistan's civic tech ecosystem is characterized by a collaborative approach between citizens, tech communities, and government bodies. The ongoing efforts in this sector aim to empower citizens, improve governance, and address critical societal issues through innovative technological solutions. Nepal Civic technology in Nepal is growing and has thus far been utilized for tasks like mapping, migrant worker support, digital literacy, and open data education. Citizen-led initiatives Kathmandu Living Labs (KLL), founded in 2013, is a civic technology company based in Nepal that works actively to train residents in Nepal and other Asian countries in mapping their communities via OpenStreetMap (OSM). During the 2015 earthquake in Nepal (magnitude of 7.3), organizations responsible for aid relief and reconstruction used OSM to navigate the disaster. In 2016, a new migration tool called Shuvayatra (Safe Journey) was launched for the migrant workers of Nepal. The Asia Foundation worked with the Non-Resident Nepali Association (NRNA) and the software firm Young Innovations to develop this mobile app, which provides Nepali migrant workers with financial, education and training resources, as well as reliable employment services. The technology was developed in response to the often exploitative promises of working abroad as a migrant worker. In its beginnings, Code for Nepal, a non-profit organization that began in the United States, provided workshops in digital literacy for women in Kathmandu. Since then, the organization has evolved to launch open data and civic tech products, as well as organizing conferences and scholarships for young men and women. Another civic tech non-profit called Open Knowledge Nepal has also been working to make data open and accessible to Nepali residents. Oceania Australia Citizen-led initiatives In Australia, a platform and proposed political party called MiVote has a mobile app for citizens to learn about policy and cast their vote for the policies they support. MiVote politicians elected to office would then vote in support of the majority position of the people using the app. Snap Send Solve is a mobile app for citizens to report issues to local councils and other authorities quickly and easily. In 2020, 430,000 reports were sent via the app. A January 2021 report in Melbourne's Herald Sun noted an increased number of reports for dumped rubbish.
Europe Denmark Government-led initiatives In 2002, MindLab, a public sector innovation and service design group, was established by the Danish ministries of Business and Growth, Employment, and Children and Education. MindLab was one of the world's first public sector design innovation labs, and its work inspired the proliferation of similar labs and user-centered design methodologies deployed in many countries worldwide. The design methods used at MindLab typically follow an iterative approach of rapid prototyping and testing, evolving not just government projects but also government organizational structure, using ethnographically inspired user research, creative ideation processes, and visualization and modeling of service prototypes. In Denmark, design within the public sector has been applied to a variety of projects, including rethinking Copenhagen's waste management, improving social interactions between convicts and guards in Danish prisons, transforming services in Odense for mentally disabled adults, and more. Estonia Government-led initiatives The process of digitalization in Estonia began in 2002, when local and central governments began building an infrastructure that allowed autonomous and interconnected data. That same year, Estonia launched a national ID system that was fully digitalized and paired with digital signatures. The national ID system allowed Estonians to pay taxes online, vote online, do online banking, access their health care records, and process 99% of Estonian public services online 24 hours a day, seven days a week. Estonia is well known internationally for its e-voting system. Internet voting (where citizens vote remotely with their own equipment) was piloted in Estonia in 2005 and has been in use since then. As of 2016, Estonia's Internet voting system had been used in three local elections, two European Parliament elections, and three parliamentary elections. In 2007, Estonia faced a large, politically motivated cyber attack that disrupted much of the country's digital infrastructure, and as a result became the home of the NATO Cooperative Cyber Defence Centre of Excellence. The National Security Response was updated and approved in 2010 in response to the cyber attacks, and recognizes the growing threat of cyber crime in Estonia. In 2014, Estonia launched e-Residency, which allows users to create and manage a location-independent business online from anywhere in the world. That was followed by an immigration visa for digital nomads, a novel way of approaching immigration policy. Citizen-led initiatives Several citizen-designed e-democracy platforms have been launched in Estonia. In 2013, the online platform People's Assembly (Rahvakogu) was launched for crowdsourcing ideas and proposals to amend Estonia's electoral laws, political party law, and other issues related to democracy. Citizen OS is another e-democracy platform and is free and open source. The platform was created with the goal of enabling Estonian citizens to engage in collaborative decision-making, encouraging users to initiate petitions and participate in meaningful discussion on issues in society. France The most dynamic French city regarding civic tech is Paris, with many initiatives moving into the Sentier, a neighborhood known for being a tech hub. According to Le Monde, French civic tech is "already a reality" but lacks the investment to scale up.
Government-led initiatives In France, public data are made available on data.gouv.fr by the Etalab mission, located under the authority of the Prime Minister. Government agencies are also leading large citizen consultations through the Conseil national du numérique (National Digital Council), for example with the law on the digital republic (Projet de loi pour une république numérique). Citizen-led initiatives The French citizen community for civic tech is gathered in the collective Démocratie ouverte (Open Democracy). The main purpose of this collective is to enhance democracy by increasing citizen power, improving collective decision-making, and updating the political system. Démocratie ouverte gathers many projects focused on understanding politics, renewing institutions, participating in democracy, and public action. Several open-source, non-profit web platforms have been launched nationwide to support citizens' direct involvement: Communecter.org and Demodyne.org, as well as Democracy OS France (derived from the Argentinian initiative). LaPrimaire.org organizes open primaries to allow the French to choose the candidates they wish to run in public elections. Iceland The Icelandic constitutional reform of 2010–13 instituted a process for reviewing and redrafting the constitution after the 2008 financial crisis, using social media to gather feedback on twelve successive drafts. Beginning in October 2011, a Citizens Foundation platform called Betri Reykjavik was implemented for citizens to inform each other and vote on issues. Each month the city council formally evaluates the top proposals before issuing an official response to each participant. As of 2017, the number of proposals approved by the city council had reached 769. The Pirate Party (Iceland) uses the crowdsourcing platform Píratar for members to create party policies. Italy Citizen-led initiatives A consortium formed by TOP-IX, FBK and RENA created the Italian civic tech school; its first edition was held in May 2016 in Turin. The Five Star Movement, an Italian political party, has a tool called Rousseau which gives members a way to communicate with their representatives. Spain The Madrid City Council has a department of Citizen Participation that facilitates a platform called Decide Madrid for registered users to discuss topics with others in the city, propose actions for the City Council, and submit ideas for how to spend a portion of the budget on projects voted on through participatory budgeting. Podemos (a Spanish political party) uses a subreddit called Plaza Podemos where anybody can propose and vote on ideas. Sweden The City of Stockholm has a make-a-suggestion page on stockholm.se, also available as an app, allowing citizens to report any ideas for improvement in the city along with a photo and GPS coordinates. Each suggestion received is sent to the appropriate office, which can place a work order. During 2016, one hundred thousand requests were recorded. This e-service began in September 2013. The city government of Gothenburg has an online participatory voting system, open for every citizen to propose changes and solutions. When a proposal receives more than 200 votes, it is delivered to the relevant political committee. United Kingdom Government-led initiatives Documents from the British government in 2007 and 2008 explored the concept of "user-driven public services" and scenarios of highly personalized public services.
The documents proposed a new view on the role of service providers and users in the development of new and highly customized public services, utilizing user involvement. This view has been explored through an initiative in the UK. Under the influence of the European Union, the possibilities of service design for the public sector are being researched, picked up, and promoted in countries such as Belgium. Care Opinion was set up in 2005 to strengthen the voice of patients in the NHS. The Behavioural Insights Team (BIT), also known as the Nudge Unit, was founded in 2010 as part of the British Cabinet Office, in order to apply nudge theory to improve British government policy and services and to save money. In 2014, BIT became a decentralized, semi-privatized company, with Nesta (a charity), BIT employees, and the British government each owning a third of the new business. That same year a nudge unit, referred to as the 'US Nudge Unit', was added to the United States government under President Obama, working within the White House Office of Science and Technology Policy. Citizen-led initiatives FixMyStreet.com is a website and app developed by mySociety, a UK-based civic technology company that works to make online democracy tools for British citizens. FixMyStreet allows citizens in the United Kingdom to report public infrastructure issues (such as potholes, broken streetlights, etc.) to the proper local authority. FixMyStreet became an inspiration to many countries around the world, which followed suit in using civic technology to improve public infrastructure. The website was funded by the Department for Constitutional Affairs Innovation Fund and created by mySociety. Along with the website itself, mySociety released the FixMyStreet Platform, a free and open-source software framework that allows users to create their own website for reporting street problems. mySociety has many different tools, such as parliamentary monitoring ones, that work in many countries for different types of governance. When such tools are integrated into government systems, citizens can not only understand the inner workings of their now transparent government, but also have the means to "exert influence over the people in power". Newspeak House is a community space and venue focused on building a community of civic and political technology practitioners in the United Kingdom. Spacehive is a crowdfunding platform for civic improvement projects that allows citizens and local groups to propose project ideas such as improving a local park or starting a street market. Projects are then funded by a mix of citizens, companies and government bodies. The platform is used by several councils, including the Mayor of London, to co-fund projects. Democracy Club is a community interest company, founded in 2009 to provide British voters with easy access to candidate lists in upcoming elections. Democracy Club uses a network of volunteers to crowdsource information about candidates, which is then presented to voters via a postcode search on the website whocanivotefor.co.uk. Democracy Club also works with the Electoral Commission to provide data for a national polling station finder at wheredoivote.co.uk and on the commission's own website. Ukraine Government-led initiatives In Ukraine, a major civic tech movement started with open data reform in 2014. Today, public data are available on data.gov.ua, the national open data portal.
Citizen-led initiatives Some widely used Ukrainian civic tech projects are the donor recruitment platform DonorUA; Open Data Bot, a monitoring service for Ukrainian companies' data and court registers; and the participatory budgeting platform "Громадський проект" ("Civic Project"). The latter accounts for over 3 million users. In 2017, to foster the growth of civic tech initiatives, the Ukrainian NGO SocialBoost launched 1991 Civic Tech Center, a dedicated community space in the country's capital, Kyiv. The space opened following a $480,000 grant from Omidyar Network, the philanthropic investment firm established by eBay founder Pierre Omidyar. North America Canada Government-led initiatives The Canadian Digital Service (CDS) was launched in 2017 as part of an attempt to bring better IT to the Canadian government. The CDS was established within the Treasury Board of Canada, the Canadian agency that oversees spending within departments and the operations of the public service. Scott Brison, the president of the Canadian Treasury Board, launched CDS and was Canada's first minister of Digital Government. Citizen-led initiatives As in other countries, the Canadian civic technology movement is home to several organizations. Code for Canada is a non-profit group, following somewhat the model of Code for America. Several cities or regions host civic technology groups with regular meetings (in order from west to east): Vancouver, Calgary, Edmonton, Waterloo Region, Toronto, Ottawa, Fredericton, Saint John, and Halifax. United States Government-led initiatives The Clinton, Bush, and Obama administrations sought initiatives to further openness of the government, through either increased use of technology in political institutions or efficient ways to further civic engagement. The Obama administration pursued an Open Government Initiative based on principles of transparency and civic engagement. This strategy has paved the way for increased governmental transparency within other nations, improving democracy for citizens' benefit and allowing for greater participation in politics from a citizen's perspective. During his run for president, Obama was "tied directly to the extensive use of social media by the campaign". According to a study conducted by the International Data Corporation (IDC), an estimated $6.4 billion would be spent on civic technology in 2015, out of approximately $25.5 billion that governments in the United States would spend on external-facing technology projects. A Knight Foundation survey of the civic technology field found that the number of civic technology companies grew by roughly 23% annually between 2008 and 2013. Departments like 18F and the United States Digital Service have also been highlighted as examples of government investment in Civic Technology. Inspired by an appetite to build government technology with new processes, new digital agencies started the Digital Services Coalition to help build on the momentum. Citizen-led initiatives Civic technology is built by a variety of companies, organizations and volunteer groups. One prominent example is Code for America, a not-for-profit based in San Francisco, working toward addressing the gap between the government and citizens. College students from Harvard University created the national non-profit Coding it Forward that creates data science and technology internships for undergraduate and graduate students in United States federal agencies. Another example of a civic technology organization is the Chi Hack Night, based in Chicago.
The Chi Hack Night is a weekly, volunteer-run event for building, sharing and learning about civic technology. Civic Hall is a coworking and event space in New York City for people who want to contribute to civic-minded projects using technology. OpenGov creates software designed to enable public agencies to make data-driven decisions, improve budgeting and planning, and inform elected officials and citizens. OneBusAway, a mobile app that displays real-time transit info, exemplifies the open data use of civic technology. It is maintained by volunteers and has the civic utility of helping people navigate their way through cities. It follows the idea that technology can be a tool through which government can act as a social equalizer. Princeton University Professor Andrew Appel set out to prove how easy it was to hack into a voting machine. On February 3, 2007, he and a graduate student, Alex Halderman, purchased a voting machine, and Halderman picked the lock in 7 seconds. They removed the 4 ROM chips and replaced them with versions containing modified firmware that could throw off the machine's results, subtly altering the tally of votes without giving the voter any hint. It took less than 7 minutes to complete the process. In September 2016, Appel wrote a testimony for the House Subcommittee on Information Technology hearing on "Cybersecurity: Ensuring the Integrity of the Ballot Box", suggesting that Congress eliminate touchscreen voting machines after the 2016 election, and that it require every election to be subject to sensible auditing, to ensure that the systems are functioning properly and to prove to the American people that their votes are counted as cast. Mexico Government-led initiatives Within the Mexican president's office, there is a national digital strategy coordinator who works on Mexico's national digital strategy. The office has created the gob.mx portal, a website designed for Mexican citizens to engage with their government, as well as a system to share open government data. According to McKinsey & Company, in a 2018 survey Mexico had the worst-rated citizen experience (4.4 out of 10) for convenience and accessibility of Mexican government services, of the group of countries surveyed (Canada, France, Germany, Mexico, the United Kingdom, and the United States). Citizen-led initiatives Arena Electoral was an online platform created by Fundación Ethos during the Mexican electoral process of 2012 to promote responsible voting. An online simulation was created in which each of the four presidential candidates in that election cycle was given policy issues based on the Mexican national agenda to propose solutions for. Once each candidate gave their solutions, the platform published them on its website and left it to Mexican citizens to vote for the best policy. Latin America Argentina Partido de la Red (Net Party) is an Argentinean political party using the DemocracyOS open-source software with the goal of electing representatives who vote according to what citizens decide online. Caminos de la Villa is a citizen action platform where citizens can monitor the urbanization of the City of Buenos Aires. Users are able to view detailed information about the work the government is doing in the neighborhoods. Additionally, users are able to download documents, along with photos of what the government is doing. Users can also report issues with public services to the platform.
Bolivia Observatorio de Justicia Fiscal desde las Mujeres (English: The Women's Fiscal Justice Observatory) is an organization that reviews the fiscal policies of the country. It does this by using a system of the same name to process information on the country's spending with a gender focus, with the aim of achieving better equality in public expenditure. Brazil NOSSAS, a Brazilian organization that helps citizens and groups express their struggles and make change, was founded in 2011. It has also made its own tech platform, BONDE, which other organizations can use to make their own websites and tools to spread their reach. Apart from BONDE, NOSSAS also provides support and programs to those who want to become activists. Chile CitizenLab is a civic technology company that works with local governments in many countries. CitizenLab works to keep citizens better informed about democracy so they can take part in public decisions. In 2019, the company expanded to Chile and formed teams to support local governments with engagement, budgeting, planning, and more. Colombia Founded in 2016, Movilizatorio was created to encourage and promote citizen participation in democracy. Movilizatorio works on many projects to address various issues in the country, including political, social, behavioral, and cultural issues. One of its projects rallied a local community after an elementary school had not started classes; shortly after the movement started, signatures were gathered and presented to the Secretary of Education, and classes began. Panama Fundación para el Desarrollo de la Libertad Ciudadana (English: Foundation for the Development of Citizen Freedom) is an organization founded in 1995. Its main goal is to improve democracy in Panama. It works towards this by promoting transparency in government to prevent corruption and by engaging with citizens to increase democratic participation. Paraguay TEDIC is an organization founded in 2012 that defends the digital rights of citizens. TEDIC researches information on cybersecurity, copyright, artificial intelligence, and more. It also promotes and develops its own software that people can use to make social change. It has worked on topics such as personal data, freedom of expression, gender and digital inclusion, and more. Uruguay A Tu Servicio, a civic tech platform, informs users and citizens about the country's public health services so that they are able to make informed decisions on medical providers. The platform was founded in 2015. It features a list users can use to compare two different health care providers. The data includes wait times, prices, number of users, workers, and more. DATA Uruguay is an organization that works on issues surrounding data. It works with other organizations and communities to create tools with open data. DATA Uruguay promotes open data and transparency of public information. Venezuela During the COVID-19 pandemic in Venezuela, programmers made various apps with civic uses. One of these was Docti.App, an app that listed locations citizens could go to for emergencies, with a filterable list for finding whatever users needed, including medicine and oxygen bottles. Another example is Javenda, a web application used to find nearby hospitals; its developer gathered data from health centers, added it to a map, and made it accessible for users to locate them.
Effects Effects on social behavior and civic engagement The conveniences provided by civic technology bring benefits, but also growing concern about the effects it may have on social behavior and civic engagement. New technology allows for connectivity and new communications, as well as changing how people interact with issues and contexts beyond their intimate sphere. Civic technology affords transparency in government with open-government data, and allows more people of diverse socioeconomic levels to build and engage with civic matters in ways that were not possible before. Communication The importance of face-to-face interactions has also been called into question with the increase in e-mails and social media and a decrease in traditional, in-person social interaction. Technology as a whole may be responsible for this change in social norms, but it also holds potential for turning it around with audio and video communication capabilities. More research needs to be conducted in order to determine if these are appropriate substitutes for in-person interaction, or if any substitute is even feasible. Preece & Shneiderman discuss the important social aspect of civic technology with a discussion of the "reader-to-leader framework", which holds that users inform readers, who inform communicators, who then inform collaborators, before finally reaching leaders. This chain of communication allows the interests of the masses to be communicated to the implementers. Elections Regarding elections and online polling, there is the potential for voters to make less informed decisions because of the ease of voting. Although many more voters will turn out, they may only be doing so because it is easy and may not be consciously making a decision based on their own synthesized opinion. It has been suggested that if online voting becomes more common, so should constituent-led discussions regarding the issues or candidates being polled. Voting advice applications help voters find candidates and parties closest to their preferences, with studies suggesting that the use of these applications tends to increase turnout and affect the choice of voters. An experiment during assembly elections in the Indian state of Uttar Pradesh showed that sending voice calls and text messages to villages informing voters of candidates' criminal charges increased the vote share of clean candidates and decreased the vote shares of violent criminal candidates. Effects on socioeconomics Advanced technologies come at higher costs, so an increased reliance on civic technology may leave low-income families in the dark if they cannot afford the platforms it runs on, such as computers and tablets. This causes an increase in the gap between lower and middle/high socioeconomic class families. Knowledge of how to use computers is equally important when considering access to civic technology applications online, and is also generally lower in low-income households. According to a study performed by the National Center for Education Statistics, 14% of students between the ages of 3 and 18 do not have access to the internet. Those with a lower socio-economic status tend to cut their budgets by not installing internet in their homes.
Public schools have taken the lead in ensuring proper technology access and education in the classroom to better prepare children for the high-tech world, but there is still a clear difference between online contributions from those with and without experience on the internet. See also Collaborative e-democracy Comparison of civic technology platforms Digital citizen E-government Government by algorithm Open government Service design References E-government Information Age Information technology consulting Open government
Civic technology
Technology
9,050
63,241,811
https://en.wikipedia.org/wiki/Pyrrole%E2%80%93imidazole%20polyamides
Pyrrole–imidazole polyamides (PIPs) are a class of polyamides with the ability to bind to minor grooves in the DNA helix. Scientists are experimenting with them as a drug-delivery mode that can switch genes on and off, as well as for epigenetic modification in gene therapy. References Gene therapy
Pyrrole–imidazole polyamides
Engineering,Biology
72
27,960,442
https://en.wikipedia.org/wiki/Power-on%20hours
Power-on hours (POH) is the length of time, usually in hours, that electrical power is applied to a device. As part of the S.M.A.R.T. attributes (originally known as IntelliSafe before its introduction to the public domain on 12 May 1995 by the computer hardware and software company Compaq), it is used to predict drive failure and is supported by manufacturers such as Samsung, Seagate, Toshiba, IBM (Hitachi), Fujitsu, Maxtor, Kingston and Western Digital. Power-on hours is intended to indicate a remaining lifetime prediction for hard drives and solid state drives; generally, "the total expected life-time of a hard disk is 5 years", or 43,800 hours of constant use. Typically, after a disk reaches 5 years of power-on time, the disk is more likely to fail. Some drives can still work perfectly fine even after 43,800 hours have passed, and some have even reached 10 years or more without any problems. Google tested over 100,000 consumer-grade serial and parallel ATA hard disks, finding evidence that S.M.A.R.T. attributes like POH played a heavy role in device failures. References Survival analysis Reliability analysis
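The 43,800-hour figure quoted above is straightforward arithmetic (five years of constant use), as the small sketch below shows. The POH reading used is an invented example, not a value from any specific drive.

```python
# The nominal lifetime: five years of constant use.
hours_per_year = 365 * 24                  # 8,760 hours
expected_life_hours = 5 * hours_per_year   # 43,800 hours
print(expected_life_hours)                 # -> 43800

# Illustration only: fraction of the nominal lifetime a drive has consumed,
# given a hypothetical POH value read from a S.M.A.R.T. report.
poh_reading = 21_900
used = poh_reading / expected_life_hours
print(f"{used:.0%} of the nominal 5-year life elapsed")  # -> 50%
```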
Power-on hours
Engineering
258
38,262,855
https://en.wikipedia.org/wiki/Nanothermometry
Nanothermometry is a branch of physics and engineering exploring the use of non-invasive, precise thermometers working at the nanoscale. These devices have high spatial resolution (below one micrometer), where conventional methods are ineffective. Sensitivity of a nanothermometer The sensitivity is a parameter that characterizes a thermometer, giving information about the relative change in the thermometer's output per degree of temperature change. Numerically, the relative sensitivity can be computed from the calibration curve (the temperature dependence of the thermometric parameter, Q) as S_r = (1/Q)·|∂Q/∂T|. As S_r takes small values, it is usually expressed as a percentage, like 1.0%·K−1, meaning that a one-degree change in temperature will appear in the thermometric parameter as a change of 1.0%. This quantity helps determine the appropriate detector to be used in order to measure the temperature from the change in the thermometric parameter. Luminescent nanothermometers The well-known limitations of contact thermometers at the submicron scale led to the development of non-contact thermometry techniques, such as IR thermography, thermoreflectance, optical interferometry, Raman spectroscopy, and luminescence. Luminescence nanothermometry exploits the relationship between temperature and luminescence properties to achieve thermal sensing from the spatial and spectral analysis of the light generated from the object to be thermally imaged. References Nanotechnology
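Given the definition above, S_r can be evaluated numerically from any measured calibration curve. The sketch below is a minimal illustration using an invented exponential Q(T), chosen so that it yields exactly the 1.0%·K−1 sensitivity quoted in the text; it is not data from a real nanothermometer.

```python
# Minimal sketch: evaluating the relative sensitivity S_r = (1/Q)|dQ/dT|
# from a calibration curve. Q(T) below is an invented stand-in for a
# measured thermometric parameter (e.g. a luminescence intensity ratio).
import numpy as np

T = np.linspace(290.0, 330.0, 81)    # temperature (K)
Q = 2.0 * np.exp(-T / 100.0)         # hypothetical calibration curve Q(T)

S_r = np.abs(np.gradient(Q, T)) / Q  # relative sensitivity (K^-1)

i = np.argmin(np.abs(T - 300.0))     # sample point nearest 300 K
# For Q = A*exp(-T/100), S_r is exactly 1/100 K^-1, i.e. 1.0% per kelvin.
print(f"S_r near 300 K: {100 * S_r[i]:.2f}% per K")
```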
Nanothermometry
Materials_science,Engineering
307
32,894,329
https://en.wikipedia.org/wiki/Astrobiophysics
Astrobiophysics is a field of intersection between astrophysics and biophysics concerned with the influence of astrophysical phenomena upon life on planet Earth or some other planet in general. It differs from astrobiology, which is concerned with the search for extraterrestrial life. Examples of the topics covered by this branch of science include the effect of supernovae on life on Earth and the effects of cosmic rays on irradiation at sea level. References External links Kansas University astrobiology page Astrophysics Biophysics
Astrobiophysics
Physics,Astronomy,Biology
103
29,608,635
https://en.wikipedia.org/wiki/Lactarius%20subvelutinus
Lactarius subvelutinus is a member of the large milk-cap genus Lactarius in the order Russulales. It was first described scientifically by the American mycologist Charles Horton Peck in 1904. See also List of Lactarius species References External links subvelutinus Fungi described in 1904 Fungi of North America Taxa named by Charles Horton Peck Fungus species
Lactarius subvelutinus
Biology
75
1,430,106
https://en.wikipedia.org/wiki/USB%20communications%20device%20class
USB communications device class (or USB CDC) is a composite Universal Serial Bus device class. The communications device class is used for computer networking devices akin to a network card, providing an interface for transmitting Ethernet or ATM frames onto some physical media. It is also used for modems, ISDN, fax machines, and telephony applications for performing regular voice calls. Microsoft Windows versions prior to Windows Vista do not work with the networking parts of the USB CDC, instead using Microsoft's own derivative named Microsoft RNDIS, a serialized version of the Microsoft NDIS (Network Driver Interface Specification). With a vendor-supplied INF file, Windows Vista works with USB CDC and USB WMCDC devices. This class can be used for industrial equipment such as CNC machinery to allow upgrading from older RS-232 serial controllers and robotics, since they can keep software compatibility. The device attaches to an RS-232 communications line and the operating system on the USB side makes the USB device appear as a traditional RS-232 port. While chip manufacturers such as Prolific Technology, FTDI, Microchip, and Atmel manufacture USB chips and provide drivers that expose the chip as a virtual RS-232 device, the chips do not use the USB CDC protocol but rather their own custom protocols, though there are some exceptions (PL2305). Devices of this class are also implemented in embedded systems such as mobile phones so that a phone may be used as a modem, fax or network port. The data interfaces are generally used to perform bulk data transfer. See also List of USB Device Classes References External links USB-IF's Approved Class Specification Documents Class definitions for Communication Devices 1.2 (.zip file format, size 3.43 MB) Class definitions for Communication Devices 1.1 App Note, Migrating from RS-232 to USB Bridge Specification. Explains the use of USB CDC (Communications Device Class) ACM (Abstract Control Model) to emulate serial ports over USB. PL2305I USB to Printer Bridge Controller (component data) Communications device class
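As a concrete illustration of the virtual serial port behavior described above, the sketch below opens a CDC ACM device with the third-party pyserial library. The device path and the command string are assumptions made for illustration: CDC ACM devices typically appear as /dev/ttyACM0 on Linux and as a numbered COM port on Windows, and each device defines its own command set.

```python
# Sketch: talking to a USB CDC ACM device that the OS exposes as a virtual
# serial port, using the third-party pyserial library (pip install pyserial).
# The device path and the "STATUS" command are illustrative assumptions.
import serial

with serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1.0) as port:
    # Many CDC ACM devices ignore the baud rate setting, since no real
    # RS-232 line exists; the bytes travel over USB bulk transfers.
    port.write(b"STATUS\r\n")
    reply = port.readline()            # one line of the device's response
    print(reply.decode(errors="replace"))
```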
USB communications device class
Technology
420
40,905,834
https://en.wikipedia.org/wiki/Transportation%20of%20Dangerous%20Goods%20Act%2C%201992
The Transportation of Dangerous Goods Act, 1992 is a Canadian federal statute. Introduced in the 34th Canadian Parliament, and receiving royal assent on June 23, 1992, the act regulates the transportation of dangerous goods in the country. The TDGA has an "Offences and Punishments" passage in which are detailed liabilities "on indictment to imprisonment for a term not exceeding two years", and that "Proceedings by way of summary conviction may be instituted at any time within, but not later than, five years after the day on which the subject matter of the proceedings arose." The TDGA falls under the control of the Minister of Transport. The Hazardous Waste Manifest form is mandated by the TDGA. Dangerous Goods Safety Marks Class 1, Explosives Class 1 placards all have an orange background. All placards under Class 1 have a placeholder for the product's compatibility group letter. Class 1.1, 1.2, and 1.3 bear a black exploding-bomb symbol and have placeholders for the product's division. Class 2, Gases Class 2 placards bear various symbols and background colours. Class 2.1, Flammable Gases, bears a black or white flame symbol on a red background. Class 2.2, Non-flammable and Non-toxic Gases, bears a black or white gas cylinder symbol on a green background. Class 2.3, Toxic Gases, bears a black skull and crossbones symbol on a white background. When anhydrous ammonia (UN 1005) is transported, the container must display (in addition to the existing Class 2.3 placard and UN number) a special placard with the product name and the words "inhalation hazard". If a product is an oxidizing gas, a yellow placard with black text and a flaming 'O' symbol must be displayed. Class 3, Flammable liquids The Class 3 placard bears a black or white flame symbol on a red background. Class 4, Flammable solids Class 4 placards all bear flame symbols with various backgrounds. Class 4.1, Flammable Solids, has a black symbol on a vertically striped red and white background. Class 4.2, Substances Liable to Spontaneous Combustion, has a black symbol with the upper half of the placard having a white background and the lower half a red background. Class 4.3, Water-reactive Substances, has a black or white symbol on a blue background. Class 5, Oxidizing substances and organic peroxides Class 5 placards bear various symbols and background colours. Class 5.1, Oxidizing Substances, bears a black flaming 'O' symbol on a yellow background. Class 5.2, Organic Peroxides, bears a black or white flame symbol with the upper half of the placard having a red background and the lower half a yellow background. Class 6, Toxic and infectious substances Class 6 placards all bear various symbols on a white background. Class 7, Radioactive materials Class 8, Corrosives Class 9, Miscellaneous products Other placards & signs References External links Transportation of Dangerous Goods Act, 1992 Transportation of Dangerous Goods Regulations (SOR/2001-286) Canadian federal legislation Canadian transport law 1992 in Canadian law History of transport in Canada 1992 in transport Occupational safety and health Environment and health
Transportation of Dangerous Goods Act, 1992
Technology
692
51,467,278
https://en.wikipedia.org/wiki/OpenWebRTC
OpenWebRTC (OWR) is a free software stack that implements the WebRTC standard, a set of protocols and application programming interfaces defined by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF). It is an alternative to the reference implementation that is based on software from Global IP Solutions (GIPS). It is published under the terms of the Simplified (2-clause) BSD license and officially supports the iOS, Linux, OS X, and Android operating systems. It is meant to also work outside web browsers, e.g. to power native mobile apps. It is mostly written in C and based largely on the multimedia framework GStreamer and a number of other, smaller external libraries. It officially supports both VP8 and H.264 as video formats. For H.264 it uses OpenH264, for which Cisco pays the patent licensing bills. Development of OpenWebRTC started at Ericsson Research under the lead of Stefan Ålund. They released it as free software in September 2014, together with the proof-of-concept web browser "Bowser", which is based on the stack. Among other things, this initial version did not yet support data channels and was said to still be less mature than Google's reference implementation. References External links Software using the BSD license Web development Web standards
OpenWebRTC
Engineering
280
3,028,232
https://en.wikipedia.org/wiki/Topographical%20code
In medicine, "topographical codes" (or "topography codes") are codes that indicate a specific location in the body. Examples Only the first of these is a system dedicated only to topography. The others are more generalized systems that contain topographic axes. Nomina Anatomica (updated to Terminologia Anatomica) ICD-O SNOMED MeSH (the 'A' axis) See also Medical classification References Anatomy
Topographical code
Biology
90
516,762
https://en.wikipedia.org/wiki/Intersystem%20crossing
Intersystem crossing (ISC) is an isoenergetic radiationless process involving a transition between two electronic states of different spin multiplicity. Excited singlet and triplet states When an electron in a molecule with a singlet ground state is excited (via absorption of radiation) to a higher energy level, either an excited singlet state or an excited triplet state will form. A singlet state is a molecular electronic state such that all electron spins are paired. That is, the spin of the excited electron is still paired with the ground state electron (a pair of electrons in the same energy level must have opposite spins, per the Pauli exclusion principle). In a triplet state the excited electron is no longer paired with the ground state electron; that is, they are parallel (same spin). Since excitation to a triplet state involves an additional "forbidden" spin transition, it is less probable that a triplet state will form when the molecule absorbs radiation. When a singlet state nonradiatively passes to a triplet state, or conversely a triplet transitions to a singlet, that process is known as intersystem crossing. In essence, the spin of the excited electron is reversed. The probability of this process occurring is more favorable when the vibrational levels of the two excited states overlap, since little or no energy must be gained or lost in the transition. As the spin/orbital interactions in such molecules are substantial and a change in spin is thus more favourable, intersystem crossing is most common in heavy-atom molecules (e.g. those containing iodine or bromine). This process is called "spin-orbit coupling". Simply stated, it involves coupling of the electron spin with the orbital angular momentum of non-circular orbits. In addition, the presence of paramagnetic species in solution enhances intersystem crossing. The radiative decay from an excited triplet state back to a singlet state is known as phosphorescence. Since a transition in spin multiplicity occurs, phosphorescence is a manifestation of intersystem crossing. The time scale of intersystem crossing is on the order of 10−8 to 10−3 s, one of the slowest forms of relaxation. Metal complexes Once a metal complex undergoes metal-to-ligand charge transfer, the system can undergo intersystem crossing, which, in conjunction with the tunability of MLCT excitation energies, produces a long-lived intermediate whose energy can be adjusted by altering the ligands used in the complex. Another species can then react with the long-lived excited state via oxidation or reduction, thereby initiating a redox pathway via tunable photoexcitation. Complexes containing high atomic number d6 metal centers, such as Ru(II) and Ir(III), are commonly used for such applications because they favor intersystem crossing as a result of their more intense spin-orbit coupling. Complexes that have access to d orbitals are able to access spin multiplicities besides the singlet and triplet states, as some complexes have orbitals of similar or degenerate energies so that it is energetically favorable for electrons to be unpaired. It is possible then for a single complex to undergo multiple intersystem crossings, which is the case in light-induced excited spin-state trapping (LIESST), where, at low temperatures, a low-spin complex can be irradiated and undergo two instances of intersystem crossing. 
For Fe(II) complexes, the first intersystem crossing occurs from the singlet to the triplet state, which is then followed by intersystem crossing between the triplet and the quintet state. At low temperatures, the low-spin state is favored, but the quintet state is unable to relax back to the low-spin ground state due to their differences in zero-point energy and metal-ligand bond length. The reverse process is also possible for cases such as [Fe(ptz)6](BF4)2, but the singlet state is not fully regenerated, as the energy needed to excite the quintet ground state to the excited state from which intersystem crossing to the triplet state can occur overlaps with multiple bands corresponding to excitations of the singlet state that lead back to the quintet state. Applications Fluorophores Fluorescence microscopy relies upon fluorescent compounds, or fluorophores, in order to image biological systems. Since fluorescence and phosphorescence are competitive methods of relaxation, a fluorophore that undergoes intersystem crossing to the triplet excited state no longer fluoresces and instead remains in the triplet excited state, which has a relatively long lifetime, before phosphorescing and relaxing back to the singlet ground state, from which it may continue to undergo repeated excitation and fluorescence. This process, in which fluorophores temporarily do not fluoresce, is called blinking. While in the triplet excited state, the fluorophore may undergo photobleaching, a process in which the fluorophore reacts with another species in the system, which can lead to the loss of the fluorescent characteristic of the fluorophore. In order to regulate these triplet-state-dependent processes, the rate of intersystem crossing can be adjusted to either favor or disfavor formation of the triplet state. Fluorescent biomarkers, including both quantum dots and fluorescent proteins, are often optimized to maximize the quantum yield and intensity of the fluorescent signal, which in part is accomplished by decreasing the rate of intersystem crossing. Methods of adjusting the rate of intersystem crossing include the addition of Mn2+ to the system, which increases the rate of intersystem crossing for rhodamine and cyanine dyes. Changing the metal in the photosensitizer groups bound to CdTe quantum dots can also affect the rate of intersystem crossing, as the use of a heavier metal can cause intersystem crossing to be favored due to the heavy atom effect. Solar cells The viability of organometallic polymers in bulk heterojunction organic solar cells has been investigated due to their donor capability. The efficiency of charge separation at the donor-acceptor interface can be improved through the use of heavy metals, as their increased spin-orbit coupling promotes the formation of the triplet MLCT excited state, which could improve the exciton diffusion length and reduce the probability of recombination due to the extended lifespan of the spin-forbidden excited state. By improving the efficiency of the charge-separation step of the bulk heterojunction solar cell mechanism, the power conversion efficiency also improves. Improved charge-separation efficiency has been shown to be a result of the formation of the triplet excited state in some conjugated platinum-acetylide polymers. However, as the size of the conjugated system increases, the heavy atom effect diminishes, and the polymer instead becomes more efficient because the increased conjugation reduces the bandgap. 
History In 1933, Aleksander Jabłoński published his conclusion that the extended lifetime of phosphorescence was due to a metastable excited state at an energy lower than the state first achieved upon excitation. Based upon this research, Gilbert Lewis and coworkers, during their investigation of organic molecule luminescence in the 1940s, concluded that this metastable energy state corresponded to the triplet electron configuration. The triplet state was confirmed by Lewis via application of a magnetic field to the excited phosphor: only the metastable state would have a long enough lifetime to be analyzed, and the phosphor would respond to the field only if it were paramagnetic, i.e., if it had at least one unpaired electron. Their proposed pathway of phosphorescence included the forbidden spin transition occurring when the potential energy curves of the singlet excited state and the triplet excited state crossed, from which the term intersystem crossing arose. See also Internal conversion (chemistry) Jablonski diagram Michael Kasha Population inversion Vibrational energy relaxation References Quantum mechanics Rotational symmetry
Intersystem crossing
Physics
1,702
31,895,749
https://en.wikipedia.org/wiki/Bombyx%20hybrid
The Bombyx hybrid is a hybrid between a male Bombyx mandarina moth and a female Bombyx mori moth. Like all species of Bombyx, they produce larvae called silkworms. The larvae closely resemble those of the other variations: they are brown in the front half and gray in the rear half, but they develop larger black spots than other variations. Generally, they look like a normal Bombyx moth, but somewhat darker. Hybrids are used not for silk but for research. Because Bombyx mori males have lost their ability to fly, their females are much more likely to mate with a male Bombyx mandarina. The reverse cross is possible, but only if both species are kept in the same container. Since Bombyx hybrids are much more common than the reverse cross, more is known about them. B. mori is a domesticated version of the wild B. mandarina; this domestication occurred over 5,000 years ago. See also Bombyx second hybrid References Bombycidae Hybrid animals
Bombyx hybrid
Biology
204
53,239,324
https://en.wikipedia.org/wiki/Fair%20river%20sharing
Fair river sharing is a kind of fair division problem in which the waters of a river have to be divided among countries located along the river. It differs from other fair division problems in that the resource to be divided—the water—flows in one direction—from upstream countries to downstream countries. To attain any desired division, it may be necessary to limit the consumption of upstream countries, but this may require giving these countries some monetary compensation. In addition to sharing river water, which is an economic good, it is often required to share river pollution (or the cost of cleaning it), which is an economic bad. River sharing in practice There are 148 rivers in the world flowing through two countries, 30 through three, nine through four and 13 through five or more. Some notable examples are: The Jordan River, whose sources run from upstream Lebanon and Syria to downstream Israel and Jordan. The attempts of Syria to divert the Jordan River, starting in 1965, are cited as one of the reasons for the Six-Day War. Later, in 1994, the Israel–Jordan peace treaty determined a sharing of the waters between Israel and Jordan, by which Jordan receives an agreed quantity of water per year. The Nile, running from upstream Ethiopia through Sudan to downstream Egypt. There is a long history of disputes over the Nile agreements of 1929 and 1959. The Ganges, running from upstream India to downstream Bangladesh. There was controversy over the operation of the Farakka Barrage. Between Mexico and the United States, there was controversy over the desalination facility in the Morelos Dam. The Mekong runs from China's Yunnan Province to Myanmar, Laos, Thailand, Cambodia, and Vietnam. In 1995, Laos, Thailand, Cambodia, and Vietnam established the Mekong River Commission to assist in the management and coordinated use of the Mekong's resources. In 1996 China and Myanmar became "dialogue partners" of the MRC and the six countries now work together within a cooperative framework. Property rights In international law, there are several conflicting views on the property rights to river waters. The theory of absolute territorial sovereignty (ATS) states that a country has absolute property rights over any river basin in its territory. So any country may consume some or all of the waters that enter its area, without leaving any water to downstream countries. The theory of unlimited territorial integrity (UTI) states that a country shares the property rights to all the waters from the origin of the river down to its territory. So, a country may not consume all the waters in its territory, since this hurts the right of downstream countries. The theory of territorial integration of all basin states (TIBS) states that a country shares the property rights to all the waters of the river. So each country is entitled to an equal share of the river waters, regardless of its geographic location. Efficient water allocation Kilgour and Dinar were the first to suggest a theoretical model for efficient water sharing. The model The countries are numbered according to their location, so that country 1 is the most upstream, then 2, etc. The river picks up volume along its course: before each location i, an amount of water e_i enters the river. So, country 1 gets e_1, country 2 gets e_2 plus the water unconsumed by country 1, and so on. Each country i has a benefit function b_i that describes its utility from each amount of water. This function is increasing but strictly concave, since the countries have diminishing returns. 
We can define for each country i its marginal benefit function (the derivative of b_i), which describes the price it is willing to pay for an additional unit of water given its current consumption; it is positive but strictly decreasing. Money can be transferred between countries. Countries have quasilinear utility, so a country that consumes an amount x of water and receives an amount m of money has utility b_i(x) + m. A consumption plan is a vector of water allocations and side-payments. The important aspect of the river-sharing setting is that water only flows downstream. Therefore, the total consumption at the first i locations must be at most the total amount of water that enters these locations: x_1 + ... + x_i ≤ e_1 + ... + e_i. Additionally, the sum of the side-payments must be at most 0, so that the divider does not have to subsidize the division. The situation without cooperation Without cooperation, each country maximizes its individual utility. So if a country is an insatiable agent (its benefit function is always increasing), it will consume all the water that enters its region. This may be inefficient. For example, suppose there are two countries, 2 units of water enter at the upstream location, and the downstream country values additional water more highly at the margin. Without cooperation, country 1 will consume 2 units and country 2 will have 0 units. This is not Pareto efficient: it is possible to allocate 1 unit of water to each country and transfer some money from country 2 to country 1, making both countries better off. The efficient allocation Because preferences are quasi-linear, an allocation is Pareto-efficient if-and-only-if it maximizes the sum of all agents' benefits and wastes no money. Under the assumption that benefit functions are strictly concave, there is a unique optimal allocation. Its structure is simple. Intuitively, the optimal allocation should equalize the marginal benefits of all countries (as in the above example). However, this may be impossible because of the structure of the river: the upstream countries do not have access to downstream waters. For example, in the above two-country example, if most of the inflow enters at the downstream location, then it is not possible to equalize the marginal benefits, and the optimal allocation is to let each country consume its own water. Therefore, in the optimal allocation, the marginal benefits are weakly decreasing. The countries are divided into consecutive groups, from upstream to downstream. In each group, the marginal benefit is the same, and between groups, the marginal benefit is decreasing. The possibility of calculating an optimal allocation allows much more flexibility in water-sharing agreements. Instead of agreeing in advance on fixed water quantities, it is possible to adjust the quantities to the actual amount of water that flows through the river each year. The utility of such flexible agreements has been demonstrated by simulations based on historical data of the Ganges flow. The social welfare when using the flexible agreement is always higher than when using the optimal fixed agreement, but the increase is especially significant in times of drought, when the flow is below the average. 
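Since preferences are quasi-linear and the benefit functions are concave, the efficient allocation described above can be computed as a concave optimization problem under the downstream-flow constraints. The following is a minimal Python sketch using scipy; the benefit functions b_i(x) = a_i·sqrt(x) and all numbers are hypothetical stand-ins, not taken from the literature.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 1.5])  # hypothetical scales: b_i(x) = a_i * sqrt(x)
e = np.array([2.0, 0.0, 1.0])  # hypothetical inflows before each location

def neg_total_benefit(x):
    # Negative of the sum of benefits (minimizing it maximizes benefits).
    return -np.sum(a * np.sqrt(np.maximum(x, 0.0)))

# Water flows only downstream: cumulative consumption up to each
# location may not exceed cumulative inflow up to that location.
constraints = [
    {"type": "ineq", "fun": lambda x, i=i: np.sum(e[: i + 1]) - np.sum(x[: i + 1])}
    for i in range(len(e))
]

res = minimize(neg_total_benefit, x0=np.full(len(e), 0.9),
               bounds=[(0.0, None)] * len(e), constraints=constraints)
print("efficient allocation:", np.round(res.x, 3))
# Marginal benefits a_i / (2 sqrt(x_i)): equal within consecutive groups
# and weakly decreasing downstream, as stated above.
print("marginal benefits:", np.round(a / (2.0 * np.sqrt(res.x + 1e-12)), 3))
```

With these numbers the prefix constraint binds after country 2, so countries 1 and 2 form one group with equal marginal benefits, while country 3 consumes its own inflow at a lower marginal benefit.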
Stable monetary transfers Calculating the efficient water allocation is only the first step in solving a river-sharing problem. The second step is calculating monetary transfers that will incentivize countries to cooperate with the efficient allocation. What monetary transfer vector should be chosen? Ambec and Sprumont study this question using axioms from cooperative game theory. Cooperation when countries are non-satiable According to the ATS doctrine, each country has full rights to the water in its region. Therefore, the monetary payments should guarantee to each country at least the utility-level that it could attain on its own. With non-satiable countries, this level is at least b_i(e_i), the benefit of consuming the entire inflow in the country's own region. Moreover, we should guarantee to each coalition of countries at least the utility-level that it could attain by the optimal allocation among the countries in the coalition. This implies a lower bound on the utility of each coalition, called the core lower bound. According to the UTI doctrine, each country has rights to all water in its region and upstream. These rights are not compatible, since their sum is above the total amount of water. However, these rights define an upper bound - the largest utility that a country can hope for. This is the utility it could get alone, if there were no other countries upstream: b_i(e_1 + ... + e_i). Moreover, the aspiration level of each coalition of countries is the highest utility-level it could attain in the absence of the other countries. This implies an upper bound on the utility of each coalition, called the aspiration upper bound. There is at most one welfare-distribution that satisfies both the core-lower-bound and the aspiration-upper-bound: it is the downstream incremental distribution, by which the welfare of each country i is the stand-alone value of the coalition {1, ..., i} minus the stand-alone value of the coalition {1, ..., i-1}. When the benefit functions of all countries are non-satiable, the downstream-incremental-distribution indeed satisfies both the core-lower-bounds and the aspiration-upper-bounds. Hence, this allocation scheme can be seen as a reasonable compromise between the doctrines of ATS and UTI. Cooperation when countries are satiable When the benefit functions are satiable, new coalitional considerations come into play. These are best illustrated by an example. Suppose there are three countries. Countries 1 and 3 are in a coalition. Country 1 wants to sell water to country 3 in order to increase their group welfare. If country 2 is non-satiable, then 1 cannot leave water to 3, since it will be entirely consumed by 2 along the way. So 1 must consume all its water. In contrast, if country 2 is satiable (and this fact is common knowledge), then it may be worthwhile for 1 to leave some water to 3, even if some of it will be consumed by 2. This increases the welfare of the coalition, but also the welfare of 2. Thus, cooperation is helpful not only for the cooperating countries, but also for the non-cooperating countries. With satiable countries, each coalition has two different core-lower-bounds: The non-cooperative core-lower-bound is the value that the coalition can guarantee to itself based on its own water sources, when the other countries do not cooperate. The cooperative core-lower-bound is the value that the coalition can guarantee to itself based on its own water sources, when the other countries cooperate. As illustrated above, the cooperative core-lower-bound is higher than the non-cooperative core-lower-bound. The non-cooperative-core is non-empty. Moreover, the downstream-incremental-distribution is the unique solution that satisfies both the non-cooperative-core-lower-bounds and the aspiration-upper-bound. However, the cooperative-core may be empty: there might be no allocation that satisfies the cooperative-core-lower-bound. Intuitively, it is harder to attain stable agreements, since middle countries might "free-ride" on agreements between downstream and upstream countries. 
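The downstream incremental distribution is straightforward to compute once the stand-alone values of the consecutive coalitions {1}, {1,2}, ..., {1,...,n} are known. A minimal Python sketch with hypothetical coalition values:

```python
def downstream_incremental(v_prefix):
    """Welfare shares under the downstream incremental distribution.

    v_prefix[i] is the stand-alone value of the coalition {1, ..., i+1};
    country i+1 receives v({1,...,i+1}) - v({1,...,i}).
    """
    shares, previous = [], 0.0
    for value in v_prefix:
        shares.append(value - previous)
        previous = value
    return shares

# Hypothetical stand-alone values of {1}, {1,2}, {1,2,3}.
print(downstream_incremental([1.0, 2.4, 3.1]))  # -> [1.0, 1.4, 0.7]
```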
Sharing a polluted river A river carries not only water but also pollutants coming from agricultural, biological and industrial waste. River pollution is a negative externality: when an upstream country pollutes a river, this creates external cleaning costs for downstream countries. This externality may result in over-pollution by the upstream countries. Theoretically, by the Coase theorem, we could expect the countries to negotiate and achieve a deal in which polluting countries will agree to reduce the level of pollution for an appropriate monetary compensation. However, in practice this does not always happen. Empirical evidence and case-studies Evidence from various international rivers shows that, at water quality monitoring stations immediately upstream of international borders, the pollution levels are more than 40% higher than the average levels at control stations. This may imply that countries do not cooperate for pollution reduction, and the reason for this may be the unclearness in property rights. Dong, Ni, Wang and Meidan Sun discuss the Baiyang Lake, which was polluted by a tree-like network of 13 counties and townships. To clean the river and its sources, 13 wastewater treatment plants were built in the region. The authors discuss different theoretic models for sharing the costs of these buildings among the townships and counties, but mention that in the end the costs were not shared but rather paid by the Baoding municipal government, since the polluters did not have an incentive to pay. Hophmayer-Tokich and Kliot present two case studies from Israel where municipalities that suffer from water pollution initiated cooperation on wastewater treatment with upstream polluters. The findings suggest that regional cooperation can be an efficient tool in promoting advanced wastewater treatment, and has several advantages: an efficient use of limited resources (financial and land); balancing disparities between municipalities (size, socio-economic features, consciousness and ability of local leaders); and reducing spillover effects. However, some problems were reported in both cases and should be addressed. Several theoretical models have been proposed for the problem. Market model: each agent can freely trade in licenses for emission/pollution Emissions trading is a market-based approach to attain an efficient pollution allocation. It is applicable to general pollution settings; river pollution is a special case. As an example, Montgomery studies a model with a set of agents, each of which emits pollutants, and a set of locations, each of which suffers pollution that is a linear combination of the emissions. The relation between the emission vector e and the pollution vector q is given by a diffusion matrix H, such that q = H·e. In the special case of a linear river presented above, each location suffers the sum of its own emission and all upstream emissions, and H is a matrix with a triangle of ones. Efficiency is attained by permitting free trade in licenses. Two kinds of licenses are studied: Emission license - a license which directly confers a right to emit pollutants up to a certain rate. Pollution license for a given monitoring-point - a license which confers the right to emit pollutants at a rate which will cause no more than a specified increase in the pollution level at that point. A polluter that affects water quality at a number of points (e.g. an upstream agent) has to hold a portfolio of licenses covering all relevant monitoring-points. 
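For the linear-river special case just described, the diffusion matrix is a lower triangle of ones, so each location suffers the sum of its own emission and all upstream emissions. A minimal numpy sketch with hypothetical emission levels:

```python
import numpy as np

e = np.array([3.0, 1.0, 2.0])            # hypothetical emissions, upstream first
H = np.tril(np.ones((len(e), len(e))))   # linear river: lower triangle of ones
q = H @ e                                # pollution suffered at each location
print(q)                                 # [3. 4. 6.]: downstream suffers most
```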
In both markets, free trade can lead to an efficient outcome. However, the market in pollution-licenses is more widely applicable than the market in emission-licenses. There are several difficulties with the market approach, such as: how should the initial allocation of licenses be determined? How should the final allocation of licenses be enforced? See Emissions trading for more details. Non-cooperative game with money: each agent chooses how much pollution to emit Van der Laan and Moes (2012) describe the polluted-river situation as follows. Each country can choose a level of emission e_i (e.g., by choosing what factories to have, what waste-disposal system to have, etc.). Each country suffers a level of pollution p_i that depends on the emissions from it and from all upstream agents. Each country has a benefit function b_i that depends on the emission it creates; the marginal benefit is assumed to be positive and strictly decreasing. Each country has a cost function c_i that depends on the pollution it suffers; the marginal cost is assumed to be positive and strictly increasing. Money can be transferred between countries, and the utility of a country is its benefit, minus its cost, plus the money it receives. Under the above assumptions, there exists a unique optimal emission-vector, in which the social welfare (the sum of benefits minus the sum of costs) is maximized. There also exists a unique Nash equilibrium emission-vector, in which each country produces the emission best for it given the emissions of the others. The total amount of emission in equilibrium is strictly higher than in the optimal situation, in accordance with the empirical findings of Sigman. For example, in a worked two-country instance, the upstream country 1 over-pollutes in equilibrium; this improves its own utility but harms the utility of the downstream country 2. The main question of interest is: how to make countries reduce pollution to the optimal level? Several solutions have been proposed. 
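To illustrate the gap between the equilibrium and the optimum in this model, the following Python sketch solves a hypothetical two-country instance with quadratic benefits and costs; the functional forms and numbers are illustrative assumptions, not the worked example from the literature.

```python
import numpy as np

# Hypothetical instance: b_i(e) = e - e^2/2 and c_i(p) = p^2/2, where
# country 1 suffers p1 = e1 and downstream country 2 suffers p2 = e1 + e2.
def u1(e1, e2):
    return (e1 - e1**2 / 2) - e1**2 / 2

def u2(e1, e2):
    return (e2 - e2**2 / 2) - (e1 + e2)**2 / 2

grid = np.linspace(0.0, 1.0, 201)

# Nash equilibrium via iterated best responses on the grid.
e1, e2 = 0.5, 0.5
for _ in range(20):
    e1 = grid[np.argmax([u1(g, e2) for g in grid])]
    e2 = grid[np.argmax([u2(e1, g) for g in grid])]
print("Nash equilibrium:", e1, e2)          # ~(0.50, 0.25), total 0.75

# Social optimum: maximize the sum of utilities over the grid.
W = np.array([[u1(g1, g2) + u2(g1, g2) for g2 in grid] for g1 in grid])
i, j = np.unravel_index(np.argmax(W), W.shape)
print("Social optimum:", grid[i], grid[j])  # ~(0.20, 0.40), total 0.60
```

As the model predicts, total emission in equilibrium (0.75) exceeds the socially-optimal total (0.60), with the excess coming from the upstream country.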
Cooperative game with money: each agent chooses what coalition to join for pollution-reduction The cooperative approach deals directly with pollution levels (rather than licenses). The goal is to find monetary transfers that will make it profitable for agents to cooperate and implement the efficient pollution level. Gengenbach, Weikard and Ansink focus on the stability of voluntary coalitions of countries that cooperate for pollution-reduction. Van der Laan and Moes focus on property rights and the distribution of the gain in social welfare that arises when countries along an international river switch from no cooperation on pollution levels to full cooperation: it is possible to attain the efficient pollution levels by monetary payments. The monetary payments depend on property rights: According to the ATS doctrine, each country has a right to pollute as much as it wants inside its territory. So to prevent upstream countries from polluting, the downstream countries must pay them at least as much as required to keep their utility at their equilibrium level. In the worked two-country instance, where country 1's equilibrium utility is 0.473 and its utility under the socially-optimal emissions is 0.376, ATS implies that 2 should pay 1 at least 0.473-0.376=0.097. The ATS rule says that 2 pays 1 exactly this value, so that the utility of 1 is exactly its equilibrium payoff. This can be generalized to three or more agents using the downstream incremental distribution, by which the utility of each group of upstream agents is exactly their equilibrium payoff, and all the gains of cooperation between these agents and the next agent downstream are given to that downstream agent. According to the UTI doctrine, each country has a right to receive clean water and can prevent all countries upstream from it from creating any pollution. So to be able to pollute, the upstream countries must pay the downstream countries at least as much as required to keep their utility at the clean level. In the same instance, UTI implies that 1 should pay 2 at least 0.139, which is country 2's utility when e_1 = 0. The UTI rule says that 1 pays 2 exactly this value, so the utility of 2 is exactly its payoff from a clean incoming river. This can be generalized to three or more agents using an "upstream incremental distribution", by which the utility of each group of downstream agents is exactly their optimal payoff from a clean river, and all the gains of cooperation between these agents and the next agent upstream are given to that upstream agent. According to the TIBS doctrine, all countries have equal rights to the river. One way to interpret this principle is that the utility of each country should be some kind of average between its ATS utility and its UTI utility. For every vector of responsibility weights, it is possible to define a TIBS rule that gives each country a utility which is a correspondingly weighted average of its utilities under UTI and ATS. This model can be generalized to rivers that are not linear but have a tree-like topology. Cost-sharing models: cleaning-costs are fixed; a central authority decides how to divide them 1. Dong, Ni and Wang (extending a previous work by Ni and Wang) assume each agent i has an exogenously given cost c_i, caused by the need to clean the river to match environmental standards. This cost is caused by the pollution of the agent itself and all agents upstream of it. The goal is to charge each agent payments such that, for each region j, the payments of all agents together cover the cost c_j of cleaning it. They suggest three rules for dividing the total costs of pollution among the agents: The ATS doctrine implies the Local Responsibility Sharing method, which holds each agent responsible for the costs on its own territory and therefore requires that each agent pays its own cost c_i. The UTI doctrine implies the Upstream Equal Sharing method, which recognizes that the costs on the territory of each agent i are caused by it and all its upstream agents, and thus requires that c_i be divided equally among i and all agents upstream of i. An alternative interpretation of the UTI doctrine implies the Downstream Equal Sharing method, which recognizes that the downstream agents enjoy the waters coming from upstream. Moreover, according to some river-sharing models, the downstream agents enjoy the waters even more than the upstream agents. Therefore they should contribute to cleaning the water, so c_i should be divided equally among i and all agents downstream of i. Each of these methods can be characterized by some axioms: additivity, efficiency (the payments exactly cover the costs), no blind costs (an agent with zero costs should pay zero - since it does not pollute), independence of upstream/downstream costs, upstream/downstream symmetry, and independence of irrelevant costs. The latter axiom is relevant for non-linear river trees, in which waters from various sources flow into a common lake. It means that the payments by agents in two different branches of the tree should be independent of each other's costs. 
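The three rules have straightforward implementations for a linear river; the sketch below uses a hypothetical cost vector, ordered upstream to downstream.

```python
def local_responsibility(costs):
    """Each agent pays the cleaning cost of its own region (ATS)."""
    return list(costs)

def upstream_equal_sharing(costs):
    """Cost of region i is split equally among i and all agents upstream (UTI)."""
    pay = [0.0] * len(costs)
    for i, c in enumerate(costs):
        for j in range(i + 1):          # agents 0..i share region i's cost
            pay[j] += c / (i + 1)
    return pay

def downstream_equal_sharing(costs):
    """Cost of region i is split equally among i and all agents downstream."""
    n = len(costs)
    pay = [0.0] * n
    for i, c in enumerate(costs):
        for j in range(i, n):           # agents i..n-1 share region i's cost
            pay[j] += c / (n - i)
    return pay

costs = [3.0, 6.0, 3.0]                  # hypothetical cleaning costs
print(local_responsibility(costs))       # [3.0, 6.0, 3.0]
print(upstream_equal_sharing(costs))     # [7.0, 4.0, 1.0]
print(downstream_equal_sharing(costs))   # [1.0, 4.0, 7.0]
```

Note how the two equal-sharing rules shift the burden in opposite directions: Upstream Equal Sharing charges the most upstream agent the most, while Downstream Equal Sharing mirrors this downstream.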
In the above models, pollution levels are not specified. Hence, their methods do not reflect the differing responsibility of each region for producing the pollution. 2. Alcalde-Unzu, Gomez-Rua and Molis suggest a different cost-sharing rule that does take differences in pollution production into account. The underlying idea is that each agent should pay for the pollution it emits. However, the emission levels are not known - only the cleaning-costs are known. The emission levels could in principle be calculated from the cleaning costs using the transfer rate t (a number in [0,1]), which describes the fraction of pollutant that passes from each segment to the next. However, usually t is not known accurately. Upper and lower bounds on t can be estimated from the vector of cleaning-costs. Based on these bounds, it is possible to calculate bounds on the responsibility of upstream agents. Their principles for cost-sharing are: Limits of responsibility - the cost paid by each agent for cleaning its own segment is within its limits of responsibility. No downstream responsibility - an agent j situated downstream from agent i does not affect the pollution at region i and so does not have to participate in its cleansing. Consistent responsibility - the part of the cost of cleaning a segment paid by one agent, relative to the part paid by another agent, is consistent throughout all the segments situated downstream from both agents. Monotonicity w.r.t. information on transfer rate - when information on the transfer rate becomes more accurate, such that the estimate of the real transfer rate becomes higher (lower), the amount of waste in any segment for which all its upstream agents are responsible should be weakly higher (lower). The rule characterized by these principles is called the Upstream Responsibility (UR) rule: it estimates the responsibility of each agent using the expected value of the transfer rate, and charges each agent according to its estimated responsibility. In a further study they present a different rule called the Expected Upstream Responsibility (EUR) rule: it estimates the expected responsibility of each agent, taking the transfer rate as a random variable, and charges each agent according to its estimated expected responsibility. The two rules are different because the responsibility is a non-linear function of t. In particular, the UR rule is better for upstream countries (it charges them less), and the EUR rule is better for downstream countries. The UR rule is incentive compatible: it incentivizes countries to reduce their pollution, since this always leads to reduced payment. In contrast, the EUR rule might cause a perverse incentive: a country might pay less by polluting more, due to the effect on the estimated transfer rate. Further reading River-sharing with different entitlements, based on the leximin order. River-sharing when the river is not linear. References Fair division Border rivers
Fair river sharing
Mathematics
4,717
4,112,321
https://en.wikipedia.org/wiki/Ethics%20of%20care
The ethics of care (alternatively care ethics or EoC) is a normative ethical theory that holds that moral action centers on interpersonal relationships and care or benevolence as a virtue. EoC is one of a cluster of normative ethical theories that have been developed by some feminists and environmentalists since the 1980s. While consequentialist and deontological ethical theories emphasize generalizable standards and impartiality, the ethics of care emphasizes the importance of response to the individual. The distinction between the general and the individual is reflected in their different moral questions: "what is just?" versus "how to respond?" Carol Gilligan, who is considered the originator of the ethics of care, criticized the application of generalized standards as "morally problematic, since it breeds moral blindness or indifference". Assumptions of the framework include: persons are understood to have varying degrees of dependence and interdependence; other individuals affected by the consequences of one's choices deserve consideration in proportion to their vulnerability; and situational details determine how to safeguard and promote the interests of individuals. Historical background The originator of the ethics of care was Carol Gilligan, an American ethicist and psychologist. Gilligan created this model as a critique of her mentor, developmental psychologist Lawrence Kohlberg's, model of moral development. Gilligan observed that, when moral development was measured by Kohlberg's stages, boys were found to be more morally mature than girls, and this result held for adults as well (although when education is controlled for there are no gender differences). Gilligan argued that Kohlberg's model was not objective, but rather a masculine perspective on morality, founded on principles of justice and rights. In her 1982 book In a Different Voice, she further posited that men and women have tendencies to view morality in different terms. Her theory claimed women tended to emphasize empathy and compassion over the notions of morality in terms of abstract duties or obligations that are privileged in Kohlberg's scale. Dana Ward stated, in an unpublished paper, that Kohlberg's scale is psychometrically sound. Subsequent research suggests that the differences in care-based or justice-based ethical approaches may be due to gender differences, or to differences in the life situations of the genders. Gilligan's summarizing of gender differences provided feminists with a voice to question the moral values and practices of society as masculine. Relationship to traditional ethical positions Care ethics is different from other ethical models, such as consequentialist theories (e.g. utilitarianism) and deontological theories (e.g. Kantian ethics), in that it seeks to incorporate traditionally feminine virtues and values which, proponents of care ethics contend, are absent in traditional models of ethics. One of these values is the placement of caring and relationship over logic and reason. In care ethics, reason and logic are subservient to natural care, that is, care that is done out of inclination. This is in contrast to deontology, where actions taken out of inclination are unethical. Virginia Held has noted the similarities between care ethics and virtue ethics but distinguished it from the virtue ethics of British moralists such as Hume in that people are seen as fundamentally relational rather than independent individuals. 
Other philosophers have argued about the relation between care ethics and virtue ethics, taking various positions on the question of how closely the two are related. Jason Josephson Storm argued for close parallels between the ethics of care and traditional Buddhist virtue ethics, especially the prioritization of compassion by Śāntideva and others. Other scholars had also previously connected ethics of care with Buddhist ethics. Care ethics as feminist ethics While some feminists have criticized care-based ethics for reinforcing traditional gender stereotypes, others have embraced parts of the paradigm under the theoretical concept of care-focused feminism. Care-focused feminism, alternatively called gender feminism, is a branch of feminist thought informed primarily by the ethics of care as developed by Carol Gilligan and Nel Noddings. This theory is critical of how caring is socially engendered, being assigned to women and consequently devalued. "Care-focused feminists regard women's capacity for care as a human strength" which can and should be taught to and expected of men as well as women. Noddings proposes that ethical caring could be a more concrete evaluative model of moral dilemmas than an ethic of justice. Noddings' care-focused feminism requires practical application of relational ethics, predicated on an ethic of care. Ethics of care is a basis for care-focused feminist theorizing on maternal ethics. These theories recognize caring as an ethically relevant issue. Critical of how society engenders caring labor, theorists Sara Ruddick, Virginia Held, and Eva Feder Kittay suggest caring should be performed and care givers valued in both public and private spheres. This proposed paradigm shift in ethics encourages the view that an ethic of caring be the social responsibility of both men and women. Joan Tronto argues that the definition of "ethic of care" is ambiguous due in part to its not playing a central role in moral theory. She argues that, considering that moral philosophy is engaged with human goodness, care would appear to assume a significant role in this type of philosophy. However, this is not the case, and Tronto further stresses the association between care and "naturalness". The latter term refers to the socially and culturally constructed gender roles where care is mainly assumed to be the role of the woman. As such, care loses the power to take a central role in moral theory. Tronto states there are four ethical qualities of care: Attentiveness: Attentiveness is crucial to the ethics of care because care requires a recognition of others' needs in order to respond to them. The question which arises is the distinction between ignorance and inattentiveness. Tronto poses this question as such: "But when is ignorance simply ignorance, and when is it inattentiveness?" Responsibility: In order to care, we must take it upon ourselves; thus, responsibility. The problem associated with this second ethical element of responsibility is the question of obligation. Obligation is often, if not already, tied to pre-established societal and cultural norms and roles. Tronto makes the effort to differentiate the terms "responsibility" and "obligation" with regard to the ethic of care. Responsibility is ambiguous, whereas obligation refers to situations where action or reaction is due, such as the case of a legal contract. 
This ambiguity allows for ebb and flow in and between class structures and gender roles, and to other socially constructed roles that would bind responsibility to those only befitting of those roles. Competence: To provide care also means competency. One cannot simply acknowledge the need to care and accept the responsibility but not follow through with adequate care, as such action would result in the need for care not being met. Responsiveness: This refers to the "responsiveness of the care receiver to the care". Tronto states, "Responsiveness signals an important moral problem within care: by its nature, care is concerned with conditions of vulnerability and inequality". She further argues responsiveness does not equal reciprocity. Rather, it is another method to understand vulnerability and inequality by understanding what has been expressed by those in the vulnerable position, as opposed to re-imagining oneself in a similar situation. In 2013, Tronto added a fifth ethical quality: Plurality, communication, trust and respect; solidarity or caring with: Together, these are the qualities necessary for people to come together in order to take collective responsibility, to understand their citizenship as always imbricated in relations of care, and to take seriously the nature of caring needs in society. In politics It is often suggested that the ethics of care is only applicable within families and groups of friends, but many feminist theorists have argued against this suggestion, including Ruddick, Manning, Held, and Tronto. Attempts have been made to apply principles from the ethics of care more generally, by identifying values in one particular caring relationship and applying these values to other situations. Moral values are seen as embedded in acts of care. The ethics of care is contrasted with theories based on the "liberal individual" and a social contract, following Locke and Hobbes. Ethics-of-care theorists note that in many situations, such as childhood, there are very large power imbalances between individuals, and so these relationships are based on care rather than any form of contract. Noting the power imbalances that can exist in society, it is argued that care may be a better basis for understanding society than freedom and social contracts. In mental health Psychiatrist Kaila Rudolph noted that care ethics aligns with a trauma-informed care framework in psychiatry. Criticism In the field of nursing, the ethics of care has been criticized by Peter Allmark, Helga Kuhse, and John Paley. Allmark criticized its focus on the mental state of the carer, on the grounds that subjectively caring does not prevent an individual's care from being harmful. Allmark also criticized the theory for conflicting with the idea of treating everyone with unbiased consideration, which he considered necessary in certain situations. Care ethics has also been criticised for failing to protect the individual from paternalism; critics note there is a risk of caregivers mistaking their own needs for those of the people they care for. Individuals may need to cultivate the ability to distinguish their own needs from those of the people they care for, with Ruddick arguing for a need to respect the "embodied willfulness" of those who are cared for. See also Theorists References Further reading Care Altruism Environmentalism Ecofeminism Feminism Feminist ethics Liberalism Left-wing politics Progressivism Relational ethics Social justice Feminist philosophy Ethical theories
Ethics of care
Biology
1,998
67,247,989
https://en.wikipedia.org/wiki/MARC-60
The MARC-60 (Mitsubishi Aerojet Rocketdyne Collaboration), also known as MB-60, MB-XX, and RS-73, is a liquid-fuel cryogenic rocket engine designed as a collaborative effort by Japan's Mitsubishi Heavy Industries and the American company Aerojet Rocketdyne. The engine burns cryogenic liquid oxygen and liquid hydrogen in an open expander cycle, driving the turbopumps with waste heat from the main combustion process. Description The MB-XX program shared the development duties of the engines between Boeing's Rocketdyne division (now Aerojet Rocketdyne) and the Japanese Mitsubishi Heavy Industries. Under the agreement, Boeing developed the LOX and LH turbopumps and the nozzle, while MHI developed the thrust chamber assembly (TCA), control systems, gimbal bearing, heat exchanger, and ducts. The TCA of the engine consists of the main combustion chamber, the regeneratively cooled portion of the nozzle, the injector, and the ignition system. Under the MB-XX program two engines were developed: the MARC-60 (MB-60) and the MB-35. Published specifications, as listed in 2003, predate the MARC-60's subsequent evolution. History The development program for the MARC-60 (then MB-60) was announced on 14 February 2000 by Boeing's Rocketdyne division and Japan's Mitsubishi Heavy Industries, as a part of the MB-XX family of cryogenic upper stage rocket engines. The aim of the MB-XX program was to develop an engine with "robust operating margins, high reliability, increased thrust, and high specific impulse at an affordable cost". The MB-XX family of engines was intended to be used on new or upgraded upper stages of Boeing's Delta IV and MHI's H-IIA families of launch vehicles. Potential applications also included Lockheed Martin's Atlas V. Both Delta IV and Atlas V are now operated by United Launch Alliance. Development of the MB-XX family of engines was started in early 1999. From 2000 to 2001, market forces drove the focus of the MB-XX program from the 267 kN (60,000 lbf) MB-60 to the 156 kN (35,000 lbf) MB-35. The MB-35 was not a new design; instead, the existing MB-60 design was tuned to operate at the lower thrust level. The MB-35 was designed to be a modern, drop-in replacement for the Aerojet Rocketdyne RL10. Component-level testing of the MB-XX demonstrator was completed in 2004, and a system-level demonstrator engine was successfully hot-fired in September 2005. In 2013, NASA was evaluating the MARC-60 as the engine of choice for the Space Launch System's Exploration Upper Stage (EUS). The study explored the possibility of utilizing two MARC-60 engines in place of four RL10 engines, as well as the possibility of the stage using a single J-2X engine. Under the plan, the engine's control unit would have been provided by NASA. The proposal also resulted in the engine being renamed to MARC-60, as Rocketdyne had changed hands multiple times after the MB-60's (Mitsubishi Boeing-Rocketdyne) inception in 1999. In 2016 NASA announced that the EUS would be powered by four RL10C-3 engines, dropping both the MARC-60 and the J-2X. See also RL60, a LOX/LH expander cycle engine of the same thrust and weight class RL10, the closed expander cycle LOX/LH engine that was supposed to be replaced by the MB-35, the down-scaled version of the MARC-60 References Rocketdyne engines Rocket engines Rocket engines using the expander cycle Rocket engines using hydrogen propellant
MARC-60
Technology
798
38,488,533
https://en.wikipedia.org/wiki/Ramaria%20strasseri
Ramaria strasseri is a species of coral fungus in the family Gomphaceae. First described by Giacomo Bresadola in 1900 as Clavaria strasseri, it was transferred to the genus Ramaria in 1950 by E.J.H. Corner. References Gomphaceae Fungi described in 1900 Taxa named by Giacomo Bresadola Fungus species
Ramaria strasseri
Biology
79
39,618,697
https://en.wikipedia.org/wiki/The%20Doomsday%20Machine%20%28book%29
The Doomsday Machine: The High Price of Nuclear Energy, the World's Most Dangerous Fuel is a 2012 book by Martin Cohen and Andrew McKillop which addresses a broad range of concerns regarding the nuclear industry, the economics and environmental aspects of nuclear energy, nuclear power plants, and nuclear accidents. The book has been described by The New York Times as "a polemic on the evils of splitting the atom". Synopsis Economic fundamentals "The usual rule of thumb for nuclear power is that about two thirds of the generation cost is accounted for by fixed costs, the main ones being the cost of paying interest on the loans and repaying the capital..." Areva, the French nuclear plant operator, for example, states that 70 percent of the cost of a kWh of nuclear electricity is accounted for by the fixed costs from the construction process. In the foreword to the book, Steve Thomas, Professor of Energy Studies at the University of Greenwich in the UK, states that "the economic realities of rapidly escalating costs and insurmountable financing problems... will mean that the much-hyped nuclear renaissance will one day be remembered as just another 'nuclear myth'." In discussions about the economics of nuclear power, the authors explain, what is often not appreciated is that the cost of equity, that is, companies using their own funds to pay for new plants, is generally higher than the cost of debt. Another advantage of borrowing may be that "once large loans have been arranged at low interest rates—perhaps with government support—the money can then be lent out at higher rates of return". Environmental platform As Matthew Wald in the New York Times noted, despite being at its heart environmentalist, the book challenges certain Green orthodoxies, notably the idea that whatever the risks of nuclear energy, the threat from man-made climate change is greater. As Chiara Proietti Silvestri wrote in a review of The Doomsday Machine for the Italian energy journal Energia, the authors argue that the fight against pollution from CO2, generated mostly by human activities and described as the primary cause of rising temperatures at the global level, thus becomes a "simple story" told by a club dominated by Anglophone countries in an attempt to defend and promote particular national interests. Reducing huge national subsidies to domestic coal industries and promoting the lucrative market for nuclear power stations are cases in point. The book describes how fossil fuels remain the main source of energy for the world, while nuclear power manages to meet only three percent of the needs of world energy. This leads to the question: why is the figure so low when nuclear is so often described as a key part of the "world energy mix"? The authors explain that the trick is in the "fiddling" of statistics, writing that the "key point about world energy is that it is almost all thermal. Whether it is created by burning coal, oil or gas, or firewood or dung, or even running nuclear power plants, the first thing produced is heat". Historical perspective Although nuclear power is still today presented as the energy of the future ("the first myth"), its roots are paradoxically in the speech "Atoms for Peace", by President Eisenhower. The authors state that this history shows that the origins of nuclear power lie with military needs, and are anything but peaceful, and that Eisenhower's speech itself was an attempt to distract the world from the tests of the US hydrogen bomb. 
Scientists — from Albert Einstein and Georges Lemaître to Enrico Fermi and Robert Oppenheimer — come in for criticism for serving the same military strategies. The history of nuclear power is also marked by the slogan — "too cheap to meter" — described as the foundation of another myth. The book argues that nuclear power is not and has never been cheap, but rather found the resources to tap public subsidies, special systems of taxation, government loans and other beneficial guarantees. The arrival of the liberalization of the electrical market has had a strong impact on the nuclear industry, revealing the true costs (e.g. the over-run in costs on the new EPR at Olkiluoto in Finland). This leads to "tricks" to manipulate figures (cost projections of construction, decommissioning and insurance schemes, the extension of the life of the reactors, the reuse of the depleted fuel) in order to conceal the fundamental non-affordability. Today, according to the authors, the nuclear lobby triumphantly describes the flow of new orders, especially in developing countries, where the "environment tends to remain a 'free good'", and there is a "cultural indifference to public hazard and risk", all of which, they argue, raises new concerns about environmental protection, technical expertise and political instability. Reception A central theme of the book is the issue of the true economic cost of nuclear electricity. The preface by Steve Thomas indicates the information density with which the authors construct their arguments, expressed nonetheless in witty and tight language, as the Italian Energia review put it, while according to Kirkus: "The authors deliver a convincing account of the partnership between industry and government (essential because nuclear plants require massive subsidies) to build wildly expensive generators whose electricity remains uncompetitive without more subsidies." In another review, science policy writer Jon Turney stated that the "strongest suit" of the book was "energy economics and supply data". The New York Times's Matthew L. Wald analyzes the argument put forth in The Doomsday Machine that "even if global warming science was not explicitly invented by the nuclear lobby, the science could hardly suit the lobby better". He comments, "In fact, the [nuclear] industry continues to argue that in the United States it is by far the largest source of zero-carbon energy, and recently began a campaign of upbeat ads to improve its image." Finding the claim that "In almost every country — usually for reasons completely unrelated to its ability to deliver electricity — there is almost universal political support for nuclear power" is "probably an exaggeration" in the cases of post-Fukushima Japan and Germany, Wald agrees that "two countries with enormous demand for electricity and not much hand-wringing over global warming, are planning huge reactor construction projects". Wald notes that, even in Japan, the "catastrophe plays in some quarters as a reason to build new reactors". New Scientist's Fred Pearce panned the book, calling it "mendacious and frequently anti-scientific", remarking that it "combines hysterical opposition to all things nuclear with an equally deranged climate-change denialism". See also List of books about nuclear issues List of books about renewable energy List of pro-nuclear environmentalists Contesting the Future of Nuclear Power Nuclear or Not? Reaction Time (book) Non-Nuclear Futures Notes References Cohen, Martin; and Andrew McKillop (2012). 
The Doomsday Machine, Palgrave Macmillan, 256 pages. 2012 non-fiction books Nuclear power Energy policy Books about nuclear issues Palgrave Macmillan books
The Doomsday Machine (book)
Physics,Environmental_science
1,451
2,147,062
https://en.wikipedia.org/wiki/Ecology%20of%20Bermuda
Bermuda's ecology has an abundance of unique flora and fauna due to the island's isolation from the mainland of North America. With their wide range of endemic species, the islands form a distinct ecoregion, the Bermuda subtropical conifer forests. The variety of species found both on land and in the waters surrounding Bermuda have varying positive and negative impacts on the ecosystem of the island, depending on the species. Varying biotic and abiotic factors have threatened and continue to threaten the island's ecology. There are, however, also means of conservation that can be used to mitigate these threats. Setting Located 900 km off the American East Coast, Bermuda is a crescent-shaped chain of 184 islands and islets that were once the rim of a volcano. The islands are slightly hilly rather than having steep cliffs, with the highest point being 79 m. The coast has many bays and inlets, with sandy beaches especially on the south coasts. Bermuda has a semi-tropical climate, warmed by the Gulf Stream current. Bermuda is very densely populated. Twenty of the islands are inhabited. The island's wildlife descends from species that could fly there or were carried by winds and currents. There are no native mammals other than bats, and only two reptiles, but there are large numbers of birds, plants, and insects. Once on the island, organisms had to adapt to local conditions, such as the humid climate, lack of fresh water, frequent storms, and salt spray. The area of the islands shrank as water levels rose at the end of the Pleistocene epoch, and fewer species were able to survive in the reduced land-area. Nearly 8,000 different species of flora and fauna are known from the islands of Bermuda. The number is likely to be considerably higher if microorganisms, cave-dwellers and deep-sea species were counted. Today the variety of species on Bermuda has been greatly increased by introductions, both deliberate and accidental. Many of these introduced species have posed a threat to the native flora and fauna because of competition and interference with habitat. Plants Over 1000 species of vascular plants are found on the islands, the majority of which were introduced. Of the 165 native species, 17 are endemic. Forest cover is around 20% of the total land area, equivalent to 1,000 hectares of forest in 2020, which was unchanged from 1990. At the time of the first human settlement by shipwrecked English sailors in 1593, Bermuda was dominated by forests of Bermuda cedar (Juniperus bermudiana) with mangrove swamps on the coast. More deliberate settlement began after 1609, and colonists began clearing the forests to use the timber for construction and shipbuilding, and to develop agricultural cultivation. By the 1830s, the demands of the shipbuilding industry had denuded the forests, but these recovered in many areas. In the 1940s the cedar forests were devastated by introduced scale insects, which killed roughly eight million trees. Replanting using resistant trees has taken place since then, but the area covered by cedar is only 10% of what it used to be. Another important component of the original forest was Bermuda palmetto (Sabal bermudana), a small palm tree. It now grows in a few small patches, notably at Paget Marsh. Other trees and shrubs include Bermuda olivewood (Cassine laneana) and Bermuda snowberry (Chiococca alba). The climate allows for the growth of introduced palms such as royal palm (Roystonea spp.)
and coconut palm (Cocos nucifera), although the coconuts seldom fruit properly due to the relatively moderate temperatures on the island. Bermuda is the farthest north location where coconut palms grow naturally. Remnant patches of mangrove swamp can be found around the coast and at some inland sites, including Hungry Bay Nature Reserve and Mangrove Lake. These are important for moderating the effects of storms and providing transitional habitats. Here black mangrove (Avicennia germinans) and red mangrove (Rhizophora mangle) are the northernmost mangroves in the Atlantic. The inland swamps are particularly interesting as mangroves thrive in salty water; in this case, the saltwater arrives through underground channels rather than the usual tidal wash of coastal mangrove swamps. Areas of peat marsh include Devonshire, Pembroke, and Paget marshes. Bermuda has four endemic ferns: Bermuda maidenhair fern (Adiantum bellum), Bermuda shield fern (Thelypteris bermudiana), Bermuda cave fern (Ctenitis sloanei) and Governor Laffan's fern (Diplazium laffanianum). The latter is extinct in the wild but is grown at Bermuda Botanical Gardens. The endemic flora of the island also include two mosses, ten lichens and forty fungi. Among the many introduced species are the casuarina (Casuarina equisetifolia) and Suriname cherry (Eugenia uniflora). Endemic Bermudiana (Sisyrinchium bermudiana) Darrell's fleabane (Erigeron darrellianus) Bermuda campylopus (moss) (Campylopus bermudianus) Bermuda bean (Phaseolus lignosus) Bermuda spike rush (Eleocharis bermudiana) Bermuda trichostomum (moss) (Trichostomum bermudanum) Governor Laffan's fern (Diplazium laffanianum) Native Forestiera (Forestiera segregata) Lamarck's trema (Trema lamarckiana) Black mangrove (Avicennia nitida) White stopper (Eugenia axillaris) Wild coffee shrub (Psychotria undata) Yellow wood (Zanthoxylum flavum) Land animals Amphibians Bermuda has no native amphibians. A species of toad, the cane toad (Rhinella marina), and two species of frog, the Antilles coqui (Eleutherodactylus johnstonei) and Eleutherodactylus gossei, were introduced by humans through the transportation of orchids to the island prior to the 1900s, and subsequently became naturalized. R. marina and E. johnstonei are common, but E. gossei is thought to have been recently extirpated. They are nocturnal and can often be heard at night in Bermuda. Their songs are most prevalent from April until November. Reptiles Four species of lizard and two species of turtle comprise Bermuda's non-marine reptilian fauna. Of the lizards, the Bermuda rock lizard (Plestiodon longirostris), also known as the rock lizard or Bermuda skink, is the only endemic species. Once very common, the Bermuda skink is critically endangered. The Jamaican anole (Anolis grahami) was deliberately introduced in 1905 from Jamaica and is now by far the most common lizard in Bermuda. The Leach's anole (Anolis leachii) was accidentally introduced from Antigua about 1940 and is now common. The Barbados anole (Anolis extremus) was accidentally introduced about 1940 and is rarely seen. The diamondback terrapin (Malaclemys terrapin) is native to Bermuda. The red-eared slider turtle (Trachemys scripta elegans) was introduced as a pet, but has subsequently become invasive. Mammals All mammals in Bermuda were introduced by humans, except for four species of migratory North American bats of the genus Lasiurus: the hoary bat, eastern red bat, Seminole bat and silver-haired bat.
Early accounts refer to wild or feral hogs, descendants of pigs left by the Spanish and Portuguese as a food supply for ships stopping at the islands for provisions. The house mouse, brown rat and black rat were accidentally introduced soon after the settlement of Bermuda, and feral cats have become common as another introduced species. Birds Over 360 species of bird have been recorded on Bermuda. The majority of these are migrants or vagrants from North America or elsewhere. Only 24 species breed on the island; 13 of these are thought to be native. One endemic species is the Bermuda petrel or cahow (Pterodroma cahow), which was thought to have been extinct since the 1620s. Its ground-nesting habitats had been severely disrupted by introduced species, and colonists had killed the birds for food. In 1951, researchers discovered 18 breeding pairs, and started a recovery program to preserve and protect the species. Another endemic subspecies is the Bermuda white-eyed vireo or chick-of-the-village (Vireo griseus bermudianus). The national bird of Bermuda is the white-tailed tropicbird or longtail, a summer migrant for which Bermuda is the most northerly breeding site. Other native birds include the eastern bluebird, grey catbird and perhaps the common ground dove. The common moorhen is the most common native waterbird; very small numbers of American coot and pied-billed grebe breed. Small numbers of common tern nest around the coast. The barn owl and mourning dove colonized the island during the 20th century, and the green heron has recently begun to breed. Of the introduced birds, the European starling, house sparrow, great kiskadee, rock dove, American crow and chicken are all very numerous and considered to be pests. Other introduced species include the mallard, northern cardinal, European goldfinch and small numbers of orange-cheeked and common waxbills. The yellow-crowned night heron was introduced in the 1970s to replace the extinct native heron. Fossil remains of a variety of species have been found on the island, including a crane, an owl and the short-tailed albatross. Some of these became extinct as the islands' land-mass shrank by nine tenths after the Last Glacial Maximum, while others were exterminated by early settlers. The Bermuda petrel was thought to be extinct until its rediscovery in 1951. Among the many non-breeding migrants are a variety of shorebirds, herons and ducks. In spring many shearwaters can be seen off the South Shore. Over 30 species of New World warbler are seen each year, with the yellow-rumped warbler being the most abundant. The arrival of many species is dependent on weather conditions; low-pressure systems moving across from North America often bring many birds to the islands. Among the rare visitors recorded are the Siberian flycatcher from Asia and the fork-tailed flycatcher and tropical kingbird from South America. Insects Lawrence Ogilvie, Bermuda's agricultural scientist from 1923 to 1928, identified 395 local insects and wrote the Department of Agriculture's 52-page book The Insects of Bermuda; among them was Aphis ogilviei, which he discovered. Ants There are four ant species found in Bermuda. The African big-headed ant (Pheidole megacephala) and the Argentine ant (Linepithema humile) are both invasive to Bermuda. The African ant was first recorded on the island in 1889, and the Argentine ant arrived in Bermuda in the 1940s. These two ants battle for territory and control over the island.
Furthermore, there is the Bermuda ant (Odontomachus insularis), which is indigenous to the island. This ant was initially presumed to be extinct; however, it was rediscovered alive in July 2002. Carpenter ants (Camponotus spp.) are also found in Bermuda. Terrestrial invertebrates More than 1100 kinds of insects and spiders are found on Bermuda, including 41 endemic insects and a possibly endemic spider. Eighteen species of butterfly have been seen; about six of these breed on the islands, including the large monarch and the very common Bermuda buckeye (Junonia coenia bergi). More than 200 moths have been recorded; one of the most conspicuous is Pseudosphinx tetrio, which is notable for its large wingspan. Bermuda has lost a number of its endemic invertebrates, including the Bermuda cicada (Neotibicen bermudianus), which became extinct when the cedar forests disappeared. Some species feared extinct have been rediscovered, including a Bermuda land snail (Poecilozonites circumfirmatus) and the Bermuda ant (Odontomachus insularis). Marine life Bermuda lies on the western edge of the Sargasso Sea, an area with high salinity, high temperature and few currents. Large quantities of seaweed of the genus Sargassum are present and there are high concentrations of plankton, but the area is less attractive to commercial fish species and seabirds. Greater diversity is present in the coral reefs which surround the island. Marine mammals A variety of whales, dolphins and porpoises have been recorded in the waters around Bermuda. The most common of these is the humpback whale, which passes the islands in April and May during its northward migration. Fish There are many fish species in Bermuda's waters, such as the barracuda, Bermuda chub, bluestriped grunt, hogfish, longspine squirrelfish, various types of parrotfish, smooth trunkfish, and slippery dick, to name a few. Marine invertebrates Sea squirts There are various types of sea squirts, such as the black sea squirt (Phallusia nigra), the purple sea squirt (Clavelina picta), the orange sea squirt (Ecteinascidia turbinata), and the lacy sea squirt (Botrylloides nigrum). Crustaceans There are various types of crabs in Bermuda: Sally Lightfoot crabs (Grapsus grapsus), decorator crabs, swimming crabs (Portunidae), spider crabs (Majoidea), and Verrill's hermit crab (Calcinus verrillii). Great land crabs (Cardisoma guanhumi) are hard to find, but present in Bermuda. Finally, there is the Bermuda land crab (Gecarcinus lateralis), which is native to, but not exclusive to, Bermuda. Threats and preservation Bermuda was the first place in the Americas to pass conservation laws, protecting the Bermuda petrel in 1616 and the Bermuda cedar in 1622. It has a well-organised network of protected areas including Spittal Pond, marshes in Paget, Devonshire and Pembroke Parishes, Warwick Pond and the hills above Castle Harbour. Only small areas of natural forest remain today; much was cleared after colonisation began in the 17th century, and recovered forest was lost in the 1940s due to insect infestation. The Bermuda petrel and Bermuda skink are highly endangered, and Bermuda cedar, Bermuda palmetto and Bermuda olivewood are all listed as threatened species. Some wild plants, including a spike rush, have disappeared. Introduced plants and animals have had adverse effects on the wildlife of the islands. The thriving tourist industry creates its own challenges to preserve the wildlife and habitat that attract visitors. Parrotfish are crucial to the coral reefs of Bermuda.
Overfishing has caused issues for parrotfish that live in the coastal waters of other islands, such as the U.S. Virgin Islands. Studying these declines in other habitats can help conservationists prevent the parrotfish population in Bermuda from experiencing a similar decline. Birds such as the white-tailed tropicbird (Bermuda longtail) are strongly affected by hurricanes, as the hurricanes harm individuals and destroy their nests, which makes reproduction nearly impossible. Invasive species have been known to use similar nesting sites. Specifically, the rock pigeon often builds its nests within crevices around the island, including on rocky shorelines and cracks in Bermuda's tall cliffs. This is also where the Bermuda longtail nests, so the competition makes reproduction harder for the local species. References Amos, Eric J. R. (1991) A Guide to the Birds of Bermuda, privately published. Bermuda Aquarium, Museum and Zoo Bermuda Biodiversity Project, downloaded 21 February 2007. Dobson, A. (2002) A Birdwatching Guide to Bermuda, Arlequin Press, Chelmsford, UK. Flora of Bermuda (Illustrated) by Nathaniel Lord Britton, Ph.D., Sc.D., LL.D. (Published 1918) Forbes, Keith Archibald (2007) Bermuda's Fauna, downloaded 21 February 2007. Gehrman, Elizabeth (2012) Rare Birds: The Extraordinary Tale of the Bermuda Petrel and the Man Who Brought It Back from Extinction (Beacon Press). Ogden, George (2002) Bermuda A Gardener's Guide, The Garden Club of Bermuda Ogilvie, Lawrence (1928) The Insects of Bermuda Raine, André (2003) A Field Guide to the Birds of Bermuda, Macmillan, Oxford. External links Bermuda Audubon Society Bermuda biodiversity, Flickr Bermuda National Trust Bermuda Government, Department of Environment and Natural Resources Environment of Bermuda Nearctic ecoregions Tropical and subtropical coniferous forests Bermuda
Ecology of Bermuda
Biology
3,441
1,633,173
https://en.wikipedia.org/wiki/Ecological%20engineering
Ecological engineering uses ecology and engineering to predict, design, construct or restore, and manage ecosystems that integrate "human society with its natural environment for the benefit of both". Origins, key concepts, definitions, and applications Ecological engineering emerged as a new idea in the early 1960s, but its definition has taken several decades to refine. Its implementation is still undergoing adjustment, and its broader recognition as a new paradigm is relatively recent. Ecological engineering was introduced by Howard Odum and others as utilizing natural energy sources as the predominant input to manipulate and control environmental systems. The origins of ecological engineering are in Odum's work with ecological modeling and ecosystem simulation to capture holistic macro-patterns of energy and material flows affecting the efficient use of resources. Mitsch and Jorgensen summarized five basic concepts that differentiate ecological engineering from other approaches to addressing problems to benefit society and nature: 1) it is based on the self-designing capacity of ecosystems; 2) it can be the field (or acid) test of ecological theories; 3) it relies on systems approaches; 4) it conserves non-renewable energy sources; and 5) it supports ecosystem and biological conservation. Mitsch and Jorgensen were the first to define ecological engineering as designing societal services such that they benefit society and nature, and later noted the design should be systems-based, sustainable, and integrate society with its natural environment. Bergen et al. defined ecological engineering as: 1) utilizing ecological science and theory; 2) applying to all types of ecosystems; 3) adapting engineering design methods; and 4) acknowledging a guiding value system. Barrett (1999) offers a more literal definition of the term: "the design, construction, operation and management (that is, engineering) of landscape/aquatic structures and associated plant and animal communities (that is, ecosystems) to benefit humanity and, often, nature." Barrett continues: "other terms with equivalent or similar meanings include ecotechnology and two terms most often used in the erosion control field: soil bioengineering and biotechnical engineering. However, ecological engineering should not be confused with 'biotechnology' when describing genetic engineering at the cellular level, or 'bioengineering' meaning construction of artificial body parts." The applications of ecological engineering can be classified into three spatial scales: 1) mesocosms (~0.1 to hundreds of meters); 2) ecosystems (~one to tens of km); and 3) regional systems (>tens of km). The complexity of the design likely increases with the spatial scale. Applications are increasing in breadth and depth, and are likely impacting the field's definition, as more opportunities to design and use ecosystems as interfaces between society and nature are explored. Implementation of ecological engineering has focused on the creation or restoration of ecosystems, from degraded wetlands to multi-celled tubs and greenhouses that integrate microbial, fish, and plant services to process human wastewater into products such as fertilizers, flowers, and drinking water.
Applications of ecological engineering in cities have emerged from collaboration with other fields such as landscape architecture, urban planning, and urban horticulture, to address human health and biodiversity, as targeted by the UN Sustainable Development Goals, with holistic projects such as stormwater management. Applications of ecological engineering in rural landscapes have included wetland treatment and community reforestation through traditional ecological knowledge. Permaculture is an example of the broader applications that have emerged as distinct disciplines from ecological engineering; David Holmgren cites the influence of Howard Odum in the development of permaculture. Design guidelines, functional classes, and design principles Ecological engineering design combines systems ecology with the process of engineering design. Engineering design typically involves problem formulation (goal), problem analysis (constraints), a search for alternative solutions, decision among alternatives, and specification of a complete solution. Matlock et al. provide a temporal design framework, in which design solutions are considered in ecological time. In selecting between alternatives, the design should incorporate ecological economics in design evaluation and acknowledge a guiding value system which promotes biological conservation, benefiting society and nature. Ecological engineering utilizes systems ecology with engineering design to obtain a holistic view of the interactions within and between society and nature. Ecosystem simulation with Energy Systems Language (also known as energy circuit language or energese) by Howard Odum is one illustration of this systems ecology approach. This holistic model development and simulation defines the system of interest, identifies the system's boundary, and diagrams how energy and material move into, within, and out of a system, in order to identify how to use renewable resources through ecosystem processes and increase sustainability. The system it describes is a collection (i.e., group) of components (i.e., parts), connected by some type of interaction or interrelationship, that collectively responds to some stimulus or demand and fulfills some specific purpose or function. By understanding systems ecology, the ecological engineer can more efficiently design with ecosystem components and processes within the design, utilize renewable energy and resources, and increase sustainability. Mitsch and Jorgensen identified five Functional Classes for ecological engineering designs: Ecosystem utilized to reduce/solve a pollution problem. Example: phytoremediation, wastewater wetland, and bioretention of stormwater to filter excess nutrients and metals pollution Ecosystem imitated or copied to address a resource problem. Example: forest restoration, replacement wetlands, and installing street side rain gardens to extend canopy cover to optimize residential and urban cooling Ecosystem recovered after disturbance. Example: mine land restoration, lake restoration, and channel aquatic restoration with mature riparian corridors Ecosystem modified in an ecologically sound way. Example: selective timber harvest, biomanipulation, and introduction of predator fish to reduce planktivorous fish, increase zooplankton, consume algae or phytoplankton, and clarify the water. Ecosystems used for benefit without destroying balance.
Example: sustainable agro-ecosystems, multispecies aquaculture, and introducing agroforestry plots into residential property to generate primary production at multiple vertical levels. Mitsch and Jorgensen identified 19 Design Principles for ecological engineering, though not all are expected to contribute to any single design: Ecosystem structure & function are determined by the forcing functions of the system; Energy inputs to the ecosystems and available storage of the ecosystem are limited; Ecosystems are open and dissipative systems (not a thermodynamic balance of energy, matter and entropy, but the spontaneous appearance of complex, chaotic structure); Attention to a limited number of governing/controlling factors is most strategic in preventing pollution or restoring ecosystems; Ecosystems have some homeostatic capability that results in smoothing out and depressing the effects of strongly variable inputs; Match recycling pathways to the rates of ecosystems to reduce pollution effects; Design for pulsing systems wherever possible; Ecosystems are self-designing systems; Processes of ecosystems have characteristic time and space scales that should be accounted for in environmental management; Biodiversity should be championed to maintain an ecosystem's self-design capacity; Ecotones, or transition zones, are as important for ecosystems as membranes are for cells; Coupling between ecosystems should be utilized wherever possible; The components of an ecosystem are interconnected, interrelated, and form a network; consider direct as well as indirect effects of ecosystem development; An ecosystem has a history of development; Ecosystems and species are most vulnerable at their geographical edges; Ecosystems are hierarchical systems and are parts of a larger landscape; Physical and biological processes are interactive; it is important to know both physical and biological interactions and to interpret them properly; Eco-technology requires a holistic approach that integrates all interacting parts and processes as far as possible; Information in ecosystems is stored in structures. Mitsch and Jorgensen identified the following considerations prior to implementing an ecological engineering design: Create a conceptual model to determine the parts of nature connected to the project; Implement a computer model to simulate the impacts and uncertainty of the project; Optimize the project to reduce uncertainty and increase beneficial impacts.
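As a concrete illustration of the first two considerations, here is a minimal sketch of the model-simulate-optimize loop. The two-compartment structure and every rate constant in it are invented for the example and are not taken from Mitsch and Jorgensen:

```python
# Minimal sketch of the "conceptual model -> computer model -> optimize"
# workflow listed above. The two-compartment structure and every rate
# constant are invented for illustration, not taken from Mitsch and Jorgensen.

import random

def simulate(inflow, uptake, loss, years=20, dt=0.1):
    """Donor-controlled two-compartment energy-flow model.

    inflow : renewable energy input per year (arbitrary units)
    uptake : fraction of producer storage passed to consumers per year
    loss   : respiration/export loss rate for each compartment
    """
    producer, consumer = 1.0, 0.1          # initial storages
    for _ in range(int(years / dt)):
        transfer = uptake * producer       # donor-controlled flow
        producer += (inflow - transfer - loss * producer) * dt
        consumer += (transfer - loss * consumer) * dt
    return producer, consumer

# Crude uncertainty analysis: sweep the poorly known uptake rate and
# record the range of outcomes for the producer compartment.
outcomes = [simulate(inflow=5.0, uptake=random.uniform(0.2, 0.6), loss=0.3)
            for _ in range(100)]
producers = [p for p, _ in outcomes]
print(f"producer storage after 20 yr: min={min(producers):.2f}, "
      f"max={max(producers):.2f}")
```

Sweeping the uncertain uptake rate across a plausible range, as in the last lines, is a crude stand-in for a formal uncertainty analysis; a real design study would use calibrated rates and a proper sensitivity method.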
Relationship to other engineering disciplines The field of ecological engineering is closely related to the fields of environmental engineering and civil engineering. The three broadly overlap in the area of water resources engineering, particularly the treatment and management of stormwater and wastewater. While the three disciplines of engineering are closely related to one another, there are distinct areas of expertise within each field. Ecological engineering is primarily focused on the natural environment and natural infrastructure, emphasizing the mediation of the relationship between people and planet. In complementary disciplines, civil engineering is primarily focused on built infrastructure and public works, while environmental engineering focuses on the protection of public and environmental health through the treatment and management of waste streams. Academic curriculum (colleges) An academic curriculum was proposed for ecological engineering in 2001. Key elements of the suggested curriculum are: environmental engineering; systems ecology; restoration ecology; ecological modeling; quantitative ecology; economics of ecological engineering; and technical electives. Complementing this set of courses were prerequisite courses in physical, biological, and chemical subject areas, and integrated design experiences. According to Matlock et al., the design should identify constraints, characterize solutions in ecological time, and incorporate ecological economics in design evaluation. Economics of ecological engineering has been demonstrated using energy principles for a wetland, and using nutrient valuation for a dairy farm. With these principles in mind, the world's first B.S. Ecological Engineering program was formalized in 2009 at Oregon State University. In 2024, the US Accreditation Board for Engineering and Technology, Inc. (ABET) published criteria for accreditation of Ecological Engineering programs for the first time. To be accredited, B.S. Ecological Engineering programs must include: mathematics through differential equations, probability and statistics, calculus-based physics, and college-level chemistry; earth science, fluid mechanics, hydraulics, and hydrology; biological and advanced ecological sciences that focus on multi-organism self-sustaining systems at a range of scales, systems ecology, ecosystem services, and ecological modeling; material and energy balances; fate and transport of substances in and between air, water, and soil; thermodynamics of living systems; and applications of ecological principles to engineering design that include considerations of climate, species diversity, self-organization, uncertainty, sustainability, resilience, interactions between ecological and social systems, and system-scale impacts and benefits. See also Afforestation Agroecology Agroforestry Analog forestry Biomass (ecology) Buffer strip Constructed wetland Energy-efficient landscaping Environmental engineering Forest farming Forest gardening Great Green Wall Great Plains Shelterbelt (1934- ) Great Plan for the Transformation of Nature - an example of applied ecological engineering in the 1940s and 1950s Hedgerow Home gardens Human ecology Macro-engineering Sand fence Seawater greenhouse Sustainable agriculture Terra preta Three-North Shelter Forest Program Wildcrafting Windbreak Literature Howard T. Odum (1963), "Man and Ecosystem", Proceedings, Lockwood Conference on the Suburban Forest and Ecology, in: Bulletin Connecticut Agric. Station. W.J. Mitsch (1993), "Ecological engineering: a cooperative role with the planetary life-support systems", Environmental Science & Technology 27:438-445. H.D. van Bohemen (2004), Ecological Engineering and Civil Engineering Works, Doctoral thesis, TU Delft, The Netherlands. References External links What is "ecological engineering"? Webtext, Ecological Engineering Group, 2007. Ecological Engineering Student Society Website, EESS, Oregon State University, 2011. Ecological Engineering webtext by Howard T. Odum Center for Wetlands at the University of Florida, 2007. Organizations American Ecological Engineering Society, homepage. American Society of Professional Wetland Engineers, homepage, wiki. Ecological Engineering Group, homepage. International Ecological Engineering Society homepage. Scientific journals Ecological Engineering since 1992, with a general description of the field.
Landscape and Ecological Engineering since 2005. Journal of Ecological Engineering Design, officially launched in 2021; this journal offers a diamond open access format (free to the reader, free to the authors) and is the official journal of the American Ecological Engineering Society, with production support from the University of Vermont Libraries. Ecological restoration Environmental terminology Environmental engineering Environmental social science Engineering disciplines Climate change policy
Ecological engineering
Chemistry,Engineering,Environmental_science
2,470
5,298,904
https://en.wikipedia.org/wiki/Trioxidane
Trioxidane (systematically named dihydrogen trioxide), also called hydrogen trioxide, is an inorganic compound with the chemical formula H2O3 (which can be written as HOOOH). It is one of the unstable hydrogen polyoxides. In aqueous solutions, trioxidane decomposes to form water and singlet oxygen: H2O3 → H2O + 1O2. The reverse reaction, the addition of singlet oxygen to water, typically does not occur, in part due to the scarcity of singlet oxygen. In biological systems, however, ozone is known to be generated from singlet oxygen, and the presumed mechanism is an antibody-catalyzed production of trioxidane from singlet oxygen. Preparation Trioxidane can be obtained in small, but detectable, amounts in reactions of ozone and hydrogen peroxide, or by the electrolysis of water. Larger quantities have been prepared by the reaction of ozone with organic reducing agents at low temperatures in a variety of organic solvents, such as the anthraquinone process. It is also formed during the decomposition of organic hydrotrioxides (ROOOH). Alternatively, trioxidane can be prepared by reduction of ozone with 1,2-diphenylhydrazine at low temperature. Using a resin-bound version of the latter, relatively pure trioxidane can be isolated as a solution in organic solvent. Preparation of high-purity solutions is possible using the methyltrioxorhenium(VII) catalyst. In acetone-d6 at −20 °C, the characteristic 1H NMR signal of trioxidane could be observed at a chemical shift of 13.1 ppm. Solutions of hydrogen trioxide in diethyl ether can be safely stored at −20 °C for as long as a week. The reaction of ozone with hydrogen peroxide is known as the "peroxone process". This mixture has been used for some time for treating groundwater contaminated with organic compounds. The reaction produces H2O3 and H2O5. Structure In 1970–75, Giguère et al. observed infrared and Raman spectra of dilute aqueous solutions of trioxidane. In 2005, trioxidane was observed experimentally by microwave spectroscopy in a supersonic jet. The molecule exists in a skewed structure, with an oxygen–oxygen–oxygen–hydrogen dihedral angle of 81.8°. The oxygen–oxygen bond lengths of 142.8 pm are slightly shorter than the 146.4 pm oxygen–oxygen bonds in hydrogen peroxide. Various dimeric and trimeric forms also seem to exist. There is a trend of increasing gas-phase acidity (and correspondingly decreasing pKa) as the number of oxygen atoms in the chain increases in HOnH structures (n = 1, 2, 3). Reactions Trioxidane readily decomposes into water and singlet oxygen, with a half-life of about 16 minutes in organic solvents at room temperature, but only milliseconds in water. It reacts with organic sulfides to form sulfoxides, but little else is known of its reactivity. Recent research found that trioxidane is the active ingredient responsible for the antimicrobial properties of the well-known ozone/hydrogen peroxide mix. Because these two compounds are present in biological systems as well, it is argued that an antibody in the human body can generate trioxidane as a powerful oxidant against invading bacteria. The source of the compound in biological systems is the reaction between singlet oxygen and water (which proceeds in either direction, of course, according to concentrations), with the singlet oxygen being produced by immune cells.
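The half-lives quoted in the Reactions section translate directly into first-order decay arithmetic, as in the short sketch below. The kinetics relation k = ln 2 / t½ is standard; treating "milliseconds" as exactly 1 ms is an assumed order of magnitude for the example, not a measured value:

```python
# First-order decay arithmetic for the half-lives quoted in the Reactions
# section. k = ln(2) / t_half is standard kinetics; the 1 ms figure for
# water below is an assumed order of magnitude, not a measured value.

import math

def fraction_remaining(t_seconds, t_half_seconds):
    k = math.log(2) / t_half_seconds      # first-order rate constant
    return math.exp(-k * t_seconds)

# After one hour in an organic solvent (t_half ~ 16 min):
print(f"{fraction_remaining(3600, 16 * 60):.3f}")   # ~0.074, i.e. ~7% left
# After one second in water (t_half ~ 1 ms):
print(f"{fraction_remaining(1.0, 1e-3):.1e}")       # ~1e-301, effectively zero
```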
Computational chemistry predicts that more oxygen-chain molecules, or hydrogen polyoxides, exist, and that even indefinitely long oxygen chains can exist in a low-temperature gas. With this spectroscopic evidence, a search for these types of molecules can begin in interstellar space. A 2022 publication suggested the possibility of the presence of detectable concentrations of polyoxides in the atmosphere. See also Molozonide References Inorganic compounds Oxides Polyoxides Oxoacids
Trioxidane
Chemistry
836
41,875,425
https://en.wikipedia.org/wiki/Pi%20Gruis
π Gruis, Latinised as Pi Gruis, is an optical double comprising two unrelated stars in the constellation Grus that appear close together along the line of sight: π1 Gruis (HR 8521), a semiregular S-type star; π2 Gruis (HR 8524), an F-type star. Grus (constellation)
Pi Gruis
Astronomy
76
53,570,017
https://en.wikipedia.org/wiki/Emotion%20Review
Emotion Review is a peer-reviewed scholarly journal published by Sage Publications in association with the International Society for Research on Emotions (ISRE). The editor is W. Gerrod Parrott of Georgetown University. It is indexed in the Social Sciences Citation Index, Journal Citation Reports, and Current Contents. Aims and scope Emotion Review publishes articles covering the whole spectrum of emotions research. It is an interdisciplinary journal publishing work in anthropology, biology, computer science, economics, history, humanities, linguistics, neuroscience, philosophy, physiology, political science, psychiatry, psychology, sociology, and other areas where emotion research is active. The journal focuses on a combination of theoretical, conceptual, and review papers. It also publishes commentaries, in keeping with its aim of enhancing debate about critical issues in emotion theory and research. Articles do not include reports of empirical studies. References External links Emotion Review International Society for Research on Emotions (Emotion Review) Psychology journals Emotion English-language journals
Emotion Review
Biology
189
74,976,282
https://en.wikipedia.org/wiki/Electoral%20bonds
Electoral bonds were a mode of funding for political parties in India from their introduction in 2018 until they were struck down as unconstitutional by the Supreme Court on 15 February 2024. Following their termination, a five-judge bench headed by the Chief Justice directed the State Bank of India to hand over the identities and other details of donors and recipients to the Election Commission of India, which was in turn asked to publish them on its website. The scheme was introduced in the Finance Bill, 2017 during the Union Budget 2017-18 by then Finance Minister Arun Jaitley. It was classified as a Money Bill, and thus bypassed certain parliamentary scrutiny processes, in what was alleged to be a violation of Article 110 of the Indian constitution. Jaitley also proposed to amend the Reserve Bank of India (RBI) Act in order to facilitate the issuance of electoral bonds by banks for the purpose of political funding. Although introduced in early 2017, the Electoral Bond Scheme 2018 was notified in a Gazette by the Department of Economic Affairs in the Ministry of Finance only on 2 January 2018. According to an estimate, a total of 18,299 electoral bonds, with a monetary value of ₹9,857 crore (₹98.57 billion), were successfully transacted between March 2018 and April 2022. On 7 November 2022, the Electoral Bond scheme was amended to increase the sale days from 70 to 85 in a year in which any assembly election may be scheduled. The decision on the Electoral Bond (Amendment) Scheme, 2022 was taken shortly before the assembly elections in Gujarat and Himachal Pradesh, while the Model Code of Conduct was in force in both states. Ahead of the 2019 General Elections, Congress announced its intention to eliminate electoral bonds if the party were elected to power. The Communist Party of India (Marxist) also opposed the scheme, and was the sole national party to refuse donations through electoral bonds. On 15 February 2024, a five-judge bench of the Supreme Court of India, headed by Chief Justice DY Chandrachud, unanimously struck down the electoral bonds scheme, as well as amendments to the Representation of the People Act, the Companies Act and the Income Tax Act, as unconstitutional. The bench found the scheme "violative of RTI (Right to Information)" and of voters' right to information about political funding under Article 19(1)(a) of the Constitution. It also pointed out that the scheme "would lead to quid pro quo arrangements" between corporations and politicians. The State Bank of India was asked to hand over details of donors and recipients to the Election Commission of India by 6 March, and the ECI was to publish these online by 13 March. However, the SBI failed to submit the details by 6 March and approached the Supreme Court asking for more time. The court turned down this request, following which the details were turned over to the ECI and published on their website. Features Electoral bonds functioned like promissory notes and interest-free banking instruments. Any Indian citizen or organization registered in India could buy these bonds after fulfilling the KYC norms laid down by the Reserve Bank of India (RBI). They could be procured by a donor solely by means of cheque or digital payment, in denominations of ₹1,000, ₹10,000, ₹1 lakh, ₹10 lakh, and ₹1 crore, from specific branches of the State Bank of India (SBI).
Within a span of 15 days of issuance, these electoral bonds could be redeemed into the designated account of a political party legally registered under the Representation of the People Act, 1951 (u/s 29A) which had received at least 1% of the votes in the last election. The tranches of bonds were available for purchase for 10 days in each of January, April, July, and October, with an additional window of 30 days in the year of general elections to the Lok Sabha. Electoral bonds featured anonymity since they bore no identification of the donor or of the political party to which they were issued. In the event that the 15-day deadline was not met, neither the donor nor the receiving political party received a refund for the issued electoral bonds. Rather, the fund value of the electoral bond was remitted to the Prime Minister's National Relief Fund (PMNRF). Objective The government capped cash donations to political parties at ₹2,000. Channelling donations above this amount through the banking system would mean the declaration of assets by political parties and would also enable their traceability. Then Finance Minister Arun Jaitley argued that this reform of electoral bonds was expected to enhance transparency and accountability in the realm of political funding, while also preventing the creation of illegal funds for future generations. Jaitley, while defending the anonymity of bonds, argued that if donors were asked to disclose their identity, they would revert to cash donations. The anonymity of the bonds was also intended to protect the donors' privacy and shield them from potential harassment. Timeline On 28 January 2017, the finance ministry, in correspondence with the Reserve Bank of India (RBI), sought comments on the proposed amendments in the Finance Bill, 2017; the necessity of amendments to the RBI Act was realized. The next day, on 30 January 2017, the RBI replied by expressing its severe apprehensions, contending that the electoral bond scheme was susceptible to illicit financial activities, lack of transparency, and possible exploitation. HuffPost India reported that the government overlooked the RBI's concerns and went ahead with its announcement during the Budget Session in Parliament on 1 February 2017. Impact on existing regulations The monetary contribution limit that any registered political party in India can receive from an individual in cash was restricted to ₹2,000 – a reduction to 10% of the previous threshold of ₹20,000. This was done through the Finance Act, 2017. The introduction of electoral bonds effectively abolished the ceiling on contributions made by corporations, which was earlier limited to 7.5% of the organization's average net earnings over the preceding three-year period. An amendment to the Companies Act, 2013 ensured this change. The scheme also eliminated the mandatory obligation for individuals or corporations to provide comprehensive information regarding their political contributions. Instead of reporting a comprehensive breakdown of political donations within their annual financial reports, companies would now be solely required to disclose a consolidated sum for the purchase of electoral bonds. Relevant provisions under the Income Tax Act, 1961 were amended in this regard. The Foreign Contribution Regulation Act (FCRA) was amended by the government, with the support of the opposition, to broaden the definition of a "foreign" entity, with the explicit aim of expanding the scope of firms that could lawfully make political contributions.
Critics argued that these changes would allow any person, corporation, or interest group to donate an unrestricted amount of funds anonymously to any political party, and that no citizen, journalist, or civil society representative would be able to establish any connections. First tranches - 2018 Postponed from January 2018 and commencing on 1 March 2018, the first tranche of electoral bonds was made available for purchase over a span of ten days. The State Bank of India issued and encashed these electoral bonds in four of its branches, at Chennai, Delhi, Kolkata and Mumbai. Analysis of political parties' "Audited Accounts Contribution Statements" for the fiscal year 2017-18 ending on 31 March, which were duly submitted to both the Income Tax Department and the Election Commission of India, reveals that the BJP obtained electoral bonds amounting to ₹215 crore, whereas the Congress party received only ₹5 crore. No other registered political party, national or regional, reported receiving any form of contribution through electoral bonds. The third phase of electoral bond sale was scheduled from 1 May to 10 May at eleven SBI branches in Chennai, Kolkata, Mumbai and New Delhi, along with designated branches in Assam, Gujarat, Haryana, Karnataka, Madhya Pradesh, Punjab, Rajasthan and Uttar Pradesh. An application under the Right to Information (RTI) Act was filed in May 2018 to obtain information on the cumulative number of donors and the aggregate quantity of electoral bonds sold by each authorized branch during the months of March and April, among other relevant particulars. The Central Public Information Officer (CPIO) declined the application in June 2018, citing that the compilation of such data would result in "disproportionate diversion of the bank's resources", and gave only the denomination-wise figures for the sale of all such bonds through the designated branches, which did not match the information provided in response to another application. However, SBI's first appellate authority (FAA), in July 2018, acknowledged the blunder: the previously given electoral bond sale data for the Gandhinagar branch (10 bonds of ₹1 lakh denomination, 38 bonds of ₹10 lakh and 9 bonds of ₹1 crore denomination, totaling ₹12.93 crore) had in fact "belonged" to the Bengaluru branch of the State Bank of India. The RTI applicant lamented the government's claims of transparency. 2019 The RTI response of SBI to a query by Vihar Durve showed that the 2019 sale of electoral bonds, in two tranches in January and March at eleven SBI branches prior to the Lok Sabha elections in May, surged to Rs 1,716.05 crore. This figure represents a notable increase of 62.39% over the total sales of Rs 1,056.73 crore achieved through the six tranches conducted in March, April, May, July, October and November of 2018. 2022 From 1 July to 10 July, SBI conducted the 21st phase of electoral bond sale. 2023 From 19 January to 28 January, the 25th electoral bond sale was carried out, while the 26th phase of sale, worth ₹970.50 crore, was conducted from 3 April to 12 April ahead of the Karnataka assembly elections on 10 May 2023. Based on the data obtained through the Right to Information (RTI) Act by Commodore Lokesh Batra (retired) for the 26th edition of bond sale conducted by the State Bank of India (SBI), a total of 1,470 bonds were sold. Among these, 923 bonds, accounting for 95.10% of the total value, were of the ₹1 crore denomination. Additionally, the SBI sold 468 bonds valued at ₹10 lakh each, 69 bonds valued at ₹1 lakh each, and 10 bonds valued at ₹10,000 each.
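As a quick consistency check, the denomination-wise breakdown can be totalled against the headline figure. The minimal sketch below uses only the amounts quoted above and reproduces both the ₹970.50 crore total and the 95.1 percent share of value held in ₹1 crore bonds:

```python
# Consistency check on the 26th-tranche figures quoted above.
# Amounts are in crore rupees: 1 crore = 10^7 rupees and 1 lakh = 10^5
# rupees, so 1 lakh = 0.01 crore. Figures are those reported via RTI.

CRORE = 1.0
LAKH = 0.01 * CRORE

breakdown = {                  # denomination: (count, face value in crore)
    "1 crore": (923, 1 * CRORE),
    "10 lakh": (468, 10 * LAKH),
    "1 lakh":  (69,  1 * LAKH),
    "10,000":  (10,  0.1 * LAKH),
}

total_bonds = sum(count for count, _ in breakdown.values())
total_value = sum(count * value for count, value in breakdown.values())
share_1cr = breakdown["1 crore"][0] * CRORE / total_value

print(total_bonds)            # 1470 bonds
print(round(total_value, 2))  # 970.5 crore, matching the headline figure
print(f"{share_1cr:.1%}")     # ~95.1% of total value in ₹1 crore bonds
```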
Highest sales of bonds, amounting to ₹335.30 crore, were at the Hyderabad branch, followed by the Kolkata branch with sales of ₹197.40 crore and the Mumbai branch with sales of ₹169.37 crore. The Chennai branch recorded sales of bonds worth ₹122 crore, while the Bengaluru branch sold bonds worth ₹46 crore. In terms of bond encashment, the New Delhi branch saw redemption of bonds valued at ₹565.79 crore. Subsequently, the Kolkata branch redeemed bonds worth ₹186.95 crore. The 27th edition of bond sale was held between 3 July and 12 July and saw 1,371 transactions worth ₹812.80 crore. The Hyderabad branch achieved the highest sales in bonds, amounting to ₹266.72 crore, followed by the Kolkata branch with sales worth ₹143.20 crore, and the Mumbai branch with sales worth ₹135 crore. The Bengaluru branch recorded sales of bonds worth ₹46 crore. In terms of bond encashment, the Bhubaneswar branch saw the highest redemption of electoral bonds, valued at ₹155.50 crore. Subsequently, the New Delhi branch encashed electoral bonds worth ₹117.58 crore. Pre-implementation Previously, it was mandatory for all political parties to report the names and other details of donors who contributed more than ₹20,000 towards the party fund while filing their income-tax returns. Information was not sought for those donating an amount less than ₹20,000. Such donations were declared as income from "unknown sources", and the details of such donors are not available in the public domain. The Association for Democratic Reforms (ADR) conducted a study in 2017 which found that the total income of political parties in India between 2004–05 and 2014–15 was ₹11,367 crore, and that 69% of it – ₹7,833 crore, from donations below ₹20,000 each – came from unknown sources. Only 16 per cent of their total income was from known donors. The Bahujan Samaj Party (BSP) declared 100% of its ₹111.96 crore income in 2014–15 as being from unknown sources. This was a 2,057% jump from its ₹5.19 crore income in 2004–05. The Congress Party had the highest income of ₹3,982 crore among national parties, while the Bharatiya Janata Party (BJP) reported an income of ₹3,272.63 crore. Approximately 83% of the aggregate income received by the Congress, totaling ₹3,323.39 crore, and 65% of the total income acquired by the BJP, amounting to ₹2,125.91 crore, were attributed to unknown sources. The CPI(M) declared ₹893 crore as its income, the third highest in 2014–15. The Samajwadi Party had the highest income, ₹819.1 crore, among regional political parties, while the Dravida Munnetra Kazhagam (DMK) reported the second highest income of ₹203.02 crore, followed by the All India Anna Dravida Munnetra Kazhagam (AIADMK) with the third highest net revenue of ₹165.01 crore. The ADR study showed 94% of Samajwadi Party income (₹766.27 crore) was from unknown sources, whereas the Shiromani Akali Dal (SAD) reported 86% (₹88.06 crore) of revenue from unknown sources. Post-implementation In 2018, the 133-year-old Indian National Congress, which had ruled for 49 years of independent India's 71-year history, for the first time made a public request for "small contribution" to its party fund, which pointed to a shortfall in the party's income. It is impossible to determine the exact amount of money that political parties earn or possess.
The appeal by Congress was seen as a move to portray itself as an honest party that was not getting enough funding from corporate companies and wealthy donors. The Congress-led UPA government had faced allegations of involvement in major scandals during its previous tenure. While the INC saw a dip in its revenues, the BJP saw its earnings double in the same period. For the financial year 2021–22, the ADR annual audit report revealed that the regional parties declared ₹887.55 crore as income from unknown sources, which amounts to 75% of their net income. Only 12% (₹145.42 crore) of the total income (₹1,165.576 crore) of the 27 regional parties was attributed to known sources. Electoral bonds contributed 93.26% (₹827.76 crore) of the entire income from unknown sources. Also, ₹38.354 crore (4.32%) was garnered from the sale of coupons, while ₹21.293 crore (2.4%) was received by the regional parties in the form of donations each amounting to less than ₹20,000. According to the information given by 27 out of 54 recognized regional parties to the Election Commission of India by May 2023, the DMK received 96.01% (₹306.025 crore) of its total income (₹318.745 crore) from unknown sources. Data released by the Election Commission of India On 11 March 2024, the Supreme Court ordered the State Bank of India to disclose the details of electoral bonds to the Election Commission of India (ECI) by the end of business hours the next day. This data was subsequently released by the ECI on their website on 15 March 2024. It includes the details of all bonds encashed between 12 April 2019 and 24 January 2024. On 17 March 2024, the Election Commission unveiled data received directly from political parties, believed to be from the period before 12 April 2019. The data released by the ECI showed that the biggest donor was Future Gaming and Hotels Pvt Ltd, run by Santiago Martin. This lottery company purchased bonds worth Rs 1,300 crore during the period 2019–2024. Of these, bonds worth Rs 100 crore were purchased seven days after a raid by India's Enforcement Directorate over charges of money laundering. The second and fifth biggest donors, Megha Engineering and Infrastructures Ltd and Vedanta Limited, also faced probes by law enforcement agencies during the period. Meanwhile, the third biggest donor, Qwik Supply Chain, was accused of being a subsidiary of Reliance Industries, a charge Reliance denied; however, the company's registration details indicated a connection. Public interest litigation The electoral bonds scheme was subjected to legal challenge through a Public Interest Litigation (PIL) in the Supreme Court of India on two grounds. Firstly, it was argued that the scheme resulted in a complete lack of transparency in political funding in India, thereby preventing the Election Commission and the citizens of the country from accessing crucial information regarding political contributions and parties' significant sources of income. Secondly, it was contended that the passage of this scheme as a Money Bill, thereby circumventing the upper house of Parliament, the Rajya Sabha, was unconstitutional and infringed upon the doctrine of separation of powers and the citizen's fundamental right to information, both of which form integral components of the basic structure of the Constitution. The PIL was initiated in October 2017, with the Ministry of Finance submitting its response in January 2018 and the Law Ministry responding in March 2018.
The Supreme Court considered a collection of petitions submitted by the non-governmental organization Association for Democratic Reforms, Common Cause (India) (represented by Prashant Bhushan) and the Communist Party of India (Marxist), which contested the legality of the modifications made to the Reserve Bank of India Act, the Representation of the People Act, the Income Tax Act, the Companies Act, and the Foreign Contribution Regulation Act through the Finance Acts of 2016 and 2017. These amendments enabled the utilization of electoral bonds. In his petition against the government's decision, CPI(M) leader Sitaram Yechury demanded revocation of the Electoral Bond Scheme and called the issuance of electoral bonds and the Finance Act 2017 "arbitrary" and "discriminatory". The government, in defense of its decision to introduce electoral bonds, asserted in the Supreme Court that its primary objective was to ensure increased accountability and promote electoral reforms as a means to combat the escalating threat of "black money" and to help the nation's transition towards a cashless and digital economy. The government said that the implementation of a restricted time-frame and a significantly brief period of maturity for electoral bonds reduces the likelihood of any potential misuse. The acquisition of these bonds by donors would be duly recorded in their financial statements, thereby reflecting the donations made. Further, the government added that the introduction of electoral bonds would encourage donors to opt for the banking channel as a means of contributing, with their personal information being captured by the authorized issuing entity. Consequently, this measure would guarantee transparency and accountability, and serve as a significant stride towards electoral reform. The government requested the dismissal of the petition submitted by the left party, asserting that there was an absence of "invidious or arbitrary discrimination" and no infringement upon any fundamental rights of the petitioner. The Election Commission of India submitted its views on the matter to the Supreme Court, stating that the amendments to existing legislation, which permitted the utilization of electoral bonds and eliminated the restriction on donations, including those of foreign origin, to political parties, would inevitably result in a surge in the utilization of illicit funds (black money) during the electoral process. Furthermore, this alteration would have profound consequences for the transparency of financial contributions to political parties, ultimately leading to the manipulation of Indian policies. Criticism The Constitution of India clearly specifies the conditions for a piece of legislation under consideration to be classified as a Money Bill. The provisions pertaining to electoral bonds fail to satisfy the criteria for classification as a Money Bill. The government's justification for this is that any component of the Budget, being a Money Bill, automatically fulfills the prerequisites for classification as such. A comparable instance of the misuse of the Money Bill route was seen previously in the retrospective amendments made to the Foreign Contribution Regulation Act (FCRA) by the government, on two separate occasions, in an effort to shield the BJP and the Congress from prosecution for FCRA violations, as determined by the Delhi High Court. The UPA administration had earlier implemented "Electoral Trusts" with false narratives about better electoral funding.
All political parties have a tendency to possess a vested interest in maintaining the anonymity of fund origins due to the predominant presence of unreported funds. These parties not only tolerate the existence of illicit funds in the form of "black money" but also safeguard their sources and actively engage in their utilization. Regrettably, comprehensive documentation regarding the allocation of these funds does not exist, as it is not mandated by any of the existing laws. Consequently, electoral finances and expenditures emerge as the principal catalysts for corruption and the proliferation of unaccounted funds; hence black money is widely prevalent in Indian politics. While the Modi administration has taken steps to address the issue of political funding by imposing restrictions on cash donations and introducing "election bonds", some politicians and election officials believe that these measures will not have a significant impact. It is said that the upcoming elections will not be directly affected by these changes, but the government is likely to view them as an opportunity for Narendra Modi to enhance his reputation as a fighter against corruption. This comes after he announced several measures to address illegal wealth. It is difficult to fully understand the financial aspects of politics in the world's largest democracy due to the lack of transparency in campaign financing. The introduction of electoral bonds has enabled companies to anonymously finance political parties without any limitations on the amount of donations. This development has been criticized by activists who argue that it grants corporations excessive influence and obfuscates the connections between politicians and business entities. In the initial issuance of electoral bonds, the BJP acquired approximately 95 percent of the total bonds, as per information obtained by Reuters through a Right to Information request and BJP submissions. The purpose of introducing the bonds was to expose illegal money and increase transparency in political funding. However, critics argue that it has had the opposite effect, as they claim the bonds are shrouded in secrecy. There exists an absence of publicly available documentation regarding the purchasers of individual bonds and the recipients of the corresponding donations. This lack of transparency renders the bonds susceptible to being deemed "unconstitutional and problematic", as it hinders taxpayers and citizens from obtaining knowledge about the origins of these contributions. Moreover, some critics argue that the anonymity of the bonds is not absolute, since SBI, the state-owned bank, maintains a comprehensive record of both the benefactor and the beneficiary. Consequently, this enables the ruling government to effortlessly obtain pertinent information and potentially exploit it in order to exert influence over donors. This practice has been deemed to confer an unjust advantage upon the ruling party and the government. In 2017, upon the initial announcement of the bonds, the Election Commission of India expressed concerns regarding the potential compromise of electoral transparency. This apprehension was echoed by various entities including the central bank, the law ministry, and several Members of Parliament, who contended that the electoral bonds would not effectively deter the influx of illicit funds into the political sphere. Notably, the ECI later altered its stance and extended support to the electoral bonds after a year.
Furthermore, the courts have postponed their verdict on the matter, thus leaving it unresolved. Those who oppose the scheme argue that the government has effectively enacted opacity through the implementation of electoral bonds. Shell companies' accounts could serve as a means to facilitate contributions to favoured political parties. Opponents highlight a potential scenario wherein benefactors procure bonds using funds acquired through legitimate means or subjected to taxation, and subsequently sell them to a third party for illicitly obtained or undeclared funds; the third party may then transfer the bonds to a political organization. A major issue with political financing in India is the anonymity of funding sources. Since the ₹2,000 limit still exists for donations in cash, there is an apprehension that political parties will hide a large portion of their illegal money in this category. The potential consequences of a political party possessing both Electoral Bond funds and the Aadhaar and Jan Dhan account numbers of voters are significant. The party could potentially engage in a "Direct Benefit Transfer" of funds into the accounts of voters in advance of an election. The pathway from electoral bonds to Aadhaar and bank account numbers, or even just UPI numbers, presents a potential conduit for untraceable funds from anonymous sources to anonymous voters. This development could ultimately undermine the integrity of democratic elections, rendering them neither "free" nor "fair." Retired Chief Election Commissioner V. S. Sampath said the Election Commission cannot do much in situations where legislative modifications are tailored to accommodate the interests of corporate benefactors and the political organizations that receive financial contributions from businesses, including foreign entities. Controversies In 2023, former CM Chandrababu Naidu was arrested, and the Andhra Pradesh Criminal Investigation Department (CID) submitted evidence to the Anti Corruption Bureau (ACB) Court indicating that the Telugu Desam Party (TDP) received ₹27 crore in the form of electoral bonds as donations during the fiscal year 2018-19. This substantial amount allegedly serves as proof of the funds that were misappropriated in a skill development project. Calling this a political hunt and a misuse of the state police machinery, the TDP countered that the ruling YSR Congress Party (YSRCP) led by Chief Minister Y. S. Jagan Mohan Reddy had obtained a total of ₹99.84 crore in electoral bonds during the fiscal year 2018-19, ₹74.35 crore in 2019-20, ₹96.25 crore in 2020-21, and ₹60 crore in 2021-22, but refrained from disclosing this information in its publication, Saakshi, indicating a lack of transparency and accountability. Notes See also Electoral reform in India Political funding in India References External links The lists of electoral bond donors and recipients, published on the website of the Election Commission of India Electoral reform in India Modi administration initiatives Political corruption India Politics of India Abuse Organized crime activity Corruption in India Political controversies in India Electoral fraud State crime
Electoral bonds
Biology
5,484
16,094,706
https://en.wikipedia.org/wiki/COSMOS%20International
COSMOS International or COSMOS International Satellitenstart GmbH is a joint Russian-German launch service provider and satellite manufacturer. A partnership between OHB System, Fuchs Gruppe and PO Polyot, COSMOS commercially markets launches using the Kosmos-3M rocket, which are subcontracted to the Russian Space Forces. The organisation conducted its first launch on 28 April 1999, placing ABRIXAS and MegSat into orbit on a single rocket. It has been responsible for the launches of SAR-Lupe satellites for Germany's Bundeswehr (defence force). See also Eurockot Starsem International Launch Services Sea Launch External links COSMOS Website References Commercial launch service providers
COSMOS International
Astronomy
137
32,559,204
https://en.wikipedia.org/wiki/CVNH%20domain
In molecular biology, the CVNH domain (CyanoVirin-N Homology domain) is a conserved protein domain. It is found in the sugar-binding antiviral protein cyanovirin-N (CVN) as well as in proteins from filamentous ascomycetes and in the fern Ceratopteris richardii. Cyanovirin-N (CV-N) is an 11-kDa protein from the cyanobacterium Nostoc ellipsosporum that displays virucidal activity against several viruses, including human immunodeficiency virus (HIV). The virucidal activity of CV-N is mediated through specific high-affinity interactions with the viral surface envelope glycoproteins gp120 and gp41, as well as with high-mannose oligosaccharides found on the HIV envelope. In addition, CV-N is active against rhinoviruses, human parainfluenza virus, respiratory syncytial virus, and enteric viruses. The virucidal activity of CV-N against influenza virus is directed towards viral haemagglutinin. CV-N has a complex fold composed of a duplication of a tandem repeat of two homologous motifs comprising a three-stranded beta sheet and beta hairpins. References Protein domains
CVNH domain
Biology
278
29,224,388
https://en.wikipedia.org/wiki/Newton%27s%20theorem%20about%20ovals
In mathematics, Newton's theorem about ovals states that the area cut off by a secant of a smooth convex oval is not an algebraic function of the secant. Isaac Newton stated it as lemma 28 of section VI of book 1 of Newton's Principia, and used it to show that the position of a planet moving in an orbit is not an algebraic function of time. There has been some controversy about whether or not this theorem is correct because Newton did not state exactly what he meant by an oval, and for some interpretations of the word oval the theorem is correct, while for others it is false. If "oval" means merely a continuous closed convex curve, then there are counterexamples, such as triangles or one of the lobes of Huygens' lemniscate y² = x² − x⁴, while it has been pointed out that if "oval" means an infinitely differentiable convex curve, then Newton's claim is correct and his argument has the essential steps of a rigorous proof. Newton's theorem has also been generalized to higher dimensions. Statement An English translation of Newton's original statement is: "There is no oval figure whose area, cut off by right lines at pleasure, can be universally found by means of equations of any number of finite terms and dimensions." In modern mathematical language, Newton essentially proved the following theorem: There is no convex smooth (meaning infinitely differentiable) curve such that the area cut off by a line ax + by = c is an algebraic function of a, b, and c. In other words, "oval" in Newton's statement should mean "convex smooth curve". The infinite differentiability at all points is necessary: for any positive integer n there are algebraic curves that are smooth at all but one point and differentiable n times at the remaining point for which the area cut off by a secant is algebraic. Newton observed that a similar argument shows that the arclength of a (smooth convex) oval between two points is not given by an algebraic function of the points. Newton's proof Newton took the origin P inside the oval, and considered the spiral of points (r, θ) in polar coordinates whose distance r from P is the area cut off by the lines from P with angles 0 and θ. He then observed that this spiral cannot be algebraic as it has an infinite number of intersections with a line through P, so the area cut off by a secant cannot be an algebraic function of the secant. This proof requires that the oval and therefore the spiral be smooth; otherwise the spiral might be an infinite union of pieces of different algebraic curves. This is what happens in the various "counterexamples" to Newton's theorem for non-smooth ovals. References Alternative translation of earlier (2nd) edition of Newton's Principia. Theorems about curves Isaac Newton Theorems in plane geometry
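The core of the spiral argument can be stated compactly; the following is a hedged sketch in modern notation (the area function A and the degree bound are our framing, not Newton's wording). Let A(θ) be the area cut off between the rays from P at angles 0 and θ, and consider the spiral r = A(θ). Each full turn adds the whole area of the oval:

\[
  A(\theta + 2\pi) = A(\theta) + A_{\mathrm{oval}},
\]

so the spiral winds outward forever and meets any fixed line through P infinitely many times, whereas a real algebraic curve of degree d meets a line in at most d points. Hence the spiral, and with it the cut-off area, cannot be algebraic.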
Newton's theorem about ovals
Mathematics
587
33,930,275
https://en.wikipedia.org/wiki/Resilience%20of%20coral%20reefs
The resilience of coral reefs is the biological ability of coral reefs to recover from natural and anthropogenic disturbances such as storms and bleaching episodes. Resilience refers to the ability of biological or social systems to overcome pressures and stresses by maintaining key functions through resisting or adapting to change. Reef resistance measures how well coral reefs tolerate changes in ocean chemistry, sea level, and sea surface temperature. Reef resistance and resilience are important factors in coral reef recovery from the effects of ocean acidification. Natural reef resilience can be used as a recovery model for coral reefs and an opportunity for management in marine protected areas (MPAs). Thermal tolerance Many corals rely on symbiotic algae called zooxanthellae for nutrient uptake through photosynthesis. Corals obtain about 60-85% of their total nutrition from symbiotic zooxanthellae. Slight increases in sea surface temperature can cause zooxanthellae to die. Coral hosts become bleached when they lose their zooxanthellae. Differences in symbionts, determined by genetic groupings (clades A-H), may explain thermal tolerance in corals. Research has shown that some corals contain thermally resistant clades of zooxanthellae. Corals housing primarily clade D symbionts, or certain thermally resistant types of clade C symbionts, do not bleach as severely as others experiencing the same stressor. Scientists continue to debate whether thermal resistance in corals is due to the mixing or shifting of symbionts, or to hosting thermally resistant rather than thermally sensitive types of zooxanthellae. Species of coral housing multiple types of zooxanthellae can withstand a 1-1.5 °C change in temperature. However, few species of coral are known to house multiple types of zooxanthellae. Corals are more likely to contain clade D symbionts after multiple coral bleaching events. Reef recovery Research studies of the Mediterranean coral species Oculina patagonica reveal that the presence of endolithic algae in coral skeletons may provide additional energy which can result in post-bleaching recovery. During bleaching, the loss of zooxanthellae decreases the amount of light absorbed by coral tissue, which allows increased amounts of photosynthetically active radiation to penetrate the coral skeleton. Greater amounts of photosynthetically active radiation in coral skeletons cause an increase in endolithic algae biomass and production of photoassimilates. During bleaching, the energy input to the coral tissue from phototrophic endoliths expands as the energy input from the zooxanthellae dwindles. This additional energy could explain the survival and rapid recovery of O. patagonica after bleaching events. A study by the Australian Research Council proposed that the loss of fast-growing coral could lead to less resilience in the remaining coral. The study was undertaken in both the Caribbean and the Indo-Pacific and reached the conclusion that the latter may be more resilient than the former based on several factors, including the extent of herbivory and the rates at which algal blooms form. Coral bleaching effects on biodiversity Coral bleaching is a major consequence of stress on coral reefs. Bleaching events due to distinct temperature changes, pollution, and other shifts in environmental conditions are detrimental to coral health, but corals can recover from bleaching events if the stress is not chronic. 
When corals are exposed to a long period of severe stress, death may occur due to the loss of zooxanthellae, which are vital to the coral's survival because of the nutrients they supply. Coral bleaching, degradation, and death have a great effect on the surrounding ecosystem and biodiversity. Coral reefs are important, diverse ecosystems that host a plethora of organisms that contribute different services to maintain reef health. For example, herbivorous reef fish, like the parrotfish, maintain levels of macroalgae. The control of seaweed decreases competition for space, allowing substrate-seeking organisms, like corals, to establish and propagate, creating a stronger, more resilient reef. However, when corals become bleached, organisms often leave the coral reef habitat, which in turn takes away the services that they were previously supplying. Reefs also provide many ecosystem services, such as food for the many people around the world who depend on reef fishing to sustain themselves. There is evidence that some species of coral are resilient to elevated sea surface temperatures for a short period of time. Natural disturbances Natural forces such as disease and storms degrade corals. The frequency of coral disease caused by microbial pathogens has increased over the years, contributing to coral reef mortality. Bacterial, fungal, viral, and parasitic infections can result in physiological and morphological effects. Some of the most common coral diseases include black band disease, white pox disease, white plague, and white band disease, all of which involve tissue degradation and exposure of the coral skeleton. Diseases such as these can quickly spread among healthy coral reefs, potentially making them more susceptible to injury from disturbances like storms. Storms, including cyclones and hurricanes, can cause mechanical destruction to reefs and a change in sedimentation. The strong waves that result from these disturbances can strike corals, causing them to dislodge, and can also cause the reef to come into contact with released sediments and freshwater. Anthropogenic disturbances Anthropogenic forces contribute to coral reef degradation, reducing reef resiliency. Some anthropogenic forces that degrade corals include pollution, sedimentation from coastal development, and ocean acidification due to increased fossil fuel emissions. Carbon emissions cause ocean surface waters to warm and acidify. The combustion of fossil fuels results in the emission of greenhouse gases, such as carbon dioxide, into the atmosphere. The ocean takes up some of the emitted carbon dioxide, which is injurious to the natural processes that occur in the ocean. Ocean acidification results in a lower ocean water pH, negatively affecting the formation of the calcium carbonate structures that are imperative to coral development. Coastal development creates the potential for chemical and nutrient pollution to run off into surrounding waters. Nutrient pollution causes the overgrowth of aquatic vegetation, which can out-compete corals for space, nutrients, and other resources. Overfishing can also have devastating effects on coral reefs. Because of the food security that reefs provide, they are often overfished, which can leave reef ecosystems unable to rebuild after damage has been done. Restoration can be challenging due to the direct harm that fishing activities can cause to coral reefs through damage from fishing gear, including nets, lines, and traps. 
Additionally, noticeable changes in marine life, such as the loss of herbivorous fish that offer valuable services to coral reefs, can reduce ecosystem function as a whole. Another anthropogenic force that degrades coral reefs is bottom trawling, a fishing practice that scrapes coral reef habitats and other bottom substrate-dwelling organisms off the ocean floor. Bottom trawling results in physical wreckage and stress that leads to corals being broken and zooxanthellae expelled. Similar to bottom trawling, rock anchoring used for fishing can cause physical damage to these fragile reefs due to the heavy weight of the anchor, cables, and chains. If coral reefs are exposed to physical damage like rock anchoring regularly, it can result in less resiliency to ocean acidification. Ecotourism is another anthropogenic factor that contributes to coral reef degradation. During ecotourism, humans can cause stress to the corals by accidentally touching, polluting, or breaking off parts of the reef, often resulting in coral bleaching as the corals attempt to fight off the intrusion. However, ecotourism is not only harmful when humans are close enough to touch the coral. Less direct impacts, such as harmful chemicals in sunscreen and sedimentation driven by the tourism industry, can have irreversible effects as well. Managing coral reefs In an attempt to prevent coral bleaching, scientists are experimenting by "seeding" corals that can host multiple types of zooxanthellae with thermally resistant zooxanthellae. MPAs have begun to apply reef resilience management techniques in order to improve the 'immune system' of coral reefs and promote reef recovery after bleaching. The Nature Conservancy has developed, and is continually refining, a model to help manage and promote reef resilience. Although this model does not guarantee reef resilience, it is a comprehensible management model to follow. The principles outlined in their model are: Representation and replication: coral survivorship is ensured by representing and replicating resilient species and habitats in an MPA network. The presence of resilient species under management in MPAs will help protect corals from bleaching events and other natural disturbances. Critical areas: conservation priority areas provide protection to critical marine areas, such as sources of larvae for coral reef regeneration or nursery habitats for fish spawning. Connectivity: preserving the connectivity between coral reefs and surrounding habitats supports healthy coral communities and fish habitat. Effective management: resilience-based strategies focus on reducing threats to maintain healthy reefs, and measurement of the effectiveness of MPA management allows for adaptive management. Scientists at the Smithsonian's National Zoo and Conservation Biology Institute have also developed a new technique, funded by a conservation organization called Revive and Restore. This technique is referred to as cryopreservation and involves freezing and thawing entire fragments of coral, which slows the loss of coral species and helps restore damaged reefs. Previous coral cryopreservation techniques relied largely on freezing sperm and larvae, making collection difficult, as spawning events only occur a few days a year. The previous technique was also difficult because frequent marine heatwaves and warm waters can leave corals biologically stressed, with reproductive material too weak to be frozen or thawed. 
The new technique is easier and works more rapidly, as it allows researchers and preservationists to work throughout the year, rather than waiting for a certain species to spawn and putting stress on corals' reproductive material. Scientists have also looked deeper into energy reserves and coral feeding. Feeding on zooplankton, brine shrimp, and algae may serve as a buffer against the harsh effects of climate change. Feeding corals can help them sustain tissue biomass and energy reserves and enhance nitrogen content, allowing for a higher zooxanthellae concentration and increased photosynthesis. Increased feeding rates can also allow certain species of bleached and recovering coral to exceed their daily metabolic energy requirements. These results suggest that coral species with a high CHAR (percent contribution of heterotrophically acquired carbon to daily animal respiration) capability may be more resilient to bleaching events, become dominant species, and help safeguard affected reefs from extinction. References Further references Oliver, Thomas Andrew (2009) The role of coral's algal symbionts in coral reef adaptation to climate change. ProQuest. External links Reef resilience – coral reef conservation site of The Nature Conservancy
Resilience of coral reefs
Biology
2,294
4,331,086
https://en.wikipedia.org/wiki/ISO%207736
ISO 7736 is a standard size for dashboard-mounted head units for car audio. It was originally established by the German national organization for standardization, the Deutsches Institut für Normung, as DIN 75490, and is therefore commonly referred to as the DIN size. It was adopted by the International Organization for Standardization in 1984. It does not define connectors for car audio, which are defined in ISO 10487. Head units are generally designed around the single DIN (180 x 50 mm panel) or double DIN (180 x 100.3 mm panel) sizes, with a recent trend towards the latter with the increasing popularity of large touch-screen displays and interfaces like Apple CarPlay and Android Auto. Gallery References Mechanical standards In-car entertainment
ISO 7736
Engineering
155
76,174,134
https://en.wikipedia.org/wiki/Leucocoprinus%20attinorum
Leucocoprinus attinorum is a species of mushroom-producing fungus in the family Agaricaceae. Taxonomy It was described in 2023 by the mycologists Salomé Urrea‑Valencia, Rodolfo Bizarria Júnior, Pepijn W. Kooij, Quimi Vidaurre Montoya and Andre Rodrigues, whose study of the fungal species cultivated by lower attine ants described the new species Leucocoprinus attinorum and L. dunensis. Description Leucocoprinus attinorum is a fungus cultivated by Mycocepurus goeldii ants. Cap: 3-4cm wide, starting campanulate before expanding to applanate with age. The surface is coated in small brown scales with a darker brown centre disc. Gills: Free with a collar, crowded and whitish. Stem: 2.5-8cm long and 4-8mm thick, with a slightly bulbous base but otherwise a generally consistent thickness across its length, and solid inner flesh. The surface is light brown and coated in fine fibrils but turns dark brown when bruised or touched. The movable stem ring is white with a dark brown margin of a similar colour to the cap centre. Spore print: Pale white. Spores: 7-8 x 5-6 (6.5) μm. Ellipsoid to amygdaliform with a rounded apex and germ pore covered with a hyaline cap. Smooth, thick walled and hyaline with no colour change in KOH. Congophilous, dextrinoid, metachromatic in cresyl blue. Basidia: 22-30 x 10-11 μm. Clavate, 4-spored, hyaline. Etymology The specific epithet attinorum is named in reference to the subtribe Attina, to which the Mycocepurus goeldii ants belong. Habitat and distribution The species is cultivated by the fungus-farming ant species Mycocepurus goeldii, whose geographical range includes Brazil, parts of Bolivia, Paraguay and Northern Argentina, so this fungus may extend over the same range. References Fungi described in 2023 Fungus species Leucocoprinus
Leucocoprinus attinorum
Biology
458
36,873,359
https://en.wikipedia.org/wiki/Generalized%20Lagrangian%20mean
In continuum mechanics, the generalized Lagrangian mean (GLM) is a formalism – developed by Andrews and McIntyre – to unambiguously split a motion into a mean part and an oscillatory part. The method gives a mixed Eulerian–Lagrangian description for the flow field, but referenced to fixed Eulerian coordinates. Background In general, it is difficult to decompose a combined wave–mean motion into a mean and a wave part, especially for flows bounded by a wavy surface: e.g. in the presence of surface gravity waves or near another undulating bounding surface (like atmospheric flow over mountainous or hilly terrain). However, this splitting of the motion into a wave and a mean part is often demanded in mathematical models, when the main interest is in the mean motion – slowly varying at scales much larger than those of the individual undulations. From a series of postulates, Andrews and McIntyre arrive at the GLM formalism to split the flow into a generalised Lagrangian mean flow and an oscillatory-flow part. The GLM method does not suffer from the strong drawback of the Lagrangian specification of the flow field – following individual fluid parcels – that Lagrangian positions which are initially close gradually drift far apart. In the Lagrangian frame of reference, it therefore often becomes difficult to attribute Lagrangian-mean values to some location in space. The specification of mean properties for the oscillatory part of the flow, like Stokes drift, wave action, pseudomomentum and pseudoenergy – and the associated conservation laws – arise naturally when using the GLM method. The GLM concept can also be incorporated into variational principles of fluid flow. Notes References By Andrews & McIntyre By others See Chapter 12: "Generalized Lagrangian mean (GLM) formulation", pp. 105–113. Continuum mechanics Concepts in physics
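A hedged sketch of the defining relation may help fix ideas; the notation below (the displacement field ξ and the overbar averages) follows common presentations rather than quoting Andrews and McIntyre directly. Given an averaging operator over the oscillations and a disturbance displacement field ξ(x, t) with zero mean, the generalized Lagrangian mean of a field φ evaluates φ at the displaced position before averaging:

\[
  \overline{\varphi}^{\,L}(\mathbf{x}, t)
    = \overline{\varphi\bigl(\mathbf{x} + \boldsymbol{\xi}(\mathbf{x}, t),\, t\bigr)},
  \qquad
  \overline{\varphi}^{\,S}
    = \overline{\varphi}^{\,L} - \overline{\varphi}.
\]

The difference between the Lagrangian and Eulerian means is the Stokes correction; applied to the velocity field it gives the Stokes drift mentioned above.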
Generalized Lagrangian mean
Physics
398
33,174,865
https://en.wikipedia.org/wiki/Decoupling%20Natural%20Resource%20Use%20and%20Environmental%20Impacts%20from%20Economic%20Growth%20report
The report Decoupling Natural Resource Use and Environmental Impacts from Economic Growth is one of a series of reports researched and published by the International Resource Panel (IRP) of the United Nations Environment Programme. The IRP provides independent scientific assessments and expert advice on a variety of areas, including: The volume of selected raw material reserves and how efficiently these resources are being used The lifecycle-long environmental impacts of products and services created and consumed around the globe Options to meet human and economic needs with fewer or cleaner resources. About the report The concept of decoupling is not about stopping economic growth, but rather doing more with less. In the report's preface, the panel explained that the "conceptual framework for decoupling and understanding of the instrumentalities for achieving it is still in an infant stage" and that this "first report is simply an attempt to scope the challenges." The report considered the amount of resources currently being consumed by humanity and analysed how that would likely increase with population growth and future economic development. Its scenarios showed that by 2050 humans could use triple the amount of minerals, ores, fossil fuels and biomass annually – 140 billion tonnes per year – unless the rate of resource consumption can be decoupled from that of economic growth. Developed-country citizens currently consume as much as 25 tonnes of those four key resources each year, while the average person in India consumes four tonnes annually. Another billion middle-class consumers are set to emerge as developing countries rapidly industrialise. There is evidence that decoupling is already underway; world gross domestic product grew by a factor of 23 in the 20th century, while resource use rose by a factor of eight. However, this will not be enough to avoid resource scarcity and severe environmental limits. Resource use may ultimately need to fall to between five and six tonnes per person annually. Recycling, re-use and greater efficiency can all help achieve decoupling. The report showed that decoupling might be a good strategy for economic growth in developing countries, helping them avoid becoming resource-intensive economies in the future. See also Ecological modernization References External links www.resourcepanel.org www.unep.org United Nations Environment Programme Human impact on the environment Environmental mitigation Environmental impact assessment
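As a worked restatement of the report's century-scale figures (a sketch: the factor-of-23 GDP growth and factor-of-8 resource growth are the report's numbers, while the "intensity" label is ours):

\[
  \frac{\text{intensity}_{2000}}{\text{intensity}_{1900}}
  = \frac{\text{resources}_{2000} / \text{GDP}_{2000}}
         {\text{resources}_{1900} / \text{GDP}_{1900}}
  = \frac{8}{23} \approx 0.35 .
\]

Each unit of economic output at the end of the century therefore required only about a third of the material input it did at the start, even though absolute resource use still rose eightfold: relative decoupling, not the absolute decoupling the report argues will ultimately be needed.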
Decoupling Natural Resource Use and Environmental Impacts from Economic Growth report
Chemistry,Engineering
454
7,622,415
https://en.wikipedia.org/wiki/Cambria%20Iron%20Company
The Cambria Iron Company of Johnstown, Pennsylvania, was a major producer of iron and steel that operated independently from 1852 to 1916. The company adopted many innovations in the steelmaking process, including those of William Kelly and Henry Bessemer. Founded in 1852, the company became the nation's largest steel foundry within two decades. It was reorganized and renamed the Cambria Steel Company in 1898, purchased by Midvale Steel and Ordnance Company in 1916, and sold to the Bethlehem Steel Company in 1923. The company's facilities, which extend for some distance along the Conemaugh and Little Conemaugh rivers, operated until 1992. Today, they are designated as a National Historic Landmark District. Several works by the firm are listed on the National Register of Historic Places. Facilities The industrial facilities of the Cambria occupied five separate sites in and around Johnstown, Pennsylvania. Its earliest facilities, known as the Lower Works, are located on the east bank of the Conemaugh River, north of downtown Johnstown and the Little Conemaugh River. The Gautier Plant is northeast of downtown Johnstown on the south side of the Little Conemaugh. Further up that river is the extensive Franklin Plant and Wheel Plant, while the Rod and Wire Plant is located on the west side of the Conemaugh River, north of the Lower Works. Each of these facilities represents a different phase of development and growth of the steel industry. The Lower Works no longer has significant traces of the earliest facilities used in steel manufacturing. All five of these areas comprise the National Historic Landmark District designated in 1989. Company history The Cambria Iron Company was founded in 1852 to provide iron for the construction of railroads. In 1854, the iron works, which had gone out of blast, were purchased by a group of Philadelphia merchants led by Matthew Newkirk. After a fire destroyed the main rolling mill in 1857, Newkirk persuaded his co-investors to rebuild it on a larger scale. The company grew rapidly and by the 1870s was a leading producer of steel and an innovator in the advancement of steelmaking technology. It performed early experiments with the Kelly converter, built the first blooming mill, and was one of the first plants to use hydraulics for the movement of ingots. It built one of the first plants to use the Bessemer process for making steel at a large scale. The company's innovations, methods, and processes were widely influential throughout the steel industry. The company was at its height in the 1870s, under the long-term leadership of general manager Daniel Johnson Morrell, who had overseen the expansion of the works into one of the largest producers of rails in the United States. He helped to end US dependence on British railroad construction imports. A Republican, Morrell also served as a member of the 40th and 41st United States Congresses from Pennsylvania, from 1867 to 1871. Morrell became concerned about the South Fork Dam, which formed Lake Conemaugh above Johnstown and the Cambria Iron Company's facilities. To monitor the dam, Morrell joined the South Fork Fishing and Hunting Club, which owned the dam. Morrell campaigned to club officials to improve the dam, which he had inspected by his own engineers and by those of the Pennsylvania Railroad. Morrell offered to effect repairs, partially at his own expense, but was rejected by club president Benjamin F. Ruff. Morrell died in 1885, his warnings unheeded. 
On May 31, 1889, the dam failed, unleashing the Johnstown Flood. The flood killed more than 2,200 people—then the largest disaster in U.S. history—and badly damaged the Cambria Iron Company's facilities along the rivers. The company reopened one week later, but at reduced capacity, and it was eclipsed by other producers as it rebuilt. After Morrell's death, his club membership was purchased by Cyrus Elder, who became the club's only Johnstown native; most of the members were from Pittsburgh. Elder was a former news editor who had become chief legal counsel for the Cambria Iron Company. His wife and daughter died in the flood. He continued to be a notable civic leader, and he also wrote books and poetry. In 1916, Cambria Iron was acquired by Midvale Steel and Ordnance Company. Midvale sold the company to Bethlehem Steel in 1923. It operated continuously until 1992. Cambria Steel Company had formed a proprietary subsidiary shipping company, the Franklin Steamship Company of Cleveland, in 1906, and the Beaver Steamship Company in 1916. Both companies were sold to Bethlehem Steamship Company in 1924. Works produced Infrastructure whose parts were manufactured by the Cambria company includes the following (with variations in attribution). All have been listed on the National Register of Historic Places (NRHP): Bell Bridge, county road over Niobrara River, northeast of Valentine, Nebraska (Cambria Steel Co.), NRHP-listed Boone River Bridge, Buchanan Avenue over Boone River, Goldfield, Iowa (Cambria Steel Company), NRHP-listed Borman Bridge, county road over Niobrara River, southeast of Valentine, Nebraska (Cambria Steel Co.), NRHP-listed Eldorado Bridge, State Street over Turkey River, Eldorado, Iowa (Cambria Steel Co.), NRHP-listed Johnstown Inclined Railway, Johns Street and Edgehill Drive, Johnstown, Pennsylvania (Cambria Iron Co.), NRHP-listed Neligh Mill Bridge, Elm Street over Elkhorn River, Neligh, Nebraska (Cambria/Lackawanna Steel Cos.), NRHP-listed North Loup Bridge, county road over North Loup River, northeast of North Loup, Nebraska (Cambria & Lackawanna Steel Cos.), NRHP-listed Republican River Bridge, county road over Republican River, east and south of Riverton, Nebraska (Cambria Steel Co.), NRHP-listed Willow Creek Bridge, county road over Willow Creek, south of Foster, Nebraska (Cambria Steel Co.), NRHP-listed See also Rolling Mill Mine List of National Historic Landmarks in Pennsylvania National Register of Historic Places listings in Cambria County, Pennsylvania References External links History of the Steel Industry in Johnstown Lists of National Historic Landmarks Steel companies of the United States National Historic Landmarks in Pennsylvania Historic American Engineering Record in Pennsylvania Buildings and structures in Johnstown, Pennsylvania Bridge companies 1852 establishments in Pennsylvania National Register of Historic Places in Cambria County, Pennsylvania Historic districts on the National Register of Historic Places in Pennsylvania Blacksmith shops Blast furnaces Rolling mills Construction and civil engineering companies of the United States Construction and civil engineering companies established in 1852 American companies established in 1852
Cambria Iron Company
Chemistry
1,346
10,279,908
https://en.wikipedia.org/wiki/Lee%20algorithm
The Lee algorithm is one possible solution for maze routing problems based on breadth-first search. It always gives an optimal solution, if one exists, but is slow and requires considerable memory.

Algorithm

1) Initialization
   - Select start point, mark with 0
   - i := 0

2) Wave expansion
   REPEAT
     - Mark all unlabeled neighbors of points marked with i with i+1
     - i := i+1
   UNTIL ((target reached) or (no points can be marked))

3) Backtrace
   - Go to the target point
   REPEAT
     - Go to the next node that has a lower mark than the current node
     - Add this node to the path
   UNTIL (start point reached)

4) Clearance
   - Block the path for future wirings
   - Delete all marks

Of course the wave expansion marks only points in the routable area of the chip, not in the blocks or already wired parts, and to minimize segmentation the route should keep to one direction as long as possible. A sketch of these steps in C follows below. External links http://www.eecs.northwestern.edu/~haizhou/357/lec6.pdf References Electronic engineering Electronic design automation Electronics optimization
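The following is a minimal C sketch of the initialization, wave expansion, and backtrace steps (the clearance step is omitted). The grid size, the obstacle, and the start/target coordinates are illustrative assumptions, and for brevity the expansion rescans the whole grid per wavefront instead of keeping an explicit queue, as a production router would.

/* Lee-style wave expansion and backtrace on a small grid (sketch). */
#include <stdio.h>

#define ROWS 5
#define COLS 7
#define FREE  -1   /* routable, not yet labeled */
#define BLOCK -2   /* obstacle or already-wired cell */

static int grid[ROWS][COLS];

int main(void) {
    /* 1) Initialization: all cells free except a wall; start marked 0. */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            grid[r][c] = FREE;
    for (int r = 0; r < 4; r++) grid[r][3] = BLOCK;   /* vertical obstacle */

    int sr = 2, sc = 0, tr = 2, tc = 6;               /* start and target */
    grid[sr][sc] = 0;

    /* 2) Wave expansion: label unlabeled neighbors of i-cells with i+1. */
    int i = 0, marked = 1;
    while (grid[tr][tc] == FREE && marked) {
        marked = 0;
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                if (grid[r][c] == i) {
                    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
                    for (int k = 0; k < 4; k++) {
                        int nr = r + dr[k], nc = c + dc[k];
                        if (nr >= 0 && nr < ROWS && nc >= 0 && nc < COLS
                            && grid[nr][nc] == FREE) {
                            grid[nr][nc] = i + 1;
                            marked = 1;
                        }
                    }
                }
        i++;
    }
    if (grid[tr][tc] < 0) { puts("no route"); return 1; }

    /* 3) Backtrace: from the target, repeatedly step to a lower mark. */
    printf("route (reversed):");
    int r = tr, c = tc;
    while (grid[r][c] != 0) {
        printf(" (%d,%d)", r, c);
        const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < ROWS && nc >= 0 && nc < COLS
                && grid[nr][nc] == grid[r][c] - 1) { r = nr; c = nc; break; }
        }
    }
    printf(" (%d,%d)\n", r, c);
    return 0;
}

On this grid the printed route detours around the obstacle in column 3; marking each cell with its wavefront index is exactly what guarantees the backtrace recovers a shortest path.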
Lee algorithm
Technology,Engineering
238
22,474,945
https://en.wikipedia.org/wiki/Linked%20data%20structure
In computer science, a linked data structure is a data structure which consists of a set of data records (nodes) linked together and organized by references (links or pointers). The link between data can also be called a connector. In linked data structures, the links are usually treated as special data types that can only be dereferenced or compared for equality. Linked data structures are thus contrasted with arrays and other data structures that require performing arithmetic operations on pointers. This distinction holds even when the nodes are actually implemented as elements of a single array, and the references are actually array indices: as long as no arithmetic is done on those indices, the data structure is essentially a linked one. Linking can be done in two ways: using dynamic allocation and using array index linking. Linked data structures include linked lists, search trees, expression trees, and many other widely used data structures. They are also key building blocks for many efficient algorithms, such as topological sort and set union-find. Common types of linked data structures Linked lists A linked list is a collection of structures ordered not by their physical placement in memory but by logical links that are stored as part of the data in the structure itself. It is not necessary that it be stored in adjacent memory locations. Every structure has a data field and an address field. The address field contains the address of its successor. Linked lists can be singly, doubly or multiply linked and can be either linear or circular. Basic properties Objects, called nodes, are linked in a linear sequence. A reference to the first node of the list is always kept. This is called the 'head' or 'front'. A linked list whose three nodes contain two fields each: an integer value and a link to the next node Example in Java This is an example of the node class used to store integers in a Java implementation of a linked list:

public class IntNode {
    public int value;
    public IntNode link;

    public IntNode(int v) {
        value = v;
    }
}

Example in C This is an example of the structure used for the implementation of a linked list in C:

struct node {
    int val;
    struct node *next;
};

This is an example using typedefs:

typedef struct node node;

struct node {
    int val;
    node *next;
};

Note: A structure like this which contains a member that points to the same structure is called a self-referential structure.

Example in C++ This is an example of the node class structure used for the implementation of a linked list in C++:

class Node {
    int val;
    Node *next;
};

Search trees A search tree is a tree data structure in whose nodes data values can be stored from some ordered set, such that an in-order traversal of the tree visits the nodes in ascending order of the stored values (a short C sketch of this traversal is given below). Advantages and disadvantages Linked list versus arrays Compared to arrays, linked data structures allow more flexibility in organizing the data and in allocating space for it. In arrays, the size of the array must be specified precisely at the beginning, which can be a potential waste of memory, or an arbitrary limitation which would later hinder functionality in some way. A linked data structure is built dynamically and never needs to be bigger than the program requires. It also requires no guessing at creation time, in terms of how much space must be allocated. 
This is a feature that is key in avoiding wastes of memory. In an array, the array elements have to be in a contiguous (connected and sequential) portion of memory. But in a linked data structure, the reference to each node gives users the information needed to find the next one. The nodes of a linked data structure can also be moved individually to different locations within physical memory without affecting the logical connections between them, unlike arrays. With due care, a certain process or thread can add or delete nodes in one part of a data structure even while other processes or threads are working on other parts. On the other hand, access to any particular node in a linked data structure requires following a chain of references that are stored in each node. If the structure has n nodes, and each node contains at most b links, there will be some nodes that cannot be reached in fewer than log_b n steps, slowing down the process of accessing these nodes; this sometimes represents a considerable slowdown, especially for structures containing large numbers of nodes. For many structures, some nodes may require, in the worst case, up to n−1 steps. In contrast, many array data structures allow access to any element with a constant number of operations, independent of the number of entries. Broadly, these linked data structures are implemented as dynamic data structures, which gives the chance to reuse particular regions of memory: memory is allocated as needed and deallocated when no longer required, so it can be utilized more efficiently. General disadvantages Linked data structures may also incur substantial memory allocation overhead (if nodes are allocated individually) and frustrate memory paging and processor caching algorithms (since they generally have poor locality of reference). In some cases, linked data structures may also use more memory (for the link fields) than competing array structures. This is because linked data structures are not contiguous; instances of data can be found all over in memory, unlike in arrays. In arrays, the nth element can be accessed immediately, while in a linked data structure we have to follow multiple pointers, so element access time varies according to where in the structure the element is. In some theoretical models of computation that enforce the constraints of linked structures, such as the pointer machine, many problems require more steps than in the unconstrained random-access machine model. See also List of data structures References Abstract data types Linked lists Trees (data structures)
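As a companion to the C list-node examples above, here is a hedged C sketch of the search-tree property described earlier: insertion keeps the ordering invariant, and an in-order traversal prints the stored values in ascending order. The struct and function names are illustrative, not from any particular library.

/* Binary search tree: insertion plus ascending in-order readout (sketch). */
#include <stdio.h>
#include <stdlib.h>

struct tnode {
    int val;
    struct tnode *left, *right;
};

/* Insert val into the subtree rooted at t, preserving the search order. */
struct tnode *insert(struct tnode *t, int val) {
    if (t == NULL) {
        t = malloc(sizeof *t);
        t->val = val;
        t->left = t->right = NULL;
    } else if (val < t->val) {
        t->left = insert(t->left, val);
    } else {
        t->right = insert(t->right, val);
    }
    return t;
}

/* In-order traversal: left subtree, then the node, then right subtree. */
void inorder(const struct tnode *t) {
    if (t != NULL) {
        inorder(t->left);
        printf("%d ", t->val);
        inorder(t->right);
    }
}

int main(void) {
    struct tnode *root = NULL;
    int vals[] = {5, 2, 8, 1, 3};
    for (int i = 0; i < 5; i++)
        root = insert(root, vals[i]);
    inorder(root);   /* prints: 1 2 3 5 8 */
    printf("\n");
    return 0;
}

Running it prints 1 2 3 5 8: the traversal order depends only on the stored values, not on the insertion order.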
Linked data structure
Mathematics
1,229
71,695,301
https://en.wikipedia.org/wiki/Arachnomyces%20bostrychodes
Arachnomyces bostrychodes is a species of infectious ascomycete fungus described in 2021 from clinical specimens of fungal strains in Texas, United States. Etymology The specific epithet comes from the Greek βοστρυχος-, meaning curl, referencing the curly appearance of the reproductive hyphae. Morphology and asexual reproduction A. bostrychodes grows septate, hyaline, branched vegetative hyphae with smooth and thin walls, between 1 and 2 μm wide. The fertile hyphae are well-differentiated, arising as lateral branches from the vegetative hyphae and successively branching to form dense, tightly curled, sinuous clusters that are also between 1 and 2 μm wide, forming arthroconidia at random, both intercalarily and terminally. The conidia measure 4–8 x 1–2 μm and are mostly curved and truncated at one or, more commonly, both ends; they are enteroarthric, hyaline, one-celled, smooth-walled, cylindrical, barrel-shaped; they are finger-shaped when terminal. The conidia are separated from the fertile hyphae by rhexolysis. There have been no observations of chlamydospores, racquet-shaped hyphae, setae, or sexual reproduction. References Fungi described in 2021 Eurotiomycetes Fungus species
Arachnomyces bostrychodes
Biology
293
1,355,482
https://en.wikipedia.org/wiki/Pell%20number
In mathematics, the Pell numbers are an infinite sequence of integers, known since ancient times, that comprise the denominators of the closest rational approximations to the square root of 2. This sequence of approximations begins 1/1, 3/2, 7/5, 17/12, and 41/29, so the sequence of Pell numbers begins with 1, 2, 5, 12, and 29. The numerators of the same sequence of approximations are half the companion Pell numbers or Pell–Lucas numbers; these numbers form a second infinite sequence that begins with 2, 6, 14, 34, and 82. Both the Pell numbers and the companion Pell numbers may be calculated by means of a recurrence relation similar to that for the Fibonacci numbers, and both sequences of numbers grow exponentially, proportionally to powers of the silver ratio 1 + √2. As well as being used to approximate the square root of two, Pell numbers can be used to find square triangular numbers, to construct integer approximations to the right isosceles triangle, and to solve certain combinatorial enumeration problems. As with Pell's equation, the name of the Pell numbers stems from Leonhard Euler's mistaken attribution of the equation and the numbers derived from it to John Pell. The Pell–Lucas numbers are also named after Édouard Lucas, who studied sequences defined by recurrences of this type; the Pell and companion Pell numbers are Lucas sequences. Pell numbers The Pell numbers are defined by the recurrence relation P_0 = 0, P_1 = 1, and P_n = 2P_{n−1} + P_{n−2} for n ≥ 2. In words, the sequence of Pell numbers starts with 0 and 1, and then each Pell number is the sum of twice the previous Pell number, plus the Pell number before that. The first few terms of the sequence are 0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860, … . Analogously to the Binet formula, the Pell numbers can also be expressed by the closed form formula P_n = ((1 + √2)^n − (1 − √2)^n) / (2√2). For large values of n, the (1 + √2)^n term dominates this expression, so the Pell numbers are approximately proportional to powers of the silver ratio 1 + √2, analogous to the growth rate of Fibonacci numbers as powers of the golden ratio. A third definition is possible, from the matrix formula [[P_{n+1}, P_n], [P_n, P_{n−1}]] = [[2, 1], [1, 0]]^n. Many identities can be derived or proven from these definitions; for instance an identity analogous to Cassini's identity for Fibonacci numbers, P_{n+1} P_{n−1} − P_n² = (−1)^n, is an immediate consequence of the matrix formula (found by considering the determinants of the matrices on the left and right sides of the matrix formula). Approximation to the square root of two Pell numbers arise historically and most notably in the rational approximation to √2. If two large integers x and y form a solution to the Pell equation x² − 2y² = ±1, then their ratio x/y provides a close approximation to √2. The sequence of approximations of this form is 1/1, 3/2, 7/5, 17/12, 41/29, 99/70, …, where the denominator of each fraction is a Pell number and the numerator is the sum of a Pell number and its predecessor in the sequence. That is, the solutions have the form (P_{n−1} + P_n) / P_n. The approximation of this type was known to Indian mathematicians in the third or fourth century BCE. The Greek mathematicians of the fifth century BCE also knew of this sequence of approximations: Plato refers to the numerators as rational diameters. In the second century CE Theon of Smyrna used the term the side and diameter numbers to describe the denominators and numerators of this sequence. 
These approximations can be derived from the continued fraction expansion of √2: √2 = 1 + 1/(2 + 1/(2 + 1/(2 + ⋯))). Truncating this expansion to any number of terms produces one of the Pell-number-based approximations in this sequence; for instance, truncating after four terms gives 17/12. As Knuth (1994) describes, the fact that Pell numbers approximate √2 allows them to be used for accurate rational approximations to a regular octagon with vertex coordinates (±P_i, ±P_{i+1}) and (±P_{i+1}, ±P_i). All vertices are equally distant from the origin, and form nearly uniform angles around the origin. Alternatively, the points (±(P_i + P_{i−1}), 0), (0, ±(P_i + P_{i−1})), and (±P_i, ±P_i) form approximate octagons in which the vertices are nearly equally distant from the origin and form uniform angles. Primes and squares A Pell prime is a Pell number that is prime. The first few Pell primes are 2, 5, 29, 5741, 33461, 44560482149, 1746860020068409, 68480406462161287469, ... . The indices of these primes within the sequence of all Pell numbers are 2, 3, 5, 11, 13, 29, 41, 53, 59, 89, 97, 101, 167, 181, 191, 523, 929, 1217, 1301, 1361, 2087, 2273, 2393, 8093, ... These indices are all themselves prime. As with the Fibonacci numbers, a Pell number Pn can only be prime if n itself is prime, because if d is a divisor of n then Pd is a divisor of Pn. The only Pell numbers that are squares, cubes, or any higher power of an integer are 0, 1, and 169 = 13². However, despite having so few squares or other powers, Pell numbers have a close connection to square triangular numbers. Specifically, these numbers arise from the following identity of Pell numbers: ((P_{k−1} + P_k) · P_k)² = ((P_{k−1} + P_k)² · ((P_{k−1} + P_k)² − (−1)^k)) / 2. The left side of this identity describes a square number, while the right side describes a triangular number, so the result is a square triangular number. Falcón and Díaz-Barrero (2006) proved another identity relating Pell numbers to squares and showing that the sum of the Pell numbers up to P_{4n+1} is always a square: ∑_{i=0}^{4n+1} P_i = (P_{2n} + P_{2n+1})². For instance, the sum of the Pell numbers up to P_5, 0 + 1 + 2 + 5 + 12 + 29 = 49, is the square of P_2 + P_3 = 2 + 5 = 7. The numbers forming the square roots of these sums, 1, 7, 41, 239, 1393, 8119, 47321, … , are known as the Newman–Shanks–Williams (NSW) numbers. Pythagorean triples If a right triangle has integer side lengths a, b, c (necessarily satisfying the Pythagorean theorem a² + b² = c²), then (a,b,c) is known as a Pythagorean triple. As Martin (1875) describes, the Pell numbers can be used to form Pythagorean triples in which a and b are one unit apart, corresponding to right triangles that are nearly isosceles. Each such triple has the form (2P_n P_{n+1}, P_{n+1}² − P_n², P_{n+1}² + P_n² = P_{2n+1}). The sequence of Pythagorean triples formed in this way is (4,3,5), (20,21,29), (120,119,169), (696,697,985), … Pell–Lucas numbers The companion Pell numbers or Pell–Lucas numbers are defined by the recurrence relation Q_0 = 2, Q_1 = 2, and Q_n = 2Q_{n−1} + Q_{n−2} for n ≥ 2. In words: the first two numbers in the sequence are both 2, and each successive number is formed by adding twice the previous Pell–Lucas number to the Pell–Lucas number before that, or equivalently, by adding the next Pell number to the previous Pell number: thus, 82 is the companion to 29, and 82 = 2 · 34 + 14 = 70 + 12. The first few terms of the sequence are: 2, 2, 6, 14, 34, 82, 198, 478, … Like the relationship between Fibonacci numbers and Lucas numbers, Q_n = P_{n−1} + P_{n+1} for all natural numbers n. The companion Pell numbers can be expressed by the closed form formula Q_n = (1 + √2)^n + (1 − √2)^n. These numbers are all even; each such number is twice the numerator in one of the rational approximations to √2 discussed above. Like the Lucas sequence, if a Pell–Lucas number Qn is prime, it is necessary that n be either prime or a power of 2. The Pell–Lucas primes are 3, 7, 17, 41, 239, 577, … . For these, n is 2, 3, 4, 5, 7, 8, 16, 19, 29, 47, 59, 163, 257, 421, … . 
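As a quick worked check of the near-isosceles triple form above (our arithmetic, using the tabulated Pell numbers):

\[
  n = 2:\quad
  \bigl(2P_2P_3,\; P_3^2 - P_2^2,\; P_3^2 + P_2^2\bigr)
  = (2\cdot 2\cdot 5,\; 25 - 4,\; 25 + 4)
  = (20,\, 21,\, 29),
\]

with 20² + 21² = 400 + 441 = 841 = 29², and 29 = P_5 as claimed.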
Computations and connections The following table gives the first few powers of the silver ratio δ = δS = 1 + √2 and its conjugate 1 − √2.

{| class="wikitable" style="text-align:center"
|-
! n !! (1 + √2)^n !! (1 − √2)^n
|-
! 0
| 1 + 0√2 = 1 || 1 − 0√2 = 1
|-
! 1
| 1 + 1√2 = 2.41421… || 1 − 1√2 = −0.41421…
|-
! 2
| 3 + 2√2 = 5.82842… || 3 − 2√2 = 0.17157…
|-
! 3
| 7 + 5√2 = 14.07106… || 7 − 5√2 = −0.07106…
|-
! 4
| 17 + 12√2 = 33.97056… || 17 − 12√2 = 0.02943…
|-
! 5
| 41 + 29√2 = 82.01219… || 41 − 29√2 = −0.01219…
|-
! 6
| 99 + 70√2 = 197.9949… || 99 − 70√2 = 0.0050…
|-
! 7
| 239 + 169√2 = 478.00209… || 239 − 169√2 = −0.00209…
|-
! 8
| 577 + 408√2 = 1153.99913… || 577 − 408√2 = 0.00086…
|-
! 9
| 1393 + 985√2 = 2786.00035… || 1393 − 985√2 = −0.00035…
|-
! 10
| 3363 + 2378√2 = 6725.99985… || 3363 − 2378√2 = 0.00014…
|-
! 11
| 8119 + 5741√2 = 16238.00006… || 8119 − 5741√2 = −0.00006…
|-
! 12
| 19601 + 13860√2 = 39201.99997… || 19601 − 13860√2 = 0.00002…
|}

The coefficients are the half-companion Pell numbers Hn and the Pell numbers Pn, which are the (non-negative) solutions to H² − 2P² = ±1. A square triangular number is a number which is both the t-th triangular number and the s-th square number. A near-isosceles Pythagorean triple is an integer solution to a² + b² = c² where b = a + 1. The next table shows that splitting the odd number Hn into nearly equal halves gives a square triangular number when n is even and a near isosceles Pythagorean triple when n is odd. All solutions arise in this manner.

{| class="wikitable" style="text-align:center"
|-
! n !! Hn !! Pn !! t !! t + 1 !! s !! a !! b !! c
|-
! 0
| 1 || 0 || 0 || 1 || 0 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|-
! 1
| 1 || 1 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || 0 || 1 || 1
|-
! 2
| 3 || 2 || 1 || 2 || 1 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|-
! 3
| 7 || 5 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || 3 || 4 || 5
|-
! 4
| 17 || 12 || 8 || 9 || 6 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|-
! 5
| 41 || 29 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || 20 || 21 || 29
|-
! 6
| 99 || 70 || 49 || 50 || 35 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|-
! 7
| 239 || 169 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || 119 || 120 || 169
|-
! 8
| 577 || 408 || 288 || 289 || 204 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|-
! 9
| 1393 || 985 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || 696 || 697 || 985
|-
! 10
| 3363 || 2378 || 1681 || 1682 || 1189 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|-
! 11
| 8119 || 5741 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || 4059 || 4060 || 5741
|-
! 12
| 19601 || 13860 || 9800 || 9801 || 6930 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|}

Definitions The half-companion Pell numbers Hn and the Pell numbers Pn can be derived in a number of easily equivalent ways. Raising to powers (1 + √2)^n = H_n + P_n√2 and (1 − √2)^n = H_n − P_n√2. From this it follows that there are closed forms: H_n = ((1 + √2)^n + (1 − √2)^n) / 2 and P_n = ((1 + √2)^n − (1 − √2)^n) / (2√2). Paired recurrences H_0 = 1, P_0 = 0, H_n = H_{n−1} + 2P_{n−1}, and P_n = H_{n−1} + P_{n−1}. Reciprocal recurrence formulas Let n be at least 2. Then H_{n−1} = 2P_n − H_n and P_{n−1} = H_n − P_n. Matrix formulations (H_n; P_n) = [[1, 2], [1, 1]] (H_{n−1}; P_{n−1}), so (H_n; P_n) = [[1, 2], [1, 1]]^n (1; 0). Approximations The difference between H_n and P_n√2 is H_n − P_n√2 = (1 − √2)^n, which goes rapidly to zero. So (1 + √2)^n = H_n + P_n√2 is extremely close to 2H_n. From this last observation it follows that the integer ratios H_n/P_n rapidly approach √2, and H_n/H_{n−1} and P_n/P_{n−1} rapidly approach 1 + √2. 
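The paired recurrences and approximation claims above are easy to check mechanically; the following short C sketch (variable names ours) prints H_n, P_n, the invariant H_n² − 2P_n², and the ratio H_n/P_n converging to √2.

/* Paired recurrences for the half-companion and Pell numbers (sketch). */
#include <stdio.h>

int main(void) {
    long long h = 1, p = 0;               /* H_0 = 1, P_0 = 0 */
    for (int n = 1; n <= 12; n++) {
        long long h_next = h + 2 * p;     /* H_n = H_{n-1} + 2 P_{n-1} */
        long long p_next = h + p;         /* P_n = H_{n-1} + P_{n-1} */
        h = h_next;
        p = p_next;
        printf("n=%2d  H=%6lld  P=%6lld  H^2-2P^2=%2lld  H/P=%.10f\n",
               n, h, p, h * h - 2 * p * p, (double)h / (double)p);
    }
    return 0;
}

The printed invariant alternates between 1 and −1, matching the case analysis in the next section.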
H² − 2P² = ±1 Since √2 is irrational, we cannot have H/P = √2, i.e., H² = 2P². The best we can achieve is either H² = 2P² − 1 or H² = 2P² + 1. The (non-negative) solutions to H² = 2P² + 1 are exactly the pairs (Hn, Pn) with n even, and the solutions to H² = 2P² − 1 are exactly the pairs (Hn, Pn) with n odd. To see this, note first that H_{n+1}² − 2P_{n+1}² = −(H_n² − 2P_n²), so that these differences, starting with H_0² − 2P_0² = 1, are alternately 1 and −1. Then note that every positive solution comes in this way from a solution with smaller integers, since (2P − H)² − 2(H − P)² = −(H² − 2P²). The smaller solution also has positive integers, with the one exception H = P = 1, which comes from H0 = 1 and P0 = 0. Square triangular numbers The required equation t(t + 1)/2 = s² is equivalent to (2t + 1)² = 8s² + 1, which becomes H² = 2P² + 1 with the substitutions H = 2t + 1 and P = 2s. Hence the n-th solution is t_n = (H_{2n} − 1)/2 and s_n = P_{2n}/2. Observe that t and t + 1 are relatively prime, so that t(t + 1)/2 = s² happens exactly when they are adjacent integers, one a square H² and the other twice a square 2P². Since we know all solutions of that equation, we also have t_n = 2P_n² when n is even, t_n = H_n² when n is odd, and s_n = H_n P_n. This alternate expression is seen in the next table.

{| class="wikitable" style="text-align:center"
|-
! n !! Hn !! Pn !! t !! t + 1 !! s !! a !! b !! c
|-
! 0
| 1 || 0 || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || style="background: grey;" | || style="background: grey;" |
|-
! 1
| 1 || 1 || 1 || 2 || 1 || 3 || 4 || 5
|-
! 2
| 3 || 2 || 8 || 9 || 6 || 20 || 21 || 29
|-
! 3
| 7 || 5 || 49 || 50 || 35 || 119 || 120 || 169
|-
! 4
| 17 || 12 || 288 || 289 || 204 || 696 || 697 || 985
|-
! 5
| 41 || 29 || 1681 || 1682 || 1189 || 4059 || 4060 || 5741
|-
! 6
| 99 || 70 || 9800 || 9801 || 6930 || 23660 || 23661 || 33461
|}

Pythagorean triples The equality a² + b² = c² with b = a + 1 occurs exactly when 2c² = (a + b)² + 1, which becomes H² = 2P² − 1 with the substitutions H = a + b and P = c. Hence the n-th solution is a_n = (H_{2n+1} − 1)/2 and c_n = P_{2n+1}. The table above shows that, in one order or the other, a_n and b_n = a_n + 1 are H_n H_{n+1} and 2P_n P_{n+1}, while c_n = P_n H_{n+1} + P_{n+1} H_n. Notes References External links —The numerators of the same sequence of approximations Integer sequences Recurrence relations Unsolved problems in mathematics
Pell number
Mathematics
3,526
17,018,793
https://en.wikipedia.org/wiki/Ocean%20Data%20Standards
The vast scale of the oceans, the difficulty and expense of making measurements due to the hostility of the environment, and the internationality of the marine environment have led to a culture of data sharing in the oceanographic data community. As far back as 1961, UNESCO's Intergovernmental Oceanographic Commission (IOC) set up IODE (International Oceanographic Data Exchange, subsequently renamed International Oceanographic Data and Information Exchange to reflect the increasing importance of metadata) to enhance marine research, exploitation and development by facilitating the exchange of oceanographic data and information. Traditionally, oceanographic data exchange was based on manual transactions involving delivery of physical packages of data on magnetic tape or CD-ROM, or more recently by electronic FTP transfer. However, the increasing need of climate scientists for regional or global data syntheses to support their modelling activities requires automation of the data exchange process. Consequently, oceanographic data managers are developing 'virtual data centres' to support the distribution of data through software agents. Distributed data systems are critically dependent on machine-readable metadata to provide information on such issues as physical access protocols and the semantics of the data. It is essential that this metadata conforms to agreed standards to prevent the computing paradigm of 'garbage in, garbage out' blighting automated data exchange. For example, if a data description of 'temperature' were allowed, it could lead to the merging of sea temperature data from one centre with air temperature data from another. Many of these standards are community based, such as the CF conventions developed for global climate modelling. Significant progress documenting these informal standards, leading to exposure and encouragement of best practice, has been made by the Marine Metadata Interoperability project. However, if the oceanographic community is to emulate the success of the spatial data community in the development of data interoperability, then a more formalised standards development framework is required. To this end, an ocean data standards review, development and publication infrastructure is being developed under the auspices of the Joint WMO-IOC Technical Commission on Oceanography and Marine Meteorology (JCOMM). References The Ocean Data Standards website External links Ocean Data Standards Oceanography
Ocean Data Standards
Physics,Environmental_science
423
2,874,293
https://en.wikipedia.org/wiki/Single-minute%20exchange%20of%20die
Single-minute exchange of die (SMED) is one of the many lean production methods for reducing inefficiencies in a manufacturing process. It provides a rapid and efficient way of converting a manufacturing process from running the current product to running the next product. This is key to reducing production lot sizes and thereby reducing uneven flow (Mura), production loss, and output variability. The phrase "single minute" does not mean that all changeovers and startups should take only one minute; rather, they should take less than ten minutes (a "single-digit" number of minutes). A closely associated yet more difficult concept is one-touch exchange of die (OTED), which says changeovers can and should take less than 100 seconds. A die is a tool used in manufacturing. However, SMED's utility is not limited to manufacturing (see value stream mapping). History Frederick Winslow Taylor analyzed non-value-adding parts of setups in his 1911 book, Shop Management (page 171). However, he did not create any method or structured approach around it. Frank Bunker Gilbreth studied and improved working processes in many different industries, from bricklaying to surgery. As part of his work, he also looked into changeovers. His book Motion Study (also from 1911) described approaches to reduce setup time. Even Henry Ford's factories were using some setup reduction techniques. In the 1915 publication Ford Methods and Ford Shops, setup reduction approaches were clearly described. However, these approaches never became mainstream. For most of the 20th century, the economic order quantity was the gold standard for lot sizing. The JIT workflow of Toyota had a problem: tool changeover took between two and eight hours. Setup time and lot size reduction had been ongoing in Toyota's production system since 1945, when Taiichi Ohno became manager of the machine shops at Toyota. On a trip to the US in 1955, Ohno observed Danly stamping presses with rapid die change capability. Subsequently, Toyota bought multiple Danly presses for the Motomachi plant and started improving the changeover time of its presses. This was known as Quick Die Change, or QDC for short. They developed a structured approach based on a framework from the US World War II Training Within Industry (TWI) program, called ECRS – Eliminate, Combine, Rearrange, and Simplify. Over time, Toyota decreased changeover times from hours to fifteen minutes by the 1960s, three minutes by the 1970s, and ultimately just 180 seconds by the 1990s. During the late 1970s, when Toyota's method was already well refined, Shigeo Shingo participated in one QDC workshop. After he started to publicize details of the Toyota Production System without permission, the business connection was terminated abruptly by Toyota. Shingo moved to the US and started to consult on lean manufacturing. Besides claiming to have invented this quick changeover method (among many other things), he renamed it Single Minute Exchange of Die or, in short, SMED. The "single minute" stands for a single-digit number of minutes (i.e., less than ten). He promoted TPS and SMED in the US. Example Toyota found that the most difficult tools to change were the dies on the large transfer-stamping machines that produce car body parts. The dies – which must be changed for each model – weigh many tons, and must be assembled in the stamping machines with tolerances of less than a millimeter; otherwise the stamped metal will wrinkle, if not melt, under the intense heat and pressure.
When Toyota engineers examined the changeover, they discovered that the established procedure was to stop the line, lower the dies with an overhead crane, position the dies in the machine by human eyesight, and then adjust their position with crowbars while making individual test stampings. The existing process took from twelve hours to almost three days to complete. Toyota's first improvement was to place precision measurement devices on the transfer stamping machines and record the necessary measurements for each model's die. Installing the die against these measurements, rather than by human eyesight, immediately cut the changeover to a mere hour and a half. Further observations led to further improvements – scheduling the die changes in a standard sequence (as part of FRS) as a new model moved through the factory, dedicating tools to the die-change process so that all needed tools were nearby, and scheduling use of the overhead cranes so that the new die would be waiting as the old die was removed. Using these processes, Toyota engineers cut the changeover time to less than 10 minutes per die, and thereby reduced the economic lot size below one vehicle. The success of this program contributed directly to just-in-time manufacturing, which is part of the Toyota Production System. SMED makes load balancing much more achievable by reducing economic lot size and thus stock levels. Effects of implementation Shigeo Shingo, who created the SMED approach, claimed that in his data from 1975 to 1985 the average setup times he dealt with were reduced to 2.5% of the time originally required, a forty-fold improvement. However, the power of SMED is that it has many other effects which come from systematically looking at operations; these include: Stockless production, which drives inventory turnover rates Reduction in the footprint of processes, with reduced inventory freeing floor space Productivity increases or reduced production time Increased machine work rates from reduced setup times, even if the number of changeovers increases Elimination of setup errors and of trial runs, which reduces defect rates Improved quality from fully regulated operating conditions in advance Increased safety from simpler setups Simplified housekeeping from fewer tools and better organization Lower expense of setups Preferred by operators, since setups become easier to achieve Lower skill requirements, since changes are now designed into the process rather than a matter of skilled judgement Elimination of unusable stock from model changeovers and demand estimate errors Goods are not lost through deterioration Ability to mix production gives flexibility and further inventory reductions, as well as opening the door to revolutionized production methods (large orders ≠ large production lot sizes) New attitudes on controllability of the work process amongst staff Implementation techniques Shigeo Shingo recognizes eight fundamental techniques that should be considered in implementing SMED. Separate internal from external setup operations Convert internal to external setup Standardize function, not shape Use functional clamps or eliminate fasteners altogether Use intermediate jigs Adopt parallel operations Eliminate adjustments Mechanization NB: External setup can be done without the line being stopped, whereas internal setup requires that the line be stopped.
He suggests that SMED improvement should pass through four conceptual stages: A) ensure that external setup actions are performed while the machine is still running, B) separate external and internal setup actions, ensure that the parts all function, and implement efficient ways of transporting the die and other parts, C) convert internal setup actions to external ones, D) improve all setup actions. Formal method There are seven basic steps to reducing changeover using the SMED system: Observe the current methodology. Separate the internal and external activities. Internal activities are those that can only be performed when the process is stopped, while external activities can be done while the last batch is being produced, or once the next batch has started. For example, go and get the required tools for the job before the machine stops. Convert (where possible) internal activities into external ones (pre-heating of tools is a good example of this). Streamline the remaining internal activities by simplifying them. Focus on fixings – Shigeo Shingo observed that it is only the last turn of a bolt that tightens it; the rest is just movement. Streamline the external activities, so that they are of a similar scale to the internal ones (D). Document the new procedure and the actions that are yet to be completed. Do it all again: for each iteration of the above process, a 45% improvement in set-up times should be expected, so it may take several iterations to cross the ten-minute line. Consider four successive runs, with learning from each run and improvements applied before the next. Run 1 illustrates the original situation. Run 2 shows what would happen if more changeovers were included. Run 3 shows the impact of the improvements in changeover times that come from doing more of them and building learning into their execution. Run 4 shows how these improvements can get you back to the same production time but now with more flexibility in production capacity. Run N (not illustrated) would have changeovers that take 1.5 minutes (a 97% reduction) and whole shift time reduced from 420 minutes to 368 minutes, a productivity improvement of 12%. The SMED concept is credited to Shigeo Shingo, one of the main contributors to the consolidation of the Toyota Production System, along with Taiichi Ohno. Key elements to observe Look for: Shortages, mistakes and inadequate verification of equipment, which cause delays and can be avoided by check tables, especially visual ones, and by setup on an intermediary jig Inadequate or incomplete repairs to equipment, causing rework and delays Optimization for least work as opposed to least delay Unheated molds, which require several wasted 'tests' before they will be at the temperature to work Using slow precise adjustment equipment for the large coarse part of adjustment Lack of visual lines or benchmarks for part placement on the equipment Forcing a changeover between different raw materials when a continuous feed, or near equivalent, is possible Lack of functional standardization, that is, standardization of only the parts necessary for setup, e.g.
all bolts use the same size spanner and die grip points are in the same place on all dies Much operator movement around the equipment during setup More attachment points than actually required for the forces to be constrained Attachment points that take more than one turn to fasten Any adjustments after initial setup Any use of experts during setup Any adjustments of assisting tools such as guides or switches Record all necessary data Parallel operations using multiple operators By modelling the actual operations as a network containing their dependencies, it is possible to optimize task attribution among operators and further reduce setup time. Effective communication between the operators must be managed to ensure safety where potentially noisy or visually obstructed conditions occur. See also Changeover Value stream mapping References Lean manufacturing Toyota Production System
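The "do it all again" step compounds across passes. A small sketch of that arithmetic, assuming an illustrative 4-hour starting changeover and the 45% per-iteration improvement quoted above:

```python
# Sketch of the iteration arithmetic described in the formal method: if each
# pass through the seven SMED steps yields roughly a 45% reduction, how many
# passes cross the ten-minute line? The starting time is an illustrative
# assumption, not a figure from the article.

def iterations_to_target(start_minutes: float, target_minutes: float = 10.0,
                         reduction_per_pass: float = 0.45) -> int:
    """Count SMED passes until the changeover time drops below the target."""
    time, passes = start_minutes, 0
    while time >= target_minutes:
        time *= (1.0 - reduction_per_pass)
        passes += 1
    return passes

# e.g. a 4-hour (240-minute) changeover:
print(iterations_to_target(240))  # -> 6 passes (240 -> 132 -> 72.6 -> ... -> 6.6)
```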
Single-minute exchange of die
Engineering
2,095
35,063,489
https://en.wikipedia.org/wiki/Hartshorne%20ellipse
In mathematics, a Hartshorne ellipse is an ellipse in the unit ball bounded by the 4-sphere S4 such that the ellipse and the circle given by the intersection of its plane with S4 satisfy the Poncelet condition that there is a triangle with vertices on the circle and edges tangent to the ellipse. They were introduced by Hartshorne, who showed that they correspond to k = 2 instantons on S4. References Algebraic geometry
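The Poncelet triangle condition can be probed numerically in the plane. As background not drawn from the article: for the unit circle and a concentric axis-aligned ellipse with semi-axes a and b, the pair with a + b = 1 is a classical configuration admitting Poncelet triangles, so a chain of tangent chords closes after three steps from any starting point on the circle. A minimal sketch of that closure check, with the specific semi-axes chosen as an assumption:

```python
import math

# Numeric illustration, not from the article: Poncelet closure for triangles.
# Circle: the unit circle. Ellipse: x^2/A^2 + y^2/B^2 = 1 with A + B = 1,
# a classical pair admitting a Poncelet triangle family. A, B are assumptions.
A, B = 0.6, 0.4

def tangent_lines(px, py):
    """Both lines u*x + v*y = 1 through (px, py) tangent to the ellipse;
    a line in this form is tangent iff A^2*u^2 + B^2*v^2 = 1 (dual conic)."""
    qa = A * A * py * py + B * B * px * px   # quadratic in u after
    qb = -2 * B * B * px                     # substituting v = (1 - u*px)/py
    qc = B * B - py * py                     # (assumes py != 0)
    disc = math.sqrt(qb * qb - 4 * qa * qc)
    roots = ((-qb + disc) / (2 * qa), (-qb - disc) / (2 * qa))
    return [(u, (1 - u * px) / py) for u in roots]

def other_circle_point(u, v, px, py):
    """Second intersection of the chord u*x + v*y = 1 with the unit circle."""
    n2 = u * u + v * v
    fx, fy = u / n2, v / n2                   # foot of perpendicular from origin
    h = math.sqrt(max(0.0, 1.0 - 1.0 / n2))   # half chord length
    tx, ty = -v / math.sqrt(n2), u / math.sqrt(n2)
    c1 = (fx + h * tx, fy + h * ty)
    c2 = (fx - h * tx, fy - h * ty)
    return c2 if math.dist(c1, (px, py)) < 1e-9 else c1

def poncelet_step(p, prev):
    """From vertex p, follow the tangent that does not lead back to prev."""
    for u, v in tangent_lines(*p):
        q = other_circle_point(u, v, *p)
        if prev is None or math.dist(q, prev) > 1e-6:
            return q

p0 = (math.cos(0.7), math.sin(0.7))   # arbitrary starting vertex
p1 = poncelet_step(p0, None)
p2 = poncelet_step(p1, p0)
p3 = poncelet_step(p2, p1)
print(math.dist(p3, p0))   # ~1e-15: the triangle closes after three sides
```

Choosing semi-axes with A + B ≠ 1 makes the printed closure gap jump to order 0.1, i.e., the chain generally fails to close.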
Hartshorne ellipse
Mathematics
94
2,665,974
https://en.wikipedia.org/wiki/Nitro%20blue%20tetrazolium%20chloride
Nitro blue tetrazolium is a chemical compound composed of two tetrazole moieties. It is used in immunology for sensitive detection of alkaline phosphatase (with BCIP). NBT serves as the oxidant and BCIP is the AP substrate (and also yields a dark blue dye). Clinical significance In immunohistochemistry, alkaline phosphatase is often used as a marker conjugated to an antibody. The colored product of the NBT/BCIP reaction reveals where the antibody is bound; alternatively, the bound antibody can be detected by immunofluorescence. The NBT/BCIP reaction is also used for colorimetric/spectrophotometric activity assays of oxidoreductases. One application is in activity stains in gel electrophoresis, such as with the mitochondrial electron transport chain complexes. Nitro blue tetrazolium is used in a diagnostic test, particularly for chronic granulomatous disease and other diseases of phagocyte function. When there is an NADPH oxidase defect, the phagocyte is unable to make the reactive oxygen species or radicals required for bacterial killing. As a result, bacteria may thrive within the phagocyte. In the test, phagocytes that produce reactive oxygen species reduce the yellow NBT to dark blue formazan; the higher the blue score, the better the cell is at producing reactive oxygen species. References Biochemistry detection reactions Immunologic tests 4-Nitrophenyl compounds Phenol ethers Tetrazoles
Nitro blue tetrazolium chloride
Chemistry,Biology
309
1,813,310
https://en.wikipedia.org/wiki/Pinaka%20multi-barrel%20rocket%20launcher
Pinaka (from Sanskrit: पिनाक, see Pinaka) is a multiple rocket launcher produced in India and developed by the Defence Research and Development Organisation (DRDO) for the Indian Army. It is also called India's Grad missile system, as its characteristics are derived from the BM-21 Grad. The system has a maximum range of for the Mark-I Enhanced and for the Mark-II ER version, and can fire a salvo of 12 HE rockets per launcher in 44 seconds. The system is mounted on a Tatra truck for mobility. Pinaka saw service during the Kargil War, where it was successful in neutralising Pakistani positions on the mountain tops. It has since been inducted into the Indian Army in large numbers. In April 2013, was sanctioned for increasing the production capacity of Pinaka rockets from the then 1,000 to 5,000 per year. Unutilised land of Yantra India Limited was also being considered for further capacity expansion when production of advanced variants would commence. The expansion was completed by 2014. Development The Indian Army operates the Russian BM-21 Grad launchers. In 1981, in response to the Indian Army's need for a long-range artillery system, the Indian Ministry of Defence (MoD) sanctioned two confidence-building projects. In July 1983, the Army formulated its General Staff Qualitative Requirement (GSQR) for the system. In December 1986, the MoD sanctioned . The Armament Research & Development Establishment (ARDE) was appointed the System Coordinator for the project. The project included seven other DRDO laboratories, such as the Combat Vehicles Research and Development Establishment (CVRDE), the High Energy Materials Research Laboratory (HEMRL) and the Electronics and Radar Development Establishment (LRDE). DRDO was to fabricate seven launcher vehicles (six of which were to be supplied to the Army for user trials), three replenishment-cum-loader vehicles (including two for the Army's user trials) and one command post vehicle. Induction was planned at the rate of one regiment per year from 1994 onwards. This system would eventually replace the Grads. Mark 1 Development began in December 1986, with a sanctioned budget of . The development was to be completed in December 1992. As per a report, the prototype was rolled out by 1992. The user trials of the system by the Army began by February 1999, after the developmental trials. A section of two launchers was deployed in June 1999 during the Kargil War under the 121 Rocket Regiment. The user trials ended in December 1999. The first order for full-rate production was placed with the Ordnance Factory in 2007. The Pinaka is in the process of further improvement. Israel Military Industries teamed up with DRDO to implement its Trajectory Correction System (TCS) on the Pinaka, for further improvement of its CEP. This has been trialled and has shown excellent results. The rockets can also be guided by GPS to improve their accuracy. A wraparound microstrip antenna has been developed by DRDO for this system. To decrease single-source dependency on the Ordnance Factory Board (OFB) and increase competition in product pricing, final developmental trials of Pinaka rockets manufactured fully by the Indian private-sector company Solar Industries, under a transfer-of-technology agreement with DRDO, were successfully conducted by the Indian Army at Pokhran Range on 19 August 2020. Trials of rockets developed by Economic Explosives Ltd. (a subsidiary of the Solar Industries Group) and Yantra India Limited-Munitions India Limited (YIL-MIL) are underway for two variants, Mk-I Enhanced and Mk-I ADM.
The order for these variants is to be placed with one or two of the competitors in order to replace the shorter-ranged Mk-I variant rockets. Mark 2 The Pinaka Mk-II is being developed by the Armament Research and Development Establishment (ARDE), Pune; the Research Centre Imarat (RCI), Hyderabad; and the Defence Research and Development Laboratory (DRDL), Hyderabad. Another variant of the Mark II, called Guided Pinaka, is equipped with a navigation, guidance and control kit, which has considerably enhanced the range and accuracy of the rocket. Its range is estimated to be between 60 km and 75 km. Sagem completed delivery of its Sigma 30 laser-gyro artillery navigation and pointing system for 2 of the Pinaka MBRL systems in June 2010. The Sigma 30 artillery navigation and pointing system is designed for high-precision firing at short notice. The systems would be integrated by Tata Power SED and Larsen & Toubro. The system was ordered in February 2008. A Pinaka Mark 2 manufactured by Solar Industries completed its User Assisted Technical Trial (UATT) on 8 December 2021 and was to go for user trials, to be completed by March 2022, while Yantra India Limited-Munitions India Limited (YIL-MIL) was still developing a prototype of the Pinaka Mark 2 as of December 2021, owing to a delay in the transfer of technology by ARDE. As of September 2022, the Defence Acquisition Council (DAC) has cleared the procurement proposal for induction of the Guided Pinaka variant. The flight trials conducted as part of the Validation Trials were completed in November 2024. Further development In 2005, ARDE revealed the development of a long-range MRL similar to the Smerch MRLS. A 7.2-metre rocket for the Pinaka MBRL, which can reach a distance of 120 km and carry a 250 kg payload, will be developed. These new rockets can be fired in 44 seconds, have a maximum speed of Mach 4.7, and rise to an altitude of 40 km before hitting their targets at Mach 1.8. Integrating UAVs with the Pinaka is also in the pipeline, as DRDO intends to install guidance systems on these rockets to increase their accuracy. On 17 January 2024, reports revealed that DRDO is developing two new variants of Pinaka rockets, one with a range of 120 km and the other with a range of over 200 km. On 24 January 2024, a few other reports revealed that the ranges of the rockets shall be 120 km and 300 km, respectively. The development of the new variants has been approved by the Indian Army. While the 120 km rocket is to have the same calibre as the earlier variants (214 mm), enabling it to be fired from earlier launchers, the Preliminary Services Qualitative Requirements of the other variant are still being drawn up. In September 2024, Lieutenant General Adosh Kumar, the Director General of the Regiment of Artillery, stated that future plans for the Pinaka include first doubling the range of the rockets, and then later increasing the range to almost four times the current range. In November 2024, the Pinaka's range was enhanced to over 75 km after the system successfully completed flight tests. Ramjet propulsion A group of researchers led by Lieutenant General P.R. Shankar, a professor in the aerospace department at IIT Madras and the former Director General of Artillery for the Indian Army Combat and Combat Support Arms, is developing ramjet propulsion technology that will be incorporated into the Pinaka rockets. It is anticipated that the range of the 210–214 mm rocket will increase to 225–250 km with the addition of ramjet propulsion, all the while preserving the operational flexibility of the system.
Testing The first tests of the rocket system (Pinaka Mk-I) were conducted in the 1990s, around 1995. User trials of the Pinaka Mk-I were carried out from February to December 1999. It also took part in the Kargil War. On 30 and 31 January 2013, at least three rounds of Pinaka Mk-II were successfully fired from the test range of the Proof and Experimental Establishment (PXE) at Chandipur for developmental trials. The tests were conducted by personnel of the Armament Research & Development Establishment (ARDE) between 11:00 am and 12:00 pm IST. On 28 July 2013, successful firing trials of the Pinaka Mk-II were conducted in the Chandan area of Pokhran by DRDO and Army personnel. The rockets destroyed the targets in the Keru area, 30 km from the point of launch. On 7 August 2013, tests of two rounds of Pinaka Mk-II fired from the test range of the Proof and Experimental Establishment (PXE) at Chandipur failed to provide the desired results. On 20 December 2013, six rounds of Pinaka Mk-II were successfully fired from the test range of the Proof and Experimental Establishment (PXE) at Chandipur for developmental trials. This test, conducted by ARDE personnel, reportedly followed the failed attempt to test the same variant four months earlier, in August. On 30 May 2014, Pinaka Mk-II rockets were successfully fired from the test range of PXE at Chandipur by ARDE. The rockets were launched to a range of 61 km, against the then maximum range of 65 km. From 20 to 23 May 2016, four rounds of the Pinaka Mk-II were successfully fired from the test range of the Proof and Experimental Establishment (PXE) at Chandipur-on-sea for testing a new guidance system. On 12 January 2017 and 24 January 2017, two successful tests of the Pinaka Mk-II were conducted, with ranges of 65 km and 75 km respectively, from Launch Complex-III, Integrated Test Range, Chandipur. On 30 May 2018, two rounds of tests of the Pinaka Mk-II were successfully conducted from Launch Complex-III, ITR, Chandipur. Another round of tests was conducted successfully on 11 March 2019. On 19 December 2019, the Pinaka Mk-II version was tested at a range of 75 km. On 20 December 2019, two Pinaka Mk-II variant rockets were fired in salvo mode at an interval of 60 seconds at a low range of 20 km at 11:00 am IST from the Integrated Test Range, Chandipur. The trial assessed proximity fuse initiation and accuracy at low ranges. The newer Pinaka Mk-II ER variant was reported to be flight tested at a range of up to 90 km. On 19 August 2020, Pinaka Mk-I Enhanced variant rockets, manufactured by Economic Explosives Ltd. of the Solar Group, were tested successfully from Pokhran Range, Rajasthan. This was the first time in India that a munition of this kind was manufactured and tested by a private sector company. On 20 August 2020, trials of the Guided Pinaka were carried out for the first time. On 4 November 2020, a series of 6 Pinaka Mk-I Enhanced variant rockets, manufactured by Economic Explosives Ltd. of the Solar Group, were tested successfully from the Integrated Test Range, Chandipur. The variant is expected to replace the older Mark I variant in production. For this variant, DRDO decreased the size of the rockets compared to the older-generation Mark I. On 24 June 2021, DRDO successfully fired 25 Pinaka Mk-I Enhanced rockets at a range of 45 km in quick succession, as part of a saturation attack simulation. The rockets were manufactured by EEL. On 25 June 2021, DRDO successfully test-fired a 40 km range, 122 mm calibre rocket made to replace the older BM-21 Grad rockets in the Indian Army.
December 2021 trials: a total of 24 rockets of multiple variants were fired at Pokhran Range. On 8 December 2021, the Pinaka Mk-I Enhanced was successfully tested at a range of 45 km. The variant was manufactured by Economic Explosives Ltd (EEL). The User Assisted Technical Trial (UATT) of the Pinaka Mk-II produced by EEL was also completed the same day. The Pinaka Mk-II variants produced by both EEL and YIL were to undergo further user trials by March 2022. On 10 December 2021, the Pinaka Area Denial Munition (ADM) variant, equipped with Dual-Purpose Improved Conventional Munition (DPICM), was tested. The tests included rockets manufactured by both Economic Explosives Ltd (EEL) and Yantra India Limited (YIL). While both manufacturers claimed success in the trials, it was confirmed that for YIL's variant 96.6% of the DPICM exploded, surpassing the benchmark of 90%. During these tests, ARDE evaluated locally developed Direct-Action Self Destruction (DASD) and Anti-Tank Munition (ATM) fuses. In April 2022, a total of 24 Enhanced Pinaka Rocket System (EPRS) rockets, along with Pinaka ADM rockets, were fired at Pokhran Range at different ranges. The rockets developed by Yantra India Limited-Munitions India Limited (YIL-MIL) were flight tested during the trials. In the last two weeks of August 2022, user trials of the Pinaka Enhanced were conducted from both Pokhran Range and the Integrated Test Range, Balasore. Rockets developed by both manufacturers, MIL and EEL, successfully passed the trials. On 14 November 2024, DRDO completed the final flight tests of the Guided Pinaka Weapon System as part of the Provisional Staff Qualitative Requirements (PSQR) Validation Trials. The tests were conducted in three phases at field firing ranges. The PSQR parameters of range, accuracy, consistency and rate of fire for engaging multiple targets in salvo mode were assessed during the trials. A total of 12 rockets from each production agency, Economic Explosives Ltd. (EEL) and Munitions India Limited (MIL), were tested from two launchers that were upgraded by the launcher manufacturers, Tata Advanced Systems Limited (TASL) and Larsen & Toubro (L&T). Details Pinaka is a complete MBRL system; each Pinaka battery consists of: six launcher vehicles, each with 12 rockets; six loader-replenishment vehicles; six replenishment vehicles; two command post vehicles (one standby) with a fire control computer; and the DIGICORA MET radar (a meteorological radar providing data on winds). A battery of six launchers can neutralise an area of 1,000 m × 800 m. The Army generally deploys a battery that has a total of 72 rockets. All of the 72 rockets can be fired in 44 seconds, taking out an area of 1 km². Each launcher can fire in a different direction too. The system has the flexibility to fire all the rockets in one go or only a few. This is made possible by a fire control computer. There is a command post linking together all six launchers in a battery. Each launcher has an individual computer which enables it to function autonomously in case it gets separated from the other five vehicles in a war. Modes of operation The launcher can operate in the following modes: Autonomous mode: The launcher is fully controlled by a fire control computer (FCC). The microprocessor on the launcher automatically executes the commands received from the FCC, giving the operator the status of the system on displays and indicators.
Stand-alone mode: In this mode, the launcher is not linked to the FCC operator, and the operator at the console enters all the commands for laying the launcher system and selecting firing parameters. Remote mode: In this mode, a remote control unit carried outside the cabin, up to a distance of about 200 m, can be used to control the launcher system and the launcher site, and to unload the fired rocket pods from the launcher. Manual mode: All launcher operations, including laying of the system and firing, are manually controlled. This mode is envisaged for situations where the microprocessor fails or where there is no power to activate the microprocessor-based operator's console. The Pinaka was tested in the Kargil conflict and proved its effectiveness. Since then it has been inducted into the Indian Army and series production has been ordered. The Pinaka MBRL is stated to be cheaper than other systems. It costs per system compared to the M270, which costs . Salient features Use of state-of-the-art technologies for improved combat performance Total operational time optimised for shoot-and-scoot capability Cabin pressurisation for crew protection, in addition to blast shields Microprocessor-based fully automatic positioning and fire control console Night vision devices for driver and crew Neutralisation/destruction of exposed troop concentrations, B-class military land vehicles and other such soft targets Neutralisation of enemy gun/rocket locations Laying of anti-personnel and anti-tank mines at short notice Orders The Pinaka project has been a significant success for the DRDO and its development partners in developing and delivering a state-of-the-art, high-value system to the Indian Army's demanding specifications. While DRDO was responsible for the overall design and development, its partners played a significant role in developing important subsystems and components. They include Tata Power SED, Larsen & Toubro, Solar Industries, Munitions India Limited and Yantra India Limited. As of August 2024, of the total Pinaka systems in service with the Army, Tata has delivered 40 launchers and 8 command posts. Another 36 launchers were also on order with the same firm. On 29 March 2006, the Indian Army awarded Tata Power SED and Larsen & Toubro's Heavy Engineering Division a contract worth , to produce 40 Pinaka MBRLs each for 2 regiments. Tata Power SED declared that it would be delivering the first units within six months. The deliveries were completed by 2010. On 29 October 2015, the Defence Acquisition Council (DAC), chaired by the Defence Minister of India, cleared the purchase of two more Pinaka regiments at a cost of . On 18 March 2016, the Cabinet Committee on Security (CCS) cleared the purchase of two additional Pinaka regiments. As a consequence, a deal worth was signed with Tata Power SED for delivery of one regiment (20 launchers and 8 command posts). The contract was signed with BEML (for vehicles), Tata Power SED and Larsen & Toubro (for launchers and command posts) and the Ordnance Factory Board (for rocket ammunition). The entire order was placed in 2016 and all units have been delivered as of 2024. In November 2016, the MoD cleared an RFP for 6 additional regiments, which was followed by Defence Acquisition Council (DAC) clearance in 2018. This led to the signing of a contract on 31 August 2020 for six additional regiments of launcher systems from Tata Power Company Ltd. (TPCL) and Larsen & Toubro (L&T).
The defence public sector undertaking Bharat Earth Movers Ltd (BEML), which will provide the vehicles, will also be part of the project. The contract will include "114 launchers with Automated Gun Aiming & Positioning System (AGAPS), 45 Command Posts to be procured from TPCL and L&T, and 330 Vehicles from BEML." This order included the procurement of the Pinaka Mk-II variant of the MBRL. On 13 December 2023, the Ministry of Defence cleared the acquisition of 6,400 Pinaka ADM Type 2 and Type 3 rockets at a cost of over . The two main contenders for the order are Economic Explosives Ltd. (EEL) and Munitions India Limited (MIL). As reported on 20 January 2025, the Army plans to conclude two major contracts to purchase ammunition, for the total of 10 Pinaka regiments on order, by 31 March (the end of the fiscal year): the first contract for high-explosive pre-fragmented ammunition with 45 km range (Mk-I Enhanced variant), worth ; the second contract for area denial munitions with 37 km range (ADM variant), worth . Deployment Each Pinaka regiment consists of three batteries of six Pinaka launchers (18 launchers in total), each of which is capable of launching 12 rockets, with a range of 40+ km, in the space of 44 seconds. In addition to these, a regiment also has support vehicles, a radar and a command post. The Pinaka will operate in conjunction with the Indian Army's Firefinder radars and the Swathi Weapon Locating Radar, of which 36 are in service and 6 are on order. The Indian Army is networking all its artillery units together with the DRDO's Artillery Command & Control System (ACCS), which acts as a force multiplier. The ACCS is now in series production. The Pinaka units will also be able to make use of the Indian Army's SATA (Surveillance & Target Acquisition) units, which were improved substantially throughout the late 1990s with the induction of the Searcher-1, Searcher-2 and IAI Heron UAVs into the Indian Army, as well as the purchase of a large number of both Israeli-made and Indian-made battlefield surveillance radars. These have also been coupled with purchases of the Israeli LORROS long-range reconnaissance and observation system, a combined FLIR/CCD system for long-range day/night surveillance. In February 2000, the first Pinaka regiment was raised. The first two regiments of Pinaka were inducted by 2010. As of 2016, the Indian Army had plans to operate 10 regiments by 2022 and to further increase the numbers to 22 regiments by 2032. The Pinaka system will replace the older Grad MLRS regiments as they are retired. As of November 2024, 4 regiments of Pinaka have been inducted by the Army. More than 72 launcher units are active. 6 more regiments were ordered in 2020. It was reported in March 2024 that the Army plans to raise 2 more Pinaka regiments by the end of the year along the Line of Actual Control. Pinaka, Pralay, Nirbhay and BrahMos will become part of the Integrated Rocket Force (IRF), a separate entity from the Strategic Forces Command (SFC). Exports Armenia signed a combined deal worth for 4 Pinaka batteries and other defence equipment. The order includes future supplies of extended-range and guided rockets for the Pinaka system. Deliveries under the order started in July 2023 and were concluded by November 2024. Indonesia and Nigeria have also shown interest in the Pinaka multi-barrel rocket launcher. According to a report in 2024, some Southeast Asian and European nations have also shown interest in acquiring the Pinaka MBRL and the Netra AEW&C.
On 9 November 2024, Brigadier General Stephane Richou confirmed to Asian News International that the French Army was evaluating the Pinaka MBRL system for its requirements. This was mentioned during the visit of the high-ranking official, who said that the "two countries share much more than just a business relationship and want to cooperate more". Specifications Operators – 4 regiments (72+ launchers) of Pinaka Mk-I in service as of 2025. 6 Pinaka Mk-II regiments on order. A total of 22 regiments of Mk-I and Mk-II are planned. – 4 batteries ordered in September 2022. Delivered in July 2023. Potential operators: Under evaluation trials as of November 2024. See also References External links Technical: DRDO Technology Focus: Warhead for Missiles, Torpedoes and Rockets DRDO Technology Focus: Artillery Rocket Systems Wheeled self-propelled rocket launchers Indian Army Defence Research and Development Organisation Military vehicles of India Artillery of India Multiple rocket launchers Modular rocket launchers Military vehicles introduced in the 2000s
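The Details and Deployment figures above combine into simple salvo arithmetic. A small sketch using only numbers quoted in the article; the helper itself is illustrative and not any official planning tool:

```python
# Salvo arithmetic from the figures quoted above (12 rockets per launcher in
# 44 seconds, 6 launchers per battery, 3 batteries per regiment). The helper
# is an illustration, not an official fire-planning tool.
ROCKETS_PER_LAUNCHER = 12
SALVO_TIME_S = 44
LAUNCHERS_PER_BATTERY = 6
BATTERIES_PER_REGIMENT = 3

def battery_salvo() -> tuple[int, int]:
    """Rockets and time for one battery, launchers firing concurrently."""
    return ROCKETS_PER_LAUNCHER * LAUNCHERS_PER_BATTERY, SALVO_TIME_S

rockets, seconds = battery_salvo()
launchers = LAUNCHERS_PER_BATTERY * BATTERIES_PER_REGIMENT
print(f"battery: {rockets} rockets in {seconds} s")       # 72 rockets in 44 s
print(f"regiment: {launchers} launchers, "
      f"{launchers * ROCKETS_PER_LAUNCHER} rockets per salvo")  # 18, 216
```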
Pinaka multi-barrel rocket launcher
Engineering
4,695
74,548,741
https://en.wikipedia.org/wiki/U-asema
U-asema, or the Pitkäranta–Loimola line (U-line), was a defensive line of the Continuation War in Ladoga Karelia. The line is named after its chief designer and equipment manager, Major Yrjö Urto. Fortifications Fortification work began in December 1943. When the Soviet attack began in the summer of 1944, the line was 55 kilometers long and best equipped in the direction of Uoma and Pitkäranta. The line had 15 ready-made concrete bunkers, 300 meters of armored barriers and five kilometers of battle trenches. Otherwise, the position was mostly unfinished. The U-line and its operation The U-line was an unfinished defensive position with a few strong fire positions, about 25 covered trenches, 12 concrete bunkers, trenches and one barbed-wire barrage along the entire length of the position. All tank trenches were protected by strong anti-tank obstacles and mines. At Nietjärvi, the defensive position consisted of three separate lines, of which the last, on a narrow sand ridge, was clearly the strongest. The position was defended by the 8th Division, the 5th Division and the 7th Division, supported by the 15th Brigade. Finnish troops had occupied the U-line for the most part by 10 July. According to the retreating forces, 120 artillery pieces and 40 mortars arrived between 6 and 9 July. The Nietjärvi battle area was held by the 5th Division's Infantry Regiment 44, under the command of Lieutenant Colonel Ilmari Rytkönen, and Infantry Regiment 2, under the command of Colonel Heikki Saure. The battles The Karelian Front of the Soviet Union attacked the U-line with the forces of four army units in July 1944. The 5th, 8th and 7th Divisions of the Finnish Army were grouped on the U-line. The biggest battle took place at Nietjärvi (the Battle of Nietjärvi). The heaviest fighting occurred on 14 and 15 July. The Finns repulsed the attacking Red Army troops. After the Moscow Armistice, the U-line remained on the Soviet side of the border. References Forts in Finland World War II defensive lines
U-asema
Engineering
433
53,547,575
https://en.wikipedia.org/wiki/Decentralized%20wastewater%20system
Decentralized wastewater systems (also referred to as decentralized wastewater treatment systems) convey, treat and dispose of or reuse wastewater from small and low-density communities, buildings and dwellings in remote areas, and individual public or private properties. Wastewater flow is generated when an appropriate water supply is available within the buildings or close to them. Decentralized wastewater systems treat, reuse or dispose of the effluent in relatively close vicinity to its source of generation. Their purpose is to protect public health and the natural environment by substantially reducing health and environmental hazards. They are also referred to as "decentralized wastewater treatment systems" because the main technical challenge is the adequate choice of a treatment and/or disposal facility. A commonly used acronym for decentralized wastewater treatment system is DEWATS. Background Comparison to centralized systems Centralized wastewater systems are the most widely applied in well-developed urban environments and the oldest approach to the solution of the problems associated with wastewater. They collect wastewater in large and bulky pipeline networks, also referred to as sewerage, which transport it over long distances to one or several treatment plants. Storm water can be collected either in combined sewers or in separate storm water drains; the latter arrangement consists of two separate pipeline systems, one for the wastewater and one for the storm water. The treated effluent is disposed of in different ways, most often discharged into natural water bodies. The treated effluent may also be used for beneficial purposes, in which case it is referred to as reclaimed water. The main difference between decentralized and centralized systems is in the conveyance structure. In decentralized systems the treatment and disposal or reuse of the effluent is close to the source of generation. This results in a small conveyance network, in some cases limited to only one pipeline. The size of the network allows for the application of different conveyance methods, in addition to the well-known gravity sewers, such as pressurized sewers and vacuum sewers. The quantity of the effluent is low and is characterized by significant fluctuations. Applications In locations with developed infrastructure, decentralized wastewater systems can be a viable alternative to the conventional centralized system, especially in cases of upgrading or retrofitting existing systems. This can be easier to accomplish with decentralized systems, as centralized infrastructures have long lifetimes and are locked into their location and condition. Many different combinations and variations of hybrid systems are possible. Decentralized applications are a necessity in cases of new urban developments where the construction of the infrastructure is not ready or will be executed in the future. In many regions, the infrastructure development (roads, water supply and especially wastewater/drainage systems) is executed years after the housing development. In such cases decentralized wastewater facilities are considered a temporary solution, but they are mandatory in order to prevent public health and ecological problems. In this context, decentralized solutions are favorable in their ability to be locally applied as needed, while still carrying the potential to cover large areas at lower costs.
Decentralized systems allow for flow separation or source separation, which segregates different types of wastewater based on their origin, such as black water, greywater and urine. This approach requires separate parallel pipeline/plumbing systems to convey the segregated flows; the purpose is to apply a different level of treatment and handling to each flow and to enhance the safe reuse and disposal of the end products. In the specific case of developing countries, where localities with poor infrastructure are common, decentralized wastewater treatment has been promoted extensively because of the possibility of applying technologies with low operation and maintenance requirements. In addition, decentralized approaches require smaller-scale investments compared to centralized solutions. Types Based on the size of the served area, different scales of decentralization can be found: Decentralization at the level of a suburb or satellite township in an urban area – these systems could be defined as small centralized systems when applied to small towns or rural communities, but if they are applied only to selected suburbs or districts in medium or large population centres with an existing centralized system, the whole system could be defined as a hybrid system, where decentralization is applied to parts of the whole drained area. Decentralization at the level of a neighbourhood – this category includes clusters of homes, gated communities, and small districts and areas which are served by vacuum sewers. Decentralization at "on-site" level (on-site sanitation) – in these cases the whole system lies within one property and serves one or several buildings. Wastewater treatment options Treatment/disposal facilities requiring effluent infiltration Usually these are applied at the on-site level and are adequate because of the very low wastewater quantity generated. However, they require suitable soil conditions, permitting infiltration of the excess water, and a low groundwater table. If not applied properly, they may be a serious source of groundwater pollution. Pit latrines are applied when the water supply is very scarce and a wastewater flow can hardly be generated. They are the most common sanitation technique in under-developed areas. Septic tanks are the most common on-site treatment technology used; they can be applied successfully where an adequate water supply is available and the soil/groundwater conditions are acceptable. Treatment facilities resembling natural purification processes Their application requires significant surface area because of the slow pace of the biological processes applied. For the same reason they are more suitable for warmer climates, because the rate of the purification process is temperature dependent. These technologies are more resilient to fluctuating loads and do not require complex maintenance and operation. Constructed wetlands are more suitable for applications at the on-site or neighbourhood level, while stabilization ponds could be a viable alternative for decentralized systems at the level of small towns or rural communities. Engineered wastewater treatment technologies There is a large variety of wastewater treatment plants in which different treatment processes and technologies are applied. Small-scale treatment facilities in decentralized systems apply similar technologies to medium or large plants. For on-site applications, package plants have been developed, which are compact and have different compartments for the different processes.
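As a rough illustration of the sizing arithmetic behind on-site facilities such as septic tanks, the sketch below estimates a tank volume from occupancy, per-capita flow and retention time. Every numeric value is an assumption for the example, not a design standard from the article:

```python
# Rough illustration only: sizing an on-site septic tank from daily flow
# and hydraulic retention time. The parameter values are assumptions for
# the example, not design standards quoted by the article.

def septic_tank_volume(occupants: int,
                       flow_l_per_person_day: float = 150.0,  # assumed
                       retention_days: float = 2.0,           # assumed
                       sludge_allowance_l: float = 1000.0     # assumed
                       ) -> float:
    """Required liquid volume in litres: daily flow held for the
    retention time, plus an allowance for accumulating sludge."""
    daily_flow = occupants * flow_l_per_person_day
    return daily_flow * retention_days + sludge_allowance_l

# A five-person household under these assumptions:
print(septic_tank_volume(5))  # -> 2500.0 litres
```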
However, the design and operation of small treatment plants, especially at the neighbourhood or on-site level, present significant challenges to wastewater engineers, related to flow fluctuations, the need for competent and specialized operation and maintenance across a large number of small plants, and relatively high per capita cost. Regulations and management Water pollution regulations, in the form of legislation, guidelines or ordinances, prescribe the necessary level of treatment so that the treated effluent meets the requirements for safe disposal or reuse. Effluent may be disposed of by discharging into a natural water body or by infiltration into the ground. In addition, regulations specify requirements regarding the design and operation of wastewater systems, as well as the penalties and other measures for their enforcement. Centralized systems are designed, built and operated in order to fulfil the existing regulations. Their management is usually executed by local authorities. In hybrid systems and small centralized systems in towns or rural communities, management can be executed in the same way. In the case of decentralization at the on-site level and for clusters of buildings, the whole wastewater system is located within private premises. The costs of, and responsibility for, the design, construction, operation and maintenance lie with the owner. In many cases specialized companies may execute the operation and maintenance procedures. The local authorities issue permits and may provide support for the operation and management in the form of collecting wastes, or issuing certificates/licenses for standardized treatment equipment or for selected qualified private companies. From a regulatory point of view, the control of the quality of treated effluent for reuse, discharge or disposal is entirely the responsibility of local or national government authorities. This can be a challenge if a large number of systems must be controlled and inspected. It is in the owner's interest to operate and maintain the system properly, especially in the case of reuse of the treated effluent. Most often the operational problems are associated with clogging of the treatment facilities as a result of irregular sludge removal, or with hydraulic overloading due to an increased number of people served or increased water consumption. Urban planning and infrastructure issues Wastewater systems are part of the infrastructure of urban or rural communities and of the urban planning process. Urban planning data and information, such as plots of individual dwellings, roads/streets, stormwater drainage, water supply and electricity systems, are essential for the design and implementation of a sustainable wastewater system. In decentralized wastewater systems, which collect and treat wastewater only, stormwater might be overlooked and cause flooding problems. If planned decentralized solutions are applied, stormwater drainage should be executed together with the road system. In under-developed population centres where no infrastructure is available, it is difficult to provide sustainable sanitation measures; e.g. pit latrines and septic tanks need periodic emptying, usually executed by vacuum trucks, which have to access the latrine and need a basic road for this purpose. Fecal sludge management deals with the organization and implementation of this practice in a sustainable way, including the collection, transport, treatment and disposal/reuse of faecal sludge from pit latrines and septic tanks.
In the case of new urban/rural developments, or the retrofitting of existing ones, it is advisable to consider different alternatives for the design of the wastewater system, including decentralized solutions. A sustainable approach requires technical solutions that are optimal in terms of reliability and cost-effectiveness. From this perspective, centralized solutions might be more appropriate in many cases, depending on existing plot sizes, topography, geology, groundwater tables and climatic conditions. But when applied adequately, decentralized systems allow for the application of environmentally friendly solutions and the reuse of the treated effluent, including resource recovery. In this way, alternative water resources are provided and the environment is protected. Public awareness, perceptions and support play an important part in the urban planning process for choosing adequate wastewater systems which fit the specific context. Examples BORDA One example of decentralized treatment is the "DEWATS technology", which has been promoted under this name by the German NGO BORDA. It has been applied in many countries in South East Asia and in South Africa. It applies anaerobic treatment processes, including anaerobic baffled reactors (ABRs) and anaerobic filters, followed by aerobic treatment in ponds or in constructed wetlands. This technology was researched and tested in South Africa, where it was shown that the treatment efficiency was lower than expected. Botswana Technology Centre A case study of a decentralized wastewater system at the on-site level, with treated effluent reuse, was performed at the Botswana Technology Centre in Gaborone, Botswana. It is an example of a decentralized wastewater system which serves one institutional building located in an area served by municipal sewerage. Wastewater from the building is treated in a plant consisting of a septic tank followed by a planted rock filter, a bio-filter and a surface flow wetland. The treated effluent is reused for irrigation of the surrounding green areas, but the study registered outflow from the wetland only during periods of heavy rain. This example shows the need for careful estimation of the expected quantity, quality and fluctuations of the generated wastewater when designing decentralized wastewater systems. EcoSwell Founded in 2013, the Peru-based NGO EcoSwell works on rural development projects, including water supply and sanitation, in Peru; it is based in the northwestern Lobitos district of the Talara region, an arid coastal area that faces water stress. EcoSwell establishes decentralized wastewater systems, including communal biodigesters, dry toilets and greywater reuse projects, with the help of local residents and interns. It also works on reforestation and constructed wetlands as avenues to naturally treat waste effluent and deactivate pathogens. See also History of water supply and sanitation Onsite sewage facility Sanitation Sewer mining Wastewater treatment References External links Library of the Sustainable Sanitation Alliance containing further information Pressurized sewer explains the principle of pressurized sewers. Code of practice - on-site wastewater management, Publication 891.4, July 2016, Environment Protection Agency, Victoria, Au, a comprehensive example of a regulating practice of decentralized wastewater systems. Sanitation Water pollution
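The Botswana case suggests a simple design-time check: compare expected effluent production against reuse demand. A hypothetical daily water balance, with all figures assumed for illustration rather than taken from the case study:

```python
# Hypothetical daily water balance for treated-effluent reuse, illustrating
# why a reuse system may show no surplus outflow except during heavy rain.
# Every figure below is an assumption for the example, not case-study data.

def daily_balance(effluent_m3: float, irrigation_demand_m3: float,
                  rain_runoff_m3: float = 0.0) -> float:
    """Surplus (positive) or deficit (negative) leaving the wetland, m3/day."""
    return effluent_m3 + rain_runoff_m3 - irrigation_demand_m3

print(daily_balance(4.0, 6.0))        # dry day: -2.0, no outflow at all
print(daily_balance(4.0, 6.0, 8.0))   # heavy rain: +6.0, wetland overflows
```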
Decentralized wastewater system
Chemistry,Environmental_science
2,492
38,731,925
https://en.wikipedia.org/wiki/Low-impact%20development%20%28U.S.%20and%20Canada%29
Low-impact development (LID) is a term used in Canada and the United States to describe a land planning and engineering design approach to managing stormwater runoff as part of green infrastructure. LID emphasizes conservation and the use of on-site natural features to protect water quality. This approach implements engineered small-scale hydrologic controls to replicate the pre-development hydrologic regime of watersheds by infiltrating, filtering, storing, evaporating, and detaining runoff close to its source. Green infrastructure investments are one approach that often yields multiple benefits and builds city resilience. Broadly equivalent terms used elsewhere include sustainable drainage systems (SuDS) in the United Kingdom (where LID has a different meaning), water-sensitive urban design (WSUD) in Australia, natural drainage systems in Seattle, Washington, "Environmental Site Design" as used by the Maryland Department of the Environment, and "Onsite Stormwater Management" as used by the Washington State Department of Ecology. Alternative to conventional stormwater management practices LID began in Prince George's County, Maryland in 1990 as an alternative to the traditional stormwater best management practices (BMPs) installed at construction projects. Officials found that traditional practices such as detention ponds and retention basins were not cost-effective and that the results did not meet water quality goals. The Low Impact Development Center, Inc., a non-profit water resources research organization, was formed in 1998 to work with government agencies and institutions to further the science, understanding, and implementation of LID and other sustainable environmental planning and design approaches, such as green infrastructure and the Green Highways Partnership. The LID design approach has received support from the U.S. Environmental Protection Agency (EPA) and is being promoted as a method to help meet the goals of the Clean Water Act. Various local, state, and federal agency programs have adopted LID requirements in land development codes and implemented them in public works projects. LID techniques can also play an important role in Smart Growth and green infrastructure land use planning. Designing for low-impact development The basic principle of LID, to use nature as a model and manage rainfall at the source, is accomplished through sequenced implementation of runoff prevention strategies, runoff mitigation strategies, and finally, treatment controls to remove pollutants. Although Integrated Management Practices (IMPs), decentralized microscale controls that infiltrate, store, evaporate, and detain runoff close to the source, get most of the attention from engineers, it is crucial to understand that LID is more than just implementing a new list of practices and products. It is a strategic design process to create a sustainable site that mimics the undeveloped hydrologic properties of the site. It requires a prescriptive approach that is appropriate for the proposed land use. Design using LID principles follows four simple steps. Determine pre-developed conditions and identify the hydrologic goal (some jurisdictions suggest going to wooded conditions). Assess treatment goals, which depend on site use and local keystone pollutants. Identify a process that addresses the specific needs of the site. Implement a practice that utilizes the chosen process and that fits within the site's constraints.
The basic processes used to manage stormwater include pretreatment, filtration, infiltration, and storage and reuse. Pre-treatment Pre-treatment is recommended to remove pollutants such as trash, debris, and larger sediments. Incorporating a pretreatment system, such as a hydrodynamic separator, can prolong the life of the entire system by preventing the primary treatment practice from becoming prematurely clogged. Filtration When stormwater is passed through a filter medium, solids and other pollutants are removed. Most media remove solids by mechanical processes: the gradation of the media, irregularity of shape, porosity, and surface roughness characteristics all influence solids removal. Many other pollutants, such as nutrients and metals, can be removed through chemical and/or biological processes. Filtration is a key component of LID sites, especially when infiltration is not feasible. Filter systems can be designed to remove the primary pollutants of concern from runoff and can be configured in decentralized small-scale inlets. This allows runoff to be treated close to its source without additional collection or conveyance infrastructure. Infiltration Infiltration reclaims stormwater runoff and allows for groundwater recharge. Runoff enters the soil and percolates through to the subsurface. The rate of infiltration is affected by soil compaction and storage capacity, and will decrease as the soil becomes saturated. The soil texture and structure, vegetation types and cover, water content of the soil, soil temperature, and rainfall intensity all play a role in controlling infiltration rate and capacity. Infiltration plays a critical role in LID site design. Benefits of infiltration include improved water quality (as water is filtered through the soil) and a reduction in runoff. When distributed throughout a site, infiltration can significantly help maintain the site's natural hydrology. Storage and reuse Capturing and reusing stormwater as a resource helps maintain a site's predevelopment hydrology while creating an additional supply of water for irrigation or other purposes. Rainwater harvesting is an LID practice that facilitates the reuse of stormwater. Five principles of low-impact development There are five core requirements when designing for LID. Conserve natural areas wherever possible (don't pave over the whole site if you don't need to). Minimize the development impact on hydrology. Maintain runoff rate and duration from the site (don't let the water leave the site). Scatter integrated management practices (IMPs) throughout the site – IMPs are decentralized, microscale controls that infiltrate, store, evaporate, and/or detain runoff close to the source. Implement pollution prevention, proper maintenance and public education programs. Typical practices and controls Planning practices include several related approaches that were developed independently by various practitioners. These differently named approaches include similar concepts and share similar goals in protecting water quality: conservation design (also called conservation development), better site design, and green infrastructure. Planners select structural LID practices for an individual site in consideration of the site's land use, hydrology, soil type, climate and rainfall patterns. There are many variations on these LID practices, and some practices may not be suitable for a given site. Many are practical for retrofit or site renovation projects, as well as for new construction.
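To make the goal of maintaining the pre-development runoff rate concrete, here is a minimal sketch using the rational method (Q = C·i·A) to compare pre- and post-development peak runoff and estimate the storage an LID control must provide. The coefficients, storm figures and site area are illustrative assumptions, not values from the article:

```python
# Minimal sketch, not a design tool: compare pre- and post-development peak
# runoff with the rational method (Q = C * i * A) and estimate the storage an
# LID control must provide to hold the site to its pre-development rate.
# All coefficients and storm figures are illustrative assumptions.

def peak_runoff_m3s(c: float, intensity_mm_hr: float, area_ha: float) -> float:
    """Rational method: Q = C * i * A, converted to m^3/s."""
    i_m_s = intensity_mm_hr / 1000.0 / 3600.0   # mm/hr -> m/s
    area_m2 = area_ha * 10_000.0
    return c * i_m_s * area_m2

AREA_HA = 1.0                 # assumed site area
STORM_MM_HR = 25.0            # assumed design storm intensity
STORM_HOURS = 1.0             # assumed storm duration
C_PRE, C_POST = 0.20, 0.60    # assumed coefficients: wooded vs developed

q_pre = peak_runoff_m3s(C_PRE, STORM_MM_HR, AREA_HA)
q_post = peak_runoff_m3s(C_POST, STORM_MM_HR, AREA_HA)

# Volume the distributed controls (e.g. bioretention cells) must detain to
# release no faster than the pre-development rate over this storm:
storage_m3 = (q_post - q_pre) * STORM_HOURS * 3600.0
print(f"pre {q_pre:.4f} m3/s, post {q_post:.4f} m3/s, store {storage_m3:.0f} m3")
# -> pre 0.0139 m3/s, post 0.0417 m3/s, store 100 m3
```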
Optimal places for retrofitting LID are single houses, school/university areas, and parks. Frequently used practices include:
Bioretention cells, also known as rain gardens
Cisterns and rain barrels
Green roofs
Pervious concrete, also called "porous pavement", similar to permeable paving
Grassed swales, also known as bioswales
Commercially manufactured stormwater management devices that capture pollutants (e.g., media filters) and/or aid in on-site infiltration
Tree pits

Limitations for LID progress

Urban areas are especially prone to barriers against LID practices. The most common limits are:
Lack of suitable places for LID facilities within the existing complex infrastructure of urban areas
Lack of design standards that are applicable worldwide
Lack of knowledge about LID technology among local governments and residents
Varied performance due to a lack of understanding of climate differences
The false belief that LID practices are difficult or costly to maintain

Benefits

LID has multiple benefits, such as protecting animal habitats, improving management of runoff and flooding, and reducing impervious surfaces. For example, Allen Davis of the University of Maryland, College Park conducted research on runoff management by LID rain gardens. His data indicated that LID rain gardens can hold up to 90% of the water from a major rain event and release it over a period of up to two weeks. LID also improves groundwater quality and increases its quantity, and it improves site aesthetics, thereby raising community value. LID can also eliminate the need for stormwater ponds, which occupy expensive land; incorporating LID into designs enables developers to build more homes on the same plot of land and maximize their profits. In some municipalities, LID can be a cost-effective way to reduce the incidence of combined sewer overflows (CSOs).

According to the co-benefits approach, LID is an opportunity to technically mitigate the urban heat island (UHI) phenomenon, with particularly good compatibility in cool pavements and green infrastructure. Although there are some intrinsic discrepancies between understandings of LID and of UHI mitigation with respect to blue infrastructure, the osmotic pool, wet pond, and regulating pond are essential supplements to urban water bodies, nourishing vegetation and providing evaporative cooling for UHI mitigation. LID pilot projects have already provided the financial foundation for taking UHI mitigation further. They encourage people in different disciplines to think synergistically about how to mitigate UHI effects, which is conducive to generating holistic policies, guidelines, and regulations. Furthermore, including UHI mitigation can drive public participation in sponge city (SPC) construction, which can consolidate the public-private partnership (PPP) model and attract more funds.

See also
Rainwater harvesting
Sustainable development
Sustainable urban drainage systems
Water pollution

Synonyms
Nature-based solutions (European Union)
Water-sensitive urban design (Australia)
Sponge city (China)
ABC water (Singapore)

References

External links
Pervious Concrete Blog – Discussion on the latest in pervious concrete technology
UC Davis Center for Water and Land Use – Provides a map with approximately 40 case studies of LID on the West Coast, and a detailed stormwater calculator for development
Center for Watershed Protection – Provides practical guidance for runoff reduction
Low Impact Development Center – A water quality research organization; many links to green infrastructure, LID practices, projects, and stormwater resources
City of Redmond, Washington – Low-impact development examples in a small city
Case Study: Incorporating LID into Stormwater Management
U.S. EPA Low Impact Development
Urban Design Tools – Low Impact Development Center, Beltsville, MD
Alberta Low Impact Development Partnership – Alberta, Canada; equips Alberta's professionals to create vibrant, functional landscapes within the fabric of the built environment through comprehensive stormwater management
The City of Calgary, Alberta – Low Impact Development
Sustainable Technologies Evaluation Program
Low Impact Development Stormwater Planning and Design Guide – A MediaWiki-format LID design guide

Environmental engineering Hydrology and urban planning Landscape Water and the environment Water pollution in the United States Sustainable urban planning Water pollution in Canada Sustainable technologies
Low-impact development (U.S. and Canada)
Chemistry,Engineering,Environmental_science
https://en.wikipedia.org/wiki/Te-Tzu%20Chang
Te-Tzu Chang or T. T. Chang (1927–2006) was a prominent Chinese agricultural and environmental scientist.

Biography

Chang was born in Shanghai on April 3, 1927, to a "scholar-gentry" family. His father graduated from Saint John's University in Shanghai, won a scholarship under the Boxer Rebellion Indemnity Scholarship Program, and completed his studies in the United States. Chang had three older sisters and one younger brother.

Chang finished his secondary education at Saint John's School, a middle school affiliated with Saint John's University, in Shanghai. He initially studied agricultural science at Saint John's University, his father's alma mater. After about a year, he transferred to the University of Nanking in Nanjing, where he majored in agriculture and horticulture and graduated with a BSA in 1949.

After graduation, Chang worked for the Council of Agriculture in Guangzhou, the capital of Guangdong Province. During this period, Shen Tsung-han (1895–1980, 沈宗瀚; born in Ningbo, Zhejiang; died in Taipei, Taiwan) was one of his mentors. Shen served as the second Director-General of the Council of Agriculture.

In 1950, Chang moved to Taiwan and served as a technician in the Ministry of Agriculture. On Shen's recommendation, in 1952 Chang went to study plant genetics at Cornell University, which was also Shen's alma mater (Shen had received his PhD from Cornell). Chang obtained his MSc from Cornell in 1954 and continued his studies at the University of Minnesota, where he earned a PhD in plant genetics in 1959.

Chang returned to Taiwan in 1959, but after two years he moved to the Philippines to work for the International Rice Research Institute (IRRI) in Los Baños, Laguna. From 1962 to 1991, Chang managed the International Rice Germplasm Center. The T. T. Chang Genetic Resources Center is named after him.

Awards and honors

Awards and honors received by Chang include:
In 1969, John Scott Award, Philadelphia, USA
In 1978, Fellow, American Society of Agronomy (ASA)
In 1980, International Service in Agronomy Award, American Society of Agronomy (ASA)
In 1982, Fellow and Chartered Biologist, Institute of Biology, UK
In 1985, Honorary Fellow, Crop Science Society of the Philippines
In 1985, Fellow, Crop Science Society of America (CSSA)
In 1986, Outstanding Achievement Award, University of Minnesota, USA
In 1988, Rank Prize in Agronomy and Nutrition, Rank Prize Foundation, UK
In October 1990, Frank N. Meyer Award and Medal in Plant Germplasm, Crop Science Society of America (CSSA), USA
In 1990, Honorary Research Fellow, China National Rice Research Institute and China Academy of Agriculture & Forestry, P.R. China
In 1991, International Service in Crop Science Award, Crop Science Society of America (CSSA), USA
In 1993, Honorary Fellow, Society for the Advancement of Breeding Research in Asia and Oceania (SABRAO)
In 1994, SINAG Award (Guiding Light Award), IRRI, Philippines
In 1994, Foreign Member, United States National Academy of Sciences (NAS), USA
In 1996, Fellow, National Academy of Agricultural Sciences, India
In 1996, Academician, Academia Sinica, Taipei
In 1996, Member, TWAS, the Academy of Sciences for the Developing World
On April 18, 1997, Fellow, Pontifical Academy of Sciences, Vatican City
In 1999, Tyler Prize for Environmental Achievement

References

20th-century Chinese botanists Cornell University College of Agriculture and Life Sciences alumni Environmental scientists University of Minnesota College of Food, Agricultural and Natural Resource Sciences alumni Foreign associates of the National Academy of Sciences Educators from Shanghai 1927 births 2006 deaths Members of Academia Sinica Chinese geneticists Academic staff of the University of the Philippines Biologists from Shanghai University of Nanking alumni St. John's University, Shanghai alumni Taiwanese people from Shanghai
Te-Tzu Chang
Environmental_science