Dataset columns:
  id            int64    values 580 to 79M
  url           string   lengths 31 to 175
  text          string   lengths 9 to 245k
  source        string   lengths 1 to 109
  categories    string   160 distinct classes
  token_count   int64    values 3 to 51.8k
287,137
https://en.wikipedia.org/wiki/Snub%20dodecahedron
In geometry, the snub dodecahedron, or snub icosidodecahedron, is an Archimedean solid, one of thirteen convex isogonal nonprismatic solids constructed by two or more types of regular polygon faces. The snub dodecahedron has 92 faces (the most of the 13 Archimedean solids): 12 are pentagons and the other 80 are equilateral triangles. It also has 150 edges and 60 vertices. It has two distinct forms, which are mirror images (or "enantiomorphs") of each other. The union of both forms is a compound of two snub dodecahedra, and the convex hull of both forms is a truncated icosidodecahedron. Kepler first named it in Latin as dodecahedron simum in 1619 in his Harmonices Mundi. H. S. M. Coxeter, noting it could be derived equally from either the dodecahedron or the icosahedron, called it snub icosidodecahedron, with a vertical extended Schläfli symbol and flat Schläfli symbol Cartesian coordinates Let be the real zero of the cubic polynomial , where is the golden ratio. Let the point be given by Let the rotation matrices and be given by represents the rotation around the axis through an angle of counterclockwise, while being a cyclic shift of represents the rotation around the axis through an angle of . Then the 60 vertices of the snub dodecahedron are the 60 images of point under repeated multiplication by and/or , iterated to convergence. (The matrices and generate the 60 rotation matrices corresponding to the 60 rotational symmetries of a regular icosahedron.) The coordinates of the vertices are integral linear combinations of and . The edge length equals Negating all coordinates gives the mirror image of this snub dodecahedron. As a volume, the snub dodecahedron consists of 80 triangular and 12 pentagonal pyramids. The volume of one triangular pyramid is given by: and the volume of one pentagonal pyramid by: The total volume is The circumradius equals The midradius equals . This gives an interesting geometrical interpretation of the number . The 20 "icosahedral" triangles of the snub dodecahedron described above are coplanar with the faces of a regular icosahedron. The midradius of this "circumscribed" icosahedron equals 1. This means that is the ratio between the midradii of a snub dodecahedron and the icosahedron in which it is inscribed. The triangle–triangle dihedral angle is given by The triangle–pentagon dihedral angle is given by Metric properties For a snub dodecahedron whose edge length is 1, the surface area is Its volume is Alternatively, this volume may be written as where Its circumradius is Its midradius is There are two inscribed spheres, one touching the triangular faces, and one, slightly smaller, touching the pentagonal faces. Their radii are, respectively: The four positive real roots of the sextic equation in R2 are the circumradii of the snub dodecahedron (U29), great snub icosidodecahedron (U57), great inverted snub icosidodecahedron (U69), and great retrosnub icosidodecahedron (U74). The snub dodecahedron has the highest sphericity of all Archimedean solids. If sphericity is defined as the ratio of volume squared over surface area cubed, multiplied by a constant of 36π (where this constant makes the sphericity of a sphere equal to 1), the sphericity of the snub dodecahedron is about 0.947. Orthogonal projections The snub dodecahedron has two especially symmetric orthogonal projections as shown below, centered on two types of faces: triangles and pentagons, corresponding to the A2 and H2 Coxeter planes.
Geometric relations The snub dodecahedron can be generated by taking the twelve pentagonal faces of the dodecahedron and pulling them outward so they no longer touch. At a proper distance this can create the rhombicosidodecahedron by filling in square faces between the divided edges and triangle faces between the divided vertices. But for the snub form, pull the pentagonal faces out slightly less, only add the triangle faces and leave the other gaps empty (the other gaps are rectangles at this point). Then apply an equal rotation to the centers of the pentagons and triangles, continuing the rotation until the gaps can be filled by two equilateral triangles. (The fact that the proper amount to pull the faces out is less in the case of the snub dodecahedron can be seen in either of two ways: the circumradius of the snub dodecahedron is smaller than that of the icosidodecahedron; or, the edge length of the equilateral triangles formed by the divided vertices increases when the pentagonal faces are rotated.) The snub dodecahedron can also be derived from the truncated icosidodecahedron by the process of alternation. Sixty of the vertices of the truncated icosidodecahedron form a polyhedron topologically equivalent to one snub dodecahedron; the remaining sixty form its mirror-image. The resulting polyhedron is vertex-transitive but not uniform. Alternatively, combining the vertices of the snub dodecahedron given by the Cartesian coordinates (above) and its mirror will form a semiregular truncated icosidodecahedron. The comparison between these regular and semiregular polyhedra is shown in the figure to the right. Cartesian coordinates for the vertices of this alternative snub dodecahedron are obtained by selecting sets of 12 (of 24 possible even permutations contained in the five sets of truncated icosidodecahedron Cartesian coordinates). The alternations are those with an odd number of minus signs in these three sets: and an even number of minus signs in these two sets: where is the golden ratio. The mirrors of both the regular truncated icosidodecahedron and this alternative snub dodecahedron are obtained by switching the even and odd references to both sign and position permutations. Related polyhedra and tilings This semiregular polyhedron is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n = 6, and hyperbolic plane for any higher n. The series can be considered to begin with n = 2, with one set of faces degenerated into digons. Snub dodecahedral graph In the mathematical field of graph theory, the snub dodecahedral graph is the graph of vertices and edges of the snub dodecahedron, one of the Archimedean solids. It has 60 vertices and 150 edges, and is an Archimedean graph. See also Planar polygon to polyhedron transformation animation ccw and cw spinning snub dodecahedron References (Section 3-9) External links Editable printable net of a Snub Dodecahedron with interactive 3D view The Uniform Polyhedra Virtual Reality Polyhedra The Encyclopedia of Polyhedra Mark S. Adams and Menno T. Kosters. Volume Solutions to the Snub Dodecahedron Chiral polyhedra Uniform polyhedra Archimedean solids Snub tilings
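As a quick check on the sphericity figure quoted above, the short Python sketch below plugs the unit-edge surface area and volume of the snub dodecahedron into that definition (volume squared over surface area cubed, scaled by 36π). The numerical reference values are standard ones supplied here as assumptions; they are not taken from this article.

```python
import math

# Approximate metric data for a snub dodecahedron with edge length 1.
# These constants are reference values assumed for this sketch.
surface_area = 20 * math.sqrt(3) + 3 * math.sqrt(25 + 10 * math.sqrt(5))  # ~55.287
volume = 37.6166499627                                                    # ~37.617

# Sphericity as defined above: 36*pi * V^2 / A^3, which equals 1 for a sphere.
sphericity = 36 * math.pi * volume**2 / surface_area**3
print(round(sphericity, 3))  # ~0.947, matching the figure given in the text
```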
Snub dodecahedron
Physics
1,623
68,951,902
https://en.wikipedia.org/wiki/Soaking%20%28sexual%20practice%29
Soaking is a sexual practice of inserting the penis into the vagina but not subsequently thrusting or ejaculating, reportedly used by some members of the Church of Jesus Christ of Latter-day Saints (LDS Church). News sources do not report it being a common practice, and some Latter-day Saints have said that soaking is an urban legend and not an actual practice. Others report knowing church members who had soaked, or gave a firsthand account of trying the practice with a partner before marriage while a member of the LDS Church. Postings on TikTok and other social media sites have stated that soaking serves as a purported loophole to the LDS Church's sexual code of conduct, called the law of chastity, which says that all sexual activity outside of a heterosexual marriage is a sin, and further bars masturbation for church members. At church-run schools like Brigham Young University, students who confess to or are reported for having pre- or extra-marital sex can be expelled because of the universities' codes of conduct. The LDS Church teaches that "it is wrong to touch the private[...] parts of another person's body even if clothed" outside of a monogamous heterosexual marriage. Some news sources directly state that the LDS Church and its adherents do not believe soaking is a loophole to the church's code of sexual conduct. One source stated it was difficult to know how common it was due to the secrecy and shame around sex in the LDS Church, and underreporting due to the social-desirability bias is a common issue even among anonymous surveys of many stigmatized sexual behaviors. The term comes from the idea that vaginal lubrication is "soaking" the penis. One source said the term started as "dick soak" on an internet forum in 2009, and morphed to simply "soaking" by 2011, and gained wider use in 2019. In popular culture In 2021, a video about soaking went viral on TikTok, and since then the topic has an estimated 69 million posts and 243 million mention on the platform as of 2024. The practice has its own page on the pop-culture website "Know Your Meme". It has been used as a plot point in sitcoms in the early 2020s, such as the television series Alpha House, Get Shorty, and Jury Duty. It was also referenced in the book Up Up, Down Down, in a Barstool Sports video segment, a Make Some Noise improv sketch, and in at least one short film of Mormon pornography. Comedian Chelsea Handler explained the practice in an interview on The Late Late Show with James Corden. Reactions Two satirical social media accounts, the BYU Virginity Club and the BYU Slut Club, have both disavowed the practice. Articles have stated that soaking does not prevent the spread of sexually transmitted infection and may still result in pregnancy. One interviewee stated it was "a dangerous form of misinformation being used to manipulate naive girls in college dorms." Related practices Practices described in the following sources as related to soaking include jump humping, provo pushing, and durfing: Jump humping – Soaking is sometimes accompanied by "jump humping", in which a third person is invited to bounce on the bed (or to push up on the mattress from below) for a couple engaged in soaking, thus generating motion for them (according to TikTokers ExmoLex and FuneralPotatoSlut, and a BYU student interviewee on Barstool Sports). The external source of motion allegedly absolves the soaking couple from responsibility for any genital movement. 
Provo pushing – The "jump hump" assistant has been termed the "bed jumper" or "Provo pusher" (after Provo, Utah, home of BYU). Other definitions of "provo push" refer to it as clothed or unclothed, non-penetrative dry humping or sexual frottage between church members. Durfing – Dry humping between church members is also called "durfing". Poophole loophole – Using anal sex to skirt rules around vaginal intercourse and to retain virginity is termed the "poophole loophole", and is reportedly used by some LDS adherents. One study found US teens who pledged to not have sex until marriage were more likely to engage in anal sex without vaginal sex than teens who had not made a sexual abstinence pledge, and found pledge-takers were just as likely to test positive for a sexually transmitted infection (STI) five years after taking the pledge as those who had not pledged to abstinence. Historical citing of the practice In 1885, one of the LDS Church's top leaders, 73-year-old apostle Albert Carrington, argued during the hearings before his excommunication that his decade of extramarital sexual relationships with multiple younger women did not count as adultery (a violation of the church's law of chastity) and was just a "little folly" because he would only partially penetrate the vagina with just the tip of his penis and part of the shaft (reportedly to less than the total "depth of four inches"), and pulled out before ejaculation. Then First Presidency member Joseph F. Smith called Carrington's actions a "transgression" and other top leaders called them "crimes of lewd and lascivious conduct and adultery", and Carrington was excommunicated. Carrington was rebaptized two years later. See also Coitus reservatus Saddlebacking Sexuality and the LDS Church Technical virgin References Sexual slang Sexual acts Sexuality and Mormonism
Soaking (sexual practice)
Biology
1,166
75,539,112
https://en.wikipedia.org/wiki/Area%20sampling%20frame
An area sampling frame is an alternative to the most traditional type of sampling frame. A sampling frame is often defined as a list of elements of the population we want to explore through a sample survey. A slightly more general concept considers that a sampling frame is a tool that allows the identification of and access to the elements of the population, even if an explicit list does not exist. Traditional sampling frames are sometimes referred to as list frames. In many cases, suitable lists are not available. This can happen for several reasons, for example: Existing lists, such as population censuses, are too old and no longer correspond to the current reality. We are targeting a population for which a list is not feasible, for example a wild animal species. The population is a continuous feature in a given geographic area and the definition of its elements is not straightforward. This often happens for sample surveys designed to produce environmental statistics. Area sampling frames are generally defined by two elements: The boundaries of a target region in a given cartographic projection. The type of geographic units to be sampled. We can mention three main types of units: Points. In principle, points are dimensionless, but, for practical reasons, we can attribute them a certain size, such as 1 m × 1 m. The suitable size is linked to the accuracy of the tool used for the location of the point. Possible tools are GPS devices, orthophotos or satellite images. Point sampling can be based on a two-stage scheme, sampling clusters in the first stage and sampling points in the second stage. Another option is a two-phase scheme of unclustered points: a large first-phase sample is selected. A stratification is conducted only for the first-phase sample and a stratified sample is chosen in the second phase. Transects. A transect is a piece of straight line of a given length. Transect sampling is useful to estimate the total length of linear landscape elements. Areal units defined by polygons. In the jargon of agricultural surveys, areal units are generally called "segments", even if a segment in geometry rather corresponds to the concept of transect used in area sampling frames. Segments can be delineated by photo-interpretation or generated automatically, usually on the basis of a regular grid. The optimal size of segments depends on the spatial auto-correlation of the monitored processes and the cost function that links the price of data collection with the size of the sample unit. Fields of application The oldest field of application of area sampling frames has probably been forest inventories, one of the fields with the most obvious geographic component in which the traditional list frame approach cannot be applied. For the same reason, area frames appear as a natural tool for many environmental topics, such as soil surveys and other topics that require spatial statistics tools. Different area frame approaches have been widely discussed and compared for agricultural statistics. In the 1930s, the National Agricultural Statistical Service of the US Department of Agriculture introduced area sampling frames for the estimation of crop area and yield on the basis of a sample of areal units (segments). In the 1960s, the French Teruti survey chose an approach based on a systematic sample of clusters of points. The Italian AGRIT survey has explored different approaches, comparing segment and point methods.
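As an illustration of the simplest unit type listed above, the following minimal Python sketch draws a simple random sample of point locations inside the boundaries of a rectangular target region. The rectangular region and the unstratified, single-phase design are simplifying assumptions; actual surveys typically use the stratified, two-stage or two-phase point schemes described in the text.

```python
import random

def sample_points(bbox, n, seed=42):
    """Draw a simple random sample of n point locations inside a rectangular
    study region. bbox = (xmin, ymin, xmax, ymax) in a projected coordinate system."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    xmin, ymin, xmax, ymax = bbox
    return [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)) for _ in range(n)]

# Example: 10 sample points in a hypothetical 100 km x 100 km region (metres).
region = (0.0, 0.0, 100_000.0, 100_000.0)
for x, y in sample_points(region, 10):
    print(f"{x:10.1f} {y:10.1f}")
```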
The Joint Research Centre of the EC has conducted a large number of studies on area sampling frame methodology and area frame surveys for agricultural, forestry, environmental and human settlement studies. The soaring number of applications of satellite images has boosted the interest in area sampling frames, not only because of the use of remote sensing for statistics and because the integration of satellite images has improved the quality of sampling frames and related estimators, but also because satellite images may themselves need to be sampled. Validation of thematic maps produced by satellite image analysis has become one of the main application fields of area sampling frames. References Sampling (statistics) Survey methodology Spatial analysis
Area sampling frame
Physics
789
73,590,152
https://en.wikipedia.org/wiki/IC%202631
IC 2631 or Chamaeleon Cloud is a bright reflection nebula in the southern constellation of Chamaeleon. The nebula is lit up by a massive pre-main-sequence star called HD 97300 at a distance of ~630 light years. It can be found 14.9° above the galactic plane of the Milky Way. IC 2631 can be easily seen in the southern hemisphere for most of the year. References Reflection nebulae Chamaeleon 2631
IC 2631
Astronomy
99
638,633
https://en.wikipedia.org/wiki/Dictionary-based%20machine%20translation
Machine translation can use a method based on dictionary entries, which means that the words will be translated as a dictionary does – word by word, usually without much correlation of meaning between them. Dictionary lookups may be done with or without morphological analysis or lemmatisation. While this approach to machine translation is probably the least sophisticated, dictionary-based machine translation is ideally suited for the translation of long lists of phrases on the subsentential (i.e., not a full sentence) level, e.g. inventories or simple catalogs of products and services. It can also be used to speed up manual translation, if the person carrying it out is fluent in both languages and therefore capable of correcting syntax and grammar. LMT LMT, introduced around 1990, is a Prolog-based machine-translation system that works on specially made bilingual dictionaries, such as the Collins English-German (CEG), which have been rewritten in an indexed form which is easily readable by computers. This method uses a structured lexical data base (LDB) in order to correctly identify word categories from the source language, thus constructing a coherent sentence in the target language, based on rudimentary morphological analysis. This system uses "frames" to identify the position a certain word should have, from a syntactical point of view, in a sentence. These "frames" are mapped via language conventions, such as UDICT in the case of English. In its early (prototype) form LMT uses three lexicons, accessed simultaneously: source, transfer and target, although it is possible to encapsulate all of this information in a single lexicon. The program uses a lexical configuration consisting of two main elements. The first element is a hand-coded lexicon addendum which contains possible incorrect translations. The second element consists of various bilingual and monolingual dictionaries covering the source and target languages. Example-Based & Dictionary-Based Machine Translation This method of Dictionary-Based Machine translation explores a different paradigm from systems such as LMT. An example-based machine translation system is supplied with only a "sentence-aligned bilingual corpus". Using this data the translating program generates a "word-for-word bilingual dictionary" which is used for further translation. Whilst this system would generally be regarded as a wholly different approach to machine translation than Dictionary-Based Machine Translation, it is important to understand the complementary nature of these paradigms. Since Dictionary-Based Machine Translation works best with "word-for-word bilingual dictionary" lists of words, coupling these two translation engines would generate a very powerful translation tool that is, besides being semantically accurate, capable of enhancing its own functionality via perpetual feedback loops. A system which combines both paradigms in a way similar to what was described in the previous paragraph is the Pangloss Example-Based Machine Translation (PanEBMT) engine. PanEBMT uses a correspondence table between languages to create its corpus. Furthermore, PanEBMT supports multiple incremental operations on its corpus, which facilitates a biased translation used for filtering purposes.
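To make the word-by-word lookup described at the start of this entry concrete, here is a minimal Python sketch of dictionary-based translation: each token is looked up in a bilingual glossary and replaced by one of its recorded translations, with no syntactic or semantic processing. The tiny English–German glossary is invented purely for illustration.

```python
# A toy bilingual glossary; entries are invented for illustration only.
GLOSSARY = {
    "red": ["rot"],
    "wine": ["Wein"],
    "glass": ["Glas", "Becher"],  # multiple candidate translations; the first is used
}

def translate_word_by_word(phrase: str) -> str:
    # Look each token up in the glossary; unknown tokens are left unchanged.
    return " ".join(GLOSSARY.get(token, [token])[0] for token in phrase.lower().split())

print(translate_word_by_word("red wine glass"))  # -> "rot Wein Glas"
```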
Parallel Text Processing Douglas Hofstadter, through his "Le Ton beau de Marot: In Praise of the Music of Language", shows what a complex task translation is. The author produced and analysed dozens upon dozens of possible translations for an eighteen-line French poem, thus revealing the complex inner workings of syntax, morphology and meaning. Unlike most translation engines, which choose a single translation based on back-to-back comparison of the texts in the source and target languages, Hofstadter's work demonstrates the inherent level of error which is present in any form of translation when the meaning of the source text is too detailed or complex. Thus the problem of text alignment and "statistics of language" is brought to attention. These discrepancies led to Martin Kay's views on translation and translation engines as a whole. As Kay puts it, "More substantial successes in these enterprises will require a sharper image of the world than any that can be made out simply from the statistics of language use" (page xvii, Parallel Text Processing: Alignment and Use of Translation Corpora). Thus Kay has brought back to light the question of meaning inside language and the distortion of meaning through processes of translation. Lexical Conceptual Structure One of the possible uses of Dictionary-Based Machine Translation is facilitating "Foreign Language Tutoring" (FLT). This can be achieved by using Machine-Translation technology as well as linguistics, semantics and morphology to produce "Large-Scale Dictionaries" in virtually any given language. Developments in lexical semantics and computational linguistics during the period between 1990 and 1996 made it possible for "natural language processing" (NLP) to flourish, gaining new capabilities, thereby benefiting machine translation in general. "Lexical Conceptual Structure" (LCS) is a representation that is language independent. It is mostly used in foreign language tutoring, especially in the natural language processing element of FLT. LCS has also proved to be an indispensable tool for machine translation of any kind, such as Dictionary-Based Machine Translation. Overall, one of the primary goals of LCS is "to demonstrate that synonymous verb senses share distributional patterns". "DKvec" "DKvec is a method for extracting bilingual lexicons, from noisy parallel corpora based on arrival distances of words in noisy parallel corpora". This method has emerged in response to two problems plaguing the statistical extraction of bilingual lexicons: "(1) How can noisy parallel corpora be used? (2) How can non-parallel yet comparable corpora be used?" The "DKvec" method has proven invaluable for machine translation in general, due to the success it has had in trials conducted on both English–Japanese and English–Chinese noisy parallel corpora. The figures for accuracy "show a 55.35% precision from a small corpus and 89.93% precision from a larger corpus". With such numbers it is safe to assume that methods such as "DKvec" have had a substantial impact on the evolution of machine translation in general, especially Dictionary-Based Machine Translation.
Algorithms used for extracting parallel corpora in a bilingual format exploit the following rules in order to achieve a satisfactory accuracy and overall quality: words have one sense per corpus; words have a single translation per corpus; there are no missing translations in the target document; frequencies of bilingual word occurrences are comparable; and positions of bilingual word occurrences are comparable. These methods can be used to generate, or to look for, occurrence patterns, which in turn are used to produce the binary occurrence vectors used by the "DKvec" method. History of Machine Translation The history of machine translation (MT) starts around the mid-1940s. Machine translation was probably the first time computers were used for non-numerical purposes. Machine translation enjoyed intense research interest during the 1950s and 1960s, which was followed by a stagnation until the 1980s. After the 1980s, machine translation became mainstream again, enjoying even greater popularity than in the 1950s and 1960s as well as rapid expansion, largely based on the text corpora approach. The basic concept of machine translation can be traced back to the 17th century in the speculations surrounding "universal languages and mechanical dictionaries". The first true practical machine translation suggestions were made in 1933 by Georges Artsrouni in France and Petr Trojanskij in Russia. Both had patented machines that they believed could be used for translating meaning from one language to another. "In June 1952, the first MT conference was convened at MIT by Yehoshua Bar-Hillel". On 7 January 1954 a Machine Translation convention in New York, sponsored by IBM, served to popularize the field. The convention's popularity came from the translation of short Russian sentences into English. This engineering feat mesmerised the public and the governments of both the US and the USSR, who therefore stimulated large-scale funding of machine translation research. Although the enthusiasm for machine translation was extremely high, technical and knowledge limitations led to disillusionment regarding what machine translation was actually capable of doing, at least at that time. Thus machine translation lost popularity until the 1980s, when advances in linguistics and technology helped revitalise interest in the field. Translingual information retrieval "Translingual information retrieval (TLIR) consists of providing a query in one language and searching document collections in one or more different languages". Most methods of TLIR can be divided into two categories, namely statistical-IR approaches and query translation. Machine translation based TLIR works in one of two ways. Either the query is translated into the target language, or the original query is used to search while the collection of possible results is translated into the query language and used for cross-reference. Both methods have pros and cons, namely: Translation Accuracy – the correctness of any machine translation is dependent on the size of the translated text; thus short texts or words may suffer from a greater degree of semantic error, as well as lexical ambiguity, whereas a larger text may provide context, which helps with disambiguation. Retrieval Accuracy – based on the same logic invoked in the previous point, it is preferable to have whole documents translated, rather than queries, because large texts are likely to suffer less loss of meaning in translation than short queries.
Practicality – unlike the previous points, translating short queries is the most practical option. This is because it is easy to translate short texts, whilst translating whole libraries is highly resource-intensive; moreover, the volume of such a translation task implies indexing the newly translated documents. All these points suggest that Dictionary-Based Machine Translation is the most efficient and reliable form of translation when working with TLIR. This is because the process "looks up each query term in a general-purpose bilingual dictionary, and uses all its possible translations." Machine Translation of Very Close Languages The examples of RUSLAN, a dictionary-based machine translation system between Czech and Russian, and CESILKO, a Czech–Slovak dictionary-based machine translation system, show that in the case of very close languages simpler translation methods are more efficient, fast and reliable. The RUSLAN system was made in order to prove the hypothesis that related languages are easier to translate. The system development started in 1985 and was terminated five years later due to lack of further funding. The lessons taught by the RUSLAN experiment are that a transfer-based approach to translation retains its quality regardless of how close the languages are. The two main bottlenecks of "full-fledged transfer-based systems" are the complexity and unreliability of syntactic analysis. Multilingual Information Retrieval MLIR "Information Retrieval systems rank documents according to statistical similarity measures based on the co-occurrence of terms in queries and documents". The MLIR system was created and optimised in a way that facilitates dictionary-based translation of queries. This is because queries tend to be short, often only a couple of words; although they provide little context, translating them is more feasible, for practical reasons, than translating whole documents. Despite this, the MLIR system is highly dependent on many resources, such as automated language detection software. See also Example-based machine translation Language industry Machine translation Neural machine translation Rule-based machine translation Statistical machine translation Translation Bibliography Machine translation
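As a hedged sketch of the query-translation approach to TLIR quoted above, the snippet below looks up each query term in a bilingual dictionary and keeps all of its possible translations, producing an expanded query for searching the target-language collection. The dictionary entries and the query are invented for illustration.

```python
# Toy general-purpose bilingual dictionary (English -> German), invented for illustration.
BILINGUAL = {
    "bank": ["Bank", "Ufer"],        # financial institution / river bank
    "loan": ["Darlehen", "Kredit"],
}

def expand_query(query: str) -> list[str]:
    # Keep every possible translation of every query term, as described in the text.
    expanded = []
    for term in query.lower().split():
        expanded.extend(BILINGUAL.get(term, [term]))
    return expanded

print(expand_query("bank loan"))  # -> ['Bank', 'Ufer', 'Darlehen', 'Kredit']
```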
Dictionary-based machine translation
Technology
2,371
28,656,352
https://en.wikipedia.org/wiki/Lie%20conformal%20algebra
A Lie conformal algebra is in some sense a generalization of a Lie algebra in that it too is a "Lie algebra," though in a different pseudo-tensor category. Lie conformal algebras are very closely related to vertex algebras and have many applications in other areas of algebra and integrable systems. Definition and relation to Lie algebras A Lie algebra is defined to be a vector space with a skew symmetric bilinear multiplication which satisfies the Jacobi identity. More generally, a Lie algebra is an object, in the category of vector spaces (read: -modules) with a morphism that is skew-symmetric and satisfies the Jacobi identity. A Lie conformal algebra, then, is an object in the category of -modules with morphism called the lambda bracket, which satisfies modified versions of bilinearity, skew-symmetry and the Jacobi identity: One can see that removing all the lambda's, mu's and partials from the brackets, one simply has the definition of a Lie algebra. Examples of Lie conformal algebras A simple and very important example of a Lie conformal algebra is the Virasoro conformal algebra. Over it is generated by a single element with lambda bracket given by In fact, it has been shown by Wakimoto that any Lie conformal algebra with lambda bracket satisfying the Jacobi identity on one generator is actually the Virasoro conformal algebra. Classification It has been shown that any finitely generated (as a -module) simple Lie conformal algebra is isomorphic to either the Virasoro conformal algebra, a current conformal algebra or a semi-direct product of the two. There are also partial classifications of infinite subalgebras of and . Generalizations Use in integrable systems and relation to the calculus of variations References Victor Kac, "Vertex algebras for beginners". University Lecture Series, 10. American Mathematical Society, 1998. viii+141 pp. Non-associative algebra Lie algebras Conformal field theory
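The displayed formulas for the lambda bracket did not survive extraction; for reference, a standard formulation of the axioms (following the conventions of Kac's book cited below, up to sign conventions) is sketched here in LaTeX, together with the usual Virasoro bracket.

```latex
% Standard axioms for a Lie conformal algebra R, a C[\partial]-module with a
% lambda bracket [a_\lambda b]; supplied as the conventional formulation.
\begin{align*}
  &\text{Sesquilinearity:} && [\partial a\,_\lambda\, b] = -\lambda [a_\lambda b],
      \qquad [a\,_\lambda\, \partial b] = (\lambda + \partial)[a_\lambda b],\\
  &\text{Skew-symmetry:}   && [a_\lambda b] = -[b_{-\lambda-\partial}\, a],\\
  &\text{Jacobi identity:} && [a_\lambda [b_\mu c]] - [b_\mu [a_\lambda c]]
      = [[a_\lambda b]_{\lambda+\mu}\, c].
\end{align*}
% Virasoro conformal algebra, generated by a single element L (no central term):
\[ [L_\lambda L] = (\partial + 2\lambda)\, L. \]
```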
Lie conformal algebra
Mathematics
425
12,996,503
https://en.wikipedia.org/wiki/Suzuki%20Advanced%20Cooling%20System
The Suzuki Advanced Cooling System (SACS) was developed by Suzuki engineer Etsuo Yokouchi in the early 1980s. The system was used extensively on GSX-R model bikes from 1985 through 1992. Suzuki continued to use the system in its GSF (Bandit) and GSX (GSX-F, GSX1400, Inazuma) lines until the 2006 model year, and in the DR650 from 1990 to the present. Engines using the SACS system were generally regarded as being very durable. Development The SACS system was first conceived by Etsuo Yokouchi while addressing reliability issues in Suzuki's only turbocharged bike, the XN85; he looked to World War II–era aircraft for inspiration. Like air-cooled motorcycles, radial engines used in many early aircraft suffered from heat and reliability issues. To overcome these problems, aircraft engineers often used oil jets aimed at the bottom of an engine's pistons to carry away excess heat. Following their example, Yokouchi decided to apply the approach to motorcycles. When the GSX-R entered development, Suzuki set a goal of for a 750 engine and, due to known heat-related problems in high-power air-cooled engines, determined that air cooling alone would not be sufficient. Therefore, the SACS system was applied to the bike's design and was eventually carried over to all larger GSX-Rs. The final GSX-R SACS engine appeared on the Suzuki GSX-R1100 in 1992; later bikes featured water cooling. Mechanics The SACS system uses high volumes of engine oil aimed at strategic points of the engine, like the top of the combustion chamber, which are not typically well served by air cooling alone. In order to provide enough oil for both cooling and lubrication, the system uses a double-chamber oil pump, using the high-pressure side for lubrication of the parts (crankshaft, connecting rods, valvetrain), while the low-pressure, high-volume side provides oil to the cooling and filtering circuit. The oil removes heat from hot engine parts through direct contact, is pumped away and subsequently routed through the oil filter and then through an oil cooler before being returned to the main sump. References Motorcycle engines Motorcycle technology Engine cooling systems
Suzuki Advanced Cooling System
Technology
463
3,768,241
https://en.wikipedia.org/wiki/Thai%20calendar
In Thailand, two main calendar systems are used alongside each other: the Thai solar calendar, based on the Gregorian calendar and used for official and most day-to-day purposes, and the Thai lunar calendar (a version of the Buddhist calendar, technically a lunisolar calendar), used for traditional events and Buddhist religious practices. The solar calendar was introduced in 1889 by King Chulalongkorn (Rama V), replacing the lunar calendar in official contexts. The beginning of the year was originally marked as 1 April; this was changed to 1 January in 1941. The days and months now correspond exactly to the Gregorian calendar. The years follow the Buddhist Era, introduced in 1913 to replace the Rattanakosin Era, which in turn replaced the Chula Sakarat in 1889. The reckoning of the Buddhist Era in Thailand is 543 years ahead of the Gregorian calendar (Anno Domini), so a given year AD corresponds to that year plus 543 in B.E. The lunar calendar contains 12 or 13 months in a year, with 15 waxing moon and 14 or 15 waning moon days in a month, amounting to years of 354, 355 or 384 days. The years are usually noted by the animal of the Chinese zodiac, although there are several dates used to count the New Year. As with the rest of the world, the seven-day week is used alongside both calendars. The solar calendar now governs most aspects of life in Thailand, and while official state documents invariably follow the Buddhist Era, the Common Era is also used by the private sector. The lunar calendar determines the dates of Buddhist holidays, traditional festivals and astrological practices, and the lunar date is still recorded on birth certificates and printed in most daily newspapers. Calendars Red numerals mark Sundays and public holidays in Thailand. Buddha images mark Buddhist Sabbaths, Wan Phra. Red tablets with white Chinese characters mark the New and Full Moons of the Chinese calendar, which typically differ by one day from those of the Thai. Thai lunar calendar dates appear below the solar calendar date. Birthdays Mundane astrology figures prominently in Thai culture, so modern Thai birth certificates include lunar calendar dates, and the appropriate Chinese calendar zodiacal animal year-name for both Thai Hora (ho-ra-sat) and Chinese astrology. Thai birth certificates record the date, month and time of birth, followed by the day of the week, lunar date, and the applicable zodiac animal name. Thais traditionally reckon age by the 12-year animal-cycle names, with the twelfth and sixtieth anniversaries being of special significance; but the official calendar determines age at law. For instance, 12 August 2004 was observed without regard to the lunar date as Queen Sirikit's birthday, a public holiday also observed as Thai Mother's Day. Her zodiacal animal is the Monkey and her traditionally significant sixtieth anniversary year was 1992. As she was born on a Friday, her auspicious birthday colour is blue. Thai auspicious colours of the day are given in the table of weekdays, followed below it by a link to the Buddha images for each day of the week. Weeks A week (sapda or सप्ताह sapdaha, from Sanskrit "seven") is a 7-day period beginning on Sunday and ending Saturday. Days of the week are named after the first seven of the nine Indian astrological Navagraha; i.e., the sun, moon, and five classical planets. Note: Colours are those considered auspicious for the given days of the week, that of Wednesday day being green and of Wednesday night, light green.
Of the Buddha images representing episodes from his life, there is one that represents a week and others for each day of the week: Monday has three similar options, and Wednesday has entirely different ones for day and night. Thai representations of the planets in deity form are below: Weekends and holidays Saturdays and Sundays (sao athit) are observed as legal non-workdays (wan yut ratchakan) and are generally shown on calendars in red, as are public holidays. Since 1996 and subject to declaration by the Cabinet of Thailand, public holidays that fall on weekends are followed by Substitution days (wan chot choei), generally shown in a lighter shade of red, as shown above for Monday, 2 August 2004. Buddhist feasts that are public holidays are calculated according to the Thai lunar calendar, so their dates change every year with respect to the solar calendar. Lunar New Year and other feasts observed by Thai Chinese vary with respect to both, as these are calculated according to the Chinese calendar. See also Date and time notation in Thailand Thai solar calendar References Calendars
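A small Python sketch of the year conversion stated above, assuming only the 543-year offset between the Thai Buddhist Era and the Gregorian calendar given in the text:

```python
# The Thai Buddhist Era (B.E.) runs 543 years ahead of the Gregorian calendar.
def ce_to_be(gregorian_year: int) -> int:
    return gregorian_year + 543

def be_to_ce(buddhist_era_year: int) -> int:
    return buddhist_era_year - 543

print(ce_to_be(1941))  # 2484 - the year New Year's Day moved to 1 January
print(be_to_ce(2567))  # 2024
```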
Thai calendar
Physics
971
203,711
https://en.wikipedia.org/wiki/Bioluminescence
Bioluminescence is the emission of light during a chemiluminescence reaction by living organisms. Bioluminescence occurs in diverse organisms, including marine vertebrates and invertebrates, as well as in some fungi, microorganisms including some bioluminescent bacteria, dinoflagellates and terrestrial arthropods such as fireflies. In some animals, the light is bacteriogenic, produced by symbiotic bacteria such as those from the genus Vibrio; in others, it is autogenic, produced by the animals themselves. In most cases, the principal chemical reaction in bioluminescence involves the reaction of a substrate called luciferin and an enzyme, called luciferase. Because these are generic names, luciferins and luciferases are often distinguished by the species or group, e.g. firefly luciferin or cypridina luciferin. In all characterized cases, the enzyme catalyzes the oxidation of the luciferin resulting in excited state oxyluciferin, which is the light emitter of the reaction. Upon its decay to the ground state it emits visible light. In all known cases of bioluminescence the production of the excited state molecules involves the decomposition of organic peroxides. In some species, the luciferase requires other cofactors, such as calcium or magnesium ions, and sometimes also the energy-carrying molecule adenosine triphosphate (ATP). In evolution, luciferins vary little: one in particular, coelenterazine, is found in 11 different animal phyla, though in some of these, the animals obtain it through their diet. Conversely, luciferases vary widely between different species. Bioluminescence has arisen over 40 times in evolutionary history. Both Aristotle and Pliny the Elder mentioned that damp wood sometimes gives off a glow. Many centuries later Robert Boyle showed that oxygen was involved in the process, in both wood and glowworms. It was not until the late nineteenth century that bioluminescence was properly investigated. The phenomenon is widely distributed among animal groups, especially in marine environments. On land it occurs in fungi, bacteria and some groups of invertebrates, including insects. The uses of bioluminescence by animals include counterillumination camouflage, mimicry of other animals, for example to lure prey, and signaling to other individuals of the same species, such as to attract mates. In the laboratory, luciferase-based systems are used in genetic engineering and biomedical research. Researchers are also investigating the possibility of using bioluminescent systems for street and decorative lighting, and a bioluminescent plant has been created. History Before the development of the safety lamp for use in coal mines, dried fish skins were used in Britain and Europe as a weak source of light. This experimental form of illumination avoided the necessity of using candles which risked sparking explosions of firedamp. In 1920, the American zoologist E. Newton Harvey published a monograph, The Nature of Animal Light, summarizing early work on bioluminescence. Harvey notes that Aristotle mentions light produced by dead fish and flesh, and that both Aristotle and Pliny the Elder (in his Natural History) mention light from damp wood. He records that Robert Boyle experimented on these light sources, and showed that both they and the glowworm require air for light to be produced. Harvey notes that in 1753, J.
Baker identified the flagellate Noctiluca "as a luminous animal" "just visible to the naked eye", and in 1854 Johann Florian Heller (1813–1871) identified strands (hyphae) of fungi as the source of light in dead wood. Tuckey, in his posthumous 1818 Narrative of the Expedition to the Zaire, described catching the animals responsible for luminescence. He mentions pellucids, crustaceans (to which he ascribes the milky whiteness of the water), and cancers (shrimps and crabs). Under the microscope he described the "luminous property" to be in the brain, resembling "a most brilliant amethyst about the size of a large pin's head". Charles Darwin noticed bioluminescence in the sea, describing it in his Journal: While sailing in these latitudes on one very dark night, the sea presented a wonderful and most beautiful spectacle. There was a fresh breeze, and every part of the surface, which during the day is seen as foam, now glowed with a pale light. The vessel drove before her bows two billows of liquid phosphorus, and in her wake she was followed by a milky train. As far as the eye reached, the crest of every wave was bright, and the sky above the horizon, from the reflected glare of these livid flames, was not so utterly obscure, as over the rest of the heavens. Darwin also observed a luminous "jelly-fish of the genus Dianaea", noting that: "When the waves scintillate with bright green sparks, I believe it is generally owing to minute crustacea. But there can be no doubt that very many other pelagic animals, when alive, are phosphorescent." He guessed that "a disturbed electrical condition of the atmosphere" was probably responsible. Daniel Pauly comments that Darwin "was lucky with most of his guesses, but not here", noting that biochemistry was too little known, and that the complex evolution of the marine animals involved "would have been too much for comfort". Bioluminescence attracted the attention of the United States Navy in the Cold War, since submarines in some waters can create a bright enough wake to be detected; a German submarine was sunk in the First World War, having been detected in this way. The Navy was interested in predicting when such detection would be possible, and hence guiding their own submarines to avoid detection. Among the anecdotes of navigation by bioluminescence is one recounted by the Apollo 13 astronaut Jim Lovell, who as a Navy pilot had found his way back to his aircraft carrier USS Shangri-La when his navigation systems failed. Turning off his cabin lights, he saw the glowing wake of the ship, and was able to fly to it and land safely. The French pharmacologist Raphaël Dubois carried out work on bioluminescence in the late nineteenth century. He studied click beetles (Pyrophorus) and the marine bivalve mollusc Pholas dactylus. He refuted the old idea that bioluminescence came from phosphorus, and demonstrated that the process was related to the oxidation of a specific compound, which he named luciferin, by an enzyme. He sent Harvey siphons from the mollusc preserved in sugar. Harvey had become interested in bioluminescence as a result of visiting the South Pacific and Japan and observing phosphorescent organisms there. He studied the phenomenon for many years. His research aimed to demonstrate that luciferin, and the enzymes that act on it to produce light, were interchangeable between species, showing that all bioluminescent organisms had a common ancestor. 
However, he found this hypothesis to be false, with different organisms having major differences in the composition of their light-producing proteins. He spent the next 30 years purifying and studying the components, but it fell to the young Japanese chemist Osamu Shimomura to be the first to obtain crystalline luciferin. He used the sea firefly Vargula hilgendorfii, but it was another ten years before he discovered the chemical's structure and published his 1957 paper Crystalline Cypridina Luciferin. Shimomura, Martin Chalfie, and Roger Y. Tsien won the 2008 Nobel Prize in Chemistry for their 1961 discovery and development of green fluorescent protein as a tool for biological research. Harvey wrote a detailed historical account on all forms of luminescence in 1957. An updated book on bioluminescence covering also the twentieth and early twenty-first century was published recently. Evolution In 1932 E. N. Harvey was among the first to propose how bioluminescence could have evolved. In this early paper, he suggested that proto-bioluminescence could have arisen from respiratory chain proteins that hold fluorescent groups. This hypothesis has since been disproven, but it did lead to considerable interest in the origins of the phenomenon. Today, the two prevailing hypotheses (both concerning marine bioluminescence) are those put forth by Howard Seliger in 1993 and Rees et al. in 1998. Seliger's theory identifies luciferase enzymes as the catalyst for the evolution of bioluminescent systems. It suggests that the original purpose of luciferases was as mixed-function oxygenases. As the early ancestors of many species moved into deeper and darker waters natural selection favored the development of increased eye sensitivity and enhanced visual signals. If selection were to favor a mutation in the oxygenase enzyme required for the breakdown of pigment molecules (molecules often associated with spots used to attract a mate or distract a predator) it could have eventually resulted in external luminescence in tissues. Rees et al. use evidence gathered from the marine luciferin coelenterazine to suggest that selection acting on luciferins may have arisen from pressures to protect oceanic organisms from potentially deleterious reactive oxygen species (e.g. H2O2 and O2− ). The functional shift from antioxidation to bioluminescence probably occurred when the strength of selection for antioxidation defense decreased as early species moved further down the water column. At greater depths exposure to ROS is significantly lower, as is the endogenous production of ROS through metabolism. While popular at first, Seliger's theory has been challenged, particularly on the biochemical and genetic evidence that Rees examines. What remains clear, however, is that bioluminescence has evolved independently at least 40 times. Bioluminescence in fish began at least by the Cretaceous period. About 1,500 fish species are known to be bioluminescent; the capability evolved independently at least 27 times. Of these, 17 involved the taking up of bioluminous bacteria from the surrounding water while in the others, the intrinsic light evolved through chemical synthesis. These fish have become surprisingly diverse in the deep ocean and control their light with the help of their nervous system, using it not just to lure prey or hide from predators, but also for communication. All bioluminescent organisms have in common that the reaction of a "luciferin" and oxygen is catalyzed by a luciferase to produce light. 
McElroy and Seliger proposed in 1962 that the bioluminescent reaction evolved to detoxify oxygen, in parallel with photosynthesis. Thuesen, Davis et al. showed in 2016 that bioluminescence has evolved independently 27 times within 14 fish clades across ray-finned fishes. The oldest of these appears to be Stomiiformes and Myctophidae. In sharks, bioluminescence has evolved only once. Genomic analysis of octocorals indicates that their ancestor was bioluminescent as long ago as 540 million years. Chemical mechanism Bioluminescence is a form of chemiluminescence where light energy is released by a chemical reaction. This reaction involves a light-emitting pigment, the luciferin, and a luciferase, the enzyme component. Because of the diversity of luciferin/luciferase combinations, there are very few commonalities in the chemical mechanism. From currently studied systems, the only unifying mechanism is the role of molecular oxygen; often there is a concurrent release of carbon dioxide (CO2). For example, the firefly luciferin/luciferase reaction requires magnesium and ATP and produces CO2, adenosine monophosphate (AMP) and pyrophosphate (PP) as waste products. Other cofactors may be required, such as calcium (Ca2+) for the photoprotein aequorin, or magnesium (Mg2+) ions and ATP for the firefly luciferase. Generically, this reaction can be described as: Luciferin + O2 → Oxyluciferin + light energy, catalyzed by a luciferase together with any other cofactors required. Instead of a luciferase, the jellyfish Aequorea victoria makes use of another type of protein called a photoprotein, in this case specifically aequorin. When calcium ions are added, rapid catalysis creates a brief flash quite unlike the prolonged glow produced by luciferase. In a second, much slower step, luciferin is regenerated from the oxidized (oxyluciferin) form, allowing it to recombine with aequorin, in preparation for a subsequent flash. Photoproteins are thus enzymes, but with unusual reaction kinetics. Furthermore, some of the blue light released by aequorin in contact with calcium ions is absorbed by a green fluorescent protein, which in turn releases green light in a process called resonant energy transfer. Overall, bioluminescence has arisen over 40 times in evolutionary history. In evolution, luciferins tend to vary little: one in particular, coelenterazine, is the light-emitting pigment for nine phyla (groups of very different organisms), including polycystine radiolaria, Cercozoa (Phaeodaria), protozoa, comb jellies, cnidaria including jellyfish and corals, crustaceans, molluscs, arrow worms and vertebrates (ray-finned fish). Not all these organisms synthesise coelenterazine: some of them obtain it through their diet. Conversely, luciferase enzymes vary widely and tend to be different in each species. Distribution Bioluminescence occurs widely among animals, especially in the open sea, including fish, jellyfish, comb jellies, crustaceans, and cephalopod molluscs; in some fungi and bacteria; and in various terrestrial invertebrates, nearly all of which are beetles. In marine coastal habitats, about 2.5% of organisms are estimated to be bioluminescent, whereas in pelagic habitats in the eastern Pacific, about 76% of the main taxa of deep-sea animals have been found to be capable of producing light. More than 700 animal genera have been recorded with light-producing species. Most marine light-emission is in the blue and green light spectrum. However, some loose-jawed fish emit red and infrared light, and the genus Tomopteris emits yellow light.
The most frequently encountered bioluminescent organisms may be the dinoflagellates in the surface layers of the sea, which are responsible for the sparkling luminescence sometimes seen at night in disturbed water. At least 18 genera of these phytoplankton exhibit luminosity. Luminescent dinoflagellate ecosystems are present in warm-water lagoons and bays with narrow openings to the ocean. A different effect is the thousands of square miles of the ocean which shine with the light produced by bioluminescent bacteria, known as mareel or the milky seas effect. Pelagic zone Bioluminescence is abundant in the pelagic zone, with the greatest concentration at depths devoid of light and in surface waters at night. These organisms participate in diurnal vertical migration from the dark depths to the surface at night, dispersing the population of bioluminescent organisms across the pelagic water column. The dispersal of bioluminescence across different depths in the pelagic zone has been attributed to the selection pressures imposed by predation and the lack of places to hide in the open sea. At depths where sunlight never penetrates, often below 200 m, the significance of bioluminescence is evident in the retention of functional eyes, which organisms use to detect bioluminescence. Bacterial symbioses Organisms often produce bioluminescence themselves; rarely do they generate it from outside phenomena. However, there are occasions where bioluminescence is produced by bacterial symbionts that have a symbiotic relationship with the host organism. Although many luminous bacteria in the marine environment are free-living, a majority are found in symbiotic relationships that involve fish, squids, crustaceans etc. as hosts. Most luminous bacteria inhabit the sea, dominated by Photobacterium and Vibrio. In the symbiotic relationship, the bacteria benefit from having a source of nourishment and a refuge in which to grow. Hosts obtain these bacterial symbionts from the environment, through spawning, or through coevolution of the luminous bacteria with their host. Coevolutionary interactions are suggested because host organisms' anatomical adaptations have become specific to only certain luminous bacteria, satisfying their ecological dependence on bioluminescence. Benthic zone Bioluminescence is widely studied amongst species located in the mesopelagic zone, but the benthic zone at mesopelagic depths remains largely unexplored. Benthic habitats at depths beyond the mesopelagic are also poorly understood due to the same constraints. Unlike the pelagic zone, where the emission of light is undisturbed in the open sea, the occurrence of bioluminescence in the benthic zone is less common. This has been attributed to the blockage of emitted light by a number of obstacles, such as the sea floor and inorganic and organic structures. Visual signals and communication that are prevalent in the pelagic zone, such as counter-illumination, may not be functional or relevant in the benthic realm. Bioluminescence in bathyal benthic species still remains poorly studied due to the difficulty of collecting species at these depths. Uses in nature Bioluminescence has several functions in different taxa. Steven Haddock et al.
(2010) list as more or less definite functions in marine organisms the following: defensive functions of startle, counterillumination (camouflage), misdirection (smoke screen), distractive body parts, burglar alarm (making predators easier for higher predators to see), and warning to deter settlers; offensive functions of lure, stun or confuse prey, illuminate prey, and mate attraction/recognition. It is much easier for researchers to detect that a species is able to produce light than to analyze the chemical mechanisms or to prove what function the light serves. In some cases the function is unknown, as with species in three families of earthworm (Oligochaeta), such as Diplocardia longa, where the coelomic fluid produces light when the animal moves. The following functions are reasonably well established in the named organisms. Counterillumination camouflage In many animals of the deep sea, including several squid species, bacterial bioluminescence is used for camouflage by counterillumination, in which the animal matches the overhead environmental light as seen from below. In these animals, photoreceptors control the illumination to match the brightness of the background. These light organs are usually separate from the tissue containing the bioluminescent bacteria. However, in one species, Euprymna scolopes, the bacteria are an integral component of the animal's light organ. Attraction Bioluminescence is used in a variety of ways and for different purposes. The cirrate octopod Stauroteuthis syrtensis emits bioluminescence from its sucker-like structures. These structures are believed to have evolved from what are more commonly known as octopus suckers. They do not have the same function as the normal suckers because they no longer have any handling or grappling ability, owing to their evolution into photophores. The photophores are placed within the animal's oral reach, which leads researchers to suggest that it uses its bioluminescence to lure and capture prey. Fireflies use light to attract mates. Two systems are involved according to species; in one, females emit light from their abdomens to attract males; in the other, flying males emit signals to which the sometimes sedentary females respond. Click beetles emit an orange light from the abdomen when flying and a green light from the thorax when they are disturbed or moving about on the ground. The former is probably a sexual attractant but the latter may be defensive. Larvae of the click beetle Pyrophorus nyctophanus live in the surface layers of termite mounds in Brazil. They light up the mounds by emitting a bright greenish glow which attracts the flying insects on which they feed. In the marine environment, use of luminescence for mate attraction is chiefly known among ostracods, small shrimp-like crustaceans, especially in the family Cyprididae. Pheromones may be used for long-distance communication, with bioluminescence used at close range to enable mates to "home in". A polychaete worm, the Bermuda fireworm, creates a brief display, a few nights after the full moon, when the female lights up to attract males. Defense The defense mechanisms of bioluminescent organisms can come in multiple forms: startling predators, counter-illumination, smoke screen or misdirection, distractive body parts, burglar alarm, sacrificial tag or warning coloration. Shrimp of the family Oplophoridae Dana use their bioluminescence as a way of startling the predator that is after them.
Acanthephyra purpurea, within the Oplophoridae family, uses its photophores to emit light, and can secrete a bioluminescent substance when in the presence of a predator. This secretory mechanism is common among prey fish. Many cephalopods, including at least 70 genera of squid, are bioluminescent. Some squid and small crustaceans use bioluminescent chemical mixtures or bacterial slurries in the same way as many squid use ink. A cloud of luminescent material is expelled, distracting or repelling a potential predator, while the animal escapes to safety. The deep sea squid Octopoteuthis deletron may autotomize portions of its arms which are luminous and continue to twitch and flash, thus distracting a predator while the animal flees. Dinoflagellates may use bioluminescence for defense against predators. They shine when they detect a predator, possibly making the predator itself more vulnerable by attracting the attention of predators from higher trophic levels. Grazing copepods release any phytoplankton cells that flash, unharmed; if they were eaten they would make the copepods glow, attracting predators, so the phytoplankton's bioluminescence is defensive. The problem of shining stomach contents is solved (and the explanation corroborated) in predatory deep-sea fishes: their stomachs have a black lining able to keep the light from any bioluminescent fish prey which they have swallowed from attracting larger predators. The sea-firefly is a small crustacean living in sediment. At rest it emits a dull glow but when disturbed it darts away leaving a cloud of shimmering blue light to confuse the predator. During World War II it was gathered and dried for use by the Japanese army as a source of light during clandestine operations. The larvae of railroad worms (Phrixothrix) have paired photic organs on each body segment, able to glow with green light; these are thought to have a defensive purpose. They also have organs on the head which produce red light; they are the only terrestrial organisms to emit light of this color. Warning Aposematism is a widely used function of bioluminescence, providing a warning that the creature concerned is unpalatable. It is suggested that many firefly larvae glow to repel predators; some millipedes glow for the same purpose. Some marine organisms are believed to emit light for a similar reason. These include scale worms, jellyfish and brittle stars but further research is needed to fully establish the function of the luminescence. Such a mechanism would be of particular advantage to soft-bodied cnidarians if they were able to deter predation in this way. The limpet Latia neritoides is the only known freshwater gastropod that emits light. It produces greenish luminescent mucus which may have an anti-predator function. The marine snail Hinea brasiliana uses flashes of light, probably to deter predators. The blue-green light is emitted through the translucent shell, which functions as an efficient diffuser of light. Communication Communication in the form of quorum sensing plays a role in the regulation of luminescence in many species of bacteria. Small extracellularly secreted molecules stimulate the bacteria to turn on genes for light production when cell density, measured by concentration of the secreted molecules, is high. Pyrosomes are colonial tunicates and each zooid has a pair of luminescent organs on either side of the inlet siphon. When stimulated by light, these turn on and off, causing rhythmic flashing. 
No neural pathway runs between the zooids, but each responds to the light produced by other individuals, and even to light from other nearby colonies. Communication by light emission between the zooids enables coordination of colony effort, for example in swimming where each zooid provides part of the propulsive force. Some bioluminescent bacteria infect nematodes that parasitize Lepidoptera larvae. When these caterpillars die, their luminosity may attract predators to the dead insect, thus assisting in the dispersal of both bacteria and nematodes. A similar reason may account for the many species of fungi that emit light. Species in the genera Armillaria, Mycena, Omphalotus, Panellus, Pleurotus and others do this, emitting usually greenish light from the mycelium, cap and gills. This may attract night-flying insects and aid in spore dispersal, but other functions may also be involved. Quantula striata is the only known bioluminescent terrestrial mollusk. Pulses of light are emitted from a gland near the front of the foot and may have a communicative function, although the adaptive significance is not fully understood. Mimicry Bioluminescence is used by a variety of animals to mimic other species. Many species of deep sea fish such as the anglerfish and dragonfish make use of aggressive mimicry to attract prey. They have an appendage on their heads called an esca that contains bioluminescent bacteria able to produce a long-lasting glow which the fish can control. The glowing esca is dangled or waved about to lure small animals to within striking distance of the fish. The cookiecutter shark uses bioluminescence to camouflage its underside by counter-illumination, but a small patch near its pectoral fins remains dark, appearing as a small fish to large predatory fish like tuna and mackerel swimming beneath it. When such fish approach the lure, they are bitten by the shark. Female Photuris fireflies sometimes mimic the light pattern of another firefly, Photinus, to attract its males as prey. In this way they obtain both food and the defensive chemicals named lucibufagins, which Photuris cannot synthesize. South American giant cockroaches of the genus Lucihormetica were believed to be the first known example of defensive mimicry, emitting light in imitation of bioluminescent, poisonous click beetles. However, doubt has been cast on this assertion, and there is no conclusive evidence that the cockroaches are bioluminescent. Illumination While most marine bioluminescence is green to blue, some deep sea barbeled dragonfishes in the genera Aristostomias, Pachystomias and Malacosteus emit a red glow. This adaptation allows the fish to see red-pigmented prey, which are normally invisible to other organisms in the deep ocean environment where red light has been filtered out by the water column. These fish are able to utilize the longer wavelength as a spotlight for prey that only they can see. The fish may also use this light to communicate with each other to find potential mates. The ability of the fish to see this light is explained by the presence of a specialized rhodopsin pigment. The mechanism of light creation is a suborbital photophore that utilizes gland cells to produce exergonic chemical reactions that generate light with a longer, red wavelength. The dragonfish species which produce the red light also produce blue light in a photophore on the dorsal area. The main function of this is to alert the fish to the presence of its prey.
The additional pigment is thought to be assimilated from chlorophyll derivatives found in the copepods which form part of its diet. The angler siphonophore (Erenna) utilizes red bioluminescence in appendages to lure fish. Biotechnology Biology and medicine Bioluminescent organisms are a target for many areas of research. Luciferase systems are widely used in genetic engineering as reporter genes, each producing a different color by fluorescence, and for biomedical research using bioluminescence imaging. For example, the firefly luciferase gene was used as early as 1986 for research using transgenic tobacco plants. Vibrio bacteria, which form symbioses with marine invertebrates such as the Hawaiian bobtail squid (Euprymna scolopes), are key experimental models for bioluminescence. Bioluminescent activated destruction is an experimental cancer treatment. In vivo luminescence cell and animal imaging uses dyes and fluorescent proteins as chromophores. The characteristics of each chromophore determine which cell area(s) will be targeted and illuminated. Light production The structures of photophores, the light producing organs in bioluminescent organisms, are being investigated by industrial designers. Engineered bioluminescence could perhaps one day be used to reduce the need for street lighting, or for decorative purposes if it becomes possible to produce light that is both bright enough and can be sustained for long periods at a workable price. The gene that makes the tails of fireflies glow has been added to mustard plants. The plants glow faintly for an hour when touched, but a sensitive camera is needed to see the glow. The University of Wisconsin–Madison is researching the use of genetically engineered bioluminescent E. coli bacteria in a light bulb. In 2011, Philips launched a microbial system for ambience lighting in the home. An iGEM team from Cambridge (England) has started to address the problem that luciferin is consumed in the light-producing reaction by developing a genetic biotechnology part that codes for a luciferin-regenerating enzyme from the North American firefly. In 2016, Glowee, a French company, started selling bioluminescent lights for shop fronts and street signs, for use between 1 and 7 in the morning, when the law forbids the use of electricity for this purpose. They used the bioluminescent bacterium Aliivibrio fischeri, but the maximum lifetime of their product was three days. In April 2020, plants were genetically engineered to glow more brightly using genes from the bioluminescent mushroom Neonothopanus nambi to convert caffeic acid into luciferin. Another possible application is to replace chemiluminescence with bioluminescent enzymes. A Canadian company, Lux Bio, is developing long-duration bioluminescent enzymes for this purpose. ATP bioluminescence ATP bioluminescence is the process in which ATP is used to generate luminescence in an organism, in conjunction with other compounds such as luciferin. It is a very good biosensor for testing for the presence of living microbes in water. Different types of microbial populations are determined through different sets of ATP assays using other substrates and reagents. Renilla- and Gaussia-based cell viability assays use the substrate coelenterazine. See also Animal coloration Biophoton Life That Glows, 2016 full-length documentary Notes References Further reading Schramm, Stefan; Weiß, Dieter (2024). "Bioluminescence – The Vibrant Glow of Nature and its Chemical Mechanisms". ChemBioChem.
25 (9): e202400106. doi:10.1002/cbic.202400106. Meyer-Rochow, Victor Benno (2009). Bioluminescence in Focus – a Collection of Illuminating Essays. Research Signpost. Shimomura, Osamu (2006). Bioluminescence: Chemical Principles and Methods. World Scientific Publishing. Lee, John (2016). "Bioluminescence, the Nature of the Light." The University of Georgia Libraries. http://hdl.handle.net/10724/20031 Anctil, Michel (2018). Luminous Creatures: The History and Science of Light Production in Living Organisms. McGill-Queen's University Press. External links BBC: Red tide: Electric blue waves wash California shore MBARI: Gonyaulax Bioluminescence UF/IFAS: glow-worms TED: Glowing life in an underwater world (video) Smithsonian Ocean Portal: Bioluminescent animals photo gallery National Geographic: Bioluminescence Annual Review of Marine Science: Bioluminescence in the Sea Canon Australia – Tips on How to Photograph Bioluminescence The New York City American Natural History Museum's "Creatures of Light: Nature's Bioluminescence" 2022 Featured Exhibit webpage (in concert with the Ottawa, Canada-based Canadian Museum of Nature and Chicago's Field Museum of Natural History) Fisheries science Light sources Counter-illumination camouflage Mimicry Bioelectromagnetics
Bioluminescence
Chemistry,Biology
6,952
56,910,813
https://en.wikipedia.org/wiki/Archives%20management
Archives management is the area of management concerned with the maintenance and use of archives. It is concerned with the acquisition, care, arrangement, description and retrieval of records once they have been transferred from an organisation to the archival repository. Once records have been selected and transferred to archival custody, they become archives. Managing archives The steps involved in managing archives include acquiring and receiving records from the office of origin, arranging and describing them according to archival principles and practices, and providing easy retrieval of and access to the archives. Archives and accessibility An increasingly relevant aspect of archives management is ensuring the accessibility of archives and archive materials to all users regardless of physical ability. Most archivist and library associations now include resources on educating archivists in how to make their archives more accessible. Both archivists and special collections librarians are faced with the issue of making their resources more accessible to the public, as their items were created prior to the consideration of accessibility. See also Collections management References Knowledge management Information systems Groupware Business terms Hypertext Library science
Archives management
Technology
203
47,515,759
https://en.wikipedia.org/wiki/Alvernaviridae
Alvernaviridae is a family of non-enveloped positive-strand RNA viruses. Dinoflagellates serve as natural hosts. There is one genus in this family, Dinornavirus, which contains one species: Heterocapsa circularisquama RNA virus 01. Diseases associated with this family include host population control, possibly through lysis of the host cell. Structure Viruses in Alvernaviridae are non-enveloped, with icosahedral and spherical geometries, and T=3 symmetry. The diameter is around 34 nm. Genome Genomes are linear and non-segmented, around 4.4 kb in length. Life cycle Viral replication is cytoplasmic. Entry is achieved by penetration into the host cell. Replication follows the positive-strand RNA virus replication model in the cytoplasm. Positive-strand RNA virus transcription is the method of transcription. The virus is assembled in the cytoplasm. Dinoflagellates serve as the natural host. References External links Viralzone: Alvernaviridae ICTV Virus families Riboviria
Alvernaviridae
Biology
228
47,372,902
https://en.wikipedia.org/wiki/M85-HCC1
M85-HCC1 is an ultracompact dwarf galaxy with a star density 1,000,000 times that of the solar neighbourhood, lying near the galaxy Messier 85. It is the densest galaxy known. See also List of most massive galaxies M59-UCD3 (second-densest galaxy known, as of 2015) M60-UCD1 (another dense galaxy) References Dwarf galaxies 20150727 Coma Berenices
M85-HCC1
Astronomy
98
33,815,350
https://en.wikipedia.org/wiki/Jennifer%20Quinn
Jennifer J. Quinn is an American mathematician specializing in combinatorics, and professor of mathematics at the University of Washington Tacoma. She sits on the board of governors of the Mathematical Association of America, and is serving as its president for the years 2021 and 2022. From 2004 to 2008 she was co-editor of Math Horizons. Education and career Quinn went to Williams College as an undergraduate, graduating in 1985. She earned a master's degree from the University of Illinois at Chicago in 1987, and completed her doctorate at the University of Wisconsin–Madison in 1993. Her dissertation, Colorings and Cycle Packings in Graphs and Digraphs, was supervised by Richard A. Brualdi. She taught at Occidental College until 2005, when she gave up her position as full professor and department chair to move with her husband, biologist Mark Martin, to Washington. She became a part-time lecturer, and executive director of the Association for Women in Mathematics, until earning a faculty position at Tacoma in 2007. Recognition Quinn won a Distinguished Teaching Award from the Mathematical Association of America in 2001, and the Deborah and Franklin Tepper Haimo Award for Distinguished College or University Teaching of Mathematics of the association in 2007. Quinn's book with Arthur T. Benjamin, Proofs that Really Count: The Art of Combinatorial Proof (2003) won the CHOICE Award for Outstanding Academic Title of the American Library Association and the Beckenbach Book Prize of the Mathematical Association of America. In 2018, Quinn was elected an officer-at-large member of the board of directors of the Mathematical Association of America (MAA). In 2020, Quinn joined the board of directors of the MAA as president-elect. Her term as president began in 2021. In 2022 she will become a fellow of the Association for Women in Mathematics, "For her outstanding achievements as a teacher, mentor, leader, expositor, and editor; for her pioneering service as AWM executive director; and for continued service as AWM volunteer and supporter." References External links Year of birth missing (living people) Living people 20th-century American mathematicians 21st-century American mathematicians Combinatorialists Williams College alumni University of Illinois Chicago alumni University of Wisconsin–Madison alumni Occidental College faculty University of Washington faculty 20th-century American women mathematicians 21st-century American women mathematicians
Jennifer Quinn
Mathematics
473
3,045,799
https://en.wikipedia.org/wiki/Capacity%20building
Capacity building (or capacity development, capacity strengthening) is the improvement in an individual's or organization's facility (or capability) "to produce, perform or deploy". The terms capacity building and capacity development have often been used interchangeably, although a publication by OECD-DAC stated in 2006 that capacity development was the preferable term. Since the 1950s, international organizations, governments, non-governmental organizations (NGOs) and communities have used the concept of capacity building as part of "social and economic development" in national and subnational plans. The United Nations Development Programme defines itself by "capacity development" in the sense of "how UNDP works" to fulfill its mission. The UN system applies it in almost every sector, including several of the Sustainable Development Goals to be achieved by 2030. For example, Sustainable Development Goal 17 advocates for enhanced international support for capacity building in developing countries to support national plans to implement the 2030 Agenda. Under the codification of international development law, capacity building is a "cross cutting modality of international intervention". It often overlaps with or forms part of interventions in public administration reform, good governance and education in line sectors of public services. The consensus approach of the international community to the components of capacity building, as established by the World Bank, United Nations and European Commission, consists of five areas: a clear policy framework, institutional development and legal framework, citizen participation and oversight, human resources improvements including education and training, and sustainability. Some of these overlap with other interventions and sectors. Much of the actual focus has been on training and educational inputs, where capacity building may be a euphemism for education and training. For example, UNDP focuses on training needs in its assessment methodology rather than on actual performance goals. The pervasive use of the term for these multiple sectors and elements and the huge amount of development aid funding devoted to it has resulted in controversy over its true meaning. There is also concern over its use and impacts. In international development funding, evaluations by the World Bank and other donors have consistently revealed problems in this overall category of funding dating back to the year 2000. Since the arrival of capacity building as a dominant subject in international aid, donors and practitioners have struggled to create a concise mechanism for determining the effectiveness of capacity building initiatives. An independent public measurement indicator for improvement and oversight of the large variety of capacity building initiatives was published in 2015. This scoring system is based on international development law and professional management principles. Definitions Capacity development A "good practice paper" by OECD-DAC defined capacity development as follows: "Capacity development is understood as the process whereby people, organizations and society as a whole unleash, strengthen, create, adapt and maintain capacity over time." Capacity is understood as "the ability of people, organizations and society as a whole to manage their affairs successfully". The OECD-DAC stated in 2006 that the term "capacity development" should be used rather than the term "capacity building".
This is because "capacity building" would imply starting from a plain surface and a step-by-step erection of a new structure - which is not how it works. The European Commission Toolkit defines capacity development in the same way and stresses that capacity relates to "abilities", "attributes" and a "process". It is an attribute of people, individual organizations and groups of organizations. Capacity is shaped by, adapting to and reacting to external factors and actors, but it is not something external — it is internal to people, organizations and groups or systems of organizations. Thus, capacity development is a change process internal to organizations and people. The United Nations Office for Disaster Risk Reduction (UNDRR), formerly the United Nations International Strategy for Disaster Reduction (UNISDR), defines capacity development in the disaster risk reduction domain as "the process by which people, organizations and society systematically stimulate and develop their capability over time to achieve social and economic goals, including through improvement of knowledge, skills, systems, and institutions – within a wider social and cultural enabling environment." Outside of international interventions, capacity building can refer to strengthening the skills of people and communities, in small businesses and local grassroots movements. Organizational capacity building is used by NGOs and governments to guide their internal development and activities as a form of managerial improvements following administrative practices. Community capacity building The United Nations Committee of Experts on Public Administration in 2006 offered an additional term, "community capacity building". It is defined as a long-term continual process of development that involves all stakeholders as opposed to practices which limit oversight and involvement in interventions with governments. The list of parties that it defines as "community" includes ministries, local authorities, non-governmental organizations, professionals, community members, academics and more. According to the Committee, capacity building takes place at an individual, an institutional, societal level and "non-training" level. The term "community capacity building" (CCB) began to be used in 1995 and since then became popular for example within the policy literature in the United Kingdom, particularly in the context of urban policy, regeneration and social development. It is, however, difficult to distinguish it from the practice of "community development". It is "built on a deficit model of communities which fails to engage properly with their own skills, knowledge and interests". Therefore, it does not properly address structural reasons for poverty and inequality. Components The World Bank, United Nations and European Commission describe capacity building to consist of five areas: a clear policy framework, institutional development and legal framework, citizen/democratic participation and oversight, human resources improvements including education and training, and sustainability. The United Nations Development Group Capacity Development Guidelines presents a framework of capacity development comprising three interconnected levels of capacity: Individual, Institutional and Enabling Policy. Thinking of capacity building as simply training or human resource development is insufficient. Evolution History The discourse on and concept of capacity development has traditionally been closely associated with development cooperation. 
The UNDP was one of the forerunners in designing international interventions in the category of capacity building and development. In the early 1970s, the UNDP offered guidance to its staff and governments on what it called "institution-building", which is one of the pillars of its current work and is part of a category of "public administration reform". In the 1970s, international organizations emphasized building capacity through technical skills training in rural areas, and also in the administrative sectors of developing countries. In the 1980s they expanded the concept of institutional development further. "Institutional development" was viewed as a long-term process of interventions in a developing country's government, public and private sector institutions, and NGOs. Under the UNDP's 2008–2013 "strategic plan for development", capacity building is the "organization's core contribution to development". The UNDP focused on building capacity at an institutional level and offers a six-step process for systematic capacity building. The six steps are: conduct a training needs assessment, engage stakeholders on capacity development, assess capacity needs and assets, formulate a capacity development response, implement the capacity development response, and evaluate capacity development. Trends Since about 2005, the capacity development agenda has also been adopted beyond the traditional aid community. This is particularly true for Africa: for example, the African Union has developed a Capacity Development Strategic Framework and is using capacity development as one of three themes to structure its Development Effectiveness internet portal. Trends in development cooperation shape how capacity development is discussed. These include, for example: new forms of financing and less of a North–South dichotomy; more in-country leadership and less donor power; resilience as a framework in fragile environments; and increasing private sector engagement. Global goals The UNDP integrated this capacity-building system into its work on reaching the Millennium Development Goals (MDGs) by the year 2015. The UNDP states that it focused on building capacity at the institutional level because it believed that "institutions are at the heart of human development, and that when they are able to perform better, [...] they can contribute more meaningfully to the achievement of national human development goals." The United Nations Sustainable Development Goals mention capacity building (rather than capacity development) in several places: Sustainable Development Goal 17 is to "Strengthen the means of implementation and revitalize the Global Partnership for Sustainable Development". Target 9 of that goal is formulated as "Enhance international support for implementing effective and targeted capacity-building in developing countries to support national plans to implement all the Sustainable Development Goals, including through North-South, South-South and triangular cooperation." Sustainable Development Goal 6 also includes capacity building in its Target 6a, which is to "By 2030, expand international cooperation and capacity-building support to developing countries in water- and sanitation-related activities and programmes, including water harvesting, desalination, water efficiency, wastewater treatment, recycling and reuse technologies".
Similarly, Sustainable Development Goal 8 Target 8.10 states "Strengthen the capacity of domestic financial institutions to encourage and expand access to banking, insurance and financial services for all". Scale As of 2009, some $20 billion per year of international development intervention funding went to capacity development, roughly 20% of total funding in this category. The World Bank committed more than $1 billion per year to this service in loans or grants (more than 10% of its portfolio of nearly $10 billion). A publication by OECD-DAC in 2005 estimated that "about a quarter of donor aid, or more than $15 billion a year, has gone into 'Technical Cooperation', the bulk of which is ostensibly aimed at capacity development". Processes for different entities Governments One of the most fundamental ideas associated with capacity building is the idea of building the capacities of governments in developing countries so they are able to handle problems associated with environmental protection and economic and social needs. Developing a government's capacity, whether at the local, regional or national level, can improve governance and can lead to sustainable development and political reform. Capacity building in governments often targets a government's ability to budget, collect revenue, create and implement laws, and promote civic engagement. Local communities and NGOs International donors often include capacity building as a form of intervention with local governments or NGOs working in developing areas. A study in 2001 observed that "the act of resetting aspirations and strategy is often the first step in improving an organization's capacity". Secondly, good management is important (committed people in senior positions to make capacity building happen). Thirdly, patience is required: "there are few quick fixes when it comes to building capacity". Some methods of capacity building for NGOs might include visiting training centers, organizing exposure visits, office and documentation support, on-the-job training, learning centers, and consultations. Private sector organizations For private sector organizations, capacity building may go beyond the improvement of services for public organizations and include fund-raising and income generation, diversity, partnerships and collaboration, marketing, positioning, planning and other activities relating to production and performance. Capacity development of private organizations involves the build-up of an organization's tangible and intangible assets. Organization development (OD) is the study and implementation of practices, systems, and techniques that affect organizational change, the goal of which is to modify an organization's performance and/or culture. Evaluation Challenges with evaluations The difficulties with achieving results from capacity development projects have regularly been described in a range of publications. For example, in 2006, a document by OECD-DAC stated that: "evaluation results confirm that development of sustainable capacity remains one of the most difficult areas of international development practice. Capacity development has been one of the least responsive targets of donor assistance, lagging behind progress in infrastructure development or improving health and child mortality". Since the arrival of capacity building as a dominant subject in international aid, donors and practitioners have struggled to create a concise mechanism for determining the effectiveness of capacity building initiatives.
Recognition of problems in capacity building interventions in evaluations funded and managed by international organizations dates back to the year 1999. A World Bank review in the year 2000 found many examples where capacity building interventions undermined public management efforts. In these cases, public sector reform and institution-building were hindered. In 2005, the Bank noted again in its evaluations that the business practices applied to its capacity building work were not as rigorous as those in other areas. For example, standard quality assurance processes were missing at the design stage. Similar problems were reported by UNDP in 2002 when they reviewed their capacity building projects. Effective evaluation and monitoring In 2007, specific criteria for effective evaluation and monitoring of the capacity building of NGOs were proposed, though only in generalities without clear measures for the tool. The proposal suggested only that evaluating the capacity building ability of NGOs should be based on a combination of monitoring the results of their activities and a more open, flexible way of monitoring that also takes into consideration self-improvement and cooperation. Other suggestions were that monitoring for capacity building effectiveness should include an organization's clarity of mission, leadership, learning, emphasis on on-the-job development, and monitoring processes. In 2007, USAID published a report on its approach to monitoring and evaluating capacity building. According to the report, USAID monitors program objectives, the links between projects and activities of an organization and its objectives, a program or organization's measurable indicators, data collection, and progress reports. USAID noted two types of indicators for progress: "output indicators" and "outcome indicators." Output indicators measure immediate changes or results such as the number of people trained. Outcome indicators measure the impact, such as laws changed due to trained advocates. Both the "numbers of people trained" and "laws changed" are, however, just inputs or intermediate inputs and do not measure actual improvements in "performance" in terms of measurable outcomes of public agencies, which are the definition of capacity building. Despite the claimed existence of these evaluation approaches, there was little more than lists of inputs and outputs without use of professional management standards or any kind of real oversight, and a report for the World Bank in 2009 noted that the failures were deep and systemic, where the measures used are "smile sheets", asking beneficiaries if they are "happy" or "better off" and measuring things like "raised awareness", "enhanced skills", and "improved teamwork" that are "locally driven", rather than on whether the underlying problems are solved, and refraining from asking whether there may be hidden agendas to buy influence, subsidize elites, and continue dependency. An independent public measurement indicator for improvement and oversight of the large variety of capacity building initiatives was published in 2015, with scoring, and based on international development law and professional management principles. This comprehensive indicator for capacity building was proposed as part of the elements codifying international development law in a treatise.
It consists of 20 specific elements that apply law, administrative principles, social science concepts, and education concepts, to troubleshoot the actual problems that occur and to promote public oversight and accountability. The indicator has two sections: one with 11 questions to assure proper application of the five recognized principles of capacity building, analyzing their application in diagnosis and design of an intervention (7 questions), sustainability of reform (2 questions), and good governance (2 questions), and a second with 9 questions to assure professionalism and safeguards against conflicts of interest, unintended consequences, and distortion of public and private systems. This indicator is one of 13 that are part of the treatise of international development law and can be applied with the other indicators for specific sectors and development principles, as well as assurance of quality of evaluation systems. Critique Critique of capacity development has centered on the ambiguity surrounding it in terms of its anticipated focus, its effectiveness, the role of infrastructure organisations (such as empowerment networks), and the unwillingness or inability of public agencies to apply their own principles and international law. Capacity building has been called a buzzword within development which comes with a heavy normative load but little critical interrogation and appropriate review. The term capacity building is usually "loaded with positive value". Despite some 20 years of recognition of the problems, practitioners continue to note that some capacity development projects are just "throwing money at symptoms with no logic or analysis". Others are "disguised bribes to government officials and attempts to undermine entire government structures by setting up foreign run Ministries and foreign influenced political parties or civil society to lobby for foreign interests" using the interventions as a form of "soft power". One common problem of interventions that focus on education and training of foreign government officials is that they are akin to trying to "teach elephants to fly" or to "teach wolves not to eat sheep" while avoiding the actual changes needed for impact. Under international development law, there is also concern that much of the implementation of capacity building has been and continues to be in violation of existing international treaties such as the U.N. Declaration Against Corruption and Bribery, Articles 15, 16, 18, and 19. Examples Below are examples of capacity building in developing countries: At state government level: In 1999, the UNDP supported capacity building of the state government in Bosnia and Herzegovina. The program focused on strengthening the state's government by fostering new organizational, leadership and management skills in government figures, and improved the government's technical abilities to communicate with the international community and civil society within the country. In India, the Sanitation Capacity Building platform (SCBP) was designed to "support and build the capacity of town/cities to plan and implement decentralized sanitation solutions" with funding by the Bill & Melinda Gates Foundation from 2015 to 2022. References Community development International development Non-profit technology Assistance
Capacity building
Technology
3,587
41,051,457
https://en.wikipedia.org/wiki/Gerardine%20DeSanctis
Gerardine L. (Gerry) DeSanctis (January 5, 1954 – August 16, 2005) was an American organizational theorist and information systems researcher and Thomas F. Keller Professor of Business Administration at Duke University, known for her work on group decision support systems and automated decision support. Biography DeSanctis received degrees in psychology, a bachelor's from Villanova University and a master's from Fairleigh Dickinson University. In 1982, she was granted a doctorate in management, with a focus on organizational behavior and information systems, from the Rawls College of Business at Texas Tech University. DeSanctis joined the Fuqua School of Business at Duke University in 1992, where from 2001 to 2005 she was Professor of Business Administration. She lectured in Duke's Global Executive MBA Program. She has been Visiting Professor at the Delft University of Technology, Erasmus University Rotterdam and INSEAD in France and in Singapore. DeSanctis has been a member of the editorial boards of Information Systems Research, Journal of Organizational Behavior, Management Science, MIS Quarterly, and Organization Science. In 2004 DeSanctis was awarded the Maurice Holland Award. In 2007, Organizational Communication & Information Systems (OCIS) initiated the Gerardine DeSanctis Dissertation Award. Work DeSanctis authored and co-authored many publications in the field of "learning in distributed teams and online communities". Theories of technology Theories of technology are adapted and augmented by researchers interested in the relationship between technology and social structures, such as information technology in organizations. DeSanctis and Poole proposed an "adaptive structuration theory" with respect to the emergence and use of group decision support systems. In particular, they chose Giddens' notion of modalities to consider how technology is used with respect to its "spirit". "Appropriations" are the immediate, visible actions that reveal deeper structuration processes and are enacted with "moves". Appropriations may be faithful or unfaithful, be instrumental and be used with various attitudes. Theories of technology of this kind are not defined or claimed by a proponent, but are used by authors in describing existing literature, in contrast to their own, or as a review of the field. DeSanctis and Poole (1994) wrote of three views of technology's effects: Decision-making: the view of engineers associated with positivist, rational, systems rationalization, and deterministic approaches Institutional school: technology is an opportunity for change, focuses on social evolution, social construction of meaning, interaction and historical processes, interpretive flexibility, and an interplay between technology and power An integrated perspective (social technology): soft-line determinism, with joint social and technological optimization, structural symbolic interaction theory Selected publications Burton, Richard M., Børge Obel, and Gerardine DeSanctis. Organizational design: A step-by-step approach. Cambridge University Press, 2011. Articles, a selection: Desanctis, Gerardine, and R. Brent Gallupe. "A foundation for the study of group decision support systems." Management science 33.5 (1987): 589–609. Poole, Marshall Scott, and Gerardine DeSanctis. "Understanding the use of group decision support systems: The theory of adaptive structuration." Organizations and communication technology 173 (1990): 191. DeSanctis, Gerardine, and Marshall Scott Poole.
"Capturing the complexity in advanced technology use: Adaptive structuration theory." Organization science 5.2 (1994): 121–147. References External links Gerardine DeSanctis Dissertation Award 2007 1954 births 2005 deaths American business theorists Information systems researchers Duke University faculty Fairleigh Dickinson University alumni Rawls College of Business alumni University of Minnesota faculty Villanova University alumni American organizational theorists
Gerardine DeSanctis
Technology
780
42,818,958
https://en.wikipedia.org/wiki/St%20Bathans%20fauna
The St Bathans fauna is found in the lower Bannockburn Formation of the Manuherikia Group of Central Otago, in the South Island of New Zealand. It comprises a suite of fossilised prehistoric animals from the late Early Miocene (Altonian) period, with an age range of 19–16 million years ago. The layer in which the fossils are found derives from littoral zone sediments deposited in a shallow, freshwater lake, with an area of 5600 km2 from present-day Central Otago to Bannockburn and the Nevis Valley in the west; to Naseby in the east; and from the Waitaki Valley in the north to Ranfurly in the south. The lake was bordered by an extensive floodplain containing herbaceous and grassy wetland habitats with peat-forming swamp–woodland. At that time the climate was warm and distinctly subtropical, similar to that of Australia, and the surrounding vegetation was characterised by casuarinas, eucalypts and palms as well as podocarps, araucarias and southern beeches. The fossiliferous layer has been exposed at places along the Manuherikia River and at other sites in the vicinity of the historic gold mining town of St Bathans. The fauna consists of a variety of vertebrates, including fish, a crocodilian, a rhynchocephalian (a relative of tuatara), geckos, skinks, a primitive mammal, several species of bats, and several kinds of birds, especially waterbirds. Of tree-dwelling birds, parrots outnumber pigeons thirty to one. Proapteryx, a basal form of kiwi, is known from there. The Miocene ecosystem was recovering from the ‘Oligocene drowning’ a few million years earlier, when up to 80% of the current land area of New Zealand was submerged. The wildlife that lived in, on, and around the palaeolake Manuherikia was uniquely New Zealand, strongly suggesting that some emergent land remained during this near drowning event. Marked global cooling and drying during the Miocene, Pliocene and the Pleistocene Ice Ages resulted in the extinction of the 'subtropical' elements of the St Bathans fauna. Those that survived adapted to the dynamic geological and climatic changes, and would form part of the enigmatic fauna that characterised New Zealand when humans arrived in the late 13th century. History of excavation Research on the St Bathans fauna is led by Trevor Worthy, a New Zealander based at Flinders University, Adelaide. Other key scientists involved include Jenny Worthy from Flinders University, Paul Scofield and Vanesa De Pietri from Canterbury Museum, and Alan Tennyson from the Museum of New Zealand Te Papa Tongarewa. In 2016 Vanesa De Pietri was awarded a Royal Society of New Zealand Marsden Fast Start grant to study the shorebird fossils. This long-running (since 2000) collaborative research programme also includes scientists from the University of New South Wales in Sydney and from the University of Queensland in Brisbane. Mammals Surprisingly, given modern New Zealand's dearth of land mammals, there is a basal theriiform mammal, the St Bathans mammal. Several species of mystacine bats are also known, as well as a vesper bat and several incertae sedis species. This bat fauna included Vulcanops, a giant burrowing bat three times the size of today’s relatives, and more closely related to South American bats. This suggests that small land mammals were a common component of New Zealand's fauna in the Miocene, with even bats being significantly more diverse than today. Birds New Zealand's two modern palaeognath clades, the kiwi and moa, have early representatives in the fauna.
The former is represented by the diminutive, possibly volant Proapteryx. The latter is represented by several bones and egg shells of currently unnamed species, but already identifiable as true moa, being large-sized and flightless. The fact that moa are already recognisably modern in anatomy, and possibly ecology, while kiwis are fairly unspecialised and probably still flighted, confirms previous suspicions that the two clades are not closely related and that they arrived in New Zealand independently: moa arrived and became flightless earlier in the Cenozoic, while kiwi were then recent arrivals. Anseriforms (waterfowl) dominate the fauna. At least nine species are recognised from St Bathans, making it the richest waterfowl fauna in the world. All the waterfowl species are unique to New Zealand. Bones attributable to the Cape Barren goose (Cereopsis spp.), thought to represent the ancestors of the extinct Pleistocene-Holocene Cnemiornis geese, and those of a second possible goose species have been found. In both instances, there is not enough material currently to erect species. Stiff-tailed ducks dominate the fauna with Manuherikia lacustrina, M. minuta, M. douglasi, Dunstanneta johnstoneorum and a further undescribed species of Manuherikia. One species of shelduck, Miotadorna sanctibathansi, has been found and is common. The dabbling duck Matanas enrightii remains poorly known as only a few fossils have been found. Palaelodids are ancient relatives of flamingos. The new species from St Bathans (Palaelodus aotearoa) is smaller than, and morphologically distinct from, the Late Oligocene-Early Miocene Palaelodus wilsoni from Australia. Two pigeon species have been described. Rupephaps is a large fruit pigeon, possibly related to the modern Hemiphaga species. The Zealandian dove is similar to the Nicobar pigeon. Several Gruiformes have been described. The St Bathans adzebill (Aptornis proasciarostratus) was only slightly smaller than its more recent descendants. There were two flightless rails: the common Priscaweka parvales and uncommon Litorallus livezeyi. Priscaweka parvales was no bigger than a sparrow. Charadriiformes, including gulls, terns, noddies, snipes, dotterels, plovers, jacanas, oystercatchers, sheathbills and the plains-wanderer, are a large group of birds that are mostly found in marine or semi-marine environments. There are about 350 species, and they are mostly small to medium-sized. Two of these are known from St Bathans: the New Zealand lake-wanderer (Hakawai melvillei), a relative of the plains-wanderer, and Sansom's plover (Neilus sansomae), a plover-like bird of uncertain affinities but possibly related to sheathbills and the Magellanic plover. Petrels are seabirds in the order Procellariiformes. This group includes albatrosses. Petrels today make up the majority of all seabird species, and the order is the only order of birds to be entirely marine. One species of petrel is known from the St Bathans Fauna – a diving petrel in the same genus as modern diving petrels, the Miocene diving petrel (Pelecanoides miokuaka). At least two herons are known: Pikaihao bartlei and Matuku otagoense. The former is a bittern, while the latter is a much larger species that appears to be basal within Ardeidae (the herons). One eagle, similar in size to a wedge-tailed eagle, and another bird of prey, similar in size to a small hawk, have been found, but await formal description. Two parrot genera are represented.
Heracles is represented by its sole species, Heracles inexpectatus, the largest known parrot, weighing 7 kilograms and standing 1 meter tall. Nelepsittacus is represented by at least four species. These vary drastically in size, suggesting that they occupied a wide variety of ecological niches, having diversified in the relative absence of other parrots. A New Zealand wren, Kuiornis indicator, is known from these deposits, possibly similar to the modern rifleman. Two or three other passerine species remain undescribed. Herpetofauna (amphibians and reptiles) The St Bathans fauna is rich in reptile and amphibian remains. Several groups present in modern New Zealand are represented, such as leiopelmatid frogs, a sphenodontian similar to the modern tuatara, geckos, and skinks. However, there are also several species not seen in modern-day New Zealand, such as a mekosuchine crocodile up to 3 metres in length and pleurodire and meiolaniid turtles. This suggests that New Zealand's herpetofauna was much richer in this epoch, probably because its climate was considerably warmer than today. Fish The vast majority of the bones excavated from St Bathans are those of freshwater fish such as the ancient relatives of today's bullies, galaxiids, and the extinct New Zealand grayling. Aquatic invertebrates As well as fishes, shellfish, including freshwater mussels, and freshwater crayfish dominated the aquatic life in the palaeolake Manuherikia. A new species of St Bathans freshwater limpet, Latia manuherikia, was described by malacologist Bruce Marshall in 2011. This was both the first known fossil Latia and the first record of this genus from the South Island. Absent taxa Notable examples of absent taxa include marsupials, snakes, agamid and varanid lizards, lungfish, eels, cockatoos, and all but one lineage (bellbirds and tūī) of the 80 species of Australian honeyeaters. References Fossils of New Zealand Geography of Otago Lagerstätten Miocene paleontological sites Paleontological sites of Otago Prehistoric fauna by locality Cenozoic paleobiotas
St Bathans fauna
Biology
2,068
1,446,517
https://en.wikipedia.org/wiki/Network%20simulation
In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities such as routers, switches, nodes, access points, links, etc. Most simulators use discrete event simulation, the modeling of systems in which state variables change at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network and its protocols would behave under different conditions. Network simulator A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Since communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior, network simulators are used. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today, such as 5G, Internet of Things (IoT), wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks and LTE. Simulations Most of the commercial simulators are GUI driven, while some network simulators are CLI driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet errors, etc.). Output results include network-level metrics, link metrics, device metrics, etc. Further drill-down, in the form of simulation trace files, is also available; trace files log every packet and every event that occurred in the simulation and are used for analysis. Most network simulators use discrete event simulation, in which a list of pending "events" is stored, and those events are processed in order, with some events triggering future events, such as the arrival of a packet at one node triggering the arrival of that packet at a downstream node; a minimal illustrative sketch of this event-queue loop is given below. Network emulation Network emulation allows users to introduce real devices and applications into a simulated test network that alters packet flow in such a way as to mimic the behavior of a live network. Live traffic can pass through the simulator and be affected by objects within the simulation. The typical methodology is that real packets from a live application are sent to the emulation server (where the virtual network is simulated). The real packet gets 'modulated' into a simulation packet. The simulation packet gets demodulated into a real packet after experiencing the effects of loss, errors, delay, jitter, etc., thereby transferring these network effects into the real packet. Thus it is as if the real packet flowed through a real network, when in reality it flowed through the simulated network. Emulation is widely used in the design stage for validating communication networks prior to deployment. List of network simulators There are both free/open-source and proprietary network simulators available. Examples of notable open source network simulators / emulators include: ns Simulator GloMoSim There are also some notable commercial network simulators. These include: OPNET (Riverbed) NetSim (Tetcos).
The source code is open, and NetSim Lite is available for free download for academic institutions and students. Uses of network simulators Network simulators provide a cost-effective method for 5G and 6G coverage, capacity, throughput and latency analysis Network R & D (more than 70% of all network research papers reference a network simulator) Defense applications such as UHF/VHF/L-Band radio based MANET radios, dynamic TDMA MAC, PHY waveforms, etc. IoT and VANET simulations UAV network/drone swarm communication simulation Machine learning for communication networks Education: online courses, lab experimentation, and R & D. Most universities use a network simulator for teaching and R & D since it is too expensive to buy hardware equipment. There are a wide variety of network simulators, ranging from the very simple to the very complex. Minimally, a network simulator must enable a user to Model the network topology, specifying the nodes on the network and the links between those nodes Model the application flow (traffic) between the nodes Provide network performance metrics such as throughput, latency, error, etc., as output Evaluate protocol and device designs Log radio measurements, packets and events for drill-down analyses and debugging See also Network emulation Traffic generation model References Computer networking Telecommunications engineering Computer network analysis Simulation Military radio systems
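The event-queue mechanism described in the article above can be illustrated with a short sketch. The following Python fragment is not taken from any particular simulator; the topology, node names, link delays and loss probability are hypothetical, and it only shows the core loop in which each processed event may schedule future events.

```python
import heapq
import random

# Minimal discrete-event sketch (illustrative only): a single packet traverses a
# chain of nodes, and each arrival event schedules the arrival at the next node
# after the link's delay, mirroring the pending-event list described above.

def simulate(topology, first_hop, start_time=0.0, loss_prob=0.0):
    """topology maps node -> (next_node, link_delay_seconds), or None at the sink."""
    events = [(start_time, first_hop)]          # pending events: (time, node)
    log = []
    while events:
        time, node = heapq.heappop(events)      # always process the earliest event
        if random.random() < loss_prob:         # optional random loss on arrival
            log.append((time, node, "lost"))
            continue
        log.append((time, node, "arrived"))
        next_hop = topology.get(node)
        if next_hop is not None:                # an arrival triggers a future arrival downstream
            next_node, delay = next_hop
            heapq.heappush(events, (time + delay, next_node))
    return log

# Hypothetical three-node chain: A -> B -> C with 5 ms and 12 ms link delays.
topo = {"A": ("B", 0.005), "B": ("C", 0.012), "C": None}
for t, node, outcome in simulate(topo, "A"):
    print(f"t={t * 1000:5.1f} ms  node={node}  {outcome}")
```

Real simulators add queuing, per-packet state, many event types and statistics collection, but the same ordered pending-event list drives them in the same way.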
Network simulation
Technology,Engineering
934
70,586,461
https://en.wikipedia.org/wiki/Auricularia%20americana
Auricularia americana is a species of fungus in the family Auriculariaceae found in North America and East Asia. Its basidiocarps (fruitbodies) are gelatinous, ear-like, and grow on dead conifer wood. Taxonomy The species was originally described in 1987 from Quebec on Abies balsamea, but was not validly published until 2003. Molecular research, based on cladistic analysis of DNA sequences, has shown that Auricularia americana is a distinct species. The species was formerly confused with Auricularia auricula-judae, which grows on broadleaf wood and is confined to Europe. Description Auricularia americana forms thin, brown, rubbery-gelatinous fruit bodies that are ear-shaped and across and about thick. The fruitbodies occur singly or in clusters. The upper surface is finely pilose. The spore-bearing underside is smooth. The spore print is white. Microscopic characters The microscopic characters are typical of the genus Auricularia. The basidia are tubular, laterally septate, 55–70 × 4–5 μm. The spores are allantoid (sausage-shaped), 14–16.5 × 4.5–5.5 μm. Similar species In North America, Auricularia angiospermarum is almost identical but grows on the wood of broadleaf trees. No other North American Auricularia species grows on conifer wood. In China and Tibet, however, a second species, A. tibetica, also occurs on conifers. It can be distinguished microscopically by its longer basidia and larger basidiospores. Additionally, A. nigricans, Exidia crenata, and Phylloscypha phyllogena are similar. Habitat and distribution Auricularia americana is a wood-rotting species, typically found on dead attached or fallen wood of conifers. It is widely distributed in North America (primarily in the Northeast, between April and September) and is also known from China and the Russian Far East. References Auriculariales Fungi of North America Fungi of China Fungus species
Auricularia americana
Biology
451
72,665,270
https://en.wikipedia.org/wiki/Uncertain%20geographic%20context%20problem
The uncertain geographic context problem or UGCoP is a source of statistical bias that can significantly impact the results of spatial analysis when dealing with aggregate data. The UGCoP is very closely related to the Modifiable areal unit problem (MAUP), and like the MAUP, arises from how we divide the land into areal units. It is caused by the difficulty, or impossibility, of understanding how phenomena under investigation (such as people within a census tract) in different enumeration units interact between enumeration units, and outside of a study area over time. It is particularly important to consider the UGCoP within the discipline of time geography, where phenomena under investigation can move between spatial enumeration units during the study period. Examples of research that needs to consider the UGCoP include food access and human mobility. The uncertain geographic context problem, or UGCoP, was first coined by Dr. Mei-Po Kwan in 2012. The problem is highly related to the ecological fallacy, edge effect, and Modifiable areal unit problem (MAUP) in that it relates to aggregate units as they apply to individuals. The crux of the problem is that the boundaries we use for aggregation are arbitrary and may not represent the actual neighborhood of the individuals within them. While a particular enumeration unit, such as a census tract, contains a person's location, they may cross its boundaries to work, go to school, and shop in completely different areas. Thus, the geographic phenomenon under investigation extends beyond the delineated boundary. Different individuals or groups may have completely different activity spaces, making an enumeration unit that is relevant for one person meaningless to another. For example, a map that aggregates people by school districts will be more meaningful when studying a population of students than the general population. Traditional spatial analysis, by necessity, treats each discrete areal unit as a self-contained neighborhood and does not consider the daily activity of crossing the boundaries. Implications The UGCoP has further implications when considering the area outside of a study area. Tobler's second law of geography states, "the phenomenon external to a geographic area of interest affects what goes on inside." As a study area is often a subset of the planet, data on the edges of the study area will be excluded. If the boundary demarcating the study area is permeable to travel, then the phenomena under investigation within it may extend beyond, and be impacted by, forces excluded from the analysis. This uncertainty contributes to the UGCoP. All maps are wrong, and a cartographer must ensure that their maps' limitations are well documented to avoid misleading the users. With modern technology, there is an emphasis on individual-level data and understanding how individuals interact with their environment. When making maps with this individual-level data, the UGCoP is one source of bias that can impact the results of an analysis. When these results inform policy, they can have real world ramifications. The UGCoP is particularly important when understanding food access and human mobility. Suggested solutions Geographic information systems, along with technologies that can monitor the position of individuals in real time, are possible methods for addressing the UGCoP. These technologies allow scientists to analyze and visualize the 3D space-time path of people moving through a study area, and better understand their actual activity space. 
Web GIS has also been employed to address the UGCoP by allowing researchers to better contextualize subjects' real and perceived activity space. These technologies have helped to address the problem by moving away from aggregate data and introducing a temporal component to the modeling of subject activity. See also Arbia's law of geography Automotive navigation system Collaborative mapping Concepts and Techniques in Modern Geography Counter-mapping Distributed GIS Geographic information systems in geospatial intelligence GIS and aquatic science GIS and public health GIS in archaeology Historical GIS Integrated Geo Systems List of GIS data sources List of GIS software Map database management Modifiable temporal unit problem Neighborhood effect averaging problem Participatory GIS QGIS Technical geography Tobler's first law of geography Tobler's second law of geography Traditional knowledge GIS Virtual globe References Bias Geographic information systems Problems in spatial analysis
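As a toy illustration of why a single enumeration unit may misrepresent an individual's geographic context, the sketch below builds a crude activity space as the convex hull of hypothetical GPS fixes and checks which of several made-up "tract" polygons it overlaps. The coordinates, the tract shapes, and the use of the third-party shapely library are assumptions for illustration only.

```python
from shapely.geometry import MultiPoint, box

# Hypothetical daily GPS fixes for one person (x, y in arbitrary map units)
fixes = [(0.2, 0.3), (0.4, 0.9), (1.6, 1.1), (2.3, 0.4), (1.1, 0.2)]
activity_space = MultiPoint(fixes).convex_hull  # crude activity-space estimate

# Three made-up square "census tracts"; the person resides in tract_a only
tracts = {
    "tract_a": box(0, 0, 1, 1),
    "tract_b": box(1, 0, 2, 1),
    "tract_c": box(2, 0, 3, 1),
}

touched = [name for name, poly in tracts.items() if activity_space.intersects(poly)]
print("Resident tract: tract_a")
print("Tracts overlapped by the activity space:", touched)
# If `touched` contains more than the resident tract, attributing contextual
# exposure solely to tract_a illustrates the uncertain geographic context problem.
```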
Uncertain geographic context problem
Technology
864
1,786,418
https://en.wikipedia.org/wiki/Technetium-99m%20generator
A technetium-99m generator, or colloquially a technetium cow or moly cow, is a device used to extract the metastable isotope 99mTc of technetium from a decaying sample of molybdenum-99. 99Mo has a half-life of 66 hours and can be easily transported over long distances to hospitals where its decay product technetium-99m (with a half-life of only 6 hours, inconvenient for transport) is extracted and used for a variety of nuclear medicine diagnostic procedures, where its short half-life is very useful. Parent isotope source 99Mo can be obtained by the neutron activation (n,γ reaction) of 98Mo in a high-neutron-flux reactor. However, the most frequently used method is through fission of uranium-235 in a nuclear reactor. While most reactors currently engaged in 99Mo production use highly enriched uranium-235 targets, proliferation concerns have prompted some producers to transition to low-enriched uranium targets. The target is irradiated with neutrons to form 99Mo as a fission product (with 6.1% yield). Molybdenum-99 is then separated from unreacted uranium and other fission products in a hot cell. Generator invention and history 99mTc remained a scientific curiosity until the 1950s when Powell Richards realized the potential of technetium-99m as a medical radiotracer and promoted its use among the medical community. While Richards was in charge of the radioisotope production at the Hot Lab Division of the Brookhaven National Laboratory, Walter Tucker and Margaret Greene were working on how to improve the separation process purity of the short-lived eluted daughter product iodine-132 from tellurium-132, its 3.2-day parent, produced in the Brookhaven Graphite Research Reactor. They detected a trace contaminant which proved to be 99mTc, which was coming from 99Mo and was following tellurium in the chemistry of the separation process for other fission products. Based on the similarities between the chemistry of the tellurium-iodine parent-daughter pair, Tucker and Greene developed the first technetium-99m generator in 1958. It was not until 1960 that Richards became the first to suggest the idea of using technetium as a medical tracer. Generator function and mechanism Technetium-99m's short half-life of 6 hours makes long-term storage impossible. Transport of 99mTc from the limited number of production sites to radiopharmacies (for manufacture of specific radiopharmaceuticals) and other end users would be complicated by the need to significantly overproduce to have sufficient remaining activity after long journeys. Instead, the longer-lived parent nuclide 99Mo can be supplied to radiopharmacies in a generator, after its extraction from the neutron-irradiated uranium targets and its purification in dedicated processing facilities. Radiopharmacies may be hospital-based or stand-alone facilities, and in many cases will subsequently distribute 99mTc radiopharmaceuticals to regional nuclear medicine departments. Developments in the direct production of 99mTc, without first producing the parent 99Mo, could preclude the use of generators; however, this is uncommon and relies on suitable production facilities close to radiopharmacies. Production Generators provide radiation shielding for transport and to minimize the extraction work done at the medical facility. A typical dose rate at 1 metre from a 99mTc generator is 20–50 μSv/h during transport. These generators' output declines with time and must be replaced weekly, since the half-life of 99Mo is still only 66 hours. 
Since the half-life of the parent nuclide (99Mo) is much longer than that of the daughter nuclide (99mTc), 50% of equilibrium activity is reached within one daughter half-life, 75% within two daughter half-lives. Hence, removing the daughter nuclide (elution process) from the generator ("milking" the cow) is reasonably done as often as every 6 hours in a 99Mo/99mTc generator. Separation Most commercial 99Mo/99mTc generators use column chromatography, in which 99Mo in the form of molybdate, MoO42− is adsorbed onto acid alumina (Al2O3). When the 99Mo decays it forms pertechnetate TcO4−, which, because of its single charge, is less tightly bound to the alumina. Pouring normal saline solution through the column of immobilized 99Mo elutes the soluble 99mTc, resulting in a saline solution containing the 99mTc as pertechnetate, with sodium as the counterion. The solution of sodium pertechnetate may then be added in an appropriate concentration to the pharmaceutical kit to be used, or sodium pertechnetate can be used directly without pharmaceutical tagging for specific procedures requiring only the 99mTcO4− as the primary radiopharmaceutical. A large percentage of the 99mTc generated by a 99Mo/99mTc generator is produced in the first 3 parent half-lives, or approximately one week. Hence, clinical nuclear medicine units purchase at least one such generator per week or order several in a staggered fashion. Isomeric ratio When the generator is left unused, 99Mo decays to 99mTc, which in turn decays to 99Tc. The half-life of 99Tc is far longer than its metastable isomer, so the ratio of 99Tc to 99mTc increases over time. Both isomers are carried out by the elution process and react equally well with the ligand, but the 99Tc is an impurity useless to imaging (and cannot be separated). The generator is washed of 99Tc and 99mTc at the end of the manufacturing process of the generator, but the ratio of 99Tc to 99mTc then builds up again during transport or any other period when the generator is left unused. The first few elutions will have reduced effectiveness because of this high ratio. References Radiopharmaceuticals Radioactivity Technetium-99m Medical physics
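As a back-of-the-envelope check on the ingrowth behaviour described above, the following sketch evaluates the standard parent–daughter (Bateman) relation for 99Mo/99mTc activities after an elution. The half-lives are taken from the text; the initial parent activity and the branching fraction of about 0.875 (the share of 99Mo decays assumed to pass through 99mTc) are illustrative assumptions.

```python
import math

T_PARENT = 66.0    # 99Mo half-life in hours (from the text)
T_DAUGHTER = 6.0   # 99mTc half-life in hours (from the text)
BRANCHING = 0.875  # assumed fraction of 99Mo decays that feed 99mTc

LAMBDA_P = math.log(2) / T_PARENT
LAMBDA_D = math.log(2) / T_DAUGHTER

def tc99m_activity(mo99_activity_at_elution, t_hours):
    """99mTc activity t hours after a complete elution (Bateman equation)."""
    return (BRANCHING * LAMBDA_D / (LAMBDA_D - LAMBDA_P)
            * mo99_activity_at_elution
            * (math.exp(-LAMBDA_P * t_hours) - math.exp(-LAMBDA_D * t_hours)))

if __name__ == "__main__":
    a0 = 100.0  # arbitrary 99Mo activity at elution time
    for t in (6, 12, 24):
        parent_now = a0 * math.exp(-LAMBDA_P * t)
        equilibrium = BRANCHING * LAMBDA_D / (LAMBDA_D - LAMBDA_P) * parent_now
        frac = tc99m_activity(a0, t) / equilibrium
        print(f"{t:2d} h after elution: {frac:.0%} of transient-equilibrium activity")
```

Run with these numbers, the sketch gives roughly 47% of the transient-equilibrium activity after 6 hours and about 72% after 12 hours, consistent with the 50% and 75% figures quoted above.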
Technetium-99m generator
Physics,Chemistry
1,278
1,270,572
https://en.wikipedia.org/wiki/Khmer%20numerals
Khmer numerals ០ ១ ២ ៣ ៤ ៥ ៦ ៧ ៨ ៩ are the numerals used in the Khmer language. They have been in use since at least the early 7th century. Numerals Having been derived from the Hindu numerals, modern Khmer numerals also represent a decimal positional notation system. It is the script with the first extant material evidence of zero as a numerical figure, dating its use back to the seventh century, two centuries before its certain use in India. Old Khmer, or Angkorian Khmer, also possessed separate symbols for the numbers 10, 20, and 100. Each multiple of 20 or 100 would require an additional stroke over the character, so the number 47 was constructed using the 20 symbol with an additional upper stroke, followed by the symbol for number 7. This inconsistency with its decimal system suggests that spoken Angkorian Khmer used a vigesimal system. As both Thai and Lao scripts are derived from Old Khmer, their modern forms still bear many resemblances to the latter, demonstrated in the following table: Modern Khmer numbers The spoken names of modern Khmer numbers represent a biquinary system, with both base 5 and base 10 in use. For example, 6 () is formed from 5 () plus 1 (). Numbers from 0 to 5 With the exception of the number 0, which stems from Sanskrit, the etymology of the Khmer numbers from 1 to 5 is of proto-Austroasiatic origin. For details of the various alternative romanization systems, see Romanization of Khmer. Some authors may alternatively mark as the pronunciation for the word two, and either or for the word three. In neighbouring Thailand the number three is thought to bring good luck. However, in Cambodia, taking a picture with three people in it is considered bad luck, as it is believed that the person situated in the middle will die an early death. Comparison to other Austroasiatic languages 1-5 Whilst Vietnamese vocabulary is very Sinicized, the numbers 1-5 retain proto-Austroasiatic origins. Numbers from 6 to 20 The numbers from 6 to 9 may be constructed by adding any number between 1 and 4 to the base number 5 (), so that 7 is literally constructed as 5 plus 2. Beyond that, Khmer uses a decimal base, so that 14 is constructed as 10 plus 4, rather than 2 times 5 plus 4; and 16 is constructed as 10+5+1. Colloquially, compound numbers from eleven to nineteen may be formed using the word preceded by any number from one to nine, so that 15 is constructed as , instead of the standard . In constructions from 6 to 9 that use 5 as a base, may alternatively be pronounced ; giving , , , and . This is especially true in dialects which elide , but not necessarily restricted to them, as the pattern also follows Khmer's minor syllable pattern. Numbers from 30 to 90 The modern Khmer numbers from 30 to 90 are as follows: The word , which appears in each of these numbers, can be dropped in informal or colloquial speech. For example, the number 81 can be expressed as instead of the full . Historically speaking, Khmer borrowed the numbers from 30 to 90 from a southern Middle Chinese variety by way of a neighboring Tai language, most likely Thai. This is evidenced by the fact that the numbers in Khmer most closely resemble those of Thai, as well as the fact that the numbers cannot be deconstructed in Khmer. For instance, is not used on its own to mean "four" in Khmer and is not used on its own to mean "ten", while they are in Thai (see Thai numerals). The table below shows how the words in Khmer compare to other nearby Tai and Sinitic languages. 
Words in parentheses indicate literary pronunciations, while words preceded by an asterisk only occur in specific constructions and are not used for basic numbers from 3 to 10. Prior to using a decimal system and adopting these words, Khmer used a base 20 system, so that numbers greater than 20 were formed by multiplying or adding on to the cardinal number for twenty. Under this system, 30 would have been constructed as (20 × 1) + 10 "twenty-one ten" and 80 was constructed as 4 × 20 "four twenties / four scores". See the section Angkorian numbers for details. Numbers from 100 to 10,000,000 The standard Khmer numbers starting from one hundred are as follows: Although is most commonly used to mean ten million, in some areas this is also colloquially used to refer to one billion (which is more properly ). In order to avoid confusion, sometimes is used to mean ten million, along with for one hundred million, and ("one thousand million") to mean one billion. Different Cambodian dialects may also employ different base number constructions to form greater numbers above one thousand. A few such constructions can be observed in the following table: Counting fruits Reminiscent of the standard base 20 Angkorian Khmer numbers, the modern Khmer language also possesses separate words used to count fruits, not unlike how English uses words such as a "dozen" for counting items such as eggs. Sanskrit and Pali influence As a result of prolonged literary influence from both the Sanskrit and Pali languages, Khmer may occasionally use borrowed words for counting. Generally speaking, aside from a few exceptions such as the numbers for 0 and 100 for which the Khmer language has no equivalent, they are more often restricted to literary, religious, and historical texts than they are used in day-to-day conversations. One reason for the decline of these numbers is that a Khmer nationalism movement, which emerged in the 1960s, attempted to remove all words of Sanskrit and Pali origin. The Khmer Rouge also attempted to cleanse the language by removing all words which were considered politically incorrect. Ordinal numbers Khmer ordinal numbers are formed by placing the word in front of a cardinal number. This is similar to the use of ที่ thi in Thai, and thứ (次) in Vietnamese. Angkorian numbers It is generally assumed that the Angkorian and pre-Angkorian numbers also represented a dual base (quinquavigesimal) system, with both base 5 and base 20 in use. Unlike modern Khmer, the decimal system was highly limited, with both the numbers for ten and one hundred being borrowed from the Chinese and Sanskrit languages respectively. Angkorian Khmer also used Sanskrit numbers for recording dates, sometimes mixing them with Khmer originals, a practice which has persisted until the last century. The numbers for twenty, forty, and four hundred may be followed by multiplying numbers, with additional digits added on at the end, so that 27 is constructed as twenty-one-seven, or 20×1+7. Proto-Khmer numbers Proto-Khmer is the hypothetical ancestor of the modern Khmer language bearing various reflexes of the proposed proto-Mon–Khmer language. By comparing both modern Khmer and Angkorian Khmer numbers to those of other Eastern Mon–Khmer (or Khmero-Vietic) languages such as Pearic, Proto-Viet–Muong, Katuic, and Bahnaric, it is possible to establish the following reconstructions for Proto-Khmer. Numbers from 5 to 10 Contrary to later forms of the Khmer numbers, Proto-Khmer possessed a single decimal number system. 
The numbers from one to five correspond to both the modern Khmer language and the proposed Mon–Khmer language, while the numbers from six to nine do not possess any modern remnants, with the number ten *kraaj (or *kraay) corresponding to the modern number for one hundred. It is likely that the initial *k, found in the numbers from six to ten, is a prefix. Notes References General Specific Numerals Numerals
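Since the modern spoken system described above mixes base 5 and base 10, a small sketch can make the construction concrete. The romanized number words below are rough approximations used only as labels, and the decomposition rule is a simplification of the standard forms described in the article (it ignores, for example, the colloquial eleven-to-nineteen variants).

```python
# Rough romanizations used purely as labels; spellings are approximate.
UNITS = {0: "sohn", 1: "muoy", 2: "pii", 3: "bei", 4: "buon", 5: "pram"}
TENS = {10: "dap", 20: "mphei", 30: "samsep", 40: "saesep", 50: "hasep",
        60: "hoksep", 70: "chetsep", 80: "paetsep", 90: "kawsep"}

def khmer_spoken(n):
    """Decompose 0-99 into the quinary-decimal spoken pattern:
    digits 6-9 are said as 5 + (1..4); each ten from 10 to 90 has its own word."""
    if not 0 <= n <= 99:
        raise ValueError("sketch only handles 0-99")
    parts = []
    tens, units = divmod(n, 10)
    if tens:
        parts.append(TENS[tens * 10])
    if units or not tens:
        if units <= 5:
            parts.append(UNITS[units])
        else:
            parts.append(UNITS[5] + "-" + UNITS[units - 5])  # e.g. 7 = 5 + 2
    return " ".join(parts)

if __name__ == "__main__":
    for n in (7, 14, 16, 81):
        print(n, "->", khmer_spoken(n))  # 7 -> pram-pii, 16 -> dap pram-muoy, ...
```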
Khmer numerals
Mathematics
1,580
3,080,690
https://en.wikipedia.org/wiki/Processing%20medium
In industrial engineering, a processing medium is a gaseous, vaporous, fluid or shapeless solid material that plays an active role in manufacturing processes - comparable to that of a tool. Examples A processing medium for washing is a soap solution, a processing medium for steel melting is a plasma, and a processing medium for steam drying is superheated steam. Synonyms Operating medium Working medium. Engineering concepts
Processing medium
Engineering
82
39,787,042
https://en.wikipedia.org/wiki/Wind%20Talker%20sound%20suppressor
The Wind Talker sound suppressor is a direct-connect sound suppressor made by Smith Enterprise Inc. for use by the US military on M14 rifles and M4 carbines that utilize a Vortex Flash Hider. It is an improvement over the older M14 Direct Connect (M14 DC) sound suppressor. History The M14 is an accurate and reliable rifle, but it has had less than optimum results from being used with a sound suppressor. There are a number of factors which contribute to this. The first pertains to the fine threads on the muzzle which were not intended for attaching and removing different muzzle devices. The second is the rifle's gas system and the diversion of gases from the suppressor toward the face of the shooter. In 2003 Ron Smith and Richard Smith of Smith Enterprise, Inc. and Dave Fisher of Fisher Enterprises developed and produced an M14 sound suppressor that quickly attached to and detached from the Smith Enterprise, Inc. Vortex flash hider for US troops in the wars in Iraq and Afghanistan. Prior attempts at suppressing the M14 included removing the rifle's existing flash suppressor with an attached sight and sometimes the gas lock, which could result in losing parts for the rifle and rendering it ineffective. The initial version was called the M14 Direct Connect, or M14 DC, and was a factory supplied part of the M14SE/M21A5 and Mk 14 Enhanced Battle Rifle systems. The M14 Direct Connect was unique in that it was the first sound suppressor expressly built for the M14 rifle that could be disassembled in the field for routine maintenance as opposed to returning it to the manufacturer. In 2006 the body of the suppressor was changed to titanium and this dropped the weight of the suppressor from over 2 lbs to 1.42 lbs. In 2009 Fisher designed an optional locking collar for use on the M1A SOCOM rifle, which uses a different style of flash suppressor/muzzle brake. As the war effort began winding down in 2011, Fisher Enterprises and Smith Enterprise ceased their collaboration and Smith made some changes to the M14 DC and renamed the new model as the Wind Talker. These features included a new baffle design, a shortened locking collar made from nitrocarburized stainless steel and revolver cylinder cuts on the coupler which allowed propellant gas to exit just forward of the locking collar. Specifications The Wind Talker is constructed with a stainless steel tube and a lightweight version is offered with an aluminum tube. The decibel rating level is 25 dB. The Wind Talker can be mounted to any rifle with either a 7.62 mm or 5.56 mm bore diameter as long as it utilizes a Vortex flash hider. The Wind Talker sound suppressor is in the military inventory system as NATO Stock Number: NSN 1005-LLL-997965, utilizing the same NSN number as the M14DC which it replaced. References Firearm components
Wind Talker sound suppressor
Technology
602
2,095,495
https://en.wikipedia.org/wiki/Loot%20%28video%20games%29
In video games, loot is the collection of items picked up by the player character that increase their power or level up their abilities, such as currency, spells, equipment and weapons. Loot is meant to reward the player for progressing in the game, and can be of superior quality to items that can be purchased. It can also be part of an upgrade system that permanently increases the player's abilities. Loot boxes are a particular type of randomized loot system that consists of boxes that can be unlocked through normal play, or by purchasing more via microtransaction. Functions Early computer role-playing games such as SSI's Gold Box series rewarded player progress with in-game treasure, which was typically preset in the games' programming. Recent games tend to randomly or procedurally generate loot, with better loot such as more powerful weapons or stronger armor obtained from more difficult challenges. The random nature of loot was established in the roguelike genre of games and made mainstream through Blizzard Entertainment's Diablo which was based on roguelike design principles. Fixed items, determined essential for game progress, may also drop alongside random loot. In single-player games, loot is often obtained as treasure through exploration or looted from defeated enemies, and loot is considered distinct from items purchased from in-game shops. In multiplayer games, loot may be provided in such a manner that only one player may acquire any given item. "Ninja-looting" is the resulting practice of looting items off enemies defeated by other players. Players may choose to employ a loot system to distribute their spoils. In a PVP situation, loot may be taken from a defeated player. In role-playing video games or loot shooters, loot often forms the core economy of the game, in which the player fights to obtain loot and then uses it to purchase other items. Loot is often assigned to tiers of rarity, with the rarer items being more powerful and more difficult to obtain. The various tiers of rarity are often indicated by particular colors that allow a player to quickly recognize the quality of their loot. The concept of color-coded loot rarity was initially popularized with the 1996 game Diablo and its 2000 sequel Diablo II, whose designer, David Brevik, took the idea from the roguelike video game Angband. In Diablo, equippable items were either white (normal), blue (magic) or gold (unique), and Diablo II expanded on this with either grey (inferior), white (common), blue (magic), yellow (rare), orange (unique) or green (set). Blizzard Entertainment later re-used the system for the 2004 game World of Warcraft, where items were either grey (poor), white (common), green (uncommon), blue (rare), purple (epic) or orange (legendary). Following World of Warcraft's popularity, most loot-driven games have since based their own system off this same color-coding hierarchy (e.g. Titan Quest, Borderlands, Overwatch, Torchlight, Destiny, and Fortnite). The quality of loot often scales with the tiers but not always, and higher tier loot can sometimes only be found in later stages of the game. Loot boxes Loot boxes are a particular type of randomized loot system that consists of boxes that can be unlocked through normal play, or by purchasing more via microtransaction. They originated in massively multiplayer online role-playing games and mobile games, but have since been adopted by many AAA console games in recent years. 
The system has garnered a great deal of controversy for being too similar to gambling, along with giving players a means to circumvent normal progression through additional monetary transactions. Games that allow for certain players to have unfair advantages over other players via paid loot boxes are referred to as "pay-to-win" by critics. References MUD terminology Video game terminology
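As a minimal illustration of the tiered, randomized drops described above, the sketch below rolls a loot rarity from a weighted table. The tier names follow the common/uncommon/rare/epic/legendary hierarchy mentioned in the article, while the drop weights are invented purely for the example.

```python
import random

# Rarity tiers loosely following the color-coded hierarchy described above;
# the weights are made up for illustration, not taken from any game.
LOOT_TABLE = [
    ("common",    60),
    ("uncommon",  25),
    ("rare",      10),
    ("epic",       4),
    ("legendary",  1),
]

def roll_loot(rng=random):
    """Pick one rarity tier with probability proportional to its weight."""
    tiers, weights = zip(*LOOT_TABLE)
    return rng.choices(tiers, weights=weights, k=1)[0]

if __name__ == "__main__":
    drops = [roll_loot() for _ in range(20)]
    print(drops)
```

Real games typically layer further rolls on top of this (item type, stats, level scaling), but a weighted table of this kind is the basic building block of most randomized loot systems.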
Loot (video games)
Technology
827
25,790,152
https://en.wikipedia.org/wiki/Ecological%20triage
Ecological triage refers to the decision making of environmental conservation using the concepts of medical triage. In medicine, the allocation of resources in an urgent situation is prioritized for those with the greatest need and those who would receive the greatest benefit. Similarly, the two parameters of ecological triage are the level of threat and the probability of ecological recovery. Because there are limitations to resources such as time, money, and manpower, it is important to prioritize specific efforts and distribute resources efficiently. Ecological triage differentiates between areas with an attainable emergent need, those that would benefit from preventive measures, and those that are beyond repair. Methods Ecological triage is not simple, dichotomous decision making. It involves a complex array of factors including assumptions, mathematical calculations, and planning for uncertainties. When assessing an ecosystem, there are a myriad of factors conservationists consider, but there are also variables which they are unable to account for. Conservationists and scientists often have incomplete understanding of population dynamics, impacts of external threats, and efficacy of different conservation tactics. It is important to incorporate these unknowns when assessing a population or ecosystem. By following the principles of triage, resources can be allocated efficiently while conservationists continue to develop the best options for ecological preservation and restoration. Info-Gap Decision Model Due to the multitude of variables within a population or ecosystem, it is important to address the unknown factors which may not initially be accounted for. Many ecologists utilize info-gap decision theory, which focuses on strategies that are most likely to succeed despite uncertainties. This process is composed of three main elements: Mathematical calculations which assess performance as a result of management. This step determines the number of management options, evaluates the existing subpopulations, estimates the management period, and assesses the impact of inaction. Expectations of performance. To evaluate performance, an extinction-investment curve is utilized. It evaluates data regarding the probability of species extinction (without intervention), budget allocation, and the budget required to halve the probability of extinction. This step sets forth a threshold below which performance is considered unacceptable. A model describing uncertainty. The uncertainty model examines the possible values which may render the extinction-investment curve incorrect. It considers how factors may vary in dynamic situations and creates a function of uncertainty. Criticisms Some critics of environmental triage believe the process chooses "winners" and "losers" and therefore abandons certain demographics. Other criticism argues that ecological triage allows the government to justify under-funding environmental programs. By utilizing a formal decision-making model, the government can deem certain projects as a lost cause and choose to withhold funding. Critics and supporters alike stress the necessity of expanding the environmental budget to provide the best conservation and restoration efforts. References Ecology
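To make the extinction-investment idea above concrete, here is a toy sketch in which each project's extinction probability halves for every assumed "halving budget" invested, and a fixed total budget is allocated greedily to whichever project gains the most from the next increment of spending. All numbers and the greedy allocation rule are illustrative assumptions, not part of any published triage protocol.

```python
# Toy extinction-investment curves: risk halves for each "halving budget" spent.
projects = {
    # name: (baseline extinction probability, budget needed to halve it)
    "wetland_frog":   (0.80, 2.0),
    "island_parrot":  (0.60, 5.0),
    "prairie_orchid": (0.30, 1.0),
}

def risk(p0, halving_budget, spend):
    """Extinction probability after spending `spend` on this project."""
    return p0 * 0.5 ** (spend / halving_budget)

def allocate(total_budget, step=0.5):
    """Greedy triage: each budget step goes to the project whose risk drops
    the most for that step (an illustrative rule, not a standard method)."""
    spend = {name: 0.0 for name in projects}
    budget = total_budget
    while budget >= step:
        def gain(name):
            p0, hb = projects[name]
            return risk(p0, hb, spend[name]) - risk(p0, hb, spend[name] + step)
        best = max(projects, key=gain)
        spend[best] += step
        budget -= step
    return spend

if __name__ == "__main__":
    plan = allocate(total_budget=6.0)
    for name, s in plan.items():
        p0, hb = projects[name]
        print(f"{name}: spend {s:.1f}, risk {p0:.2f} -> {risk(p0, hb, s):.2f}")
```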
Ecological triage
Biology
566
50,852,937
https://en.wikipedia.org/wiki/Morchella%20rigidoides
Morchella rigidoides is a species of fungus in the family Morchellaceae. Described as new to science in 1966 by Roger Heim, it is found in Papua New Guinea. References External links rigidoides Edible fungi Fungi described in 1966 Fungi of New Guinea Fungus species
Morchella rigidoides
Biology
57
53,571,464
https://en.wikipedia.org/wiki/Orphaned%20wells%20in%20the%20United%20States
Though different jurisdictions have varying criteria for what exactly qualifies as an orphaned or abandoned oil well, generally speaking, an oil well is considered abandoned when it has been permanently taken out of production. Similarly, orphaned wells may have different legal definitions across different jurisdictions, but can be thought of as wells whose legal owner it is not possible to determine. Once a well is abandoned, it can be a source of toxic emissions and pollution, contaminating groundwater and releasing methane, making orphan wells a significant contributor to national greenhouse gas emissions. For this reason, several state and federal programs have been initiated to plug wells; however, many of these programs are under capacity. In states like Texas and New Mexico, these programs do not have enough funding or staff to fully evaluate and implement mitigation programs. North Dakota dedicated $66 million of its CARES Act pandemic relief funds for plugging and reclaiming abandoned and orphaned wells. According to the Government Accountability Office, the 2.1 million unplugged abandoned wells in the United States could cost as much as $300 billion. A joint Grist and The Texas Observer investigation in 2021 highlighted how government estimates of abandoned wells in Texas and New Mexico were likely too low and that market forces might have reduced prices enough to create peak oil conditions that would lead to more abandonment. Advocates of programs like the Green New Deal and broader climate change mitigation policy in the United States have advocated for funding plugging programs that would address stranded assets and provide a Just Transition for skilled oil and gas workers. The REGROW Act, which is part of the Infrastructure Investment and Jobs Act, includes $4.7 billion in funds for plugging and maintaining orphaned wells. The Interior Department has documented the existence of 130,000 orphaned wells nationwide. An EPA study estimated that there are as many as two to three million wells across the nation. New York State is expecting to receive $70 million from the Act in 2022, which will be used to plug orphaned wells. The state has 6,809 orphaned wells, and the NYSDEC estimates it will cost $248 million to plug them all. The NYSDEC uses a fleet of drones carrying magnetometers to find orphaned wells. In 2023, state governments in Pennsylvania, Ohio, and California reported a shortage of trained staff necessary to implement federally funded well capping programs. Qualified oil field workers were also in short supply in Pennsylvania and Ohio. Federally funded well plugging contracts are required to meet Davis-Bacon Act standards for prevailing wages, in order to ensure that the training of new oil field workers will contribute to local economic development in rural areas. State definitions State legislatures in the United States have specific definitions based on local needs and priorities. For example, the section on abandoned wells in Texas' Natural Resource Code defines an "inactive well" as "an unplugged well that has had no reported production, disposal, injection, or other permitted activity for a period of greater than 12 months." Pennsylvania's definition of abandoned well includes not producing for 12 months, "considered dry and not equipped for production within 60 days after drilling, re-drilling or deepening, and from which the equipment needed to extract resources or produce energy has been removed." 
Ohio legislation defines "idle and orphaned wells" based on whether or not a well bond has been forfeited or the money to plug it is unavailable. It defines a "temporary inactive well status" as not having produced for two (non-horizontal wells) or eight (horizontal wells) statutorily defined reporting periods or one that has produced "less than 100,000 cubic feet of natural gas or 15 barrels of crude oil." Environmental impacts Orphaned and abandoned wells can cause environmental damage by leaking pollutants into the atmosphere or water supplies. Important determinants of how much orphaned and abandoned wells impact the environment include the techniques used and precautions taken when first drilling the well, whether it is a gas well, oil well, or combined oil and gas well, and if and how the well was sealed. If wells are not properly sealed when orphaned or abandoned, they can allow oil and gas to contaminate groundwater. It is also possible for orphaned and abandoned wells to be significant emitters of methane and hydrogen sulfide into the atmosphere. Furthermore, brine present in wells dug into shale formations can contain some radioactive and toxic substances that contaminate groundwater if the well leaks. Plugging wells can reduce the risk of explosions and protect groundwater, but does not always prevent methane emissions. The costs to mitigate the impact of orphaned and abandoned wells varies, but may include removing all equipment from the site, restoring the land and topsoil, and planting local species, in addition to plugging the well itself. For example, plugging a well and restoring the surrounding land costs an average of $100,000 for wells in the Marcellus Shale. One problem with studying the impacts of orphaned and abandoned wells is that data about them can be scarce and incomplete. In the United States, it is possible for wells to have been orphaned or abandoned for over a century, and information about them, if it exists at all, can be difficult to find. Responses One way to encourage well owners not to abandon or orphan wells and to make sure wells are safely abandoned is to use well bonds. These are bonds paid by well operators to a surety company and are held by an obligee (state or federal entity) until the well has been satisfactorily plugged and the land surface restored. A significant challenge of making well bonds an effective policy tool is to set their price to a point that does not make market entry prohibitively expensive, but also does not incentivize well operators to forfeit the bond instead of undertaking the abandonment requirements specified in local law. Another way to encourage well owners not to abandon or orphan wells is to retrofit oil and gas wells to produce geothermal energy. One benefit of this approach is that it is less expensive to retrofit an abandoned well to produce geothermal energy than it is to drill a new oil or gas well. It also saves the cost of exploring sites for geothermal fields. Avoiding new exploration and drilling avoids the environmental impacts of these activities. However, geothermal fluids can contain environmentally hazardous chemicals such as hydrogen sulfide, ammonia, methane, arsenic, mercury, and lead. A third option is to mandate that well operators establish reclamation trusts which would be used to pay reclamation costs if the operator does not perform the necessary plugging and land restoration within a given time period after abandoning the well. 
This policy option has been used to mitigate the environmental impact of mines in the United States as part of a combined command-and-control and market incentive policy response to environmental protection. One risk attached to this policy option is that if wells become economically unproductive before the period planned for in the trust agreement, the abandoned well could become a liability held by the relevant government authority. A Montana-based non-profit, the Well Done Foundation, was founded in 2019 by a retired oil and gas industry executive to start plugging wells, one well at a time. See also Orphan wells in Alberta, Canada References Abandoned buildings and structures in the United States Oil wells Environmental mitigation Mining in the United States
Orphaned wells in the United States
Chemistry,Engineering
1,472
2,082
https://en.wikipedia.org/wiki/Aeronautics
Aeronautics is the science or art involved with the study, design, and manufacturing of air flight-capable machines, and the techniques of operating aircraft and rockets within the atmosphere. While the term originally referred solely to operating the aircraft, it has since been expanded to include technology, business, and other aspects related to aircraft. The term "aviation" is sometimes used interchangeably with aeronautics, although "aeronautics" includes lighter-than-air craft such as airships, and includes ballistic vehicles while "aviation" technically does not. A significant part of aeronautical science is a branch of dynamics called aerodynamics, which deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft. History Early ideas Attempts to fly without any real aeronautical understanding have been made from the earliest times, typically by constructing wings and jumping from a tower with crippling or lethal results. Wiser investigators sought to gain some rational understanding through the study of bird flight. Medieval Islamic Golden Age scientists such as Abbas ibn Firnas also made such studies. The founders of modern aeronautics, Leonardo da Vinci in the Renaissance and Cayley in 1799, both began their investigations with studies of bird flight. Man-carrying kites are believed to have been used extensively in ancient China. In 1282 the Italian explorer Marco Polo described the Chinese techniques then current. The Chinese also constructed small hot air balloons, or lanterns, and rotary-wing toys. An early European to provide any scientific discussion of flight was Roger Bacon, who described principles of operation for the lighter-than-air balloon and the flapping-wing ornithopter, which he envisaged would be constructed in the future. The lifting medium for his balloon would be an "aether" whose composition he did not know. In the late fifteenth century, Leonardo da Vinci followed up his study of birds with designs for some of the earliest flying machines, including the flapping-wing ornithopter and the rotating-wing helicopter. Although his designs were rational, they were not based on particularly good science. Many of his designs, such as a four-person screw-type helicopter, have severe flaws. He did at least understand that "An object offers as much resistance to the air as the air does to the object." (Newton would not publish the Third law of motion until 1687.) His analysis led to the realisation that manpower alone was not sufficient for sustained flight, and his later designs included a mechanical power source such as a spring. Da Vinci's work was lost after his death and did not reappear until it had been overtaken by the work of George Cayley. Balloon flight The modern era of lighter-than-air flight began early in the 17th century with Galileo's experiments in which he showed that air has weight. Around 1650 Cyrano de Bergerac wrote some fantasy novels in which he described the principle of ascent using a substance (dew) he supposed to be lighter than air, and descending by releasing a controlled amount of the substance. Francesco Lana de Terzi measured the pressure of air at sea level and in 1670 proposed the first scientifically credible lifting medium in the form of hollow metal spheres from which all the air had been pumped out. These would be lighter than the displaced air and able to lift an airship. 
His proposed methods of controlling height are still in use today: by carrying ballast which may be dropped overboard to gain height, and by venting the lifting containers to lose height. In practice de Terzi's spheres would have collapsed under air pressure, and further developments had to wait for more practicable lifting gases. From the mid-18th century the Montgolfier brothers in France began experimenting with balloons. Their balloons were made of paper, and early experiments using steam as the lifting gas were short-lived due to its effect on the paper as it condensed. Mistaking smoke for a kind of steam, they began filling their balloons with hot smoky air which they called "electric smoke" and, despite not fully understanding the principles at work, made some successful launches and in 1783 were invited to give a demonstration to the French Académie des Sciences. Meanwhile, the discovery of hydrogen led Joseph Black to propose its use as a lifting gas, though practical demonstration awaited a gas-tight balloon material. On hearing of the Montgolfier Brothers' invitation, the French Academy member Jacques Charles offered a similar demonstration of a hydrogen balloon. Charles and two craftsmen, the Robert brothers, developed a gas-tight material of rubberised silk for the envelope. The hydrogen gas was to be generated by chemical reaction during the filling process. The Montgolfier designs had several shortcomings, not least the need for dry weather and a tendency for sparks from the fire to set light to the paper balloon. The manned design had a gallery around the base of the balloon rather than the hanging basket of the first, unmanned design, which brought the paper closer to the fire. On their free flight, De Rozier and d'Arlandes took buckets of water and sponges to douse these fires as they arose. On the other hand, the manned design of Charles was essentially modern. As a result of these exploits, the hot air balloon became known as the Montgolfière type and the gas balloon the Charlière. Charles and the Robert brothers' next balloon, La Caroline, was a Charlière that followed Jean Baptiste Meusnier's proposals for an elongated dirigible balloon, and was notable for having an outer envelope with the gas contained in a second, inner ballonet. On 19 September 1784, it completed the first flight of over 100 km, between Paris and Beuvry, despite the man-powered propulsive devices proving useless. In an attempt the next year to provide both endurance and controllability, de Rozier developed a balloon having both hot air and hydrogen gas bags, a design which was soon named after him as the Rozière. The principle was to use the hydrogen section for constant lift and to navigate vertically by heating the hot air section, or allowing it to cool, in order to catch the most favourable wind at whatever altitude it was blowing. The balloon envelope was made of goldbeater's skin. The first flight ended in disaster and the approach has seldom been used since. Cayley and the foundation of modern aeronautics Sir George Cayley (1773–1857) is widely acknowledged as the founder of modern aeronautics. He was first called the "father of the aeroplane" in 1846 and Henson called him the "father of aerial navigation." He was the first true scientific aerial investigator to publish his work, which included for the first time the underlying principles and forces of flight. In 1809 he began the publication of a landmark three-part treatise titled "On Aerial Navigation" (1809–1810). 
In it he wrote the first scientific statement of the problem, "The whole problem is confined within these limits, viz. to make a surface support a given weight by the application of power to the resistance of air." He identified the four vector forces that influence an aircraft: thrust, lift, drag and weight, and distinguished stability and control in his designs. He developed the modern conventional form of the fixed-wing aeroplane having a stabilising tail with both horizontal and vertical surfaces, flying gliders both unmanned and manned. He introduced the use of the whirling arm test rig to investigate the aerodynamics of flight, using it to discover the benefits of the curved or cambered aerofoil over the flat wing he had used for his first glider. He also identified and described the importance of dihedral, diagonal bracing and drag reduction, and contributed to the understanding and design of ornithopters and parachutes. Another significant invention was the tension-spoked wheel, which he devised in order to create a light, strong wheel for aircraft undercarriage. The 19th century: Otto Lilienthal and the first human flights During the 19th century Cayley's ideas were refined, proved and expanded on, culminating in the works of Otto Lilienthal. Lilienthal was a German engineer and businessman who became known as the "flying man". He was the first person to make well-documented, repeated, successful flights with gliders, therefore making the idea of "heavier than air" a reality. Newspapers and magazines published photographs of Lilienthal gliding, favourably influencing public and scientific opinion about the possibility of flying machines becoming practical. His work led him to develop the concept of the modern wing. His flight attempts in Berlin in the year 1891 are seen as the beginning of human flight and the "Lilienthal Normalsegelapparat" is considered to be the first aeroplane in series production, making the Maschinenfabrik Otto Lilienthal in Berlin the first aeroplane production company in the world. Otto Lilienthal is often referred to as either the "father of aviation" or "father of flight". Other important investigators included Horatio Phillips. Branches Aeronautics may be divided into three main branches: Aviation, Aeronautical science and Aeronautical engineering. Aviation Aviation is the art or practice of aeronautics. Historically aviation meant only heavier-than-air flight, but nowadays it includes flying in balloons and airships. Aeronautical engineering Aeronautical engineering covers the design and construction of aircraft, including how they are powered, how they are used and how they are controlled for safe operation. A major part of aeronautical engineering is aerodynamics, the science of passing through the air. With the increasing activity in space flight, nowadays aeronautics and astronautics are often combined as aerospace engineering. Aerodynamics The science of aerodynamics deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft. The study of aerodynamics falls broadly into three areas: Incompressible flow occurs where the air simply moves to avoid objects, typically at subsonic speeds below that of sound (Mach 1). Compressible flow occurs where shock waves appear at points where the air becomes compressed, typically at speeds above Mach 1. 
Transonic flow occurs in the intermediate speed range around Mach 1, where the airflow over an object may be locally subsonic at one point and locally supersonic at another. Rocketry A rocket or rocket vehicle is a missile, spacecraft, aircraft or other vehicle which obtains thrust from a rocket engine. In all rockets, the exhaust is formed entirely from propellants carried within the rocket before use. Rocket engines work by action and reaction. Rocket engines push rockets forwards simply by throwing their exhaust backwards extremely fast. Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology of the Space Age, including setting foot on the Moon. Rockets are used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While comparatively inefficient for low speed use, they are very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency. Chemical rockets are the most common type of rocket and they typically create their exhaust by the combustion of rocket propellant. Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks. See also References Citations Sources Wilson, E. B. (1920) Aeronautics: A Class Text, via Internet Archive External links Aeronautics Aviation Terminology Jeppesen The AVIATION DICTIONARY for pilots and aviation technicians DTIC ADA032206: Chinese-English Aviation and Space Dictionary Courses Research > Articles containing video clips
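Following the three flow regimes listed above, here is a small sketch that classifies a flight condition by Mach number. The sea-level speed of sound and the Mach 0.8–1.2 band used for the transonic range are common rules of thumb assumed for the example, not figures from the text.

```python
SPEED_OF_SOUND_SEA_LEVEL = 340.3  # m/s, approximate ISA sea-level value (assumed)

def flow_regime(airspeed_m_s, speed_of_sound=SPEED_OF_SOUND_SEA_LEVEL):
    """Classify the airflow regime from the Mach number.

    The 0.8-1.2 transonic band is a conventional rule of thumb: below it the
    flow is treated as effectively incompressible, above it as supersonic."""
    mach = airspeed_m_s / speed_of_sound
    if mach < 0.8:
        regime = "incompressible (subsonic)"
    elif mach <= 1.2:
        regime = "transonic (mixed subsonic/supersonic flow)"
    else:
        regime = "compressible (supersonic, shock waves present)"
    return mach, regime

if __name__ == "__main__":
    for v in (100.0, 320.0, 600.0):
        mach, regime = flow_regime(v)
        print(f"{v:6.1f} m/s -> Mach {mach:.2f}: {regime}")
```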
Aeronautics
Physics
2,378
10,875,316
https://en.wikipedia.org/wiki/Knowledge%20policy
Knowledge policies provide institutional foundations for creating, managing, and using organizational knowledge as well as social foundations for balancing global competitiveness with social order and cultural values. Knowledge policies can be viewed from a number of perspectives: the necessary linkage to technological evolution, relative rates of technological and institutional change, as a control or regulatory process, obstacles posed by cyberspace, and as an organizational policy instrument. Policies are the paradigms of government and all bureaucracies. Policies provide a context of rules and methods to guide how large organizations meet their responsibilities. Organizational knowledge policies describe the institutional aspects of knowledge creation, management, and use within the context of an organization's mandate or business model. Social knowledge policies balance progress in the knowledge economy to promote global competitiveness with social values, such as equity, unity, and the well-being of citizens. From a technological perspective, Thomas Jefferson (1816) noted that laws and institutions must keep pace with the progress of the human mind. Institutions must advance as new discoveries are made, new truths are discovered, and as opinions and circumstances change. Fast-forwarding to the late 20th century, Martin (1985) stated that any society with a high level of automation must frame its laws and safeguards so that computers can police other computers. Tim Berners-Lee (2000) noted that both policy and technology must be designed with an understanding of the implications of each other. Finally, Sparr (2001) points out that rules will emerge in cyberspace because even on the frontier, pioneers need property rights, standards, and rules of fair play to protect them from pirates. Government is the only entity that can enforce such rules, but they could be developed by others. From a rate of change point of view, McGee and Prusak (1993) note that when an organization changes its culture, information policies are among the last things to change. From a market perspective, Martin (1996) points out that although cyberspace mechanisms change very rapidly, laws change very slowly, and that some businesses will use this gap for competitive advantage. Similarly, Sparr (2001) discerned that governments have the interest and means to govern new areas of technology, but that past laws generally do not yet cover these emerging technologies and new laws take time to create. A number of authors have indicated that it will be very difficult to monitor and regulate cyberspace. Negroponte (1997) uses a metaphor in which limiting the freedom of bit radiation is like the Romans attempting to stop Christianity, even though early data broadcasters may be eaten by Washington lions. Brown (1997) questions whether it will even be possible for governments to monitor compliance with regulations in the face of exponentially increasing encrypted traffic within private networks. As cybernetic environments become central to commercial activity, monitoring electronic markets will become increasingly problematic. From a corporate point of view, Flynn (2001) notes that employee use of corporate computer resources poses liability risks and jeopardizes security and that no organization can afford to engage in electronic communications and e-commerce unprepared. A key attribute of cyberspace is that it is a virtual rather than a real place. 
Thus, a growing share of social and commercial electronic activity does not have a national physical location (Cozel, 1997), raising a key question of whether legislatures can even set national policies or coordinate international policies. Similarly, Berners-Lee (2000) explains that a key criterion of trademark law – separation in location or market – does not work for World-Wide Web domain names because the Internet crosses all geographic boundaries and has no concept of a market area. From an organizational perspective, Simard (2000) states that "if traditional policies are applied directly [to a digital environment], the Canadian Forest Service could become marginalized in a dynamic knowledge-based economy." Consequently, the CFS developed and implemented an Access to Knowledge Policy that "fosters the migration of the CFS towards providing free, open access to its knowledge assets, while recognizing the need for cost recovery and the need to impose restrictions on access in some cases" (Simard, 2005). The policy comprises a framework of objectives, guiding principles, staff responsibilities, and policy directives. The directives include ownership and use; roles, rights, and responsibilities; levels of access and accessibility; service to clients; and cost of access. See also References Berners-Lee, Tim. 2000. Weaving the Web. Harper Collins, New York, NY p 40, 124 Brown, David. 1997. Cybertrends, Penguin Books, London UK. p 100, 120 Cozel, Diane. 1997. The Weightless World. MIT Press, Cambridge, MA. p 18 Flynn, Nancy. 2001. The ePolicy Handbook. American Management Association. p 15 Hearn, G., & Rooney, D. (Eds.) 2008. Knowledge Policy: Challenges for the Twenty First Century. Cheltenham: Edward Elgar. Jefferson, Thomas. 1816. Letter to Samuel Kercheval (July 12, 1816) Martin, James. 1985. In: Information Processing Systems for Management (Hussain, 1985). Richard D. Irwin, Homewood, IL. p339 Martin, James. 1996. Cybercorp, The New Business Revolution. American Management Association, New York, NY. p19 McGee, James and Lawrence Prusak. 1993. Managing Information Strategically. John Wiley & Sons, New York, NY. p167 Negroponte, Nicholas. 1996. Being Digital. Random House, New York, NY. p55 Rooney, D., Hearn, G., Mandeville T. & Joseph, R. (2003). Public Policy in Knowledge-Based Economies: Foundations and Frameworks, Cheltenham: Edward Elgar. Rooney, D., Hearn, G., & Ninan, A. (Eds.) 2005. Handbook on the Knowledge Economy. Cheltenham: Edward Elgar. Simard, Albert. 2000. Managing Knowledge at the Canadian Forest Service. Natural Resources Canada, Canadian Forest Service, Ottawa, ON. p51 Simard, Albert. 2005. Canadian Forest Service Access to Knowledge Policy. Natural Resources Canada, Canadian Forest Service, Ottawa, ON. 30p Sparr, Debora. 2001. Ruling the Waves. Harcourt, Inc. New York, NY. p14, 370 Knowledge management Business terms Information society
Knowledge policy
Technology
1,292
7,893,964
https://en.wikipedia.org/wiki/Electronicam
Electronicam was a television recording system that shot an image on film and television at the same time through a common lens. It was developed by James L. Caddigan for the DuMont Television Network in the 1950s, before electronic recording on videotape was available. Since the film directly captured the live scene, its quality was much higher than the commonly used kinescope films, which were shot from a TV screen. How it worked The image passes through a lens into a beam splitter that sends half the light to a 35 mm or 16 mm camera mounted on the right side of the television camera. The other half of the light passes to the other side, through a 45-degree angle mirror and into a video camera tube. Because the camera dollies had to support two cameras—one conventional electronic image orthicon TV camera tube, and one 35mm motion picture camera—the system was bulky and heavy, and somewhat clumsy in operation. This made complex productions problematic. Single-stage shows, such as The Honeymooners, were relatively easy since they had few sets and generally small casts. In the studio, when two or three Electronicam cameras were used, a kinescope system recorded the live feed (as broadcast), so the Electronicam films could later be edited to match. The audio was recorded separately, onto either a magnetic fullcoat (1952 and later) or an optical soundtrack negative (pre-1952). Usage The DuMont Television Network used Electronicams in 1955 to produce most of its studio-based programming since it had (except for occasional sports events) discontinued use of coaxial cable and microwave links to connect stations. Stations were sent films of shows for broadcast. The "Classic 39" episodes of The Honeymooners aired during the 1955–56 television season on CBS were shot with Electronicams, which meant they could be rerun on broadcast TV and eventually transferred to home video. Without Electronicams, the half-hour The Honeymooners episodes in the 1955–56 season might have been broadcast live and would survive only as poor-quality kinescopes. Also, around 1956 British producer J. Arthur Rank brought three Electronicams to the United Kingdom to experiment but eventually was disappointed with the picture quality. The introduction of Ampex's videotape recorder in mid-1956 began to eliminate the need for Electronicam and similar systems, allowing electronic recording from live video cameras. See also Electronovision External links Chuck Pharis Electronicam web page, with many photos and diagrams. Cinematography quotes, A Whale of a Camera!, with Jackie Gleason, by Ryan Patrick O'Hara, June 3, 2012 1955: The DuMont Electronicam (via Eyes of a Generation). May 1 2016 DuMont Television Network Cameras by type Television technology The Honeymooners
Electronicam
Technology
563
74,725,614
https://en.wikipedia.org/wiki/Flame%20deflector
A flame deflector, flame diverter or flame trench is a structure or device designed to redirect or disperse the flame, heat, and exhaust gases produced by rocket engines or other propulsion systems. The amount of thrust generated by a rocket launch, along with the sound it produces during liftoff, can damage the launchpad and service structure, as well as the launch vehicle. The primary goal of the diverter is to prevent the flame from causing damage to equipment, infrastructure, or the surrounding environment. Flame diverters can be found at rocket launch sites and test stands where large volumes of exhaust gases are expelled during engine testing or vehicle launch.

Design and operation

The diverter typically comprises a robust, heat-resistant structure that channels the force of the exhaust gases and flames in a specific direction, usually away from the rocket or equipment. This is essential to prevent the potentially destructive effects of the high-temperature gases and to reduce the acoustic impact of the ignition. A flame trench can also be used in combination with a diverter to form a trench-deflector system. The flames from the rocket travel through openings in the launchpad onto a flame deflector situated in the flame trench, which runs underneath the launch structure and extends well beyond the launchpad itself. To further reduce the acoustic effects, a water sound suppression system may also be used.

Notable examples

Apollo program

During the Apollo program, the need for a flame deflector was a determining factor in the design of the Kennedy Space Center Launch Complex 39. NASA designers chose a two-way, wedge-type metal flame deflector. It measured 13 meters in height and 15 meters in width, with a total weight of 317 tons. Since the water table was close to the surface of the ground, the designers wanted the bottom of the flame trench at ground level. The flame deflector and trench determined the height and width of the octagonal launch pad.

Space Shuttle program

During the Space Shuttle program, NASA modified Launch Complex 39B at Kennedy Space Center. They installed a flame trench that was 150 meters long, 18 meters wide, and 13 meters deep. It was built with concrete and refractory brick. The main flame deflector was situated inside the trench directly underneath the rocket boosters. The V-shaped steel structure was covered with a high-temperature concrete material. It separated the exhaust of the orbiter main engines and of the solid rocket boosters into two flame trenches. It was approximately 11.6 meters high, 17.5 meters wide, and 22 meters long. The Shuttle flame trench-diverter system was refurbished for the SLS program.

Baikonur Cosmodrome

The main launch pads at the Russian launch complex of Baikonur Cosmodrome use a flame pit to manage launch exhaust. The launch vehicles are transported by rail to the launch pad, where they are vertically erected over a large flame deflector pit. A similar structure was built by the European Space Agency at its Guiana Space Centre.

SpaceX Starship launch mount

During the first orbital test flight of SpaceX's Starship vehicle in April 2023, the launch mount at Starbase was substantially damaged due to the lack of a flame diverter system. The 33 Raptor rocket engines dug a crater and scattered debris and dust over a wide area. The company then designed a new water-deluge-based flame diverter that protects the launch mount and vehicle by spraying large quantities of water upward from a steel plate assembly under the rocket.
In November of the same year, the new water deluge system successfully protected the launchpad during the second orbital flight test of Starship, avoiding the cloud of dust and debris that rose up during the first test. References Rocket launch technologies Fire Rocketry Explosion protection
Flame deflector
Chemistry,Engineering
753
8,559,385
https://en.wikipedia.org/wiki/Saccharomyces%20bayanus
Saccharomyces bayanus is a yeast of the genus Saccharomyces, and is used in winemaking and cider fermentation, and to make distilled beverages. Saccharomyces bayanus, like Saccharomyces pastorianus, is now accepted to be the result of multiple hybridisation events between three pure species, Saccharomyces uvarum, Saccharomyces cerevisiae and Saccharomyces eubayanus. Notably, most commercial yeast cultures sold as pure S. bayanus for wine making, e.g. the Lalvin EC-1118 strain, have been found to contain S. cerevisiae cultures instead. S. bayanus is used intensively in comparative genomics studies. Based on a computation-based experimental design system, Caudy et al. generated a rich resource of expression profiles for S. bayanus, which has been used in several comparative studies in yeast systems, including expression patterns and nucleosome profiles. See also Yeast in winemaking References External links Saccharomyces bayanus at ENTREZ genome project bayanus Yeasts Yeasts used in brewing Fungi described in 1895 Fungus species
Saccharomyces bayanus
Biology
249
59,602,196
https://en.wikipedia.org/wiki/Digital%20media%20use%20and%20mental%20health
The relationships between digital media use and mental health have been investigated by various researchers—predominantly psychologists, sociologists, anthropologists, and medical experts—especially since the mid-1990s, after the growth of the World Wide Web and rise of text messaging. A significant body of research has explored "overuse" phenomena, commonly known as "digital addictions", or "digital dependencies". These phenomena manifest differently in many societies and cultures. Some experts have investigated the benefits of moderate digital media use in various domains, including mental health, and treating mental health problems with novel technological solutions. Studies have also suggested that certain digital media use, such as online support communities, may offer mental health benefits, although the effects are quite complex. The delineation between beneficial and pathological use of digital media has not been established. There are no widely accepted diagnostic criteria, although some experts consider overuse a manifestation of underlying psychiatric disorders. The prevention and treatment of pathological digital media use is also not standardized, although guidelines for safer media use for children and families have been developed. The 2013 fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and the International Classification of Diseases (ICD-11) do not include diagnoses for problematic internet use and problematic social media use; the ICD-11 includes a diagnosis for gaming disorder (commonly known as video game addiction), whereas the DSM-5 does not. Debate over how and when to diagnose these conditions is ongoing as of 2023. The use of the term addiction to refer to these phenomena and diagnoses has been questioned. Digital media and screen time amongst modern social media apps such as Instagram, TikTok, Snapchat and Facebook have changed how children think, interact and develop in positive and negative ways, but researchers are unsure about the existence of hypothesized causal links between digital media use and mental health outcomes. Those links appear to depend on the individual and the platforms they use. Several large technology firms have made commitments or announced strategies to try to reduce the risks of digital media use. History and terminology The relationship between digital technology and mental health has been investigated from many perspectives. Benefits of digital media use in childhood and adolescent development have been found. Concerns have been expressed by researchers, clinicians and the public in regard to apparent compulsive behaviors of digital media users, as correlations between technology overuse and mental health problems become apparent. Terminologies used to refer to compulsive digital-media-use behaviours are not standardized or universally recognised. They include "digital addiction", "digital dependence", "problematic use", or "overuse", often delineated by the digital media platform used or under study (such as problematic smartphone use or problematic internet use). Unrestrained use of technological devices may affect developmental, social, mental and physical well-being and may result in symptoms akin to other psychological dependence syndromes, or behavioral addictions. The focus on problematic technology use in research, particularly in relation to the behavioural addiction paradigm, is becoming more accepted, despite poor standardization and conflicting research. 
Internet addiction has been proposed as a diagnosis since 1998, and social media's relation to addiction has been examined since 2009. A 2018 Organisation for Economic Co-operation and Development (OECD) report stated there were benefits of structured and limited internet use in children and adolescents for developmental and educational purposes, but that excessive use can have a negative impact on mental well-being. It also noted an overall 40% increase in internet use in school-age children between 2010 and 2015, and that different OECD nations had marked variations in rates of childhood technology use, as well as differences in the platforms used. Hence, it is considered important for adolescents to be trained to use social media, to ensure that users develop the psychologically informed competencies and skills that maximize the chances of balanced, safe, and meaningful social media use. The Diagnostic and Statistical Manual of Mental Disorders has not formally codified problematic digital media use in diagnostic categories, but it deemed internet gaming disorder to be a condition for further study in 2013. Gaming disorder, commonly known as video game addiction, has been recognised in the ICD-11. Different recommendations in the DSM and the ICD are due partly to the lack of expert consensus, the differences in emphasis in the classification manuals, as well as difficulties using animal models for behavioural addictions. The utility of the term addiction in relation to the overuse of digital media has been questioned, in regard to its suitability to describe new, digitally mediated psychiatric categories, as opposed to overuse being a manifestation of other psychiatric disorders. Usage of the term has also been criticised for drawing parallels with substance use behaviours. Careless use of the term may cause more problems—both downplaying the risks of harm in seriously affected people, as well as overstating risks of excessive, non-pathological use of digital media. The evolution of terminology relating excessive digital media use to problematic use rather than addiction was encouraged by Panova and Carbonell, psychologists at Ramon Llull University, in a 2018 review. Due to the lack of recognition and consensus on the concepts used, diagnoses and treatments are difficult to standardize or develop. Heightened levels of public anxiety around new media (including social media, smartphones and video games) further obfuscate population-based assessments, as well as posing management dilemmas. Radesky and Christakis, the 2019 editors of JAMA Pediatrics, published a review that investigated "concerns about health and developmental/behavioral risks of excessive media use for child cognitive, language, literacy, and social-emotional development." Due to the ready availability of multiple technologies to children worldwide, the problem is bi-directional, as taking away digital devices may have a detrimental effect, in areas such as learning, family relationship dynamics, and overall development. Problematic use Though associations have been observed between digital media use and mental health symptoms or diagnoses, causality has not been established; nuances and caveats published by researchers are often misunderstood by the general public, or misrepresented by the media. Females are more likely to overuse social media, and males are more likely to overuse video games. 
Following from this, problematic digital media use may not be a singular construct; it may be delineated based on the digital platform used, or reappraised in terms of specific activities (rather than addiction to the digital medium). Problematic social media use can also result in fear of missing out (FoMO), in which symptoms of anxiety and psychological stress are exacerbated by the fear of missing content posted online, leaving the individual feeling unfulfilled or out of the loop. Research also suggests that FoMO can worsen mental health issues, such as anxiety and depression, especially among younger digital users, since they are prone to comparison. When an individual experiences FoMO, they are more likely to constantly check their social media accounts or messages on their personal devices to ensure they are up to date with what is happening within their social network. This constant need to check social media platforms for information induces feelings of anxiety, driving individuals toward problematic social media use. Access to means of communication In 1999, 58% of Finnish citizens had a mobile phone, including 75% of 15-17 year olds. In 2000, a majority of U.S. households had at least one personal computer, and a majority had internet access the following year. In 2002, a majority of U.S. survey respondents reported having a mobile phone. In September and December 2006 respectively, Luxembourg and the Netherlands became the first countries to completely transition from analog to digital television, while the United States commenced its transition in 2008. In September 2007, a majority of U.S. survey respondents reported having broadband internet at home. In January 2013, a majority of U.S. survey respondents reported owning a smartphone. An estimated 40% of U.S. households in 2006 owned a dedicated home video game console, and by 2015 that figure had risen to 51 percent. In April 2015, one survey of U.S. teenagers ages 13 to 17 reported that nearly three-quarters of them either owned or had access to a smartphone, and 92 percent went online daily, with 24 percent saying they went online "almost constantly." In a 2024 survey of U.S. teenagers, 95 percent reported having access to a smartphone, 97 percent reported going online daily, and 48 percent reported being online "almost constantly". Screen time and mental health Some types of potentially problematic internet use are associated with psychiatric or behavioral problems such as depression, anxiety, hostility, aggression, and attention deficit hyperactivity disorder (ADHD). The studies could not determine if causal relationships exist; it was unclear, for example, whether people with depression might overuse the internet because they were already depressed, or if using the internet too much triggered the depression. Some studies also suggest that social media can have both negative and positive effects, depending on the specific context of the individual. While overuse of digital media has been associated with depressive symptoms, digital media may also be used in some situations to improve mood. Symptoms of ADHD have been positively correlated with digital media use in a large prospective study. The ADHD symptom of hyperfocus may cause affected individuals to overuse video games, social media, or online chatting; however, the correlation between hyperfocus and problematic social media use is weak. 
A 2018 review found associations between self-reported mental health symptoms of users of the Chinese social media platform WeChat and excessive platform use. However, it was the motivations and usage patterns of WeChat users, rather than the amount of time spent using the platform, that affected overall psychological health. An analysis of data from the Monitoring the Future survey, the Millennium Cohort Study, and the Youth Risk Behavior Surveillance System found that digital technology use (including playing video games, watching television, using social media, etc.) accounted for only 0.4% of the variation in adolescent well-being. Additional research found little evidence for substantial negative associations between digital screen engagement and adolescent well-being. However, looking exclusively at the effect social media usage has on girls, there was a strong association between using social media and poor mental health. The evidence, although of mainly low to moderate quality, shows a correlation between heavy screen time and a variety of physical and mental health problems. However, moderate use of digital media is also correlated with benefits for young people in terms of social integration, mental health, and overall well-being. In some cases, moderate use of certain digital platforms has been associated with improved mental health. A large-scale 2017 UK study of the "Goldilocks hypothesis"—of avoiding both too much and too little digital media use—was described as the "best quality" evidence to date by experts and non-government organisations (NGOs) reporting to a 2018 UK parliamentary committee. That study concluded that modest digital media use may have few adverse effects, and some positive associations in terms of well-being. A 2022 review suggested that there may be too much focus on locating a negative correlation between digital technologies and adolescents' well-being, and that even where such a correlation is found, its effect is potentially minimal, to the point of having little to no impact on adolescent well-being or quality of life. Social media and mental health Excessive time spent on social media may be more harmful than digital screen time as a whole, especially for young people. Some research found a "substantial" association between social media use and mental health issues, but most found only a weak or inconsistent relationship. Social media can have both positive and negative effects on mental health; whether the overall effect is harmful or helpful may depend on a variety of factors, including the quality and quantity of social media usage. In the case of over-65s, studies have found that high levels of social media usage were associated with positive outcomes overall, such as flourishing, though it remains unclear if social media use is a causative factor. Social media can be beneficial to individuals as a tool which, if used correctly, can bring about positive impacts for users online and offline. Adolescents can benefit from social media use by building and maintaining online and offline relationships, accessing information, connecting with others in real time, and expressing themselves by creating and engaging with content. Social media can also be detrimental to users when used incorrectly. 
Adolescents who use social media can be exposed to, or placed at risk from, the following: cyberbullying, sexual predators, adult content, substance use, and content that presents unrealistic representations of people and lifestyles. A 2021 study reported that adolescents with problematic media use are three times more likely to experience health complications such as irritability, nervousness, tiredness, and insomnia. Digital technologies tend to focus more on hedonic well-being, in which users are exposed to content that evokes joy and laughter (positive content) or anger and sadness (negative content). In turn, adolescents and other social media users may experience only temporary impacts on mental well-being from such content, without a permanent effect on their quality of life and life satisfaction. In 2023, it was found that 57 percent of teenagers aged 13 to 17 would find it difficult to give up using social media, while 46 percent reported it would be easy. Older teenagers, aged 15 to 17, found it more difficult to give up social media, especially teenage girls. There is a significant association between social media use and depression, with the association especially high for adolescent girls. When asked about the amount of time they spend on social media, 55 percent of teenagers reported spending the right amount of time, 35 percent reported spending too much time, and 8 percent said they spent too little time. Proposed diagnostic categories Gaming disorder has been considered by the DSM-5 task force as warranting further study (as the subset internet gaming disorder), and was included in the ICD-11. Concerns have been raised by Aarseth and colleagues over this inclusion, particularly in regard to stigmatization of heavy gamers. Christakis has asserted that internet addiction may be "a 21st century epidemic". In 2018, he commented that childhood Internet overuse may be a form of "uncontrolled experiment[s] on ... children". International estimates of the prevalence of internet overuse have varied considerably, with marked variations by nation. A 2014 meta-analysis of 31 nations yielded an overall worldwide prevalence of six percent. A different perspective in 2018 by Musetti and colleagues reappraised the internet in terms of its necessity and ubiquity in modern society, as a social environment, rather than a tool, thereby calling for the reformulation of the internet addiction model. Some medical and behavioural scientists recommend adding a diagnosis of "social media addiction" (or similar) to the next Diagnostic and Statistical Manual of Mental Disorders update. A 2015 review concluded there was a probable link between basic psychological needs and social media addiction. "Social network site users seek feedback, and they get it from hundreds of people—instantly. It could be argued that the platforms are designed to get users 'hooked'." Internet sex addiction, also known as cybersex addiction, has been proposed as a sexual addiction characterized by virtual internet sexual activity that causes serious negative consequences to one's physical, mental, social, and/or financial well-being. It may be considered a form of problematic internet use. 
Related phenomena Online problem gambling A 2015 review found evidence of higher rates of mental health comorbidities, as well as higher amounts of substance use, among internet gamblers, compared to non-internet gamblers. Causation, however, has not been established. The review postulates that there may be differences in the cohorts between internet and land-based problem gamblers. Cyberbullying Cyberbullying, bullying or harassment using social media or other electronic means, has been shown to have effects on mental health. Victims may have lower self-esteem, increased suicidal ideation, decreased motivation for usual hobbies, and a variety of emotional responses, including being scared, frustrated, angry, anxious or depressed. These victims may also begin to distance themselves from friends and family members. According to the EU Kids Online project, the incidence of cyberbullying across seven European countries among children in the surveyed age range increased from 8% to 12% between 2010 and 2014. Similar increases were shown in the United States and Brazil. Media multitasking Concurrent use of multiple digital media streams, commonly known as media multitasking, has been shown to be associated with depressive symptoms, social anxiety, impulsivity, sensation seeking, lower perceived social success and neuroticism. A 2018 review found that while the literature is sparse and inconclusive, overall, heavy media multitaskers also have poorer performance in several cognitive domains. One of the authors commented that the data does not "unambiguously show that media multitasking causes a change in attention and memory"; it is therefore still possible to argue that multitasking on digital media is inefficient. Distracted road use In March 2023, Accident Analysis & Prevention published a systematic review of 47 samples across 45 studies investigating associations between problematic mobile phone use and road safety outcomes (including 32 samples of drivers, 9 samples of pedestrians, 5 samples with road use type unspecified, and 1 sample of motorcyclists and bicyclists) that found that problematic mobile phone use was associated with a greater risk of using a mobile phone while simultaneously using the road, and with a greater risk of vehicle collisions and of pedestrian collisions or falls. Noise-induced hearing loss Assessment and treatment Rigorous, evidence-based assessment of problematic digital media use is yet to be comprehensively established. This is due partially to a lack of consensus around the various constructs and lack of standardization of treatments. The American Academy of Pediatrics (AAP) has developed a Family Media Plan, intending to help parents assess and structure their family's use of electronic devices and media more safely. It recommends limiting entertainment screen time to two hours or less per day. The Canadian Paediatric Society produced a similar guideline. Ferguson, a psychologist, has criticised these and other national guidelines for not being evidence-based. Other experts, cited in a 2017 UNICEF Office of Research literature review, have recommended addressing potential underlying problems rather than arbitrarily enforcing screen time limits. Different methodologies for assessing pathological internet use have been developed, mostly self-report questionnaires, but none have been universally recognised as a gold standard. For gaming disorder, both the American Psychiatric Association and the World Health Organization (through the ICD-11) have released diagnostic criteria. 
There is some limited evidence of the effectiveness of cognitive behavioral therapy and family-based interventions for treatment. In randomized controlled trials, medications have not been shown to be effective. A 2016 study of 901 adolescents suggested mindfulness may assist in preventing and treating problematic internet use. A 2019 UK parliamentary report deemed parental engagement, awareness and support to be essential in developing "digital resilience" for young people, and to identify and manage the risks of harm online. Treatment centres have proliferated in some countries, and China and South Korea have treated digital dependence as a public health crisis, opening 300 and 190 centres nationwide, respectively. Other countries have also opened treatment centres. NGOs, support and advocacy groups provide resources to people overusing digital media, with or without codified diagnoses, including the American Academy of Child and Adolescent Psychiatry. A 2022 study outlines the mechanisms by which media-transmitted stressors affect mental well-being. Authors suggest a common denominator related to problems with the media's construction of reality is increased uncertainty, which leads to defensive responses and chronic stress in predisposed individuals. Associated psychiatric disorders ADHD In April 2018, the International Journal of Environmental Research and Public Health published a systematic review of 24 studies researching associations between internet gaming disorder (IGD) and various psychopathologies that found an 85% correlation between IGD and ADHD. In October 2018, PNAS USA published a systematic review of four decades of research on the relationship between children and adolescents' screen media use and ADHD-related behaviours and concluded that a statistically small relationship between children's media use and ADHD-related behaviours exists. In November 2018, Cyberpsychology published a systematic review and meta-analysis of 5 studies that found evidence for a relationship between problematic smartphone use and impulsivity traits. In October 2020, the Journal of Behavioral Addictions published a systematic review and meta-analysis of 40 studies with 33,650 post-secondary student subjects that found a weak-to-moderate positive association between mobile phone addiction and impulsivity. In January 2021, the Journal of Psychiatric Research published a systematic review of 29 studies including 56,650 subjects that found that ADHD symptoms were consistently associated with gaming disorder and more frequent associations between inattention and gaming disorder than other ADHD scales. In July 2021, Frontiers in Psychiatry published a meta-analysis reviewing 40 voxel-based morphometry studies and 59 functional magnetic resonance imaging studies comparing subjects with IGD or ADHD to control groups that found that IGD and ADHD subjects had disorder-differentiating structural neuroimage alterations in the putamen and orbitofrontal cortex (OFC) respectively, and functional alterations in the precuneus for IGD subjects and in the rewards circuit (including the OFC, the anterior cingulate cortex, and striatum) for both IGD and ADHD subjects. In March 2022, JAMA Psychiatry published a systematic review and meta-analysis of 87 studies with 159,425 subjects 12 years of age or younger that found a small but statistically significant correlation between screen time and ADHD symptoms in children. 
In April 2022, Developmental Neuropsychology published a systematic review of 11 studies where the data from all but one study suggested that heightened screen time for children is associated with attention problems. In July 2022, the Journal of Behavioral Addictions published a meta-analysis of 14 studies comprising 2,488 subjects aged 6 to 18 years that found significantly more severe problematic internet use in subjects diagnosed with ADHD compared to control groups. In December 2022, European Child & Adolescent Psychiatry published a systematic literature review of 28 longitudinal studies published from 2011 through 2021 of associations between digital media use by children and adolescents and later ADHD symptoms and found reciprocal associations between digital media use and ADHD symptoms (i.e. that subjects with ADHD symptoms were more likely to develop problematic digital media use and that increased digital media use was associated with increased subsequent severity of ADHD symptoms). In May 2023, Reviews on Environmental Health published a meta-analysis of 9 studies with 81,234 child subjects that found a positive correlation between screen time and ADHD risk in children and that higher amounts of screen time in childhood may significantly contribute to the development of ADHD. In December 2023, the Journal of Psychiatric Research published a meta-analysis of 24 studies with 18,859 subjects with a mean age of 18.4 years that found significant associations between ADHD and problematic internet use, while Clinical Psychology Review published a systematic review and meta-analysis of 48 studies examining associations between ADHD and gaming disorder that found a statistically significant association between the disorders. Anxiety In April 2018, the International Journal of Environmental Research and Public Health published a systematic review of 24 studies researching associations between internet gaming disorder (IGD) and various psychopathologies that found a 92% correlation between IGD and anxiety and a 75% correlation between IGD and social anxiety. In August 2018, Wiley Stress & Health published a meta-analysis of 39 studies comprising 21,736 subjects that found a small-to-medium association between smartphone use and anxiety. In December 2018, Frontiers in Psychiatry published a systematic review of 9 studies published after 2014 investigating associations between problematic social networking sites (SNS) use and comorbid psychiatric disorders that found a positive association between problematic SNS use and anxiety. In March 2019, the International Journal of Adolescence and Youth published a systematic review of 13 studies comprising 21,231 adolescent subjects aged 13 to 18 years that found that social media screen time, both active and passive social media use, the amount of personal information uploaded, and social media addictive behaviors all correlated with anxiety. In February 2020, Psychiatry Research published a systematic review and meta-analysis of 14 studies that found positive associations between problematic smartphone use and anxiety and positive associations between higher levels of problematic smartphone use and elevated risk of anxiety, while Frontiers in Psychology published a systematic review of 10 studies of adolescent or young adult subjects in China that concluded that the research reviewed mostly established an association between social networks use disorder and anxiety among Chinese adolescents and young adults. 
In April 2020, BMC Public Health published a systematic review of 70 cross-sectional and longitudinal studies investigating moderating factors for associations for screen-based sedentary behaviors and anxiety symptoms among youth that found that while screen types was the most consistent factor, the body of evidence for anxiety symptoms was more limited than for depression symptoms. In October 2020, the Journal of Behavioral Addictions published a systematic review and meta-analysis of 40 studies with 33,650 post-secondary student subjects that found a weak-to-moderate positive association between mobile phone addiction and anxiety. In November 2020, Child and Adolescent Mental Health published a systematic review of research published between January 2005 and March 2019 on associations between SNS use and anxiety symptoms in subjects between ages of 5 to 18 years that found that increased SNS screen time or frequency of SNS use and higher levels of investment (i.e. personal information added to SNS accounts) were significantly associated with higher levels of anxiety symptoms. In January 2021, Frontiers in Psychiatry published a systematic review of 44 studies investigating social media use and development of psychiatric disorders in childhood and adolescence that concluded that the research reviewed established a direct association between levels of anxiety, social media addiction behaviors, and nomophobia, longitudinal associations between social media use and increased anxiety, that fear of missing out and nomophobia are associated with severity of Facebook usage, and suggested that fear of missing out may trigger social media addiction and that nomophobia appears to mediate social media addiction. In March 2021, Computers in Human Behavior Reports published a systematic review of 52 studies published before May 2020 that found that social anxiety was associated with problematic social media use and that socially anxious persons used social media to seek social support possibly to compensate for a lack of offline social support. In June 2021, Clinical Psychology Review published a systematic review of 35 longitudinal studies published before August 2020 that found that evidence for longitudinal associations between screen time and anxiety among young people was lacking. In August 2021, a meta-analysis was presented at the 2021 International Conference on Intelligent Medicine and Health of articles published before January 2011 that found evidence for a negative impact of social media on anxiety. In January 2022, The European Journal of Psychology Applied to Legal Context published a meta-analysis of 13 cross-sectional studies comprising 7,348 subjects that found a statistically significant correlation between cybervictimization and anxiety with a moderate-to-large effect size. In March 2022, JAMA Psychiatry published a systematic review and meta-analysis of 87 studies with 159,425 subjects 12 years of age or younger that found a small but statistically significant correlation between screen time and anxiety in children, while Adolescent Psychiatry published a systematic review of research published from June 2010 through June 2020 studying associations between social media use and anxiety among adolescent subjects aged 13 to 18 years that established that 78.3% of studies reviewed reported positive associations between social media use and anxiety. 
In April 2022, researchers in the Department of Communication at Stanford University performed a meta-analysis of 226 studies comprising 275,728 subjects that found a small but positive association between social media use and anxiety, while JMIR Mental Health published a systematic review and meta-analysis of 18 studies comprising 9,269 adolescent and young adult subjects that found a moderate but statistically significant association between problematic social media use and anxiety. In May 2022, Computers in Human Behavior published a meta-analysis of 82 studies comprising 48,880 subjects that found a significant positive association between social anxiety and mobile phone addiction. In August 2022, the International Journal of Environmental Research and Public Health published a systematic review and meta-analysis of 16 studies comprising 8,077 subjects that established a significant association between binge-watching and anxiety. In November 2022, Cyberpsychology, Behavior, and Social Networking published a systematic review of 1,747 articles on problematic social media use that found a strong bidirectional relationship between social media use and anxiety. In March 2023, the Journal of Public Health published a meta-analysis of 27 studies published after 2014 comprising 120,895 subjects that found a moderate and robust association between problematic smartphone use and anxiety. In July 2023, Healthcare published a systematic review and meta-analysis of 16 studies that established correlation coefficients of 0.31 and 0.39 between nomophobia and anxiety and nomophobia and smartphone addiction respectively. In September 2023, Frontiers in Public Health published a systematic review and meta-analysis of 37 studies comprising 36,013 subjects aged 14 to 24 years that found a positive and statistically significant association between problematic internet use and social anxiety, while BJPsych Open published a systematic review of 140 studies published from 2000 through 2020 found that social media use for more than 3 hours per day and passive browsing was associated with increased anxiety. In January 2024, the Journal of Computer-Mediated Communication published a meta-analysis of 141 studies comprising 145,394 subjects that found that active social media use was associated with greater symptoms of anxiety and passive social media use was associated with greater symptoms of social anxiety. In February 2024, Addictive Behaviors published a systematic review and meta-analysis of 53 studies comprising 59,928 subjects that found that problematic social media use and social anxiety are highly and positively correlated, while The Egyptian Journal of Neurology, Psychiatry and Neurosurgery published a systematic review of 15 studies researching associations between problematic social media use and anxiety in subjects from the Middle East and North Africa (including 4 studies with subjects exclusively between the ages of 12 and 19 years) that established that most studies found a significant association. Autism In September 2018, the Review Journal of Autism and Developmental Disorders published a systematic review of 47 studies published from 2005 to 2016 that concluded that associations between autism spectrum disorder (ASD) and screen time was inconclusive. 
In May 2019, the Journal of Developmental and Behavioral Pediatrics published a systematic review of 16 studies that found that children and adolescents with ASD are exposed to more screen time than typically developing peers and that the exposure starts at a younger age. In April 2021, Research in Autism Spectrum Disorders published a systematic review of 12 studies of video game addiction in ASD subjects that found that children, adolescents, and adults with ASD are at greater risk of video game addiction than those without ASD, and that the data from the studies suggested that internal and external factors (sex, attention and oppositional behavior problems, social aspects, access and time spent playing video games, parental rules, and game genre) were significant predictors of video game addiction in ASD subjects. In March 2022, the Review Journal of Autism and Developmental Disorders published a systematic review of 21 studies investigating associations between ASD, problematic internet use, and gaming disorder where the majority of the studies found positive associations between the disorders. In August 2022, the International Journal of Mental Health and Addiction published a review of 15 studies that found that high rates of video game use in boys and young males with ASD was predominantly explained by video game addiction, but also concluded that greater video game use could be a function of ASD restricted interest and that video game addiction and ASD restricted interest could have an interactive relationship. In December 2022, the Review Journal of Autism and Developmental Disorders published a systematic review of 10 studies researching the prevalence of problematic internet use with ASD that found that ASD subjects had more symptoms of problematic internet use than control group subjects, had higher screen time online and an earlier age of first-time use of the internet, and also greater symptoms of depression and ADHD. In July 2023, Cureus published a systematic review of 11 studies that concluded that earlier and longer screen time exposure for children was associated with higher probability of a child "developing" ASD. In December 2023, JAMA Network Open published a meta-analysis of 46 studies comprising 562,131 subjects that concluded that while screen time may be a developmental cause of ASD in childhood, associations between ASD and screen time were not statistically significant when accounting for publication bias. Bipolar disorder In November 2018, Cyberpsychology published a systematic review and meta-analysis of 5 studies that found evidence for a relationship between problematic smartphone use and impulsivity traits. In October 2020, the Journal of Behavioral Addictions published a systematic review and meta-analysis of 40 studies with 33,650 post-secondary student subjects that found that a weak-to-moderate positive association between mobile phone addiction and impulsivity. In April 2021, a meta-analysis of 3 studies comprising 9,142 subjects was presented at the International Conference on Big Data and Informatization Education that found that problematic internet use is a risk factor for bipolar disorder. In December 2023, the Journal of Psychiatric Research published a meta-analysis of 24 studies with 18,859 subjects with a mean age of 18.4 years that found significant associations between problematic internet use and impulsivity. 
Depression In April 2018, the International Journal of Environmental Research and Public Health published a systematic review of 24 studies researching associations between internet gaming disorder (IGD) and various psychopathologies that found an 89% correlation between IGD and depression. In July 2018, JMIR Mental Health published a systematic review of 11 studies investigating social media use and depression among lesbian, gay, and bisexual (LGB) users that found that while qualitative research found that social media use could lead to greater social support and less loneliness for LGB users, LGB users were more likely to be cyberbullied than heterosexual users, that cyberbullying of LGB users was associated with depression among victims, and constant monitoring of accounts by LGB users was also found to be a stressor associated with depression. In December 2018, Frontiers in Psychiatry published a systematic review of 9 studies published after 2014 investigating associations between problematic SNS use and comorbid psychiatric disorders that found a positive association between problematic SNS use and depression. In March 2019, the International Journal of Adolescence and Youth published a systematic review of 13 studies comprising 21,231 adolescent subjects aged 13 to 18 years that found that social media screen time, both active and passive social media use, the amount of personal information uploaded, and social media addictive behaviors all correlated with depression. In April 2019, the Journal of Affective Disorders published a meta-analysis assessing associations between SNS use and higher levels of depression that found that greater SNS screen time and frequency of checking SNS accounts had small but statistically significant associations with higher levels of depression, that greater general social comparisons on SNS had a small to moderate association, and greater upward social comparisons on SNS had a moderate association. In November 2019, BMC Public Health published a systematic review and meta-analysis of 12 cross-sectional studies and 7 longitudinal studies that found that screen time-based sedentary behavior is associated with depression risk. In January 2020, Translational Psychiatry published a meta-analysis of 12 prospective studies comprising 128,553 subjects that found that while sedentary behavior and depression risk had a significant positive association, television viewing and other mentally passive sedentary behaviors were positively associated with depression risk but computer use and other mentally active sedentary behaviors were not. In February 2020, Psychiatry Research published a systematic review and meta-analysis of 14 studies that found positive associations between problematic smartphone use and depression and positive associations between higher levels of problematic smartphone use and elevated risk of depression. Also in February 2020, Frontiers in Psychology published a systematic review of 10 studies of adolescent or young adult subjects in China that concluded that the research reviewed mostly established an association between social networks use disorder and depression among Chinese adolescents and young adults. In March 2020, the Review of General Psychology published a meta-analysis that found a small association between social networking service (SNS) use and self-reported depression. 
In April 2020, BMC Public Health published a systematic review of 70 cross-sectional and longitudinal studies investigating moderating factors for associations between screen-based sedentary behaviors and depression symptoms among youth that found that the most consistent factor was screen type, since television viewing was not as strongly associated with depression symptoms as other screen types. In August 2020, the Journal of Medical Internet Research published an umbrella review of 7 systematic reviews on research investigating associations between depression and use of mobile technologies and social media by adolescents that concluded that while mobile technology and social media may promote social support, excess social comparison and personal involvement (i.e. increased exposure in general, exposure to specific content that promotes depressive symptoms, and the degree of personal information posted on social media) could be associated with symptoms of depression. In October 2020, the Journal of Affective Disorders published a meta-analysis of 12 studies with subjects aged 11 to 18 years that found a small but statistically significant positive correlation between social media use and depressive symptoms among adolescents, while the Journal of Behavioral Addictions published a systematic review and meta-analysis of 40 studies with 33,650 post-secondary student subjects that found a weak-to-moderate positive association between mobile phone addiction and depression. In November 2020, Child and Adolescent Mental Health published a systematic review of research published between January 2005 and March 2019 on associations between SNS use and depression in subjects between the ages of 5 and 18 years that found that increased SNS screen time or frequency of SNS use and problematic and addictive SNS use were significantly associated with higher levels of depression symptoms. In January 2021, Frontiers in Psychiatry published a systematic review of 44 studies investigating social media use and development of psychiatric disorders in childhood and adolescence that concluded that passive social media use (e.g. browsing other users' photos or scrolling through comments or news feeds) and depression are bidirectionally associated and that problematic social media use and depressive symptoms are mediated by social comparisons. In February 2021, Research on Child and Adolescent Psychopathology published a meta-analysis of 62 studies comprising 451,229 subjects that found SNS screen time and SNS use intensity to have weak but statistically significant associations with depression symptoms, while problematic SNS use was found to have a moderate association with depression symptoms. In March 2021, Youth & Society published a systematic review of 9 studies that found an association between SNS use and adolescent subjective well-being including mood, but that the results over whether the association was positive or negative were mixed. In April 2021, the Journal of Affective Disorders published a systematic review and meta-analysis of 92 studies comprising 15,148 subjects across 25 countries investigating associations between depression and internet gaming disorder that found that one-third of the IGD subjects had been diagnosed with depression, and that severe depressive symptoms were found in IGD subjects globally, even without a formal diagnosis, in comparison to the general population. 
In May 2021, Current Psychology published a meta-analysis of 55 studies comprising 80,533 subjects that found a small but positive and statistically significant association between SNS use and self-reported depression symptoms. In June 2021, Clinical Psychology Review published a systematic review of 35 longitudinal studies published before August 2020 that found that an association between screen time and subsequent depressive symptoms among young people was small and varied by device type and use. In July 2021, Translational Medicine Communications published a systematic review of 9 studies published between October 2010 and December 2018 with Instagram user subjects between the ages of 19 and 35 years that found an association between Instagram use and depression symptoms. In January 2022, The European Journal of Psychology Applied to Legal Context published a meta-analysis of 13 cross-sectional studies comprising 7,348 subjects that found a statistically significant correlation between cybervictimization and depression with a moderate-to-large effect size. In February 2022, the International Journal of Social Psychiatry published a meta-analysis of 131 studies comprising 244,676 subjects that found a moderate mean correlation between problematic social media use and depression. In March 2022, Computers in Human Behavior published a systematic review and meta-analysis of 531 cross-sectional or longitudinal studies with subjects aged 10 to 24 years that found a small bidirectional association between online media use and depressive symptoms and that the effect size did not differ between general internet use, smartphone use, social media use, or online gaming, but also found that studies that measured online media use with media addiction scales rather than by screen time found significantly greater associations. Also in March 2022, JAMA Psychiatry published a systematic review and meta-analysis of 87 studies with 159,425 subjects 12 years of age or younger that found a small but statistically significant correlation between screen time and depression in children, while Adolescent Psychiatry published a systematic review of research published from June 2010 through June 2020 studying associations between social media use and depression among adolescent subjects aged 13 to 18 years that established that 82.6% of studies reviewed reported positive associations between social media use and depression. In April 2022, the International Journal of Environmental Research and Public Health published a meta-analysis of 21 cross-sectional studies and 5 longitudinal studies comprising 55,340 adolescent subjects that found that social media screen time had a linear dose–response association with depression risk among adolescents and that depression risk increased by 13% for each additional hour of social media screen time. Also in April 2022, researchers in the Department of Communication at Stanford University performed a meta-analysis of 226 studies comprising 275,728 subjects that found a small but positive association between social media use and depression, while JMIR Mental Health published a systematic review and meta-analysis of 18 studies comprising 9,269 adolescent and young adult subjects that found a moderate but statistically significant association between problematic social media use and depression. 
In August 2022, the International Journal of Environmental Research and Public Health published a systematic review and meta-analysis of 16 studies comprising 8,077 subjects that established a significant association between binge-watching and depression and a stronger association between binge-watching and depression was found during the COVID-19 pandemic than pre-pandemic. In November 2022, Cyberpsychology, Behavior, and Social Networking published a systematic review of 1,747 articles on problematic social media use that found a strong bidirectional relationship between social media use and depression. In December 2022, Frontiers in Psychiatry published a meta-analysis of 18 cohort studies comprising 241,398 subjects that found that screen time is a predictor of depressive symptoms. In March 2023, the Journal of Public Health published a meta-analysis of 27 studies published after 2014 comprising 120,895 subjects that found a moderate and robust association between problematic smartphone use and depression. In April 2023, Trauma, Violence, & Abuse published a systematic review and meta-analysis of 17 studies comprising 79,202 adolescent subjects between the ages of 10 and 19 years that found that depression was three times more common among cyberbullying victims than control groups. In July 2023, Current Psychology published a meta-analysis of 38 studies comprising 14,935 subjects in Turkey that found a small but positive association between problematic social media use and depression. In September 2023, Clinical Psychological Science published a preregistered review and meta-analysis of 34 articles published between 2018 and 2020 studying associations between adolescent depression and social media use to identify the proportion of samples taken from the Global North and Global South, and found that more than 70% examined Global North populations and that associations in the Global North were positive and significant while associations in the Global South were null and non-significant. In September 2023, BJPsych Open published a systematic review of 140 studies published from 2000 through 2020 that found that social media use for more than 3 hours per day and passive browsing was associated with increased depression in children, adolescents, and young adults. In February 2024, The Egyptian Journal of Neurology, Psychiatry and Neurosurgery published a systematic review of 15 studies researching associations between problematic social media use and depression in subjects from the Middle East and North Africa (including 4 studies with subjects exclusively between the ages of 12 and 19 years) that established that most studies found a significant association. Insomnia In August 2018, Sleep Science and Practice published a systematic review and meta-analysis of 19 studies comprising 253,904 adolescent subjects that found that excessive technology use had a strong and consistent association with reduced sleep duration and prolonged sleep onset latency for adolescents 14 years of age or older. Also in August 2018, Sleep Science published a systematic review of 12 studies investigating associations between exposure to video games, sleep outcomes, and post-sleep cognitive abilities that found the data present in the studies indicated associations between a reduction in sleep duration, increased sleep onset latency, modifications to rapid eye movement sleep and slow-wave sleep, increased sleepiness and self-perceived fatigue, and impaired post-sleep attention span and verbal memory. 
In October 2019, Sleep Medicine Reviews published a systematic review and meta-analysis of 23 studies comprising 35,684 subjects that found a statistically significant odds ratio for sleep problems and reduced sleep duration for subjects with internet addiction. In February 2020, Psychiatry Research published a systematic review and meta-analysis of 14 studies that found positive associations between problematic smartphone use and poor sleep quality and between higher levels of problematic smartphone use and elevated risk of poor sleep quality. Also in February 2020, Sleep Medicine Reviews published a systematic review of 31 studies examining associations between screen time and sleep outcomes in children younger than 5 years and found that screen time is associated with poorer sleep outcomes for children under the age of 5, with meta-analysis only confirming poor sleep outcomes among children under 2 years. In March 2020, Developmental Review published a systematic review of 9 studies that found a weak-to-moderate association between sleep quantity and quality and problematic smartphone use among adolescents. In October 2020, the International Journal of Environmental Research and Public Health published a systematic review and meta-analysis of 80 studies that found that greater screen time was associated with shorter sleep duration among toddlers and preschoolers, while the Journal of Behavioral Addictions published a systematic review and meta-analysis of 40 studies with 33,650 post-secondary student subjects that found a weak-to-moderate positive association between mobile phone addiction and poor sleep quality. In April 2021, Sleep Medicine Reviews published a systematic review of 36 cross-sectional studies and 6 longitudinal studies that found that 24 of the cross-sectional studies and 5 of the longitudinal studies established significant associations between more frequent social media use and poor sleep outcomes. In June 2021, Frontiers in Psychiatry published a systematic review and meta-analysis of 34 studies comprising 51,901 subjects that established significant associations between problematic gaming and sleep duration, poor sleep quality, daytime sleepiness, and other sleep problems. In September 2021, BMC Public Health published a systematic review of 49 studies investigating associations between electronic media use and various sleep outcomes among children and adolescents 15 years of age or younger that found a strong association with sleep duration and stronger evidence for an association with sleep duration between the ages of 6 and 15 years than for 5 years of age or younger, while evidence for associations between electronic media use with other sleep outcomes was more inconclusive. In December 2021, Frontiers in Neuroscience published a systematic review of 12 studies published from January 2000 to April 2020 that found that adult subjects with higher gaming addiction scores were more likely to have shorter sleep quantity, poorer sleep quality, delayed sleep timing, and greater daytime sleepiness and insomnia scores than subjects with lower gaming addiction scores and non-gamer subjects. In January 2022, Early Childhood Research Quarterly published a systematic review and meta-analysis of 26 studies that found a weak but statistically significant association with increased smartphone and tablet computer use and poorer sleep in early childhood. 
In May 2022, the Journal of Affective Disorders published a meta-analysis of 29 studies comprising 20,041 subjects that found a weak-to-moderate association between mobile phone addiction and sleep disorder and that adolescents with mobile phone addiction were at higher risk of developing sleep disorder. In August 2022, the International Journal of Environmental Research and Public Health published a systematic review and meta-analysis of 16 studies comprising 8,077 subjects that established a significant association between binge-watching and sleep problems and that a stronger association between binge-watching and sleep problems was found during the COVID-19 pandemic than pre-pandemic. In October 2022, Reports in Public Health published a systematic review of 23 studies that found that excessive use of digital screens by adolescents was associated with poor sleep quality, nighttime awakenings, long sleep latency, and daytime sleepiness. In December 2022, Sleep Epidemiology published a systematic review of 18 studies investigating associations between sleep problems and screen time during COVID-19 lockdowns that found that the increased screen time during the lockdowns negatively impacted sleep duration, sleep quality, sleep onset latency, and wake time. In March 2023, the Journal of Clinical Sleep Medicine published a systematic review and meta-analysis of 17 studies comprising 36,485 subjects that found that smartphone overuse was closely associated with self-reported poor sleep quality, sleep deprivation, and prolonged sleep latency. In April 2023, Sleep Medicine Reviews published a systematic review of 42 studies that found digital media use to be associated with shorter sleep duration and poorer sleep quality and bedtime or nighttime use with poor sleep outcomes, but only found associations for general screen use, mobile phone use, computer and internet use, internet, and social media and not for television, game console, and tablet use. In July 2023, Healthcare published a systematic review and meta-analysis of 16 studies that established a correlation coefficient of 0.56 between nomophobia and insomnia. In September 2023, PLOS One published a systematic review and meta-analysis of 16 studies of smartphone addiction and sleep among medical students that found that 57% of subjects had poor sleep and 39% of subjects had smartphone addiction with a correlation index of 0.3, while Computers in Human Behavior published a meta-analysis of 23 longitudinal studies comprising 116,431 adolescent subjects that found that adolescent screen time with computers, smartphones, social media, and television is positively associated with negative impacts on sleep health later in life. Narcissism In April 2018, a meta-analysis published in the Journal of Personality found that the positive correlation between grandiose narcissism and social networking site (SNS) usage was replicated across platforms (including Facebook and Twitter). In July 2018, a meta-analysis published in Psychology of Popular Media found that grandiose narcissism positively correlated with time spent on social media, frequency of status updates, number of friends or followers, and frequency of posting self-portrait digital photographs. In March 2020, the Review of General Psychology published a meta-analysis that found a small-to-moderate association between SNS use and narcissism. 
In June 2020, Addictive Behaviors published a systematic review finding a consistent, positive, and significant correlation between grandiose narcissism and problematic social media use. OCD In April 2018, the International Journal of Environmental Research and Public Health published a systematic review of 24 studies researching associations between internet gaming disorder (IGD) and various psychopathologies that found a significant correlation between IGD and obsessive–compulsive disorder symptoms in 3 of 4 studies. Mental health benefits Individuals with mental illness can develop social connections over social media that may foster a sense of social inclusion in online communities. People with mental illness may share personal stories in a perceived safer space, as well as gaining peer support for developing coping strategies. A mediated-model research study examined the effects of social media use on psychological well-being in both positive and negative ways. Although social media has a stigma of negative influence, this study looked into the positive as well. The positive influence of social media was a feeling of connectedness and relevance with others. It is a way to interact with people from afar and may provide a sense of relief from social isolation. Research published in 2024 investigated the impact of 'labels' on mental health. The participants received labels indicating whether specific webpages were likely to make them feel better or worse. The labeling system influenced participants' preferences and helped them evaluate the best choices when presented with relevant information. The results suggested that digital labels can be a useful tool for encouraging healthier behaviour, demonstrating the potential of technology to improve mental health. People with mental illness are likely to report avoiding stigma and gaining further insight into their mental health condition by using social media. This comes with the risk of unhealthy influences, misinformation, and delayed access to traditional mental health outlets. Other benefits include connections to supportive online communities, including illness or disability specific communities, as well as the LGBTQIA community. Young cancer patients have reported an improvement in their coping abilities due to their participation in an online community. The uses of social media for healthcare communication include reducing stigma and facilitating dialogue between patients and between patients and health professionals. Furthermore, in children, the educational benefits of digital media use are well established. For example, screen-based programs can help increase both independent and collaborative learning. A variety of quality apps and software can also decrease learning gaps and increase skill in certain educational subjects. Other disciplines Digital anthropology Daniel Miller from University College London has contributed to the study of digital anthropology, especially ethnographic research on the use and consequences of social media and smartphones as part of the everyday life of ordinary people around the world. He notes the effects of social media are very specific to individual locations and cultures. He contends "a layperson might dismiss these stories as superficial. But the anthropologist takes them seriously, empathetically exploring each use of digital technologies in terms of the wider social and cultural context." 
Digital anthropology is a developing field which studies the relationship between humans and digital-era technology. It aims to consider arguments in terms of ethical and societal scopes, rather than simply observing technological changes. Brian Solis, a digital analyst and anthropologist, stated in 2018, "we've become digital addicts: it's time to take control of technology and not let tech control us". Digital sociology Digital sociology explores how people use digital media using several research methodologies, including surveys, interviews, focus groups, and ethnographic research. It intersects with digital anthropology, and studies cultural geography. It also investigates longstanding concerns, and contexts around young people's overuse of "these technologies, their access to online pornography, cyber bullying or online sexual predation". A 2012 cross-sectional sociological study in Turkey showed differences in patterns of internet use that related to levels of religiosity in 2,698 subjects. With increasing religiosity, negative attitudes towards internet use increased. Highly religious people showed different motivations for internet use, predominantly searching for information. A study of 1,296 Malaysian adolescent students found an inverse relationship between religiosity and internet addiction tendency in females, but not males. A 2018 review published in Nature considered that young people may have different experiences online, depending on their socio-economic background, noting lower-income youths may spend up to three hours more per day using digital devices, compared to higher-income youths. They theorized that lower-income youths, who are already vulnerable to mental illness, may be more passive in their online engagements, being more susceptible to negative feedback online, with difficulty self-regulating their digital media use. It concluded that this may be a new form of digital divide between at-risk young people and other young people, pre-existing risks of mental illness becoming amplified among the already vulnerable population. Neuroscience A 2018 neuroscientific review published in Nature found the density of the amygdala, a brain region involved in emotional processing, is related to the size of both offline and online social networks in adolescents. They considered that this and other evidence "suggests an important interplay between actual social experiences, both offline and online, and brain development". The authors postulated that social media may have benefits, namely social connections with other people, as well as managing impressions people have of other people such as "reputation building, impression management, and online self-presentation". It identified "adolescence [as] a tipping point in development for how social media can influence their self-concept and expectations of self and others", and called for further study into the neuroscience behind digital media use and brain development in adolescence. Although brain-imaging modalities are under study, neuroscientific findings in individual studies often fail to be replicated in future studies, similar to other behavioural addictions; as of 2017, the exact biological or neural processes that could lead to excessive digital media use are unknown. Impact on cognition There is research and development about the cognitive impacts of smartphones and digital technology. 
A group reported that, contrary to widespread belief, scientific evidence does not show that these technologies harm biological cognitive abilities and that they instead only change predominant ways of cognition – such as a reduced need to remember facts or conduct mathematical calculations by pen and paper outside contemporary schools. However, some activities – like reading novels – that require long focused attention-spans and do not feature ongoing rewarding stimulation may become more challenging in general. How extensive online media usage impacts cognitive development in youth is under investigation, and impacts may vary substantially with which technologies are used, how they are used, and how they are designed. Impacts may vary to a degree that such studies have not yet taken into account, and may be modulated by the design, choice and use of technologies and platforms, including by the users themselves. A study of children aged 8–12 followed over two years suggests that time spent playing digital games or watching digital videos can be positively correlated with measures of intelligence, although correlations with overall screen time (including social media, socializing and TV) were not investigated, 'time gaming' did not differentiate between categories of video games (e.g. platform and genre), and digital videos were not differentiated by category. Impact on social life Worldwide, adolescent loneliness at school and depression increased substantially after 2012, and a study found this to be associated with smartphone access and Internet use. Mitigation Industry Several technology firms have implemented changes intending to mitigate the adverse effects of excessive use of their platforms. In December 2017, Facebook admitted passive consumption of social media could be harmful to mental health, although they said active engagement can have a positive effect. In January 2018, the platform made major changes to increase user engagement. In January 2019, Facebook's then head of global affairs, Nick Clegg, responding to criticisms of Facebook and mental health concerns, stated they would do "whatever it takes to make this environment safer online especially for youngsters". Facebook admitted "heavy responsibilities" to the global community, and invited regulation by governments. In 2018 Facebook and Instagram announced new tools that they asserted may assist with overuse of their products. In 2019, Instagram, which has been investigated specifically in one study in terms of addiction, began testing a platform change in Canada to hide the number of "likes" and views that photos and videos received in an effort to create a "less pressurised" environment. It then continued this trial in Australia, Italy, Ireland, Japan, Brazil and New Zealand before extending the experiment globally in November of that year. The platform also developed artificial intelligence to counter cyberbullying. In 2018, Alphabet Inc. released an update for Android smartphones, including a dashboard app enabling users to set timers on application use. Apple Inc. purchased a third-party application and then incorporated it into iOS 12 to measure "screen time". Journalists have questioned the functionality of these products for users and parents, as well as the companies' motivations for introducing them. 
Alphabet has also invested in a mental health specialist, Quartet, which uses machine learning to collaborate and coordinate digital delivery of mental health care. Two activist investors in Apple Inc voiced concerns in 2018 about the content and amount of time spent by youth. They called on Apple Inc. to act before regulators and consumers potentially force them to do so. Apple Inc. responded that they have "always looked out for kids, and [they] work hard to create powerful products that inspire, entertain, and educate children while also helping parents protect them online". The firm is planning new features that it asserts may allow it to play a pioneering role in regard to young people's health. Public sector In China, Japan, South Korea and the United States, governmental efforts have been enacted to address issues relating to digital media use and mental health. China's Ministry of Culture has enacted several public health efforts from as early as 2006 to address gaming and internet-related disorders. In 2007, an "Online Game Anti-Addiction System" was implemented for minors, restricting their use to 3 hours or less per day. The ministry also proposed a "Comprehensive Prevention Program Plan for Minors' Online Gaming Addiction" in 2013, to promulgate research, particularly on diagnostic methods and interventions. China's Ministry of Education in 2018 announced that new regulations would be introduced to further limit the amount of time spent by minors in online games. In response, Tencent, the owner of WeChat and the world's largest video game publisher, restricted the amount of time that children could spend playing one of its online games, to one hour per day for children 12 and under, and two hours per day for children aged . Effective 2 September 2023, those under the age of 18 can no longer access the Internet on their mobile device between 10 pm and 6 am without parental bypass. Smartphone usage is similarly capped by default at 40 minutes a day for children younger than eight and at two hours for 16- and 17-year-olds. Japan's Ministry of Internal Affairs and Communications coordinates Japanese public health efforts in relation to problematic internet use and gaming disorder. Legislatively, the Act on Development of an Environment that Provides Safe and Secure Internet Use for Young People was enacted in 2008, to promote public awareness campaigns, and support NGOs in teaching young people safe internet use skills. South Korea has eight government ministries responsible for public health efforts in relation to internet and gaming disorders. A review article published in Prevention Science in 2018 stated that the "region is unique in that its government has been at the forefront of prevention efforts, particularly in contrast to the United States, Western Europe, and Oceania." Efforts are coordinated by the Ministry of Science and ICT, and include awareness campaigns, educational interventions, youth counseling centres, and promoting healthy online culture. In May 2023, the United States Surgeon General took the rare measure of issuing an advisory on social media and mental health. In October, 41 U.S. states commenced legal proceedings against Meta. This included the attorneys general of 33 states filing a combined lawsuit over concerns about the addictive nature of Instagram and its impact on the mental health of young people. In November 2024, Australia passed the world's first ban on social media for under-16s. 
Digital mental health care Digital technologies have also provided opportunities for delivery of mental health care online; benefits have been found with computerized cognitive behavioral therapy for depression and anxiety. Mindfulness-based online interventions have been shown to have small to moderate benefits for mental health. The greatest effect size was found for the reduction of psychological stress. Benefits were also found regarding depression, anxiety, and well-being. The Lancet Commission on global mental health and sustainability's 2018 report evaluated both the benefits and harms of technology. It considered the roles of technologies in mental health, particularly in public education; patient screening; treatment; training and supervision; and system improvement. A 2019 study published in Frontiers in Psychiatry states that, despite the proliferation of mental health apps, there has been no "equivalent proliferation of scientific evidence for their effectiveness." Steve Blumenfield and Jeff Levin-Scherz, writing in the Harvard Business Review, claim that "most published studies show telephonic mental health care is as effective as in-person care in treating depression, anxiety and obsessive-compulsive disorder." They also cite a 2020 study done with the Veterans Administration as evidence of this. See also Computer-induced medical problems Evolutionary psychiatry Instagram Screen time Social aspects of television References Further reading Woods, H. C., & Scott, H. (2016). #Sleepyteens: Social media use in adolescence is associated with poor sleep quality, anxiety, depression and low self-esteem. Journal of Adolescence, 51(1), 41–49. https://doi.org/10.1016/j.adolescence.2016.05.008 Jones, A., Hook, M., Podduturi, P., McKeen, H., Beitzell, E., & Liss, M. (2022). Mindfulness as a mediator in the relationship between social media engagement and depression in young adults. Personality and Individual Differences, 185. https://doi.org/10.1016/j.paid.2021.111284 White-Gosselin, C.-É., & Poulin, F. (2022). Associations between young adults' social media addiction, relationship quality with parents, and internalizing problems: A path analysis model. Canadian Journal of Behavioural Science / Revue Canadienne Des Sciences Du Comportement. https://doi.org/10.1037/cbs0000326 Hammad, M. A., & Alqarni, T. M. (2021). Psychosocial effects of social media on the Saudi society during the Coronavirus Disease 2019 pandemic: A cross-sectional study. PLoS ONE, 16(3). https://doi.org/10.1371/journal.pone.0248811 Huang, Chiungjung (2020). "A Meta-Analysis of the Problematic Social Media Use and Mental Health." December 9, 2020. https://doi.org/10.1177/0020764020978434 Weigle, Paul E., and Pamela Hurst-Della Pietra. "Children and Screens: Youth Digital Media Use and Mental Health Outcomes." Journal of the American Academy of Child & Adolescent Psychiatry 60, no. 10 (October 2, 2021): S297–S297. https://doi.org/10.1016/j.jaac.2021.07.700. External links Anthropology of Social Media: Why We Post, University College London, Free online five-week course, asking "What are the consequences of social media?" Social Media Use and Mental Health: A Review – ongoing review curated by Jean Twenge & Jonathan Haidt. Cultural anthropology Cyberspace Technology in society Child and adolescent psychiatry Educational psychology Mental health
Digital media use and mental health
Technology
13,885
61,169,662
https://en.wikipedia.org/wiki/C25H28O6
The molecular formula C25H28O6 (molar mass: 424.49 g/mol, exact mass: 424.1886 u) may refer to: Arugosin C Sophoraflavanone G
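The quoted molar mass can be checked directly from the formula using standard atomic weights (about 12.011 for carbon, 1.008 for hydrogen, 15.999 for oxygen). The short Python sketch below is only an illustration of that arithmetic; the weights are assumed standard-table values and are not taken from this entry.

# Recompute the molar mass of C25H28O6 from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 25, "H": 28, "O": 6}

molar_mass = sum(ATOMIC_WEIGHT[el] * count for el, count in formula.items())
print(f"molar mass of C25H28O6 = {molar_mass:.2f} g/mol")  # prints about 424.49 g/mol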
C25H28O6
Chemistry
65
27,750,422
https://en.wikipedia.org/wiki/Harry%20L.%20Nelson
Harry Lewis Nelson (born January 8, 1932) is an American mathematician and computer programmer. He was a member of the team that won the World Computer Chess Championship in 1983 and 1986, and was a co-discoverer of the 27th Mersenne prime in 1979 (at the time, the largest known prime number). He also served as editor of the Journal of Recreational Mathematics for five years. Most of his professional career was spent at Lawrence Livermore National Laboratory where he worked with some of the earliest supercomputers. He was particularly noted as one of the world's foremost experts in writing optimized assembly language routines for the Cray-1 and Cray X-MP computers. Nelson had a lifelong interest in puzzles of all types, and founded the MiniMax Game Company, a small venture that helps puzzle inventors to develop and market their products. In 1994, Nelson donated his correspondence from his days as editor of the Journal of Recreational Mathematics to the University of Calgary Library as part of the Eugène Strens Recreational Mathematics Special Collection. Biography Early years Nelson was born on January 8, 1932, in Topeka, Kansas, the third of four children. He attended local schools and was active in the Boy Scouts, earning the rank of Eagle Scout. Nelson attended Harvard University as a freshman, but then had to drop out for financial reasons. He attended the University of Kansas as a sophomore, but was able to return to Harvard for his junior and senior years, receiving a bachelor's degree in mathematics from Harvard in 1953. In 1952, just before the start of his senior year, he married his high school sweetheart, Claire (née Rachael Claire Ensign). After graduating, he was inducted into the U.S. Army, but was never deployed overseas. He was honorably discharged in 1955, having attained the rank of sergeant. He enrolled in graduate studies at the University of Kansas, earning a master's degree in mathematics in 1957. It was during this period that he became fascinated by the then-new programmable digital computer. Nelson worked towards a Ph.D. until 1959, but the combination of his GI Bill educational benefits running out, needing to support a wife and three children, and the mathematics department rejecting his proposal to do his thesis on computers convinced him to leave the university without completing his Ph.D., and to get a job. Initially, Nelson worked for Autonetics, an aerospace company in southern California. In 1960 he went to work for the Lawrence Radiation Laboratory (later renamed Lawrence Livermore National Laboratory or LLNL), in Livermore, California. He remained working there until his retirement in 1991. Nelson worked on a variety of computers at LLNL, beginning with the IBM 7030 (nicknamed Stretch). In the 1960s, early units of a new computer were typically delivered as "bare metal," i.e. no software of any kind, including no compiler and no operating system. Programs needed to be written in assembly language, and the programmer needed to have intimate and detailed knowledge of the machine. A lifelong puzzle enthusiast, Nelson sought to understand every detail of the hardware, and earned a reputation as an expert on the features and idiosyncrasies of each new machine. Over time he became the principal person at LLNL in charge of doing acceptance testing of new hardware. 27th Mersenne prime During the process of acceptance testing, a new supercomputer would typically run diagnostic programs at night, looking for problems. 
During the acceptance testing of LLNL's first Cray-1 computer, Nelson teamed up with Cray employee David Slowinski to devise a program that would hunt for the next Mersenne prime, while simultaneously being a legitimate diagnostic program. On April 8, 1979, the team found the 27th Mersenne prime: 244497 - 1, the largest prime number known at that time. Computer chess In 1980, Nelson came across a copy of the chess program Cray Blitz written by Robert Hyatt. Using his detailed knowledge of the Cray-1 architecture, Nelson re-wrote a key routine in assembly language and was able to significantly speed up the program. The two began collaborating along with a third team member, Albert Gower, a strong correspondence chess player. In 1983, Cray Blitz won the World Computer Chess Championship, and successfully defended its title in 1986. The 1986 Championship was marred by controversy when the HiTech team, led by Hans Berliner, accused the Cray Blitz team of cheating. The charge was investigated for a few months by the tournament director, David Levy, and dismissed. Despite the dismissal, the experience somewhat soured the computer chess scene for Nelson, although he remained active until the ACM discontinued the annual computer chess tournaments in 1994. Puzzles and problems Nelson was active with the International Puzzle Party, and was a longtime contributor to the Journal of Recreational Mathematics. He served as Editor of the Journal for 5 years, and sat on its editorial board for many years after stepping down as Editor. References Further reading Robert M. Hyatt and Harry L. Nelson, "Chess and Supercomputers, details on optimizing Cray Blitz", proceedings of Supercomputing '90 in New York (354-363). External links Transcript of a talk about Cray Blitz given at a University of California, Davis computer science seminar Video of an interview with Harry Nelson from the Computer History Museum Image of Nelson in front of a Cray X-MP 1932 births Living people American computer programmers Computer chess people American academic journal editors Recreational mathematicians Harvard University alumni University of Kansas alumni
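The article does not describe the algorithm used by the Cray-1 search program, which was hand-tuned assembly; the Lucas–Lehmer test is, however, the standard primality test for Mersenne numbers, and a minimal Python sketch of it (shown only for small exponents, since 2^44497 - 1 would take much longer in plain Python) looks like this:

def lucas_lehmer(p):
    """Return True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1            # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):      # p - 2 squaring steps of the Lucas-Lehmer sequence
        s = (s * s - 2) % m
    return s == 0

# 3, 5, 7 and 13 are exponents of Mersenne primes; 11 is not (2**11 - 1 = 23 * 89).
for p in [3, 5, 7, 11, 13]:
    print(p, lucas_lehmer(p))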
Harry L. Nelson
Mathematics
1,135
13,280,234
https://en.wikipedia.org/wiki/90%2C000
90,000 (ninety thousand) is the natural number following 89,999 and preceding 90,001. It is the sum of the cubes of the first 24 positive integers, and is the square of 300. Selected numbers in the range 90,000–99,999 90,625 = the only five-digit automorphic number: 90625^2 = 8212890625 91,125 = 45^3 91,144 = Fine number 92,205 = number of 23-bead necklaces (turning over is allowed) where complements are equivalent 92,706 = There is a math puzzle called KAYAK + KAYAK + KAYAK + KAYAK + KAYAK + KAYAK = SPORT, where each letter represents a digit. When one solves the puzzle, KAYAK = 15451, and adding the six copies gives SPORT = 92,706. 93,312 = Leyland number: 6^6 + 6^6. Also a 3-smooth number. 94,249 = palindromic square: 307^2 94,932 = Leyland number: 7^5 + 5^7 95,121 = Kaprekar number: 95121^2 = 9048004641; 90480 + 04641 = 95121 95,420 = number of 22-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed 96,557 = Markov number: 5^2 + 6466^2 + 96557^2 = 3 × 5 × 6466 × 96557 97,336 = 46^3, the largest 5-digit cube 98,304 = 3-smooth number 99,066 = largest number whose square uses all of the decimal digits once: 99066^2 = 9814072356. It is also strobogrammatic in decimal. 99,856 = 316^2, the largest 5-digit square 99,991 = largest five-digit prime number 99,999 = repdigit, Kaprekar number: 99999^2 = 9999800001; 99998 + 00001 = 99999 Primes There are 879 prime numbers between 90000 and 100000. References External links 90000
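Several of the identities listed above (the automorphic and Kaprekar properties, the Leyland sums, the Markov triple and the pandigital square of 99,066) can be verified in a few lines of code. The small Python check below is purely an illustration:

# Spot-check a few of the identities listed above.
assert 90625**2 % 10**5 == 90625                           # automorphic: square ends in 90625
assert 45**3 == 91125
assert 6**6 + 6**6 == 93312                                 # Leyland number
assert 7**5 + 5**7 == 94932                                 # Leyland number
s = str(95121**2)                                           # Kaprekar: split the square, then add
assert int(s[:-5]) + int(s[-5:]) == 95121
assert 5**2 + 6466**2 + 96557**2 == 3 * 5 * 6466 * 96557    # Markov triple (5, 6466, 96557)
assert sorted(str(99066**2)) == sorted("0123456789")        # square uses each digit exactly once
print("all checks pass")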
90,000
Mathematics
454
14,145,089
https://en.wikipedia.org/wiki/CALCRL
Calcitonin receptor-like (CALCRL), also known as the calcitonin receptor-like receptor (CRLR), is a human protein; it is a receptor for calcitonin gene-related peptide. Function The protein encoded by the CALCRL gene is a G protein-coupled receptor related to the calcitonin receptor. CALCRL is linked to one of three single transmembrane domain receptor activity-modifying proteins (RAMPs) that are essential for functional activity. The association of CALCRL with different RAMP proteins produces different receptors: with RAMP1: produces a CGRP receptor with RAMP2: produces an adrenomedullin (AM) receptor, designated AM1 with RAMP3: produces a dual CGRP/AM receptor designated AM2 These receptors are linked to the G protein Gs, which activates adenylate cyclase; activation results in the generation of intracellular cyclic adenosine monophosphate (cAMP). CGRP receptors are found throughout the body, suggesting that the protein may modulate a variety of physiological functions in all major systems (e.g., respiratory, endocrine, gastrointestinal, immune, and cardiovascular). Wounds In wounds, CGRP receptors found in nerve cells deactivate the immune system to prevent collateral damage in the case of a clean wound (the common case). In very preliminary research, nerve blockers such as lidocaine or Botox have been demonstrated to block the CGRP cascade, thereby allowing immune system involvement and control of pathogens, resulting in complete control and recovery. Structure CALCRL associated with RAMP1 produces the CGRP receptor, which is a trans-membrane protein receptor made up of four chains. Two of the four chains contain unique sequences. It is a heterodimer protein composed of two polypeptide chains differing in composition of their amino acid residues. The sequence reveals multiple hydrophobic and hydrophilic regions throughout the four chains in the protein. The CGRP family of receptors, including CALCRL, can couple to Gαs, Gαi and Gαq G-protein subunits to transduce their signals. Furthermore, binding of ligands to CALCRL can bias coupling to these G-proteins. Peptide agonists bind to the extracellular loops of CALCRL. This binding causes TM5 (transmembrane helix 5) and TM6 to pivot around TM3, which in turn facilitates Gαs binding. Adrenomedullin receptor Expression The RNA expression charts show a high level in fetal lung. Clinical significance Calcitonin gene-related peptide receptor antagonists are approved for the treatment of migraine. References Further reading External links G protein-coupled receptors
CALCRL
Chemistry
575
21,839,925
https://en.wikipedia.org/wiki/Runge%E2%80%93Gross%20theorem
In quantum mechanics, specifically time-dependent density functional theory, the Runge–Gross theorem (RG theorem) shows that for a many-body system evolving from a given initial wavefunction, there exists a one-to-one mapping between the potential (or potentials) in which the system evolves and the density (or densities) of the system. The potentials under which the theorem holds are defined up to an additive purely time-dependent function: such functions only change the phase of the wavefunction and leave the density invariant. Most often the RG theorem is applied to molecular systems where the electronic density, ρ(r,t), changes in response to an external scalar potential, v(r,t), such as a time-varying electric field. The Runge–Gross theorem provides the formal foundation of time-dependent density functional theory. It shows that the density can be used as the fundamental variable in describing quantum many-body systems in place of the wavefunction, and that all properties of the system are functionals of the density. The theorem was published by Erich Runge and Eberhard K. U. Gross in 1984. As of September 2021, the original paper has been cited over 5,700 times. Overview The Runge–Gross theorem was originally derived for electrons moving in a scalar external field. Given such a field denoted by v and the number of electrons, N, which together determine a Hamiltonian Hv, and an initial condition on the wavefunction Ψ(t = t0) = Ψ0, the evolution of the wavefunction is determined by the Schrödinger equation (written in atomic units) At any given time, the N-electron wavefunction, which depends upon 3N spatial and N spin coordinates, determines the electronic density through integration as Two external potentials differing only by an additive time-dependent, spatially independent, function, c(t), give rise to wavefunctions differing only by a phase factor exp(-i α(t)), with dα(t)/dt = c(t), and therefore the same electronic density. These constructions provide a mapping from an external potential to the electronic density: The Runge–Gross theorem shows that this mapping is invertible, modulo c(t). Equivalently, that the density is a functional of the external potential and of the initial wavefunction on the space of potentials differing by more than the addition of c(t): Proof Given two scalar potentials denoted as v(r,t) and v'(r,t), which differ by more than an additive purely time-dependent term, the proof follows by showing that the densities corresponding to the two scalar potentials, obtained by solving the Schrödinger equation, differ. The proof relies heavily on the assumption that the external potential can be expanded in a Taylor series about the initial time. The proof also assumes that the density vanishes at infinity, making it valid only for finite systems. The Runge–Gross proof first shows that there is a one-to-one mapping between external potentials and current densities by invoking the Heisenberg equation of motion for the current density so as to relate time-derivatives of the current density to spatial derivatives of the external potential. Given this result, the continuity equation is used in a second step to relate time-derivatives of the electronic density to time-derivatives of the external potential. The assumption that the two potentials differ by more than an additive spatially independent term, and are expandable in a Taylor series, means that there exists an integer k ≥ 0 such that u_k(r), the k-th time derivative of v(r,t) - v'(r,t) evaluated at the initial time, is not constant in space. This condition is used throughout the argument. 
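The displayed equations of this section do not survive in the text above. As a reading aid, the standard expressions they refer to are reconstructed below in LaTeX; this is the conventional textbook form of the setup (spin sums omitted in the density), not a quotation of the original paper.

i\,\frac{\partial \Psi(t)}{\partial t} = \hat{H}_v(t)\,\Psi(t), \qquad \Psi(t_0) = \Psi_0

\rho(\mathbf{r},t) = N \int d^3r_2 \cdots d^3r_N \,\bigl|\Psi(\mathbf{r},\mathbf{r}_2,\ldots,\mathbf{r}_N,t)\bigr|^2

u_k(\mathbf{r}) \equiv \left.\frac{\partial^k}{\partial t^k}\bigl[v(\mathbf{r},t) - v'(\mathbf{r},t)\bigr]\right|_{t=t_0} \quad\text{not constant in space for some } k \ge 0

\frac{\partial \rho(\mathbf{r},t)}{\partial t} = -\nabla\cdot\mathbf{j}(\mathbf{r},t) \quad\text{(continuity equation, used in Step 2 below)}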
Step 1 From the Heisenberg equation of motion, the time evolution of the current density, j(r,t), under the external potential v(r,t) which determines the Hamiltonian Hv, is Introducing two potentials v and v', differing by more than an additive spatially constant term, and their corresponding current densities j and j', the Heisenberg equation implies The final line shows that if the two scalar potentials differ at the initial time by more than a spatially independent function, then the current densities that the potentials generate will differ infinitesimally after t0. If the two potentials do not differ at t0, but u_k(r) ≠ 0 for some value of k, then repeated application of the Heisenberg equation shows that the difference of the current densities becomes non-zero infinitesimally after t0. Step 2 The electronic density and current density are related by a continuity equation of the form Repeated application of the continuity equation to the difference of the densities ρ and ρ', and current densities j and j', yields The two densities will then differ if the right-hand side (RHS) is non-zero for some value of k. The non-vanishing of the RHS follows by a reductio ad absurdum argument. Assuming, contrary to the desired outcome, that the RHS vanishes, integrate over all space and apply Green's theorem. The second term is a surface integral over an infinite sphere. Assuming that the density is zero at infinity (in finite systems, the density decays to zero exponentially) and that ∇u_k^2(r) increases more slowly than the density decays, the surface integral vanishes; because of the non-negativity of the density, this implies that u_k is a constant, contradicting the original assumption and completing the proof. Extensions The Runge–Gross proof is valid for pure electronic states in the presence of a scalar field. The first extension of the RG theorem was to time-dependent ensembles, which employed the Liouville equation to relate the Hamiltonian and density matrix. A proof of the RG theorem for multicomponent systems—where more than one type of particle is treated within the full quantum theory—was introduced in 1986. Incorporation of magnetic effects requires the introduction of a vector potential (A(r)) which together with the scalar potential uniquely determine the current density. Time-dependent density functional theories of superconductivity were introduced in 1994 and 1995. Here, scalar, vector, and pairing (D(t)) potentials map between current and anomalous (ΔIP(r,t)) densities. References Density functional theory Theorems in quantum mechanics
Runge–Gross theorem
Physics,Chemistry,Mathematics
1,322
33,548,237
https://en.wikipedia.org/wiki/IRIS%20%28biosensor%29
Interferometric reflectance imaging sensor (IRIS), formerly known as the spectral reflectance imaging biosensor (SRIB), is a system that can be used as a biosensing platform capable of high-throughput multiplexing of protein–protein, protein–DNA, and DNA–DNA interactions without the use of any fluorescent labels. The sensing surface is prepared by robotic spotting of biological probes that are immobilized on functionalized Si/SiO2 substrates. IRIS is capable of quantifying biomolecular mass accumulated on the surface. Measurement To perform a measurement, the sample is illuminated with multiple different wavelengths from either a tunable laser or different-color LEDs, typically a relatively narrow-bandwidth optical source. The reflection intensity is imaged using a CCD or CMOS camera. By using interferometric techniques, nanometer-scale changes can be detected. Applications Applications for IRIS include microarray format immunoassays, single nucleotide polymorphism (SNP) detection, pathogen detection and bio-defense monitoring, kinetic analysis of biomolecular interactions, and general biomolecular interaction studies for research applications. References External links IRIS homepage Biotechnology Sensors
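The underlying measurement principle (the spectral reflectance of a thin SiO2 film on Si shifts with the film's optical thickness) can be illustrated with a standard single-layer thin-film interference calculation. The sketch below is not the IRIS processing pipeline; the refractive indices are rough nominal values and the thickness fit is a simple grid search, included only to show how a reflectance spectrum encodes nanometer-scale thickness changes.

import numpy as np

def reflectance(wavelength_nm, thickness_nm, n_film=1.46, n_substrate=3.9, n_ambient=1.0):
    """Normal-incidence reflectance of a single dielectric film on a substrate."""
    r12 = (n_ambient - n_film) / (n_ambient + n_film)        # ambient/film Fresnel coefficient
    r23 = (n_film - n_substrate) / (n_film + n_substrate)    # film/substrate Fresnel coefficient
    beta = 2 * np.pi * n_film * thickness_nm / wavelength_nm # one-pass phase in the film
    phase = np.exp(2j * beta)
    r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
    return np.abs(r) ** 2

wavelengths = np.linspace(450, 750, 200)            # nm, roughly an LED illumination range
measured = reflectance(wavelengths, 302.0)          # simulate a film 2 nm thicker than nominal

# Recover the thickness from the spectrum with a brute-force grid search.
candidates = np.arange(250.0, 350.0, 0.1)
errors = [np.sum((reflectance(wavelengths, d) - measured) ** 2) for d in candidates]
print("fitted thickness:", candidates[int(np.argmin(errors))], "nm")   # ~302.0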
IRIS (biosensor)
Technology,Engineering,Biology
246
2,880,009
https://en.wikipedia.org/wiki/Bedtime%20story
A bedtime story is a traditional form of storytelling, where a story is told to a child at bedtime to prepare the child for sleep. The bedtime story has long been considered "a definite institution in many families". History The term "bedtime story" was coined by Louise Chandler Moulton in her 1873 book, Bed-time Stories. The ritual of an adult reading out loud to a child at bedtime formed mainly in the second half of the nineteenth century and achieved prominence in the early twentieth century in tandem with the rising belief that soothing rituals were necessary for children at the end of the day. The practice of reading bedtime stories contributed to the growth of the picture book industry, and may also have contributed to the practice of isolated sleeping for children. Western culture Within Western culture, many parents read bedtime stories to their children to help them fall asleep. Among other benefits, this ritual is considered to reinforce the parent-child relationship. The type of stories and the time at which they are read may differ on a cultural basis. Western bedtime stories are full of traditional values and stories from the predominant sub-cultures. Mentions of cowboys and the hippie lifestyle are a prominent feature of this verbal storytelling. Bedtime stories may be used to teach the child abstract virtues such as sympathy, altruism, self-control, and empathy by helping children to imagine the feelings of others. The stories can be used to discuss darker subjects such as death and racism. Europe A vast number of bedtime stories, now famous around the world, originated in Europe. The European culture of bedtime stories is inspired in part by Aesop's fables and Greek fables. Aesop's fables Aesop's fables are a collection of fables that were written by a Greek storyteller named Aesop, who derived them from oral traditions of the Greek people. The fables were collected and compiled after his death, and have been translated into many modern languages. These fables include different animal characters, providing a moral lesson or a piece of wisdom for young minds to understand. The many fables include The Ant and the Grasshopper The Boy Who Cried Wolf The Cock, the Dog and the Fox The Dog and Its Reflection These fables may be used to teach children ethical and moral values. These fables may be read as bedtime stories. Scientific research The fixed routine of a bedtime story before sleeping can improve the child's brain development, language acquisition, and problem solving skills. The storyteller-listener relationship creates an emotional bond between the parent and the child. Due to "the strength of the imitative instinct" of a child, the parent and the stories that they tell act as a model for the child to follow. Being read bedtime stories increases children's vocabularies. References External links Folklore Performing arts Sleep Children's literature Childhood Culture of beds
Bedtime story
Biology
593
8,646,237
https://en.wikipedia.org/wiki/Polloc%20and%20Govan%20Railway
The Polloc and Govan Railway was an early mineral railway near Glasgow in Scotland, constructed to bring coal and iron from William Dixon's collieries and ironworks to the River Clyde for onward transportation. When the Clydesdale Junction Railway was projected in the nineteenth century, it used part of the alignment of the Polloc line to reach Glasgow from Rutherglen, and that part of the route is in use today as the main access to Glasgow Central station from the Motherwell direction. John Dixon: first waggonway John Dixon came from Sunderland to Glasgow and established coal pits at Knightswood and Gartnavel, in what are now the western suburbs of Glasgow. About 1750 he purchased a glassworks at Dumbarton, and to transport his coal to the works, he built a wooden waggonway from the pit mouth to Yoker. The coal was loaded into barges, which went down with the ebb tide to Leven. By 1785 the glassworks was the largest in the United Kingdom, consuming 1,500 tons of coal per annum. A newspaper correspondent wrote in 1852: The coal from the pits of the Woodside district about the middle of the last century was mostly consumed at the glass works at Dumbarton. My informant says that there was at this time a wooden tram road commencing at the Woodside coal pits which crossed the Dumbarton Road, and extended to a quay situated on the river, nearly opposite to Renfrew, from which quay the coals were shipped by gabberts to Dumbarton. I do not think that this tram road existed in my day, but about 70 years ago, I walked on the tram road from the Little Govan Coal Works to the Coal Quay, then situated on the south banks of the river at the grounds lately of Todd and Higginbotham, and I rather think that the Dumbarton Glass Works Company were at that time interested in the Little Govan Coal Works as well as the Woodside Coal Works. The Govan Waggonway The Knightswood pit became exhausted and Dixon acquired mineral rights in the Little Govan estate. Between 1775 and 1778, his son William Dixon built a line from Govan coal pits to Springfield on the south bank of the Clyde. At that time "Govan" extended to the south-east of the city; the coal pits were in the area bounded by the present-day M74, Polmadie Road and Aikenhead Road. "Springfield" was a quay on the south bank of the Clyde, immediately west of West Street, although Wherry Wharf was the actual quay used. The alignment of the waggonway was broadly south-east to north-west, skirting round the south of the built up area of the time, and the approach to the Clyde was along what became West Street. Privately built and not requiring Parliamentary authority, this became known as the Govan Waggonway. Dixon built it on the principle familiar to him from Tyneside, with timber cross-sleepers and timber rails, and wagons with flanged wheels were pulled by horses. In 1810 the Glasgow, Paisley and Johnstone Canal was nearing completion, with its Glasgow termination at Port Eglinton; this faced the west side of Eglinton Street immediately south of, and opposite, the Cumberland Street junction; the area is long since built over. According to Paterson (page 207), "On 1 August 1811, William Dixon (Junior), coalmaster, bought 1,242 square yards of ground from the Corporation of Glasgow of building a tramway on which to convey coal from his Govan pits to the Ardrossan Canal basin at Port Eglinton." The main line of the waggonway was of course already long established, and this must refer to Dixon's intention to build a short connecting branch to the canal basin. 
Upgrading the line Dixon later built an ironworks a little to the west of the Govan coal pit, in the area immediately east of the point where Cathcart Road crosses the M74. From the flames issuing from the furnaces the works became known as Dixon's Blazes. The Govan coal pits had expanded with surface equipment over a wide area; the ironworks was connected to the pits by local tramways, but the coal and iron needed to transported further afield. The Govan Waggonway, with wooden rails and horse traction, was technologically inadequate. By 1830 railways using stone block sleepers and cast iron rails were well established technology, and Dixon commissioned Thomas Grainger and John Miller to design a conversion of his waggonway to a railway. Grainger and Miller had been responsible for several of the "coal railways" in central Scotland, notably the Monkland and Kirkintilloch Railway, opened in 1826. The track gauge was 4 ft 6in, which Grainger and Miller had adopted on most of the other lines. On 29 May 1830, the Polloc and Govan Railway was authorised as a public company by an act of Parliament, the (11 Geo. 4 & 1 Will. 4. c. lxii), with capital of £10,000 and authorised borrowing of £5,000. At the eastern end the terminal was in lands in the ownership of the Trustees of Hutcheson's Hospital, "whereby the fair advantage which the measure was calculated to produce might be secured to the institution". Robertson also shows a short westward spur from Eglinton Street towards Shields Bridge; this is referred to as the "Polloc Estate branch" by Robertson. The total length of the lines authorised was 0.85 miles (main line) and 0.34 miles (branches), a total of nearly 2 km. The line opens The line opened on 22 August 1840, "from Rutherglen to the Broomielaw Harbour", after two further acts of Parliament were obtained, the (1 & 2 Will. 4. c. lviii) and the (7 Will. 4 & 1 Vict. c. cxviii), authorising considerably more capital: £36,000 in share value. Cobb suggests that the 1840 opening was from Polmadie Bridge, i.e. Dixon's ironworks and coalpits, with an eastward extension to a station at Rutherglen in 1842. The Clydesdale Junction Railway The Caledonian Railway (CR) opened its main line from Glasgow in 1849; the route was from Townhead over the Glasgow, Garnkirk and Coatbridge Railway (GG&CR), an extended successor to the earlier Garnkirk and Glasgow Railway, which had been built as a coal line. The GG&CR had been upgraded but the route was roundabout. A shorter route between Motherwell and Glasgow had been promoted earlier; it obtained authority by an act of Parliament, the Clydesdale Junction Railway Act 1845 (8 & 9 Vict. c. clx), on 31 July 1845, and was called the Clydesdale Junction Railway. The CR made provisional arrangements to lease the Polloc and Govan line on 29 January 1845, and soon afterwards to lease the Clydesdale Junction line itself. The CR purchased the Polloc and Govan Railway on 18 August 1846, and William Dixon received 2,400 CR shares in payment. The CR upgraded the Polloc and Govan and regauged it to standard gauge, and used its alignment for part of the route: it formed an end-on junction with the line at Rutherglen. At Eglinton Street the new line diverged to the north and terminated at the Southside railway station, which was shared with the Glasgow, Barrhead and Neilston Direct Railway. 
On 30 March 1849 the General Terminus opened; it was a large goods handling depot on the River Clyde, immediately to the west of the Polloc and Govan's "Broomielaw" terminal at Windmillcroft, and superseding it. The obsolete rails in West Street remained in place for another eighteen years: on 14 March 1867 an act of Parliament was obtained to lift part of the line in West Street to the River Clyde. The Clydesdale Junction Railway was absorbed by the Caledonian Railway. Links to other lines Clydesdale Junction Railway. End to end link made: General Terminus and Glasgow Harbour Railway Notes References Sources Cameron, Jim (Compiler) (2006). Glasgow Central: Central to Glasgow. Boat of Garten: Strathwood. Robertson, C. J. A. (1983). The Origins of the Scottish Railway System: 1722–1844. Edinburgh: John Donald. Thomas, John (1971). A Regional History of the Railways of Great Britain, Volume 6, Scotland: The Lowlands and the Borders. Newton Abbot: David & Charles. See also Caledonian Railway Early Scottish railway companies Mining railways Horse-drawn railways Pre-grouping British railway companies Transport in Glasgow Railway companies established in 1830 Railway lines opened in 1840 Railway companies disestablished in 1846 Standard gauge railways in Scotland 1830 establishments in Scotland British companies established in 1830 British companies disestablished in 1846
Polloc and Govan Railway
Engineering
1,849
2,555,501
https://en.wikipedia.org/wiki/Prova
Prova is an open source programming language that combines Prolog with Java. Description Prova is a rule-based scripting system that is used for middleware. The language combines imperative and declarative programming by using a Prolog syntax that allows calls to Java functions. In this way a strong Java code base is combined with Prolog features such as backtracking. Prova is derived from Mandarax, a Java-based inference system developed by Jens Dietrich. Prova extends Mandarax by providing a proper language syntax, native syntax integration with Java, agent messaging and reaction rules. The development of this language was supported by funding provided within the EU projects GeneStream and BioGRID. In these projects, the language is used as a rule-based backbone for distributed web applications in biomedical data integration, in particular the GoPubMed system. The design goals of Prova: Combine declarative and object-oriented programming. Expose logic and agent behavior as rules. Access data sources via wrappers written in Java or command-line shells like Perl. Make the Java API of various packages accessible as rules. Run within the Java runtime. Enable rapid prototyping of applications. Offer a rule-based platform for distributed agent programming. Prova aims to provide support for data integration tasks when the following is important: Location transparency (local, remote, mirrors); Format transparency (database, RDF, XML, HTML, flat files, computation resource); Resilience to change (databases and web sites change often); Use of open and open source technologies; Understandability and modifiability by a non-IT specialist; Economical knowledge representation; Extensibility with additional functionality; Leveraging ontologies. Prova has been used as the key service integration engine in the Xcalia product, where it computes efficient global execution plans across multiple data sources such as Web services, TP monitor transactions like CICS or IMS, messages of MOM like MQ-Series, packaged applications with a JCA connector, legacy data sources on mainframes with a JCA connector, remote EJB Java objects considered as data providers or even local Java objects. Prova makes it possible to deliver an innovative software platform for service-oriented architecture implementations. References A. Kozlenkov and M. Schroeder. PROVA: Rule-based Java-Scripting for a Bioinformatics Semantic Web. In E. Rahm, editor, International Workshop on Data Integration in the Life Sciences, Leipzig, Germany, in Lecture Notes in Computer Science, Springer-Verlag, vol. 2994, pp. 17–30, 2004. N. Combs and J.-L. Ardoint. Rules versus Scripts in Games Artificial Intelligence, AAAI 2004 Workshop on Challenges in Game AI, 2004. J. Dietrich, A. Kozlenkov, M. Schroeder, and G. Wagner. Rule-based Agents for the Semantic Web, Electronic Commerce Research and Applications, vol. 2, no. 4, pp. 323–338, 2004. A. Paschke, M. Bichler, and J. Dietrich. ContractLog: An Approach to Rule Based Monitoring and Execution of Service Level Agreements, Int. Conf. on Rules and Rule Markup Languages for the Semantic Web (RuleML 2005), Galway, Ireland, 2005. A. Kozlenkov, R. Penaloza, V. Nigam, L. Royer, G. Dawelbait, and M. Schroeder. Prova: Rule-based Java Scripting for Distributed Web Applications: A Case Study in Bioinformatics, Reactivity on the Web Workshop, Munich 2006. External links Prova homepage Middleware Logic programming languages Declarative programming languages
Prova
Technology,Engineering
779
60,531,978
https://en.wikipedia.org/wiki/MS%200302%2B17
MS 0302+17 is a galaxy supercluster located in the constellation Aries at a distance of 4.485 billion light years (lookback time), equivalent to a comoving distance of 5.338 billion light years. Its dimensions are around 6 million parsecs. Overview MS 0302+17 contains three massive galaxy clusters. Of these the first, known as CL 0303+1706, was discovered by Alan Dressler and Jim Gunn using a conventional optical telescope; it is located along the eastern edge of the supercluster and consists of a significant concentration of reddish galaxies. Observations made with the Einstein X-ray Observatory revealed the existence of two other clusters: MS 0302+1659 and MS 0302+1717, which are located near the northern edge of the observation field. The MS prefix derives from Medium Sensitivity because the X-ray observations are part of the Einstein Medium Sensitivity Survey. An interesting feature of the survey is a pair of giant arcs located near the luminous central galaxies of MS0302+1659: images of remote galaxies magnified by the gravitational lensing created by the supercluster. References Galaxy superclusters Aries (constellation)
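The difference between the quoted lookback distance and the larger comoving distance arises because the universe expands while the light travels. The sketch below shows that relationship using astropy's cosmology calculator; the redshift value is a placeholder chosen purely for illustration, since the article text does not state one, and the cosmological parameters are the library defaults rather than those behind the quoted figures.

from astropy.cosmology import Planck18 as cosmo

z = 0.42   # placeholder redshift, for illustration only

lookback = cosmo.lookback_time(z)       # time the light has been travelling (Gyr)
comoving = cosmo.comoving_distance(z)   # present-day proper distance (Mpc)

print("lookback time:", lookback)
print("comoving distance:", comoving)   # the comoving figure exceeds c times the lookback time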
MS 0302+17
Astronomy
249
600,892
https://en.wikipedia.org/wiki/Arbitrary-precision%20arithmetic
In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are potentially limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision. Several modern programming languages have built-in support for bignums, and others have libraries available for arbitrary-precision integer and floating-point math. Rather than storing values as a fixed number of bits related to the size of the processor register, these implementations typically use variable-length arrays of digits. Arbitrary precision is used in applications where the speed of arithmetic is not a limiting factor, or where precise results with very large numbers are required. It should not be confused with the symbolic computation provided by many computer algebra systems, which represent numbers by symbolic expressions, and can thus represent any computable number with infinite precision. Applications A common application is public-key cryptography, whose algorithms commonly employ arithmetic with integers having hundreds of digits. Another is in situations where artificial limits and overflows would be inappropriate. It is also useful for checking the results of fixed-precision calculations, and for determining optimal or near-optimal values for coefficients needed in formulae, for example a constant that appears in Gaussian integration. Arbitrary precision arithmetic is also used to compute fundamental mathematical constants such as π to millions or more digits and to analyze the properties of the digit strings or more generally to investigate the precise behaviour of functions such as the Riemann zeta function where certain questions are difficult to explore via analytical methods. Another example is in rendering fractal images with an extremely high magnification, such as those found in the Mandelbrot set. Arbitrary-precision arithmetic can also be used to avoid overflow, which is an inherent limitation of fixed-precision arithmetic. Similar to an automobile's odometer display which may change from 99999 to 00000, a fixed-precision integer may exhibit wraparound if numbers grow too large to represent at the fixed level of precision. Some processors can instead deal with overflow by saturation, which means that if a result would be unrepresentable, it is replaced with the nearest representable value. (With 16-bit unsigned saturation, adding any positive amount to 65535 would yield 65535.) Some processors can generate an exception if an arithmetic result exceeds the available precision. Where necessary, the exception can be caught and recovered from—for instance, the operation could be restarted in software using arbitrary-precision arithmetic. In many cases, the task or the programmer can guarantee that the integer values in a specific application will not grow large enough to cause an overflow. Such guarantees may be based on pragmatic limits: a school attendance program may have a task limit of 4,000 students. A programmer may design the computation so that intermediate results stay within specified precision boundaries. Some programming languages such as Lisp, Python, Perl, Haskell, Ruby and Raku use, or have an option to use, arbitrary-precision numbers for all integer arithmetic. 
Although this reduces performance, it eliminates the possibility of incorrect results (or exceptions) due to simple overflow. It also makes it possible to guarantee that arithmetic results will be the same on all machines, regardless of any particular machine's word size. The exclusive use of arbitrary-precision numbers in a programming language also simplifies the language, because a number is a number and there is no need for multiple types to represent different levels of precision. Implementation issues Arbitrary-precision arithmetic is considerably slower than arithmetic using numbers that fit entirely within processor registers, since the latter are usually implemented in hardware arithmetic whereas the former must be implemented in software. Even if the computer lacks hardware for certain operations (such as integer division, or all floating-point operations) and software is provided instead, it will use number sizes closely related to the available hardware registers: one or two words only. There are exceptions, as certain variable word length machines of the 1950s and 1960s, notably the IBM 1620, IBM 1401 and the Honeywell 200 series, could manipulate numbers bound only by available storage, with an extra bit that delimited the value. Numbers can be stored in a fixed-point format, or in a floating-point format as a significand multiplied by an arbitrary exponent. However, since division almost immediately introduces infinitely repeating sequences of digits (such as 4/7 in decimal, or 1/10 in binary), should this possibility arise then either the representation would be truncated at some satisfactory size or else rational numbers would be used: a large integer for the numerator and another for the denominator. But even with the greatest common divisor divided out, arithmetic with rational numbers can become unwieldy very quickly: 1/99 − 1/100 = 1/9900, and if 1/101 is then added, the result is 10001/999900. The size of arbitrary-precision numbers is limited in practice by the total storage available, and computation time. Numerous algorithms have been developed to efficiently perform arithmetic operations on numbers stored with arbitrary precision. In particular, supposing that N digits are employed, algorithms have been designed to minimize the asymptotic complexity for large N. The simplest algorithms are for addition and subtraction, where one simply adds or subtracts the digits in sequence, carrying as necessary, which yields an O(N) algorithm (see big O notation). Comparison is also very simple. Compare the high-order digits (or machine words) until a difference is found. Comparing the rest of the digits/words is not necessary. The worst case is O(N), but it may complete much faster with operands of similar magnitude. For multiplication, the most straightforward algorithms used for multiplying numbers by hand (as taught in primary school) require O(N²) operations, but multiplication algorithms that achieve O(N log N log log N) complexity have been devised, such as the Schönhage–Strassen algorithm, based on fast Fourier transforms, and there are also algorithms with slightly worse asymptotic complexity but with sometimes superior real-world performance for smaller N. The Karatsuba multiplication is such an algorithm. For division, see division algorithm. For a list of algorithms along with complexity estimates, see computational complexity of mathematical operations. For examples in x86 assembly, see external links. 
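To make the complexity remarks concrete, the following Python sketch (a toy illustration, not a production bignum routine; the function names are ours) stores numbers as lists of base-10 digits, least-significant digit first, and implements the two classical algorithms: digit-by-digit addition, linear in the number of digits, and schoolbook multiplication, quadratic in the number of digits.

    def add(a, b):
        """Digit-by-digit addition with carry: O(N) in the number of digits."""
        result, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            result.append(s % 10)
            carry = s // 10
        if carry:
            result.append(carry)
        return result

    def multiply(a, b):
        """Schoolbook long multiplication: O(N^2) single-digit multiplies."""
        result = [0] * (len(a) + len(b))
        for i, da in enumerate(a):
            carry = 0
            for j, db in enumerate(b):
                t = result[i + j] + da * db + carry
                result[i + j] = t % 10
                carry = t // 10
            result[i + len(b)] += carry
        while len(result) > 1 and result[-1] == 0:   # strip leading zeros
            result.pop()
        return result

    # 123 + 989 = 1112 and 123 * 45 = 5535 (digits are stored least-significant first)
    print(add([3, 2, 1], [9, 8, 9]))        # [2, 1, 1, 1]
    print(multiply([3, 2, 1], [5, 4]))      # [5, 3, 5, 5]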
Pre-set precision In some languages such as REXX and ooRexx, the precision of all calculations must be set before doing a calculation. Other languages, such as Python and Ruby, extend the precision automatically to prevent overflow. Example The calculation of factorials can easily produce very large numbers. This is not a problem for their usage in many formulas (such as Taylor series) because they appear along with other terms, so that—given careful attention to the order of evaluation—intermediate calculation values are not troublesome. If approximate values of factorial numbers are desired, Stirling's approximation gives good results using floating-point arithmetic. The largest representable value for a fixed-size integer variable may be exceeded even for relatively small arguments as shown in the table below. Even floating-point numbers are soon outranged, so it may help to recast the calculations in terms of the logarithm of the number. But if exact values for large factorials are desired, then special software is required, as in the pseudocode that follows, which implements the classic algorithm to calculate 1, 1×2, 1×2×3, 1×2×3×4, etc., the successive factorial numbers.

constants:
  Limit = 1000                       % Sufficient digits.
  Base = 10                          % The base of the simulated arithmetic.
  FactorialLimit = 365               % Target number to solve, 365!
  tdigit: Array[0:9] of character = ["0","1","2","3","4","5","6","7","8","9"]

variables:
  digit: Array[1:Limit] of 0..9      % The big number.
  carry, d: Integer                  % Assistants during multiplication.
  last: Integer                      % Index into the big number's digits.
  text: Array[1:Limit] of character  % Scratchpad for the output.

digit[*] := 0                        % Clear the whole array.
last := 1                            % The big number starts as a single-digit,
digit[1] := 1                        % its only digit is 1.

for n := 1 to FactorialLimit:        % Step through producing 1!, 2!, 3!, 4!, etc.
  carry := 0                         % Start a multiply by n.
  for i := 1 to last:                % Step along every digit.
    d := digit[i] * n + carry        % Multiply a single digit.
    digit[i] := d mod Base           % Keep the low-order digit of the result.
    carry := d div Base              % Carry over to the next digit.
  while carry > 0:                   % Store the remaining carry in the big number.
    if last >= Limit: error("overflow")
    last := last + 1                 % One more digit.
    digit[last] := carry mod Base
    carry := carry div Base          % Strip the last digit off the carry.
  text[*] := " "                     % Now prepare the output.
  for i := 1 to last:                % Translate from binary to text.
    text[Limit - i + 1] := tdigit[digit[i]]  % Reversing the order.
  print text[Limit - last + 1:Limit], " = ", n, "!"

With the example in view, a number of details can be discussed. The most important is the choice of the representation of the big number. In this case, only integer values are required for digits, so an array of fixed-width integers is adequate. It is convenient to have successive elements of the array represent higher powers of the base. The second most important decision is in the choice of the base of arithmetic, here ten. There are many considerations. The scratchpad variable d must be able to hold the result of a single-digit multiply plus the carry from the prior digit's multiply. In base ten, a sixteen-bit integer is certainly adequate as it allows up to 32767. However, this example cheats, in that the value of n is not itself limited to a single digit. This has the consequence that the method will fail for n larger than about 3,200 or so. In a more general implementation, n would also use a multi-digit representation. 
A second consequence of the shortcut is that after the multi-digit multiply has been completed, the last value of carry may need to be carried into multiple higher-order digits, not just one. There is also the issue of printing the result in base ten, for human consideration. Because the base is already ten, the result could be shown simply by printing the successive digits of array digit, but they would appear with the highest-order digit last (so that 123 would appear as "321"). The whole array could be printed in reverse order, but that would present the number with leading zeroes ("00000...000123") which may not be appreciated, so this implementation builds the representation in a space-padded text variable and then prints that. The first few results (with spacing every fifth digit and annotation added here) are: This implementation could make more effective use of the computer's built in arithmetic. A simple escalation would be to use base 100 (with corresponding changes to the translation process for output), or, with sufficiently wide computer variables (such as 32-bit integers) we could use larger bases, such as 10,000. Working in a power-of-2 base closer to the computer's built-in integer operations offers advantages, although conversion to a decimal base for output becomes more difficult. On typical modern computers, additions and multiplications take constant time independent of the values of the operands (so long as the operands fit in single machine words), so there are large gains in packing as much of a bignumber as possible into each element of the digit array. The computer may also offer facilities for splitting a product into a digit and carry without requiring the two operations of mod and div as in the example, and nearly all arithmetic units provide a carry flag which can be exploited in multiple-precision addition and subtraction. This sort of detail is the grist of machine-code programmers, and a suitable assembly-language bignumber routine can run faster than the result of the compilation of a high-level language, which does not provide direct access to such facilities but instead maps the high-level statements to its model of the target machine using an optimizing compiler. For a single-digit multiply the working variables must be able to hold the value (base−1) + carry, where the maximum value of the carry is (base−1). Similarly, the variables used to index the digit array are themselves limited in width. A simple way to extend the indices would be to deal with the bignumber's digits in blocks of some convenient size so that the addressing would be via (block i, digit j) where i and j would be small integers, or, one could escalate to employing bignumber techniques for the indexing variables. Ultimately, machine storage capacity and execution time impose limits on the problem size. History IBM's first business computer, the IBM 702 (a vacuum-tube machine) of the mid-1950s, implemented integer arithmetic entirely in hardware on digit strings of any length from 1 to 511 digits. The earliest widespread software implementation of arbitrary-precision arithmetic was probably that in Maclisp. Later, around 1980, the operating systems VAX/VMS and VM/CMS offered bignum facilities as a collection of string functions in the one case and in the languages EXEC 2 and REXX in the other. An early widespread implementation was available via the IBM 1620 of 1959–1970. 
The 1620 was a decimal-digit machine which used discrete transistors, yet it had hardware (that used lookup tables) to perform integer arithmetic on digit strings of a length that could be from two to whatever memory was available. For floating-point arithmetic, the mantissa was restricted to a hundred digits or fewer, and the exponent was restricted to two digits only. The largest memory supplied offered 60 000 digits, however Fortran compilers for the 1620 settled on fixed sizes such as 10, though it could be specified on a control card if the default was not satisfactory. Software libraries Arbitrary-precision arithmetic in most computer software is implemented by calling an external library that provides data types and subroutines to store numbers with the requested precision and to perform computations. Different libraries have different ways of representing arbitrary-precision numbers, some libraries work only with integer numbers, others store floating point numbers in a variety of bases (decimal or binary powers). Rather than representing a number as single value, some store numbers as a numerator/denominator pair (rationals) and some can fully represent computable numbers, though only up to some storage limit. Fundamentally, Turing machines cannot represent all real numbers, as the cardinality of exceeds the cardinality of . See also Fürer's algorithm Karatsuba algorithm Mixed-precision arithmetic Schönhage–Strassen algorithm Toom–Cook multiplication Little Endian Base 128 References Further reading , Section 4.3.1: The Classical Algorithms , Chapter 9: Fast Algorithms for Large-Integer Arithmetic External links Chapter 9.3 of The Art of Assembly by Randall Hyde discusses multiprecision arithmetic, with examples in x86-assembly. Rosetta Code task Arbitrary-precision integers Case studies in the style in which over 95 programming languages compute the value of 5**4**3**2 using arbitrary precision arithmetic. Computer arithmetic Computer arithmetic algorithms Management cybernetics
Arbitrary-precision arithmetic
Mathematics
3,303
59,087,742
https://en.wikipedia.org/wiki/Killough%20platform
A Killough platform is a three-wheel drive system that uses traditional wheels to achieve omni-directional movement without the use of omni-directional wheels (such as omni wheels/Mecanum wheels). Stephen Killough, after whom the platform is named, designed it with help from Francois Pin. Killough wanted to achieve omni-directional movement without the complicated six-motor arrangement required for a controllable three-caster-wheel system (one motor to control each wheel's rotation and one motor to control its pivoting). He first looked into solutions by other inventors that used rollers on the rims of larger wheels but considered them flawed in some critical way. This led to the Killough system. With Francois Pin, who helped with the computer control and choreography aspects of the design, Killough readied a public demonstration in 1994. This led to a partnership with Cybertrax Innovative Technologies in 1996, which was developing a motorized wheelchair. By combining the motion of two wheels, the vehicle can move in the direction of the third, perpendicular wheel, or, by rotating all the wheels in the same direction, the vehicle can rotate in place. By using the resultant motion of the vector addition of the wheel velocities, a Killough platform is able to achieve omni-directional motion (see the sketch below). References Robotics engineering
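As an illustration of the vector-addition principle only (a hedged sketch assuming a generic three-wheel omnidirectional base with drive directions 120° apart, not Killough's specific paired-wheel geometry; the arm length is an arbitrary illustrative value), the mapping from a desired platform velocity to individual wheel speeds can be written as:

    import math

    WHEEL_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # wheel mounting angles, 120 degrees apart
    R = 0.2  # distance from platform centre to each wheel, in metres (assumed)

    def wheel_speeds(vx, vy, omega):
        """Map a desired body velocity (vx, vy) and spin rate omega to wheel drive speeds."""
        return [-math.sin(a) * vx + math.cos(a) * vy + R * omega for a in WHEEL_ANGLES]

    print(wheel_speeds(1.0, 0.0, 0.0))  # translation: two wheels drive, the third contributes nothing
    print(wheel_speeds(0.0, 0.0, 1.0))  # rotation in place: all wheels turn at the same speed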
Killough platform
Technology,Engineering
267
16,058,982
https://en.wikipedia.org/wiki/Dihydrostreptomycin
Dihydrostreptomycin is a derivative of streptomycin that has bactericidal properties. It is a semisynthetic aminoglycoside antibiotic used in the treatment of tuberculosis. It acts by irreversibly binding the S12 protein in the bacterial 30S ribosomal subunit, after being actively transported across the cell membrane, which interferes with the initiation complex between the mRNA and the bacterial ribosome. This leads to the synthesis of defective, nonfunctional proteins, which results in the bacterial cell's death. It causes ototoxicity, which is why it is no longer used in humans. See also Translation (biology) References External links Dihydrostreptomycin | C21H41N7O12 - PubChem Aminoglycoside antibiotics Guanidines
Dihydrostreptomycin
Chemistry
176
11,210,523
https://en.wikipedia.org/wiki/Gaussian%20network%20model
The Gaussian network model (GNM) is a representation of a biological macromolecule as an elastic mass-and-spring network to study, understand, and characterize the mechanical aspects of its long-time large-scale dynamics. The model has a wide range of applications from small proteins such as enzymes composed of a single domain, to large macromolecular assemblies such as a ribosome or a viral capsid. Protein domain dynamics plays key roles in a multitude of molecular recognition and cell signalling processes. Protein domains, connected by intrinsically disordered flexible linker domains, induce long-range allostery via protein domain dynamics. The resultant dynamic modes cannot be generally predicted from static structures of either the entire protein or individual domains. The Gaussian network model is a minimalist, coarse-grained approach to study biological molecules. In the model, proteins are represented by nodes corresponding to α-carbons of the amino acid residues. Similarly, DNA and RNA structures are represented with one to three nodes for each nucleotide. The model uses the harmonic approximation to model interactions. This coarse-grained representation makes the calculations computationally inexpensive. At the molecular level, many biological phenomena, such as catalytic activity of an enzyme, occur within the range of nano- to millisecond timescales. All atom simulation techniques, such as molecular dynamics simulations, rarely reach microsecond trajectory length, depending on the size of the system and accessible computational resources. Normal mode analysis in the context of GNM, or elastic network (EN) models in general, provides insights on the longer-scale functional dynamic behaviors of macromolecules. Here, the model captures native state functional motions of a biomolecule at the cost of atomic detail. The inference obtained from this model is complementary to atomic detail simulation techniques. Another model for protein dynamics based on elastic mass-and-spring networks is the Anisotropic Network Model. Gaussian network model theory The Gaussian network model was proposed by Bahar, Atilgan, Haliloglu and Erman in 1997. The GNM is often analyzed using normal mode analysis, which offers an analytical formulation and unique solution for each structure. The GNM normal mode analysis differs from other normal mode analyses in that it is exclusively based on inter-residue contact topology, influenced by the theory of elasticity of Flory and the Rouse model and does not take the three-dimensional directionality of motions into account. Representation of structure as an elastic network Figure 2 shows a schematic view of elastic network studied in GNM. Metal beads represent the nodes in this Gaussian network (residues of a protein) and springs represent the connections between the nodes (covalent and non-covalent interactions between residues). For nodes i and j, equilibrium position vectors, R0i and R0j, equilibrium distance vector, R0ij, instantaneous fluctuation vectors, ΔRi and ΔRj, and instantaneous distance vector, Rij, are shown in Figure 2. Instantaneous position vectors of these nodes are defined by Ri and Rj. The difference between equilibrium position vector and instantaneous position vector of residue i gives the instantaneous fluctuation vector, ΔRi = Ri - R0i. Hence, the instantaneous fluctuation vector between nodes i and j is expressed as ΔRij = ΔRj - ΔRi = Rij - R0ij. 
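A minimal NumPy sketch of how such a network can be assembled numerically is given below. It assumes an (N, 3) array of α-carbon coordinates in ångströms and the commonly used 7 Å contact cutoff described in the next section; the function names are illustrative, and the proportionality constant 3kBT/γ relating the result to mean-square fluctuations is omitted.

    import numpy as np

    def kirchhoff_matrix(coords, cutoff=7.0):
        """Build the GNM Kirchhoff (connectivity) matrix from node coordinates."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        gamma = -(d <= cutoff).astype(float)         # -1 for contacting pairs
        np.fill_diagonal(gamma, 0.0)
        np.fill_diagonal(gamma, -gamma.sum(axis=1))  # diagonal = coordination number
        return gamma

    def mean_square_fluctuations(gamma):
        """MSF profile, proportional to the diagonal of the pseudo-inverse of Gamma."""
        return np.diag(np.linalg.pinv(gamma))        # pseudo-inverse drops the zero mode

    coords = np.random.rand(50, 3) * 30.0            # placeholder coordinates; in practice taken from a PDB structure
    print(mean_square_fluctuations(kirchhoff_matrix(coords))[:5])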
Potential of the Gaussian network The potential energy of the network in terms of ΔRi is where γ is a force constant uniform for all springs and Γij is the ijth element of the Kirchhoff (or connectivity) matrix of inter-residue contacts, Γ, defined by rc is a cutoff distance for spatial interactions and taken to be 7 Å for amino acid pairs (represented by their α-carbons). Expressing the X, Y and Z components of the fluctuation vectors ΔRi as ΔXT = [ΔX1 ΔX2 ..... ΔXN], ΔYT = [ΔY1 ΔY2 ..... ΔYN], and ΔZT = [ΔZ1 ΔZ2 ..... ΔZN], above equation simplifies to Statistical mechanics foundations In the GNM, the probability distribution of all fluctuations, P(ΔR) is isotropic and Gaussian where kB is the Boltzmann constant and T is the absolute temperature. p(ΔY) and p(ΔZ) are expressed similarly. N-dimensional Gaussian probability density function with random variable vector x, mean vector μ and covariance matrix Σ is normalizes the distribution and |Σ| is the determinant of the covariance matrix. Similar to Gaussian distribution, normalized distribution for ΔXT = [ΔX1 ΔX2 ..... ΔXN] around the equilibrium positions can be expressed as The normalization constant, also the partition function ZX, is given by where is the covariance matrix in this case. ZY and ZZ are expressed similarly. This formulation requires inversion of the Kirchhoff matrix. In the GNM, the determinant of the Kirchhoff matrix is zero, hence calculation of its inverse requires eigenvalue decomposition. Γ−1 is constructed using the N-1 non-zero eigenvalues and associated eigenvectors. Expressions for p(ΔY) and p(ΔZ) are similar to that of p(ΔX). The probability distribution of all fluctuations in GNM becomes For this mass and spring system, the normalization constant in the preceding expression is the overall GNM partition function, ZGNM, Expectation values of fluctuations and correlations The expectation values of residue fluctuations, <ΔRi2> (also called mean-square fluctuations, MSFs), and their cross-correlations, <ΔRi · ΔRj> can be organized as the diagonal and off-diagonal terms, respectively, of a covariance matrix. Based on statistical mechanics, the covariance matrix for ΔX is given by The last equality is obtained by inserting the above p(ΔX) and taking the (generalized Gaussian) integral. Since, <ΔRi2> and <ΔRi · ΔRj> follows Mode decomposition The GNM normal modes are found by diagonalization of the Kirchhoff matrix, Γ = UΛUT. Here, U is a unitary matrix, UT = U−1, of the eigenvectors ui of Γ and Λ is the diagonal matrix of eigenvalues λi. The frequency and shape of a mode is represented by its eigenvalue and eigenvector, respectively. Since the Kirchhoff matrix is positive semi-definite, the first eigenvalue, λ1, is zero and the corresponding eigenvector have all its elements equal to 1/. This shows that the network model translationally invariant. Cross-correlations between residue fluctuations can be written as a sum over the N-1 nonzero modes as It follows that, [ΔRi · ΔRj], the contribution of an individual mode is expressed as where [uk]i is the ith element of uk. Influence of local packing density By definition, a diagonal element of the Kirchhoff matrix, Γii, is equal to the degree of a node in GNM that represents the corresponding residue's coordination number. This number is a measure of the local packing density around a given residue. The influence of local packing density can be assessed by series expansion of Γ−1 matrix. 
Γ can be written as a sum of two matrices, Γ = D + O, containing diagonal elements and off-diagonal elements of Γ. Γ−1 = (D + O)−1 = [ D (I + D−1O) ]−1 = (I + D−1O)−1D−1 = (I - D−1O + ...)D−1 = D−1 - D−1O D−1 + ... This expression shows that local packing density makes a significant contribution to expected fluctuations of residues. The terms that follow inverse of the diagonal matrix, are contributions of positional correlations to expected fluctuations. GNM applications Equilibrium fluctuations Equilibrium fluctuations of biological molecules can be experimentally measured. In X-ray crystallography the B-factor (also called Debye-Waller or temperature factor) of each atom is a measure of its mean-square fluctuation near its equilibrium position in the native structure. In NMR experiments, this measure can be obtained by calculating root-mean-square differences between different models. In many applications and publications, including the original articles, it has been shown that expected residue fluctuations obtained by the GNM are in good agreement with the experimentally measured native state fluctuations. The relation between B-factors, for example, and expected residue fluctuations obtained from GNM is as follows Figure 3 shows an example of GNM calculation for the catalytic domain of the protein Cdc25B, a cell division cycle dual-specificity phosphatase. Physical meanings of slow and fast modes Diagonalization of the Kirchhoff matrix decomposes the conformational motions into a spectrum of collective modes. The expected values of fluctuations and cross-correlations are obtained from linear combinations of fluctuations along these normal modes. The contribution of each mode is scaled with the inverse of that modes frequency. Hence, slow (low frequency) modes contribute most to the expected fluctuations. Along the few slowest modes, motions are shown to be collective and global and potentially relevant to functionality of the biomolecules. Fast (high frequency) modes, on the other hand, describe uncorrelated motions not inducing notable changes in the structure. GNM-based methods do not provide real dynamics but only an approximation based on the combination and interpolation of normal modes. Their applicability strongly depends on how collective the motion is. Other specific applications There are several major areas in which the Gaussian network model and other elastic network models have proved to be useful. These include: Spring bead based network model: In spring-bead based network model, the springs and beads are used as components in the crosslinked network. Springs are cross-linked to represent mechanical behavior of the material and bridge molecular dynamics (MD) model and finite element (FE) model (see Figure. 5). The beads represent material mass of cluster bonds. Each spring is used to represent a cluster of polymer chains, instead of part of a single polymer chain. This simplification allows to bridge different models at multiple length scales and improves the simulation efficiency significantly. At each iteration step in the simulation, forces in the springs are applied to the nodes at the center of the beads, and the equilibrated nodal displacements throughout the system are calculated. Different from the traditional FE method for obtaining stress and strain, the spring–bead model provides the displacements of the nodes and forces in the springs. 
The equivalent strain and strain energy of spring–bead based network model can be defined and calculated using the displacements of nodes and the spring characteristics. Furthermore, the results from the network model can be scaled up to obtain the structural response at the macroscale using FE analysis. Decomposition of flexible/rigid regions and domains of proteins Characterization of functional motions and functionally important sites/residues of proteins, enzymes and large macromolecular assemblies Refinement and dynamics of low-resolution structural data, e.g. Cryo-electron microscopy Molecular replacement for solving X-ray structures, when a conformational change occurred, with respect to a known structure Integration with atomistic models and simulations Investigation of folding/unfolding pathways and kinetics. Annotation of functional implication in molecular evolution Web servers In practice, two kinds of calculations can be performed. The first kind (the GNM per se) makes use of the Kirchhoff matrix. The second kind (more specifically called either the Elastic Network Model or the Anisotropic Network Model) makes use of the Hessian matrix associated to the corresponding set of harmonic springs. Both kinds of models can be used online, using the following servers. GNM servers iGNM: A database of protein functional motions based on GNM http://ignm.ccbb.pitt.edu oGNM: Online calculation of structural dynamics using GNM https://web.archive.org/web/20070516042756/http://ignm.ccbb.pitt.edu/GNM_Online_Calculation.htm ENM/ANM servers Anisotropic Network Model web server http://www.ccbb.pitt.edu/anm elNemo: Web-interface to The Elastic Network Model http://www.sciences.univ-nantes.fr/elnemo/ AD-ENM: Analysis of Dynamics of an Elastic Network Model http://enm.lobos.nih.gov/ WEBnm@: Web-server for Normal Mode Analysis of proteins http://apps.cbu.uib.no/webnma/home Other relevant servers ProDy: An Application Programming Interface (API) in Python, that integrates GNM and ANM analyses and several molecular structure and sequence analyses and visualization tools: http://prody.csb.pitt.edu HingeProt: An algorithm for protein hinge prediction using elastic network models http://www.prc.boun.edu.tr/appserv/prc/hingeprot/, or http://bioinfo3d.cs.tau.ac.il/HingeProt/hingeprot.html DNABindProt: A Server for Determination of Potential DNA Binding Sites of Proteins http://www.prc.boun.edu.tr/appserv/prc/dnabindprot/ MolMovDB: A database of macromolecular motions: http://www.molmovdb.org/ See also Gaussian distribution Harmonic oscillator Hooke's law Molecular dynamics Normal mode Principal component analysis Protein dynamics Rubber elasticity Statistical mechanics References Primary sources Cui Q, Bahar I, (2006). Normal Mode Analysis: Theory and applications to biological and chemical systems, Chapman & Hall/CRC, London, UK Specific citations Molecular modelling
Gaussian network model
Chemistry
3,002
10,474,625
https://en.wikipedia.org/wiki/H2AFX
H2A histone family member X (usually abbreviated as H2AX) is a type of histone protein from the H2A family encoded by the H2AFX gene. An important phosphorylated form is γH2AX (S139), which forms when double-strand breaks appear. In humans and other eukaryotes, the DNA is wrapped around histone octamers, consisting of core histones H2A, H2B, H3 and H4, to form chromatin. H2AX contributes to nucleosome-formation, chromatin-remodeling and DNA repair, and is also used in vitro as an assay for double-strand breaks in dsDNA. Formation of γH2AX H2AX becomes phosphorylated on serine 139, then called γH2AX, as a reaction on DNA double-strand breaks (DSB). The kinases of the PI3-family (Ataxia telangiectasia mutated, ATR and DNA-PKcs) are responsible for this phosphorylation, especially ATM. The modification can happen accidentally during replication fork collapse or in the response to ionizing radiation but also during controlled physiological processes such as V(D)J recombination. γH2AX is a sensitive target for looking at DSBs in cells. The presence of γH2AX by itself, however, is not the evidence of the DSBs. The role of the phosphorylated form of the histone in DNA repair is under discussion but it is known that because of the modification the DNA becomes less condensed, potentially allowing space for the recruitment of proteins necessary during repair of DSBs. Mutagenesis experiments have shown that the modification is necessary for the proper formation of ionizing radiation induced foci in response to double strand breaks, but is not required for the recruitment of proteins to the site of DSBs. Function DNA damage response The histone variant H2AX constitutes about 2-25% of the H2A histones in mammalian chromatin. When a double-strand break occurs in DNA, a sequence of events occurs in which H2AX is altered. Very early after a double-strand break, a specific protein that interacts with and affects the architecture of chromatin is phosphorylated and then released from the chromatin. This protein, heterochromatin protein 1 (HP1)-beta (CBX1), is bound to histone H3 methylated on lysine 9 (H3K9me). Half-maximum release of HP1-beta from damaged DNA occurs within one second. A dynamic alteration in chromatin structure is triggered by HP1-beta release. This alteration in chromatin structure promotes H2AX phosphorylation by ATM, ATR and DNA-PK, allowing formation of γH2AX (H2AX phosphorylated on serine 139). γH2AX can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurs in one minute. Chromatin with phosphorylated γH2AX extends to about a million base pairs on each side of a DNA double-strand break. MDC1 (mediator of DNA damage checkpoint protein 1) then binds to γH2AX and the γH2AX/MDC1 complex then orchestrates further interactions in double-strand break repair. The ubiquitin ligases RNF8 and RNF168 bind to the γH2AX/MDC1 complex, ubiquitylating other chromatin components. This allows the recruitment of BRCA1 and 53BP1 to the long, modified γH2AX/MDC1 chromatin. Other proteins that stably assemble on the extensive γH2AX-modified chromatin are the MRN complex (a protein complex consisting of Mre11, Rad50 and Nbs1), RAD51 and the ATM kinase. Further DNA repair components, such as RAD52 and RAD54, rapidly and reversibly interact with the core components stably associated with γH2AX-modified chromatin. 
The constitutive level of γH2AX expression in live cells, untreated by exogenous agents, likely represents DNA damage by endogenous oxidants generated during cellular respiration. In chromatin remodeling The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow DNA repair, the chromatin must be remodeled. γH2AX, the phosphorylated form of H2AX, is involved in the steps leading to chromatin decondensation after DNA double-strand breaks. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of ionizing radiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. γH2AX as an assay for double-strand breaks An assay for γH2AX generally reflects the presence of double-strand breaks in DNA, though the assay may indicate other minor phenomena as well. On the one hand, overwhelming evidence supports a strong, quantitative correlation between γH2AX foci formation and DNA double-strand break induction following ionizing radiation exposure, based on absolute yields and distributions induced per unit dose. On the other hand, not only the formation of distinct γH2AX foci but also the induction of pan-nuclear γH2AX signals have been reported as a cellular reaction to various stressors other than ionizing radiation. The γH2AX signal is always stronger at DNA double-strand breaks than in undamaged chromatin. γH2AX in undamaged chromatin is thought to possibly be generated via direct phosphorylation of H2AX by activated kinases, most likely diffusing from DNA damage sites. In using γH2AX as a marker for double strand breaks, it is important to recognize that it is a down-stream proxy that can be useful for representing DNA damage repair. It does not represent double strand breaks themselves and this needs careful consideration when interpreting data from such assays. The γH2AX-assay has several disadvantages, therefore new assays have been created. Interactions H2AX has been shown to interact with: BARD1, BRCA1 Bloom syndrome protein, MDC1, Nibrin, and TP53BP1. References Further reading Genes DNA repair
H2AFX
Biology
1,445
26,885,202
https://en.wikipedia.org/wiki/Boot%20folder
In Unix-like operating systems, a boot folder is the directory which holds files used in booting the operating system, typically . The usage is standardized within Linux in the Filesystem Hierarchy Standard. Contents The contents are mostly Linux kernel files or boot loader files, depending on the boot loader, most commonly (on Linux) LILO or GRUB. Linux vmlinux – the Linux kernel initrd.img – a temporary file system, used prior to loading the kernel System.map – a symbol lookup table LILO LILO creates and uses the following files: map – a key file, which records where files needed by LILO during boot are stored. Following kernel upgrades, this file must be regenerated by running the "map installer"; otherwise the system will not boot. boot.xxyy – these 512-byte files are backups of boot sectors, either the master boot record (MBR) or volume boot record (VBR), created when LILO overwrites a boot sector. xx and yy are the major and minor device numbers in hex; for example, the drive has numbers 8, 0, hence its MBR is backed up to while the partition has numbers 8,3, hence its VBR is backed up to . LILO may also use other files, such as and also stores a non-boot configuration file in . GRUB GRUB stores its files in the subdirectory (i.e. ). These files are mostly modules (), with configuration stored in . Location is often simply a directory on the main (or only) hard drive partition. However, it may be a separate partition. A separate partition is generally only used when bootloaders are incapable of reading the main filesystem (e.g. LILO does not recognize XFS) or other problems not easily resolvable by users. On UEFI systems, including most modern PCs, the EFI system partition is often mounted at , or . References Linux kernel Unix file system technology System administration File system directories
Boot folder
Technology
430
491,073
https://en.wikipedia.org/wiki/Field%20%28computer%20science%29
In data hierarchy, a field (data field) is a variable in a record. A record, also known as a data structure, allows logically related data to be identified by a single name. Identifying related data as a single group is central to the construction of understandable computer programs. The individual fields in a record may be accessed by name, just like any variable in a computer program. Each field in a record has two components. One component is the field's datatype declaration. The other component is the field's identifier. Memory fields Fields may be stored in random access memory (RAM). The following Pascal record definition has three field identifiers: firstName, lastName, and age. The two name fields have a datatype of an array of character. The age field has a datatype of integer. type PersonRecord = record lastName : array [ 1 .. 20 ] of Char; firstName : array [ 1 .. 20 ] of Char; age : Integer end; In Pascal, the identifier component precedes a colon, and the datatype component follows the colon. Once a record is defined, variables of the record can be allocated. Once the memory of the record is allocated, a field can be accessed like a variable by using the dot notation. var alice : PersonRecord; alice.firstName := 'Alice'; The term field has been replaced with the terms data member and attribute. The following Java class has three attributes: firstName, lastName, and age. public class PersonRecord { private String firstName; private String lastName; private int age; } File fields Fields may be stored in a random access file. A file may be written to or read from in an arbitrary order. To accomplish the arbitrary access, the operating system provides a method to quickly seek around the file. Once the disk head is positioned at the beginning of a record, each file field can be read into its corresponding memory field. File fields are the main storage structure in the Indexed Sequential Access Method (ISAM). In relational database theory, the term field has been replaced with the terms column and attribute. See also References Data modeling
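As an illustration of file fields (a hedged sketch: the 44-byte record layout, field names and file name are assumed purely for this example and mirror the PersonRecord above), the Python standard-library struct and seek facilities allow any fixed-length record to be read or written in place:

    import struct

    RECORD = struct.Struct("20s20si")   # lastName, firstName, age

    def write_record(f, index, last, first, age):
        f.seek(index * RECORD.size)     # position the file at the start of record `index`
        f.write(RECORD.pack(last.ljust(20).encode(), first.ljust(20).encode(), age))

    def read_record(f, index):
        f.seek(index * RECORD.size)
        last, first, age = RECORD.unpack(f.read(RECORD.size))
        return last.decode().rstrip(), first.decode().rstrip(), age

    with open("people.dat", "w+b") as f:
        write_record(f, 3, "Liddell", "Alice", 7)   # record 3 can be written without records 0..2
        print(read_record(f, 3))                    # ('Liddell', 'Alice', 7)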
Field (computer science)
Engineering
452
16,669,839
https://en.wikipedia.org/wiki/1922%20Zulu
1922 Zulu, provisional designation , is a carbonaceous asteroid in a strongly unstable resonance with Jupiter, located in the outermost regions of the asteroid belt, and approximately 20 kilometers in diameter. It was discovered on 25 April 1949, by South African astronomer Ernest Johnson at Union Observatory in Johannesburg, and named for the South African Zulu people. Orbit and classification Zulu is one of few strongly unstable asteroids located near the 2:1 orbital resonance with the gas giant Jupiter, that corresponds to one of the prominent Kirkwood gaps in the asteroid belt. It orbits the Sun at a distance of 1.7–4.8 AU once every 5 years and 10 months (2,126 days). Its orbit has an eccentricity of 0.48 and an inclination of 35° with respect to the ecliptic. The body's observation arc begins with its official discovery observation at Johannesburg, as no precoveries were taken and no prior identifications were made. Zulu was lost shortly after its 1949-discovery (see Lost asteroid), and only rediscovered in 1974 by Richard Eugene McCrosky, Cheng-yuan Shao and JH Bulger based on a predicted position by C. M. Bardwell of the Cincinnati Observatory. It is quite highly inclined for asteroids in the asteroid belt, with an inclination of 35.4 degrees. This may be related to its 2:1 resonance with Jupiter. Physical characteristics In May 2002, a rotational lightcurve of Zulu was obtained from photometric observations by American astronomer Robert Stephens at the Santana Observatory in California. Lightcurve analysis gave a well-defined rotation period of 18.64 hours with a brightness variation of 0.11 magnitude (). One month later, French amateur astronomers René Roy and Laurent Brunetto obtained another lightcurve with a concurring period of 18.65 hours and an amplitude of 0.09 magnitude (). According to the survey carried out by NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, Zulu measures 12.41 and 20.561 kilometers in diameter and its surface has an albedo of 0.055 and 0.16. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous C-type asteroid of 0.057 and calculates a diameter of 19.30 kilometers with an absolute magnitude of 12.3. Naming This minor planet was named after the South African Zulu people, in recognition of the tribesmen who devotedly worked at the Johannesburg Union Observatory. The name also closely relates to 1362 Griqua and 1921 Pala, which also received tribal names and librate in the 2:1 ratio of Jupiter's mean motion as well. The official was published by the Minor Planet Center on 20 February 1976 (). See also Griqua group References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 001922 Discoveries by Ernest Leonard Johnson Named minor planets 19490425 Recovered astronomical objects
1922 Zulu
Astronomy
655
43,167,221
https://en.wikipedia.org/wiki/Coastal%20upwelling%20of%20the%20South%20Eastern%20Arabian%20Sea
Coastal upwelling of the South Eastern Arabian Sea (SEAS) is a typical eastern boundary upwelling system (EBUS) similar to the California, Benguela, Canary Island, and Peru-Chile systems. Unlike those four, the SEAS upwelling system needs to be explored in a more focused manner to clearly understand the chemical and biological responses associated with this coastal process. The coastal upwelling in the south-eastern Arabian Sea occurs seasonally. It begins in mid-spring (mid-May) along the southern tip of India and, as the season advances, it spreads northward. It is not a uniform wind-driven upwelling system, but is driven by various factors. While at Cape Comorin it can be modeled as just wind-driven, as the phenomenon progresses northward along the west coast of India, longshore wind stresses play an increasing role, as do atmospheric effects from the Bay of Bengal, such as Kelvin and Rossby waves. References Physical oceanography Aquatic ecology
Coastal upwelling of the South Eastern Arabian Sea
Physics,Biology
199
8,921,507
https://en.wikipedia.org/wiki/Subacute%20combined%20degeneration%20of%20spinal%20cord
Subacute combined degeneration of spinal cord, also known as myelosis funiculus, or funicular myelosis, also Lichtheim's disease, and Putnam-Dana syndrome, refers to degeneration of the posterior and lateral columns of the spinal cord as a result of vitamin B12 deficiency (most common). It may also occur similarly as a result of vitamin E deficiency and copper deficiency. It is usually associated with pernicious anemia. Signs and symptoms The onset is gradual and uniform. The pathological findings of subacute combined degeneration consist of patchy losses of myelin in the dorsal and lateral columns. Patients present with weakness of the legs, arms, and trunk, and tingling and numbness that progressively worsens. Vision changes and change of mental state may also be present. Bilateral spastic paresis may develop and pressure, vibration, and touch sense are diminished. A positive Babinski sign may be seen. Prolonged deficiency of vitamin B12 leads to irreversible nervous system damage. HIV-associated vacuolar myelopathy can present with a similar pattern of dorsal column and corticospinal tract demyelination. It has been thought that if someone is deficient in vitamin B12 and folic acid, the vitamin B12 deficiency must be treated first. However, the basis for this has been challenged; due to ethical considerations it can no longer be tested whether "neuropathy is made more severe as a result of giving folic acid to vitamin B12-deficient individuals", and if this were the case, the mechanism would remain unclear. Administration of nitrous oxide anesthesia can precipitate subacute combined degeneration in people with subclinical vitamin B12 deficiency, while chronic nitrous oxide exposure can cause it even in persons with normal B12 levels. Posterior column dysfunction decreases vibratory sensation and proprioception (joint sense). Lateral corticospinal tract dysfunction produces spasticity and dorsal spinocerebellar tract dysfunction causes ataxia. Cause In general, the most common cause of this condition is a deficiency of vitamin B12. This may be due to a dietary deficiency, malabsorption in the terminal ileum, lack of intrinsic factor secreted from gastric parietal cells, or low gastric pH inhibiting attachment of intrinsic factor to ileal receptors. The disease can also be caused by inhalation of nitrous oxide, which inactivates vitamin B12. Vitamin E deficiency, which is associated with malabsorption disorders such as cystic fibrosis and Bassen-Kornzweig syndrome, can cause a similar presentation due to the degeneration of the dorsal columns. Diagnosis Serum vitamin B12, methylmalonic acid, Schilling test, and a complete blood count, looking for megaloblastic anemia if there is also folic acid deficiency or macrocytic anemia. The Schilling test is no longer available in most areas. MRI-T2 images may reveal increased signal within the white matter of the spinal cord, predominantly in the posterior columns and possibly in the spinothalamic tracts. Treatment Therapy with vitamin B12 results in partial to full recovery where SACD has been caused by vitamin B12 deficiency, depending on the duration and extent of neurodegeneration. References External links Histopathology Neurodegenerative disorders
Subacute combined degeneration of spinal cord
Chemistry
711
445,020
https://en.wikipedia.org/wiki/Dilution%20refrigerator
A 3He/4He dilution refrigerator is a cryogenic device that provides continuous cooling to temperatures as low as 2 mK, with no moving parts in the low-temperature region. The cooling power is provided by the heat of mixing of the helium-3 and helium-4 isotopes. The dilution refrigerator was first proposed by Heinz London in the early 1950s, and was experimentally realized in 1964 in the Kamerlingh Onnes Laboratorium at Leiden University. Theory of operation The refrigeration process uses a mixture of two isotopes of helium: helium-3 and helium-4. When cooled below approximately 870 millikelvins, the mixture undergoes spontaneous phase separation to form a 3He-rich phase (the concentrated phase) and a 3He-poor phase (the dilute phase). As shown in the phase diagram, at very low temperatures the concentrated phase is essentially pure 3He, while the dilute phase contains about 6.6% 3He and 93.4% 4He. The working fluid is 3He, which is circulated by vacuum pumps at room temperature. The 3He enters the cryostat at a pressure of a few hundred millibar. In the classic dilution refrigerator (known as a wet dilution refrigerator), the 3He is precooled and purified by liquid nitrogen at 77 K and a 4He bath at 4.2 K. Next, the 3He enters a vacuum chamber where it is further cooled to a temperature of 1.2–1.5 K by the 1 K bath, a vacuum-pumped 4He bath (as decreasing the pressure of the helium reservoir depresses its boiling point). The 1 K bath liquefies the 3He gas and removes the heat of condensation. The 3He then enters the main impedance, a capillary with a large flow resistance. It is cooled by the still (described below) to a temperature 500–700 mK. Subsequently, the 3He flows through a secondary impedance and one side of a set of counterflow heat exchangers where it is cooled by a cold flow of 3He. Finally, the pure 3He enters the mixing chamber, the coldest area of the device. In the mixing chamber, two phases of the 3He–4He mixture, the concentrated phase (practically 100% 3He) and the dilute phase (about 6.6% 3He and 93.4% 4He), are in equilibrium and separated by a phase boundary. Inside the chamber, the 3He is diluted as it flows from the concentrated phase through the phase boundary into the dilute phase. The heat necessary for the dilution is the useful cooling power of the refrigerator, as the process of moving the 3He through the phase boundary is endothermic and removes heat from the mixing chamber environment. The 3He then leaves the mixing chamber in the dilute phase. On the dilute side and in the still the 3He flows through superfluid 4He which is at rest. The 3He is driven through the dilute channel by a pressure gradient just like any other viscous fluid. On its way up, the cold, dilute 3He cools the downward flowing concentrated 3He via the heat exchangers and enters the still. The pressure in the still is kept low (about 10 Pa) by the pumps at room temperature. The vapor in the still is practically pure 3He, which has a much higher partial pressure than 4He at 500–700 mK. Heat is supplied to the still to maintain a steady flow of 3He. The pumps compress the 3He to a pressure of a few hundred millibar and feed it back into the cryostat, completing the cycle. Cryogen-free dilution refrigerators Modern dilution refrigerators can precool the 3He with a cryocooler in place of liquid nitrogen, liquid helium, and a 1 K bath. No external supply of cryogenic liquids is needed in these "dry cryostats" and operation can be highly automated. 
However, dry cryostats have high energy requirements and are subject to mechanical vibrations, such as those produced by pulse tube refrigerators. The first experimental machines were built in the 1990s, when (commercial) cryocoolers became available, capable of reaching a temperature lower than that of liquid helium and having sufficient cooling power (on the order of 1 watt at 4.2 K). Pulse tube coolers are commonly used cryocoolers in dry dilution refrigerators. Dry dilution refrigerators generally follow one of two designs. One design incorporates an inner vacuum can, which is used to initially precool the machine from room temperature down to the base temperature of the pulse tube cooler (using heat-exchange gas). However, every time the refrigerator is cooled down, a vacuum seal that holds at cryogenic temperatures needs to be made, and low temperature vacuum feed-throughs must be used for the experimental wiring. The other design is more demanding to realize, requiring heat switches that are necessary for precooling, but no inner vacuum can is needed, greatly reducing the complexity of the experimental wiring. Cooling power The cooling power (in watts) at the mixing chamber is approximately given by Q = n3 (95 Tm² − 11 Ti²), with n3 in mol/s and temperatures in kelvin, where n3 is the 3He molar circulation rate, Tm is the mixing-chamber temperature, and Ti the temperature of the 3He entering the mixing chamber. There will only be useful cooling when Ti < √(95/11) Tm ≈ 2.9 Tm. This sets a maximum temperature of the last heat exchanger, as above this all cooling power is used up only cooling the incident 3He. Inside the mixing chamber there is negligible thermal resistance between the pure and dilute phases, and the cooling power reduces to Q = 84 n3 Tm². A low Tm can only be reached if Ti is low. In dilution refrigerators, Ti is reduced by using heat exchangers as shown in the schematic diagram of the low-temperature region above. However, at very low temperatures this becomes more and more difficult due to the so-called Kapitza resistance. This is a heat resistance at the surface between the helium liquids and the solid body of the heat exchanger. It is inversely proportional to T⁴ and the heat-exchanging surface area A. In other words: to get the same heat resistance one needs to increase the surface by a factor 10,000 if the temperature reduces by a factor 10. In order to get a low thermal resistance at low temperatures (below about 30 mK), a large surface area is needed. The lower the temperature, the larger the area. In practice, one uses very fine silver powder. Limitations There is no fundamental limiting low temperature of dilution refrigerators. Yet the temperature range is limited to about 2 mK for practical reasons. At very low temperatures, both the viscosity and the thermal conductivity of the circulating fluid become larger if the temperature is lowered. To reduce the viscous heating, the diameters of the inlet and outlet tubes of the mixing chamber must go as T⁻³, and to get low heat flow the lengths of the tubes should go as T⁻⁸. That means that, to reduce the temperature by a factor 2, one needs to increase the diameter by a factor of 8 and the length by a factor of 256. Hence the volume should be increased by a factor of 2¹⁴ = 16,384. In other words: every cm³ at 2 mK would become 16,384 cm³ at 1 mK. The machines would become very big and very expensive. There is a powerful alternative for cooling below 2 mK: nuclear demagnetization. 
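As a rough illustration (a sketch that assumes the commonly quoted cooling-power relation with coefficients 95 and 11, and an arbitrary illustrative circulation rate), the magnitudes involved can be estimated as follows:

    def cooling_power(n3_mol_per_s, t_mix, t_in):
        """Approximate mixing-chamber cooling power in watts (temperatures in kelvin)."""
        return n3_mol_per_s * (95.0 * t_mix**2 - 11.0 * t_in**2)

    n3 = 100e-6                                # 100 micromol/s of circulating 3He (assumed value)
    print(cooling_power(n3, 0.020, 0.040))     # about 2.0 microwatts at Tm = 20 mK, Ti = 40 mK
    print(cooling_power(n3, 0.020, 0.020))     # ideal case Ti = Tm: 84*n3*Tm**2, about 3.4 microwatts

    # Useful cooling requires Ti < sqrt(95/11)*Tm, i.e. Ti below roughly 2.9 times Tm.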
See also Adiabatic demagnetization Magnetic refrigeration Helium-3 refrigerator Refrigerated transport Dewar Timeline of low-temperature technology References External links Lancaster University, Ultra Low Temperature Physics – Description of dilution refrigeration. Harvard University, Marcus Lab – Hitchhiker's Guide to the Dilution Refrigerator. Cryogenics Cooling technology
Dilution refrigerator
Physics
1,616
16,123,532
https://en.wikipedia.org/wiki/Sleeping%20while%20on%20duty
Sleeping while on duty or sleeping on the job – falling asleep while one is not supposed to – is considered gross misconduct and grounds for disciplinary action, including termination of employment, in some occupations. Recently however, there has been a movement in support of sleeping, or napping at work, with scientific studies highlighting health and productivity benefits, and over 6% of employers in some countries providing facilities to do so. In some types of work, such as firefighting or live-in caregiving, sleeping at least part of the shift may be an expected part of paid work time. While some employees who sleep while on duty in violation do so intentionally and hope not to get caught, others intend in good faith to stay awake, and accidentally doze. Sleeping while on duty is such an important issue that it is addressed in the employee handbook in some workplaces. Concerns that employers have may include the lack of productivity, the unprofessional appearance, and danger that may occur when the employee's duties involve watching to prevent a hazardous situation. In some occupations, such as pilots, truck and bus drivers, or those operating heavy machinery, falling asleep while on duty puts lives in danger. However, in many countries, these workers are supposed to take a break and rest every few hours. Frequency The frequency of sleeping while on duty that occurs varies depending on the time of day. Daytime employees are more likely to take short naps, while graveyard shift workers have a higher likelihood of sleeping for a large portion of their shift, sometimes intentionally. A survey by the National Sleep Foundation has found that 30% of participants have admitted to sleeping while on duty. More than 90% of Americans have experienced a problem at work because of a poor night's sleep. One in four admit to shirking duties on the job for the same reason, either calling in sick or napping during work hours. Views Employers have varying views of sleeping while on duty. Some companies have instituted policies to allow employees to take napping breaks during the workday in order to improve productivity while others are strict when dealing with employees who sleep while on duty and use high-tech means, such as video surveillance, to catch their employees who may be sleeping on the job. Those who are caught in violation may face disciplinary action such as suspension or firing. Some employees sleep, nap, or take a power-nap only during their allotted break time at work. This may or may not be permitted, depending on the employer's policies. Some employers may prohibit sleeping, even during unpaid break time, for various reasons, such as the unprofessional appearance of a sleeping employee, the need for an employee to be available during an emergency, or legal regulations. Employees who may endanger others by sleeping on the job may face more serious consequences, such as legal sanctions. For example, airline pilots risk loss of their licenses. In some industries and work cultures sleeping at work is permitted and even encouraged. Such work cultures typically have flexible schedules, and variant work loads with extremely demanding periods where employees feel unable to spend time commuting. In such environments it is common for employers to provide makeshift sleeping materials for employees, such as a couch and/or inflatable mattress and blankets. This practice is particularly common in start-ups and during political campaigns. 
In those work cultures sleeping in the office is seen as evidence of dedication. In 1968, New York police officers admitted that sleeping while on duty was customary. In Japan, the practice of napping in public, called , may occur in work meetings or classes. Brigitte Steger, a scholar who focuses on Japanese culture, writes that sleeping at work is considered a sign of dedication to the job, such that one has stayed up late doing work or worked to the point of complete exhaustion, and may therefore be excusable. Notable incidents Airline pilots February 2008 – the pilots on a Go! airline flight were suspended during an investigation when it was suspected they fell asleep mid-flight from Honolulu, Hawaii to Hilo, Hawaii, resulting in their overshooting Hilo Airport by about 24 kilometers (15 miles) before turning around to land safely. January 2024 – the pilots on a Batik Air flight were suspended during an investigation when it was suspected they fell asleep mid-flight from Haluoleo International Airport to Soekarno-Hatta International Airport, resulting in their overshooting 210 nautical miles from last record of SIC activity. The co-pilot had month-old twin babies at home and was busy moving to a new house with his family during the rest period. Air traffic controllers October 1984 – Aeroflot Flight 3352 hit maintenance vehicles on the runway while attempting to land in Omsk, Russia. The ground controller, who had been up at nights due to recently becoming a father of two, allowed the workers to dry the runway during heavy rain and fell asleep on the job. 178 people were killed in the crash; the controller later killed himself in prison. October 2007 – four Italian air traffic controllers were suspended after they were caught asleep while on duty. March 2011 – the lone night shift air traffic controller at Ronald Reagan Washington National Airport fell asleep on duty. During the period he was asleep two airliners landed uneventfully. In the weeks that followed, there were other similar incidents and it was revealed that other lone air traffic controllers on duty fell asleep in the towers. This led to the resignation of United States air traffic chief Hank Krakowski and a new policy being set requiring two controllers to be on duty at all times. Bus drivers March 2011 – a tour bus driver crashed while returning from a casino in Connecticut to New York City. Fifteen people were killed and many others injured. Although the driver, who was found to be sober, denied sleeping, a survivor who witnessed the crash reported that he was speeding and sleeping. Police officers/security officers December 1947 – a Washington, D.C. police officer was fined $75 for sleeping while on duty. October 2007 – a CBS news story revealed nearly a dozen security guards at a nuclear power plant who were videotaped sleeping while on duty. December 2009 – The New York Post published a photo of a prison guard sleeping next to an inmate at the Rikers Island penitentiary. The photo was allegedly captured on the cell phone camera of another guard. Both guards were disciplined for this action, the sleeping officer for sleeping and the officer who took the photo for violating a prison policy forbidding cell phones while on duty. The inmate was not identified. August 2019 - The prison guards in charge of guarding Jeffrey Epstein were publicized as sleeping on duty and online shopping while he was on suicide watch. Epstein was found dead in his cell, leading to the investigation of the prison guards. U.S. 
Prosecutors eventually dropped the criminal case against the guards. Other March 1987 – The Peach Bottom Nuclear Generating Station was ordered shut down by the Nuclear Regulatory Commission after four operators were found sleeping while on duty. See also Nap Power nap References Duty Grounds for termination of employment Occupational hazards
Sleeping while on duty
Biology
1,417
44,950,562
https://en.wikipedia.org/wiki/Design%E2%80%93Expert
Design–Expert is a statistical software package from Stat-Ease Inc. that is specifically dedicated to performing design of experiments (DOE). Design–Expert offers comparative tests, screening, characterization, optimization, robust parameter design, mixture designs and combined designs. Design–Expert provides test matrices for screening up to 50 factors. Statistical significance of these factors is established with analysis of variance (ANOVA). Graphical tools help identify the impact of each factor on the desired outcomes and reveal abnormalities in the data. History Stat-Ease released its first version of Design–Expert in 1988. In 1996 the firm released version 5, which was the first version of the software designed for Microsoft Windows. Version 6.0 moved to a full 32-bit architecture and fuller compliance with Windows visual conventions, and also allowed up to 256 runs for two-level blocked designs. Version 7.0 added 3D surface plots for category factors and a t-value effects Pareto chart among many other functional additions. This version also included the ability to type variable constraints directly into the design in ratio form. Version 9 incorporates split-plot factorial designs including two-level full and fractional factorials, general factorials and optimal factorials. Features Design–Expert offers test matrices for screening up to 50 factors. A power calculator helps establish the number of test runs needed. ANOVA is provided to establish statistical significance. Based on the validated predictive models, a numerical optimizer helps the user determine the ideal values for each of the factors in the experiment. Design–Expert provides 11 graphs in addition to text output to analyze the residuals. The software determines the main effects of each factor as well as the interactions between factors by varying the values of all factors in parallel. A response surface model (RSM) can be used to map out a design space using a relatively small number of experiments. RSM provides an estimate of the value of the responses for every possible combination of the factors, making it possible to comprehend a multi-dimensional surface with non-linear shapes. The optimization feature can be used to calculate the optimum operating parameters for a process. Distribution and events Whilst Stat-Ease, Inc. is based in Minneapolis, MN, the software is used globally. To assist with distribution, the company partners with a number of external international software resellers and statistical support providers. Alongside running regular online webinars demonstrating the software's functionality and tools, Stat-Ease, Inc. also hosts an annual DOE Summit; since 2020 this has been hosted online, although biennial conferences were held across Europe prior to this in collaboration with its international reseller partners. Books referencing Design-Expert Douglas C. Montgomery, “Design and Analysis of Experiments, 8th Edition,” John Wiley & Sons Inc; 8th edition (April 2012, ©2013). Raymond H. Myers, Douglas C. Montgomery, Christine M. Anderson-Cook, “Response Surface Methodology: Process and Product Optimization Using Designed Experiments,” John Wiley & Sons Inc; 3rd edition (January 14, 2009). Mark J. Anderson, Patrick J. Whitcomb, “DOE Simplified: Practical Tools for Effective Experimentation, 2nd Edition,” Productivity Press (July 30, 2007). Patrick J. Whitcomb, Mark J. 
Anderson, “RSM Simplified: Optimizing Processes Using Response Surface Methods for Design of Experiments,” Productivity Press (November 17, 2004). References Design of experiments Statistical software
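The screening-and-ANOVA workflow described in the Features section (build a two-level test matrix, fit a model, see which factor effects stand out) can be sketched in a few lines of generic Python. This is only an illustrative toy using invented factor names and simulated data, not Design–Expert's own engine or file format.

```python
# Illustrative sketch (not Design-Expert itself): estimate main effects and
# two-factor interactions from a two-level full factorial screening design.
# Factors A, B, C and the response data are invented.
import itertools
import numpy as np

# 2^3 full factorial in coded units (-1 / +1).
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Hypothetical response: strong A effect, moderate B, an A*B interaction, noise.
rng = np.random.default_rng(0)
y = 50 + 6 * runs[:, 0] + 3 * runs[:, 1] + 2 * runs[:, 0] * runs[:, 1] \
    + rng.normal(0, 0.5, len(runs))

# Model matrix: intercept, main effects, and all two-factor interactions.
X = np.column_stack([
    np.ones(len(runs)),           # intercept
    runs,                         # A, B, C
    runs[:, 0] * runs[:, 1],      # AB
    runs[:, 0] * runs[:, 2],      # AC
    runs[:, 1] * runs[:, 2],      # BC
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, c in zip(["mean", "A", "B", "C", "AB", "AC", "BC"], coef):
    effect = c if name == "mean" else 2 * c   # effect = 2 * coefficient in coded units
    print(f"{name:>4}: {effect:6.2f}")
```

A dedicated package such as Design–Expert would add to this the ANOVA significance tests, residual diagnostics, response surface modelling and numerical optimization described above.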
Design–Expert
Mathematics
704
66,885,303
https://en.wikipedia.org/wiki/Center%20for%20Resilient%20Networks%20and%20Applications
The Center for Resilient Networks and Applications (CRNA) is owned by Simula Research Laboratories (SRL) and Oslo Metropolitan University. The center was established in 2014 as a response to modern society's increasing dependence on applications running on top of the Internet and the serious societal consequences of outages. CRNA is a culmination of earlier research undertaken between 2006 and 2014 in projects called Resilient Networks and Resilient Networks II. The center receives its base funding from the Norwegian government, initially from the Ministry of Transport and Communications, but from 2019 it has been placed under the Ministry of Digitalisation. CRNA's mandate includes operating a country-wide infrastructure for monitoring the reliability and performance of mobile networks, the NorNet Edge. So far seven annual reports have been published tracking the evolution of Norwegian mobile operators and highlighting areas with scope for improvement. The research undertaken at CRNA informs the Norwegian Government's policy on communications infrastructure, and the importance of the work is expressed in the Government's Digital agenda. References Telecommunications Internet in Norway Mobile technology Computer networks
Center for Resilient Networks and Applications
Technology
218
2,402,369
https://en.wikipedia.org/wiki/Timolol
Timolol is a beta blocker medication used either by mouth or as eye drops. As eye drops it is used to treat increased pressure inside the eye such as in ocular hypertension and glaucoma. By mouth it is used for high blood pressure, chest pain due to insufficient blood flow to the heart, to prevent further complications after a heart attack, and to prevent migraines. Common side effects with the drops is irritation of the eye. Common side effects by mouth include tiredness, slow heart beat, itchiness, and shortness of breath. Other side effects include masking the symptoms of low blood sugar in those with diabetes. Use is not recommended in those with asthma, uncompensated heart failure, or chronic obstructive pulmonary disease (COPD). It is unclear if use during pregnancy is safe for the fetus. Timolol is a non-selective beta blocker. Timolol was patented in 1968, and came into medical use in 1978. It is on the World Health Organization's List of Essential Medicines. Timolol is available as a generic medication. In 2022, it was the 155th most commonly prescribed medication in the United States, with more than 3million prescriptions. Medical uses By mouth In its by mouth or oral form, it is used: to treat high blood pressure to prevent heart attacks to prevent migraine headaches The combination of timolol and the alpha-1 blocker prazosin has sedative effects. Eye drops In its eye drop form it is used to treat open-angle and, occasionally, secondary glaucoma. The mechanism of action of timolol is probably the reduction of the formation of aqueous humor in the ciliary body in the eye. It was the first beta blocker approved for topical use in treatment of glaucoma in the United States (1978). When used by itself, it depresses intraocular pressure (IOP) 18–34% below baseline within first few treatments. However, there are short-term escape and long-term drift effects in some people. That is, tolerance develops. It may reduce the extent of the daytime IOP curve up to 50%. The IOP is higher during sleep. Efficacy of timolol in lowering IOP during the sleep period may be limited. It is a 5–10× more potent beta blocker than propranolol. Timolol is light-sensitive; it is usually preserved with 0.01% benzalkonium chloride (BAC), but also comes BAC-free. It can also be used in combination with pilocarpine, carbonic anhydrase inhibitors or prostaglandin analogs. A Cochrane review compared the effect of timolol versus brimonidine in slowing the progression of open angle glaucoma in adults but found insufficient evidence to come to conclusions. On the skin In its gel form it is used on the skin to treat infantile hemangiomas. Contraindications The medication should not be taken by individuals with: An allergy to timolol or any other beta-blockers Asthma or severe chronic obstructive bronchitis A slow heart rate (bradycardia), or a heart block Heart failure Side effects The most serious possible side effects include cardiac arrhythmias and severe bronchospasms. Timolol can also lead to fainting, congestive heart failure, depression, confusion, worsening of Raynaud's syndrome and impotence. Side effects when given in the eye include: burning sensation, eye redness, superficial punctate keratopathy, corneal numbness. Formulations It is available in tablet and liquid formulations. 
For ophthalmic use, timolol is also available combined: with carbonic anhydrase inhibitors: timolol and brinzolamide timolol and dorzolamide with α2 agonists: timolol and brimonidine with prostaglandin analogs: timolol and latanoprost timolol and travoprost Brand names Timolol is sold under many brand names worldwide. Timolol eye drops are sold under the brand names Timoptic and Istalol among others. References External links 4-Morpholinyl compounds Beta blockers Ethers Ophthalmology drugs Secondary alcohols Thiadiazoles Wikipedia medicine articles ready to translate World Health Organization essential medicines
Timolol
Chemistry
936
30,950,412
https://en.wikipedia.org/wiki/Chronostasis
Chronostasis (from Greek , , 'time' and , , 'standing') is a type of temporal illusion in which the first impression following the introduction of a new event or task-demand to the brain can appear to be extended in time. For example, chronostasis temporarily occurs when fixating on a target stimulus, immediately following a saccade (i.e., quick eye movement). This elicits an overestimation in the temporal duration for which that target stimulus (i.e., postsaccadic stimulus) was perceived. This effect can extend apparent durations by up to half a second and is consistent with the idea that the visual system models events prior to perception. A common occurrence of this illusion is known as the stopped-clock illusion, where the second hand of an analog clock appears to stay still for longer than normal when looking at it for the first time. This illusion can also occur in the auditory and tactile domain. For instance, a study suggests that when someone listens to a ringing tone through a telephone, while repetitively switching the receiver from one ear to the other, it causes the caller to overestimate the temporal duration between rings. Mechanism of action Overall, chronostasis occurs as a result of a disconnection in the communication between visual sensation and perception. Sensation, information collected from our eyes, is usually directly interpreted to create our perception. This perception is the collection of information that we consciously interpret from visual information. However, quick eye movements known as saccades disrupt this flow of information. Because research into the neurology associated with visual processing is ongoing, there is renewed debate regarding the exact timing of changes in perception that lead to chronostasis. However, below is a description of the general series of events that lead to chronostasis, using the example of a student looking up from his desk toward a clock in the classroom. The eyes receive information from the environment regarding one particular focus. This sensory input is sent directly to the visual cortex to be processed. After visual processing, we consciously perceive this object of focus. In the context of a student in a classroom, the student's eyes focus on a paper on his desk. After his eyes collect light reflected off the paper and this information is processed in his visual cortex, the student consciously perceives the paper in front of him. Following either a conscious decision or an involuntary perception of a stimulus in the periphery of the visual field, the eyes intend to move to a second target of interest. For the student described above, this may occur as he decides that he wishes to check the clock at the front of the classroom. The muscles of the eye contract and it begins to quickly move towards the second object of interest through an action known as a saccade. As soon as this saccade begins, a signal is sent from the eye back to the brain. This signal, known as an efferent cortical trigger or efference copy, communicates to the brain that a saccade is about to begin. During saccades, the sensitivity of visual information collected by the eyes is greatly reduced and, thus, any image collected during this saccade is very blurry. In order to prevent the visual cortex from processing blurred sensory information, visual information collected by the eyes during a saccade is suppressed through a process known as saccadic masking. This is also the same mechanism used to prevent the experience of motion blur. 
Following the completion of the saccade, the eyes now focus on the second object of interest. As soon as the saccade concludes, another efferent cortical trigger is sent from the eyes back to the brain. This signal communicates to the brain that the saccade has concluded. Prompted by this signal, the visual cortex once again resumes processing visual information. For the student, his eyes have now reached the clock and his brain's visual cortex begins to process information from his eyes. However, this second efferent trigger also communicates to the brain that a period of time has been missing from perception. To fill this gap in perception, visual information is processed in a manner known as neural antedating or backdating. In this visual processing, the gap in perception is "filled in" with information gathered after the saccade. For the student, the gap of time that occurred during the saccade is substituted with the processed image of the clock. Thus, immediately following the saccade, the second hand of the clock appears to stop in place before moving. In studying chronostasis and its underlying causes, there is potential bias in the experimental setting. In many experiments, participants are asked to perform some sort of task corresponding to sensory stimuli. This could cause the participants to anticipate stimuli, thus leading to bias. Also, many mechanisms involved in chronostasis are complex and difficult to measure. It is difficult for experimenters to observe the perceptive experiences of participants without "being inside their mind." Furthermore, experimenters normally do not have access to the neural circuitry and neurotransmitters located inside the braincases of their subjects. Modulating factors Because of its complexity, there are various characteristics of stimuli and physiological actions that can alter the way one experiences chronostasis. Saccadic amplitude The greater the amplitude (or duration) of a saccade, the more severe the resulting overestimation. The further the student in the above example's eyes must travel in order to reach the clock, the more dramatic his perception of chronostasis. This connection supports the assertion that overestimation occurs in order to fill in the length of time omitted by saccadic masking. This would mean that, if the saccade lasted for a longer period of time, there would be more time that needed to be filled in with overestimation. Attention redirection When shifting focus from one object to a second object, the saccadic movement of one's eyes is also accompanied by a conscious shift of attention. In the context of the stopped clock illusion, not only do your eyes move, but you also shift your attention to the clock. This led researchers to question whether the movement of the eyes or simply the shift of the observer's attention towards the second stimulus initiated saccadic masking. Experiments in which subjects diverted only their attention without moving their eyes revealed that the redirection of attention alone was not enough to initiate chronostasis. This suggests that attention is not the time marker used when perception is filled back in. Rather, the physical movement of the eyes themselves serves as this critical marker. However, this relationship between attention and perception in the context of chronostasis is often difficult to measure and may be biased in a laboratory setting. 
Because subjects may be biased as they are instructed to perform actions or to redirect their attention, the concept of attention serving as a critical time marker for chronostasis may not be entirely dismissed. Spatial continuity Following investigation, one may wonder if chronostasis still occurs if the saccadic target is moving. In other words, would you still experience chronostasis if the clock you looked at were moving? Through experimentation, researchers found that the occurrence of chronostasis in the presence of a moving stimulus was dependent on the awareness of the subject. If the subject were aware that the saccadic target was moving, they would not experience chronostasis. Conversely, if the subject were not aware of the saccadic target's movement, they did experience chronostasis. This is likely because antedating does not occur in the case of a consciously moving target. If, after the saccade, the eye correctly falls on the target, the brain assumes this target has been at this location throughout the saccade. If the target changes position during the saccade, the interruption of spatial continuity makes the target appear novel. Stimulus properties Properties of stimuli themselves have shown to have significant effects on the occurrence of chronostasis. In particular, the frequency and pattern of stimuli affect the observer's perception of chronostasis. In regard to frequency, the occurrence of many, similar events can exaggerate duration overestimation and makes the effects of chronostasis more severe. In regard to repetition, repetitive stimuli appear to be of shorter subjective duration than novel stimuli. This is due to neural suppression within the cortex. Investigation using various imaging techniques has shown that repetitive firing of the same cortical neurons cause them to be suppressed over time. This occurs as a form of neural adaptation. Sensory domain The occurrence of chronostasis extends beyond the visual domain into the auditory and tactile domains. In the auditory domain, chronostasis and duration overestimation occur when observing auditory stimuli. One common example is a frequent occurrence when making telephone calls. If, while listening to the phone's ring tone, research subjects move the phone from one ear to the other, the length of time between rings appears longer. In the tactile domain, chronostasis has persisted in research subjects as they reach for and grasp objects. After grasping a new object, subjects overestimate the time in which their hand has been in contact with this object. In other experiments, subjects turning a light on with a button were conditioned to experience the light before the button press. This suggests that, much in the same way subjects overestimate the duration of the second hand as they watch it, they may also overestimate the duration of auditory and tactile stimuli. This has led researchers to investigate the possibility that a common timing mechanism or temporal duration scheme is used for temporal perception of stimuli across a variety of sensory domains. See also References External links Michael Stevens provides a brief overview of the stopped clock illusion A brief overview of temporal perception by Laci Green Dan Lewis of 'Now I Know' describes saccadic masking and other visual illusions Greek words and phrases Illusions Measurement Perception Vision
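The antedating account in the Mechanism section, together with the observation that larger saccades produce larger overestimates, can be turned into a toy numerical model: if the onset of the post-saccadic image is backdated to the start of the saccade, the first interval appears longer by roughly the saccade duration. The sketch below is only an illustration of that idea, not a published model; the linear duration-versus-amplitude rule of thumb (roughly 2.2 ms per degree plus 21 ms) is an assumption borrowed from standard saccade "main sequence" approximations.

```python
# Toy illustration of the antedating account of chronostasis: the perceived
# duration of the first post-saccadic interval equals the real interval plus
# the saccade duration that gets "filled in".  Not a published model.
def saccade_duration_ms(amplitude_deg: float) -> float:
    """Rule-of-thumb main-sequence approximation (assumed): ~2.2 ms/deg + 21 ms."""
    return 2.2 * amplitude_deg + 21.0

def perceived_first_interval_ms(real_interval_ms: float, amplitude_deg: float) -> float:
    """Backdate stimulus onset to saccade start, lengthening the first interval."""
    return real_interval_ms + saccade_duration_ms(amplitude_deg)

if __name__ == "__main__":
    # Watching a clock whose second hand moves every 1000 ms:
    for amp in (5.0, 20.0, 40.0):   # small, medium and large eye movements
        print(f"{amp:5.1f} deg saccade -> first 'second' feels like "
              f"{perceived_first_interval_ms(1000.0, amp):.0f} ms")
```

The model reproduces the qualitative pattern described above (larger saccades, larger overestimates), though real measured overestimates can be considerably larger than the saccade duration alone.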
Chronostasis
Physics,Mathematics
2,041
10,582,138
https://en.wikipedia.org/wiki/Nitrile%20hydratase
Nitrile hydratases (NHases; ) are mononuclear iron or non-corrinoid cobalt enzymes that catalyse the hydration of diverse nitriles to their corresponding amides: R-C≡N + H2O → R-C(=O)NH2. Metal cofactor Nitrile hydratases use Fe(III) or Co(III) at their active sites. These ions are low-spin. The cobalt-based nitrile hydratases are rare examples of enzymes that use cobalt. Cobalt, when it occurs in enzymes, is usually bound to a corrin ring, as in vitamin B12. The mechanism by which the cobalt is transported to NHase without causing toxicity is unclear, although a cobalt permease has been identified, which transports cobalt across the cell membrane. The identity of the metal in the active site of a nitrile hydratase can be predicted by analysis of the sequence data of the alpha subunit in the region where the metal is bound. The presence of the amino acid sequence VCTLC indicates a Co-centred NHase and the presence of VCSLC indicates Fe-centred NHase. Metabolic pathway Nitrile hydratase and amidase are two hydrating and hydrolytic enzymes responsible for the sequential metabolism of nitriles in bacteria that are capable of utilising nitriles as their sole source of nitrogen and carbon, and in concert act as an alternative to nitrilase activity, which performs nitrile hydrolysis without formation of an intermediate primary amide. A sequence in the genome of the choanoflagellate Monosiga brevicollis was suggested to encode a nitrile hydratase. The M. brevicollis gene consisted of both the alpha and beta subunits fused into a single gene. Similar nitrile hydratase genes consisting of a fusion of the beta and alpha subunits have since been identified in several eukaryotic supergroups, suggesting that such nitrile hydratases were present in the last common ancestor of all eukaryotes. Industrial applications NHases have been efficiently used for the industrial production of acrylamide from acrylonitrile on a scale of 600 000 tons per annum, and for removal of nitriles from wastewater. Photosensitive NHases intrinsically possess nitric oxide (NO) bound to the iron centre, and its photodissociation activates the enzyme. Nicotinamide is produced industrially by the hydration of 3-cyanopyridine catalysed by the nitrile hydratase from Rhodococcus rhodochrous J1, producing 3500 tons per annum of nicotinamide for use in animal feed. Structure NHases are composed of two types of subunits, α and β, which are not related in amino acid sequence. NHases exist as αβ dimers or α2β2 tetramers and bind one metal atom per αβ unit. The 3-D structures of a number of NHases have been determined. The α subunit consists of a long extended N-terminal "arm", containing two α-helices, and a C-terminal domain with an unusual four-layered structure (α-β-β-α). The β subunit consists of a long N-terminal loop that wraps around the α subunit, a helical domain that packs with the N-terminal domain of the α subunit, and a C-terminal domain consisting of a β-roll and one short helix. Assembly An assembly pathway for nitrile hydratase was first proposed when gel filtration experiments found that the complex exists in both αβ and α2β2 forms. In vitro experiments using mass spectrometry further revealed that the α and β subunits first assemble to form the αβ dimer. The dimers can then subsequently interact to form a tetramer. Mechanism The metal centre is located in the central cavity at the interface between two subunits. All protein ligands to the metal atom are provided by the α subunit. 
The protein ligands to the iron are the sidechains of the three cysteine (Cys) residues and two mainchain amide nitrogens. The metal ion is octahedrally coordinated, with the protein ligands at the five vertices of an octahedron. The sixth position, accessible to the active site cleft, is occupied either by NO or by a solvent-exchangeable ligand (hydroxide or water). The two Cys residues coordinated to the metal are post-translationally modified to Cys-sulfinic (Cys-SO2H) and -sulfenic (Cys-SOH) acids. Quantum chemical studies predicted that the Cys-SOH residue might play a role as either a base (activating a nucleophilic water molecule) or as a nucleophile. Subsequently, the functional role of the SOH center as nucleophile has obtained experimental support. References Further reading Metalloproteins Cobalt enzymes Iron enzymes EC 4.2.1
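The metal-prediction rule quoted in the Metal cofactor section (a VCTLC motif in the α-subunit metal-binding region indicating a cobalt centre, VCSLC indicating an iron centre) is simple enough to express as a short script. The sketch below illustrates only that substring rule; the example sequence is invented, and a real annotation pipeline would do considerably more than a substring search.

```python
# Minimal sketch of the motif rule described above: classify a nitrile
# hydratase alpha-subunit sequence as Co- or Fe-centred from its metal-binding
# motif.  The example sequence is invented for illustration.
def predict_metal_centre(alpha_subunit_seq: str) -> str:
    seq = alpha_subunit_seq.upper()
    if "VCTLC" in seq:
        return "cobalt (Co-centred NHase)"
    if "VCSLC" in seq:
        return "iron (Fe-centred NHase)"
    return "unknown (neither motif found)"

if __name__ == "__main__":
    fake_alpha = "MTENILRKSDVCTLCSCYPWPVLGLPPAWYK"   # hypothetical fragment
    print(predict_metal_centre(fake_alpha))          # -> cobalt (Co-centred NHase)
```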
Nitrile hydratase
Chemistry
1,048
1,201,394
https://en.wikipedia.org/wiki/Mobile%20radio%20telephone
Mobile radio telephone systems were mobile telephony systems that preceded modern cellular network technology. Since they were the predecessors of the first generation of cellular telephones, these systems are sometimes retroactively referred to as pre-cellular (or sometimes zero generation, that is, 0G) systems. Technologies used in pre-cellular systems included the Push-to-talk (PTT or manual), Mobile Telephone Service (MTS), Improved Mobile Telephone Service (IMTS), and Advanced Mobile Telephone System (AMTS) systems. These early mobile telephone systems can be distinguished from earlier closed radiotelephone systems in that they were available as a commercial service that was part of the public switched telephone network, with their own telephone numbers, rather than part of a closed network such as a police radio or taxi dispatching system. These mobile telephones were usually mounted in cars or trucks (thus called car phones), although portable briefcase models were also made. Typically, the transceiver (transmitter-receiver) was mounted in the vehicle trunk and attached to the "head" (dial, display, and handset) mounted near the driver seat. They were sold through WCCs (Wireline Common Carriers, a.k.a. telephone companies), RCCs (Radio Common Carriers), and two-way radio dealers. Origins Early examples of this technology include: Motorola, in conjunction with the Bell System, operated the first commercial mobile telephone service (MTS) in the US in 1946, as a service of the wireline telephone company. The A-Netz launched in 1952 in West Germany as the country's first public commercial mobile phone network. System 1, launched in 1959 in the United Kingdom as the 'Post Office South Lancashire Radiophone Service', covering South Lancashire and operated from a telephone exchange in Manchester, is cited as the country's first mobile phone network. However, it was manual (calls had to be connected via an operator) and for several decades had very limited coverage. The first automatic system was the Bell System's IMTS, which became available in 1964, offering automatic dialing to and from the mobile. The "Altai" mobile telephone system launched into experimental service in 1963 in the Soviet Union, becoming fully operational in 1965; it was the first automatic mobile phone system in Europe. Televerket opened its first manual mobile telephone system in Norway in 1966. Norway was later the first country in Europe to get an automatic mobile telephone system. The Autoradiopuhelin (ARP) launched in 1971 in Finland as the country's first public commercial mobile phone network. The Automatizovaný městský radiotelefon (AMR) launched in 1978 in Czechoslovakia (becoming fully operational in 1983) as the first analog mobile radio telephone system in the Eastern Bloc. The B-Netz launched in 1972 in West Germany as the country's second public commercial mobile phone network (albeit the first one that did not require human operators to connect calls). Radio Common Carrier In the US, a competing mobile telephone technology, Radio Common Carrier (RCC), operated in parallel with the Improved Mobile Telephone Service (IMTS) until the rollout of cellular AMPS systems. The service was provided from the 1960s until the 1980s when cellular AMPS systems made RCC equipment obsolete. These systems operated in a regulated environment in competition with the Bell System's MTS and IMTS. RCCs handled telephone calls and were operated by private companies and individuals. 
Some systems were designed to allow customers of adjacent RCCs to use their facilities, but the universe of RCCs did not comply with any single interoperable technical standard (a capability known in modern systems as roaming). For example, the phone of an Omaha, Nebraska-based RCC service would not be likely to work in Phoenix, Arizona. At the end of RCC's existence, industry associations were working on a technical standard that would potentially have allowed roaming, and some mobile users had multiple decoders to enable operation with more than one of the common signaling formats (600/1500, 2805, and Reach). Manual operation was often a fallback for RCC roamers. Roaming was not encouraged, in part because there was no centralized industry billing database for RCCs. Signaling formats were not standardized. For example, some systems used two-tone sequential paging to alert a mobile or handheld that a wired phone was trying to call them. Other systems used DTMF. Some used a system called Secode 2805 which transmitted an interrupted 2805 Hz tone (in a manner similar to IMTS signaling) to alert mobiles of an offered call. Some radio equipment used with RCC systems was half-duplex, push-to-talk equipment such as Motorola hand-helds or RCA 700-series conventional two-way radios. Other vehicular equipment had telephone handsets, rotary or push-button dialing, and operated full duplex like a conventional wired telephone. A few users had full-duplex briefcase telephones (which were radically advanced for their day). RCCs used paired UHF 454/459 MHz and VHF 152/158 MHz frequencies near those used by IMTS. See also Walkie-talkie List of mobile phone generations 1G 2G 3G 4G 5G 6G Mobile rig Mobile Telephone Service – a pre-cellular VHF radio system that linked to the PSTN Radiotelephone – a communications device for transmission of speech over radio Satellite telephone References External links Mobile Phone History Mobile Phone Generations Storno.co.uk Evolution of Mobile Wireless Technology from 0G to 5G Radio telephone
Mobile radio telephone
Technology
1,141
17,212,981
https://en.wikipedia.org/wiki/Van%20Oord
Royal Van Oord is a Dutch maritime contracting company that specializes in dredging, land reclamation and constructing man-made islands. Royal Van Oord has undertaken many projects throughout the world, including land reclamation, dredging and beach nourishment. The company has one of the world's largest dredging fleets. History The company was founded by Govert van Oord in 1868. In 1990 it acquired Aannemers Combinatie Zinkwerken ('ACZ') and in 2003 it acquired Ballast HAM Dredging (formed from the merger of Ballast Nedam's dredging division with Hollandse Aanneming Maatschappij ('HAM') two years earlier). King Willem-Alexander of the Netherlands awarded Van Oord the right to use the designation "Koninklijk" (Royal) on 23 November 2018. Major projects Projects undertaken by the company include the Oosterscheldekering between Schouwen-Duiveland and Noord-Beveland completed in 1986, the Palm Jumeirah in Dubai completed in 2003, the IJsselmeer pipeline in the Netherlands completed in 2006 and the World in Dubai completed in 2008. A US sales and support office was opened in Houston, Texas in 2010. In December 2016, the company entered a consortium with partners Shell, Eneco, and Mitsubishi/DGE and was awarded the Borssele III & IV project. It was obtained for a strike price of 54.50 euros per megawatt-hour, the Netherlands' lowest-ever strike price at that time. In mid-2018, Van Oord announced the acquisition of MPI Offshore from the Vroon Group. MPI Offshore has worked as a contractor specializing in offshore wind installations since 2003. Van Oord also took over the company's two ships and their crews. The transaction was subject to antitrust clearance but was expected to close by the end of September 2018. With the acquisition, Van Oord mainly strengthened its position in the British offshore wind energy market. Offices References External links Royal Van Oord official website Dredging companies Multinational companies headquartered in the Netherlands Companies based in Rotterdam
Van Oord
Engineering
452
44,954,760
https://en.wikipedia.org/wiki/77%20Ceti
77 Ceti is a single, orange-hued star located 489 light years away in the equatorial constellation of Cetus. It is faintly visible to the naked eye, having an apparent visual magnitude of 5.7. This is an evolved giant star with a stellar classification of K2 III. It is radiating 187 times the Sun's luminosity from its photosphere at an effective temperature of 4,206 K. References K-type giants Cetus Durchmusterung objects Ceti, 77 016074 012002 0752
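The quoted luminosity and effective temperature are enough for a back-of-the-envelope size estimate via the Stefan–Boltzmann relation L = 4πR²σT⁴. The sketch below performs that arithmetic; the solar effective temperature used is the standard nominal value, and the resulting radius (a few tens of solar radii) is an illustrative estimate derived here, not a measurement quoted in the article.

```python
# Back-of-the-envelope radius estimate for 77 Ceti from the values quoted
# above (L = 187 L_sun, T_eff = 4206 K), using L = 4*pi*R^2*sigma*T^4.
# Solar effective temperature is taken as 5772 K (IAU nominal value).
L_over_Lsun = 187.0
T_eff = 4206.0
T_sun = 5772.0

# R/R_sun = sqrt(L/L_sun) * (T_sun/T_eff)^2
R_over_Rsun = L_over_Lsun ** 0.5 * (T_sun / T_eff) ** 2
print(f"Estimated radius: {R_over_Rsun:.0f} solar radii")  # roughly 26 R_sun
```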
77 Ceti
Astronomy
115
32,181,185
https://en.wikipedia.org/wiki/Calreticulin%20protein%20family
In molecular biology, the calreticulin protein family is a family of calcium-binding proteins. This family includes calreticulin, calnexin and calmegin. References Protein families
Calreticulin protein family
Biology
41
31,808,616
https://en.wikipedia.org/wiki/ISO/IEC%20JTC%201/SC%2027
ISO/IEC JTC 1/SC 27 Information security, cybersecurity and privacy protection is a standardization subcommittee of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). ISO/IEC JTC 1/SC 27 develops International Standards, Technical Reports, and Technical Specifications within the field of information security. Standardization activity by this subcommittee includes general methods, management system requirements, techniques and guidelines to address information security, cybersecurity and privacy. Drafts of International Standards by ISO/IEC JTC 1 or any of its subcommittees are sent out to participating national standardization bodies for ballot, comments and contributions. Publication as an ISO/IEC International Standard requires approval by a minimum of 75% of the national bodies casting a vote. The international secretariat of ISO/IEC JTC 1/SC 27 is the Deutsches Institut für Normung (DIN) located in Germany. History ISO/IEC JTC 1/SC 27 was founded by ISO/IEC JTC 1 in 1990. The subcommittee was formed when ISO/IEC JTC 1/SC 20, which covered standardization within the field of security techniques, covering "secret-key techniques" (ISO/IEC JTC 1/SC 20/WG 1), "public-key techniques" (ISO/IEC JTC 1/SC 20/WG 2), and "data encryption protocols" (ISO/IEC JTC 1/SC 20/WG 3) was disbanded. This allowed for ISO/IEC JTC 1/SC 27 to take over the work of ISO/IEC JTC 1/SC 20 (specifically that of its first two working groups) as well as to extend its scope to other areas within the field of IT security techniques. Since 1990, the subcommittee has extended or altered its scope and working groups to meet the current standardization demands. ISO/IEC JTC 1/SC 27, which started with three working groups, eventually expanded its structure to contain five. The two new working groups were added in April 2006, at the 17th Plenary Meeting in Madrid, Spain. Scope The scope of ISO/IEC JTC 1/SC 27 is "The development of standards for the protection of information and ICT. This includes generic methods, techniques and guidelines to address both security and privacy aspects, such as: Security requirements capture methodology; Management of information and ICT security; in particular information security management systems, security processes, security controls and services; Cryptographic and other security mechanisms, including but not limited to mechanisms for protecting the accountability, availability, integrity and confidentiality of information; Security management support documentation including terminology, guidelines as well as procedures for the registration of security components; Security aspects of identity management, biometrics and privacy; Conformance assessment, accreditation and auditing requirements in the area of information security management systems; Security evaluation criteria and methodology. SC 27 engages in active liaison and collaboration with appropriate bodies to ensure the proper development and application of SC 27 standards and technical reports in relevant areas." Structure ISO/IEC JTC 1/SC 27 is made up of five working groups (WG), each of which is responsible for the technical development of information and IT security standards within the programme of work of ISO/IEC JTC 1/SC 27. 
In addition, ISO/IEC JTC 1/SC 27 has two special working groups (SWG): (i) SWG-M, which operates under the direction of ISO/IEC JTC 1/SC 27 with the primary task of reviewing and evaluating the organizational effectiveness of ISO/IEC JTC 1/SC 27 processes and mode of operations; and (ii) SWG-T, which operates under the direction of ISO/IEC JTC 1/SC 27 to address topics beyond the scope of the respective existing WGs or that can affect directly or indirectly multiple WGs. ISO/IEC JTC 1/SC 27 also has a Communications Officer whose role is to promote the work of ISO/IEC JTC 1/SC 27 through different channels: press releases and articles, conferences and workshops, interactive ISO chat forums and other media channels. The focus of each working group is described in the group's terms of reference. Working groups of ISO/IEC JTC 1/SC 27 are: Collaborations ISO/IEC JTC 1/SC 27 works in close collaboration with a number of other organizations or subcommittees, both internal and external to ISO or IEC, in order to avoid conflicting or duplicative work. Organizations internal to ISO or IEC that collaborate with or are in liaison to ISO/IEC JTC 1/SC 27 include: ISO/IEC JTC 1/SWG 6, Management ISO/IEC JTC 1/WG 7, Sensor networks ISO/IEC JTC 1/WG 9, Big Data ISO/IEC JTC 1/WG 10, Internet of Things (IoT) ISO/IEC JTC 1/SC 6, Telecommunications and information exchange between systems ISO/IEC JTC 1/SC 7, Software and systems engineering ISO/IEC JTC 1/SC 17, Cards and personal identification ISO/IEC JTC 1/SC 22, Programming languages, their environments and system software interfaces ISO/IEC JTC 1/SC 25, Interconnection of information technology equipment ISO/IEC JTC 1/SC 31, Automatic identification and data capture techniques ISO/IEC JTC 1/SC 36, Information technology for learning, education and training ISO/IEC JTC 1/SC 37, Biometrics ISO/IEC JTC 1/SC 38, Cloud computing and distributed platforms ISO/IEC JTC 1/SC 40, IT Service Management and IT Governance ISO/TC 8, Ships and marine technology ISO/TC 46, Information and documentation ISO/TC 46/SC 11, Archives/records management ISO/TC 68, Financial services ISO/TC 68/SC 2, Financial Services, security ISO/TC 68/SC 7, Core banking ISO/TC 171, Document management applications ISO/TC 176, Quality management and quality assurance ISO/TC 176/SC 3, Supporting technologies ISO/TC 204, Intelligent transport systems ISO/TC 215, Health informatics ISO/TC 251, Asset management ISO/TC 259, Outsourcing ISO/TC 262, Risk management ISO/TC 272, Forensic sciences ISO/TC 292, Security and resilience ISO/CASCO, Committee on Conformity Assessments ISO/TMB/JTCG, Joint technical Coordination Group on MSS ISO/TMB/SAG EE 1, Strategic Advisory Group on Energy Efficiency IEC/SC 45A, Instrumentation, control and electrical systems of nuclear facilities IEC/TC 57, Power systems management and associated information exchange IEC/TC 65, Industrial-process measurement, control and automation IEC Advisory Committee on Information security and data privacy (ACSEC) Some organizations external to ISO or IEC that collaborate with or are in liaison to ISO/IEC JTC 1/SC 27 include: Attribute-based Credentials for Trust (ABC4Trust) Article 29 Data Protection Working Party Common Criteria Development Board (CCDB) Consortium of Digital Forensic Specialists (CDFS) CEN/TC 377 CEN/PC 428 e-Competence and ICT professionalism Cloud Security Alliance (CSA) Cloud Standards Customer Council (CSCC) Common Study Center of Telediffusion and Telecommunication (CCETT) The Cyber Security Naming & 
Information Structure Groups (Cyber Security) Ecma International European Committee for Banking Standards (ECBS) European Network and Information Security Agency (ENISA) European Payments Council (EPC) European Telecommunications Standards Institute (ETSI) European Data Centre Association (EUDCA) Eurocloud Future of Identity in the Information Society (FIDIS) Forum of Incident Response and Security Teams (FIRST) Information Security Forum (ISF) Latinoamerican Institute for Quality Assurance (INLAC) Institute of Electrical and Electronics Engineers (IEEE) International Conference of Data Protection and Privacy Commissioners International Information Systems Security Certification Consortium ((ISC)2) International Smart Card Certification Initiatives (ISCI) The International Society of Automation (ISA) INTERPOL ISACA International Standardized Commercial Identifier (ISCI) Information Security Forum (ISF) ITU-T Kantara Initiative MasterCard PReparing Industry to Privacy-by-design by supporting its Application in REsearch (PRIPARE) Technology-supported Risk Estimation by Predictive Assessment of Socio-technical Security (TREsPASS) Privacy and Identity Management for Community Services (PICOS) Privacy-Preserving Computation in the Cloud (PRACTICE) The Open Group The OpenID Foundation (OIDF) TeleManagement Forum (TMForum) Trusted Computing Group (TCG) Visa Member countries Countries pay a fee to ISO to be members of subcommittees. The 51 "P" (participating) members of ISO/IEC JTC 1/SC 27 are: Algeria, Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China, Cyprus, Czech Republic, Côte d'Ivoire, Denmark, Finland, France, Germany, India, Ireland, Israel, Italy, Jamaica, Japan, Kazakhstan, Kenya, Republic of Korea, Luxembourg, Malaysia, Mauritius, Mexico, Netherlands, New Zealand, Norway, Peru, Poland, Romania, Russian Federation, Rwanda, Singapore, Slovakia, South Africa, Spain, Sri Lanka, Sweden, Switzerland, Thailand, the Republic of Macedonia, Ukraine, United Arab Emirates, United Kingdom, United States of America, and Uruguay. The 20 "O" (observing) members of ISO/IEC JTC 1/SC 27 are: Belarus, Bosnia and Herzegovina, Costa Rica, El Salvador, Estonia, Ghana, Hong Kong, Hungary, Iceland, Indonesia, Islamic Republic of Iran, Lithuania, Morocco, State of Palestine, Portugal, Saudi Arabia, Serbia, Slovenia, Swaziland, and Turkey. As of August 2014, the spread of meeting locations since Spring 1990 has been as shown below: Published standards ISO/IEC JTC 1/SC 27 currently has 147 published standards within the field of IT security techniques, including: See also ISO/IEC JTC 1 List of ISO standards Deutsches Institut für Normung International Organization for Standardization International Electrotechnical Commission References External links ISO/IEC JTC 1/SC 27 home page ISO/IEC JTC 1/SC 27 page at ISO ISO/IEC Joint Technical Committee 1 - Information Technology (public website) ISO/IEC Joint Technical Committee 1 (Livelink password-protected available documents) ISO/IEC Joint Technical Committee 1 (freely available documents), JTC 1 Supplement, Standing Documents and Templates ISO and IEC procedural documentation ISO DB Patents (including JTC 1 patents) ITU-T Study Group 17 (SG17) ISO International Organization for Standardization IEC International Electrotechnical Commission Access to ISO/IEC JTC 1/SC 27 Freely Available Standards 027 Identity management initiative Information assurance standards
ISO/IEC JTC 1/SC 27
Technology
2,214
18,504,221
https://en.wikipedia.org/wiki/Compression%20of%20morbidity
The compression of morbidity in public health is a hypothesis put forth by James Fries, professor of medicine at Stanford University School of Medicine. The hypothesis was supported by a 1998 study of 1700 University of Pennsylvania alumni over a period of 20 years. Fries' hypothesis is that the burden of lifetime illness may be compressed into a shorter period before the time of death, if the age of onset of the first chronic infirmity can be postponed. This hypothesis contrasts to the view that as the age of countries' populations tends to increase over time, they will become increasingly infirm and consume an ever-larger proportion of the national budget in healthcare costs. Fries posited that if the hypothesis is confirmed, healthcare costs and patient health overall will be improved. In order to confirm this hypothesis, the evidence must show that it is possible to delay the onset of infirmity, and that corresponding increases in longevity will at least be modest. The evidence is at best mixed. Vincent Mor's "The Compression of Morbidity Hypothesis: A Review of Research and Prospects for the Future" argues that "Cross-national evidence for the validity of the compression of morbidity hypothesis originally proposed by Fries is generally accepted. Generational improvements in education and the increased availability of adaptive technologies and even medical treatments that enhance quality of life have facilitated continued independence of older persons in the industrialized world. Whether this trend continues may depend upon the effect of the obesity epidemic on the next generation of older people." See also "Mortality and Morbidity Trends: Is There Compression of Morbidity?" for recent evidence against the hypothesis. There may also be age versus cohort effects. See also Successful aging References Further reading Gretchen Reynolds (8 February 2017) "Lessons on Aging Well from a 105-Year-Old Cyclist" The New York Times accessdate=2017-02-14 Public health Hypotheses Medical aspects of death Senescence
Compression of morbidity
Chemistry,Biology
388
34,218,270
https://en.wikipedia.org/wiki/Adam%20S.%20Veige
Adam S. Veige is a professor of Chemistry at the University of Florida. His research focuses on catalysis and the usage of inorganic compounds, including tungsten and chromium complexes. Education Veige received a Ph.D. degree in chemistry from Cornell University in 2003 under the direction of Peter T. Wolczanski. He pursued postdoctoral research under the direction of Daniel G. Nocera at Massachusetts Institute of Technology. Career Veige joined the faculty of the University of Florida as an assistant professor of chemistry (inorganic chemistry) in 2004. In 2010, Veige received the Alfred P. Sloan fellowship award, the only researcher to be so honored in Florida in 2010. He was promoted to an associate professor in 2011. He is currently the director of the Center for Catalysis in the Department of Chemistry at the University of Florida. His research focuses on the design, synthesis, isolation, and characterization of novel inorganic molecules for application in the production of fertilizers, polymers, and pharmaceuticals. His research has included the preparation of chiral catalysts, synthesis of nitriles via N-atom transfer to acid chlorides, chromium catalyzed aerobic oxidation, an alkene isomerization catalyst, a highly active alkene polymerization catalyst, and a highly active alkyne polymerization catalyst. Awards Camille and Henry Dreyfus New Faculty Award (2004) National Science Foundation (NSF) Career Award (2008) Alfred P. Sloan Fellowship Award (2010) Heaton Family Faculty Award (2011) Selected publications References External links Veige at the University of Florida Chemistry Department Website University of Florida Chemistry Department Website Oboro Labs Cyclic Polymer and Catalyst Technologies Website Living people 21st-century American chemists Cornell University alumni Year of birth missing (living people) University of Florida faculty Inorganic chemists
Adam S. Veige
Chemistry
370
1,242,991
https://en.wikipedia.org/wiki/Clay%E2%80%93water%20interaction
Clay-water interaction is an all-inclusive term to describe various progressive interactions between clay minerals and water. In the dry state, clay packets exist in face-to-face stacks like a deck of playing cards, but clay packets begin to change when exposed to water. Five descriptive terms describe the progressive interactions in a clay-water system. (1) Hydration occurs as clay packets absorb water and swell. (2) Dispersion (or disaggregation) causes clay platelets to break apart and disperse into the water due to the loss of attractive forces as water moves the platelets further apart. (3) Flocculation begins when mechanical shearing stops, and platelets previously dispersed come together due to the attractive force of surface charges on the platelets. (4) Deflocculation, or peptization, the opposite effect, occurs when a chemical de-flocculant is added to flocculated mud; the positive edge charges are covered, and attraction forces are greatly reduced. (5) Aggregation, a result of ionic or thermal conditions, alters the hydrational layer around clay platelets, removes the deflocculant from positive edge charges, and allows platelets to assume a face-to-face structure. See also Dispersity Quick clay behaviour References Water Clay Colloids Colloidal chemistry
Clay–water interaction
Physics,Chemistry,Materials_science,Environmental_science
279
18,543,884
https://en.wikipedia.org/wiki/V598%20Puppis
V598 Puppis is the name given to a nova in the Milky Way Galaxy. The star, catalogued as USNO-A2.0 0450-03360039, was discovered to be much brighter than normal in X-ray emissions on October 9, 2007, by the European Space Agency's XMM-Newton telescope. The star was confirmed to be over 10 magnitudes, or 10,000 times, brighter than normal by the Magellan-Clay telescope at Las Campanas Observatory in Chile. Pre-discovery images and identification of the progenitor would ultimately show that the nova brightened from visual magnitude 16.6 to brighter than magnitude 4. The nova has been officially given the variable star designation V598 Puppis and is one of the brightest novae of the preceding decade. Despite its brightness, the nova was apparently missed by amateur and professional astronomers alike until XMM-Newton spotted the unusual X-ray source while turning from one target to another. The All Sky Automated Survey determined that the nova had occurred between June 2 and 5, 2007, peaking in brightness on June 5. The orbital period of the two stars in V598 Puppis is 0.1628714 days, or 3 hours, 54 minutes, and 32 seconds. References External links The Exploding Star That Everyone Missed XMM-Newton discovers the star that everyone missed Novae Puppis Puppis, V598 20071009 J07054250-3814394
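Two of the figures above are easy to verify by hand: the conversion of the 0.1628714-day orbital period into hours, minutes and seconds, and the rule that a brightening of 10 magnitudes corresponds to a factor of roughly 10,000 in flux (each magnitude is a factor of 100^(1/5) ≈ 2.512). The short sketch below simply reproduces that arithmetic; the ~12.6-magnitude value is the peak rise implied by the 16.6-to-brighter-than-4 range quoted above.

```python
# Check the arithmetic quoted above for V598 Puppis.

# 1) Orbital period: 0.1628714 days expressed in h:m:s.
period_days = 0.1628714
total_seconds = period_days * 86400.0
hours, rem = divmod(total_seconds, 3600)
minutes, seconds = divmod(rem, 60)
print(f"Period = {int(hours)} h {int(minutes)} m {seconds:.0f} s")  # 3 h 54 m 32 s

# 2) Brightness ratio for a 10-magnitude outburst (and for the 16.6 -> 4 rise).
def flux_ratio(delta_mag: float) -> float:
    return 10 ** (delta_mag / 2.5)

print(f"10.0 mag -> {flux_ratio(10.0):,.0f}x brighter")         # 10,000x
print(f"12.6 mag -> {flux_ratio(16.6 - 4.0):,.0f}x brighter")   # ~110,000x
```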
V598 Puppis
Astronomy
306
22,908,095
https://en.wikipedia.org/wiki/SOPHIE%20%C3%A9chelle%20spectrograph
The SOPHIE (Spectrographe pour l’Observation des Phénomènes des Intérieurs stellaires et des Exoplanètes, literally meaning "spectrograph for the observation of the phenomena of the stellar interiors and of the exoplanets") échelle spectrograph is a high-resolution echelle spectrograph installed on the 1.93m reflector telescope at the Haute-Provence Observatory located in south-eastern France. The instrument is used for asteroseismology and extrasolar planet detection by the radial velocity method. It builds upon and replaces the older ELODIE spectrograph. This instrument was made available for use by the general astronomical community in October 2006. Characteristics The electromagnetic spectrum wavelength range is from 387.2 to 694.3 nanometers. The spectrograph is fed from the Cassegrain focus through either one of two separate optical fiber sets, yielding two different spectral resolutions (HE and HR modes). The instrument is entirely computer-controlled. A standard data reduction pipeline automatically processes the data upon every CCD readout cycle. HR mode is the high-resolution mode. It incorporates a 40 micrometre exit slit to achieve a high spectral resolution of R = 75000. HE mode is the high-efficiency mode. It is used when higher throughput is desired, particularly for faint objects; in this mode the spectral resolution is set to R = 40000. The R2 échelle diffraction grating has 52.65 grooves per millimeter and was manufactured by Richardson Gratings. It is blazed at 65° and its size is 20.4 cm x 40.8 cm. It is mounted in a fixed configuration. The spectrum is projected onto the E2V Technologies type 44-82 CCD detector of 4096 x 2048 pixels kept at a constant temperature of –100 °C. This grating yields 41 spectral orders, of which 39 are currently extracted, to obtain wavelengths between 387.2 nm and 694.3 nm. Performance In HE mode, a signal-to-noise ratio (per pixel) of 27 was reached in 90 min for an object of magnitude 14.5 in the V band. The stability of the instrument can be characterized by the lowest radial-velocity dispersion achievable, in m/s. In HR mode the short-term stability has been measured to be 1.3 m/s, while it is 2 m/s for longer timescales. See also CORALIE spectrograph HARPS spectrograph References External links SOPHIE Home Page Spectrographs Astronomical instruments Exoplanet search projects
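A convenient way to read the quoted resolutions is to convert them into velocity units, since a spectral resolution R corresponds to a resolution element of roughly c/R in velocity. The measured m/s-level stability therefore implies that line positions are tracked to a tiny fraction of a resolution element (by averaging over many spectral lines). The sketch below only illustrates this standard relation; it is not part of the SOPHIE data-reduction pipeline.

```python
# Convert SOPHIE's quoted spectral resolutions into velocity resolution
# elements, delta_v ~ c / R.  Illustrative only; not the instrument pipeline.
C_KM_S = 299_792.458  # speed of light in km/s

for mode, R in (("HR", 75_000), ("HE", 40_000)):
    dv_km_s = C_KM_S / R
    print(f"{mode} mode: R = {R:,} -> resolution element ~ {dv_km_s:.1f} km/s")

# Compare with the measured short-term stability of ~1.3 m/s in HR mode:
print(f"1.3 m/s is ~1/{(C_KM_S / 75_000) * 1000 / 1.3:,.0f} of an HR resolution element")
```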
SOPHIE échelle spectrograph
Physics,Chemistry,Astronomy
537
38,857,194
https://en.wikipedia.org/wiki/CEB%20VER
CEB VER is a quality standard for the voluntary carbon offset industry created by the Commodity Exchange Bratislava. Based on the Kyoto Protocol's Clean Development Mechanism, CEB VER establishes criteria for validating, measuring, and monitoring carbon offset projects, with the option to trade the resulting carbon credits or surrender them on behalf of individuals, organizations or companies that want to be carbon neutral. Methodologies A methodology specifies the exact calculation of how many carbon credits can be issued to a project developer. These carbon credits can be traded at Carbon Place. The first methodology issued under the CEB VER standard was CEB VER Solar. References External links CEB Website VER Registry The VER+ Standard profile on database of Market Governance Mechanisms Certification marks Carbon finance
CEB VER
Mathematics
143
573,846
https://en.wikipedia.org/wiki/Manne%20Siegbahn
Karl Manne Georg Siegbahn (; 3 December 1886 – 26 September 1978) was a Swedish physicist who received the Nobel Prize in Physics in 1924 "for his discoveries and research in the field of X-ray spectroscopy". Biography Siegbahn was born in Örebro, Sweden, the son of Georg Siegbahn and his wife, Emma Zetterberg. He graduated in Stockholm 1906 and began his studies at Lund University in the same year. During his education he was secretarial assistant to Johannes Rydberg. In 1908 he studied at the University of Göttingen. He obtained his doctorate (PhD) at the Lund University in 1911, his thesis was titled Magnetische Feldmessungen (magnetic field measurements). He became acting professor for Rydberg when his (Rydberg's) health was failing, and succeeded him as full professor in 1920. However, in 1922 he left Lund for a professorship at Uppsala University. In 1937, Siegbahn was appointed Director of the Physics Department of the Nobel Institute of the Royal Swedish Academy of Sciences. In 1988 this was renamed the Manne Siegbahn Institute (MSI). The institute research groups have been reorganized since, but the name lives on in the Manne Siegbahn Laboratory hosted by Stockholm University. X-ray spectroscopy Manne Siegbahn began his studies of X-ray spectroscopy in 1914. Initially he used the same type of spectrometer as Henry Moseley had done for finding the relationship between the wavelength of some elements and their place at the periodic system. Shortly thereafter he developed improved experimental apparatus which allowed him to make very accurate measurements of the X-ray wavelengths produced by atoms of different elements. Also, he found that several of the spectral lines that Moseley had discovered consisted of more components. By studying these components and improving the spectrometer, Siegbahn got an almost complete understanding of the electron shell. He developed a convention for naming the different spectral lines that are characteristic to elements in X-ray spectroscopy, the Siegbahn notation. Siegbahn's precision measurements drove many developments in quantum theory and atomic physics. Awards and honours Siegbahn was awarded the Nobel Prize in Physics in 1924. He won the Hughes Medal 1934 and Rumford Medal 1940. In 1944, he patented the Siegbahn pump. Siegbahn was elected a Foreign Member of the Royal Society in 1954. There is a street, Route Siegbahn, named after Siegbahn at CERN, on the Prévessin site in France. Personal life Siegbahn married Karin Högbom in 1914. They had two children: Bo Siegbahn (1915–2008), a diplomat and politician, and Kai Siegbahn (1918–2007), a physicist who received the Nobel Prize in Physics in 1981 for his contribution to the development of X-ray photoelectron spectroscopy. 
Awards and decorations Commander Grand Cross of the Order of the Polar Star (6 June 1947) Nobel Prize in Physics (1924) Hughes Medal (1934) Rumford Medal (1940) Works The Spectroscopy of X-Rays (1925) References External links including the Nobel Lecture, December 11, 1925 The X-ray Spectra and the Structure of the Atoms 1886 births 1978 deaths 20th-century Swedish physicists People from Örebro Experimental physicists Lund University alumni Nobel laureates in Physics Swedish Nobel laureates Academic staff of Uppsala University Members of the French Academy of Sciences Foreign members of the Royal Society Foreign members of the USSR Academy of Sciences Spectroscopists Amanuenses Commanders Grand Cross of the Order of the Polar Star Presidents of the International Union of Pure and Applied Physics Members of the Royal Society of Sciences in Uppsala
Manne Siegbahn
Physics,Chemistry
746
10,533,603
https://en.wikipedia.org/wiki/Atkinson%27s%20theorem
In operator theory, Atkinson's theorem (named for Frederick Valentine Atkinson) gives a characterization of Fredholm operators. The theorem Let H be a Hilbert space and L(H) the set of bounded operators on H. The following is the classical definition of a Fredholm operator: an operator T ∈ L(H) is said to be a Fredholm operator if the kernel Ker(T) is finite-dimensional, Ker(T*) is finite-dimensional (where T* denotes the adjoint of T), and the range Ran(T) is closed. Atkinson's theorem states: An operator T ∈ L(H) is a Fredholm operator if and only if T is invertible modulo a compact perturbation, i.e. TS = I + C1 and ST = I + C2 for some bounded operator S and compact operators C1 and C2. In other words, an operator T ∈ L(H) is Fredholm, in the classical sense, if and only if its projection in the Calkin algebra is invertible. Sketch of proof The outline of a proof is as follows. For the ⇒ implication, express H as the orthogonal direct sum H = Ker(T)⊥ ⊕ Ker(T). The restriction T : Ker(T)⊥ → Ran(T) is a bijection, and therefore invertible by the open mapping theorem. Extend this inverse by 0 on Ran(T)⊥ = Ker(T*) to an operator S defined on all of H. Then I − TS is the finite-rank projection onto Ker(T*), and I − ST is the projection onto Ker(T). This proves the only if part of the theorem. For the converse, suppose now that ST = I + C2 for some compact operator C2. If x ∈ Ker(T), then STx = x + C2x = 0. So Ker(T) is contained in an eigenspace of C2, which is finite-dimensional (see spectral theory of compact operators). Therefore, Ker(T) is also finite-dimensional. The same argument shows that Ker(T*) is also finite-dimensional. To prove that Ran(T) is closed, we make use of the approximation property: let F be a finite-rank operator such that ||F − C2|| < r for some r < 1. Then for every x in Ker(F), ||S||⋅||Tx|| ≥ ||STx|| = ||x + C2x|| = ||x + Fx + C2x − Fx|| ≥ ||x|| − ||C2 − F||⋅||x|| ≥ (1 − r)||x||. Thus T is bounded below on Ker(F), which implies that T(Ker(F)) is closed. On the other hand, T(Ker(F)⊥) is finite-dimensional, since Ker(F)⊥ = Ran(F*) is finite-dimensional. Therefore, Ran(T) = T(Ker(F)) + T(Ker(F)⊥) is closed, and this proves the theorem. A more complete treatment of Atkinson's theorem is in the reference by Arveson: it shows that if B is a Banach space, an operator is Fredholm iff it is invertible modulo a finite-rank operator (and that the latter is equivalent to being invertible modulo a compact operator, which is significant in view of Enflo's example of a separable, reflexive Banach space with compact operators that are not norm-limits of finite-rank operators). For Banach spaces, a Fredholm operator is one with finite-dimensional kernel and range of finite codimension (equivalent to the kernel of its adjoint being finite-dimensional). Note that the hypothesis that Ran(T) is closed is redundant, since a space of finite codimension that is also the range of a bounded operator is always closed (see the Arveson reference below); this is a consequence of the open mapping theorem (and is not true if the space is not the range of a bounded operator, for example the kernel of a discontinuous linear functional). References Arveson, William B., A Short Course on Spectral Theory, Springer Graduate Texts in Mathematics, vol. 209, 2002. Fredholm theory Theorems in functional analysis
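The statement and the key lower bound from the proof sketch above can also be written compactly. The LaTeX below is a minimal transcription of the prose, assuming the amsmath package; the notation K(H) for the compact operators is a choice made here for the rendering, not taken from the article.

```latex
% Atkinson's theorem: T is Fredholm iff it is invertible modulo compacts.
\[
  T \in L(H) \text{ is Fredholm}
  \iff
  \exists\, S \in L(H),\; C_1, C_2 \in \mathcal{K}(H):\quad
  TS = I + C_1, \qquad ST = I + C_2 .
\]
% Key estimate in the converse direction: with ST = I + C_2, a finite-rank F
% chosen so that \|F - C_2\| < r < 1, and x \in \ker F,
\[
  \|S\|\,\|Tx\| \;\ge\; \|STx\| \;=\; \|x + C_2 x\|
  \;\ge\; \|x\| - \|C_2 - F\|\,\|x\| \;\ge\; (1 - r)\,\|x\| ,
\]
% so T is bounded below on \ker F and T(\ker F) is closed.
```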
Atkinson's theorem
Mathematics
919
9,625,980
https://en.wikipedia.org/wiki/Phoebe%20%28daughter%20of%20Leucippus%29
In Greek mythology, Phoebe ( ; , associated with phoîbos, "shining") was a Messenian princess. Family Phoebe was the daughter of Leucippus and Philodice, daughter of Inachus. She and her sister Hilaera are commonly referred to as Leucippides (that is, "daughters of Leucippus"). In another account, they were the daughters of Apollo. Phoebe married Pollux and bore him a son, named either Mnesileos or Mnasinous. Mythology Phoebe and Hilaera were priestesses of Athena and Artemis, and betrothed to Idas and Lynceus, the sons of Aphareus. Castor and Pollux were charmed by their beauty and carried them off. When Idas and Lynceus tried to rescue their brides-to-be, both were slain, though Castor himself also fell. Pollux persuaded Zeus to allow him to share his immortality with his brother. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Publius Ovidius Naso, Fasti translated by James G. Frazer. Online version at the Topos Text Project. Publius Ovidius Naso, Fasti. Sir James George Frazer. London; Cambridge, MA. William Heinemann Ltd.; Harvard University Press. 1933. Latin text available at the Perseus Digital Library. Sextus Propertius, Elegies from Charm. Vincent Katz. trans. Los Angeles. Sun & Moon Press. 1995. Online version at the Perseus Digital Library. Latin text available at the same website. Theocritus, Idylls from The Greek Bucolic Poets translated by Edmonds, J M. Loeb Classical Library Volume 28. Cambridge, MA. Harvard University Press. 1912. Online version at theoi.com Theocritus, Idylls edited by R. J. Cholmeley, M.A. London. George Bell & Sons. 1901. Greek text available at the Perseus Digital Library. Princesses in Greek mythology Children of Apollo Mythological rape victims Mythological Messenians Castor and Pollux Greek mythological priestesses
Phoebe (daughter of Leucippus)
Astronomy
671
3,133,314
https://en.wikipedia.org/wiki/Steve%20Ciarcia
Steve Ciarcia is an American embedded control systems engineer. He became popular through his Ciarcia's Circuit Cellar column in BYTE magazine, and later through the Circuit Cellar magazine that he published. He is also the author of Build Your Own Z80 Computer, published in 1981, and Take My Computer...Please!, published in 1978. He has also compiled seven volumes of his hardware project articles that appeared in BYTE magazine. In 1982 and 1983, he published a series of articles on building the MPX-16, a 16-bit single-board computer that was hardware-compatible with the IBM PC. In December 2009, Steve Ciarcia announced a strategic cooperation for the American market between Elektor and his Circuit Cellar magazine. In November 2012, Steve Ciarcia announced that he was quitting Circuit Cellar and that Elektor would take it over. In October 2014, Ciarcia purchased Circuit Cellar, audioXpress, Voice Coil, Loudspeaker Industry Sourcebook, and their respective websites, newsletters, and products from Netherlands-based Elektor International Media. The aforementioned magazines continued to be published by Ciarcia's US-based team. In July 2016, Steve Ciarcia sold the company to long-time employee KC Prescott, operating under the company name KCK Media Corp. References External links Circuit Cellar magazine Index on Steve Ciarcia's articles in BYTE American magazine editors American technology writers Control theorists Living people Year of birth missing (living people)
Steve Ciarcia
Engineering
308
209,103
https://en.wikipedia.org/wiki/List%20of%20numbers
This is a list of notable numbers and articles about notable numbers. The list does not contain all numbers in existence as most of the number sets are infinite. Numbers may be included in the list based on their mathematical, historical or cultural notability, but all numbers have qualities that could arguably make them notable. Even the smallest "uninteresting" number is paradoxically interesting for that very property. This is known as the interesting number paradox. The definition of what is classed as a number is rather diffuse and based on historical distinctions. For example, the pair of numbers (3,4) is commonly regarded as a number when it is in the form of a complex number (3+4i), but not when it is in the form of a vector (3,4). This list is also organized according to the standard convention for types of numbers. This list focuses on numbers as mathematical objects and is not a list of numerals, which are linguistic devices: nouns, adjectives, or adverbs that designate numbers. The distinction is drawn between the number five (an abstract object equal to 2+3), and the numeral five (the noun referring to the number). Natural numbers Natural numbers are a subset of the integers and are of historical and pedagogical value as they can be used for counting and often have ethno-cultural significance (see below). Beyond this, natural numbers are widely used as a building block for other number systems including the integers, rational numbers and real numbers. Natural numbers are those used for counting (as in "there are six (6) coins on the table") and ordering (as in "this is the third (3rd) largest city in the country"). In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers". Defined by the Peano axioms, the natural numbers form an infinitely large set. Often referred to as "the naturals", the natural numbers are usually symbolised by a boldface N (or blackboard bold ℕ, Unicode U+2115). The inclusion of 0 in the set of natural numbers is ambiguous and subject to individual definitions. In set theory and computer science, 0 is typically considered a natural number. In number theory, it usually is not. The ambiguity can be solved with the terms "non-negative integers", which includes 0, and "positive integers", which does not. Natural numbers may be used as cardinal numbers, which may go by various names. Natural numbers may also be used as ordinal numbers. Mathematical significance Natural numbers may have properties specific to the individual number or may be part of a set (such as prime numbers) of numbers with a particular property. Cultural or practical significance Along with their mathematical properties, many integers have cultural significance or are also notable for their use in computing and measurement. As mathematical properties (such as divisibility) can confer practical utility, there may be interplay and connections between the cultural or practical significance of an integer and its mathematical properties. Classes of natural numbers Subsets of the natural numbers, such as the prime numbers, may be grouped into sets, for instance based on the divisibility of their members. Infinitely many such sets are possible. A list of notable classes of natural numbers may be found at classes of natural numbers. Prime numbers A prime number is a positive integer which has exactly two divisors: 1 and itself.
The first 100 prime numbers are: Highly composite numbers A highly composite number (HCN) is a positive integer with more divisors than any smaller positive integer. They are often used in geometry, grouping and time measurement. The first 20 highly composite numbers are: 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, 180, 240, 360, 720, 840, 1260, 1680, 2520, 5040, 7560 Perfect numbers A perfect number is an integer that is the sum of its positive proper divisors (all divisors except itself). The first 10 perfect numbers: Integers The integers are a set of numbers commonly encountered in arithmetic and number theory. There are many subsets of the integers, including the natural numbers, prime numbers, perfect numbers, etc. Many integers are notable for their mathematical properties. Integers are usually symbolised by a boldface Z (or blackboard bold ℤ, Unicode U+2124); this became the symbol for the integers based on the German word for "numbers" (Zahlen). Notable integers include −1, the additive inverse of unity, and 0, the additive identity. As with the natural numbers, the integers may also have cultural or practical significance. For instance, −40 is the equal point in the Fahrenheit and Celsius scales. SI prefixes One important use of integers is in orders of magnitude. A power of 10 is a number 10^k, where k is an integer. For instance, with k = 0, 1, 2, 3, ..., the appropriate powers of ten are 1, 10, 100, 1000, ... Powers of ten can also be fractional: for instance, k = −3 gives 1/1000, or 0.001. This is used in scientific notation, in which real numbers are written in the form m × 10^n. The number 394,000 is written in this form as 3.94 × 10^5. Integers are used as prefixes in the SI system. A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. Each prefix has a unique symbol that is prepended to the unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams. The prefix milli-, likewise, may be added to metre to indicate division by one thousand; one millimetre is equal to one thousandth of a metre. Rational numbers A rational number is any number that can be expressed as the quotient or fraction of two integers, a numerator p and a non-zero denominator q. Since q may be equal to 1, every integer is trivially a rational number. The set of all rational numbers, often referred to as "the rationals", the field of rationals or the field of rational numbers is usually denoted by a boldface Q (or blackboard bold ℚ, Unicode U+211A); it was thus denoted in 1895 by Giuseppe Peano after quoziente, Italian for "quotient". Rational numbers such as 0.12 can be represented in infinitely many ways, e.g. zero-point-one-two (0.12), three twenty-fifths (3/25), nine seventy-fifths (9/75), etc. This can be mitigated by representing rational numbers in a canonical form as an irreducible fraction. A list of rational numbers is shown below. The names of fractions can be found at numeral (linguistics). Real numbers Real numbers are least upper bounds of sets of rational numbers that are bounded above, or greatest lower bounds of sets of rational numbers that are bounded below, or limits of convergent sequences of rational numbers. Real numbers that are not rational numbers are called irrational numbers. The real numbers are categorised as algebraic numbers (which are roots of a polynomial with rational coefficients) or transcendental numbers, which are not; all rational numbers are algebraic.
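The three classes defined above (prime, highly composite, and perfect numbers) are all specified by simple divisor-counting conditions, so a short sketch can make the definitions concrete. The Python code below is illustrative only; the function names and the small search bounds are choices made for this example, not part of the article.

```python
# Illustrative divisor-based definitions from the sections above.

def divisors(n: int) -> list[int]:
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n: int) -> bool:
    """Prime: exactly two divisors, 1 and n itself."""
    return len(divisors(n)) == 2

def is_highly_composite(n: int) -> bool:
    """Highly composite: more divisors than every smaller positive integer."""
    d_n = len(divisors(n))
    return all(len(divisors(m)) < d_n for m in range(1, n))

def is_perfect(n: int) -> bool:
    """Perfect: equal to the sum of its proper divisors."""
    return n > 1 and sum(divisors(n)[:-1]) == n

if __name__ == "__main__":
    print([n for n in range(2, 50) if is_prime(n)])              # 2, 3, 5, 7, ...
    print([n for n in range(1, 200) if is_highly_composite(n)])  # 1, 2, 4, 6, 12, ...
    print([n for n in range(2, 500) if is_perfect(n)])           # 6, 28, 496 (8128 with a larger bound)
```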
Algebraic numbers Transcendental numbers Irrational but not known to be transcendental Some numbers are known to be irrational numbers, but have not been proven to be transcendental. This differs from the algebraic numbers, which are known not to be transcendental. Real but not known to be irrational, nor transcendental For some numbers, it is not known whether they are algebraic or transcendental. The following list includes real numbers that have not been proved to be irrational, nor transcendental. Numbers not known with high precision Some real numbers, including transcendental numbers, are not known with high precision. The constant in the Berry–Esseen Theorem: 0.4097 < C < 0.4748 De Bruijn–Newman constant: 0 ≤ Λ ≤ 0.2 Chaitin's constants Ω, which are transcendental and provably impossible to compute. Bloch's constant (also 2nd Landau's constant): 0.4332 < B < 0.4719 1st Landau's constant: 0.5 < L < 0.5433 3rd Landau's constant: 0.5 < A ≤ 0.7853 Grothendieck constant: 1.67 < k < 1.79 Romanov's constant in Romanov's theorem: 0.107648 < d < 0.49094093, Romanov conjectured that it is 0.434 Hypercomplex numbers Hypercomplex number is a term for an element of a unital algebra over the field of real numbers. The complex numbers are often symbolised by a boldface C (or blackboard bold ℂ, Unicode U+2102), while the set of quaternions is denoted by a boldface H (or blackboard bold ℍ, Unicode U+210D). Algebraic complex numbers Imaginary unit: i = √(−1) nth roots of unity: e^(2πik/n), with 0 ≤ k ≤ n − 1, primitive when GCD(k, n) = 1 Other hypercomplex numbers The quaternions The octonions The sedenions The trigintaduonions The dual numbers (with ε an infinitesimal, ε^2 = 0) Transfinite numbers Transfinite numbers are numbers that are "infinite" in the sense that they are larger than all finite numbers, yet not necessarily absolutely infinite. Aleph-null: ℵ0, the smallest infinite cardinal, and the cardinality of ℕ, the set of natural numbers Aleph-one: ℵ1, the cardinality of ω1, the set of all countable ordinal numbers Beth-one: ℶ1 or 2^ℵ0, the cardinality of the continuum Omega: ω, the smallest infinite ordinal Numbers representing physical quantities Physical quantities that appear in the universe are often described using physical constants. Avogadro constant: Electron mass: Fine-structure constant: Gravitational constant: Molar mass constant: Planck constant: Rydberg constant: Speed of light in vacuum: Vacuum electric permittivity: Numbers representing geographical and astronomical distances , the average equatorial radius of Earth in kilometers (following GRS 80 and WGS 84 standards). , the length of the Equator in kilometers (following GRS 80 and WGS 84 standards). , the semi-major axis of the orbit of the Moon, in kilometers, roughly the distance between the center of Earth and that of the Moon. , the average distance between the Earth and the Sun or Astronomical Unit (AU), in meters. , one light-year, the distance travelled by light in one Julian year, in meters. , the distance of one parsec, another astronomical unit, in whole meters. Numbers without specific values Many languages have words expressing indefinite and fictitious numbers—inexact terms of indefinite size, used for comic effect, for exaggeration, as placeholder names, or when precision is unnecessary or undesirable. One technical term for such words is "non-numerical vague quantifier". Such words designed to indicate large quantities can be called "indefinite hyperbolic numerals".
Named numbers Hardy–Ramanujan number, 1729 Kaprekar's constant, 6174 Eddington number, ~10^80 Googol, 10^100 Shannon number Centillion, 10^303 Skewes's number Googolplex, 10^(10^100) Mega/Circle(2) Moser's number Graham's number TREE(3) SSCG(3) Rayo's number Kanahiya's Constant, 2592 See also Absolute infinite English numerals Floating-point arithmetic Fraction Integer sequence Interesting number paradox Large numbers List of mathematical constants List of prime numbers List of types of numbers Mathematical constant Metric prefix Names of large numbers Names of small numbers Negative number Numeral (linguistics) Numeral prefix Order of magnitude Orders of magnitude (numbers) Ordinal number The Penguin Dictionary of Curious and Interesting Numbers Perfect numbers Power of two Power of 10 Surreal number Table of prime factors References Further reading Kingdom of Infinite Number: A Field Guide by Bryan Bunch, W.H. Freeman & Company, 2001. External links What's Special About This Number? A Zoology of Numbers: from 0 to 500 Name of a Number See how to write big numbers Robert P. Munafo's Large Numbers page Different notations for big numbers – by Susan Stepney Names for Large Numbers, in How Many? A Dictionary of Units of Measurement by Russ Rowlett What's Special About This Number? (from 0 to 9999) Mathematical tables
List of numbers
Mathematics
2,632
43,806,809
https://en.wikipedia.org/wiki/Esenin-Volpin%27s%20theorem
In mathematics, Esenin-Volpin's theorem states that the weight of an infinite compact dyadic space is the supremum of the weights of its points. It was introduced by . It was generalized by and . References General topology Theorems in topology
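Written symbolically, and using w(X) for the weight of the space and w(x, X) for the weight of the space at a point x (a notational choice made here for illustration, not taken from the article), the statement reads as in the LaTeX sketch below.

```latex
% Esenin-Volpin's theorem, restated symbolically (notation chosen here):
% for an infinite compact dyadic space X,
\[
  w(X) \;=\; \sup_{x \in X} w(x, X),
\]
% i.e. the weight of the whole space is the supremum of the weights at its points.
```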
Esenin-Volpin's theorem
Mathematics
55
17,042,372
https://en.wikipedia.org/wiki/Sony%20Ericsson%20W960
The Sony Ericsson W960i is a 3G phone that Sony Ericsson announced in June 2007, as an upgrade to the W950. Features The W960 is a successor to the W950, and belongs to the Walkman series of phones. Its features include 8 GB of integrated flash memory, UMTS (3G) and Wi-Fi connectivity and an autofocus 3.2 megapixel camera. The phone features a touchscreen and an integrated Walkman player. Specifications Camera 3.2 MP (up to 2048×1536), with autofocus and QVGA@15fps video recording Networks GSM 900/1800/1900 + UMTS 2100, GPRS Connectivity Bluetooth 2.0 + EDR, A2DP supported USB 2.0 Wi-Fi 802.11b (11 Mbit/s) Storage 8 GB, no memory card slot Dimensions 109 × 55 × 16 mm Operating System Symbian 9.1, UIQ 3.0 Display 2.6 inches QVGA (240×320 pixels) 262K colors touchscreen Hardware Philips Nexperia PNX4008 ARM 9 processor at 208 MHz 128 MB RAM 256 MB ROM Gallery References External links W960i UIQ 3 Phones Mobile phones introduced in 2007
Sony Ericsson W960
Technology
268
46,532,350
https://en.wikipedia.org/wiki/Information%20culture
Information culture is closely linked with information technology, information systems, and the digital world. It is difficult to give a single definition of information culture, and many approaches exist. Overview The literature regarding information culture focuses on the relationship between individuals and information in their work. Curry and Moore are most frequently cited in the information culture literature, and there is consensus that the values accorded to information, and attitudes towards it, are indicators of information culture (McMillan et al., 2012; Curry and Moore, 2003; Furness, 2010; Oliver, 2007; Davenport and Prusak, 1997; Widén-Wulff, 2000; Jarvenpaa and Staples, 2001). An information culture is a culture that is conducive to effective information management, where "the value and utility of information in achieving operational and strategic goals is recognized, where information forms the basis of organizational decision making, and information technology is readily exploited as an enabler for effective information systems". Information culture is a part of the whole organizational culture. It is only by understanding the organisation that progress can be made with information management activities. Ginman defines information culture as the culture in which the transformation of intellectual resources is maintained alongside the transformation of material resources. Information culture is the environment in which knowledge is produced through social intelligence, social interaction and work knowledge. Multinational organizations (MNOs) are characterized by their engagement in global markets and by the need to remain competitive in today's global marketplace. In many organizations, information culture is described as a form of information technology. As Davenport writes, many executives think they can solve all information problems by buying IT equipment. Information culture, however, is about effective information management in order to use information, not machines; information technology is just a part of information culture, in which it plays an interactive role. Information culture is the part of organizational culture in which evaluation of, and attitudes towards, information depend on the situation in which the organization works. In an organization everyone has different attitudes, but the information profile must be made explicit, so that the importance of information is realized by executives. Information culture is also about formal information systems (technology), common knowledge, individual information systems (attitudes), and information ethics. Information culture does not include written or conscious behavior, nor what is only seemingly happening in the organization. Information culture is affected more by the organization's internal factors than by external factors; these come in the form of the information culture itself, attitudes and traditions. Information culture deals with information, information channels, attitudes, and the use of and ability to forward or gather information effectively under the organization's environmental circumstances. The knowledge base of any organization can be viewed according to Nonaka's theories of organizational knowledge production and Cronin and Davenport's theories of social intelligence. According to these theories, it is important to look at the organization's information culture and at how users use information.
Cultural differences need to be understood before information technology developed for an organization in one country can be effectively implemented in an organization in another country. It is well understood that any form of information security technology cannot be properly comprehended and appreciated among user employees without guidelines that focus on the human rather than the technical perspective, such as information ethics and national security policy. A highly developed information culture leads the organization to success and works as a strategic goal that is positively associated with organizational practices and performance. Choo et al. looked at information culture as the socially shared patterns of behaviors, norms and values that define the significance and use of information in an organization. Also, scholars like Manuel Castells posit that information culture transcends the confines of organizations and that government participation through policies is relevant for achieving the norms and values. Norms are standards and values are beliefs, and together they mold the information behaviors that are expected as normal by the people in the organization. In this sense, information behavior is a reflection of cultural norms and values. Marchand, Kettinger and Rollins identify six information behaviors and values to profile an organization's information culture: Information integrity is defined as the use of information in a trustful and principled manner. Information formality is the willingness to use and trust formal information over informal sources. Information control is the extent to which information is used to manage and monitor performance. Information transparency is the openness in reporting on errors and failures. Information sharing is the willingness to provide others with information. Proactiveness is actively using new information to innovate and respond quickly to changes. Information culture typologies Based on a widely applied construct from Cameron and Quinn that has been used to differentiate organizational culture types and their relationships to organizational effectiveness, Choo develops a typology of information culture. He emphasizes elements from information behavior research. The information culture typologies are characterized by a set of five attributes: the primary goal of information management information values and norms information behaviors in terms of information needs information seeking information use In addition, Choo classifies information culture into four categories: Relationship-based Culture Risk-taking Culture Result-oriented Culture Rule-following Culture Relationship-based Culture: information management supports communication, participation, and a sense of identity. Information values and norms emphasize sharing and the proactive use of information. These values promote collaboration and cooperation. The focus is on internal information. Risk-taking Culture: innovation, creativity, and the exploration of new ideas are encouraged in the way information is managed. Information values and norms emphasize sharing and the proactive use of information. These values promote innovation, development of new products or capabilities, and the boldness to take the initiative. The focus is on external information. Information is used to identify and evaluate opportunities, and promote entrepreneurial risk-taking. Result-oriented Culture: information management enables the organization to compete and succeed in its market or sector.
Information values and norms call attention to control and integrity: accurate information is valued in order to assess performance and goal attainment. Information is used to understand clients and competitors, and to evaluate results. Rule-following Culture: information management reinforces the control of internal operations, rules and policies. Information values and norms emphasize control and standardized processes. The focus is on internal information. The organization seeks information about workflows, as well as information about regulatory or accountability requirements. Information is used to control operations, improve efficiency, and provide accountability. Information culture in government organizations Information governance is beginning to gain traction within organizations, particularly where compliance is a concern, and Davenport and Prusak's models of governance are useful tools to inform the design of information governance. Most public sector organizations in Canada have informal information governance models (or policies). Davenport, Eccles and Prusak have developed four models of information governance, to inform a progression of control. They describe the levels of information governance using political terms: information federalism, information feudalism, information monarchy, and information anarchy. Their observations make it possible to evaluate the effectiveness of these governance models in terms of information quality, efficiency, commonality, and access. Oliver's research on three case study organizations found that several factors that characterized and differentiated the information cultures were associated with the organizational information management framework, as well as with the attitudes and values accorded to information. Compliance requirements for the management of information have a significant place in shaping information culture. Research suggests that poor compliance with formal information governance policies reinforces the fact that sound knowledge and records management practices are often neglected. Information culture affects the support, enthusiasm and cooperation of staff in the management of information, assert Curry and Moore. If such an information culture is critical to the successful management of information assets, then it becomes vital to develop and nurture commitment from both management and staff at all levels. Curry and Moore have developed an exploratory model of information culture, which includes the components needed within a strong information culture: effective communication flows, cross-organizational partnerships, co-operative working practices and open access to relevant information, management of information systems in accordance with business strategy, and clear guidelines and documentation for information and data management. Trust is a characteristic that has more recently come to the forefront in the literature. The social dynamics between supervisors and workers rely upon trust, or the lack of trust, which will also have an effect on information sharing. Information culture and information use Curry and Moore define information culture as "a culture in which the value and utility of information in achieving operational and strategic success is recognised, where information forms the basis of organizational decision making and information technology is readily exploited as an enabler for effective information systems".
Information culture is manifested in the organization's values, norms, and practices that affect how information is perceived, created and used. The six information behaviors and values identified by Marchand to characterize the information culture of an organization are information integrity, formality, control, sharing, transparency, and proactiveness. A part of culture that deals specifically with information (the perceptions, values, and norms that people have about creating, sharing, and applying information) has a significant effect on information use outcomes. It is possible to systematically identify behaviours and values that characterize an organization's information culture, and this characterization can be helpful in understanding the information use effectiveness of all sorts of organizations, including private businesses, government agencies, and publicly funded institutions such as libraries and museums. A study by Choo and others suggested that organizations might do well to remember that in the rush to implement strategies and systems, information values and information culture will always have a defining influence on how people share and use information. Information culture and organizational culture In industrialised countries, most of the diseases and injuries are related to mental health problems and are the main reason for employee absenteeism. There are a number of risk factors or stressors that may cause psychological strain and ill health, which has resulted in occupational stress interventions that occur in isolation, independent of organizational culture. Paying more attention to organizational culture paves the way for a contextualized analysis of stress and distress in the workplace. An integrated framework is used in which the association between organizational culture and mental health is mediated by the work organization conditions that qualify the task environment, such as information management, information sharing and decision making. Organizational culture is intertwined with information culture: information culture is a part of organizational culture, as the values and behaviour of employees in the organisation affect the information culture. The framework links organizational culture to mental health via work organization conditions and is inscribed within the functionalist perspective that views culture as an organizational construct that influences and shapes organizational characteristics. Organizational culture is conceptualized in terms of the four quadrants of the Quinn and Rohrbaugh typology, which are: Group Culture Developmental Culture Hierarchical Culture Rational Culture By knowing these cultures, organisations can more easily adopt the relevant culture according to their work-related conditions. Although work organization conditions and organizational culture are closely intertwined, they should not be confounded. Just as societal cultural values would influence organizationally relevant outcomes (Taras, Kirkman, & Steel, 2010), organizational culture might influence work organization conditions. Schein views organizational culture as a multilayered construct that includes artifacts, values, social ideals, and basic assumptions. Artifacts such as behaviors, structures, processes, and technology form a first layer.
At a more latent level, organizational culture is reflected in the values and social ideals shared by members of the organization (i.e., the ideology of the organization). These values and ideals are revealed in symbolic mechanisms such as myths, rituals, stories, legends, and a codified language, as well as in corporate objectives, strategies, management philosophies, and in the justifications given for these. Group Culture encourages employees to make suggestions regarding how to improve their own work and overall performance. As a result, the group culture creates an empowering environment in which individuals perceive they have autonomy and influence. Consequently, in the Group Culture, individuals recognize that their work has meaning and that they have the skills to carry it out. Considering also that information sharing is an important feature of employee participation, informational support from leaders is likely to be high in the group culture. Group Culture tends to develop task designs that promote the use of skills and decision authority, which are protective factors, and also to implement work organization conditions that promote social support, whether from colleagues or from supervisors, thereby having a beneficial influence on employee mental health. A Developmental Culture helps develop decentralised work designs that promote the use of skills and decision authority, to the benefit of employee mental health. In a Developmental Culture, employees are likely to enjoy significant rewards that could have beneficial effects on employee mental health. A Hierarchical Culture helps promote social support and thereby plays a beneficial role in employee mental health. In this type of culture, it could well be seniority that determines both compensation and career advancement, giving employees a certain level of job security that could prove beneficial for employee mental health. A Rational Culture, with clear performance indicators and measurements, is likely to minimize conflicting demands, which can be beneficial for employee mental health. This integrated model can thus help organisations and managers choose a suitable culture. Integration of organizational culture into occupational stress models is a fruitful avenue to achieve a deeper understanding of occupational mental health problems in the workplace, and this framework can also serve as a starting point for multilevel occupational stress research. See also Social relation References Information systems Information society
Information culture
Technology
2,671
75,245,736
https://en.wikipedia.org/wiki/Tropicoporus%20linteus
Tropicoporus linteus is a tropical American mushroom. Its former name, Phellinus linteus, has been applied more widely, including to an East Asian mushroom. Taxonomy Polyporus linteus was named by Miles Joseph Berkeley and Moses Ashley Curtis and first reported from a specimen from Nicaragua in 1860. It was renamed Phellinus linteus by Shu Chün Teng in 1963, and renamed Tropicoporus linteus by Li-Wei Zhou and Yu-Cheng Dai in 2015. The name Phellinus linteus has been applied to the following mushrooms: Americas Phellinus linteus per se, the tropical American species, now Tropicoporus linteus In subtropical South America, Phellinus linteus on Cordia americana is actually Tropicoporus drechsleri; specimens collected on other plant hosts require further study. Asia Phellinus linteus in East Asia Africa Xanthochrous rudis, an African species formerly regarded as a synonym of Phellinus linteus, regained taxonomic independence and was renamed Tropicoporus rudis. Description A description was made by Tian et al. (2012) for the epitype. This mushroom's tube trama is dimitic, containing generative and skeletal hyphae. Ecology and habitat Tropicoporus mushrooms cause a white rot. This mushroom is known to occur in Nicaragua, the United States (Florida) and Brazil. Tropicoporus linteus grows on oak and tamarind. References Fungi of North America Fungi of Central America Fungi of South America Taxa named by Miles Joseph Berkeley Taxa named by Moses Ashley Curtis Hymenochaetaceae Fungus species
Tropicoporus linteus
Biology
351
391,097
https://en.wikipedia.org/wiki/Matrilocal%20residence
In social anthropology, matrilocal residence or matrilocality (also uxorilocal residence or uxorilocality) is the societal system in which a married couple resides with or near the wife's parents. Description Frequently, visiting marriage is being practiced, meaning that husband and wife are living apart, in their separate birth families, and seeing each other in their spare time. The children of such marriages are raised by the mother's extended matrilineal clan. The father does not have to be involved in the upbringing of his own children; he does, however, in that of his sisters' children (his nieces and nephews). In direct consequence, property is inherited from generation to generation, and, overall, remains largely undivided. Matrilocal residence is found most often in horticultural societies. Examples of matrilocal societies include the people of Ngazidja in the Comoros, the Ancestral Puebloans of Chaco Canyon, the Nair community in Kerala in South India, the Moso of Yunnan and Sichuan in southwestern China, the Siraya of Taiwan, and the Minangkabau of western Sumatra. Among indigenous people of the Amazon basin this residence pattern is often associated with the customary practice of brideservice, as seen among the Urarina of northeastern Peru. During the Song Dynasty in medieval China, matrilocal marriage became common for wealthy non-aristocratic families. In other regions of the world, such as Japan, during the Heian period, a marriage of this type was not a sign of high status, but rather an indication of the patriarchal authority of the woman's family (her father or grandfather), who was sufficiently powerful to demand it. Another matrilocal society is the !Kung San of Southern Africa. They practice uxorilocality for the bride service period, which lasts until the couple has produced three children or they have been together for more than ten years. At the end of the bride service period, the couple has a choice of which clan they want to live with. (Technically, uxorilocality differs from matrilocality; uxorilocality means the couple settles with the wife's family, while matrilocality means the couple settles with the wife's lineage. Because the !Kung do not live in lineages, they cannot be matrilocal; they are uxorilocal.) Early theories explaining the determinants of postmarital residence (by, for example, Lewis Henry Morgan, Edward Tylor, and George Peter Murdock) connected it with the sexual division of labor. However, for many years cross-cultural tests of this hypothesis using worldwide samples failed to find any significant relationship between these two variables. On the other hand, Korotayev's tests have shown that the female contribution to subsistence does correlate significantly with matrilocal residence in general; however, this correlation is masked by a general polygyny factor. Although an increase in the female contribution to subsistence tends to lead to matrilocal residence, it also tends simultaneously to lead to general non-sororal polygyny which effectively destroys matrilocality. If this polygyny factor is controlled (e.g., through a multiple regression model), division of labor turns out to be a significant predictor of postmarital residence. Thus, Murdock's hypotheses regarding the relationships between the sexual division of labor and postmarital residence were basically correct, though, as has been shown by Korotayev, the actual relationships between those two groups of variables are more complicated than he expected. 
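The point about controlling for a polygyny factor can be made concrete with a small statistical sketch. The code below is purely illustrative: it uses synthetic data, made-up variable names, and an ordinary least squares model from the statsmodels library, and does not reproduce Korotayev's actual dataset or analysis.

```python
# Illustrative only: synthetic data showing how controlling for a confounder
# (here, a stand-in "polygyny" variable) can reveal an association between
# female subsistence contribution and matrilocal residence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

female_contribution = rng.uniform(0, 1, n)                      # share of subsistence work
polygyny = 0.8 * female_contribution + rng.normal(0, 0.3, n)    # confounder
# Matrilocality pushed up by female contribution, down by polygyny (synthetic).
latent = 2.0 * female_contribution - 2.5 * polygyny + rng.normal(0, 0.5, n)
matrilocal = (latent > latent.mean()).astype(float)

# Model 1: female contribution alone (the association looks weak).
m1 = sm.OLS(matrilocal, sm.add_constant(female_contribution)).fit()

# Model 2: controlling for polygyny (the association becomes visible).
X2 = sm.add_constant(np.column_stack([female_contribution, polygyny]))
m2 = sm.OLS(matrilocal, X2).fit()

print(m1.params)  # coefficient on female contribution, unadjusted
print(m2.params)  # coefficient on female contribution, adjusted for polygyny
```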
Matrilocality in the Arikari culture in the 17th–18th centuries was studied anew within feminist archaeology by Christi Mitchell, in a critique of a previous study, the critique challenging whether men were virtually the sole agents of societal change while women were only passive. According to Barbara Epstein, anthropologists in the 20th century criticized feminist promatriarchal views and said that "the goddess worship or matrilocality that evidently existed in many paleolithic societies was not necessarily associated with matriarchy in the sense of women's power over men. Many societies can be found that exhibit those qualities along with female subordination. Furthermore, militarism, destruction of the natural environment, and hierarchical social structures can be found in societies in which goddess worship, matrilocality, or matriliny exist." In sociobiology, matrilocality refers to animal societies in which a pair bond is formed between animals born or hatched in different areas or different social groups, and the pair becomes resident in the female's home area or group. In present-day mainland China, matrilocal residence has been encouraged by the government in an attempt to counter the problem of unbalanced male-majority sex ratios caused by the abortion, infanticide and abandonment of girls. Because girls traditionally marry out in virilocal marriage (living with or near the husband's parents) they have been seen as "mouths from another family" or as a waste of resources to raise. List of matrilocal societies Bajuni People Bribri Filipinos (both matrilocal and patrilocal) Garo Hopi Iban (both matrilocal and patrilocal) Iroquois Jaintia Karen Kerinci Khasi Marshallese Minangkabau Mosuo (separate residence; each lives in mother's household) Nair people of Kerala Pueblos, among whom "matrilineality ... seemed to be associated with matrilocality" Siraya Tlingit Vanatinai Sinixt See also Gharjamai, a South Asian term that refers to a man who lives with his wife's family Matrifocal family Neolocal residence Patrilocal residence Yobai Night hunting Notes References Bibliography Marriage Sociobiology Cultural anthropology Matriarchy
Matrilocal residence
Biology
1,230
22,378,837
https://en.wikipedia.org/wiki/Significance%20of%20numbers%20in%20Judaism
Various numbers play a significant role in Jewish texts or practice. Some such numbers were used as mnemonics to help remember concepts, while other numbers were considered to have intrinsic significance or allusive meaning. The song Echad Mi Yodea ("who knows one?"), sung at the Passover Seder, is known for recounting a religious concept or practice associated with each of the first 13 numbers. In Jewish History In Jewish historical study, numbers were believed to be a means for understanding the divine. This marriage between the symbolic and the physical found its pinnacle in the creation of the Tabernacle. The numerical dimensions of the temple are a "microcosm of creation ... that God used to create the Olamot-Universes." In the thought system of Maharal, each number has a consistent philosophical meaning: 1 - unity. 2 - dualism and multiplicity. 3 - the unity between two extremes. 4 - multiplicity in two directions, like the cardinal directions. 5 - the center point which unifies those four extremes. 6 - multiplicity in three dimensions. 7 - the center point which unifies all of nature, as with Shabbat. 8 - the supernatural realm which feeds nature, and the striving of man for a connection with the supernatural. 9 - the most complete multiplicity, including division between the natural and supernatural. 10 - the final unification between natural and supernatural. 1 Echad Mi Yodea begins with the line "One is Hashem, in the heavens and the earth - אחד אלוהינו שבשמיים ובארץ." The monotheistic nature of normative Judaism, referenced also as the "oneness of God," is a common theme in Jewish liturgy—such as the central prayer—as well as Rabbinic literature. Maimonides writes in the 13 Principles of Faith that 2 Two "defines the concept of evenness," and can represent God's relationship with humanity or the people Israel. It is also linked to the two tablets of the covenant (such as in Echad Mi Yodea) and the two inclinations; the yetzer hara and yetzer hatov. On Shabbat, it's traditional to light two candles; one to represent keeping (שמור) the Sabbath, and the other to represent remembering (זכור) it. There are several common re-interpretations of this custom. The two candles may also represent husband and wife, the second soul received on Shabbat, or the division between light and dark in the creation story. 3 Three are the Fathers (Patriarchs) - שלושה אבות (Abraham, Isaac and Jacob) The three sons of Noah (Ham, Shem and Japheth) Number of aliyot on a non-Yom Tov Monday and Thursday Torah reading and number of aliyot in Shabbat Mincha The Holy of Holies occupied one-third of the area of the Temple (and previously, Tabernacle) The angels declared that God was "Holy, holy, holy" for a total of three times The Priestly Blessing contains three sections On the third day the Jewish people received the Torah 4 Four are the Mothers (Matriarchs) - ארבע אימהות (Sarah, Rebecca, Rachel, and Leah) The number of aliyot on Rosh Chodesh At the Passover Seder four cups of wine are drunk, and four expressions of redemption are recited Both the heavens and earth were described as having four sides or corners, similar to the cardinal directions. 5 Five are the books of the Torah - חמישה חומשי תורה Of the Ten Commandments, five commandments were written on each of the two tablets as believed by Rabbi Hanina ben Gamaliel. 
Although the Sages believe each tablet had all 10 commandments on them The sections of the book of Psalms The number of knots in the tzitzit Number of aliyot on Yom Tov that does not coincide with Shabbat Five species of grain 6 Six are the books of the Mishnah - שישה סידרי משנה The six working days of the week The six days of Creation 7 Number of days in the weekly cycle including counting of the Sabbath - שיבעה ימי שבתא According to a midrash, "All sevens are beloved": There are seven terms for the heavens and seven terms for the earth; Enoch was the seventh generation from Adam; Moses was the seventh generation from Abraham; David was the seventh son in his family; Asa (who called out to God) was the seventh generation of Israelite kings; the seventh day (Shabbat), month (Tishrei), year (shmita) and shmita (jubilee) all have special religious status. The Seven Laws of Noah The Seven Species of the Land of Israel The counting of the Omer consists of seven weeks, each of seven days Number of blessings in the Sheva Brachot The red heifer passage discusses seven items of purification, each mentioned seven times. A woman in niddah following menstruation must count seven "clean days" prior to immersion in the mikvah Acts of atonement and purification were accompanied by a sevenfold sprinkling The menorah in the Temple had seven lamps The shiva mourning period is seven days Number of days of Sukkot and Pesach (Israel) Number of blessings in the Amidah of Shabbat, Yom Tov, and all Musaf prayers (except Rosh Hashanah) Number of aliyot on Shabbat There were seven of every pure animal in Noah's Ark The number seven is said to symbolize completion, association with God, or the covenant of holiness and sanctification Moses died on the seventh of Adar Jacob bowed to Esau seven times upon meeting him (Genesis, 33:3) 8 Eight are the days of the circumcision - שמונה ימי מילה Total number of days of Yom Tov in a year in Israel Number of days of Chanukah 8 days of sukkos Number of days of Pesach (Diaspora) According to the Zohar, the number eight signifies new beginnings because the eighth day was the first day after creation when God returned to work; the week began again. 9 The first nine days of the Hebrew month of Av are collectively known as "The Nine Days" (Tisha HaYamim), and are a period of semi-mourning leading up to Tisha B'Av, the ninth day of Av on which both Temples in Jerusalem were destroyed 10 The Ten Commandments - עשרה דיבריא The ten Plagues of Egypt Ten Jewish people form a minyan There are ten Sefirot (human and Godly characteristics) depicted in Kabbalah According to the Mishna, the world was created by ten divine utterances; ten generations passed between Adam and Noah and between Noah and Abraham; Abraham received ten trials from God; the Israelites received ten trials in the desert; there were ten plagues in Egypt; ten miracles occurred in the Temple; ten apparently supernatural phenomena were created during twilight in the sixth day of creation. The number ten in this Mishna indicates a large number (e.g. the Mishna declares that Abraham's willingness to undergo ten trials "indicates his love for God"). Yud is the tenth letter in the Hebrew alefbet that links unity (Yeḥudi) with God (Yhwh) and the Jew (Yehudi). 11 Eleven are the stars of the Joseph's dream - אחד עשר כוכביא There are eleven spices in the Incense offering 12 Twelve are the tribes of Israel - שנים עשר שיבטיא Ritual items frequently came in twelves to represent the role of each tribe. 
The high priest's breastplate (hoshen) had twelve precious stones embedded within them, representing the 12 tribes. Elijah built his altar with 12 stones to represent the tribes, Moses built 12 pillars at Sinai representing the tribes, and Joshua erected twelve memorial stones at the Jordan River representing the tribes. "All of God's creations are equal in number to the 12 tribes: 12 astrological signs, 12 months, 12 hours of the day, 12 hours of the night, 12 stones that Aaron [the high priest] would wear." The Temple Mount could be accessed through twelve gates Age of Bat Mitzvah, when a Jewish female becomes obligated to follow Jewish law There were twelve loaves of show-bread on the shulchan (table) in the Beit Hamikdash Sons of Jacob Number of springs of water Elim 13 Thirteen are the attributes of Hashem - שלושה עשר מידיא Age of Bar Mitzvah, when a Jewish male becomes obligated to follow Jewish law Jewish principles of faith according to Maimonides Hermeneutic rules of Rabbi Ishmael Number of days of Yom Tov in a year (Diaspora) Months in a leap year on the Hebrew calendar 14 The number of steps in the Passover Seder The number of books in the Mishnah Torah, also entitled Yad Hahazaka in which the word Yad has gematria 14 15 One of two numbers that is written differently from the conventions of writing numbers in Hebrew in order to avoid writing the name of God. The other is 16. The number of words in the Priestly Blessing The date of many Jewish Holidays, including: Pesach, Sukkot, Tu B'Shevat, and Tu B'Av The number of chapters in Psalms that begin with the words Shir Hama'alos 16 One of two numbers that is written differently from the conventions of writing numbers in Hebrew in order to avoid writing the name of God. The other is 15. 18 Gematria of "chai", the Hebrew word for life. Multiples of this number are considered good luck and are often used in gift giving. The Amidah is also known as "Shemoneh Esreh" ("Eighteen"), due to originally having 18 blessings, though a 19th blessing was later added 19 The number of years in a cycle of the Hebrew calendar, after which the date on the lunar calendar matches the date on the solar calendar Blessings in the weekday Amidah 20 Minimum age to join the Israelite army In halakhah, the death penalty was only carried out if the offender was at least 20 years old 22 The number of letters in the Hebrew alphabet The number of the almond blossoms on the menorah 24 Total number of books in the Tanakh Twenty-four priestly gifts 24 priestly divisions 24 questions that Reish Lakish would ask Rabbi Yochanan 24 blessings recited in the Amidah on fast days 24,000 people that died in the plague that Pinchas stops 24,000 students of R Akiva that died 26 Gematria of the Tetragrammaton 28 The number of Hebrew letters in Genesis 1:1 30 The number of days in some months of the Hebrew calendar 33 The 33rd day of the Omer, on which Lag BaOmer falls 36 The world is said to be sustained by the merit of 36 hidden righteous individuals It's the double of 18 - See above 40 Moses stayed on mount Sinai for 40 days after the giving of Torah. 
After the golden calf, he spent there another 40 days (and nights) The number of days the spies were in the land of Canaan Years in the desert—a generation Like Moses's 40 years, the reigns of King David and Solomon were also 40 years Days and nights of rain during the flood that occurred at the time of Noah Isaac's age at marriage to Rebecca Esau's age at marriage to his first two wives Number of days Jonah prophesies will pass before Nineveh is destroyed. (They repent) A mikveh must contain at least 40 se'ah (volume measurement) of water Number of years of the reign of David, Solomon, and the most righteous judges in the book of Judges Number of lashes for one who transgresses a commandment Number of days which the Torah was given Number of weeks a person is formed in their mother's womb Number of curses on Adam Minimum age at which a man could join the Sanhedrin 42 Letters in one of God's Divine Names 42 cities that refugees (See Cities of Refuge) can go to when they kill accidentally There were 42 journeys of the sons of Israel through the desert 42 juveniles mauled by two she-bears at Bethel after identifying the prophet Elisha as 'Baldy' (head uncovered) 50 The 50th year of the sabbatical cycle was the Jubilee year 54 The Torah is divided into 54 weekly Torah portions 60 Considered the beginning of old age () 70 The 70 nations of the world (Generations of Noah) Members of the Sanhedrin Lifespan of King David Years between the destruction of the first and construction of the Second Temple Number of date-palms at Elim Number of members of Jacob's family who descended to Egypt Number of the Jewish elders led by Moshe 86 The gematria of Elohim (אלהים) 130 The age of Jochebed when she gave birth to Moses. The 130 shekels of silver were offered during the dedication of the altar. 137 The number of years Ishmael, Levi and Amram (the father of Moses) lived. See Gen. 25:17, Ex. 6:16 and 6:20. It is the gematria of the word קבלה / Kabbalah. [See also the book: "137: Jung, Pauli, and the Pursuit of a Scientific Obsession".] 176 The number of verses in Psalm 119, the longest Psalm in the entire Tanakh. The number of verses found in Parashat Naso, the longest of the weekly Torah portions. The number of pages in the Gemara of Bava Batra, the most of any tractate in the Babylonian Talmud. 248 Number of positive commandments Number of limbs (איברים) in man's body 314 The gematria of Shaddai, one of God's names 318 Number of men Abraham took to battle against the 4 kings (); also gematria of Eliezer (Abraham's servant) 365 Length of the solar calendar (which has significance in Judaism) Number of prohibitive commandments Number of arteries in the body 374 Total number of years the First Temple stood 400 The number of shekalim Abraham paid Ephron (Bereishit 23:15) The number of men with Esav Years in Egypt 613 The 613 commandments, the number of mitzvot in the Torah 620 The total number of mitzvot, including those of Torah and Rabbinic origin. See also Bible code, a purported set of secret messages encoded within the Torah. Biblical and Talmudic units of measurement Chronology of the Bible Gematria Hebrew calendar Hebrew numerals Jewish symbolism Notarikon, a method of deriving a word by using each of its initial letters. Notes References Rashi, The Sapirstein edition (1999). Book of Shemos, Parashas Mishpatim. p. 307. Numbers Numbers Judaism Judaism
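Several entries above (18 for chai, 26 for the Tetragrammaton, 86 for Elohim, 314 for Shaddai) are gematria values, obtained by summing the standard numerical values of the Hebrew letters. The short Python sketch below illustrates the standard letter values; the function name and the example words are choices made for this illustration.

```python
# Standard gematria letter values (final letters counted like regular ones).
GEMATRIA = {
    "א": 1, "ב": 2, "ג": 3, "ד": 4, "ה": 5, "ו": 6, "ז": 7, "ח": 8, "ט": 9,
    "י": 10, "כ": 20, "ך": 20, "ל": 30, "מ": 40, "ם": 40, "נ": 50, "ן": 50,
    "ס": 60, "ע": 70, "פ": 80, "ף": 80, "צ": 90, "ץ": 90,
    "ק": 100, "ר": 200, "ש": 300, "ת": 400,
}

def gematria(word: str) -> int:
    """Sum the standard numerical values of the Hebrew letters in a word."""
    return sum(GEMATRIA.get(letter, 0) for letter in word)

if __name__ == "__main__":
    print(gematria("חי"))     # chai ("life") -> 18
    print(gematria("יהוה"))   # the Tetragrammaton -> 26
    print(gematria("אלהים"))  # Elohim -> 86
    print(gematria("שדי"))    # Shaddai -> 314
```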
Significance of numbers in Judaism
Mathematics
3,206
23,405,094
https://en.wikipedia.org/wiki/Hymenopellis%20radicata
Hymenopellis radicata, commonly known as the deep root mushroom, beech rooter, or the rooting shank, is a widespread agaric readily identified by its deeply rooted stalk (stipe). Description The cap is medium to large, flat, grayish or yellowish brown and streaked, with a central hump, and measures between 5 and 12.5 cm across. The surface of the cap is sticky or slimy when moist, and the underside displays wide white gills, or lamellae. The brittle stalk tapers at both ends and is nearly white above, shading to brown below the soil. The stem grows into a long, deeply rooting taproot until it touches a piece of wood; this root may grow up to 20 cm in length in some specimens. Similar species It is similar to Oudemansiella longipes. References Physalacriaceae Fungi of Europe Fungi of North America Fungus species
Hymenopellis radicata
Biology
188
56,311,337
https://en.wikipedia.org/wiki/HotelOnline
HotelOnline is a travel technology company, offering a suite of digital tools for e-commerce, online marketing and operations automation to the hotel industry in Sub-Saharan Africa. The company helps hotels digitize operations and offers managed distribution to online channels, such as Online Travel Agents (OTAs), Global Distribution Systems, and metasearch engines. Background HotelOnline was founded in 2014, by Endre Opdal and Håvar Bauck, to facilitate e-commerce and online marketing for hotels in Eastern Africa. Initially operating as Savanna Sunrise, the company grew rapidly in Kenya, Uganda and Rwanda in 2015 and 2016. After merging with a Polish competitor in 2017, the company rebranded to HotelOnline and expanded operations to Nigeria. Later in 2017, HotelOnline raised USD 250,000 in an equity crowdfunding, becoming the first known case of an African company successfully using this funding method. In May 2018, HotelOnline acquired Senegalese travel technology company Teranga Solutions as part of their expansion into francophone Western Africa. The company at the same time also acquired European Travel Group AS. In 2019, prominent startup investor Shravan Shroff invested an undisclosed amount in HotelOnline, and joined their Board of Directors. Shroff is known for his role as the first investor in traveltech unicorn OYO Rooms. Trond Riiber-Knudsen, known as Norway's most active startup investor, also co-invested an undisclosed amount. In September 2019, as part of their expansion into the Nordic market, HotelOnline acquired the Norwegian operations of Key Butler, a leading Nordic short term apartment rental company. In 2020, HotelOnline acquired former competitor Africabookings, and the Cloud9 Lifestyle app. This received some notable attention, as HotelOnline was seen as going against the stream when most of the travel industry in Africa was in a crisis. In April 2022, Yanolja, a South Korean travel technology firm backed by Softbank and Booking.com announced their investment of an undisclosed amount in HotelOnline. Later, in September 2022, HotelOnline announced the USD 1.9 million acquisition of Kenyan competitor HotelPlus. Technology HotelOnline provides a cloud-based suite of tools for automated online distribution and hotel operations, reservations management and AI-driven dynamic pricing. References Travel technology Digital marketing companies E-commerce Technology companies of Kenya Tourism in Africa Hospitality industry in Africa
HotelOnline
Technology
492
4,603
https://en.wikipedia.org/wiki/Booch%20method
The Booch method is a method for object-oriented software development. It is composed of an object modeling language, an iterative object-oriented development process, and a set of recommended practices. The method was authored by Grady Booch while he was working for Rational Software (acquired by IBM), published in 1992 and revised in 1994. It was widely used in software engineering for object-oriented analysis and design and benefited from ample documentation and support tools. The notation aspect of the Booch methodology was superseded by the Unified Modeling Language (UML), which features graphical elements from the Booch method along with elements from the object-modeling technique (OMT) and object-oriented software engineering (OOSE). Methodological aspects of the Booch method have been incorporated into several methodologies and processes, the primary such methodology being the Rational Unified Process (RUP). Content of the method The Booch notation is characterized by cloud shapes to represent classes and distinguishes the following diagrams: class diagrams, object diagrams, state event diagrams and module diagrams. The process is organized around a macro and a micro process. The macro process identifies the following cycle of activities: Conceptualization: establish core requirements Analysis: develop a model of the desired behavior Design: create an architecture Evolution: implementation Maintenance: evolution after delivery The micro process is applied to new classes, structures or behaviors that emerge during the macro process. It is made of the following cycle: Identification of classes and objects Identification of their semantics Identification of their relationships Specification of their interfaces and implementation References External links The Booch Method of Object-Oriented Analysis & Design Software design Object-oriented programming Programming principles
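To make the micro process slightly more concrete, the sketch below shows the kind of artifact one iteration might produce for a hypothetical order-processing domain: two identified classes, their semantics captured in docstrings, a relationship (an order aggregates line items), and a small specified interface. The domain, class names, and code are invented for illustration and do not come from Booch's text or notation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical outcome of one micro-process iteration (illustrative domain only).

@dataclass
class LineItem:
    """Semantics: one product entry on an order (identification of classes and objects)."""
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    """Semantics: a customer's request to purchase a set of items."""
    order_id: str
    items: List[LineItem] = field(default_factory=list)  # relationship: Order aggregates LineItems

    # Specification of the interface, with a trivial implementation:
    def add_item(self, item: LineItem) -> None:
        self.items.append(item)

    def total(self) -> float:
        return sum(i.quantity * i.unit_price for i in self.items)

order = Order("A-100")
order.add_item(LineItem("widget", 3, 2.50))
assert order.total() == 7.50
```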
Booch method
Engineering
344
1,858,534
https://en.wikipedia.org/wiki/Vascular%20endothelial%20growth%20factor
Vascular endothelial growth factor (VEGF), originally known as vascular permeability factor (VPF), is a signal protein produced by many cells that stimulates the formation of blood vessels. To be specific, VEGF is a sub-family of growth factors, the platelet-derived growth factor family of cystine-knot growth factors. They are important signaling proteins involved in both vasculogenesis (the de novo formation of the embryonic circulatory system) and angiogenesis (the growth of blood vessels from pre-existing vasculature). It is part of the system that restores the oxygen supply to tissues when blood circulation is inadequate, such as in hypoxic conditions. Serum concentration of VEGF is high in bronchial asthma and diabetes mellitus. VEGF's normal function is to create new blood vessels during embryonic development, new blood vessels after injury, muscle following exercise, and new vessels (collateral circulation) to bypass blocked vessels. It can contribute to disease. Solid cancers cannot grow beyond a limited size without an adequate blood supply; cancers that can express VEGF are able to grow and metastasize. Overexpression of VEGF can cause vascular disease in the retina of the eye and other parts of the body. Drugs such as aflibercept, bevacizumab, ranibizumab, and pegaptanib can inhibit VEGF and control or slow those diseases. History In 1970, Judah Folkman et al. described a factor secreted by tumors causing angiogenesis and called it tumor angiogenesis factor. In 1983 Senger et al. identified a vascular permeability factor secreted by tumors in guinea pigs and hamsters. In 1989 Ferrara and Henzel described an identical factor in bovine pituitary follicular cells which they purified, cloned and named VEGF. A similar VEGF alternative splicing was discovered by Tischer et al. in 1991. Between 1996 and 1997, Christinger and De Vos obtained the crystal structure of VEGF, first at 2.5 Å resolution and later at 1.9 Å. Fms-like tyrosine kinase-1 (flt-1) was shown to be a VEGF receptor by Ferrara et al. in 1992. The kinase insert domain receptor (KDR) was shown to be a VEGF receptor by Terman et al. in 1992 as well. In 1998, neuropilin 1 and neuropilin 2 were shown to act as VEGF receptors. Classification In mammals, the VEGF family comprises five members: VEGF-A, placenta growth factor (PGF), VEGF-B, VEGF-C and VEGF-D. The latter members were discovered after VEGF-A; before their discovery, VEGF-A was known as VEGF. A number of VEGF-related proteins encoded by viruses (VEGF-E) and in the venom of some snakes (VEGF-F) have also been discovered. Activity of VEGF-A, as its name implies, has been studied mostly on cells of the vascular endothelium, although it does have effects on a number of other cell types (e.g., stimulation of monocyte/macrophage migration, neurons, cancer cells, kidney epithelial cells). In vitro, VEGF-A has been shown to stimulate endothelial cell mitogenesis and cell migration. VEGF-A is also a vasodilator and increases microvascular permeability and was originally referred to as vascular permeability factor. Isoforms There are multiple isoforms of VEGF-A that result from alternative splicing of mRNA from a single, 8-exon VEGFA gene. These are classified into two groups which are referred to according to their terminal exon (exon 8) splice site: the proximal splice site (denoted VEGFxxx) or distal splice site (VEGFxxxb). 
In addition, alternate splicing of exon 6 and 7 alters their heparin-binding affinity and amino acid number (in humans: VEGF121, VEGF121b, VEGF145, VEGF165, VEGF165b, VEGF189, VEGF206; the rodent orthologs of these proteins contain one fewer amino acids). These domains have important functional consequences for the VEGF splice variants, as the terminal (exon 8) splice site determines whether the proteins are pro-angiogenic (proximal splice site, expressed during angiogenesis) or anti-angiogenic (distal splice site, expressed in normal tissues). In addition, inclusion or exclusion of exons 6 and 7 mediate interactions with heparan sulfate proteoglycans (HSPGs) and neuropilin co-receptors on the cell surface, enhancing their ability to bind and activate the VEGF receptors (VEGFRs). Recently, VEGF-C has been shown to be an important inducer of neurogenesis in the murine subventricular zone, without exerting angiogenic effects. Mechanism All members of the VEGF family stimulate cellular responses by binding to tyrosine kinase receptors (the VEGFRs) on the cell surface, causing them to dimerize and become activated through transphosphorylation, although to different sites, times, and extents. The VEGF receptors have an extracellular portion consisting of 7 immunoglobulin-like domains, a single transmembrane spanning region, and an intracellular portion containing a split tyrosine-kinase domain. VEGF-A binds to VEGFR-1 (Flt-1) and VEGFR-2 (KDR/Flk-1). VEGFR-2 appears to mediate almost all of the known cellular responses to VEGF. The function of VEGFR-1 is less well-defined, although it is thought to modulate VEGFR-2 signaling. Another function of VEGFR-1 may be to act as a dummy/decoy receptor, sequestering VEGF from VEGFR-2 binding (this appears to be particularly important during vasculogenesis in the embryo). VEGF-C and VEGF-D, but not VEGF-A, are ligands for a third receptor (VEGFR-3/Flt4), which mediates lymphangiogenesis. The receptor (VEGFR3) is the site of binding of main ligands (VEGFC and VEGFD), which mediates perpetual action and function of ligands on target cells. Vascular endothelial growth factor-C can stimulate lymphangiogenesis (via VEGFR3) and angiogenesis via VEGFR2. Vascular endothelial growth factor-R3 has been detected in lymphatic endothelial cells in CL of many species, cattle, buffalo and primate. In addition to binding to VEGFRs, VEGF binds to receptor complexes consisting of both neuropilins and VEGFRs. This receptor complex has increased VEGF signalling activity in endothelial cells (blood vessels). Neuropilins (NRP) are pleiotropic receptors and therefore other molecules may interfere with the signalling of the NRP/VEGFR receptor complexes. For example, Class 3 semaphorins compete with VEGF165 for NRP binding and could therefore regulate VEGF-mediated angiogenesis. Expression VEGF-A production can be induced in a cell that is not receiving enough oxygen. When a cell is deficient in oxygen, it produces HIF, hypoxia-inducible factor, a transcription factor. HIF stimulates the release of VEGF-A, among other functions (including modulation of erythropoiesis). Circulating VEGF-A then binds to VEGF receptors on endothelial cells, triggering a tyrosine kinase pathway leading to angiogenesis. The expression of angiopoietin-2 in the absence of VEGF leads to endothelial cell death and vascular regression. Conversely, a German study done in vivo found that VEGF concentrations actually decreased after a 25% reduction in oxygen intake for 30 minutes. 
HIF1 alpha and HIF1 beta are constantly being produced but HIF1 alpha is highly O2 labile, so, in aerobic conditions, it is degraded. When the cell becomes hypoxic, HIF1 alpha persists and the HIF1alpha/beta complex stimulates VEGF release. One study reported that the combined use of microvesicles and 5-FU resulted in enhanced chemosensitivity of squamous cell carcinoma cells compared with the use of either 5-FU or microvesicles alone. In addition, downregulation of VEGF gene expression was associated with decreased CD1 gene expression. Clinical significance In disease VEGF-A and the corresponding receptors are rapidly up-regulated after traumatic injury of the central nervous system (CNS). VEGF-A is highly expressed in the acute and sub-acute stages of CNS injury, but the protein expression declines over time. This time-span of VEGF-A expression corresponds with the endogenous re-vascularization capacity after injury. This would suggest that VEGF-A / VEGF165 could be used as a target to promote angiogenesis after traumatic CNS injuries. However, there are contradicting scientific reports about the effects of VEGF-A treatments in CNS injury models. Although it is not established as a biomarker for the diagnosis of acute ischemic stroke, high levels of serum VEGF in the first 48 hours after a cerebral infarct have been associated with a poor prognosis after 6 months and 2 years. VEGF-A has been implicated in poor prognosis in breast cancer. Numerous studies show a decreased overall survival and disease-free survival in those tumors overexpressing VEGF. The overexpression of VEGF-A may be an early step in the process of metastasis, a step that is involved in the "angiogenic" switch. Although VEGF-A has been correlated with poor survival, its exact mechanism of action in the progression of tumors remains unclear. VEGF-A is also released in rheumatoid arthritis in response to TNF-α, increasing endothelial permeability and swelling and also stimulating angiogenesis (formation of capillaries). VEGF-A is also important in diabetic retinopathy (DR). The microcirculatory problems in the retina of people with diabetes can cause retinal ischaemia, which results in the release of VEGF-A, and a switch in the balance of pro-angiogenic VEGFxxx isoforms over the normally expressed VEGFxxxb isoforms. VEGFxxx may then cause the creation of new blood vessels in the retina and elsewhere in the eye, heralding changes that may threaten sight. VEGF-A plays a role in the disease pathology of the wet form of age-related macular degeneration (AMD), which is the leading cause of blindness for the elderly of the industrialized world. The vascular pathology of AMD shares certain similarities with diabetic retinopathy, although the cause of disease and the typical source of neovascularization differs between the two diseases. VEGF-D serum levels are significantly elevated in patients with angiosarcoma. Once released, VEGF-A may elicit several responses. It may cause a cell to survive, move, or further differentiate. Hence, VEGF is a potential target for the treatment of cancer. The first anti-VEGF drug, a monoclonal antibody named bevacizumab, was approved in 2004. Approximately 10–15% of patients benefit from bevacizumab therapy; however, biomarkers for bevacizumab efficacy are not yet known. Current studies show that VEGFs are not the only promoters of angiogenesis. In particular, FGF2 and HGF are potent angiogenic factors. 
Patients suffering from pulmonary emphysema have been found to have decreased levels of VEGF in the pulmonary arteries. VEGF-D has also been shown to be overexpressed in lymphangioleiomyomatosis and is currently used as a diagnostic biomarker in the management of this rare disease. In the kidney, increased expression of VEGF-A in glomeruli directly causes the glomerular hypertrophy that is associated with proteinuria. VEGF alterations can be predictive of early-onset pre-eclampsia. Gene therapies for refractory angina establish expression of VEGF in epicardial cells to promote angiogenesis. See also Proteases in angiogenesis Withaferin A, a potent inhibitor of angiogenesis References Further reading External links – the Vascular Endothelial Growth Factor Structure in Interactive 3D Angiogenesis Drugs acting on the cardiovascular system Growth factors Neurotrophic factors Human proteins
Vascular endothelial growth factor
Chemistry,Biology
2,747
58,445,345
https://en.wikipedia.org/wiki/Lipid%20pump
The lipid pump sequesters carbon from the ocean's surface to deeper waters via lipids associated with overwintering vertically migratory zooplankton. Lipids are a class of hydrocarbon rich, nitrogen and phosphorus deficient compounds essential for cellular structures. This lipid carbon enters the deep ocean as carbon dioxide produced by respiration of lipid reserves and as organic matter from the mortality of zooplankton. Compared to the more general biological pump, the lipid pump also results in a "lipid shunt", where other nutrients like nitrogen and phosphorus that are consumed in excess must be excreted back to the surface environment, and thus are not removed from the surface mixed layer of the ocean. This means that the carbon transported by the lipid pump does not limit the availability of essential nutrients in the ocean surface. Carbon sequestration via the lipid pump is therefore decoupled from nutrient removal, allowing carbon uptake by oceanic primary production to continue. In the Biological Pump, nutrient removal is always coupled to carbon sequestration; primary production is limited as carbon and nutrients are transported to depth together in the form of organic matter. The contribution of the lipid pump to the sequestering of carbon in the deeper waters of the ocean can be substantial: the carbon transported below 1,000 metres (3,300 ft) by copepods of the genus Calanus in the Arctic Ocean almost equals that transported below the same depth annually by particulate organic carbon (POC) in this region. A significant fraction of this transported carbon would not return to the surface due to respiration and mortality. Research is ongoing to more precisely estimate the amount that remains at depth. The export rate of the lipid pump may vary from 1–9.3 g C m−2 y−1 across temperate and subpolar regions containing seasonally-migrating zooplankton. The role of zooplankton, and particularly copepods, in the food web is crucial to the survival of higher trophic level organisms whose primary source of nutrition is copepods. With warming oceans and increasing melting of ice caps due to climate change, the organisms associated with the lipid pump may be affected, thus influencing the survival of many commercially important fish and endangered marine mammals. As a new and previously unquantified component of oceanic carbon sequestration, further research on the lipid pump can improve the accuracy and overall understanding of carbon fluxes in global oceanic systems. Lipid pump vs. biological pump Through the seasonal vertical migration of zooplankton, the lipid pump creates a net difference between lipids transported to the deep during the fall (when zooplankton enter diapause) and what returns to the surface during the spring, resulting in the sequestration of lipid carbon at depth. The biological pump encompasses many processes that sequester the CO2 taken up in the surface ocean by phytoplankton through the export of POC to the deep ocean. Although zooplankton are known to play important roles in the biological pump through grazing and the repackaging of particulate matter, the active transport of seasonally-migrating zooplankton through the lipid pump has not been incorporated into global estimates of the biological pump. Comparison between net fluxes The biological pump transports 1–4 g C m−2 y−1 of POC below the thermocline annually. The export flux of POC in the temperate North Atlantic out of the surface waters was found to be 29 ± 10 g C m−2 y−1. 
However, studies have shown that processes such as consumption and remineralization contribute to a significant amount of this POC being attenuated as it sinks below the thermocline (near overwintering depths of ~1000 m). Furthermore, the remaining quantity of carbon in the North Atlantic from the export of POC below the thermocline has been calculated (2–8 g C m−2 y−1) to be comparable to the seasonal migration of C. finmarchicus in the North Atlantic (1–4 g C m−2 y−1) through the lipid pump. Therefore, the lipid pump may contribute 50–100% of C sequestration to the biological pump as net transport that has not been included in its current estimates. Lipid shunt Although the sequestration of marine carbon is a primary outcome of the biological pump, the recycling of nutrients such as N and P in organic matter plays a comparatively important role in maintaining the processes that facilitate this carbon export without removing nutrients for primary production. One key difference between the lipid pump and biological pump is that the ratios of nutrients such as nitrogen and phosphorus relative to carbon are minimal or zero in lipids, whereas the exported POC in the biological pump retains the standard Redfield ratios found throughout the world's oceans. This is primarily due to zooplankton in their copepodite stages releasing an excessive amount of nitrogen and phosphorus from excretion back into the surface. Thus, the production, transport, and metabolism of lipid carbon during overwintering do not contribute to a net consumption or removal of essential nutrients in the surface ocean, which is unlike many components of the biological pump. This process creates what is known as a "lipid shunt" in the biological pump, as the carbon sequestration of the lipid pump is decoupled from nutrient removal. Overwintering diapause vs. Diel vertical migration Diel Vertical Migration (DVM) is a well-studied phenomenon, widespread in the temperate and tropical oceans, and previously understood to be the most significant contributor to the active export of carbon as a result of zooplankton migration. The most common form is the nocturnal DVM, a night-time ascent to the upper pelagic and a daytime descent to deeper waters. A relatively unique variation of this form is the twilight DVM, where the ascent happens during dusk and the descent around midnight (i.e., midnight sinking). While DVM occurs on a daily basis, overwintering diapause (hibernation) occurs on an annual time-scale and enables zooplankton species, particularly Calanus spp., to adapt to seasonal variation in primary productivity in specific ocean basins. Individuals enter diapause and migrate deeper in the water column to overwinter below the thermocline. During diapause they survive on stored lipid reserves that are generated at the end of their time at the surface when nutrients are widely available. The seasonal end of diapause must be closely timed with the beginning of the spring phytoplankton bloom to enable acquisition of food to permit proper egg development and hatching. If the timing is disrupted, eggs that are hatched during diapause will have limited growth time and a lower likelihood of surviving overwintering, as thus is an example of match-mismatch hypothesis. Calanus spp. in ocean basins with shorter growth seasons will be increasingly sensitive to the timing of the spring bloom, such as polar regions. 
In the Arctic and Antarctic environments, the productive season is typically short and certain copepod species vertically migrate during overwintering diapause. During the productive seasons of spring and summer, younger developmental stages of these copepods usually thrive in food-rich, warmer, near-surface waters, and they rapidly develop and grow. During late summer and fall, grazing pressure, nutrient limitation, and annual variations of irradiance combine to limit the pelagic primary production. Consequently, the food supply fades toward fall, and overwintering diapause initiates. These copepods migrate to deeper waters with accumulated lipid reserves for overwintering. The overwintering diapause stages remain in deeper waters with limited physical and physiological activity and ascend back to the near-surface waters and complete the life cycle at the onset of the following productive season. Calanus spp. Ecology Calanus spp. are abundantly distributed copepods, particularly in the polar and temperate North Atlantic. Studies attempting to quantify the lipid pump have primarily focused on C. finmarchicus and its cousin species Calanus glacialis, Calanus helgolandicus, and C. hyperboreus. C. hyperboreus, the largest of these species, uses an overwintering diapause (hibernation) strategy, and its life history will be described in more detail as a representative Calanus spp. With a life cycle of two to six years on average, each C. hyperboreus individual can go through multiple overwintering periods. Positively buoyant eggs are spawned by females at depth and rise to the surface. Larvae (nauplii) first develop from these eggs, and complete their maturation into an early juvenile (copepodite) within one season, after which they undergo their first overwintering. Copepodites have three stages before maturing to the adult stage. While female Calanus spp. are generally expected to experience mortality after spawning, some may return to the surface to build up lipid stores before entering another overwintering and reproductive cycle. Lipid accumulation and metabolism Lipids are stored by all copepodite and adult Calanus spp. in an oil sac, which can account for up to 60% of an individual's dry weight. Calanus spp. accumulate these lipids while feeding closer to the ocean surface during the spring and summer months, aligning with phytoplankton blooms. Early in the growing season, Calanus spp. bioenergetics are allocated to reproduction, feeding and growth, but eventually shift to the production of lipids to provide energy during diapause. These lipids take the form of wax esters, energy-rich compounds like omega-3 fatty acids, and long-chain carbon molecules. At the end of the feeding/growing season, Calanus spp. migrate downward to depths varying from 600 to 3000 m, with the requirement that Calanus spp. settle below the thermocline to prevent premature return to the surface waters. Stored lipids are metabolized at these depths, accounting for approximately 25% of the basal metabolic rate. A 6–8 month-long overwintering period can drain a substantial fraction (44–93%) of the stored lipids despite the decreased metabolism. Physical characteristics The physical characteristics of Calanus spp. (i.e., dry weight, prosome length, lipid content, and carbon content) are always changing, varying between different regions, temporally, and across life stages. Based on isomorphism, or the similarity in form or structure of organisms, Calanus spp. 
may deviate in size but their basic physical structure remains constant across different overwintering stages and between different copepod species. The only significant taxonomic difference is the number of segments on the tail across developmental stage CIII and older (CIV, CV). With an outcome of isomorphism, dry weight (d [mg]) and prosome length (p [mm]) can be scaled as they are related as d = cp3, where c is a coefficient. Observations identify the relationship between dry weight and prosome length with a coefficient between 3.3 and 3.5 for C. hyperboreus. Although this relationship is not supported extensively by empirical evidence, it has been used for model frameworks to observe Calanus spp. carbon content. Relationships between NAO and Calanus spp. populations In the North Atlantic and Nordic Seas, a primary long-term forcing that affects Calanus spp. and its habitat is the North Atlantic Oscillation (NAO) index, defined as the normalized difference in sea surface pressure between the Azores High and the Icelandic Low. While high NAO index values indicate a net flow of Atlantic water to the northeast and into the Norwegian Sea, low NAO index values indicate a reduced Atlantic water inflow into the Nordic Seas. In the Northwestern Atlantic, positive trends in the abundances of Calanus spp. correspond with higher sea surface temperatures and positive NAO forcing with a lag of one or two years. However, the influence of the NAO in explaining Calanus spp. abundance was substantially diminished when temporal autocorrelation and detrending analyses were involved. Regional differences Certain aspects of the lipid pump such as the diapause depth and duration of zooplankton can vary among regions that have different overwintering temperatures and resident community characteristics. There are other subarctic regions that have shown similar carbon export rates to those found in the temperate North Atlantic (1–4 g C m−2 y−1) via seasonally-migrating zooplankton. For instance, C. glacialis and C. hyperboreus are the most dominant zooplankton species found in the Arctic Ocean at similar latitudes, and they contribute to a 3.1 g C m−2 y−1 flux of lipid carbon below 100 m during overwintering. A slightly higher maximum flux in lipid carbon (2–4.3 g C m−2 y−1) below 150 m was observed in the subarctic North Pacific and was primarily attributed to the Neocalanus genus of copepods. In these areas, N. flemingeri, N. cristatus, and N. plumchrus are the primary contributors to the lipid pump, whereas, the subantarctic Southern Ocean consists primarily of N. tonsus contributing to a lipid carbon flux of 1.7–9.3 g C m−2 y−1 out of the euphotic zone. The rates or magnitude of these processes may slightly vary due to characteristic differences between these subpolar regions, which have largely been under-studied relative to their contributions to the lipid pump. Ecological impacts Role in the food web The zooplanktonic Calanus spp. are not only important for moving carbon out of the photic zone and into the deep ocean, but these lipid-rich organisms play a critical role in the success of many marine species that depend on them as food. They comprise the majority of diets for fishes, seabirds and even large mammals such as whales. Copepods can account for about 70–90% of total zooplankton biomass, depending on region. Additionally, their eggs are a main source of food for commercially important fish stocks. 
The copepod eggs are buoyant and will rise to the sea surface, but are susceptible to predation by fish and other organisms. Copepods also provide the benthic community with food via sinking fecal pellets, meaning that as fish and smaller invertebrates excrete waste, that waste falls to the sea floor and organisms on the sea floor compete for the pellets as food. The role of copepods in the food web is crucially intertwined amongst other organisms. Copepod abundance, specifically the C. finmarchicus, has a direct impact on the endangered right whales of the North Atlantic. North Atlantic right whales rely on copepods as their primary prey in order to meet their nutritional needs. To meet the right whale's energetic requirements they need about 500 kg of C. finmarchicus a day. Each copepod measures about 2–4 millimetres long which is about the size of a grain of rice and they weigh, on average, between 1.0274 and 1.0452 g cm−3. A loss in C. finmarchicus has the potential to affect the right whale's migration, reproduction, and/or ability to successfully nurse their young (only for lactating females). Economic impacts Many commercial and subsistence fisheries in arctic and subarctic regions fish for cod, salmon, crab, groundfish, and pollock depend on this energy-rich zooplankton as food. In 2017, the highest value of commercial fish species for the US was salmon ($688 million), crabs ($610 million), shrimp ($531 million), scallops ($512 million), and pollock ($413 million). Pollock alone is the largest fishery in the US based on volume, but is also the second largest fishery in the world supporting 2–5% of the global fishery production. Not only do millions of people rely on fish for subsistence, but recreational fishing is one of the most popular activities in the US. Recreational fishing contributes about $202 million to the US economy. Changes in the abundance and distribution of copepods could drastically affect the economic livelihoods of millions of people connected to the fishing industry or who rely on fishing as a primary source of protein. Climate change impacts Anthropogenic climate change is estimated to impact the marine environment in a variety of ways. In the arctic and subarctic environments where a vast majority of Calanus spp. reside, melting ice caps and timing of the spring phytoplankton bloom could have implications for copepod density, distribution and timing of return from overwintering. A phytoplankton bloom occurs in the spring in arctic and subarctic environments when sea ice melts, allowing an increase in light to penetrate deeper into the water column, thus supporting photosynthesis. An input of freshwater from the sea ice melting increases the stratification of the ocean in the summertime. Stratification leaves nutrient-rich water on the bottom and nutrient-poor water on the top due to an increase in freshwater from the ice. However, in the wintertime, this region of the world experiences an increase in storms that bring nutrient-rich waters into the more nutrient-poor surface waters. Climate change alters the timing of the spring bloom by promoting an earlier or later ice melt. Warmer waters could lead to weaker stratification, meaning the density differences between the first and second layer of the ocean are increasing due to an increased flux of freshwater from ice melt. 
Typically, the amount of total annual primary productivity in the Bering Sea associated with a spring bloom is approximately 10–65%, however warmer waters could impact the amount of primary production occurring. Reproduction and changes to the food web For the C. finmarchicus species specifically, the start of reproduction is linked to the start of the spring bloom. Thus, changes in the timing of the spring bloom would directly influence the reproductive capabilities of C. finmarchicus and alter the food chain from the bottom-up. However, the food chain could also be altered from the top-down through habitat disturbance and the removal of marine mammals and fish. Large-scale commercial fisheries exert top-down effects by lowering the abundance of larger species and increasing the amount of lipid-rich copepods and even paving way for other species to consume them. Under warming ocean conditions, prey switching is to be expected. Egg production and hatching success may also be affected with increasing sea surface temperatures and ocean acidification. Physical ocean Other climate change factors to consider that might influence these lipid-rich copepods are shifts of current systems, storm activity and sea-ice cover. In some regions of the arctic, specifically the Bering Sea, studies have forecasted a decrease in storms due to warming. This impacts the mixing of the water column that brings nutrient-rich water upwards. Copepods consume primary producers that require nutrients to survive. Limiting the amount of nutrients in the water column could decrease the abundance of these primary producers and subsequently reduce Calanus spp. abundance as well. Changes in the water masses and temperature could have a direct effect on the zooplankton's vertical migration. The distribution of the zooplankton in the water column is controlled by the currents. The Calanus spp. use the water column for their vertical migration. Changes to the currents while Calanus spp. are in diapause could result in a reduction in the abundance of the copepods in the Norwegian Sea. Since the lipid pump is controlled through the movement of copepods, particularly Calanus spp., impacts of climate change that affect copepod abundance or seasonal migration will directly impact the lipid pump and carbon export to the deep ocean. Climate modeling A study that utilized climate modeling to simulate the effects of predicted increases in water temperature and salinity as a result of climate change on C. finmarchicus of the eastern shelf of North America forecasts lower abundance of copepods. The decrease in favorable environmental conditions is expected to decrease the size and density of C. finmarchius, and will likely have negative effects on whales and other components of the food web that are inextricably tied to copepods. The impact of diapause and variation in seasonal productivity was not explicitly included as increasing model complexity and more accurate accounting for Calanus spp. metabolic processes during diapause is required. The importance of diapause timing with spring plankton blooms is well-established, suggesting that there is potential for additional population impacts as a result of climate change, which would further reverberate throughout the ecosystem. Key implications The 2015 paper by Jónasdóttir et al., marked the first comprehensive accounting for the amount of carbon sequestration resulting from the movement of lipids by vertically migrating zooplankton during their overwintering diapuse. 
Although only elucidating the impact of one particular species, in this case, C. finmarchicus, both the magnitude of carbon flux and widespread global distribution of Calanus spp. suggest the possible importance of the lipid pump in global carbon cycling by contributing an estimated 50–100% of carbon sequestration to the biological pump. Subsequent research has underscored this significance as estimates that attempt to more accurately account for the mortality and respiration rates of other overwintering Calanus spp. have suggested similar, although regionally variable, magnitudes of carbon export from the lipid pump. Overwintering diapause is an ecological strategy to enable Calanus spp. to adapt to the seasonal variability in food availability in ocean basins. Changes in the timing or length of high food periods are likely to negatively impact the distribution and abundance of Calanus spp. Changes in ocean temperature and salinity due to anthropogenic climate change are also predicted to decrease concentrations of Calanus spp. in some ocean basins. In addition to potential ecosystem impacts due to the large number of species that rely on copepods as a major constituent of their diets, there may be implications for oceanic carbon sequestration from consequent changes in the magnitude of the lipid pump due to overwintering zooplankton. Future directions The global estimates of the biological pump have yet to include the elements of the lipid pump which could represent 50–100% of C export that is not accounted for. This is likely due to many observational challenges pertaining to the analysis of these seasonal migrations. As described above, more accurate ways to measure both mortality and respiration rates of overwintering zooplankton are being developed in recent work, as these are the two factors that primarily control the amount of lipid carbon that is sequestered at depth. For the zooplankton that survive overwintering, their upward migration during the spring returns a fraction of the lipid reserves to the surface as nonrespired carbon, with losses attributed to predation by deep-dwelling predators, disease, starvation, and other sources of mortality generally not accounted for. Similar to the lysis shunt, the dynamics of the lipid shunt cause uncertainty in observational methods of the lipid pump when comparing its efficiency to that of the biological pump. Additionally, large zooplankton usually avoid mooring instruments such as sediment traps during seasonal migrations, which further explains why the lipid pump has yet to become incorporated into estimates of the global carbon export flux. These observations can be challenging to make given the remote locations they are conducted in and the harsh, deep sampling conditions, but these adaptations in the data collection are needed to better integrate global estimates of the carbon export flux provided by the lipid pump. See also Biological pump Oceanic carbon cycle References Chemical oceanography Carbon dioxide removal
Lipid pump
Chemistry
4,929
21,053,500
https://en.wikipedia.org/wiki/Plague%20vaccine
Plague vaccine is a vaccine used against Yersinia pestis to prevent the plague. Inactivated bacterial vaccines have been used since 1890 but are less effective against the pneumonic plague, so live, attenuated vaccines and recombinant protein vaccines have been developed to prevent the disease. Plague immunization The first plague vaccine was developed by bacteriologist Waldemar Haffkine in 1897. He tested the vaccine on himself to prove that the vaccine was safe. Later, Haffkine conducted a massive inoculation program in British India, and it is estimated that 26 million doses of Haffkine's anti-plague vaccine were sent out from Bombay between 1897 and 1925, reducing the plague mortality by 50–85%. A plague vaccine is used to induce active specific immunity in an organism susceptible to plague by administering an antigenic material (a vaccine) via a variety of routes to people at risk of contracting any clinical form of plague. This method is known as plague immunization. There is strong evidence for the efficacy of administration of some plague vaccines in preventing or ameliorating the effects of a variety of clinical forms of infection by Yersinia pestis. Plague immunization also encompasses inducing a state of passive specific immunity to plague in a susceptible organism after administration of a plague serum or plague immunoglobulin in people with an immediate risk of developing the disease. A systematic review by the Cochrane Collaboration found no studies of sufficient quality to be included in the review, and was thus unable to make any statement on the efficacy of modern vaccines. References Vaccines Live vaccines Vaccine
Plague vaccine
Biology
337
77,921,851
https://en.wikipedia.org/wiki/Disordered%20local%20moment%20picture
The disordered local moment (DLM) picture is a method, in condensed matter physics, for describing the electronic structure of a magnetic material at a finite temperature, where a probability distribution of sizes and orientations of atomic magnetic moments must be considered. It was pioneered by, among others, Balázs Győrffy, Julie Staunton, Malcolm Stocks, and co-workers. The underlying assumption of the DLM picture is similar to the Born-Oppenheimer approximation for the separate solution of the ionic and electronic problems in a material. In the disordered local moment picture, it is assumed that 'local' magnetic moments which form around atoms are sufficiently long-lived that the electronic problem can be solved for an assumed, fixed distribution of magnetic moments. Many such distributions can then be averaged over, appropriately weighted by their probabilities, and a description of the paramagnetic state obtained. (A paramagnetic state is one where the magnetic order parameter, the average magnetization, is equal to the zero vector.) The picture is typically based on density functional theory (DFT) calculations of the electronic structure of materials. Most frequently, DLM calculations employ either the Korringa–Kohn–Rostoker (KKR) (sometimes referred to as multiple scattering theory) or linearised muffin-tin orbital (LMTO) formulations of DFT, where the coherent potential approximation (CPA) can be used to average over multiple orientations of magnetic moment. However, the picture has also been applied in the context of supercells containing appropriate distributions of magnetic moment orientations. Though originally developed as a means by which to describe the electronic structure of a magnetic material above its magnetic critical temperature (Curie temperature), it has since been applied in a number of other contexts. This includes precise calculation of Curie temperatures and magnetic correlation functions for transition metals, rare-earth elements, and transition metal oxides, as well as a description of the temperature dependence of magnetocrystalline anisotropy. The approach has found particular success in describing the temperature dependence of magnetic quantities of interest in rare earth-transition metal permanent magnets such as SmCo5 and Nd2Fe14B, which are of interest for a range of energy generation and conversion technologies. References Condensed matter physics
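A full DLM calculation requires a DFT code with KKR-CPA or supercell machinery, but the central idea of averaging over orientational distributions of local moments can be illustrated with a toy classical-spin model. The sketch below samples fully disordered (paramagnetic) moment configurations, checks that the order parameter, the average magnetization, is close to the zero vector, and computes a configuration-averaged Heisenberg-like energy. The lattice, coupling constant, and sample sizes are invented for illustration and are not part of the DLM formalism itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n: int) -> np.ndarray:
    """Sample n moment orientations uniformly on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def heisenberg_energy(moments: np.ndarray, j_coupling: float = 1.0) -> float:
    """Toy nearest-neighbour Heisenberg energy on a 1D ring: E = -J * sum_i m_i . m_(i+1)."""
    return -j_coupling * np.sum(np.einsum('ij,ij->i', moments, np.roll(moments, -1, axis=0)))

n_sites, n_configs = 500, 200
energies, magnetizations = [], []
for _ in range(n_configs):
    m = random_unit_vectors(n_sites)       # one disordered local moment configuration
    energies.append(heisenberg_energy(m))
    magnetizations.append(m.mean(axis=0))  # per-configuration average moment

print("average energy per site:", np.mean(energies) / n_sites)                # ~0 for uncorrelated moments
print("|order parameter|:", np.linalg.norm(np.mean(magnetizations, axis=0)))  # ~0: paramagnetic state
```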
Disordered local moment picture
Physics,Chemistry,Materials_science,Engineering
473
57,761,321
https://en.wikipedia.org/wiki/Rimostil
Rimostil (developmental code name P-081) is a dietary supplement and extract of isoflavones from red clover which was under development by Kazia Therapeutics (formerly Novogen) for the prevention of postmenopausal osteoporosis and cardiovascular disease and for the treatment of menopausal symptoms and hyperlipidemia but was never approved for medical use. It is enriched with isoflavone phytoestrogens such as formononetin, biochanin A, daidzein, and genistein, and is proposed to act as a selective estrogen receptor modulator, with both estrogenic and antiestrogenic effects in different tissues. The extract reached phase II clinical trials for cardiovascular disorders, hyperlipidemia, and postmenopausal osteoporosis prior to the discontinuation of its development in 2007. See also Femarelle Menerba References External links P-081 (Rimostil) - AdisInsight Abandoned drugs Botanical drugs Dietary supplements Herbalism Isoflavones Phytoestrogens Selective estrogen receptor modulators
Rimostil
Chemistry
234
8,407,149
https://en.wikipedia.org/wiki/Mutant
In biology, and especially in genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, which is generally an alteration of the DNA sequence of the genome or chromosome of an organism. It is a characteristic that would not be observed naturally in a specimen. The term mutant is also applied to a virus with an alteration in its nucleotide sequence whose genome is in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution. The study of mutants is an integral part of biology; by understanding the effect that a mutation in a gene has, it is possible to establish the normal function of that gene. Mutants arise by mutation Mutants arise by mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand. A DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation, by contrast, is a change in the sequence of standard base pairs. Errors of repair occur when repair processes inaccurately replace a damaged DNA sequence. The DNA repair process microhomology-mediated end joining is particularly error-prone. Etymology Although not all mutations have a noticeable phenotypic effect, the common usage of the word "mutant" is generally a pejorative term, only used for genetically or phenotypically noticeable mutations. Previously, people used the word "sport" (related to spurt) to refer to abnormal specimens. The scientific usage is broader, referring to any organism differing from the wild type. The word finds its origin in the Latin term mūtant- (stem of mūtāns), which means "to change". Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a developmental abnormality, the DNA of the organism is unchanged and the abnormality cannot be passed on to progeny. Conjoined twins are the result of developmental abnormalities. Chemicals that cause developmental abnormalities are called teratogens; these may also cause mutations, but their effect on development is not related to mutations. Chemicals that induce mutations are called mutagens. Most mutagens are also considered to be carcinogens. Epigenetic alterations Mutations are distinctly different from epigenetic alterations, although they share some common features. Both arise as a chromosomal alteration that can be replicated and passed on to subsequent cell generations. Both, when occurring within a gene, may silence expression of the gene. Whereas mutant cell lineages arise as a change in the sequence of standard bases, epigenetically altered cell lineages retain the sequence of standard bases but have gene sequences with changed levels of expression that can be passed down to subsequent cell generations. Epigenetic alterations include methylation of CpG islands of a gene promoter as well as specific chromatin histone modifications. Faulty repair of chromosomes at sites of DNA damage can give rise both to mutant cell lineages and/or epigenetically altered cell lineages. See also Evolution Genetic engineering Genetically modified organism Mutants in fiction Mutationism Synthetic lethality Synthetic viability References External links Antennapedia mutant Evolutionary biology Classical genetics Mutation
Mutant
Biology
674
2,781,278
https://en.wikipedia.org/wiki/Adhil
The name Adhil has been applied to a number of stars, especially in the constellation of Andromeda. It is the name approved by the International Astronomical Union for Xi Andromedae. Origin Adhil was originally applied to the description of Ptolemy's 21st and 22nd stars of Andromeda in the Latin-translated versions of his star catalogue in the Almagest. Etymology Adhil is derived from the Arabic phrase al-dhayl [að-ðáil], meaning "the train [of a garment]" (literally "the tail"). Identification There are two kinds of identification of Ptolemy's 21st and 22nd stars of Andromeda. Renaissance times Bayer gave Adhil to 60/b And in his prominent work Uranometria in 1603, and Bode followed Bayer in his great star atlas Uranographia in 1801. Recent times Adhil is now applied to Xi Andromedae, following Manitius' identification of Ptolemy's 21st star of Andromeda. See also Xi Andromedae (recent Adhil) 60 Andromedae (Bayer and Bode's Adhil) 49 Andromedae (one of the Adhil stars in the Almagest) Chi Andromedae (one of the Adhil stars in the Almagest) Syrma (Iota Virginis) Notes Andromeda (constellation)
Adhil
Astronomy
279
42,806,211
https://en.wikipedia.org/wiki/Conway%20criterion
In the mathematical theory of tessellations, the Conway criterion, named for the English mathematician John Horton Conway, is a sufficient rule for when a prototile will tile the plane. It consists of the following requirements: The tile must be a closed topological disk with six consecutive points A, B, C, D, E, and F on the boundary such that: the boundary part from A to B is congruent to the boundary part from E to D by a translation T where T(A) = E and T(B) = D. each of the boundary parts BC, CD, EF, and FA is centrosymmetric—that is, each one is congruent to itself when rotated by 180-degrees around its midpoint. some of the six points may coincide but at least three of them must be distinct. Any prototile satisfying Conway's criterion admits a periodic tiling of the plane—and does so using only 180-degree rotations. The Conway criterion is a sufficient condition to prove that a prototile tiles the plane but not a necessary one. There are tiles that fail the criterion and still tile the plane. Every Conway tile is foldable into either an isotetrahedron or a rectangle dihedron and conversely, every net of an isotetrahedron or rectangle dihedron is a Conway tile. History The Conway criterion applies to any shape that is a closed disk—if the boundary of such a shape satisfies the criterion, then it will tile the plane. Although the graphic artist M.C. Escher never articulated the criterion, he discovered it in the mid 1920s. One of his earliest tessellations, later numbered 1 by him, illustrates his understanding of the conditions in the criterion. Six of his earliest tessellations all satisfy the criterion. In 1963 the German mathematician Heinrich Heesch described the five types of tiles that satisfy the criterion. He shows each type with notation that identifies the edges of a tile as one travels around the boundary: CCC, CCCC, TCTC, TCTCC, TCCTCC, where C means a centrosymmetric edge, and T means a translated edge. Conway was likely inspired by Martin Gardner's July 1975 column in Scientific American that discussed which convex polygons can tile the plane. In August 1975, Gardner revealed that Conway had discovered his criterion while trying to find an efficient way to determine which of the 108 heptominoes tile the plane. Examples In its simplest form, the criterion simply states that any hexagon with a pair of opposite sides that are parallel and congruent will tessellate the plane. In Gardner's article, this is called a type 1 hexagon. This is also true of parallelograms. But the translations that match the opposite edges of these tiles are the composition of two 180° rotations—about the midpoints of two adjacent edges in the case of a hexagonal parallelogon, and about the midpoint of an edge and one of its vertices in the case of a parallelogram. When a tile that satisfies the Conway Criterion is rotated 180° about the midpoint of a centrosymmetric edge, it creates either a generalized parallelogram or a generalized hexagonal parallelogon (these have opposite edges congruent and parallel), so the doubled tile can tile the plane by translations. The translations are the composition of 180° rotations just as in the case of the straight-edge hexagonal parallelogon or parallelograms. The Conway criterion is surprisingly powerful—especially when applied to polyforms. With the exception of four heptominoes, all polyominoes up through order 7 either satisfy the Conway criterion or two copies can form a patch which satisfies the criterion. 
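The two boundary conditions are straightforward to check numerically for a tile whose boundary parts are given as polylines. The sketch below encodes a hexagonal tile with the six marked points A–F, verifies that the part from A to B maps onto the part from E to D under the translation taking A to E, and that each of the remaining four parts is centrosymmetric about its midpoint. The specific tile and the function names are illustrative; the code checks the stated conditions but is not a general tiling algorithm.

```python
import numpy as np

def translated_copy(part_ab, part_ed):
    """Check that the A->B boundary part maps onto the E->D part under T(x) = x + (E - A)."""
    part_ab, part_ed = np.asarray(part_ab, float), np.asarray(part_ed, float)
    if part_ab.shape != part_ed.shape:
        return False
    t = part_ed[0] - part_ab[0]                  # translation taking A to E (and B to D)
    return np.allclose(part_ab + t, part_ed)

def centrosymmetric(part):
    """Check that a polyline is unchanged by a 180-degree rotation about its midpoint."""
    part = np.asarray(part, float)
    midpoint = (part[0] + part[-1]) / 2
    return np.allclose(part + part[::-1], 2 * midpoint)

# An illustrative hexagonal tile: straight edges except BC, which is bent but centrosymmetric.
A, B, C, D, E, F = (0, 0), (2, 0), (3, 1), (2, 2), (0, 2), (-1, 1)
AB = [A, B]
BC = [B, (2.2, 0.6), (2.8, 0.4), C]   # bent edge; symmetric point pairs sum to B + C
CD, ED, EF, FA = [C, D], [E, D], [E, F], [F, A]

satisfies_conway = (
    translated_copy(AB, ED)
    and all(centrosymmetric(p) for p in (BC, CD, EF, FA))
)
print(satisfies_conway)  # True: this tile admits a periodic tiling using only translations and 180-degree rotations
```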
References External links Conway’s Magical Pen An online app where you can create your own original Conway criterion tiles and their tessellations. Tessellation John Horton Conway
Conway criterion
Physics,Mathematics
812
880,860
https://en.wikipedia.org/wiki/Content%20delivery%20network
A content delivery network or content distribution network (CDN) is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance ("speed") by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social media sites. CDNs are a layer in the internet ecosystem. Content owners such as media companies and e-commerce vendors pay CDN operators to deliver their content to their end users. In turn, a CDN pays Internet service providers (ISPs), carriers, and network operators for hosting its servers in their data centers. CDN is an umbrella term spanning different types of content delivery services: video streaming, software downloads, web and mobile content acceleration, licensed/managed CDN, transparent caching, and services to measure CDN performance, load balancing, Multi CDN switching and analytics and cloud intelligence. CDN vendors may cross over into other industries like security, DDoS protection and web application firewalls (WAF), and WAN optimization. Notable content delivery service providers include Akamai Technologies, Edgio, Cloudflare, Amazon CloudFront, Fastly, and Google Cloud CDN. Technology CDN nodes are usually deployed in multiple locations, often over multiple Internet backbones. Benefits include reducing bandwidth costs, improving page load times, and increasing the global availability of content. The number of nodes and servers making up a CDN varies, depending on the architecture, some reaching thousands of nodes with tens of thousands of servers on many remote points of presence (PoPs). Others build a global network and have a small number of geographical PoPs. Requests for content are typically algorithmically directed to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen. This may be measured by choosing locations that are the fewest hops, the lowest number of network seconds away from the requesting client, or the highest availability in terms of server performance (both current and historical), to optimize delivery across local networks. When optimizing for cost, locations that are the least expensive may be chosen instead. In an optimal scenario, these two goals tend to align, as edge servers that are close to the end user at the edge of the network may have an advantage in performance or cost. Most CDN providers will provide their services over a varying, defined, set of PoPs, depending on the coverage desired, such as United States, International or Global, Asia-Pacific, etc. These sets of PoPs can be called "edges", "edge nodes", "edge servers", or "edge networks" as they would be the closest edge of CDN assets to the end user. Security and privacy CDN providers profit either from direct fees paid by content providers using their network, or profit from the user analytics and tracking data collected as their scripts are being loaded onto customers' websites inside their browser origin. 
Security and privacy CDN providers profit either from direct fees paid by content providers using their network or from the user analytics and tracking data collected as their scripts are loaded onto customers' websites inside their browser origin. As such, these services have been pointed out as potential privacy intrusions used for behavioral targeting, and solutions are being created to restore single-origin serving and caching of resources. In particular, a website using a CDN may violate the EU's General Data Protection Regulation (GDPR). For example, in 2021 a German court forbade the use of a CDN on a university website, because this caused the transmission of the user's IP address to the CDN, which violated the GDPR. CDNs serving JavaScript have also been targeted as a way to inject malicious content into pages using them. The Subresource Integrity mechanism was created in response, to ensure that the page loads a script whose content is known and constrained to a hash referenced by the website author. Content networking techniques The Internet was designed according to the end-to-end principle. This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points: the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets. Content delivery networks augment the end-to-end transport network by distributing on it a variety of intelligent applications employing techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server-load balancing, request routing, and content services. Web caches store popular content on servers that have the greatest demand for the content requested. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache. Web caches are populated based on requests from users (pull caching) or based on preloaded content disseminated from content servers (push caching). Server-load balancing uses one or more techniques, including service-based (global load balancing) or hardware-based (i.e. layer 4–7 switches, also known as a web switch, content switch, or multilayer switch), to share traffic among a number of servers or web caches. Here the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantage of balancing load, increasing total capacity, improving scalability, and providing increased reliability by redistributing the load of a failed web server and providing server health checks. A content cluster or service node can be formed using a layer 4–7 switch to balance load across a number of servers or a number of web caches within the network. Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms are used to route the request. These include Global Server Load Balancing, DNS-based request routing, dynamic metafile generation, HTML rewriting, and anycasting. Proximity—choosing the closest service node—is estimated using a variety of techniques including reactive probing, proactive probing, and connection monitoring. CDNs use a variety of methods of content delivery including, but not limited to, manual asset copying, active web caches, and global hardware load balancers.
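As a concrete illustration of the Subresource Integrity mechanism mentioned above, the snippet below computes the integrity value a site author would embed alongside a script URL; the file name and CDN URL are hypothetical, and only the hash-generation step is shown.

```python
# Sketch: compute a Subresource Integrity (SRI) value for a script that a
# page will load from a CDN. The browser re-hashes the fetched file and
# refuses to execute it if the value does not match.

import base64
import hashlib

def sri_sha384(path):
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical local copy of the script that the CDN will serve.
print(sri_sha384("app.min.js"))
# The resulting value is embedded in the page, for example:
# <script src="https://cdn.example.com/app.min.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```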
Content service protocols Several protocol suites are designed to provide access to a wide variety of content services distributed throughout a content network. The Internet Content Adaptation Protocol (ICAP) was developed in the late 1990s to provide an open standard for connecting application servers. A more recently defined and robust solution is provided by the Open Pluggable Edge Services (OPES) protocol. This architecture defines OPES service applications that can reside on the OPES processor itself or be executed remotely on a Callout Server. Edge Side Includes or ESI is a small markup language for edge-level dynamic web content assembly. It is fairly common for websites to have dynamically generated content, whether because of changing content such as catalogs or forums, or because of personalization. This creates a problem for caching systems. To overcome this problem, a group of companies created ESI. Peer-to-peer CDNs In peer-to-peer (P2P) content-delivery networks, clients provide resources as well as use them. This means that, unlike client–server systems, content-centric networks can actually perform better as more users begin to access the content (especially with protocols such as BitTorrent that require users to share). This property is one of the major advantages of using P2P networks, because it makes the setup and running costs very small for the original content distributor. Private CDNs If content owners are not satisfied with the options or costs of a commercial CDN service, they can create their own CDN. This is called a private CDN. A private CDN consists of PoPs (points of presence) that serve content only for their owner. These PoPs can be caching servers, reverse proxies or application delivery controllers. It can be as simple as two caching servers, or large enough to serve petabytes of content. Large content distribution networks may even build and set up their own private network to distribute copies of content across cache locations. Such private networks are usually used in conjunction with public networks as a backup option in case the capacity of the private network is not enough or a failure leads to a capacity reduction. Since the same content has to be distributed across many locations, a variety of multicasting techniques may be used to reduce bandwidth consumption. Over private networks, it has also been proposed to select multicast trees according to network load conditions to more efficiently utilize available network capacity.
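Returning to the Edge Side Includes mechanism described earlier in this section, the sketch below shows the basic idea of edge-side assembly: a cached page template containing include tags is stitched together with separately obtained fragments. The tag pattern and the in-memory fragment store are simplifications rather than a real ESI processor.

```python
# Sketch of edge-side include (ESI) assembly: replace <esi:include src="..."/>
# tags in a cached template with fragment content fetched or cached separately.
# Real ESI processors implement much more of the ESI language.

import re

ESI_TAG = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')

def assemble(template, fetch_fragment):
    """fetch_fragment(src) returns the markup for one include."""
    return ESI_TAG.sub(lambda match: fetch_fragment(match.group(1)), template)

# Hypothetical cached template and fragment store.
template = '<html><body><h1>Catalog</h1><esi:include src="/fragments/cart"/></body></html>'
fragments = {"/fragments/cart": "<div>3 items in your cart</div>"}

print(assemble(template, lambda src: fragments.get(src, "")))
```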
CDN trends Emergence of telco CDNs The rapid growth of streaming video traffic requires large capital expenditures by broadband providers in order to meet this demand and retain subscribers by delivering a sufficiently good quality of experience. To address this, telecommunications service providers (TSPs) have begun to launch their own content delivery networks as a means to lessen the demands on the network backbone and reduce infrastructure investments. Telco CDN advantages Because they own the networks over which video content is transmitted, telco CDNs have advantages over traditional CDNs. They own the last mile and can deliver content closer to the end user because it can be cached deep in their networks. This deep caching minimizes the distance that video data travels over the general Internet and delivers it more quickly and reliably. Telco CDNs also have a built-in cost advantage, since traditional CDNs must lease bandwidth from them and build the operator's margin into their own cost model. In addition, by operating their own content delivery infrastructure, telco operators have better control over the utilization of their resources. Content management operations performed by CDNs are usually applied without (or with very limited) information about the network (e.g., topology, utilization, etc.) of the telco operators with which they interact or have business relationships. These operations pose a number of challenges for the telco operators, who have a limited sphere of action in the face of the impact of these operations on the utilization of their resources. In contrast, the deployment of telco CDNs allows operators to implement their own content management operations, which gives them better control over the utilization of their resources and, as such, lets them provide better quality of service and experience to their end users. Federated CDNs and Open Caching In June 2011, StreamingMedia.com reported that a group of TSPs had founded an Operator Carrier Exchange (OCX) to interconnect their networks and compete more directly against large traditional CDNs like Akamai and Limelight Networks, which have extensive PoPs worldwide. In this way, telcos are building a federated CDN offering, which is more attractive for a content provider wishing to deliver its content to the aggregated audience of this federation. Other telco CDN federations are likely to be created in the near future; they will grow through the enrollment of new telcos that bring network presence and their Internet subscriber bases to the existing federation. The Open Caching specification by the Streaming Video Alliance defines a set of APIs that allows a content provider to deliver its content using several CDNs in a consistent way, so that each CDN provider is seen in the same way through these APIs. Improving CDN performance using Extension Mechanisms for DNS Traditionally, CDNs have used the IP address of the client's recursive DNS resolver to geo-locate the client. While this is a sound approach in many situations, it leads to poor client performance if the client uses a non-local recursive DNS resolver that is far away. For instance, a CDN may route requests from a client in India to its edge server in Singapore if that client uses a public DNS resolver in Singapore, causing poor performance for that client. One study showed that in many countries where public DNS resolvers are in popular use, the median distance between the clients and their recursive DNS resolvers can be as high as a thousand miles. In August 2011, a global consortium of leading Internet service providers led by Google announced their official implementation of the edns-client-subnet IETF Internet Draft, which is intended to accurately localize DNS resolution responses. The initiative involves a limited number of leading DNS service providers, such as Google Public DNS, and CDN service providers as well. With the edns-client-subnet EDNS0 option, CDNs can now utilize the IP address of the requesting client's subnet when resolving DNS requests. This approach, called end-user mapping, has been adopted by CDNs, and it has been shown to drastically reduce round-trip latencies and improve performance for clients who use public DNS or other non-local resolvers. However, the use of EDNS0 also has drawbacks, as it decreases the effectiveness of caching resolutions at the recursive resolvers, increases the total DNS resolution traffic, and raises a privacy concern by exposing the client's subnet.
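The following is a minimal sketch of what an edns-client-subnet query looks like from the resolver side, assuming the third-party dnspython library; the domain, resolver address and subnet are placeholder values, and the authoritative (CDN) name server must support the option for it to have any effect.

```python
# Sketch: attach an EDNS0 Client Subnet (ECS) option to a DNS query so that
# the authoritative name server can localize its answer to the client's
# network rather than to the resolver's location. Requires dnspython.

import dns.edns
import dns.message
import dns.query

# Announce the client's /24 subnet (placeholder values throughout).
ecs = dns.edns.ECSOption("198.51.100.0", 24)

query = dns.message.make_query("cdn.example.com", "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, "8.8.8.8", timeout=3.0)

for rrset in response.answer:
    print(rrset)
```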
Virtual CDN (vCDN) Virtualization technologies are being used to deploy virtual CDNs (vCDNs) with the goal of reducing content provider costs while increasing elasticity and decreasing service delay. With vCDNs, it is possible to avoid traditional CDN limitations in performance, reliability and availability, since virtual caches are deployed dynamically (as virtual machines or containers) on physical servers distributed across the provider's geographical coverage. As the virtual cache placement is based on both the content type and server or end-user geographic location, vCDNs have a significant impact on service delivery and network congestion. Image Optimization and Delivery (Image CDNs) In 2017, Addy Osmani of Google started referring to software solutions that could integrate naturally with the Responsive Web Design paradigm (with particular reference to the <picture> element) as Image CDNs. The expression referred to the ability of a web architecture to serve multiple versions of the same image through HTTP, depending on the properties of the browser requesting it, as determined by either the browser or the server-side logic. The purpose of Image CDNs was, in Google's vision, to serve high-quality images (or, better, images perceived as high-quality by the human eye) while preserving download speed, thus contributing to a good user experience (UX). Arguably, the Image CDN term was originally a misnomer, as neither Cloudinary nor Imgix (the examples quoted by Google in the 2017 guide by Addy Osmani) were, at the time, CDNs in the classical sense of the term. Shortly afterwards, though, several companies offered solutions that allowed developers to serve different versions of their graphical assets according to several strategies. Many of these solutions were built on top of traditional CDNs, such as Akamai, CloudFront, Fastly, Edgecast and Cloudflare. At the same time, other solutions that already provided an image multi-serving service joined the Image CDN definition by either offering CDN functionality natively (ImageEngine) or integrating with one of the existing CDNs (Cloudinary/Akamai, Imgix/Fastly). While providing a universally agreed-on definition of what an Image CDN is may not be possible, generally speaking, an Image CDN supports three components: a content delivery network (CDN) for the fast serving of images; image manipulation and optimization, either on the fly through URL directives, in batch mode (through manual upload of images), or fully automatically (or a combination of these); and device detection (also known as device intelligence), i.e. the ability to determine the properties of the requesting browser and/or device through analysis of the User-Agent string, HTTP Accept headers, Client Hints or JavaScript. Notable content delivery service providers Free cdnjs Cloudflare JSDelivr Traditional commercial Akamai Technologies Amazon CloudFront Aryaka Ateme CDN Azure CDN CacheFly CDNetworks CenterServ ChinaCache Cloudflare Cotendo Edgio Fastly Gcore Google Cloud CDN HP Cloud Services Incapsula Instart Internap LeaseWeb Lumen Technologies MetaCDN NACEVI OnApp GoDaddy OVHcloud Rackspace Cloud Files Speedera Networks StreamZilla Wangsu Science & Technology Yottaa Telco CDNs AT&T Inc.
Bharti Airtel Bell Canada BT Group China Telecom Chunghwa Telecom Deutsche Telekom KT KPN Lumen Technologies Megafon NTT Pacnet PCCW Singtel SK Broadband Tata Communications Telecom Argentina Telefonica Telenor TeliaSonera Telin Telstra Telus TIM Türk Telekom Verizon Commercial using P2P for delivery BitTorrent, Inc. Internap Pando Networks Rawflow Multi MetaCDN Warpcache In-house Netflix See also Application software Bel Air Circuit Comparison of streaming media systems Comparison of video services Content delivery network interconnection Content delivery platform Data center Digital television Dynamic site acceleration Edge computing Internet radio Internet television InterPlanetary File System IPTV List of music streaming services List of streaming media systems Multicast NetMind Open Music Model Over-the-top content P2PTV Protection of Broadcasts and Broadcasting Organizations Treaty Push technology Software as a service Streaming media Webcast Web syndication Web television References Further reading Computer networks engineering Applications of distributed computing Cloud storage Digital television Distributed algorithms Distributed data storage Distributed data storage systems File sharing File sharing networks Film and video technology Internet broadcasting Internet radio Streaming television Multimedia Online content distribution Peer-to-peer computing Peercasting Streaming Streaming media systems Video hosting Video on demand
Content delivery network
Technology,Engineering
3,740
61,302,173
https://en.wikipedia.org/wiki/Brazilian%20Mathematical%20Society%20Award
The Brazilian Mathematical Society Award is the highest award for mathematical expository writing. It consists of a prize of R$20,000 and a certificate, and is awarded biennially by the Brazilian Mathematical Society in recognition of an outstanding expository article on a mathematical topic. Winners See also List of mathematics awards External links Brazilian Mathematical Society Award. Regulations Governing the Brazilian Mathematical Society Award. References Mathematics awards Awards established in 2013 Brazilian Mathematical Society
Brazilian Mathematical Society Award
Technology
88
50,855,966
https://en.wikipedia.org/wiki/Cortinarius%20morrisii
Cortinarius morrisii is a species of fungus in the family Cortinariaceae native to North America. It was described by Peck in 1905. References External links morrisii Fungi described in 1905 Fungi of North America Fungus species
Cortinarius morrisii
Biology
48
1,384,140
https://en.wikipedia.org/wiki/Passenger%20name%20record
A passenger name record (PNR) is a record in the database of a computer reservation system (CRS) that contains the itinerary for a passenger or a group of passengers travelling together. The concept of a PNR was first introduced by airlines that needed to exchange reservation information in case passengers required flights of multiple airlines to reach their destination ("interlining"). For this purpose, IATA and ATA have defined standards for interline messaging of PNR and other data through the "ATA/IATA Reservations Interline Message Procedures - Passenger" (AIRIMP). There is no general industry standard for the layout and content of a PNR. In practice, each CRS or hosting system has its own proprietary standards, although common industry needs, including the need to map PNR data easily to AIRIMP messages, have resulted in many general similarities in data content and format between all of the major systems. When a passenger books an itinerary, the travel agent or travel website user will create a PNR in the computer reservation system it uses. This is typically one of the large global distribution systems, such as Amadeus, Sabre, or Travelport (Apollo, Galileo, and Worldspan), but if the booking is made directly with an airline the PNR can also be in the database of the airline's CRS. This PNR is called the Master PNR for the passenger and the associated itinerary. The PNR is identified in the particular database by a record locator. When portions of the travel are not provided by the holder of the Master PNR, then copies of the PNR information are sent to the CRSs of the airlines that will be providing transportation. These CRSs will open copies of the original PNR in their own database to manage the portion of the itinerary for which they are responsible. Many airlines have their CRS hosted by one of the GDSs, which allows sharing of the PNR. The record locators of the copied PNRs are communicated back to the CRS that owns the Master PNR, so all records remain tied together. This allows exchanging updates of the PNR when the status of the trip changes in any of the CRSs. Although PNRs were originally introduced for air travel, airline systems can now also be used for bookings of hotels, car rental, airport transfers, and train trips. Parts From a technical point of view, there are five parts of a PNR required before the booking can be completed. They are: the name of the passenger; contact details for the travel agent or airline office; ticketing details, either a ticket number or a ticketing time limit; an itinerary of at least one segment, which must be the same for all passengers listed; and the name of the person providing the information or making the booking. Other information, such as a timestamp and the agency's pseudo-city code, will go into the booking automatically. All entered information will be retained in the "history" of the booking. Once the booking has been completed to this level, the CRS will issue a unique all-alphabetic or alphanumeric record locator, which will remain the same regardless of any further changes made (except if a multi-person PNR is split). Each airline will create its own booking record with a unique record locator, which, depending on the service level agreement between the CRS and the airline(s) involved, will be transmitted to the CRS and stored in the booking. If an airline uses the same CRS as the travel agency, the record locator will be the same for both.
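As an illustration of the data model just described, here is a minimal sketch of a PNR record holding the five mandatory elements plus a record locator; the field names and sample values are hypothetical and do not correspond to any particular CRS's internal format.

```python
# Minimal sketch of the mandatory elements of a PNR. Field names and sample
# values are invented for illustration; every CRS uses its own proprietary layout.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    flight: str        # marketing carrier and flight number
    origin: str
    destination: str
    date: str
    status: str        # e.g. "HK1", confirmed for one passenger

@dataclass
class PNR:
    record_locator: str          # unique locator issued by the CRS
    passenger_names: List[str]
    contact: str                 # travel agent or airline office contact
    ticketing: str               # ticket number or ticketing time limit
    itinerary: List[Segment]
    received_from: str           # person who made or requested the booking

pnr = PNR(
    record_locator="X4K9TQ",
    passenger_names=["DOE/JANE MS"],
    contact="EXAMPLE TRAVEL +1 555 0100",
    ticketing="TL 12MAY/1200",
    itinerary=[Segment("XX123", "JFK", "LHR", "2024-05-14", "HK1")],
    received_from="PASSENGER",
)
print(pnr.record_locator, pnr.passenger_names[0])
```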
A considerable amount of other information is often desired by both the airlines and the travel agent to ensure efficient travel. This includes: Fare details (although the amount may be suppressed, the type of fare will be shown) and any restrictions that may apply to the ticket. Tax amounts paid to the relevant authorities involved in the itinerary. The form of payment used, as this will usually restrict any refund if the ticket is not used. Further contact details, such as agency phone number and address, additional phone contact numbers at passenger address and intended destination. Age details if it is relevant to the travel, e.g., unaccompanied children or elderly passengers requiring assistance. Frequent flyer data. Seat allocation (or seat type request). Special Service Requests (SSR) such as meal requirements, wheelchair assistance, and other similar requests. "Optional Services Instruction" or "Other Service Information" (OSI) - information sent to a specific airline, or all airlines in the booking, which enables them to better provide a service. This information can include ticket numbers, local contact details (the phone section is limited to only a few entries), airline staff onload and upgrade priority codes, and other details such as a passenger's language or details of a disability. Vendor Remarks. VRs are comments made by the airline, typically generated automatically once the booking or request is completed. These will normally include the airline's own record locator, replies to special requests, and advice on ticketing time limits. While normally sent by the airlines to an agent, it is also possible for an agent to send a VR to an airline. Many governments now also require the airline to provide further information intended to assist investigators in tracing criminals or terrorists. This includes: the passenger's gender; passport details (nationality, number, and date of expiry); date and place of birth; redress number (if previously given to the passenger by the US authorities); and all available payment/billing information. The components of a PNR are identified internally in a CRS by a one-character code. This code is often used when creating a PNR via direct entry into a terminal window (as opposed to using a graphical interface). The following codes are standard across all CRSs based on the original PARS system: "-" for the name; "0" for segment (flight) information, including the number of seats booked, status code (for example, HK1 - confirmed for one passenger) and fare class; "1" for related PNR record ids; "2" for PNR owner identification (airline, CRS user name and role); "3" for other-airline Other Service Information (OSI) or Special Service Request (SSR) items; "4" for host-airline OSI or SSR items; "5" for remarks; "6" for received from; "7" for ticketing information (including the ticket number); "8" for the ticketing time limit; and "9" for contact phone numbers. Storage The majority of airlines and travel agencies choose to host their PNR databases with a computer reservations system (CRS) or global distribution system (GDS) company such as Sabre, Galileo, Worldspan and Amadeus.
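A small lookup table makes the element codes listed above easier to use when reading a raw PNR display; the dictionary below simply restates those codes and is not tied to any particular CRS's output format.

```python
# PARS-style PNR element codes, as listed above, keyed for quick lookup.

PNR_ELEMENT_CODES = {
    "-": "Name",
    "0": "Segment (flight) information",
    "1": "Related PNR record ids",
    "2": "PNR owner identification",
    "3": "Other airline OSI or SSR items",
    "4": "Host airline OSI or SSR items",
    "5": "Remarks",
    "6": "Received from",
    "7": "Ticketing information",
    "8": "Ticketing time limit",
    "9": "Contact phone numbers",
}

def describe(code):
    return PNR_ELEMENT_CODES.get(code, "Unknown element code")

print(describe("7"))  # Ticketing information
```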
Privacy concerns Some privacy organizations are concerned about the amount of personal data that a PNR might contain. While the minimum data for completing a booking is quite small, a PNR will typically contain much more information of a sensitive nature. This will include the passenger's full name, date of birth, home and work address, telephone number, e-mail address, credit card details, IP address if booked online, as well as the names and personal information of emergency contacts. Designed to "facilitate easy global sharing of PNR data," the CRS-GDS companies "function both as data warehouses and data aggregators, and have a relationship to travel data analogous to that of credit bureaus to financial data." A canceled or completed trip does not erase the record, since "copies of the PNRs are 'purged' from live to archival storage systems, and can be retained indefinitely by CRSs, airlines, and travel agencies." Further, CRS-GDS companies maintain web sites that allow almost unrestricted access to PNR data – often, the information is accessible by just the reservation number printed on the ticket. Additionally, "[t]hrough billing, meeting, and discount eligibility codes, PNRs contain detailed information on patterns of association between travelers. PNRs can contain religious meal preferences and special service requests that describe details of physical and medical conditions (e.g., 'Uses wheelchair, can control bowels and bladder') – categories of information that have special protected status in the European Union and some other countries as 'sensitive' personal data." Despite the sensitive character of the information they contain, PNRs are generally not recognized as deserving the same privacy protection afforded to medical and financial records. Instead, they are treated as a form of commercial transaction data. International PNR sharing agreements European Union to United States European Union to Australia On 16 January 2004, the Article 29 Working Party released its Opinion 1/2004 (WP85) on the level of PNR protection ensured in Australia for the transmission of Passenger Name Record data from airlines. In 2010 the European Commission's Directorate-General for Justice, Freedom and Security was split in two. The resulting bodies were the Directorate-General for Justice (European Commission) and the Directorate-General for Home Affairs (European Commission). On 4 May 2011, Stefano Manservisi, Director-General at the Directorate-General for Home Affairs (European Commission), wrote to the European Data Protection Supervisor (EDPS) with regard to a PNR sharing agreement with Australia, a close ally of the US and signatory to the UKUSA Agreement on signals intelligence. The EDPS responded on 5 May in Letter 0420 D845. European Union to Canada The Article 29 Working Party document Opinion 1/2005 on the level of protection ensured in Canada for the transmission of Passenger Name Record and Advance Passenger Information from airlines (WP 103), 19 January 2005, offers information on the nature of PNR agreements with Canada. See also Advance Passenger Information System Flight manifest Machine-readable passport Further reading Farrell, Henry and Abraham Newman. 2019. Of Privacy and Power: The Transatlantic Struggle over Freedom and Security. Princeton University Press. References External links European Union Freedom of Information Request for Information on PNR Agreements, AskTheEU.org. Guidelines on Passenger Name Record (PNR) Data, iata.org Airline tickets Information privacy Information sensitivity Travel technology
Passenger name record
Engineering
2,051
9,592,375
https://en.wikipedia.org/wiki/Decimal%20%28unit%29
A decimal (also spelled decimil or dismil) is a unit of area in India and Bangladesh. After metrication in the mid-20th century by both countries, the unit became officially obsolete. However, it is still in use among the rural population in Northern Bangladesh and West Bengal. A decimal is one hundredth of an acre of land, and is equal to 48.4 square yards (about 40.47 square metres). Decimals are also used as a measure of land in West Africa. Conversion chart See also List of customary units of measurement in South Asia References Customary units in India Obsolete units of measurement Units of area External links Decimal to Square Feet Calculator
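As a worked conversion based on the definition above: 1 decimal = 1/100 acre = 4,840/100 square yards = 48.4 square yards = 435.6 square feet ≈ 40.47 square metres, so a plot of 25 decimals corresponds to a quarter of an acre, or roughly 1,012 square metres.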
Decimal (unit)
Mathematics
134
70,629,678
https://en.wikipedia.org/wiki/2D%20Materials%20%28journal%29
2D Materials is a monthly peer-reviewed scientific journal published by IOP Publishing. It covers fundamental and applied research on graphene and related two-dimensional materials. The editor-in-chief is Wencai Ren (Chinese Academy of Sciences). Abstracting and indexing The journal is abstracted and indexed in: Astrophysics Data System Chemical Abstracts Ei Compendex Inspec International Nuclear Information System ProQuest databases Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.5. References External links Materials science journals IOP Publishing academic journals Monthly journals English-language journals Academic journals established in 2014
2D Materials (journal)
Materials_science,Engineering
136
10,290,414
https://en.wikipedia.org/wiki/Tetramer%20assay
A tetramer assay (also known as a tetramer stain) is a procedure that uses tetrameric proteins to detect and quantify T cells that are specific for a given antigen within a blood sample. The tetramers used in the assay are made up of four major histocompatibility complex (MHC) molecules, which are found on the surface of most cells in the body. MHC molecules present peptides to T-cells as a way to communicate the presence of viruses, bacteria, cancerous mutations, or other antigens in a cell. If a T-cell's receptor matches the peptide being presented by an MHC molecule, an immune response is triggered. Thus, MHC tetramers that are bioengineered to present a specific peptide can be used to find T-cells with receptors that match that peptide. The tetramers are labeled with a fluorophore, allowing tetramer-bound T-cells to be analyzed with flow cytometry. Quantification and sorting of T-cells by flow cytometry enables researchers to investigate immune response to viral infection and vaccine administration as well as functionality of antigen-specific T-cells. Generally, if a person's immune system has encountered a pathogen, the individual will possess T cells with specificity toward some peptide on that pathogen. Hence, if a tetramer stain specific for a pathogenic peptide results in a positive signal, this may indicate that the person's immune system has encountered and built a response to that pathogen. History This methodology was first published in 1996 by a lab at Stanford University. Previous attempts to quantify antigen-specific T-cells involved the less accurate limiting dilution assay, which estimates numbers of T-cells at 50-500 times below their actual levels. Stains using soluble MHC monomers were also unsuccessful due to the low binding affinity between T-cell receptors and MHC-peptide monomers. MHC tetramers can bind to more than one receptor on the target T-cell, resulting in an increased total binding strength and lower dissociation rates. Uses CD8+ T-cells Tetramer stains usually analyze cytotoxic T lymphocyte (CTL) populations. CTLs are also called CD8+ T-cells, because they have CD8 co-receptors that bind to MHC class I molecules. Most cells in the body express MHC class I molecules, which are responsible for processing intracellular antigens and presenting them at the cell's surface. If the peptides being presented by MHC class I molecules are foreign—for example, derived from viral proteins instead of the cell's own proteins—the CTL with a receptor that matches the peptide will destroy the cell. Tetramer stains allow for the visualization, quantification, and sorting of these cells by flow cytometry, which is extremely useful in immunology. T-cell populations can be tracked over the course of a viral infection or after the administration of a vaccine. Tetramer stains can also be paired with functional assays like ELIspot, which detects the number of cytokine-secreting cells in a sample. MHC Class I Tetramer Construction MHC tetramer molecules developed in a lab can mimic the antigen presenting complex on cells and bind to T-cells that recognize the antigen. Class I MHC molecules are made up of a polymorphic heavy α-chain associated with an invariant light chain beta-2 microglobulin (β2m). Escherichia coli is used to synthesize the light chain and a shortened version of the heavy chain that includes the biotin 15 amino acid recognition tag. These MHC chains are biotinylated with the enzyme BirA and refolded with the antigenic peptide of interest.
Biotin is a small molecule that forms a strong bond with another protein called streptavidin. Fluorophore-tagged streptavidin is added to the bioengineered MHC monomers, and the biotin-streptavidin interaction causes four MHC monomers to bind to the streptavidin and create a tetramer. When the tetramers are mixed with a blood sample, they will bind to T-cells expressing the appropriate antigen-specific receptor. Any MHC tetramers that are not bound are washed out of the sample before it is analyzed with flow cytometry. Recent advances in recombinant MHC molecules have made peptide-MHC complex formulation and subsequent multimerisation more widely accessible. Highly active formulations of a broad range of MHC class I molecules now allow non-expert users to make their own custom peptide-MHC complexes from day to day in any lab without special equipment. CD4+ T-cells Tetramers that bind to helper T-cells have also been developed. Helper T-cells or CD4+ T-cells express CD4 co-receptors. They bind to class II MHC molecules, which are only expressed in professional antigen-presenting cells like dendritic cells or macrophages. Class II MHC molecules present extracellular antigens, allowing helper T-cells to detect bacteria, fungi, and parasites. Class II MHC tetramer use is becoming more common, but the tetramers are more difficult to create than class I tetramers, and the bond between helper T-cells and MHC molecules is even weaker. Natural Killer T-cells Natural killer T-cells (NKT cells) can also be visualized with tetramer technology. NKT cells bind to proteins that present lipid or glycolipid antigens. The antigen presenting complex that NKT cells bind to involves CD1 proteins, so tetramers made of CD1 can be used to stain for NKT cells. Examples An early application of tetramer technology focused on the cell-mediated immune response to HIV infection. MHC tetramers were developed to present HIV antigens and used to find the percentage of CTLs specific to those HIV antigens in blood samples of infected patients. This was compared to results of cytotoxic assays and plasma RNA viral load to characterize the function of CTLs in HIV infection. The CTLs that bound to tetramers were sorted into ELIspot wells for analysis of cytokine secretion.
Tetramer assay
Chemistry,Biology
1,420