Dataset columns:
id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
51,899,244
https://en.wikipedia.org/wiki/Kumeyaay%20astronomy
Kumeyaay astronomy or cosmology (Kumeyaay: My Uuyow, "sky knowledge") comprises the astronomical knowledge of the Kumeyaay people, a Native American group whose traditional homeland occupies what is now Southern California in the United States and adjacent parts of northern Baja California in Mexico. Based on this knowledge, the Kumeyaay developed and followed a deeply rooted cosmological belief system, including the computation of time (Kumeyaay Mat’taam). The first evidence of astronomical observation and visual recording was discovered in the El Vallecito archeological zone. The "Men in a square" rock painting, located in the El Diablito area of El Vallecito, depicts a square that aligns with sunlight on the fall equinox. These paintings were made by the Kumeyaay people, possibly during nomadic travels. Kumeyaay sand paintings and rock art modeled the passage of the sun, moon, and constellations. The Kumeyaay built observation areas to watch and record astronomical events; however, many were destroyed by vandals before protection measures were instituted. Astronomical objects Hatotkeur (Spine of the Sky) - Milky Way Constellations: See also Cultural astronomy References External links San Diego Museum of Man Astronomy-related lists History of astronomy Kumeyaay Archaeoastronomy
Kumeyaay astronomy
Astronomy
280
50,376,013
https://en.wikipedia.org/wiki/Anomaly%20%28natural%20sciences%29
In the natural sciences, especially in atmospheric and Earth sciences involving applied statistics, an anomaly is a persisting deviation in a physical quantity from its expected value, e.g., the systematic difference between a measurement and a trend or a model prediction. Similarly, a standardized anomaly equals an anomaly divided by a standard deviation. A group of anomalies can be analyzed spatially, as a map, or temporally, as a time series. An anomaly should not be confused with an isolated outlier. There are examples in the atmospheric sciences and in geophysics. Calculation The location and scale measures used in forming an anomaly time series may either be constant or may themselves be a time series or a map. For example, if the original time series consisted of daily mean temperatures, the effect of seasonal cycles might be removed using a deseasonalization filter. Robust statistics, resistant to the effects of outliers, are sometimes used as the basis of the transformation. Examples Atmospheric sciences In the atmospheric sciences, the climatological annual cycle is often used as the expected value. Well-known atmospheric anomalies include the Southern Oscillation Index (SOI) and the North Atlantic Oscillation (NAO) index. The SOI is the atmospheric component of El Niño, while the NAO plays an important role in European weather by modifying the exit of the Atlantic storm track. A climate normal can also be used to derive a climate anomaly. Geophysics Gravity anomaly, difference between the observed gravity and a value predicted from a model Bouguer anomaly, anomaly in gravimetry Free-air anomaly, gravity anomaly that has been computed for latitude and corrected for elevation of the station Iridium anomaly, an unusual abundance of what is normally a very rare element in the Earth's crust Magnetic anomaly, local variation in the Earth's magnetic field Bangui magnetic anomaly, in central Africa Kursk Magnetic Anomaly, territory rich in iron ores located within Kursk Oblast, Belgorod Oblast, and Oryol Oblast Temagami Magnetic Anomaly, large buried geologic structure in the Temagami region of Ontario, Canada See also Bias (statistics) Climate oscillation Frequency spectrum Innovation (signal processing) Least squares Least-squares spectral analysis Temperature anomaly References Time series Climate and weather statistics Geophysics
Anomaly (natural sciences)
Physics
459
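A minimal sketch in Python of the anomaly calculation described in the entry above, assuming daily observations and using a day-of-year climatology as the expected value (the function and array names are illustrative, not from the source):

    import numpy as np

    def standardized_anomaly(values, day_of_year):
        # Anomaly: observation minus the climatological mean for that calendar
        # day; standardized anomaly: that anomaly divided by the climatological
        # standard deviation. This deseasonalizes a daily time series.
        values = np.asarray(values, dtype=float)
        day_of_year = np.asarray(day_of_year)
        out = np.empty_like(values)
        for d in np.unique(day_of_year):
            sel = day_of_year == d
            out[sel] = (values[sel] - values[sel].mean()) / values[sel].std()
        return out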
68,589,911
https://en.wikipedia.org/wiki/2C-T-28
2C-T-28 is a lesser-known psychedelic drug related to compounds such as 2C-T-7 and 2C-T-21. It was named by Alexander Shulgin but was never made or tested by him; it was instead first synthesised by Daniel Trachsel some years later. It has a binding affinity of 75 nM at 5-HT2A and 28 nM at 5-HT2C. It is reportedly a potent psychedelic drug with an active dose in the 8–20 mg range and a duration of action of 8–10 hours, with prominent visual effects. 2C-T-28 is the chain-lengthened homologue of 2C-T-21, bearing a 3-fluoropropyl group in place of the 2-fluoroethyl group, and has very similar properties, although unlike 2C-T-21 it will not form toxic fluoroacetate as a metabolite. See also 2C-T-16 2C-TFE 3C-DFE DOPF Trifluoromescaline 2C-x DOx 25-NB References 2C (psychedelics) Entheogens Thioethers Amines Methoxy compounds
2C-T-28
Chemistry
250
24,634,176
https://en.wikipedia.org/wiki/EIA/TIA-662
TIA/EIA-662 is a 1995 telecommunications standard from the Telecommunications Industry Association, a 1988 offshoot of the EIA. The standard addresses Personal Wireless Telecommunications (PWT). The standard is based on a microcell radio communications system that provides low-power radio access between portable equipment and the fixed network over distances of a few hundred meters. Such wireless personal telecommunications devices may be used for wireless PBX services and for sending data in packets or over circuits. This standard is based on the Digital Enhanced Cordless Telecommunications (DECT) standard. References Organization of American States, Inter-American Telecommunication Commission, XI Meeting of Permanent Consultative Committee I: Public Telecommunication Services, October 25 to 29, 1999, Final Report Mobile telecommunications standards EIA standards
EIA/TIA-662
Technology
152
1,786
https://en.wikipedia.org/wiki/Arabic%20numerals
The ten Arabic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) are the most commonly used symbols for writing numbers. The term often also implies a positional decimal numeral system, in particular when contrasted with Roman numerals. However, the symbols are also used to write numbers in other bases, such as octal, as well as non-numerical information such as trademarks or license plate identifiers. They are also called Western Arabic numerals, Western digits, European digits, Ghubār numerals, or Hindu–Arabic numerals because positional notation (though not these digits) originated in India. The Oxford English Dictionary uses lowercase Arabic numerals for them, while using the fully capitalized term Arabic Numerals for Eastern Arabic numerals. In contemporary society, the terms digits, numbers, and numerals often imply only these symbols, although this can only be inferred from context. Europeans first learned of Arabic numerals around the 10th century, though their spread was a gradual process. After Italian scholar Fibonacci of Pisa encountered the numerals in the Algerian city of Béjaïa, his 13th-century work Liber Abaci became crucial in making them known in Europe. However, their use was largely confined to Northern Italy until the invention of the printing press in the 15th century. European trade, books, and colonialism subsequently helped popularize the adoption of Arabic numerals around the world. The numerals are used worldwide—significantly beyond the contemporary spread of the Latin alphabet—and have become common in the writing systems where other numeral systems existed previously, such as Chinese and Japanese numerals. History Origin Positional decimal notation including a zero symbol was developed in India, using symbols visually distinct from those that would eventually enter into international use. As the concept spread, the sets of symbols used in different regions diverged over time. The immediate ancestors of the digits now commonly called "Arabic numerals" were introduced to Europe in the 10th century by Arabic speakers of Spain and North Africa, with the digits at the time in wide use from Libya to Morocco. In the east, from Egypt to Iraq and the Arabian Peninsula, the Arabs were using the Eastern Arabic numerals or "Mashriki" numerals: ٠, ١, ٢, ٣, ٤, ٥, ٦, ٧, ٨, ٩. Al-Nasawi wrote in the early 11th century that mathematicians had not agreed on the form of the numerals, but most of them had agreed to train themselves with the forms now known as Eastern Arabic numerals. The oldest specimens of the written numerals available are from Egypt and date to 873–874 AD. They show three forms of the numeral "2" and two forms of the numeral "3", and these variations indicate the divergence between what later became known as the Eastern Arabic numerals and the Western Arabic numerals. The Western Arabic numerals came to be used in the Maghreb and Al-Andalus from the 10th century onward. Some amount of consistency in the Western Arabic numeral forms endured from the 10th century, found in a Latin manuscript of Isidore of Seville's Etymologiae from 976 and the Gerbertian abacus, into the 12th and 13th centuries, in early manuscripts of translations from the city of Toledo. Calculations were originally performed using a dust board (takht, Latin: tabula), which involved writing symbols with a stylus and erasing them. The use of the dust board appears to have introduced a divergence in terminology as well: whereas the Hindu reckoning was called ḥisāb al-hindī in the east, it was called ḥisāb al-ghubār, 'calculation with dust', in the west. 
The numerals themselves were referred to in the west as 'dust figures' or 'dust letters'. Al-Uqlidisi later invented a system of calculation with ink and paper, 'without board and erasing'. A popular myth claims that the symbols were designed to indicate their numeric value through the number of angles they contained, but there is no contemporary evidence of this, and the myth is difficult to reconcile with any digits past 4. Adoption and spread The first mentions of the numerals from 1 to 9 in the West are found in the Codex Vigilanus of 976, an illuminated collection of various historical documents covering a period from antiquity to the 10th century in Hispania. Other texts show that numbers from 1 to 9 were occasionally supplemented by a placeholder known as sipos, represented as a circle or wheel, reminiscent of the eventual symbol for zero. The Arabic term for zero is ṣifr, transliterated into Latin as cifra, which became the English word cipher. From the 980s, Gerbert of Aurillac (later Pope Sylvester II) used his position to spread knowledge of the numerals in Europe. Gerbert studied in Barcelona in his youth. He was known to have requested mathematical treatises concerning the astrolabe from Lupitus of Barcelona after he had returned to France. The reception of Arabic numerals in the West was gradual and lukewarm, as other numeral systems circulated in addition to the older Roman numerals. As a discipline, the first to adopt Arabic numerals as part of their own writings were astronomers and astrologists, as evidenced by manuscripts surviving from mid-12th-century Bavaria. Reinher of Paderborn (1140–1190) used the numerals in his calendrical tables to calculate the dates of Easter more easily in his text Computus emendatus. Italy Leonardo Fibonacci was a Pisan mathematician who had studied in the Pisan trading colony of Bugia, in what is now Algeria, and he endeavored to promote the numeral system in Europe with his 1202 book Liber Abaci: When my father, who had been appointed by his country as public notary in the customs at Bugia acting for the Pisan merchants going there, was in charge, he summoned me to him while I was still a child, and having an eye to usefulness and future convenience, desired me to stay there and receive instruction in the school of accounting. There, when I had been introduced to the art of the Indians' nine symbols through remarkable teaching, knowledge of the art very soon pleased me above all else and I came to understand it. The Liber Abaci's analysis highlighting the advantages of positional notation was widely influential. Likewise, Fibonacci's use of the Béjaïa digits in his exposition ultimately led to their widespread adoption in Europe. Fibonacci's work coincided with the European commercial revolution of the 12th and 13th centuries centered in Italy. Positional notation allowed complex calculations (such as currency conversion) to be completed more quickly than was possible with the Roman system. In addition, the system could handle larger numbers, did not require a separate reckoning tool, and allowed the user to check their work without repeating the entire procedure. Late medieval Italian merchants did not stop using Roman numerals or other reckoning tools: instead, Arabic numerals were adopted for use in addition to their preexisting methods. Europe By the late 14th century, only a few texts using Arabic numerals had appeared outside of Italy. 
This suggests that the use of Arabic numerals in commercial practice, and the significant advantage they conferred, remained a virtual Italian monopoly until the late 15th century. This may in part have been due to language barriers: although Fibonacci's Liber Abaci was written in Latin, the Italian abacus traditions were predominantly written in Italian vernaculars that circulated in the private collections of abacus schools or individuals. The European acceptance of the numerals was accelerated by the invention of the printing press, and they became widely known during the 15th century. Their use grew steadily in other centers of finance and trade, such as Lyon. Early evidence of their use in Britain includes an equal hour horary quadrant from 1396, in England; a 1445 inscription on the tower of Heathfield Church, Sussex; a 1448 inscription on a wooden lych-gate of Bray Church, Berkshire; a 1487 inscription on the belfry door at Piddletrenthide church, Dorset; and, in Scotland, a 1470 inscription on the tomb of the first Earl of Huntly in Elgin Cathedral. In central Europe, the King of Hungary, Ladislaus the Posthumous, began the use of Arabic numerals, which appear for the first time in a royal document of 1456. By the mid-16th century, they had been widely adopted in Europe, and by 1800 they had almost completely replaced the use of counting boards and Roman numerals in accounting. Roman numerals were mostly relegated to niche uses such as years and numbers on clock faces. Russia Prior to the introduction of Arabic numerals, Cyrillic numerals, derived from the Cyrillic alphabet, were used by South and East Slavs. The system was used in Russia as late as the early 18th century, although it was formally replaced in official use by Peter the Great in 1699. Reasons for Peter's switch from the alphanumerical system are believed to go beyond a surface-level desire to imitate the West. Historian Peter Brown makes arguments for sociological, militaristic, and pedagogical reasons for the change. At a broad, societal level, Russian merchants, soldiers, and officials increasingly came into contact with counterparts from the West and became familiar with the communal use of Arabic numerals. Peter also covertly travelled throughout Northern Europe from 1697 to 1698 during his Grand Embassy and was likely informally exposed to Western mathematics during this time. The Cyrillic system was found to be inferior for calculating practical kinematic values, such as the trajectories and parabolic flight patterns of artillery. With its use, it was difficult to keep pace with Arabic numerals in the growing field of ballistics, whereas Western mathematicians such as John Napier had been publishing on the topic since 1614. China The Chinese Shang dynasty numerals from the 14th century BC predate the Indian Brahmi numerals by over 1000 years and show substantial similarity to the Brahmi numerals. Similar to the modern Arabic numerals, the Shang dynasty numeral system was also decimal-based and positional. While positional Chinese numeral systems such as the counting rod system and Suzhou numerals had been in use prior to the introduction of modern Arabic numerals, the externally developed system was eventually introduced to medieval China by the Hui people. In the early 17th century, European-style Arabic numerals were introduced by Spanish and Portuguese Jesuits. Encoding The ten Arabic numerals are encoded in virtually every character set designed for electric, radio, and digital communication, such as Morse code. 
They are encoded in ASCII (and therefore in Unicode encodings) at positions 0x30 to 0x39. Masking all but the four least-significant binary digits gives the value of the decimal digit, a design decision facilitating the digitization of text onto early computers. EBCDIC used a different offset, but also possessed the aforementioned masking property. See also Arabic numeral variations Regional variations in modern handwritten Arabic numerals Seven-segment display Text figures Footnotes Sources Further reading External links Lam Lay Yong, "Development of Hindu Arabic and Traditional Chinese Arithmetic", Chinese Science 13 (1996): 35–54. "Counting Systems and Numerals", Historyworld. Retrieved 11 December 2005. The Evolution of Numbers. 16 April 2005. O'Connor, J. J., and E. F. Robertson, Indian numerals . November 2000. History of the numerals Arabic numerals Hindu–Arabic numerals Numeral & Numbers' history and curiosities Gerbert d'Aurillac's early use of Hindu–Arabic numerals at Convergence Numerals
Arabic numerals
Mathematics
2,416
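The masking property described in the Encoding passage of the entry above can be checked directly; a short Python sketch (EBCDIC's digit codes 0xF0–0xF9 are the conventional offset mentioned there):

    for ch in "0123456789":
        ascii_code = ord(ch)          # ASCII places '0'..'9' at 0x30..0x39
        ebcdic_code = 0xF0 + int(ch)  # EBCDIC uses a different offset, 0xF0..0xF9
        # Masking all but the four least-significant bits yields the digit value
        assert ascii_code & 0x0F == int(ch)
        assert ebcdic_code & 0x0F == int(ch)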
62,692,135
https://en.wikipedia.org/wiki/Garland%20bearers
Garland bearers, typically in the form of small naked putti holding up a continuous garland very large in relation to their size, formed a popular ornamental design in classical art, from the Greco-Roman world to India, with ramifications as far as China. In Europe they were revived in the Renaissance and continued in later periods. Greco-Roman art The garland-bearer design was extremely popular in the Mediterranean. It first appeared at the end of the Hellenistic period, and its popularity expanded during the Roman period. The design reached a peak of popularity in the 2nd century CE, adorning sarcophagi made in Asia Minor to be sold in Rome. Greek garland bearer designs tend to be continuous, and the garlands are furnished with leaves and stems. Roman garland bearer designs are segmented and often use flowers and fruits for decoration. Garland bearers were also particularly associated with the cult of Dionysus. Central Asia Indian art The erotes or putti holding garlands are one of the most common motifs of the Greco-Buddhist art of Gandhara. According to John Boardman, they find their origin in Hellenistic designs, rather than Roman ones. The garlands had an important role in decorating Buddhist stupas. China The garland bearer design can be seen in Buddhist frescoes in Miran, China, from the 3rd century CE. References Architectural sculpture Visual motifs Ornaments (architecture)
Garland bearers
Mathematics
286
38,516,236
https://en.wikipedia.org/wiki/Pairwise%20error%20probability
Pairwise error probability is the error probability that for a transmitted signal ($X$) its corresponding but distorted version ($\hat{X}$) will be received. This type of probability is called ″pair-wise error probability″ because the probability exists with a pair of signal vectors in a signal constellation. It is mainly used in communication systems. Expansion of the definition In general, the received signal is a distorted version of the transmitted signal. Thus, we introduce the symbol error probability, which is the probability $P(e)$ that the demodulator will make a wrong estimation of the transmitted symbol based on the received symbol, which is defined as follows:

$$P(e) = \frac{1}{M}\sum_{i=1}^{M} P(e \mid X_i)$$

where $M$ is the size of the signal constellation. The pairwise error probability $P(X \to \hat{X})$ is defined as the probability that, when $X$ is transmitted, $\hat{X}$ is received. $P(e \mid X_i)$ can be expressed as the probability that at least one $\hat{X} \neq X_i$ is closer than $X_i$ to the received vector $Y$. Using the upper bound to the probability of a union of events, it can be written:

$$P(e \mid X_i) \leq \sum_{\hat{X} \neq X_i} P(X_i \to \hat{X})$$

Finally:

$$P(e) = \frac{1}{M}\sum_{i=1}^{M} P(e \mid X_i) \leq \frac{1}{M}\sum_{i=1}^{M} \sum_{\hat{X} \neq X_i} P(X_i \to \hat{X})$$

Closed form computation For the simple case of the additive white Gaussian noise (AWGN) channel:

$$Y = X + Z, \qquad Z \sim \mathcal{N}\!\left(0, \tfrac{N_0}{2} I\right)$$

The PEP can be computed in closed form as follows: the error event $X \to \hat{X}$ occurs when $\langle Z, \hat{X} - X \rangle > \tfrac{1}{2}\lVert \hat{X} - X \rVert^2$, and $\langle Z, \hat{X} - X \rangle$ is a Gaussian random variable with mean 0 and variance $\tfrac{N_0}{2}\lVert \hat{X} - X \rVert^2$. For a zero mean, variance $\sigma^2$ Gaussian random variable:

$$P(Z > z) = Q\!\left(\frac{z}{\sigma}\right), \qquad Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty} e^{-t^2/2}\,dt$$

Hence,

$$P(X \to \hat{X}) = Q\!\left(\frac{\lVert \hat{X} - X \rVert}{\sqrt{2N_0}}\right)$$

See also Signal processing Telecommunication Electrical engineering Random variable References Further reading Signal processing Probability theory
Pairwise error probability
Technology,Engineering
261
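A small numerical sketch in Python of the closed-form AWGN result and union bound reconstructed above; the Q-function is written via the complementary error function, and the QPSK-style constellation and noise level are illustrative assumptions, not from the source:

    import numpy as np
    from math import erfc, sqrt

    def q_func(x):
        # Q(x) = 0.5 * erfc(x / sqrt(2)): tail probability of a standard Gaussian
        return 0.5 * erfc(x / sqrt(2))

    def pep(x1, x2, n0):
        # Closed-form pairwise error probability P(x1 -> x2) over AWGN
        d = np.linalg.norm(np.asarray(x1, float) - np.asarray(x2, float))
        return q_func(d / sqrt(2 * n0))

    def union_bound(constellation, n0):
        # Union bound on symbol error probability: average over transmitted
        # symbols of the summed pairwise error probabilities
        m = len(constellation)
        return sum(pep(xi, xj, n0)
                   for i, xi in enumerate(constellation)
                   for j, xj in enumerate(constellation) if i != j) / m

    qpsk = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # assumed unit-energy constellation
    print(union_bound(qpsk, n0=0.1))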
26,786,438
https://en.wikipedia.org/wiki/Soft%20chemistry
Soft chemistry (also known as chimie douce) is a type of chemistry that uses reactions at ambient temperature in open reaction vessels, with reactions similar to those occurring in biological systems. Aims The aim of soft chemistry is to synthesize materials by drawing on the capabilities of living organisms, more or less basic, such as diatoms, which are capable of producing glass from dissolved silicates. It is a branch of materials science that differs from conventional solid-state chemistry, with its reliance on intense energy input, by exploring the chemical inventiveness of the living world. The specialty emerged in the 1980s under the label "chimie douce", a term first published by the French chemist Jacques Livage in Le Monde on 26 October 1977. Although French in origin, the term soft chemistry has been used as such in scientific publications in English and other languages since the early twenty-first century. Its mode of synthesis generally resembles organic polymerization reactions: reactive solutions are prepared and polycondensation proceeds without any essential energy input. The fundamental interest of this kind of mineral polymerization, obtained at room temperature, is that it preserves the organic molecules or microorganisms one wishes to incorporate. The products obtained by the so-called soft chemistry sol-gel route fall into several types: mineral structures of various qualities (smoothness, uniformity, etc.); mixed structures combining inorganic and organic molecules; and mineral structures enveloping complex molecules and even microorganisms while maintaining or optimizing their beneficial characteristics. Early results included the creation of glasses and ceramics with new properties. These more or less composite structures serve a wide range of applications, from health to the needs of space exploration. Beyond its mode of synthesis, a compound bearing the soft chemistry label combines the advantages of the mineral world (resistance, transparency, repeating patterns, etc.) with the potential of biochemistry and organic chemistry (interfacing with the organic world, reactivity, synthesis capability, etc.). According to its practitioners, "soft chemistry" is only at the beginning of its success and opens up vast prospects. References Chemistry Biochemistry Chemical synthesis Biomimetics
Soft chemistry
Chemistry,Engineering,Biology
433
71,013,672
https://en.wikipedia.org/wiki/Milk%20immunity
Milk immunity is the protection provided to the immune system of an infant via the biologically active components in milk, typically provided by the infant's mother. Mammalian milk All mammalian milk contains water, sugar, fat, vitamins, and protein, with variation within and between species and individuals mainly in the amounts of these components. Beyond this variation in quantity, not much is known about bioactive or immune-modulating factors in many mammalian species. However, in comparison to other mammalian milk, human milk has the greatest oligosaccharide diversity. Bovine milk Ruminant mothers do not transfer immunity to their infants during pregnancy, which makes milk a calf's first exposure to maternal immunity. Bovine milk contains both immunoglobulins A and G, but in contrast to human milk, in which IgA is the most abundant, IgG predominates. Secretory component, IgM, both anti-inflammatory and inflammatory cytokines, and other proteins with antimicrobial functions are also present in bovine milk. Human milk Avian crop milk Crop milk is a secretion from the crop of a bird that is regurgitated to feed its offspring. Birds that produce this secretion include pigeons, flamingos, emperor penguins, and doves. Pigeon milk contains some immune-modulating factors such as microbes and IgA, as well as other components with similar biological activities to mammalian milk, including pigeon growth factor and transferrin. References Breastfeeding Immunology Milk
Milk immunity
Biology
310
41,458,505
https://en.wikipedia.org/wiki/Macrocybe%20spectabilis
Macrocybe spectabilis is a species of mushroom-forming fungus. It is found in Mauritius, Japan, and Hawaii. It is associated with sugarcane. It and Macrocybe titans contain large concentrations of cyanide. This mushroom is listed as 食用 (edible) in the book きのこ ("Mushrooms") in the series "New Yama-Kei Pocket Guide." References Fungi described in 1973 Fungi of Mauritius Fungi of Japan Fungi of Hawaii Tricholomataceae Fungi without expected TNC conservation status Fungus species
Macrocybe spectabilis
Biology
109
23,066,512
https://en.wikipedia.org/wiki/Super%20Mario%20Galaxy%202
Super Mario Galaxy 2 is a 2010 platform game developed and published by Nintendo for the Wii. It was first announced at E3 2009 and is the sequel to 2007's Super Mario Galaxy. Much like the first game, the story follows Mario as he pursues the Koopa King, Bowser, into outer space, where Bowser has imprisoned Princess Peach and taken control of the universe using Power Stars and Grand Stars. Mario must travel across various galaxies to recover the Power Stars in order to reach the center of the universe and rescue Princess Peach. The game was originally planned as an updated version of Super Mario Galaxy, known as Super Mario Galaxy More. However, it was later decided that the game would be expanded into a fully fleshed-out sequel when the development staff continued to build upon the game with dozens of new ideas. As such, development time expanded to two and a half years. Among the new additions are dynamic environments, new power-ups, and the addition of Yoshi. Super Mario Galaxy 2 was met with critical acclaim and was considered to match or surpass its lauded predecessor; its creativity, level design, gameplay, music, and technological improvements over the original received high praise, although critics were divided on its lack of story and its higher difficulty compared to the original. It is frequently regarded by critics as one of the greatest video games ever made and is one of the best-selling games on the Wii, with over seven million copies sold worldwide. Gameplay The gameplay of Super Mario Galaxy 2 is near-identical to its predecessor's, with a focus on platforming based on and around 3D planets, grouped into levels known as galaxies. Planets and galaxies each have varying themes, sizes, landscapes, and climates. The player controls Mario (or, later in the game, his brother Luigi, though using him is optional), who has special abilities such as the "Spin" attack, long jump, wall jumps, and a variety of somersaults. As in the original, the objective of the game is to travel to the various galaxies and collect Power Stars, which are awarded by completing levels and accomplishing tasks and are used to gain access to later levels. The game retains various gameplay mechanics introduced in the original, such as the blue Star Pointer that allows the player to collect Star Bits and shoot them at enemies, levels that restrict movement to a 2D plane, balance ball levels, and gravity-reversing background arrows. Setting and level design The game provides the player access to the game's galaxies through a map system similar to that in previous Mario games such as Super Mario World and New Super Mario Bros. Wii. This is navigated via a mobile planet called Starship Mario that serves as a hub world, which can be visited at any time and is expanded when new abilities or levels are unlocked. The game contains forty-nine galaxies allotted among seven different regions in the universe (called "worlds"), with the general difficulty progressively increasing in each world. The first six worlds end with a boss level, in which the objective is to defeat Bowser or Bowser Jr. (the former in even-numbered worlds, and the latter in odd-numbered worlds), which then allows the player to access the next world. When the player collects all 120 Power Stars, 120 Green Star missions are unlocked. These levels, containing Green Stars that are hidden or placed in hard-to-reach areas, require intense exploration and precision and may cause instant death if the player fails. 
Super Mario Galaxy 2 contains 242 unique Power Stars to collect overall. Most of the levels in Super Mario Galaxy 2 offer a unique task based around their theme, and many focus on dynamic environments that change or alternate between various states. For example, some environments change to the beat of the background music, such as sudden shifts in the direction of gravity or the appearance or disappearance of platforms; others feature a special switch that temporarily slows down time. Prankster Comets, which were featured in the original game and cause variation and tougher challenges in levels, no longer appear randomly in visited galaxies but instead require the collection of a Comet Medal in a galaxy in order for one to appear there. In addition, Prankster Comets have become more general and offer any number of variations: while Super Mario Galaxy offered only five mutually exclusive variations, the Prankster Comets in Super Mario Galaxy 2 encompass any number of challenges that often mix or overlap. These include defeating all the enemies, collecting 100 Purple Coins, completing the level within a time limit, completing the level with only one maximum health unit, or avoiding Cosmic Clones (doppelgängers of Mario that imitate the player's actions). As a result, both the dynamic environments and the Prankster Comets often create challenges with puzzle elements, requiring precision and strategy to overcome. Power-ups All the original transformations in Super Mario Galaxy return, with the exception of Ice Mario and Flying Mario. Three new power-ups and items for Mario are introduced in the game. These include the Spin Drill, which allows the player to burrow through planets and emerge out the other side; Rock Mario, which allows the player to transform into a boulder and smash through enemies and other obstacles; and Cloud Mario, which allows the player to create up to three temporary platforms in midair. Mario is able to ride the dinosaur Yoshi in certain levels. When riding Yoshi, the player's blue Star Pointer is replaced by a red dot, which allows the player to point at various objects and manipulate them with Yoshi's tongue. Yoshi can also use his tongue to swing across gaps, pull levers, and swallow berries and enemies (with the option to spit the latter back out as projectiles). In addition, Yoshi allows the player to flutter jump. There are also three different power-up fruits available for Yoshi to eat that grant him temporary abilities: the Dash Pepper, the Blimp Fruit, and the Bulb Berry. The Dash Pepper allows Yoshi to run at extremely high speed, letting him run up walls and on water; the Blimp Fruit allows Yoshi to float in the air for a limited amount of time; and the Bulb Berry allows Yoshi to reveal secret pathways. If the player takes damage while riding Yoshi, the player will fall off and Yoshi will run away until the player gets back on him. If the player does not get back on, Yoshi will retreat into his egg and return to one of the nests found only in certain areas of the level. Guides and multiplayer The Cosmic Guide appears if the player has failed a particular level a certain number of times, and allows the player to hand control of Mario to the computer to complete the level. The drawback is that the player is awarded a Bronze Star, which is not added to the overall Power Star count, requiring the player to complete the level without using the Cosmic Guide to earn a golden Power Star. 
There are also monitors called "Hint TVs" that demonstrate how to perform a specific move or optimal ways of using a power-up. Multiplayer gameplay has also been expanded upon over the original. In Super Mario Galaxy, another player could use a second Wii Remote to control a second Star Pointer and assist Mario by grabbing enemies or collecting and shooting Star Bits. In Super Mario Galaxy 2, the second player now controls an orange Luma who retains all the original abilities but can also physically attack enemies and collect items, power-ups, and 1-ups, making the second player's involvement more useful. Plot In a retelling of the first game's story, Princess Peach invites Mario to share a cake at the Star Festival, a centennial celebration that occurs when Star Bits rain down from the skies over the Mushroom Kingdom. On his way to Peach's castle, Mario finds a lost Baby Luma, who immediately befriends him and grants him the ability to spin. Shortly thereafter, Mario's nemesis Bowser, who has grown to an immense size after abusing the power of the Grand Stars, attacks the castle. Kidnapping Peach, Bowser escapes into outer space to recreate his empire at the center of the universe. Mario then finds two Lumas who offer their help, and one of them transforms into a Launch Star that launches him into outer space. After landing on and venturing through the first galaxy and obtaining his first Power Star, Mario arrives on a small planetoid functioning as a spaceship and meets Lubba, a large purple Luma who leads a small band of Lumas. Lubba explains that Power Stars are needed to power the spaceship and that he and his crew were attacked by Bowser earlier, with some Lumas having been thrown overboard. Lubba realizes that Bowser kidnapped Peach and offers his help in tracking him down and saving the princess. He offers to grant Mario temporary ownership of the spaceship in exchange for him bringing back more Power Stars. After Mario agrees, Lubba instructs his Lumas to rebuild the ship in honor of Mario, and they do so, rebuilding it in the shape of his head. Mario is thus given control of Starship Mario, which sets off on its journey towards the center of the universe to save Peach. As Mario travels the cosmos, explores more galaxies, and obtains more Power Stars, he meets new species and joins up with his companion Yoshi, the Toad Brigade from the original Super Mario Galaxy, and his brother Luigi, all of whom join Mario on the starship. As Mario and his allies travel the universe, he encounters Bowser's son Bowser Jr., who is once again aiding his father in his plan and hinders Mario's progress by fighting him twice, losing both times. Mario also encounters Bowser twice in his own galaxies, also managing to defeat him in battle both times, although he escapes after each defeat. All the while, Mario collects Grand Stars, which are enhanced forms of Power Stars that create portals allowing access to another part of the universe. After traveling through various galaxies throughout the universe collecting Power and Grand Stars, Mario and his allies finally reach Bowser's giant starship generator, which is draining energy from what appears to be a comet. Mario infiltrates the starship and engages Bowser in a third battle. Once again, Mario defeats Bowser and causes him to revert to his normal size and fall to his presumed death. Just as he falls, the last Grand Star appears. Before Mario can grab it, Bowser emerges, having survived the fall. 
He consumes the Grand Star, once again increasing his size and making him more powerful. A final battle ensues, in which Mario manages to finally defeat Bowser by ground-pounding meteorites onto him, causing him to once again shrink and fall into the abyss. Mario grabs the last Grand Star and saves Peach. They return to Starship Mario, and Rosalina and her Comet Observatory from the first game appear before Starship Mario. Rosalina thanks Mario for watching over Baby Luma, who then returns to the Comet Observatory, taking Mario's hat with him. Mario and his friends return to the Mushroom Kingdom and celebrate their victory, whereas Bowser is revealed to once again have survived. However, he is enraged at having been shrunken down to a comically small size. The game ends with Starship Mario flying above Peach's castle, with the Comet Observatory streaking across the sky. Development After Nintendo completed Super Mario Galaxy, Shigeru Miyamoto approached the development team and suggested that a follow-up be produced. The game was originally planned simply to offer variations on the original game's planets under the title "Super Mario Galaxy More", and was dubbed "Super Mario Galaxy 1.5" during early development, with a projected development time of approximately a year. The first elements that were implemented were anything that had been scrapped from the original game, either to ensure game balance or simply because of time constraints, such as Yoshi and the concept of a planet shaped like Mario's head. Over time, more and more new elements and ideas were brought into the game, and it was decided that the game would be a fleshed-out sequel rather than a slightly modified follow-up. Thus, development took two and a half years. Koichi Hayashida and Takeshi Hayakawa served as the director and lead programmer, respectively. Hayakawa created a development tool that allowed different staff members, including visual and sound designers, to easily design and create stages without waiting for programmers; many of these stages were incorporated into the final game. In order to help distinguish Super Mario Galaxy 2 from its predecessor, the staff originally wanted the whole game to revolve around the concept of "switching", in which the game's environments would dramatically change under certain conditions. This concept proved particularly difficult to implement full-scale, so it was relegated to only certain levels. Another idea that came up early on was the inclusion of cameos by other Nintendo characters (specifically Donkey Kong and Pikmin). The idea, however, was nixed by Miyamoto, who stated that Pikmin characters wouldn't work within the Mario universe and that there was no reason for other such cameos. Game tutorials were confined to an optional system called the "Tip Network" in order to benefit players already familiar with the original game. Miyamoto compared Super Mario Galaxy 2 to The Legend of Zelda: Majora's Mask, in that both games use the same engines as their predecessors yet build upon their foundations. The game was revealed at E3 2009 on June 2. In Miyamoto's private conference, it was stated that the game was very far along in development, but its release was held back to mid-2010 because of New Super Mario Bros. Wii's release in late 2009. Miyamoto also stated that the game has 95–99% new features, with the rest being previous features introduced in Super Mario Galaxy. 
With regard to the original game, Nintendo of America President and CEO Reggie Fils-Aimé stated in an interview that the sequel would be more challenging, and Miyamoto said in a Wired interview that the game would have less focus on plot. Miyamoto initially hinted that the game might incorporate the "Super Guide" feature introduced in New Super Mario Bros. Wii, and this was confirmed by Nintendo's Senior Manager of Product Marketing, Bill Trinen, who claimed that the feature was implemented differently compared to what New Super Mario Bros. Wii offered. The feature is called the Cosmic Guide, in which the Cosmic Spirit (Rosalina) takes control of Mario. The game made its playable debut at the Nintendo Media Summit 2010 on February 24, 2010, when a second trailer for the game was released and its North American release date of May 23, 2010, was finally announced. The Japanese, European, and Australian versions of the game came packaged with an instructional DVD manual explaining the basic controls, as well as showing advanced play. The voice actors from Super Mario Galaxy reprise their roles for the sequel, including Scott Burns (who voiced Bowser in previous games) and Dex Manley (who played Lubba and Lakitu). In January 2015, Nintendo president Satoru Iwata announced at a Nintendo Direct presentation that Super Mario Galaxy 2, alongside other Wii games such as Punch-Out!! and Metroid Prime: Trilogy, would be re-released for download on the Wii U's Nintendo eShop. It was released on January 14, 2015. Music As with the original Super Mario Galaxy, Super Mario Galaxy 2 features a musical score written for and performed by a symphony orchestra (known as the Mario Galaxy Orchestra in the credits). Early in the development process, when the concept of "Super Mario Galaxy 1.5" was being considered, there were no plans to use different music from the first Super Mario Galaxy. However, as the game evolved, the sound team, headed by Mahito Yokota, realized they needed new music that fit the new gameplay mechanics being added. Although they were hesitant to use a symphony orchestra again because of recording difficulties, general producer Shigeru Miyamoto gave permission immediately – according to Yokota, Miyamoto felt that players would be expecting an orchestral soundtrack. Miyamoto also apparently suggested that players would want to hear arrangements from Super Mario Galaxy, which is why the soundtrack is a mixture of brand new pieces and arrangements of themes from the original Galaxy as well as many past installments in the Mario series, such as Super Mario World and Super Mario 64. Ryo Nagamatsu, who had worked previously on Mario Kart Wii, Wii Sports Resort, and New Super Mario Bros. Wii, contributed nine pieces to the soundtrack. Koji Kondo recruited sixty musicians for the orchestra, ten more than the number used for the original game's score, with an additional ten musicians providing a big band style of music with trumpets, trombones, saxophones, and drums, for a grand total of seventy players. The orchestral performances were conducted by Taizo Takemoto, renowned for his work with the Super Smash Bros. Concert in 2002. Kondo served as a supervisor while also contributing five pieces to the soundtrack. The soundtrack was made available to Japanese Club Nintendo members as a 2-disc set with seventy songs taken from the game. 
Reception Critical reception Like its predecessor, Super Mario Galaxy 2 was widely acclaimed by major video game critics, with numerous reviews praising the game for its creativity and technical improvements over the original. Most reviewers agreed that the game either lived up to or surpassed the original Super Mario Galaxy. It has an average critic score of 97% at GameRankings and 97/100 at Metacritic, making it one of the highest-rated games on the sites alongside its predecessor. Tom McShea from GameSpot called it a "new standard for platformers", giving it a perfect 10 and making it the seventh game in the site's history to earn that score. Other perfect scores came from Edge, which stated "This isn't a game that redefines the genre: this is one that rolls it up and locks it away," and IGN's Craig Harris, who felt that the game "perfectly captures that classic videogame charm, the reason why most of us got into gaming from the start". IGN later placed Super Mario Galaxy 2 fourth on their "Top Modern Games" list and listed it as the greatest Wii game of all time. The Escapist editor Susan Arendt echoed this view by stating it "doesn't tinker with the established formula very much, but we didn't really want it to", while GameTrailers commented that "there's something tremendous for just about everyone and games that we can truly recommend to almost everyone are rare". Ryan Scott at GameSpy regards it as a much better game than the first Super Mario Galaxy, stating, "For a series that's explored every conceivable angle of its genre, the Mario games keep coming up with ways to challenge our notions of what a platformer can and should do." Giant Bomb's Ryan Davis particularly praised the improved level designs, commenting that the designers were "bolder" and "more willing to take some weird risks with the planetoids and abstract platforming that set the tone in the original Galaxy", while Chris Kohler from Wired commented that the level concepts alone "could be made into full games on their own". Additionally, 1UP.com's Justin Haywald described the expanded soundtrack as "sweeping". X-Play editor Andrew Pfister awarded Super Mario Galaxy 2 a 5/5, calling it "the culmination of 20 years of Mario gaming into one fantastically-designed and creative platformer". Despite this praise, some critics raised complaints over the increased difficulty and the game's similarity to the original Super Mario Galaxy. Chris Scullion from Official Nintendo Magazine called it the "new best game on Wii" but said it lacked the original's impact (though he admitted that matching that impact would have been extremely difficult, given the quality of the original). Game Informer editor Matt Helgeson was concerned that some of the challenges were potentially "frustrating", particularly towards the end of the game; similarly, Ben PerLee from GamePro remarked that the "increased difficulty and high proficiency requirement may turn new fans off". However, Worthplaying editor Chris DeAngelus praised the game's difficulty, stating that "there are very few sequences where death will feel like a result of bad design instead of player error, which helps keep the frustration down". McShea opined that the game is "much more streamlined than its predecessor" and therefore "the best thing that can be said about the story is that it mostly stays in the background". 
Kohler acknowledged that the reduced focus on story "was done with the intent of keeping things laser-focused on the gameplay" but mentioned that "Galaxy showed that the Mario team has some genuinely solid storytelling ability, and they implemented it in a way that didn't distract from the gameplay" and that "in this case it feels like a waste of talent." Sales In Japan, Super Mario Galaxy 2 sold 143,000 copies on its first day of release and 340,000 copies in its first week, about 90,000 more than the first Super Mario Galaxy sold in the same amount of time. In North America, the game sold 650,000 copies during the month of May 2010. In the United Kingdom, Super Mario Galaxy 2 was the third best-selling game among multiplatform releases and the best-selling single-platform release for the week ending June 26, 2010. As of July 16, 2010, the game had sold 1 million copies within the United States. As of April 2011, Super Mario Galaxy 2 had sold 6.36 million copies worldwide. Awards Super Mario Galaxy 2 received Game of the Year 2010 awards from Nintendo Power, GamesMaster, Official Nintendo Magazine, Edge, GamesTM, Destructoid, and Metacritic. It was named best "Wii Game of the Year" by IGN, GameTrailers, GameSpot, 1UP.com, and many other media outlets. As of December 2010, IGN ranked Super Mario Galaxy 2 the number 1 Wii game, overtaking its predecessor. In the March 2012 issue of Official Nintendo Magazine, the publication named Super Mario Galaxy 2 the 'Greatest Nintendo Game Ever Made', ranking it at #1 out of 100. The game was nominated for Best Wii Game at the Spike TV Video Game Awards 2010. It was also nominated for "Favorite Video Game" at the 2011 Kids' Choice Awards, but lost to Just Dance 2. During the 14th Annual Interactive Achievement Awards, the Academy of Interactive Arts & Sciences nominated Super Mario Galaxy 2 for "Outstanding Achievement in Gameplay Engineering". Notes References External links (archived) Super Mario Galaxy 2 at nintendo.com 2010 video games 3D platformers Asymmetrical multiplayer video games BAFTA winners (video games) Multiplayer and single-player video games Science fantasy video games Galaxy 2 Video game sequels Video games developed in Japan Video games produced by Takashi Tezuka Video games scored by Koji Kondo Video games set in outer space Video games set on fictional planets Video games scored by Mahito Yokota Video games scored by Ryo Nagamatsu Wii games Wii games re-released on the Nintendo eShop Wii-only games
Super Mario Galaxy 2
Physics
4,745
866,638
https://en.wikipedia.org/wiki/False%20precision
False precision (also called overprecision, fake precision, misplaced precision, and spurious precision) occurs when numerical data are presented in a manner that implies better precision than is justified; since precision is a limit to accuracy (in the ISO definition of accuracy), this often leads to overconfidence in the accuracy, known as precision bias. Overview Madsen Pirie defines the term "false precision" in a more general way: when exact numbers are used for notions that cannot be expressed in exact terms. For example, "We know that 90% of the difficulty in writing is getting started." Often false precision is abused to produce an unwarranted confidence in a claim: "our mouthwash is twice as good as our competitor's". In science and engineering, convention dictates that unless a margin of error is explicitly stated, the number of significant figures used in the presentation of data should be limited to what is warranted by the precision of those data. For example, if an instrument can be read to tenths of a unit of measurement, results of calculations using data obtained from that instrument can only be confidently stated to the tenths place, regardless of what the raw calculation returns or whether other data used in the calculation are more accurate. Even outside these disciplines, there is a tendency to assume that all the non-zero digits of a number are meaningful; thus, providing excessive figures may lead the viewer to expect better precision than exists. In contrast, it is good practice to retain more significant figures than this in the intermediate stages of a calculation, in order to avoid accumulated rounding errors. False precision commonly arises when high-precision and low-precision data are combined, when using an electronic calculator, and in conversion of units. Examples False precision is the gist of numerous variations of a joke which can be summarized as follows: A tour guide at a museum says a dinosaur skeleton is 100,000,005 years old, because an expert told him that it was 100 million years old when he started working there 5 years ago. If a car's speedometer indicates a speed of 60 mph, converting it to 96.56064 km/h makes it seem like the measurement was very precise, when in fact it was not. Assuming the speedometer is accurate to 1 mph, a more appropriate conversion is 97 km/h. Measures that rely on statistical sampling, such as IQ tests, are often reported with false precision. See also Arithmetic underflow Limit of detection Precision bias Propagation of uncertainty Round-off error Rounding Significant figures References Arithmetic Numerical analysis
False precision
Mathematics
534
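The speedometer example in the entry above can be made concrete; a minimal Python sketch, where round_sig is a hypothetical helper that rounds to a chosen number of significant figures:

    from math import floor, log10

    def round_sig(x, sig):
        # Round x to `sig` significant figures
        if x == 0:
            return 0.0
        return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

    mph = 60                  # speedometer reading, accurate to about 1 mph
    kmh = mph * 1.609344      # exact conversion factor yields 96.56064 km/h
    print(kmh)                # false precision: implies accuracy the reading lacks
    print(round_sig(kmh, 2))  # 97.0 km/h matches the instrument's precision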
5,055,268
https://en.wikipedia.org/wiki/36%20Capricorni
36 Capricorni is a single, yellow-hued star in the southern constellation of Capricornus. It is visible to the naked eye with an apparent visual magnitude of +4.50. The distance to this star, as determined from its annual parallax shift, is around 171 light years. It is currently moving closer, with a heliocentric radial velocity of −21 km/s, and will make its closest approach in about 685,000 years. This is an evolved G-type giant star with a stellar classification of G7IIIb Fe–1, where the suffix notation indicates an underabundance of iron in the spectrum. At the age of 770 million years, it has become a red clump giant, meaning it is generating energy through helium fusion at its core. It has 2.26 times the mass of the Sun but has expanded to 8.6 times the Sun's radius. It is radiating 43 times the Sun's luminosity from its photosphere at an effective temperature of 5,047 K. Chinese name In Chinese astronomy, the asterism meaning Twelve States represents twelve ancient states of the Spring and Autumn period and the Warring States period, consisting of 36 Capricorni, φ Capricorni, ι Capricorni, 37 Capricorni, 35 Capricorni, χ Capricorni, θ Capricorni, 30 Capricorni, 33 Capricorni, ζ Capricorni, 19 Capricorni, 26 Capricorni, 27 Capricorni, 20 Capricorni, η Capricorni and 21 Capricorni. Consequently, 36 Capricorni itself represents the state Jin (or Tsin), together with κ Herculis in the Right Wall of the Heavenly Market Enclosure asterism. References G-type giants Horizontal-branch stars Capricornus Capricorni, b Durchmusterung objects Capricorni, 36 204381 106039 8213
36 Capricorni
Astronomy
429
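The distance quoted in the entry above follows from the standard parallax relation d[pc] = 1/p[arcsec]; a minimal Python sketch, where the ~19 mas parallax is back-derived from the stated 171 light-years because the measured value is elided in the text:

    LY_PER_PC = 3.26156  # light-years per parsec

    def parallax_mas_to_ly(p_mas):
        # d [pc] = 1000 / parallax [mas]; then convert parsecs to light-years
        return (1000.0 / p_mas) * LY_PER_PC

    print(parallax_mas_to_ly(19.0))  # about 171.7 ly, consistent with the entry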
57,783,055
https://en.wikipedia.org/wiki/Vsevolod%20Perekalin
Vsevolod Vasilyevich Perekalin (27 February 1913, Saint Petersburg – 7 January 1998, Saint Petersburg) was a Soviet and Russian organic chemist and Doktor nauk. He created the drug known as Phenibut. Biography His father was a military physician. He was a student of Academician . In 1940, he defended his Candidate's dissertation at the N.D. Zelinsky Institute of Organic Chemistry. In 1949, he defended his doctoral dissertation. From 1950 to 1992, Perekalin headed the Department of Organic Chemistry at Herzen University, where he organized the Faculty of Chemistry. He taught at Herzen University for 48 years. In 1995, he was appointed a Soros Professor. He had a son, Pyotr. Perekalin was the author of more than 350 scientific papers. Awards and honors Ushinsky Medal Honored Science Worker of the RSFSR (1967) Latvian SSR State Prize Order of the Patriotic War, 2nd class Order of the Red Banner of Labour References External links 1913 births 1998 deaths Scientists from Saint Petersburg Recipients of the Order of the Red Banner of Labour Russian organic chemists Soviet organic chemists
Vsevolod Perekalin
Chemistry
240
47,689,010
https://en.wikipedia.org/wiki/Judith%20Q.%20Longyear
Judith Querida Longyear (20 September 1938 – 13 December 1995) was an American mathematician and professor whose research interests included graph theory and combinatorics. Longyear was the second woman ever to earn a mathematics Ph.D. from Pennsylvania State University, where she studied under the supervision of Sarvadaman Chowla and wrote a thesis entitled Tactical Configurations. Longyear taught mathematics at several universities, including the California Institute of Technology, Dartmouth College, and Wayne State University. She worked on nested block designs and Hadamard matrices. References Graph theorists 20th-century American mathematicians 1938 births 1995 deaths 20th-century American women mathematicians
Judith Q. Longyear
Mathematics
126
41,808,229
https://en.wikipedia.org/wiki/Tetranitratoborate
Tetranitratoborate is an anion composed of boron with four nitrate groups. It has the formula [B(NO3)4]−. It can form salts with large cations, such as tetramethylammonium tetranitratoborate or tetraethylammonium tetranitratoborate. The ion was first discovered by C. R. Guibert and M. D. Marshall in 1966 after failed attempts to make neutral (non-ionic) boron nitrate, B(NO3)3; if that compound exists, it is unstable above −78 °C. Other related ions are the slightly more stable tetraperchloratoborates, with perchlorate groups instead of nitrate, and tetranitratoaluminate ([Al(NO3)4]−), with the next atom down the periodic table, aluminium, instead of boron. Formation Tetramethylammonium chloride reacts with boron trichloride to make tetramethylammonium tetrachloroborate. The tetrachloroborate is then reacted with dinitrogen tetroxide at around −20 °C to form tetramethylammonium tetranitratoborate, along with gaseous by-products. Another route to tetranitratoborate salts is to shake a metal nitrate with boron trichloride in chloroform at 20 °C for several days; trichloronitratoborate is an unstable intermediate. Properties The infrared spectrum of tetramethylammonium tetranitratoborate includes a prominent line at 1,612 cm−1, with shoulders at 1,582 and 1,626 cm−1, attributed to ν4. Also prominent are lines at 1,297 and 1,311 cm−1 attributed to ν1; these vibrations are due to the nitrate being bonded via one oxygen. The density of tetramethylammonium tetranitratoborate is 1.555 g·cm−3. It is colourless and crystalline. As tetramethylammonium tetranitratoborate is heated, it undergoes some form of transition between 51 and 62 °C. It decomposes above 75 °C, producing gas. Above 112 °C the decomposition is exothermic, and a solid is left if it is heated to 160 °C. Tetramethylammonium tetranitratoborate is insoluble in cold water but slightly soluble in hot water. It does not react with water. It also dissolves in liquid ammonia, acetonitrile, methanol, and dimethylformamide. It reacts with liquid sulfur dioxide. At room temperature tetramethylammonium tetranitratoborate is stable for months. It does not explode on impact. Alkali metal tetranitratoborates are unstable at room temperature and decompose. 1-Ethyl-3-methyl-imidazolium tetranitratoborate was discovered in 2002. It is an ionic liquid that turns solid at −25 °C. References Nitrates Boron compounds Borates Anions
Tetranitratoborate
Physics,Chemistry
592
20,260,573
https://en.wikipedia.org/wiki/Iraqi%20biological%20weapons%20program
Saddam Hussein (1937–2006) began an extensive biological weapons (BW) program in Iraq in the early 1980s, despite having signed (but not ratified until 1991) the Biological Weapons Convention (BWC) of 1972. Details of the BW program and a chemical weapons program surfaced after the Gulf War (1990–91) during the disarmament of Iraq under the United Nations Special Commission (UNSCOM). By the end of the war, program scientists had investigated the BW potential of five bacterial strains, one fungal strain, five types of virus, and four toxins. Of these, three—anthrax, botulinum and aflatoxin—had proceeded to weaponization for deployment. Because of the UN disarmament program that followed the war, more is known today about the once-secret bioweapons program in Iraq than that of any other nation. The program no longer existed when the George W. Bush administration cited it as justification for its 2003 invasion of Iraq and the subsequent Iraq War. The program Startup and foreign suppliers In the early 1980s, five German firms supplied equipment to manufacture botulin toxin and mycotoxin to Iraq. Iraq's State Establishment for Pesticide Production (SEPP) also ordered culture media and incubators from Germany's Water Engineering Trading. Strains of dual-use biological material from France also helped advance Iraq's biological warfare program. From the United States, the non-profit American Type Culture Collection and the U.S. Centers for Disease Control sold or sent biological samples to Iraq up until 1989, which Iraq claimed to need for medical research. These materials included anthrax, West Nile virus and botulism, as well as Brucella melitensis, and Clostridium perfringens. Some of these materials were used for Iraq's biological weapons research program, while others were used for vaccine development. In delivering these materials "The CDC was abiding by World Health Organization guidelines that encouraged the free exchange of biological samples among medical researchers..." according to Thomas Monath, CDC lab director. It was a request "which we were obligated to fulfill," as described in WHO and UN treaties. Facilities, agents and production Iraq's BW facilities included its main biowarfare research center at Salman Pak (just south of Baghdad), the main bioweapons production facility at Al Hakum (the "Single-Cell Protein Production Plant") and the viral biowarfare research site at Al Manal (the "Foot and Mouth Disease Center"). The Al Hakum facility began mass production of weapons-grade anthrax in 1989, eventually producing 8,000 liters or more (the 8,000 liter figure is based on declared amounts). Iraq officially acknowledged that it had worked with several species of bacterial pathogen, including Bacillus anthracis, Clostridium botulinum and Clostridium perfringens (gas gangrene) and several viruses (including enterovirus 17 [human conjunctivitis], rotavirus and camelpox). The program also purified biological toxins, such as botulinum toxin, ricin and aflatoxin. After 1995, it was learned that, in all, Iraq had produced 19,000 liters of concentrated botulinum toxin (nearly 10,000 liters filled into munitions), 8,500 liters of concentrated anthrax (6,500 liters filled into munitions) and 2,200 liters of aflatoxin (1,580 liters filled into munitions). In total, the program grew a half million liters of biological agents. 
Human experimentation During UN inspections in 1998, it emerged that Hussein had prisoners tied to stakes and bombarded with anthrax and chemical weapons for experimental purposes. These experiments began in the 1980s during the Iran–Iraq War after initial experiments on sheep and camels. Dozens of prisoners are believed to have died in agony during the program. According to an article in the London Sunday Times: In one incident, Iranian prisoners of war are said to have been tied up and killed by bacteria from a shell detonated nearby. Others were exposed to an aerosol of anthrax sprayed into a chamber while doctors watched behind a glass screen. Two British-trained scientists have been identified as leading figures in the programme. … According to Israeli military intelligence sources, 10 Iranian prisoners of war were taken to a location near Iraq's border with Saudi Arabia. They were lashed to posts and left helpless as an anthrax bomb was exploded by remote control 15 yards away. All died painfully from internal haemorrhaging. In another experiment, 15 Kurdish prisoners were tied up in a field while shells containing camel pox, a mild virus, were dropped from a light aircraft. The results were slower but the test was judged a success; the prisoners fell ill within a week. Iraqi sources say some of the cruellest research has been conducted at an underground facility near Salman Pak, southwest of Baghdad. Here, the sources say, experiments with biological and chemical agents were carried out first on dogs and cats, then on Iranian prisoners. The prisoners were secured to a bed in a purpose-built chamber, into which lethal agents, including anthrax, were sprayed from a high-velocity device mounted in the ceiling. Medical researchers viewed the results through fortified glass. Details of the experiments were known only to Saddam and an inner circle of senior government officials and Iraqi scientists educated in the West. … The facility, which is understood to have been built by German engineers in the 1980s, has been at the centre of Iraq's experiments on "human guinea pigs" for more than 10 years, according to Israeli military sources. Bioweaponeers Iraqi scientist Nassir al-Hindawi was described by United Nations inspectors as the "father of Iraq's biological weapons program". Two of the leading researchers in the program studied in Britain. Rihab al-Taha ("Dr. Germ"), educated at the University of East Anglia, was head of Iraq's military research and development institute. Another scientist received a doctorate in molecular biology from the University of Edinburgh. U.S. officials alleged that a third scientist — Huda Salih Mahdi Ammash ("Mrs. Anthrax", "Chemical Sally"), who was trained at the University of Missouri — helped to rebuild Iraq's BW program in the mid-1990s after the Gulf War. Both al-Taha and Ammash were captured by U.S. forces after the 2003 invasion of Iraq, but both were released in 2005 after they were among those an American-Iraqi board found to be no longer security threats. They had no charges filed against them. Consequences of the program 1991 US response During the Gulf War, US and other intelligence reports had suggested that Iraq was operating a BW program. Coalition troops trained with protective gear and stockpiled the antibiotic ciprofloxacin for use as post-exposure prophylaxis against anthrax. Approximately 150,000 US troops received the U.S. 
Food and Drug Administration–licensed anthrax vaccine (BioThrax), and 8,000 received a botulinum toxoid vaccine administered under FDA investigational new drug (IND) provisions. Although Iraq had loaded anthrax, botulinum toxin, and aflatoxin bio-agents into missiles and artillery shells in preparing for the war, and although these munitions were deployed to four locations in Iraq, they were never used. Post-war inspections In August 1991, the UN carried out its first inspection of Iraq's BW capabilities in the aftermath of the Gulf War. On 2 August 1991, representatives of the Iraqi government announced to leaders of UNSCOM's "Team 7" that they had conducted research into the offensive use of B. anthracis, botulinum toxins, and Clostridium perfringens toxins. Post-war inspections by UNSCOM, however, were confounded by misinformation and obfuscation. After Iraqi General Hussein Kamel al-Majid defected to Jordan in August 1995, the Iraqi government further disclosed that it had operated a robust BW program at six major sites since the 1980s. It was revealed that the Iraqi program conducted basic research on B. anthracis, rotavirus, camelpox virus, aflatoxin, botulinum toxins, mycotoxins, and an anticrop agent (wheat cover smut). It tested several delivery systems including aerial spray tanks and drone aircraft. The Iraqi government had weaponized 6,000 liters of B. anthracis spores and 12,000 liters of botulinum toxin in aerial bombs, rockets, and missile warheads before the outbreak of war in 1991. These bio-weapons were deployed but never used. Non-use by Saddam After Kamel's defection, it became known that in December 1990 the Iraqis had filled 100 R-400 bombs with botulinum toxin, 50 with anthrax, and 16 with aflatoxin. In addition, 13 Al Hussein (SCUD) warheads were filled with botulinum toxin, 10 with anthrax, and 2 with aflatoxin. These weapons were deployed in January 1991 to four locations for use against Coalition forces. Why Saddam Hussein did not use these biological weapons in 1991 is unclear, but the presumption has been that he was concerned about provoking massive retaliation. Other plausible factors include the perceived ineffectiveness of the untested delivery and dispersal systems, the probable ineffectiveness of liquid slurries resulting from poor aerosolization, and the potential hazards to the Iraqi troops themselves, as they lacked the protective equipment and training available to Coalition forces. Several defectors (see Khidir Hamza) have claimed that these weapons were intended only as "weapons of last resort" in case the Coalition stormed the gates of Baghdad. Since this never happened, Saddam found their use unnecessary. 2003 invasion of Iraq The Iraqis claimed to have destroyed their biological arsenal immediately after the 1991 war, but they did not provide confirmatory evidence. A covert military research and development program continued for another four years, with the intent of resuming agent production and weapons manufacture after the end of UN sanctions. Basic infrastructure was preserved, and research on producing dried agent was conducted under the guise of biopesticide production at Al Hakum until its destruction by UNSCOM inspectors in 1996. The same year, operational portions of the facilities at Salman Pak and Al Manal were also supposedly destroyed, either by the Iraqis themselves or under direct UNSCOM supervision. But UNSCOM inspectors never received full cooperation from the Hussein regime and they were finally expelled from Iraq in 1998.
International concerns led to renewed inspections in 2002 under UN Security Council Resolution 1441, and during the 2003 invasion of Iraq these facilities were again targeted by the U.S. military as potentially still operational. President Bush cited the non-cooperation with inspectors as a major justification for military action. The extent of Iraq's BW program between 1998, when UNSCOM left Iraq, and the U.S. Coalition invasion in March 2003 remains unknown. Current information indicates the discovery of a clandestine network of biological laboratories operated by the Iraqi Intelligence Service (Mukhabarat), a prison laboratory complex possibly used for human experimentation, an Iraqi scientist's private culture collection with a strain of possible BW interest, and new research activities involving Brucella and Crimean-Congo hemorrhagic fever virus. Despite diligent investigations since 2003, evidence for the existence of additional BW stockpiles in Iraq has not been documented. 2005 Iraq Survey Group report In 2005, the Iraq Survey Group — an international group composed of civilian and military experts — concluded that the Iraqi military BW program had been abandoned during 1995 and 1996 because of fear that discovery of continued activity would result in severe political repercussions including the extension of UN sanctions. However, they concluded, Hussein had perpetuated ambiguity regarding a possible program as a strategic deterrent against Iran. Other conclusions were that the Mukhabarat continued to investigate toxins as tools of assassination, concealed its program from UNSCOM inspectors after the 1991 war, and reportedly conducted lethal human experimentation until 1994. Small-scale covert laboratories were maintained until 2003. See also Iraq and weapons of mass destruction Iraqi chemical weapons program History of biological warfare References Biological warfare Weapons of Iraq Military of Iraq Iran–Iraq War crimes 20th-century prisoner of war massacres
Iraqi biological weapons program
Biology
2,573
301,750
https://en.wikipedia.org/wiki/Magnus%20effect
The Magnus effect is a phenomenon that occurs when a spinning object is moving through a fluid. A lift force acts on the spinning object and its path may be deflected in a manner not present when it is not spinning. The strength and direction of the Magnus effect are dependent on the speed and direction of the rotation of the object. The Magnus effect is named after Heinrich Gustav Magnus, the German physicist who investigated it. The force on a rotating cylinder is an example of Kutta–Joukowski lift, named after Martin Kutta and Nikolay Zhukovsky (or Joukowski), mathematicians who contributed to the knowledge of how lift is generated in a fluid flow. Description The most readily observable case of the Magnus effect is when a spinning sphere (or cylinder) curves away from the arc it would follow if it were not spinning. The effect is often exploited by football (soccer) and volleyball players, baseball pitchers, and cricket bowlers. Consequently, the phenomenon is important in the study of the physics of many ball sports. It is also an important factor in the study of the effects of spinning on guided missiles—and has some engineering uses, for instance in the design of rotor ships and Flettner airplanes. Topspin in ball games is defined as spin about a horizontal axis perpendicular to the direction of travel that moves the top surface of the ball in the direction of travel. Under the Magnus effect, topspin produces a downward swerve of a moving ball, greater than would be produced by gravity alone. Backspin produces an upwards force that prolongs the flight of a moving ball. Likewise, side-spin causes swerve to either side, as seen during some baseball pitches, e.g. the slider. The overall behaviour is similar to that around an aerofoil (see lift force), but with a circulation generated by mechanical rotation rather than by the shape of the foil. In baseball, this effect is used to generate the downward motion of a curveball, in which the baseball is rotating forward (with 'topspin'). Participants in other sports played with a ball also take advantage of this effect. Physics The Magnus effect or Magnus force acts on a rotating body moving relative to a fluid. Examples include a "curve ball" in baseball or a tennis ball hit obliquely. The rotation alters the boundary layer between the object and the fluid. The force is perpendicular to the relative direction of motion and oriented towards the direction of rotation, i.e. the direction the "nose" of the ball is turning towards. The magnitude of the force depends primarily on the rotation rate, the relative velocity, and the geometry of the body; the magnitude also depends upon the body's surface roughness and the viscosity of the fluid. Accurate quantitative predictions of the force are difficult, but as with other examples of aerodynamic lift there are simpler, qualitative explanations: Flow deflection The diagram shows lift being produced on a back-spinning ball. The wake and trailing air-flow have been deflected downwards; according to Newton's third law of motion there must be a reaction force in the opposite direction. Pressure differences The air's viscosity and the surface roughness of the object cause the air to be carried around the object. This adds to the air velocity on one side of the object and decreases the velocity on the other side. Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure, implying that there is lower air pressure on one side than the other.
This pressure difference results in a force perpendicular to the direction of travel. Kutta–Joukowski lift On a cylinder, the force due to rotation is an example of Kutta–Joukowski lift. It can be analysed in terms of the vortex produced by rotation. The lift per unit length of the cylinder, F/L, is the product of the freestream velocity v (in m/s), the fluid density ρ (in kg/m3), and the circulation G due to viscous effects: F/L = ρvG, where the vortex strength (assuming that the surrounding fluid obeys the no-slip condition) is given by G = 2πωr², where ω is the angular velocity of the cylinder (in rad/s) and r is the radius of the cylinder (in m). Inverse Magnus effect In wind tunnel studies, (rough surfaced) baseballs show the Magnus effect, but smooth spheres do not. Further study has shown that certain combinations of conditions result in turbulence in the fluid on one side of the rotating body but laminar flow on the other side. These cases are called the inverse Magnus effect: the deflection is opposite to that of the typical Magnus effect. Magnus effect in potential flow Potential flow is a mathematical model of the steady flow of a fluid with no viscosity or vorticity present. For potential flow around a circular cylinder, it provides the following results: Non-spinning cylinder The flow pattern is symmetric about a horizontal axis through the centre of the cylinder. At each point above the axis and its corresponding point below the axis, the spacing of streamlines is the same, so velocities are also the same at the two points. Bernoulli's principle shows that, outside the boundary layers, pressures are also the same at corresponding points. There is no lift acting on the cylinder. Spinning cylinder Streamlines are more closely spaced immediately above the cylinder than below, so the air flows faster past the upper surface than past the lower surface. Bernoulli's principle shows that the pressure adjacent to the upper surface is lower than the pressure adjacent to the lower surface. The Magnus force acts vertically upwards on the cylinder. Streamlines immediately above the cylinder are curved with a radius little more than the radius of the cylinder. This means there is low pressure close to the upper surface of the cylinder. Streamlines immediately below the cylinder are curved with a larger radius than streamlines above the cylinder. This means there is higher pressure acting on the lower surface than on the upper. Air immediately above and below the cylinder is curving downwards, accelerated by the pressure gradient. A downwards force is acting on the air. Newton's third law predicts that the Magnus force and the downwards force acting on the air are equal in magnitude and opposite in direction. History The effect is named after German physicist Heinrich Gustav Magnus, who demonstrated the effect with a rapidly rotating brass cylinder and an air blower in 1852. In 1672, Isaac Newton had speculated on the effect after observing tennis players in his Cambridge college. In 1742, Benjamin Robins, a British mathematician, ballistics researcher, and military engineer, explained deviations in the trajectories of musket balls due to their rotation. Pioneering wind tunnel research on the Magnus effect was carried out with smooth rotating spheres in 1928. Lyman Briggs later studied baseballs in a wind tunnel, and others have produced images of the effect.
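As a quick numerical illustration of the Kutta–Joukowski relation above, here is a minimal Python sketch. It is not part of the source article, and the cylinder radius, spin rate, flow speed, and air density below are made-up illustrative values.

```python
# Evaluate the Kutta-Joukowski lift per unit length, F/L = rho * v * G,
# for a spinning cylinder, with vortex strength G = 2 * pi * omega * r^2
# (valid under the no-slip assumption stated in the text).
import math

def magnus_lift_per_length(rho, v, r, omega):
    """Return lift per unit span (N/m) for a cylinder of radius r (m),
    spinning at omega (rad/s) in a free stream of speed v (m/s)."""
    G = 2.0 * math.pi * omega * r ** 2  # vortex strength (circulation)
    return rho * v * G

# Illustrative numbers only: a 10 cm diameter cylinder in sea-level air.
rho, v, r, omega = 1.225, 10.0, 0.05, 100.0
print(f"F/L = {magnus_lift_per_length(rho, v, r, omega):.1f} N/m")  # about 19.2 N/m
```

Because the relation is linear in both ω and v, doubling either the spin rate or the flow speed doubles the lift per unit length.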
The studies show that a turbulent wake behind the spinning ball causes aerodynamic drag, and that there is a noticeable angular deflection in the wake, in the direction of spin. In sport The Magnus effect explains commonly observed deviations from the typical trajectories or paths of spinning balls in sport, notably association football, table tennis, tennis, volleyball, golf, baseball, and cricket. The curved path of a golf ball known as slice or hook is largely due to the ball's spin axis being tilted away from the horizontal by the combined effects of club face angle and swing path, causing the Magnus effect to act at an angle, moving the ball away from a straight line in its trajectory. Backspin (upper surface rotating backwards from the direction of movement) on a golf ball causes a vertical force that counteracts the force of gravity slightly, and enables the ball to remain airborne a little longer than it would were the ball not spinning: this allows the ball to travel farther than a ball not spinning about its horizontal axis. In table tennis, the Magnus effect is easily observed, because of the small mass and low density of the ball. An experienced player can place a wide variety of spins on the ball. Table tennis rackets usually have a surface made of rubber to give the racket maximum grip on the ball to impart a spin. In cricket, the Magnus effect contributes to the types of motion known as drift, dip and lift in spin bowling, depending on the axis of rotation of the spin applied to the ball. The Magnus effect is not responsible for the movement seen in conventional swing bowling, in which the pressure gradient is not caused by the ball's spin, but rather by its raised seam, and the asymmetric roughness or smoothness of its two halves; however, the Magnus effect may be responsible for so-called "Malinga Swing", as observed in the bowling of the swing bowler Lasith Malinga. In airsoft, a system known as hop-up is used to create a backspin on a fired BB, which greatly increases its range, using the Magnus effect in a similar manner to golf. In baseball, pitchers often impart different spins to the ball, causing it to curve in the desired direction due to the Magnus effect. The PITCHf/x system measures the change in trajectory caused by the Magnus effect in all pitches thrown in Major League Baseball. The match ball for the 2010 FIFA World Cup was criticised for exhibiting a different Magnus effect from previous match balls. The ball was described as having less Magnus effect and as a result flying farther but with less controllable swerve. In external ballistics The Magnus effect can also be found in advanced external ballistics. First, a spinning bullet in flight is often subject to a crosswind, which can be simplified as blowing from either the left or the right. In addition to this, even in completely calm air a bullet experiences a small sideways wind component due to its yawing motion. This yawing motion along the bullet's flight path means that the nose of the bullet points in a slightly different direction from the direction the bullet travels. In other words, the bullet "skids" sideways at any given moment, and thus experiences a small sideways wind component in addition to any crosswind component. The combined sideways wind component of these two effects causes a Magnus force to act on the bullet, which is perpendicular both to the direction the bullet is pointing and to the combined sideways wind.
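The scale of such spin-induced deflections can be illustrated with a toy calculation for a ball in free flight. The following Python sketch is not from the source article: it ignores drag and lumps the Magnus acceleration into a single assumed coefficient S, so the launch speed, spin rates, and S are all illustrative assumptions.

```python
# Toy 2-D flight of a ball under gravity plus a Magnus-type acceleration
# modelled as S * (omega x v), with spin omega perpendicular to the plane.
# Drag is ignored and S is an assumed lumped coefficient, so the numbers
# only illustrate the direction and rough size of the effect.
import math

def flight_range(speed, angle_deg, omega, S=2.0e-3, g=9.81, dt=1e-3):
    """Distance travelled before returning to launch height, in metres."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        ax = -S * omega * vy          # 2-D cross product: omega x v
        ay = S * omega * vx - g
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

for omega in (0.0, 60.0, -60.0):      # rad/s: no spin, backspin, topspin
    print(f"spin {omega:+5.0f} rad/s -> range {flight_range(25.0, 20.0, omega):5.1f} m")
```

Consistent with the sport examples above, backspin lengthens the flight and topspin shortens it; the exact ranges depend entirely on the assumed coefficient. The crosswind-driven bullet case continues below.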
In a very simple case that ignores various complicating factors, the Magnus force from the crosswind would cause an upward or downward force to act on the spinning bullet (depending on the left or right wind and rotation), causing deflection of the bullet's flight path up or down, thus influencing the point of impact. Overall, the effect of the Magnus force on a bullet's flight path itself is usually insignificant compared to other forces such as aerodynamic drag. However, it greatly affects the bullet's stability, which in turn affects the amount of drag, how the bullet behaves upon impact, and many other factors. The stability of the bullet is affected, because the Magnus effect acts on the bullet's centre of pressure instead of its centre of gravity. This means that it affects the yaw angle of the bullet; it tends to twist the bullet along its flight path, either towards the axis of flight (decreasing the yaw, thus stabilising the bullet) or away from the axis of flight (increasing the yaw, thus destabilising the bullet). The critical factor is the location of the centre of pressure, which depends on the flowfield structure, which in turn depends mainly on the bullet's speed (supersonic or subsonic), but also on the shape, air density and surface features. If the centre of pressure is ahead of the centre of gravity, the effect is destabilising; if the centre of pressure is behind the centre of gravity, the effect is stabilising. In aviation Some aircraft have been built to use the Magnus effect to create lift with a rotating cylinder instead of a wing, allowing flight at lower horizontal speeds. The earliest attempt to use the Magnus effect for a heavier-than-air aircraft was in 1910 by a US member of Congress, Butler Ames of Massachusetts. The next attempt was in the early 1930s by three inventors in New York state. Ship propulsion and stabilization Rotor ships use mast-like cylinders, called Flettner rotors, for propulsion. These are mounted vertically on the ship's deck. When the wind blows from the side, the Magnus effect creates a forward thrust. Thus, as with any sailing ship, a rotor ship can only move forwards when there is a wind blowing. The effect is also used in a special type of ship stabilizer consisting of a rotating cylinder mounted beneath the waterline and emerging laterally. By controlling the direction and speed of rotation, strong lift or downforce can be generated. The largest deployment of the system to date is in the motor yacht Eclipse. See also Air resistance Ball of the Century Bernoulli's principle Coandă effect Fluid dynamics Kite types Navier–Stokes equations Potential flow around a circular cylinder Reynolds number Tesla turbine References Further reading External links Magnus Cups, Ri Channel Video, January 2012 Analytic Functions, The Magnus Effect, and Wings at MathPages How do bullets fly? Ruprecht Nennstiel, Wiesbaden, Germany How do bullets fly? old version (1998), by Ruprecht Nennstiel Anthony Thyssen's Rotor Kites page, with plans on how to build a model Harnessing wind power using the Magnus effect Researchers Observe Magnus Effect in Light for First Time Quantum Maglift Video:Applications of the Magnus effect Fluid dynamics Physical phenomena
Magnus effect
Physics,Chemistry,Engineering
2,765
14,457,331
https://en.wikipedia.org/wiki/Lead%E2%80%93lead%20dating
Lead–lead dating is a method for dating geological samples, normally based on 'whole-rock' samples of material such as granite. For most dating requirements it has been superseded by uranium–lead dating (U–Pb dating), but in certain specialized situations (such as dating meteorites and the age of the Earth) it is more important than U–Pb dating. Decay equations for common Pb–Pb dating Three stable "daughter" Pb isotopes result from the radioactive decay of uranium and thorium in nature; they are 206Pb, 207Pb, and 208Pb. 204Pb is the only non-radiogenic lead isotope, and therefore it is not one of the daughter isotopes. These daughter isotopes are the final decay products of U and Th radioactive decay chains beginning from 238U (half-life 4.5 Gy), 235U (half-life 0.70 Gy) and 232Th (half-life 14 Gy) respectively. With the progress of time, the final decay product accumulates as the parent isotope decays at a constant rate. This shifts the ratio of radiogenic Pb versus non-radiogenic 204Pb (207Pb/204Pb or 206Pb/204Pb) in favor of radiogenic 207Pb or 206Pb. This can be expressed by the following decay equations:

(207Pb/204Pb)P = (207Pb/204Pb)I + (235U/204Pb)(e^(λ235·t) − 1)

(206Pb/204Pb)P = (206Pb/204Pb)I + (238U/204Pb)(e^(λ238·t) − 1)

where the subscripts P and I refer to present-day and initial Pb isotope ratios, λ235 and λ238 are decay constants for 235U and 238U, and t is the age. The concept of common Pb–Pb dating (also referred to as whole rock lead isotope dating) was deduced through mathematical manipulation of the above equations. It was established by dividing the first equation above by the second, under the assumption that the U/Pb system was undisturbed. This rearrangement formed:

[(207Pb/204Pb)P − (207Pb/204Pb)I] / [(206Pb/204Pb)P − (206Pb/204Pb)I] = (1/137.88) · (e^(λ235·t) − 1) / (e^(λ238·t) − 1)

where the factor of 137.88 is the present-day 238U/235U ratio. As is evident from the equation, the initial Pb isotope ratios and the age of the system are the two factors which determine the present-day Pb isotope compositions. If the sample behaved as a closed system then graphing the difference between the present and initial ratios of 207Pb/204Pb versus 206Pb/204Pb should produce a straight line. The distance the point moves along this line is dependent on the U/Pb ratio, whereas the slope of the line depends on the time since Earth's formation. This was first established by Nier et al., 1941. The development of the Geochron database The development of the Geochron database was mainly attributed to Clair Cameron Patterson's application of Pb–Pb dating to meteorites in 1956. The Pb ratios of three stony and two iron meteorites were measured. The dating of meteorites would then help Patterson in determining not only the age of these meteorites but also the age of Earth's formation. By dating meteorites Patterson was directly dating the age of various planetesimals. Assuming the process of elemental differentiation is the same on Earth as on other planets, the core of these planetesimals would be depleted of uranium and thorium, while the crust and mantle would contain higher U/Pb ratios. As planetesimals collided, various fragments were scattered and produced meteorites. Iron meteorites were identified as pieces of the core, while stony meteorites were segments of the mantle and crustal units of these various planetesimals. Samples of iron meteorite from Canyon Diablo (Meteor Crater), Arizona, were found to have the least radiogenic composition of any material in the solar system. The U/Pb ratio was so low that no radiogenic decay was detected in the isotopic composition. As illustrated in figure 1, this point defines the lower (left) end of the isochron.
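The isochron relation above can be inverted numerically to recover an age from a measured 207Pb/206Pb slope. The following Python sketch is an illustration added here, not part of the source article; the decay constants are the standard values, and the example slope of 0.617 is an assumed input chosen to land near the meteorite age discussed below.

```python
# Solve (1/137.88) * (exp(l235*t) - 1) / (exp(l238*t) - 1) = slope for t.
# The left-hand side grows monotonically with t, so bisection suffices.
import math

L235 = 9.8485e-10    # 235U decay constant, 1/yr
L238 = 1.55125e-10   # 238U decay constant, 1/yr

def isochron_slope(t_years):
    return math.expm1(L235 * t_years) / math.expm1(L238 * t_years) / 137.88

def age_from_slope(slope, lo=1.0e6, hi=6.0e9):
    for _ in range(100):                 # bisection on the age in years
        mid = 0.5 * (lo + hi)
        if isochron_slope(mid) < slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"slope 0.617 -> t = {age_from_slope(0.617) / 1e9:.2f} Gyr")  # about 4.55 Gyr
```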
Therefore, troilite found in Canyon Diablo represents the primeval lead isotope composition of the solar system, dating back about 4.55 billion years. Stony meteorites, however, exhibited very high 207Pb/204Pb versus 206Pb/204Pb ratios, indicating that these samples came from the crust or mantle of the planetesimal. Together, these samples define an isochron, whose slope gives the age of meteorites as 4.55 Byr. Patterson also analyzed terrestrial sediment collected from the ocean floor, which was believed to be representative of the Bulk Earth composition. Because the isotope composition of this sample plotted on the meteorite isochron, it suggested that Earth had the same age and origin as meteorites, therefore solving the age of the Earth and giving rise to the name 'geochron'. (Figure: Lead isotope isochron diagram used by C. C. Patterson to determine the age of the Earth in 1956. The animation shows progressive growth over 4550 million years (Myr) of the lead isotope ratios for two stony meteorites (Nuevo Laredo and Forest City) from initial lead isotope ratios matching those of the Canyon Diablo iron meteorite.) Precise Pb–Pb dating of meteorites Chondrules and calcium–aluminium-rich inclusions (CAIs) are spherical particles that make up chondritic meteorites and are believed to be the oldest objects in the Solar System. Hence precise dating of these objects is important to constrain the early evolution of the Solar System and the age of the Earth. The U–Pb dating method can yield the most precise ages for early Solar System objects due to the optimal half-life of 238U. However, the absence of zircon or other uranium-rich minerals in chondrites, and the presence of initial non-radiogenic Pb (common Pb), rules out direct use of the U–Pb concordia method. Therefore, the most precise dating method for these meteorites is the Pb–Pb method, which allows a correction for common Pb. When the abundance of 204Pb is relatively low, this isotope has larger measurement errors than the other Pb isotopes, leading to very strong correlation of errors between the measured ratios. This makes it difficult to determine the analytical uncertainty on the age. To avoid this problem, researchers developed an 'alternative Pb–Pb isochron diagram' (see figure) with reduced error correlation between the measured ratios. In this diagram the 204Pb/206Pb ratio (the reciprocal of the normal ratio) is plotted on the x-axis, so that a point on the y-axis (zero 204Pb/206Pb) would have infinitely radiogenic Pb. The ratio plotted on this axis is the 207Pb/206Pb ratio, corresponding to the slope of a normal Pb–Pb isochron, which yields the age. The most accurate ages are produced by samples near the y-axis, which was achieved by step-wise leaching and analysis of the samples. Previously, when applying the alternative Pb–Pb isochron diagram, the 238U/235U isotope ratios were assumed to be invariant among meteoritic material. However, it has been shown that 238U/235U ratios are variable among meteoritic material. To accommodate this, U-corrected Pb–Pb dating analysis is used to generate ages for the oldest solid material in the Solar System, using a revised 238U/235U value of 137.786 ± 0.013 to represent the mean 238U/235U isotope ratio in bulk inner Solar System materials. The result of U-corrected Pb–Pb dating has produced ages of 4567.35 ± 0.28 My for CAIs (A) and chondrules with ages between 4567.32 ± 0.42 and 4564.71 ± 0.30 My (B and C) (see figure).
This supports the idea that CAI crystallization and chondrule formation occurred around the same time during the formation of the Solar System. However, chondrules continued to form for approximately 3 My after CAIs. Hence the best age for the original formation of the Solar System is 4567.7 My. This date also represents the time of initiation of planetary accretion. Successive collisions between accreted bodies led to the formation of larger and larger planetesimals, finally forming the Earth–Moon system in a giant impact event. The age difference between CAIs and chondrules measured in these studies verifies the chronology of the early Solar System derived from extinct short-lived nuclide methods such as 26Al–26Mg, thus improving our understanding of the development of the Solar System and the formation of the Earth. References External links Geochronology and Isotopes Data Portal Radiometric dating
Lead–lead dating
Chemistry
1,803
12,849,052
https://en.wikipedia.org/wiki/Delta%20Doradus
δ Doradus (often Latinised to Delta Doradus, abbreviated to δ Dor or delta Dor) is a star in the southern constellation of Dorado. Based upon an annual parallax shift of 21.80 mas as seen from Earth, it is located around 150 light years from the Sun (the distance in parsecs is the reciprocal of the parallax in arcseconds: 1/0.02180 ≈ 45.9 pc, or about 150 light years). The star is visible to the naked eye with an apparent visual magnitude of +4.34. This is an A-type main sequence star with a stellar classification of A7 V. The star is spinning rapidly, with a projected rotational velocity of 172 km/s. This rapid spin gives the star an oblate shape, with an equatorial radius that is 12% larger than the polar radius. Although A-type stars are not expected to harbor the magnetic dynamo needed to power X-ray emission, an X-ray flux of has been detected at these coordinates. This may indicate that the star has an unseen companion. δ Doradus displays an infrared excess, suggesting it may be a Vega-like star with an orbiting debris disk. Currently this star is the Moon's south pole star, a status that recurs once every 18.6 years. The pole star status changes periodically because of the precession of the Moon's rotational axis. When δ Doradus is the pole star, it is better aligned than Earth's Polaris (α Ursae Minoris), but much fainter. It is also the south pole star of Jupiter. References External links http://server6.wikisky.org/starview?object_type=1&object_id=951&object_name=%CE%B4+Dor&locale=EN A-type main-sequence stars Circumstellar disks Southern pole stars Orbit of the Moon Dorado Doradus, Delta Durchmusterung objects 039014 027100 2015
Delta Doradus
Astronomy
380
702,658
https://en.wikipedia.org/wiki/Hindu%20temple%20architecture
Hindu temple architecture, as the main form of Hindu architecture, has many different styles, though the basic nature of the Hindu temple remains the same, with the essential feature being an inner sanctum, the garbha griha or womb-chamber, where the primary Murti or the image of a deity is housed in a simple bare cell. For rituals and prayers, this chamber is frequently surrounded by an open space through which worshippers can move in a clockwise direction. There are frequently additional buildings and structures in the vicinity of this chamber, with the largest ones covering several acres. On the exterior, the garbhagriha is crowned by a tower-like shikhara, also called the vimana in the south. The shrine building often includes a circumambulatory passage for parikrama, a mandapa congregation hall, and sometimes an antarala antechamber and porch between garbhagriha and mandapa. In addition to other small temples in the compound, there may be additional mandapas or buildings that are either connected to or separate from the larger temples. Hindu temple architecture reflects a synthesis of arts, the ideals of dharma, values, and the way of life cherished under Hinduism. The temple is a place for Tirtha—pilgrimage. All the cosmic elements that create and celebrate life in the Hindu pantheon are present in a Hindu temple—from fire to water, from images of nature to deities, from the feminine to the masculine, from kama to artha, from the fleeting sounds and incense smells to Purusha, the eternal nothingness yet universality—all are part of Hindu temple architecture. The form and meanings of architectural elements in a Hindu temple are designed to function as a place in which to create a link between man and the divine, to help his progress towards spiritual knowledge and truth, and his liberation, which it calls moksha. The architectural principles of Hindu temples in India are described in the Shilpa Shastras and Vastu Sastras. Hindu culture has encouraged aesthetic independence among its temple builders, and its architects have sometimes exercised considerable flexibility in creative expression by adopting other perfect geometries and mathematical principles in Mandir construction to express the Hindu way of life. Hindu temple architecture and its various styles have had a profound influence on the stylistic origins of Buddhist architecture. Aspects seen in Buddhist architecture, like the stupa, may have been influenced by the shikhara, a stylistic element which in some regions evolved into the pagoda, which is seen throughout Thailand, Cambodia, Nepal, China, Taiwan, Japan, Korea, Myanmar, and Vietnam. History Early structures Remains of early elliptical shrines discovered in Besnagar (3rd-2nd century BCE) and Nagari (1st century BCE) may be the earliest known Hindu temple structures, associated with the early Bhagavata tradition, a precursor of Vaishnavism. In Tamil Nadu, the earliest version of the Murugan Temple, Saluvankuppam, north-facing and in brick, appears to date from between the 3rd century BCE and 3rd century CE. In Besnagar, the temple structures have been found in conjunction with the Heliodorus pillar dedicated to Vāsudeva. The archaeologists found an ancient elliptical foundation, an extensive floor, and a plinth produced from burnt bricks. Further, the foundations for all the major components of a Hindu temple – garbhagriha (sanctum), pradakshinapatha (circumambulation passage), antarala (antechamber next to sanctum) and mandapa (gathering hall) – were found. These sections had a thick support base for their walls.
These core temple remains cover an area of 30 × 30 m. The sections had post-holes, which likely contained the wooden pillars for the temple superstructure above. In the soil were iron nails that likely held together the wooden pillars. The superstructure of the temple was likely made of wood, mud and other perishable materials. The ancient temple complex discovered in Nagari (Chittorgarh, Rajasthan) – about 500 kilometers to the west of Vidisha – has a sub-surface structure nearly identical to that of the Besnagar temple. The structure is also associated with the cult of Vāsudeva and Saṃkarṣaṇa, and dated to the 1st century BCE. Classical period (4th–6th century) Though there are very few remains of stone Hindu temples before the Gupta dynasty in the 5th century CE, there may have been earlier structures of timber-based architecture. The rock-cut Udayagiri Caves (401 CE) are among the most important early sites, built with royal sponsorship, recorded by inscriptions, and with impressive sculpture. The earliest preserved Hindu temples are simple cell-like stone temples, some rock-cut and others structural, as at Temple 17 at Sanchi. By the 6th or 7th century, these evolved into high shikhara stone superstructures. However, states Meister, there is inscriptional evidence, such as the ancient Gangadhara inscription from about 424 CE, that towering temples existed before this time; these were possibly made from more perishable material. These temples have not survived. Examples of early major North Indian temples that have survived after the Udayagiri Caves in Madhya Pradesh include those at Tigawa, Deogarh, Parvati Temple, Nachna (465), Bhitargaon, the largest Gupta brick temple to survive, Lakshman Brick Temple, Sirpur (600-625 CE); Rajiv Lochan temple, Rajim (7th century). Gop Temple in Gujarat (c. 550 or later) is an oddity, with no surviving close comparator. No pre-7th century CE South Indian free-standing stone temples have survived. Examples of early major South Indian temples that have survived, some in ruins, include the diverse styles at Mahabalipuram, from the 7th and 8th centuries. According to Meister, the Mahabalipuram temples are "monolithic models of a variety of formal structures all of which already can be said to typify a developed "Dravida" (South Indian) order". They suggest a tradition and a knowledge base existed in South India by the time of the early Chalukya and Pallava era when these were built. In the Deccan, Cave 3 of the Badami cave temples was cut out in 578 CE, and Cave 1 is probably slightly earlier. Other examples are found in Aihole and Pattadakal. Medieval period (7th to 16th century) By about the 7th century most main features of the Hindu temple were established along with theoretical texts on temple architecture and building methods. From between about the 7th and 13th centuries a large number of temples and their ruins have survived (though far fewer than once existed). Many regional styles developed, very often following political divisions, as large temples were typically built with royal patronage. The Vesara style originated in the region between the Krishna and Tungabhadra rivers that is present-day north Karnataka.
According to some art historians, the roots of the Vesara style can be traced to the Chalukyas of Badami (500–753 AD), whose Early Chalukya or Badami Chalukya architecture built temples in a style that mixed some features of the nagara and the dravida styles, for example using both the northern shikhara and southern vimana type of superstructure over the sanctum in different temples of similar date, as at Pattadakal. This style was further refined by the Rashtrakutas of Manyakheta (750–983 AD) in sites such as Ellora. Though there is clearly a good deal of continuity with the Badami or Early Chalukya style, other writers only date the start of Vesara to the later Western Chalukyas of Kalyani (983–1195 AD), in sites such as Lakkundi, Dambal, Itagi, and Gadag, and continued by the Hoysala empire (1000–1330 AD). The earliest examples of Pallava architecture are rock-cut temples dating from 610 to 690 CE and structural temples between 690 and 900 CE. The greatest accomplishments of the Pallava architecture are the rock-cut Group of Monuments at Mahabalipuram, a UNESCO World Heritage Site, including the Shore Temple. This group includes both excavated pillared halls, with no external roof except the natural rock, and monolithic shrines where the natural rock is entirely cut away and carved to give an external roof. Early temples were mostly dedicated to Shiva. The Kailasanatha temple, also called Rajasimha Pallaveswaram, in Kanchipuram, built by Narasimhavarman II (also known as Rajasimha), is a fine example of a Pallava-style temple. Western Chalukya architecture formed a link between the Badami Chalukya architecture of the 8th century and the Hoysala architecture popularised in the 13th century. The art of the Western Chalukyas is sometimes called the "Gadag style" after the many ornate temples they built in the Tungabhadra – Krishna River doab region of present-day Gadag district in Karnataka. Their temple building reached its maturity and culmination in the 12th century, with over a hundred temples built across the Deccan, more than half of them in present-day Karnataka. Apart from temples they are also well known for ornate stepped wells (Pushkarni) which served as ritual bathing places, many of which are well preserved in Lakkundi. Their stepped well designs were later incorporated by the Hoysalas and the Vijayanagara empire in the coming centuries. In the north, Muslim invasions from the 11th century onwards reduced the building of temples, and saw the loss of many existing ones. The south also witnessed Hindu-Muslim conflict that affected the temples, but the region was relatively less affected than the north. In the late 14th century, the Hindu Vijayanagara Empire came to power and controlled much of South India. During this period, the distinctive very tall gopuram gatehouse (actually a late development, from the 12th century or later) was typically added to older large temples. Southeast Asian Hindu temples Possibly the oldest Hindu temples in Southeast Asia date back to the 2nd century BCE, from the Funan site of Oc Eo in the Mekong Delta. They were probably dedicated to a sun god, Shiva and Vishnu. The temples were constructed using granite blocks and bricks, one with a small stepped pond. The earliest evidence traces to Sanskrit stone inscriptions found on the islands and mainland Southeast Asia, such as the Võ Cạnh inscription of Champa in Vietnam, dated to the 2nd or 3rd century CE, and inscriptions in Cambodia from between the 4th and 5th centuries CE.
Prior to the 14th century, local versions of Hindu temples were built in Myanmar, Malaysia, Indonesia, Thailand, Cambodia, Laos and Vietnam. These developed several national traditions, and often mixed Hinduism and Buddhism. Theravada Buddhism prevailed in many parts of South-East Asia, except in Malaysia and Indonesia, where Islam displaced both. Hindu temples in Southeast Asia developed their own distinct versions, mostly based on Indian architectural models, both North Indian and South Indian styles. However, the Southeast Asian temple architecture styles are different, and there is no known single temple in India that can be the source of the Southeast Asian temples. According to Michell, it is as if the Southeast Asian architects learned "the theoretical prescriptions about temple building" from Indian texts, but never saw one. They reassembled the elements with their own creative interpretations. The Hindu temples found in Southeast Asia are more conservative and far more strongly linked to the Mount Meru-related cosmological elements of Indian thought than are the Hindu temples found in the subcontinent. Additionally, unlike the Indian temples, the sacred architecture in Southeast Asia associated the ruler (devaraja) with the divine, with the temple serving as a memorial to the king as much as being a house of the gods. Notable examples of Southeast Asian Hindu temple architecture are the Shivaist Prambanan Trimurti temple compound in Java, Indonesia (9th century), and the Vishnuite Angkor Wat in Cambodia (12th century). Design A Hindu temple is a symmetry-driven structure, with many variations, on a square grid of padas, depicting perfect geometric shapes such as circles and squares. Susan Lewandowski states that the underlying principle in a Hindu temple is built around the belief that all things are one and that everything is connected. A temple, states Lewandowski, "replicates again and again the Hindu beliefs in the parts mirroring, and at the same time being, the universal whole" like an "organism of repeating cells". The pilgrim is welcomed through mathematically structured spaces, a network of art, pillars with carvings and statues that display and celebrate the four important and necessary principles of human life—the pursuit of artha (prosperity, wealth), the pursuit of kama (desire), the pursuit of dharma (virtues, ethical life) and the pursuit of moksha (release, self-knowledge). At the centre of the temple, typically below and sometimes above or next to the deity, is a mere hollow space with no decoration, symbolically representing Purusa, the Supreme Principle, the sacred Universal, one without form, which is present everywhere, connects everything, and is the essence of everyone. A Hindu temple is meant to encourage reflection, facilitate purification of one's mind, and trigger the process of inner realization within the devotee. The specific process is left to the devotee's school of belief. The primary deity of different Hindu temples varies to reflect this spiritual spectrum. The site The appropriate site for a Mandir, suggest ancient Sanskrit texts, is near water and gardens, where lotus and flowers bloom, where swans, ducks and other birds are heard, where animals rest without fear of injury or harm. These harmonious places were recommended in these texts with the explanation that such are the places where gods play, and thus the best sites for Hindu temples.
While major Hindu mandirs are recommended at sangams (confluence of rivers), river banks, lakes and seashore, the Brhat Samhita and Puranas suggest temples may also be built where a natural source of water is not present. Here too, they recommend that a pond be built, preferably in front or to the left of the temple, with water gardens. If water is neither present naturally nor by design, water is symbolically present at the consecration of the temple or the deity. Temples may also be built, suggests Visnudharmottara in Part III of Chapter 93, inside caves and carved stones, on hill tops affording peaceful views, mountain slopes overlooking beautiful valleys, inside forests and hermitages, next to gardens, or at the head of a town street. In practice most temples are built as part of a village or town. Some sites such as the capitals of kingdoms and those considered particularly favourable in terms of sacred geography had numerous temples. Many ancient capitals vanished and the surviving temples are now found in a rural landscape; often these are the best-preserved examples of older styles. Aihole, Badami, Pattadakal and Gangaikonda Cholapuram are examples. The plan The design, especially the floor plan, of the part of a Hindu temple around the sanctum or shrine follows a geometrical plan called vastu-purusha-mandala. The name is a composite Sanskrit word with three of the most important components of the plan. Mandala means circle, Purusha is universal essence at the core of Hindu tradition, while Vastu means the dwelling structure. Vastupurushamandala is a yantra. The design lays out a Hindu temple in a symmetrical, self-repeating structure derived from central beliefs, myths, cardinality and mathematical principles. The four cardinal directions help create the axis of a Hindu temple, around which is formed a perfect square in the space available. The circle of the mandala circumscribes the square. The square is considered divine for its perfection and as a symbolic product of knowledge and human thought, while the circle is considered earthly, human and observed in everyday life (moon, sun, horizon, water drop, rainbow). Each supports the other. The square is divided into perfect square grids. In large temples, this is often an 8×8 or 64-grid structure. In ceremonial temple superstructures, this is an 81 sub-square grid. The squares are called padas. The square is symbolic and has Vedic origins from the fire altar, Agni. The alignment along the cardinal directions, similarly, is an extension of the Vedic rituals of three fires. This symbolism is also found among Greek and other ancient civilizations, through the gnomon. In Hindu temple manuals, design plans are described with 1, 4, 9, 16, 25, 36, 49, 64, 81 up to 1024 squares; 1 pada is considered the simplest plan, as a seat for a hermit or devotee to sit and meditate on, do yoga, or make offerings with Vedic fire in front. The second design of 4 padas has a symbolic central core at the diagonal intersection, and is also a meditative layout. The 9 pada design has a sacred surrounded centre, and is the template for the smallest temple. Older Hindu temple vastumandalas may use the 9 through 49 pada series, but 64 is considered the most sacred geometric grid in Hindu temples. It is also called Manduka, Bhekapada or Ajira in various ancient Sanskrit texts. Each pada is conceptually assigned to a symbolic element, sometimes in the form of a deity or of a spirit or apsara.
The central square(s) of the 64-square grid are dedicated to the Brahman (not to be confused with Brahmin) and are called Brahma padas. In a Hindu temple's structure of symmetry and concentric squares, each concentric layer has significance. The outermost layer, Paisachika padas, signify aspects of Asuras and evil; the next inner concentric layer is Manusha padas, signifying human life; while Devika padas signify aspects of Devas and good. The Manusha padas typically house the ambulatory. The devotees, as they walk around in clockwise fashion through this ambulatory to complete Parikrama (or Pradakshina), walk between good on the inner side and evil on the outer side. In smaller temples, the Paisachika pada is not part of the temple superstructure, but may be on the boundary of the temple or just symbolically represented. The Paisachika padas, Manusha padas and Devika padas surround the Brahma padas, which signify creative energy and serve as the location for the temple's primary idol for darsana. Finally, at the very centre of the Brahma padas is the Garbhagruha (garbha: centre, gruha: house; literally, the centre of the house) (Purusa space), signifying the Universal Principle present in everything and everyone. The spire of a Hindu temple, called Shikhara in north India and Vimana in south India, is perfectly aligned above the Brahma pada(s). Beneath the mandala's central square(s) is the space for the formless, shapeless, all-pervasive, all-connecting Universal Spirit, the Purusha. This space is sometimes referred to as garbha-griya (literally womb house) – a small, perfect square, windowless, enclosed space without ornamentation that represents universal essence. In or near this space is typically a murti. This is the main deity image, and this varies with each temple. Often it is this idol that gives the temple a local name, such as Vishnu temple, Krishna temple, Rama temple, Narayana temple, Siva temple, Lakshmi temple, Ganesha temple, Durga temple, Hanuman temple, Surya temple, and others. It is this garbha-griya which devotees seek for darsana (literally, a sight of knowledge, or vision). Above the vastu-purusha-mandala is a high superstructure called the shikhara in north India, and vimana in south India, that stretches towards the sky. Sometimes, in makeshift temples, the superstructure may be replaced with symbolic bamboo with a few leaves at the top. The vertical dimension's cupola or dome is designed as a pyramid, conical or other mountain-like shape, once again using the principle of concentric circles and squares (see below). Scholars such as Lewandowski state that this shape is inspired by the cosmic mountain Mount Meru or the Himalayan Kailasa, the abode of gods according to ancient mythology. In larger temples, the outer three padas are visually decorated with carvings, paintings or images meant to inspire the devotee. In some temples, these images or wall reliefs may be stories from Hindu Epics; in others they may be Vedic tales about right and wrong or virtues and vice; in still others they may be idols of minor or regional deities. The pillars, walls and ceilings typically also have highly ornate carvings or images of the four just and necessary pursuits of life—kama, artha, dharma, and moksa. This walk around is called pradakshina. Large temples also have pillared halls called mandapa. The one on the east side serves as the waiting room for pilgrims and devotees. The mandapa may be a separate structure in older temples, but in newer temples this space is integrated into the temple superstructure.
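The concentric zoning just described can be made concrete with a short sketch. The following Python snippet is an illustration added here, not from the source; mapping exactly one ring of the 8×8 grid to each zone, from Paisachika on the outside in to the central Brahma padas, is a simplifying assumption for display purposes, as actual vastu manuals assign padas in far more detail.

```python
# Label the 64 padas of an 8x8 mandala grid by concentric layer, reading
# from the outermost ring inwards: Paisachika, Manusha, Devika, Brahma.
ZONES = ["Paisachika", "Manusha", "Devika", "Brahma"]

def pada_zone(row, col, n=8):
    ring = min(row, col, n - 1 - row, n - 1 - col)   # 0 = outermost ring
    return ZONES[min(ring, len(ZONES) - 1)]

for row in range(8):
    print(" ".join(pada_zone(row, col)[0] for col in range(8)))
# Prints a grid whose outer ring is 'P', then 'M', then 'D', with the
# central 2x2 block of 'B' marking the Brahma padas around the garbhagriha.
```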
Mega temple sites have a main temple surrounded by smaller temples and shrines, but these are still arranged by principles of symmetry, grids and mathematical precision. An important principle found in the layout of Hindu temples is the mirroring and repeating of fractal-like design structures, each unique yet also repeating the central common principle, one which Susan Lewandowski refers to as “an organism of repeating cells”. Exceptions to the square grid principle The great majority of Hindu temples exhibit the perfect square grid principle. However, there are some exceptions. For example, the Teli ka Mandir in Gwalior, built in the 8th century CE, is not a square but is a rectangle consisting of stacked squares. Further, the temple explores a number of structures and shrines in 1:1, 1:2, 1:3, 2:5, 3:5 and 4:5 ratios. These ratios are exact, suggesting the architect intended to use these harmonic ratios, and the rectangle pattern was not a mistake, nor an arbitrary approximation. Other examples of non-square harmonic ratios are found at the Naresar temple site of Madhya Pradesh and the Nakti-Mata temple near Jaipur, Rajasthan. Michael Meister states that these exceptions mean the ancient Sanskrit manuals for temple building were guidelines, and Hinduism permitted its artisans flexibility in expression and aesthetic independence. The Hindu text Sthapatya Veda describes many plans and styles of temples of which the following are found in other derivative literature: Chaturasra (square), Ashtasra (octagonal), Vritta (circular), Ayatasra (rectangular), Ayata Ashtasra (rectangular-octagonal fusion), Ayata Vritta (elliptical), Hasti Prishta (apsidal), Dvayasra Vrita (rectangular-circular fusion); in Tamil literature, the Prana Vikara (shaped like a Tamil Om sign) is also found. Methods of combining squares and circles to produce all of these plans are described in the Hindu texts. The builders The temples were built by guilds of architects, artisans and workmen. Their knowledge and craft traditions, states Michell, were originally preserved by the oral tradition, later with palm-leaf manuscripts. The building tradition was typically transmitted within families from one generation to the next, and this knowledge was jealously guarded. The guilds were like a corporate body that set rules of work and standard wages. These guilds over time became wealthy, and themselves made charitable donations as evidenced by inscriptions. The guilds covered almost every aspect of life in the camps around the site where the workmen lived during the period of construction, which in the case of large projects might be several years. The work was led by a chief architect (sutradhara). The construction superintendent was equal to him in authority. Other important members were the chief stonemason and the chief image-maker, who collaborated to complete a temple. The sculptors were called shilpins. Women participated in temple building, but in lighter work such as polishing stones and clearing. Hindu texts are inconsistent about which caste did the construction work, with some texts accepting all castes to work as a shilpin. The Brahmins were the experts in art theory and guided the workmen when needed. They also performed consecration rituals of the superstructure and in the sanctum. In the earliest periods of Hindu art, from about the 4th century to about the 10th century, the artists had considerable freedom, and this is evidenced in the considerable variation and innovation in images crafted and temple designs.
Later, much of this freedom was lost as iconography became more standardized and the demand for consistency in iconometry increased. This "presumably reflected the influence of brahman theologians" states Michell, and the "increasing dependence of the artist upon the brahmins" for suitable forms of sacred images. The "individual pursuit of self-expression" in a temple project was not allowed; instead, the artist expressed the sacred values in visual form through a temple, for the most part anonymously. The sponsors used contracts for the building tasks. Though great masters probably had assistants to help complete principal images in a temple, the relief panels in a Hindu temple were "almost certainly the inspiration of a single artist". Schools of temple building tradition Along with guilds, surviving texts suggest that several schools of Hindu temple architecture had developed in ancient India. Each school developed its own gurukuls (study centres) and texts. Of these, state Bharne and Krusche, two became most prominent: the Vishwakarma school and the Maya (Devanagari: मय, not to be pronounced as Maayaa) school. The Vishwakarma school is credited with treatises, terminology and innovations related to the Nagara style of architecture, while the Maya school with those related to the Dravida style. The style now called Vesara bridges and combines elements of the Nagara and the Dravida styles; it probably reflects one of the other, now-extinct schools. Some scholars have questioned the relevance of these texts, whether the artists relied on śilpa śāstra theory and Sanskrit construction manuals probably written by Brahmins, and whether these treatises preceded or followed the big temples and the ancient sculptures within them. Other scholars question whether big temples and complex symmetric architecture or sculpture with consistent themes and common iconography across distant sites, over many centuries, could have been built by artists and architects without adequate theory, shared terminology and tools, and if so how. According to Adam Hardy – an architecture historian and professor of Asian Architecture – the truth "must lie somewhere in between". According to George Michell – an art historian and professor specializing in Hindu Architecture – the theory and the creative field practice likely co-evolved, and the construction workers and artists building complex temples likely consulted the theoreticians when they needed to. Various styles of architecture The ancient Hindu texts on architecture such as the Brihatsamhita and others, states Michell, classify temples into five orders based on their typological features: Nagara, Dravida, Vesara, ellipse and rectangle. The plans described for each include square, octagonal and apsidal forms. The horizontal plan regulates the vertical form. Each temple architecture has in turn developed its own vocabulary, with terms that overlap but do not necessarily mean exactly the same thing in another style and may apply to a different part of the temple. Following a general historical division, the early Hindu temples, up to the 7th or 8th century, are often called classical or ancient temples, while those after the classical period to the 12th or 13th century are sometimes referred to as medieval. However, this division does not reflect a major break in Hindu architecture, which continued to evolve gradually across these periods.
The style of Hindu temple architecture is not only the result of the theology, spiritual ideas, and the early Hindu texts but also a result of innovation driven by the regional availability of raw materials and the local climate. Some materials of construction were imported from distant regions, but most temples were built from readily available materials. In some regions, such as in South Karnataka, the local availability of soft stone led Hoysala architects to innovate architectural styles that are difficult to achieve with hard crystalline rocks. In other places, artists cut granite or other stones to build temples and create sculptures. Rock faces allowed artists to carve cave temples, and a region's rocky terrain encouraged monolithic rock-cut temple architecture. In regions where stones were unavailable, brick temples flourished. Hindu temple architecture has historically been affected by the building material available in each region, its "tonal value, texture and structural possibilities" states Michell. India Dravidian architecture Dravidian architecture is an architectural idiom in Hindu temple architecture that emerged from South India, reaching its final form by 1500 CE. It is seen in Hindu temples, and the most distinctive difference from north Indian styles is the use of a shorter and more pyramidal tower over the garbhagriha or sanctuary called a vimana, where the north has taller towers, usually bending inwards as they rise, called shikhara. However, for modern visitors to larger temples the dominating feature is the high Gopura or gatehouse at the edge of the compound; large temples have several, dwarfing the vimana; these are a much more recent development. There are numerous other distinct features, such as the dvarapalakas – twin guardians at the main entrance and the inner sanctum of the temple – and goshtams – deities carved in niches on the outer side walls of the garbhagriha. Mentioned as one of three styles of temple building in the ancient book Vastu shastra, the majority of the existing structures are located in the southern Indian states of Karnataka, Tamil Nadu, Kerala, Andhra Pradesh, Telangana, some parts of Maharashtra, Odisha and Sri Lanka. Various kingdoms and empires such as the Satavahanas, the Vakatakas of Vidarbha, the Cholas, the Cheras, the Kakatiyas, the Reddis, the Pandyas, the Pallavas, the Gangas, the Kadambas, the Rashtrakutas, the Chalukyas, the Hoysalas and the Vijayanagara Empire, among others, have made substantial contributions to the evolution of Dravida architecture. Dravida and Nagara architecture Of the different styles of temple architecture in India, the Nagara architecture of northern India and the Dravidian architecture of southern India are most common. Other styles are also found. For example, the rainy climate and the materials of construction available in Bengal, Kerala, Java and Bali, Indonesia, have influenced the evolution of styles and structures in these regions. At other sites such as Ellora and Pattadakal, adjacent temples may have features drawing from different traditions, as well as features in a common style local to that region and period. In modern era literature, many styles have been named after the royal dynasties in whose territories they were built. Regional styles The architecture of the rock-cut temples, particularly the rathas, became a model for south Indian temples. Architectural features, particularly the sculptures, were widely adopted in South Indian, Cambodian, Annamese and Javanese temples.
Descendants of the sculptors of the shrines are artisans in contemporary Mahabalipuram. Badami Chalukya architecture The Badami Chalukya architectural style originated by the 5th century in Aihole and was perfected in Pattadakal and Badami. Between 500 and 757 CE, the Badami Chalukyas built Hindu temples out of sandstone cut into enormous blocks from the outcrops in the chains of the Kaladgi hills. In Aihole, known as the "Cradle of Indian architecture," there are over 150 temples scattered around the village. The Lad Khan Temple is the oldest. The Durga Temple is notable for its semi-circular apse, elevated plinth and the gallery that encircles the sanctum sanctorum. A sculpture of Vishnu sitting atop a large cobra is at Hutchimali Temple. The Ravalphadi cave temple celebrates the many forms of Shiva. Other temples include the Konthi temple complex and the Meguti Jain temple. Pattadakal is a World Heritage Site, where one finds the Virupaksha temple; it is the biggest temple there, with carved scenes from the great epics of the Ramayana and the Mahabharata. Other temples at Pattadakal are Mallikarjuna, Kashivishwanatha, Galaganatha and Papanath. Bengal temple architecture Several styles of temple architecture developed in Bengal. Notable temple architectural styles of Bengal are the Chala, Ratna and Dalan temples. Chala-style temples resemble a hut with a sloping roof, following the pattern of huts in most villages of Bengal. The Ratna-style originated in Bengal from the 15th to 16th centuries, under the Mallabhum kingdom (also called the Malla dynasty). One of the most prominent features of the Chala and Ratna styles is the terracotta artwork on the temple walls. Dalan-style temples are flat-roofed, with heavy cornices on S-curved brackets; this style was later influenced by European ideas in the 19th century. Prominent examples of the Chala-style are the Siddheshwari Kali Temple of Kalna City and the Palpara Terracotta Temple of Palpara. A prominent example of the Ratna-style is the Ramchandraji temple at Guptipara. The Sharabhuja Gauranga temple at Panchrol is an example of the Dalan-style. Gadag architecture The Gadag style of architecture is also called Western Chalukya architecture. The style flourished for 150 years (1050 to 1200 CE); in this period, about 50 temples were built. Some examples are the Saraswati temple in the Trikuteshwara temple complex at Gadag, the Doddabasappa Temple at Dambal, the Kasivisvesvara Temple at Lakkundi, and the Amriteshwara temple at Annigeri, which is marked by ornate pillars with intricate sculpture. This style originated during the period of the Kalyani Chalukya (also known as Western Chalukya) ruler Someswara I. Kalinga architecture The design which flourished in the eastern Indian state of Odisha and northern Andhra Pradesh is called the Kalinga style of architecture. The style consists of three distinct types of temples, namely Rekha Deula, Pidha Deula and Khakhara Deula. Deula means "temple" in the Odia language. The former two are associated with Vishnu, Surya and Shiva temples, while the third is mainly associated with Chamunda and Durga temples. The Rekha Deula and Khakhara Deula house the sanctum sanctorum, while the Pidha Deula constitutes the outer dancing and offering halls. Prominent examples of the Rekha Deula are the Lingaraja Temple of Bhubaneswar and the Jagannath Temple of Puri. A prominent example of the Khakhara Deula is the Vaital Deula. The Mukhasala structure that remains of the Konark Sun Temple is an example of the Pidha Deula.
Māru-Gurjara architecture Māru-Gurjara architecture, or Solaṅkī style, is a style of north Indian temple architecture that originated in Gujarat and Rajasthan from the 11th to 13th centuries, under the Chaulukya dynasty (or Solaṅkī dynasty). Although originating as a regional style in Hindu temple architecture, it became especially popular in Jain temples and, mainly under Jain patronage, later spread across India and to diaspora communities around the world. On the exteriors, the style is distinguished from other north Indian temple styles of the period in "that the external walls of the temples have been structured by increasing numbers of projections and recesses, accommodating sharply carved statues in niches. These are normally positioned in superimposed registers, above the lower bands of moldings. The latter display continuous lines of horse riders, elephants, and kīrttimukhas. Hardly any segment of the surface is left unadorned." The main shikhara tower usually has many urushringa subsidiary spirelets on it, and two smaller side-entrances with porches are common in larger temples. Interiors are, if anything, even more lavishly decorated, with elaborate carving on most surfaces. In particular, Jain temples often have small low domes carved on the inside with a highly intricate rosette design. Another distinctive feature is "flying" arch-like elements between pillars, touching the horizontal beam above in the centre, and elaborately carved. These have no structural function, and are purely decorative. The style developed large pillared halls, many open at the sides, with Jain temples often having one closed and two pillared halls in sequence on the main axis leading to the shrine. The style mostly fell from use in Hindu temples in its original regions by the 13th century, especially as the area had fallen to the Muslim Delhi Sultanate by 1298. But, unusually for an Indian temple style, it continued to be used by Jains there and elsewhere, with a notable "revival" in the 15th century. Since then it has continued in use in Jain and some Hindu temples, and from the late 20th century has spread to temples built outside India by both the Jain diaspora and Hindus. Some buildings mix Māru-Gurjara elements with those of local temple styles and modern international ones. Generally, where there is elaborate carving, often still done by craftsmen from Gujarat or Rajasthan, this has more ornamental and decorative work than small figures. In particular, the style is used in India and abroad by the Swaminarayan sect. Sometimes the Māru-Gurjara influence is limited to the "flying arches" and mandapa ceiling rosettes, and a preference for white marble. Nepal Newar architecture This style is one of the oldest styles of temples on the Asian continent and derives its shape from Himalayan fir trees. The ground floor is typically the residence of the deity, either Hindu or Buddhist, while the upper floors are used as storage for religious items. There is a gajura at the top, which is the combination of a lotus base, an upside-down vase, a triangle and a kalasha. The pagoda style flourished in Nepal from the beginning of the 13th century. The temples of Pashupatinath, Changunarayan, Chandeshwori and Banepa are excellent examples of ancient architecture in the pagoda style. The Malla period produced various pagoda-style temples and palaces such as Nayatapola, Dattatraya of Bhaktapur, Kasthamandap of Kathmandu, the Taleju Temple, Vajrabarahi and Vajrayogini.
Southeast Asia as part of Greater India The architecture of Southeast Asian nations was inspired by Indian temple architecture, as these regions were Indianised as part of Greater India. Champa architecture Between the 6th and the 16th century, the Kingdom of Champa flourished in present-day central and southern Vietnam. Unlike the Javanese, who mostly used volcanic andesite stone for their temples, and the Khmer of Angkor, who mostly employed grey sandstone to construct their religious buildings, the Cham built their temples from reddish bricks. The most important remaining sites of Cham brick temple architecture include Mỹ Sơn near Da Nang, Po Nagar near Nha Trang, and Po Klong Garai near Phan Rang. Typically, a Cham temple complex consisted of several different kinds of buildings. These include the kalan, a brick sanctuary, typically in the form of a tower with a garbhagriha used to host the murti of a deity. A mandapa is an entry hallway connected with a sanctuary. A kosagrha or "fire-house" is a temple construction typically with a saddle-shaped roof, used to house the valuables belonging to the deity or to cook for the deity. The gopura was a gate-tower leading into a walled temple complex. These building types are typical for Hindu temples in general; the classification is valid not only for the architecture of Champa, but also for other architectural traditions of Greater India. Indonesian architecture Temples are called candi in Indonesia, whether Buddhist or Hindu. A candi refers to a structure based on the Indian type of single-celled shrine, with a pyramidal tower above it (the Meru tower in Bali), and a portico for entrance, mostly built between the 7th and 15th centuries. In Hindu Balinese architecture, a candi shrine can be found within a pura compound. The best example of Indonesian Javanese Hindu temple architecture is the 9th-century Prambanan (Shivagrha) temple compound, located in Central Java, near Yogyakarta. This largest Hindu temple in Indonesia has three main prasad towers, dedicated to the Trimurti gods. The Shiva temple, the largest of the main temples, towers 47 metres (154 ft) high. The term "candi" itself is believed to derive from Candika, one of the manifestations of the goddess Durga as the goddess of death. The candi architecture follows the typical Hindu architectural traditions based on Vastu Shastra. The temple layout, especially in the Central Java period, incorporated mandala temple plan arrangements and also the typical high towering spires of Hindu temples. The candi was designed to mimic Meru, the holy mountain and abode of the gods. The whole temple is a model of the Hindu universe according to Hindu cosmology and the layers of Loka. The candi structure and layout recognize the hierarchy of the zones, spanning from the less holy to the holiest realms. The Indic tradition of Hindu-Buddhist architecture recognizes the concept of arranging elements in three parts or three elements. Subsequently, the design, plan and layout of the temple follow the rule of space allocation within three elements, commonly identified as the foot (base), body (centre), and head (roof). These are Bhurloka, represented by the outer courtyard and the foot (base) of each temple; Bhuvarloka, represented by the middle courtyard and the body of each temple; and Svarloka, symbolized by the roof of the Hindu structure, usually crowned with a ratna (Sanskrit: jewel) or vajra.
Khmer architecture Before the 14th century, the Khmer Empire flourished in present-day Cambodia, with its influence extending to most of mainland Southeast Asia. Its great capital, Angkor ("Capital City", derived from the Sanskrit "nagara"), contains some of the most important and most magnificent examples of Khmer temple architecture. The classic style of Angkorian temple is demonstrated by the 12th-century Angkor Wat. Angkorian builders mainly used sandstone and laterite as temple building materials. The main superstructure of a typical Khmer temple is a towering prasat called a prang, which houses the garbhagriha inner chamber, where the murti of Vishnu or Shiva, or a lingam, resides. Khmer temples were typically enclosed by a concentric series of walls, with the central sanctuary in the middle; this arrangement represented the mountain ranges surrounding Mount Meru, the mythical home of the gods. Enclosures are the spaces between these walls, and between the innermost wall and the temple itself. The walls defining the enclosures of Khmer temples are frequently lined by galleries, while passage through the walls is by way of gopuras located at the cardinal points. The main entrance is usually adorned with an elevated causeway with a cruciform terrace. Glossary The Hindu texts on temple architecture have an extensive terminology. Most terms have several different names in the various Indian languages used in different regions of India, as well as the Sanskrit names used in ancient texts. A few of the more common terms are tabulated below, mostly in their Sanskrit/Hindi forms: Gallery See also Temple tank Vedic altars Indonesian architecture, Candi of Indonesia Rock-cut architecture Indian rock-cut architecture Architecture of Angkor Hemadpanthi architecture Style Dhvajastambha (flagstaff) Notes References Bibliography Dehejia, V. (1997). Indian Art. Phaidon: London. Hardy, Adam (2007). The Temple Architecture of India. Wiley: Chichester. Harle, J.C. (1994). The Art and Architecture of the Indian Subcontinent, 2nd edn. Yale University Press Pelican History of Art. Michell, George (1990). The Penguin Guide to the Monuments of India, Volume 1: Buddhist, Jain, Hindu. Penguin Books. Rajan, K.V. Soundara (1998). Rock-Cut Temple Styles. Somaiya Publications: Mumbai. External links Sabha, Vedic altar, Indian temples and Buddhist Mandala: Drawings, Patrick George, University of Pennsylvania Space and Cosmology in the Hindu Temple Hindu Javanese Temples (archived 24 June 2004) Sacral architecture
Hindu temple architecture
Engineering
9,316
1,765,852
https://en.wikipedia.org/wiki/Matrix%20calculus
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics. Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section. Scope Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the dependent variable can be any of these as well. Each different situation will lead to a different set of rules, or a separate calculus, using the broader sense of the term. Matrix notation serves as a convenient way to collect the many derivatives in an organized way. As a first example, consider the gradient from vector calculus. For a scalar function of three independent variables, $f(x_1, x_2, x_3)$, the gradient is given by the vector equation $\nabla f = \frac{\partial f}{\partial x_1} \hat{x}_1 + \frac{\partial f}{\partial x_2} \hat{x}_2 + \frac{\partial f}{\partial x_3} \hat{x}_3$, where $\hat{x}_i$ represents a unit vector in the $x_i$ direction for $1 \le i \le 3$. This type of generalized derivative can be seen as the derivative of a scalar, $f$, with respect to a vector, $\mathbf{x}$, and its result can be easily collected in vector form. More complicated examples include the derivative of a scalar function with respect to a matrix, known as the gradient matrix, which collects the derivative with respect to each matrix element in the corresponding position in the resulting matrix. In that case the scalar must be a function of each of the independent variables in the matrix. As another example, if we have an $n$-vector of dependent variables, or functions, of $m$ independent variables we might consider the derivative of the dependent vector with respect to the independent vector. The result could be collected in an $m \times n$ matrix consisting of all of the possible derivative combinations. There are a total of nine possibilities using scalars, vectors, and matrices. Notice that as we consider higher numbers of components in each of the independent and dependent variables we can be left with a very large number of possibilities.
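As an illustration of how such derivatives are collected into arrays, the following sketch checks two of the simplest cases numerically. The helper names and the finite-difference scheme are illustrative choices, not part of any standard matrix-calculus library:

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Collect the partial derivatives of a scalar function f at x
    into a vector (the gradient), via central finite differences."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def numerical_jacobian(f, x, h=1e-6):
    """Collect the partials of a vector function f at x into a matrix,
    one row per output component (the numerator-layout Jacobian)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        J[:, i] = (f(x + e) - f(x - e)) / (2 * h)
    return J

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, 2.0])
print(numerical_gradient(lambda x: x @ A @ x, x))  # approx. (A + A.T) @ x
print(numerical_jacobian(lambda x: A @ x, x))      # approx. A
```

The two printed results illustrate the point of the notation: a whole family of partial derivatives is handled as a single vector or matrix object.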
The six kinds of derivatives that can be most neatly organized in matrix form are collected in the following table.
{|class="wikitable" style="text-align: center;"
|+ Types of matrix derivative
! Types !! Scalar !! Vector !! Matrix
|-
! Scalar
| $\frac{\partial y}{\partial x}$ || $\frac{\partial \mathbf{y}}{\partial x}$ || $\frac{\partial \mathbf{Y}}{\partial x}$
|-
! Vector
| $\frac{\partial y}{\partial \mathbf{x}}$ || $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ ||
|-
! Matrix
| $\frac{\partial y}{\partial \mathbf{X}}$ || ||
|}
Here, we have used the term "matrix" in its most general sense, recognizing that vectors are simply matrices with one column (and scalars are simply matrices with one row and one column). Moreover, we have used bold letters to indicate vectors and bold capital letters for matrices. This notation is used throughout. Notice that we could also talk about the derivative of a vector with respect to a matrix, or any of the other unfilled cells in our table. However, these derivatives are most naturally organized in a tensor of rank higher than 2, so that they do not fit neatly into a matrix. In the following three sections we will define each one of these derivatives and relate them to other branches of mathematics. See the layout conventions section for a more detailed table. Relation to other derivatives The matrix derivative is a convenient notation for keeping track of partial derivatives for doing calculations. The Fréchet derivative is the standard way in the setting of functional analysis to take derivatives with respect to vectors. In the case that a matrix function of a matrix is Fréchet differentiable, the two derivatives will agree up to translation of notations. As is the case in general for partial derivatives, some formulae may extend under weaker analytic conditions than the existence of the derivative as approximating linear mapping. Usages Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of: Kalman filter Wiener filter Expectation-maximization algorithm for Gaussian mixture Gradient descent Notation The vector and matrix derivatives presented in the sections to follow take full advantage of matrix notation, using a single variable to represent a large number of variables. In what follows we will distinguish scalars, vectors and matrices by their typeface. We will let $M(n,m)$ denote the space of real $n \times m$ matrices with $n$ rows and $m$ columns. Such matrices will be denoted using bold capital letters: $\mathbf{A}$, $\mathbf{X}$, $\mathbf{Y}$, etc. An element of $M(n,1)$, that is, a column vector, is denoted with a boldface lowercase letter: $\mathbf{a}$, $\mathbf{x}$, $\mathbf{y}$, etc. An element of $M(1,1)$ is a scalar, denoted with lowercase italic typeface: $a$, $t$, $x$, etc. $\mathbf{X}^\mathsf{T}$ denotes matrix transpose, $\operatorname{tr}(\mathbf{X})$ is the trace, and $\det(\mathbf{X})$ or $|\mathbf{X}|$ is the determinant. All functions are assumed to be of differentiability class $C^1$ unless otherwise noted. Generally letters from the first half of the alphabet (a, b, c, ...) will be used to denote constants, and from the second half (t, x, y, ...) to denote variables. NOTE: As mentioned above, there are competing notations for laying out systems of partial derivatives in vectors and matrices, and no standard appears to be emerging yet. The next two introductory sections use the numerator layout convention simply for the purposes of convenience, to avoid overly complicating the discussion. The section after them discusses layout conventions in more detail. It is important to realize the following: Despite the use of the terms "numerator layout" and "denominator layout", there are actually more than two possible notational choices involved. The reason is that the choice of numerator vs. denominator (or in some situations, numerator vs. mixed) can be made independently for scalar-by-vector, vector-by-scalar, vector-by-vector, and scalar-by-matrix derivatives, and a number of authors mix and match their layout choices in various ways.
The choice of numerator layout in the introductory sections below does not imply that this is the "correct" or "superior" choice. There are advantages and disadvantages to the various layout types. Serious mistakes can result from carelessly combining formulas written in different layouts, and converting from one layout to another requires care to avoid errors. As a result, when working with existing formulas the best policy is probably to identify whichever layout is used and maintain consistency with it, rather than attempting to use the same layout in all situations. Alternatives The tensor index notation with its Einstein summation convention is very similar to the matrix calculus, except one writes only a single component at a time. It has the advantage that one can easily manipulate arbitrarily high rank tensors, whereas tensors of rank higher than two are quite unwieldy with matrix notation. All of the work here can be done in this notation without use of the single-variable matrix notation. However, many problems in estimation theory and other areas of applied mathematics would result in too many indices to properly keep track of, pointing in favor of matrix calculus in those areas. Also, Einstein notation can be very useful in proving the identities presented here (see section on differentiation) as an alternative to typical element notation, which can become cumbersome when the explicit sums are carried around. Note that a matrix can be considered a tensor of rank two. Derivatives with vectors Because vectors are matrices with only one column, the simplest matrix derivatives are vector derivatives. The notations developed here can accommodate the usual operations of vector calculus by identifying the space of $n$-vectors with the Euclidean space $\mathbb{R}^n$, and the scalar $M(1,1)$ is identified with $\mathbb{R}$. The corresponding concept from vector calculus is indicated at the end of each subsection. NOTE: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions. Vector-by-scalar The derivative of a vector $\mathbf{y} = \begin{bmatrix} y_1 & y_2 & \cdots & y_m \end{bmatrix}^\mathsf{T}$ by a scalar $x$ is written (in numerator layout notation) as $\frac{\partial \mathbf{y}}{\partial x} = \begin{bmatrix} \frac{\partial y_1}{\partial x} & \frac{\partial y_2}{\partial x} & \cdots & \frac{\partial y_m}{\partial x} \end{bmatrix}^\mathsf{T}.$ In vector calculus the derivative of a vector $\mathbf{y}$ with respect to a scalar $x$ is known as the tangent vector of the vector $\mathbf{y}$, $\frac{\partial \mathbf{y}}{\partial x}$. Notice here that $\mathbf{y} \colon \mathbb{R}^1 \to \mathbb{R}^m$. Example Simple examples of this include the velocity vector in Euclidean space, which is the tangent vector of the position vector (considered as a function of time). Also, the acceleration is the tangent vector of the velocity. Scalar-by-vector The derivative of a scalar $y$ by a vector $\mathbf{x} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^\mathsf{T}$ is written (in numerator layout notation) as $\frac{\partial y}{\partial \mathbf{x}} = \begin{bmatrix} \frac{\partial y}{\partial x_1} & \frac{\partial y}{\partial x_2} & \cdots & \frac{\partial y}{\partial x_n} \end{bmatrix}.$ In vector calculus, the gradient of a scalar field $f$ (whose independent coordinates are the components of $\mathbf{x}$) is the transpose of the derivative of a scalar by a vector: $\nabla f = \left( \frac{\partial f}{\partial \mathbf{x}} \right)^\mathsf{T}.$ By example, in physics, the electric field is the negative vector gradient of the electric potential. The directional derivative of a scalar function $f(\mathbf{x})$ of the space vector $\mathbf{x}$ in the direction of the unit vector $\mathbf{u}$ (represented in this case as a column vector) is defined using the gradient as follows: $\nabla_{\mathbf{u}} f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{u}.$
Using the notation just defined for the derivative of a scalar with respect to a vector we can re-write the directional derivative as $\nabla_{\mathbf{u}} f = \frac{\partial f}{\partial \mathbf{x}} \mathbf{u}.$ This type of notation will be nice when proving product rules and chain rules that come out looking similar to what we are familiar with for the scalar derivative. Vector-by-vector Each of the previous two cases can be considered as an application of the derivative of a vector with respect to a vector, using a vector of size one appropriately. Similarly we will find that the derivatives involving matrices will reduce to derivatives involving vectors in a corresponding way. The derivative of a vector function (a vector whose components are functions) $\mathbf{y}$, with respect to an input vector, $\mathbf{x}$, is written (in numerator layout notation) as $\frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \begin{bmatrix} \frac{\partial y_1}{\partial x_1} & \cdots & \frac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_m}{\partial x_1} & \cdots & \frac{\partial y_m}{\partial x_n} \end{bmatrix}.$ In vector calculus, the derivative of a vector function with respect to a vector whose components represent a space is known as the pushforward (or differential), or the Jacobian matrix. The pushforward along a vector function $\mathbf{f}$ with respect to a vector $\mathbf{v}$ in $\mathbb{R}^n$ is given by $d\,\mathbf{f}(\mathbf{v}) = \frac{\partial \mathbf{f}}{\partial \mathbf{x}} \mathbf{v}.$ Derivatives with matrices There are two types of derivatives with matrices that can be organized into a matrix of the same size. These are the derivative of a matrix by a scalar and the derivative of a scalar by a matrix. These can be useful in minimization problems found in many areas of applied mathematics and have adopted the names tangent matrix and gradient matrix respectively after their analogs for vectors. Note: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions. Matrix-by-scalar The derivative of a matrix function $\mathbf{Y}$ by a scalar $x$ is known as the tangent matrix and is given (in numerator layout notation) by $\frac{\partial \mathbf{Y}}{\partial x} = \begin{bmatrix} \frac{\partial y_{11}}{\partial x} & \cdots & \frac{\partial y_{1n}}{\partial x} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_{m1}}{\partial x} & \cdots & \frac{\partial y_{mn}}{\partial x} \end{bmatrix}.$ Scalar-by-matrix The derivative of a scalar function $y$, with respect to a $p \times q$ matrix $\mathbf{X}$ of independent variables, is given (in numerator layout notation) by $\frac{\partial y}{\partial \mathbf{X}} = \begin{bmatrix} \frac{\partial y}{\partial x_{11}} & \frac{\partial y}{\partial x_{21}} & \cdots & \frac{\partial y}{\partial x_{p1}} \\ \vdots & & \ddots & \vdots \\ \frac{\partial y}{\partial x_{1q}} & \frac{\partial y}{\partial x_{2q}} & \cdots & \frac{\partial y}{\partial x_{pq}} \end{bmatrix}.$ Important examples of scalar functions of matrices include the trace of a matrix and the determinant. In analog with vector calculus this derivative is often written as the following: $\nabla_{\mathbf{X}}\, y(\mathbf{X}) = \frac{\partial y(\mathbf{X})}{\partial \mathbf{X}}.$ Also in analog with vector calculus, the directional derivative of a scalar $f(\mathbf{X})$ of a matrix $\mathbf{X}$ in the direction of matrix $\mathbf{Y}$ is given by $\nabla_{\mathbf{Y}} f = \operatorname{tr}\!\left( \frac{\partial f}{\partial \mathbf{X}} \mathbf{Y} \right).$ It is the gradient matrix, in particular, that finds many uses in minimization problems in estimation theory, particularly in the derivation of the Kalman filter algorithm, which is of great importance in the field. Other matrix derivatives The three types of derivatives that have not been considered are those involving vectors-by-matrices, matrices-by-vectors, and matrices-by-matrices. These are not as widely considered and a notation is not widely agreed upon. Layout conventions This section discusses the similarities and differences between notational conventions that are used in the various fields that take advantage of matrix calculus. Although there are largely two consistent conventions, some authors find it convenient to mix the two conventions in forms that are discussed below. After this section, equations will be listed in both competing forms separately. The fundamental issue is that the derivative of a vector with respect to a vector, i.e. $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$, is often written in two competing ways. If the numerator $\mathbf{y}$ is of size $m$ and the denominator $\mathbf{x}$ of size $n$, then the result can be laid out as either an $m \times n$ matrix or an $n \times m$ matrix, i.e.
the elements of $\mathbf{y}$ laid out in rows and the elements of $\mathbf{x}$ laid out in columns, or vice versa. This leads to the following possibilities: Numerator layout, i.e. lay out according to $\mathbf{y}$ and $\mathbf{x}^\mathsf{T}$ (i.e. contrarily to $\mathbf{x}$). This is sometimes known as the Jacobian formulation. This corresponds to the $m \times n$ layout in the previous example, which means that the row number of $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ equals the size of the numerator $\mathbf{y}$ and the column number equals the size of $\mathbf{x}^\mathsf{T}$. Denominator layout, i.e. lay out according to $\mathbf{y}^\mathsf{T}$ and $\mathbf{x}$ (i.e. contrarily to $\mathbf{y}$). This is sometimes known as the Hessian formulation. Some authors term this layout the gradient, in distinction to the Jacobian (numerator layout), which is its transpose. (However, gradient more commonly means the derivative $\frac{\partial y}{\partial \mathbf{x}}$, regardless of layout.) This corresponds to the $n \times m$ layout in the previous example, which means that the row number of $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$ equals the size of $\mathbf{x}$ (the denominator). A third possibility sometimes seen is to insist on writing the derivative as $\frac{\partial \mathbf{y}}{\partial \mathbf{x}^\mathsf{T}}$ (i.e. the derivative is taken with respect to the transpose of $\mathbf{x}$) and follow the numerator layout. This makes it possible to claim that the matrix is laid out according to both numerator and denominator. In practice this produces results the same as the numerator layout. When handling the gradient $\frac{\partial y}{\partial \mathbf{x}}$ and the opposite case $\frac{\partial \mathbf{y}}{\partial x}$, we have the same issues. To be consistent, we should do one of the following: If we choose numerator layout for $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$, we should lay out the gradient $\frac{\partial y}{\partial \mathbf{x}}$ as a row vector, and $\frac{\partial \mathbf{y}}{\partial x}$ as a column vector. If we choose denominator layout for $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$, we should lay out the gradient $\frac{\partial y}{\partial \mathbf{x}}$ as a column vector, and $\frac{\partial \mathbf{y}}{\partial x}$ as a row vector. In the third possibility above, we write $\frac{\partial y}{\partial \mathbf{x}^\mathsf{T}}$ and $\frac{\partial \mathbf{y}}{\partial x}$ and use numerator layout. Not all math textbooks and papers are consistent in this respect throughout. That is, sometimes different conventions are used in different contexts within the same book or paper. For example, some choose denominator layout for gradients (laying them out as column vectors), but numerator layout for the vector-by-vector derivative $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$. Similarly, when it comes to scalar-by-matrix derivatives $\frac{\partial y}{\partial \mathbf{X}}$ and matrix-by-scalar derivatives $\frac{\partial \mathbf{Y}}{\partial x}$, consistent numerator layout lays out according to $\mathbf{X}^\mathsf{T}$ and $\mathbf{Y}$, while consistent denominator layout lays out according to $\mathbf{X}$ and $\mathbf{Y}^\mathsf{T}$. In practice, however, following a denominator layout for $\frac{\partial \mathbf{Y}}{\partial x}$ and laying the result out according to $\mathbf{Y}^\mathsf{T}$ is rarely seen because it makes for ugly formulas that do not correspond to the scalar formulas. As a result, the following layouts can often be found: Consistent numerator layout, which lays out $\frac{\partial \mathbf{Y}}{\partial x}$ according to $\mathbf{Y}$ and $\frac{\partial y}{\partial \mathbf{X}}$ according to $\mathbf{X}^\mathsf{T}$. Mixed layout, which lays out $\frac{\partial \mathbf{Y}}{\partial x}$ according to $\mathbf{Y}$ and $\frac{\partial y}{\partial \mathbf{X}}$ according to $\mathbf{X}$. Use the transpose notation $\frac{\partial y}{\partial \mathbf{X}^\mathsf{T}}$, with results the same as consistent numerator layout. In the following formulas, we handle the five possible combinations $\frac{\partial y}{\partial \mathbf{x}}$, $\frac{\partial \mathbf{y}}{\partial x}$, $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$, $\frac{\partial y}{\partial \mathbf{X}}$ and $\frac{\partial \mathbf{Y}}{\partial x}$ separately. We also handle cases of scalar-by-scalar derivatives that involve an intermediate vector or matrix. (This can arise, for example, if a multi-dimensional parametric curve is defined in terms of a scalar variable, and then a derivative of a scalar function of the curve is taken with respect to the scalar that parameterizes the curve.) For each of the various combinations, we give numerator-layout and denominator-layout results, except in the cases above where denominator layout rarely occurs. In cases involving matrices where it makes sense, we give numerator-layout and mixed-layout results. As noted above, cases where vector and matrix denominators are written in transpose notation are equivalent to numerator layout with the denominators written without the transpose.
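As a concrete illustration of the two conventions, consider $\mathbf{y} \in \mathbb{R}^2$ as a function of $\mathbf{x} \in \mathbb{R}^3$ (a made-up example used only for illustration):

```latex
% Numerator layout (Jacobian formulation): 2x3, rows follow y.
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} =
\begin{bmatrix}
  \frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} & \frac{\partial y_1}{\partial x_3} \\[4pt]
  \frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2} & \frac{\partial y_2}{\partial x_3}
\end{bmatrix}
\qquad
% Denominator layout (Hessian/gradient formulation): 3x2, rows follow x;
% it is exactly the transpose of the numerator-layout result.
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} =
\begin{bmatrix}
  \frac{\partial y_1}{\partial x_1} & \frac{\partial y_2}{\partial x_1} \\[4pt]
  \frac{\partial y_1}{\partial x_2} & \frac{\partial y_2}{\partial x_2} \\[4pt]
  \frac{\partial y_1}{\partial x_3} & \frac{\partial y_2}{\partial x_3}
\end{bmatrix}
```

The two layouts contain the same partial derivatives; only the arrangement differs, which is why mixing formulas from differently laid-out sources silently transposes results.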
Keep in mind that various authors use different combinations of numerator and denominator layouts for different types of derivatives, and there is no guarantee that an author will consistently use either numerator or denominator layout for all types. Match up the formulas below with those quoted in the source to determine the layout used for that particular type of derivative, but be careful not to assume that derivatives of other types necessarily follow the same kind of layout. When taking derivatives with an aggregate (vector or matrix) denominator in order to find a maximum or minimum of the aggregate, it should be kept in mind that using numerator layout will produce results that are transposed with respect to the aggregate. For example, in attempting to find the maximum likelihood estimate of a multivariate normal distribution using matrix calculus, if the domain is a k×1 column vector, then the result using the numerator layout will be in the form of a 1×k row vector. Thus, either the results should be transposed at the end or the denominator layout (or mixed layout) should be used.
{|class="wikitable" style="text-align: center;"
|+ Result of differentiating various kinds of aggregates with other kinds of aggregates
! colspan=2 rowspan=2 |
! colspan=2 | Scalar $y$
! colspan=2 | Column vector $\mathbf{y}$ (size $m$)
! colspan=2 | Matrix $\mathbf{Y}$ (size $m \times n$)
|-
! Notation !! Type
! Notation !! Type
! Notation !! Type
|-
! rowspan=2 | Scalar $x$
! Numerator
| rowspan=2 style="text-align:center;" | $\frac{\partial y}{\partial x}$
| rowspan=2 | Scalar
| rowspan=2 style="text-align:center;" | $\frac{\partial \mathbf{y}}{\partial x}$
| Size-$m$ column vector
| rowspan=2 style="text-align:center;" | $\frac{\partial \mathbf{Y}}{\partial x}$
| $m \times n$ matrix
|-
! Denominator
| Size-$m$ row vector
|
|-
! rowspan=2 | Column vector $\mathbf{x}$ (size $n$)
! Numerator
| rowspan=2 style="text-align:center;" | $\frac{\partial y}{\partial \mathbf{x}}$
| Size-$n$ row vector
| rowspan=2 style="text-align:center;" | $\frac{\partial \mathbf{y}}{\partial \mathbf{x}}$
| $m \times n$ matrix
| rowspan=2 style="text-align:center;" | $\frac{\partial \mathbf{Y}}{\partial \mathbf{x}}$
| rowspan=2 |
|-
! Denominator
| Size-$n$ column vector
| $n \times m$ matrix
|-
! rowspan=2 | Matrix $\mathbf{X}$ (size $p \times q$)
! Numerator
| rowspan=2 style="text-align:center;" | $\frac{\partial y}{\partial \mathbf{X}}$
| $q \times p$ matrix
| rowspan=2 style="text-align:center;" | $\frac{\partial \mathbf{y}}{\partial \mathbf{X}}$
| rowspan=2 |
| rowspan=2 style="text-align:center;" | $\frac{\partial \mathbf{Y}}{\partial \mathbf{X}}$
| rowspan=2 |
|-
! Denominator
| $p \times q$ matrix
|}
The results of operations will be transposed when switching between numerator-layout and denominator-layout notation. Numerator-layout notation Denominator-layout notation Identities As noted above, in general, the results of operations will be transposed when switching between numerator-layout and denominator-layout notation. To help make sense of all the identities below, keep in mind the most important rules: the chain rule, product rule and sum rule. The sum rule applies universally, and the product rule applies in most of the cases below, provided that the order of matrix products is maintained, since matrix products are not commutative. The chain rule applies in some of the cases, but unfortunately does not apply in matrix-by-scalar derivatives or scalar-by-matrix derivatives (in the latter case, mostly involving the trace operator applied to matrices). In the latter case, the product rule can't quite be applied directly, either, but the equivalent can be done with a bit more work using the differential identities.
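For instance, the product rule for a dot product and the chain rule for a vector composition, written in numerator layout (standard results, shown here only as a worked illustration):

```latex
% Product rule for a dot product of two vector functions of x,
% numerator layout: each term is a row vector times a Jacobian.
\frac{\partial (\mathbf{u}^\mathsf{T} \mathbf{v})}{\partial \mathbf{x}}
  = \mathbf{u}^\mathsf{T} \frac{\partial \mathbf{v}}{\partial \mathbf{x}}
  + \mathbf{v}^\mathsf{T} \frac{\partial \mathbf{u}}{\partial \mathbf{x}}
\qquad
% Chain rule for a vector composition, numerator layout:
% the Jacobians multiply left to right, outer function first.
\frac{\partial \mathbf{g}(\mathbf{u}(\mathbf{x}))}{\partial \mathbf{x}}
  = \frac{\partial \mathbf{g}}{\partial \mathbf{u}} \,
    \frac{\partial \mathbf{u}}{\partial \mathbf{x}}
```

In denominator layout the same identities hold with every factor transposed and the order of the chain-rule product reversed, which is the transposition behaviour described above.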
The following identities adopt the following conventions: the scalars $a$, $b$, $c$, $d$, and $e$ are constant with respect to, and the scalars $u$ and $v$ are functions of, one of $x$, $\mathbf{x}$, or $\mathbf{X}$; the vectors $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, $\mathbf{d}$, and $\mathbf{e}$ are constant with respect to, and the vectors $\mathbf{u}$ and $\mathbf{v}$ are functions of, one of $x$, $\mathbf{x}$, or $\mathbf{X}$; the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, $\mathbf{D}$, and $\mathbf{E}$ are constant with respect to, and the matrices $\mathbf{U}$ and $\mathbf{V}$ are functions of, one of $x$, $\mathbf{x}$, or $\mathbf{X}$. Vector-by-vector identities This is presented first because all of the operations that apply to vector-by-vector differentiation apply directly to vector-by-scalar or scalar-by-vector differentiation simply by reducing the appropriate vector in the numerator or denominator to a scalar. Identities: vector-by-vector (numerator layout, i.e. by $\mathbf{y}$ and $\mathbf{x}^\mathsf{T}$; denominator layout, i.e. by $\mathbf{y}^\mathsf{T}$ and $\mathbf{x}$). Scalar-by-vector identities The fundamental identities are presented first. Identities: scalar-by-vector (numerator layout, i.e. by $\mathbf{x}^\mathsf{T}$, result is a row vector; denominator layout, i.e. by $\mathbf{x}$, result is a column vector). Vector-by-scalar identities Identities: vector-by-scalar (numerator layout, i.e. by $\mathbf{y}$, result is a column vector; denominator layout, i.e. by $\mathbf{y}^\mathsf{T}$, result is a row vector). NOTE: The formulas involving the vector-by-vector derivatives $\frac{\partial \mathbf{g}(\mathbf{u})}{\partial \mathbf{u}}$ and $\frac{\partial \mathbf{u}}{\partial \mathbf{x}}$ (whose outputs are matrices) assume the matrices are laid out consistent with the vector layout, i.e. numerator-layout matrix when numerator-layout vector and vice versa; otherwise, transpose the vector-by-vector derivatives.
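Because the full identity tables are extensive, a short standard selection of these identities is sketched below, in both layouts (numerator first, denominator second, following the conventions just stated); this is a reconstruction of well-known results, not an exhaustive list:

```latex
% Vector-by-vector:
\frac{\partial \mathbf{x}}{\partial \mathbf{x}} = \mathbf{I}
\qquad
\frac{\partial (\mathbf{A}\mathbf{x})}{\partial \mathbf{x}} =
  \mathbf{A} \ \text{(numerator)}, \quad \mathbf{A}^\mathsf{T} \ \text{(denominator)}
% Scalar-by-vector:
\qquad
\frac{\partial (\mathbf{a}^\mathsf{T}\mathbf{x})}{\partial \mathbf{x}} =
  \mathbf{a}^\mathsf{T} \ \text{(numerator)}, \quad \mathbf{a} \ \text{(denominator)}
\qquad
\frac{\partial (\mathbf{x}^\mathsf{T}\mathbf{A}\mathbf{x})}{\partial \mathbf{x}} =
  \mathbf{x}^\mathsf{T}(\mathbf{A} + \mathbf{A}^\mathsf{T}) \ \text{(numerator)}, \quad
  (\mathbf{A} + \mathbf{A}^\mathsf{T})\,\mathbf{x} \ \text{(denominator)}
% Vector-by-scalar (numerator layout), with u, v functions of x:
\qquad
\frac{\partial (\mathbf{A}\mathbf{u})}{\partial x} =
  \mathbf{A}\,\frac{\partial \mathbf{u}}{\partial x}
\qquad
\frac{\partial (\mathbf{u} + \mathbf{v})}{\partial x} =
  \frac{\partial \mathbf{u}}{\partial x} + \frac{\partial \mathbf{v}}{\partial x}
```

Note that when $\mathbf{A}$ is symmetric, the quadratic-form identity reduces to the familiar $2\mathbf{x}^\mathsf{T}\mathbf{A}$ (numerator) or $2\mathbf{A}\mathbf{x}$ (denominator).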
Scalar-by-matrix identities Note that exact equivalents of the scalar product rule and chain rule do not exist when applied to matrix-valued functions of matrices. However, the product rule of this sort does apply to the differential form (see below), and this is the way to derive many of the identities below involving the trace function, combined with the fact that the trace function allows transposing and cyclic permutation, i.e.: $\operatorname{tr}(\mathbf{A}) = \operatorname{tr}(\mathbf{A}^\mathsf{T})$ and $\operatorname{tr}(\mathbf{A}\mathbf{B}\mathbf{C}\mathbf{D}) = \operatorname{tr}(\mathbf{B}\mathbf{C}\mathbf{D}\mathbf{A}) = \operatorname{tr}(\mathbf{C}\mathbf{D}\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{D}\mathbf{A}\mathbf{B}\mathbf{C}).$ For example, to compute $\frac{\partial y}{\partial \mathbf{X}}$ for $y = \operatorname{tr}(\mathbf{A}\mathbf{X}\mathbf{B})$: $dy = \operatorname{tr}(\mathbf{A}\,(d\mathbf{X})\,\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A}\,d\mathbf{X}).$ Therefore, $\frac{\partial y}{\partial \mathbf{X}} = \mathbf{B}\mathbf{A}$ (numerator layout) and $\frac{\partial y}{\partial \mathbf{X}} = \mathbf{A}^\mathsf{T}\mathbf{B}^\mathsf{T}$ (denominator layout). (For the last step, see the Conversion from differential to derivative form section.) Identities: scalar-by-matrix (numerator layout, i.e. by $\mathbf{X}^\mathsf{T}$; denominator layout, i.e. by $\mathbf{X}$). Matrix-by-scalar identities Identities: matrix-by-scalar (numerator layout, i.e. by $\mathbf{Y}$).
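Again, a brief standard selection of scalar-by-matrix and matrix-by-scalar identities (a sketch of well-known results, assuming $\mathbf{A}$ and $\mathbf{B}$ constant, and $\mathbf{X}$ square and invertible wherever an inverse or determinant appears):

```latex
% Scalar-by-matrix:
\frac{\partial \operatorname{tr}(\mathbf{X})}{\partial \mathbf{X}} = \mathbf{I}
\qquad
\frac{\partial \operatorname{tr}(\mathbf{A}\mathbf{X})}{\partial \mathbf{X}} =
  \mathbf{A} \ \text{(numerator)}, \quad \mathbf{A}^\mathsf{T} \ \text{(denominator)}
\qquad
\frac{\partial \det(\mathbf{X})}{\partial \mathbf{X}} =
  \det(\mathbf{X})\,\mathbf{X}^{-1} \ \text{(numerator)}
\qquad
\frac{\partial \ln \det(\mathbf{X})}{\partial \mathbf{X}} =
  \mathbf{X}^{-1} \ \text{(numerator)}
% Matrix-by-scalar (numerator layout), with U a function of x:
\qquad
\frac{\partial (\mathbf{A}\mathbf{U}\mathbf{B})}{\partial x} =
  \mathbf{A}\,\frac{\partial \mathbf{U}}{\partial x}\,\mathbf{B}
```

Each scalar-by-matrix entry above can be checked with the trace method of the preceding example, by writing the differential of the scalar as $\operatorname{tr}(\mathbf{C}\,d\mathbf{X})$ and reading off $\mathbf{C}$.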
Scalar-by-scalar identities With vectors involved Identities: scalar-by-scalar, with vectors involved (any layout, assuming the dot product ignores row versus column layout). With matrices involved Identities: scalar-by-scalar, with matrices involved (consistent numerator layout, i.e. by $\mathbf{Y}$ and $\mathbf{X}^\mathsf{T}$, or mixed layout, i.e. by $\mathbf{Y}$ and $\mathbf{X}$). Identities in differential form It is often easier to work in differential form and then convert back to normal derivatives. This only works well using the numerator layout. In these rules, $a$ is a scalar. The basic differential identities include: $d(a\mathbf{X}) = a\,d\mathbf{X}$, $d(\mathbf{X} + \mathbf{Y}) = d\mathbf{X} + d\mathbf{Y}$, $d(\mathbf{X}\mathbf{Y}) = (d\mathbf{X})\mathbf{Y} + \mathbf{X}(d\mathbf{Y})$, $d(\mathbf{X}^\mathsf{T}) = (d\mathbf{X})^\mathsf{T}$, $d(\mathbf{X}^{-1}) = -\mathbf{X}^{-1}(d\mathbf{X})\mathbf{X}^{-1}$, $d\operatorname{tr}(\mathbf{X}) = \operatorname{tr}(d\mathbf{X})$, $d\det(\mathbf{X}) = \det(\mathbf{X})\operatorname{tr}(\mathbf{X}^{-1}\,d\mathbf{X})$, and $d\ln\det(\mathbf{X}) = \operatorname{tr}(\mathbf{X}^{-1}\,d\mathbf{X})$. In the identity for a diagonalizable matrix $\mathbf{X}$ with a function $f$ differentiable at every eigenvalue, $\delta_{ij}$ is the Kronecker delta and the $\mathbf{P}_i$ are the orthogonal projection operators that project onto the $i$-th eigenvector of $\mathbf{X}$. $\mathbf{Q}$ is the matrix of eigenvectors of $\mathbf{X} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^{-1}$, and the $\lambda_i$ are the eigenvalues. The matrix function $f(\mathbf{X})$ is defined in terms of the scalar function $f$ for diagonalizable matrices by $f(\mathbf{X}) = \mathbf{Q}\,f(\boldsymbol{\Lambda})\,\mathbf{Q}^{-1}$, where $f(\boldsymbol{\Lambda})$ is the diagonal matrix with entries $f(\lambda_i)$. To convert to normal derivative form, first convert it to one of the following canonical forms, and then use these identities (all in numerator layout): $dy = a\,dx \Rightarrow \frac{dy}{dx} = a$; $dy = \mathbf{a}^\mathsf{T}\,d\mathbf{x} \Rightarrow \frac{\partial y}{\partial \mathbf{x}} = \mathbf{a}^\mathsf{T}$; $dy = \operatorname{tr}(\mathbf{A}\,d\mathbf{X}) \Rightarrow \frac{\partial y}{\partial \mathbf{X}} = \mathbf{A}$; $d\mathbf{y} = \mathbf{a}\,dx \Rightarrow \frac{\partial \mathbf{y}}{\partial x} = \mathbf{a}$; $d\mathbf{y} = \mathbf{A}\,d\mathbf{x} \Rightarrow \frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \mathbf{A}$; $d\mathbf{Y} = \mathbf{A}\,dx \Rightarrow \frac{\partial \mathbf{Y}}{\partial x} = \mathbf{A}$. Applications Matrix differential calculus is used in statistics and econometrics, particularly for the statistical analysis of multivariate distributions, especially the multivariate normal distribution and other elliptical distributions. It is used in regression analysis to compute, for example, the ordinary least squares regression formula for the case of multiple explanatory variables. It is also used in random matrices, statistical moments, local sensitivity and statistical diagnostics. See also Derivative (generalizations) Product integral Ricci calculus Tensor derivative Notes References Further reading External links Software MatrixCalculus.org, a website for evaluating matrix calculus expressions symbolically NCAlgebra, an open-source Mathematica package that has some matrix calculus functionality SymPy supports symbolic matrix derivatives in its matrix expression module, as well as symbolic tensor derivatives in its array expression module. Information Matrix Reference Manual, Mike Brookes, Imperial College London. Matrix Differentiation (and some other stuff), Randal J. Barnes, Department of Civil Engineering, University of Minnesota.
Notes on Matrix Calculus, Paul L. Fackler, North Carolina State University. Matrix Differential Calculus (slide presentation), Zhang Le, University of Edinburgh. Introduction to Vector and Matrix Differentiation (notes on matrix differentiation, in the context of Econometrics), Heino Bohn Nielsen. A note on differentiating matrices (notes on matrix differentiation), Pawel Koval, from Munich Personal RePEc Archive. Vector/Matrix Calculus, more notes on matrix differentiation. Matrix Identities (notes on matrix differentiation), Sam Roweis. Matrix theory Linear algebra Multivariable calculus
Matrix calculus
Mathematics
7,703
78,359,137
https://en.wikipedia.org/wiki/Bioliteracy
Bioliteracy is the ability to understand and engage with biological topics. The concept is used particularly in the contexts of biotechnology and biodiversity. Description In the biotechnology context, bioliteracy is considered important for promoting the biotechnology industry and the development of biological engineering products. It has also been defined as "the concept of imbuing people, personnel, or teams with an understanding of and comfort with biology and biotechnology." The use in the context of biodiversity is somewhat distinct, focusing on improving awareness of different organisms with the goal of conservation. Citizen science initiatives, such as iNaturalist, are considered effective ways to increase bioliteracy, engaging students with the direct observation of nature. References Biology Biotechnology Biodiversity Biology terminology Biological engineering Literacy Conservation biology
Bioliteracy
Engineering,Biology
152
33,289,298
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2070
In molecular biology, glycoside hydrolase family 70 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and is also discussed at CAZypedia, an online encyclopedia of carbohydrate-active enzymes. This family includes glucosyltransferases or sucrose 6-glycosyl transferases (GTF-S) (CAZY GH_70), which catalyse the transfer of D-glucopyranosyl units from sucrose onto acceptor molecules. Some members of this family contain a cell wall-binding repeat. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 70
Biology
233
5,693,654
https://en.wikipedia.org/wiki/Celebrity%20worship%20syndrome
Celebrity worship syndrome (CWS) or celebrity obsession disorder (COD) is an obsessive addictive disorder in which a person becomes overly involved with the details of a celebrity's personal and professional life. Psychologists have indicated that though many people obsess over film, television, sport and pop stars, the only common factor between them is that they are all figures in the public eye. Written observations of celebrity worship date back to the 19th century. Classifications Simple obsessional Simple obsessional stalking constitutes a majority of all stalking cases, anywhere from 69 to 79%, and is dominated by males. This form of stalking is generally associated with individuals who have shared previous personal relationships with their victims. However, this is not necessarily the case between a common member of the public exhibiting celebrity worship syndrome and the famous person with whom they are obsessed. Individuals that meet the criteria of being labeled as a "simple obsessional stalker" tend to share a set of characteristics including an inability to have successful personal relationships in their own lives, social awkwardness, feelings of powerlessness, a sense of insecurity, and very low self-esteem. Of these characteristics, low self-esteem plays a large role in the obsession that these individuals develop with their victim, in this case, the famous person. If the individual is unable to have any sort of connection to the celebrity with whom they are obsessed, their own sense of self-worth may decline. Entertainment-social This level of admiration is linked to a celebrity's ability to capture the attention of their fans. Entertainment-social celebrity worship is used to describe a relatively low level of obsession. An example of a typical entertainment-social attitude would be "My friends and I like to discuss what my favorite celebrity has done." It may also be seen in the form of obsessively following celebrities on social media. Although considered the lowest level of celebrity worship, it has been seen to have a number of negative effects with regard to the development of unhealthy eating tendencies (eating disorders), anxiety, depression, poor body image and low self-esteem, especially in young adolescents aged 13 to the mid-20s. This is supported by a study carried out on a group of female adolescents between the ages of 17 and 20. Intense-personal This is an intermediate level of obsession that is associated with neuroticism as well as behaviors linked to psychoticism. An example of an intense-personal attitude toward a celebrity would include claims such as "I consider my favorite celebrity to be my soul mate." It has been found that, in particular, people who worship celebrities in this manner often have low self-esteem with regard to their body type, especially if they think that the celebrity is physically attractive. The effects of intense-personal celebrity worship on body image are seen in some cases of cosmetic surgery. Females who have high levels of obsession are more accepting of cosmetic surgery than those who do not obsess over celebrities to this extent. Love obsessional As the name suggests, individuals who demonstrate this sort of stalking behavior develop a love obsession with somebody to whom they have no personal relation. Love obsessional stalking accounts for roughly 20–25% of all stalking cases. The people who demonstrate this form of stalking behavior are likely to have a mental disorder, commonly either schizophrenia or paranoia.
Individuals who are love obsessional stalkers often convince themselves that they are in fact in a relationship with the subject of their obsession. For example, a woman who had been stalking David Letterman for a total of five years claimed to be his wife when she had no personal connection to him. Other celebrities who have fallen victim to this form of stalking include Jennifer Aniston, Halle Berry, Jodie Foster, and Mila Kunis, along with numerous other A-list stars. Erotomanic Erotomanic, originating from the word erotomania, refers to stalkers who genuinely believe that their victims are in love with them. The victims in this case are almost always well known within their community or within the media, meaning that they can range from small-town celebrities to famous personalities from Hollywood. Comprising less than 10% of all stalking cases, erotomanic stalkers are the least common. Unlike simple-obsessional stalkers, a majority of the individuals in this category of stalking are women. Similar to love-obsessional stalkers, the behavior of erotomanic stalkers may be a result of an underlying psychological disorder such as schizophrenia, bipolar disorder, or major depression. Individuals who have erotomania tend to believe that the celebrity with whom they are obsessed is utilizing the media as a way to communicate with them by sending special messages or signals. Although these stalkers have unrealistic beliefs, they are less likely to seek any form of face-to-face interaction with their celebrity obsession, therefore posing less of a threat to them. Borderline-pathological This classification is the most severe level of celebrity worship. It is characterized by pathological attitudes and behaviors held as a result of celebrity worship, including a willingness to commit crimes on behalf of the celebrity who is the object of worship, or to spend money on common items the celebrity has used at some point, such as napkins. Mental health Evidence indicates that poor mental health is correlated with celebrity worship. Researchers have examined the relationship between celebrity worship and mental health in United Kingdom adult samples. One study found evidence to suggest that the intense-personal celebrity worship dimension was related to higher levels of depression and anxiety. Similarly, another study in 2004 found that the intense-personal celebrity worship dimension was not only related to higher levels of depression and anxiety, but also higher levels of stress, negative affect, and reports of illness. Both these studies showed no evidence for a significant relationship between either the entertainment-social or the borderline-pathological dimensions of celebrity worship and mental health. Research on another correlated pathology examined the role of celebrity interest in shaping body image cognitions. Among three separate UK samples (adolescents, students, and older adults), individuals selected a celebrity of their own sex whose body/figure they liked and admired, and then completed the Celebrity Attitude Scale along with two measures of body image. Significant relationships were found between attitudes toward celebrities and body image among female adolescents only. The findings suggested that, in female adolescence, there is an interaction between intense-personal celebrity worship and body image between the ages of 14 and 16, and some tentative evidence suggests that this relationship disappears at the onset of adulthood, which is between the ages of 17 and 20.
These results are consistent with those of authors who stress the importance of the formation of relationships with media figures, and suggest that relationships with celebrities perceived as having a good body shape may lead to a poor body image in female adolescents. This is further supported by a study investigating the link between mass media and poor self-worth and body image in a sample group of females between the ages of 17 and 20. Within a clinical context the effect of celebrity might be more extreme, particularly when considering extreme aspects of celebrity worship. Relationships between the three classifications of celebrity worship (entertainment-social, intense-personal and borderline-pathological celebrity worship and obsessiveness), ego-identity, fantasy proneness and dissociation were examined. Two of these variables drew particular attention: fantasy proneness and dissociation. Fantasy proneness involves fantasizing for a duration of time, reporting hallucinatory intensities as real, reporting vivid childhood memories, and having intense religious and paranormal experiences. Dissociation is the lack of a normal integration of experiences, feelings, and thoughts in everyday consciousness and memory; in addition, it is related to a number of psychiatric problems. Though low levels of celebrity worship (entertainment-social) are not associated with any clinical measures, medium levels of celebrity worship (intense-personal) are related to fantasy proneness (approximately 10% of the shared variance), while high levels of celebrity worship (borderline-pathological) share a greater association with fantasy proneness (around 14% of the shared variance) and dissociation (around 3% of the shared variance, though the effect size of this is small and most probably due to the large sample size). This finding suggests that as "celebrity worship becomes more intense, and the individual perceives having a relationship with the celebrity, the more the individual is prone to fantasies." Celebrity worship syndrome can lead to the manifestation of unhealthy tendencies such as materialism and compulsive buying, as supported by a study carried out by Robert A. Reeves, Gary A. Baker and Chris S. Truluck. The results of this study link high rates of celebrity worship to high rates of materialism and compulsive buying. A number of historical, ethnographic, netnographic and auto-ethnographic studies in diverse academic disciplines such as film studies, media studies, cultural studies and consumer research, which – unlike McCutcheon et al., who focused mainly on student samples (with two exceptions) – have actually studied real fans in the field, have come to very different conclusions that are more in line with Horton & Wohl's original concept of parasocial interaction or an earlier study by Leets. See also Anti-fan Fanaticism Fictosexuality Nijikon Obsessive love disorder Paparazzi Parasocial interaction Sasaeng fan Stalking Stan (fan) Yandere Cyberstalking References Further reading Behavioral addiction Celebrity fandom Fandom Social phenomena Stalking
Celebrity worship syndrome
Biology
1,942
2,750,191
https://en.wikipedia.org/wiki/Selectin
The selectins (cluster of differentiation 62 or CD62) are a family of cell adhesion molecules (or CAMs). All selectins are single-chain transmembrane glycoproteins that share similar properties to C-type lectins due to a related amino terminus and calcium-dependent binding. Selectins bind to sugar moieties and so are considered to be a type of lectin, cell adhesion proteins that bind sugar polymers. Structure All three known members of the selectin family (L-, E-, and P-selectin) share a similar cassette structure: an N-terminal, calcium-dependent lectin domain, an epidermal growth factor (EGF)-like domain, a variable number of consensus repeat units (2, 6, and 9 for L-, E-, and P-selectin, respectively), a transmembrane domain (TM) and an intracellular cytoplasmic tail (cyto). The transmembrane and cytoplasmic parts are not conserved across the selectins and are responsible for their targeting to different compartments. Though they share common elements, their tissue distribution and binding kinetics are quite different, reflecting their divergent roles in various pathophysiological processes. Types There are three subsets of selectins: E-selectin (in endothelial cells) L-selectin (in leukocytes) P-selectin (in platelets and endothelial cells) L-selectin, the smallest of the vascular selectins, is expressed on all granulocytes and monocytes and on most lymphocytes, and can be found on most leukocytes. P-selectin, the largest selectin, is stored in α-granules of platelets and in Weibel–Palade bodies of endothelial cells, and is translocated to the cell surface of activated endothelial cells and platelets. E-selectin is not expressed under baseline conditions, except in skin microvessels, but is rapidly induced by inflammatory cytokines. These three types share a significant degree of sequence homology among themselves (except in the transmembrane and cytoplasmic domains) and between species. Analysis of this homology has revealed that the lectin domain, which binds sugars, is most conserved, suggesting that the three selectins bind similar sugar structures. The cytoplasmic and transmembrane domains are highly conserved between species, but not conserved across the selectins. These parts of the selectin molecules are responsible for their targeting to different compartments: P-selectin to secretory granules, E-selectin to the plasma membrane, and L-selectin to the tips of microfolds on leukocytes. Etymology The name selectin comes from the words "selected" and "lectins," which are a type of carbohydrate-recognizing protein. Function Selectins are involved in constitutive lymphocyte homing, and in chronic and acute inflammation processes, including post-ischemic inflammation in muscle, kidney and heart, skin inflammation, atherosclerosis, glomerulonephritis, lupus erythematosus, and cancer metastasis. During an inflammatory response, P-selectin is expressed on endothelial cells first, followed by E-selectin later. Stimuli such as histamine and thrombin cause endothelial cells to mobilize immediate release of preformed P-selectin from Weibel–Palade bodies inside the cell. Cytokines such as TNF-alpha stimulate transcription and translation of E-selectin and additional P-selectin, which accounts for the delay of several hours.
As the leukocyte rolls along the blood vessel wall, the distal lectin-like domain of the selectin binds to certain carbohydrate groups presented on proteins (such as PSGL-1) on the leukocyte, which slows the cell and allows it to leave the blood vessel and enter the site of infection. The low-affinity nature of selectins is what allows the characteristic "rolling" action attributed to leukocytes during the leukocyte adhesion cascade. Each selectin has a carbohydrate recognition domain that mediates binding to specific glycans on apposing cells. They have remarkably similar protein folds and carbohydrate binding residues, leading to overlap in the glycans to which they bind. Selectins bind to the sialyl Lewis X (SLex) determinant “NeuAcα2-3Galβ1-4(Fucα1-3)GlcNAc.” However, SLex, per se, does not constitute an effective selectin receptor. Instead, SLex and related sialylated, fucosylated glycans are components of more extensive binding determinants. The best-characterized ligand for the three selectins is P-selectin glycoprotein ligand-1 (PSGL-1), which is a mucin-type glycoprotein expressed on all white blood cells. Neutrophils and eosinophils bind to E-selectin. One of the reported ligands for E-selectin is the sialylated Lewis X antigen (SLex). Eosinophils, like neutrophils, use sialylated, protease-resistant structures to bind to E-selectin, although the eosinophil expresses much lower levels of these structures on its surface. Ligands for P-selectin on eosinophils and neutrophils are similar sialylated, protease-sensitive, endo-beta-galactosidase-resistant structures, clearly different from those reported for E-selectin, suggesting disparate roles for P-selectin and E-selectin during recruitment in inflammatory responses. Bonding mechanisms Selectins have hinge domains, allowing them to undergo rapid conformational changes in the nanosecond range between ‘open’ and ‘closed’ conformations. Shear stress on the selectin molecule causes it to favor the ‘open’ conformation. In leukocyte rolling, the ‘open’ conformation of the selectin allows it to bind to inward sialyl Lewis molecules farther up along the PSGL-1 chain, increasing overall binding affinity—if the selectin-sialyl Lewis bond breaks, it can slide and form new bonds with the other sialyl Lewis molecules down the chain. In the ‘closed’ conformation, however, the selectin is only able to bind to one sialyl Lewis molecule, and thus has greatly reduced binding affinity. The result is that selectins exhibit catch- and slip-bond behavior: under low shear stresses, their binding affinities are actually increased by an increase in the tensile force applied to the bond, because more selectins prefer the ‘open’ conformation. At high stresses, the binding affinities are reduced as usual, because the selectin-ligand bond remains a normal slip bond. It is thought that this shear stress threshold helps select for the right diameter of blood vessels to initiate leukocyte extravasation, and may also help prevent inappropriate leukocyte aggregation during vascular stasis. Role in cancer It is becoming evident that selectins may play a role in the inflammation and progression of cancer. Tumor cells exploit the selectin-dependent mechanisms mediating cell tethering and rolling interactions through recognition of carbohydrate ligands on tumor cells to enhance distant organ metastasis, showing ‘leukocyte mimicry’.
A number of studies have shown increased expression of carbohydrate ligands on metastatic tumors, enhanced E-selectin expression on the surface of endothelial vessels at the site of tumor metastasis, and the capacity of metastatic tumor cells to roll and adhere to endothelial cells, indicating the role of selectins in metastasis. In addition to E-selectin, the role of P-selectin (expressed on platelets) and L-selectin (on leukocytes) in cancer dissemination has been suggested in the way that they interact with circulating cancer cells at an early stage of metastasis. Organ selectivity The selectins and selectin ligands determine the organ selectivity of metastasis. Several factors may explain the seed and soil theory or homing of metastasis. In particular, genetic regulation and activation of specific chemokines, cytokines and proteases may direct metastasis to a preferred organ. In fact, the extravasation of circulating tumor cells in the host organ requires successive adhesive interactions between endothelial cells and their ligands or counter-receptors present on the cancer cells. Metastatic cells that show a high propensity to metastasize to certain organs adhere at higher rates to venular endothelial cells isolated from these target sites. Moreover, they invade the target tissue at higher rates and respond better to paracrine growth factors released from the target site. Typically, the cancer cell/endothelial cell interactions imply first a selectin-mediated initial attachment and rolling of the circulating cancer cells on the endothelium. The rolling cancer cells then become activated by locally released chemokines present at the surface of endothelial cells. This triggers the activation of integrins from the cancer cells allowing their firmer adhesion to members of the Ig-CAM family such as ICAM, initiating the transendothelial migration and extravasation processes. The appropriate set of endothelial receptors is sometimes not expressed constitutively and the cancer cells have to trigger their expression. In this context, the culture supernatants of cancer cells can trigger the expression of E-selectin by endothelial cells, suggesting that cancer cells may themselves release cytokines such as TNF-α, IL-1β or IFN-γ that will directly activate endothelial cells to express E-selectin, P-selectin, ICAM-2 or VCAM. On the other hand, several studies further show that cancer cells may initiate the expression of endothelial adhesion molecules in more indirect ways. Since the adhesion of several cancer cells to endothelium requires the presence of endothelial selectins as well as sialyl Lewis carbohydrates on cancer cells, the degree of expression of selectins on the vascular wall and the presence of the appropriate ligand on cancer cells are determining factors for their adhesion and extravasation into a specific organ. The differential selectin expression profile on endothelium and the specific interactions of selectins expressed by endothelial cells of potential target organs and their ligands expressed on cancer cells are major determinants that underlie the organ-specific distribution of metastases. Research Selectins are involved in projects to treat osteoporosis, a disease that occurs when bone-creating cells called osteoblasts become too scarce. Osteoblasts develop from stem cells, and scientists hope to eventually be able to treat osteoporosis by adding stem cells to a patient’s bone marrow.
Researchers have developed a way to use selectins to direct stem cells introduced into the vascular system to the bone marrow. E-selectins are constitutively expressed in the bone marrow, and researchers have shown that tagging stem cells with a certain glycoprotein causes these cells to migrate to the bone marrow. Thus, selectins may someday be essential to a regenerative therapy for osteoporosis. See also Sushi domain References External links Sackstein Lab of Research Computer-generated movie of the mobilization of P-selectin inside a leukocyte at mcb.harvard.edu Cell adhesion proteins Lectins Selectins Protein domains Single-pass transmembrane proteins
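The catch- and slip-bond behavior described in the Bonding mechanisms section above is commonly modeled with a two-pathway dissociation rate, in which pulling force suppresses a fast "catch" pathway while accelerating a slow "slip" pathway. The short Python sketch below illustrates that generic modeling idea with assumed, order-of-magnitude parameters; it is an illustration of the concept, not data or parameter values from this article.

```python
import numpy as np

# Two-pathway catch-slip bond model (illustrative; all parameters assumed).
kT = 4.1                 # thermal energy at room temperature, pN*nm
k_c, x_c = 100.0, 1.0    # "catch" pathway: zero-force rate (1/s), distance (nm)
k_s, x_s = 0.5, 0.4      # "slip" pathway: zero-force rate (1/s), distance (nm)

def off_rate(force_pn):
    """Total dissociation rate (1/s): force inhibits the catch pathway
    and accelerates the slip pathway (Bell-type kinetics)."""
    return k_c * np.exp(-force_pn * x_c / kT) + k_s * np.exp(force_pn * x_s / kT)

for F in (0, 10, 20, 30, 40, 60):
    print(f"F = {F:2d} pN   mean bond lifetime = {1.0 / off_rate(F):.3f} s")
```

With these assumed numbers the mean bond lifetime first rises with force (the catch regime that supports rolling under shear) and then falls beyond roughly 20 pN (the ordinary slip regime), matching the qualitative behavior described above.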
Selectin
Biology
2,483
39,937,659
https://en.wikipedia.org/wiki/Eigenstate%20thermalization%20hypothesis
The eigenstate thermalization hypothesis (or ETH) is a set of ideas which purports to explain when and why an isolated quantum mechanical system can be accurately described using equilibrium statistical mechanics. In particular, it is devoted to understanding how systems which are initially prepared in far-from-equilibrium states can evolve in time to a state which appears to be in thermal equilibrium. The phrase "eigenstate thermalization" was first coined by Mark Srednicki in 1994, after similar ideas had been introduced by Josh Deutsch in 1991. The principal philosophy underlying the eigenstate thermalization hypothesis is that instead of explaining the ergodicity of a thermodynamic system through the mechanism of dynamical chaos, as is done in classical mechanics, one should instead examine the properties of matrix elements of observable quantities in individual energy eigenstates of the system. Motivation In statistical mechanics, the microcanonical ensemble is a particular statistical ensemble which is used to make predictions about the outcomes of experiments performed on isolated systems that are believed to be in equilibrium with an exactly known energy. The microcanonical ensemble is based upon the assumption that, when such an equilibrated system is probed, all of the microscopic states with the same total energy are equally probable. With this assumption, the ensemble average of an observable quantity is found by averaging the value of that observable over all microstates with the correct total energy: $\langle A\rangle_{\text{mc}}=\frac{1}{\mathcal{N}}\sum_{\alpha}\langle\alpha|A|\alpha\rangle$, where the sum runs over the $\mathcal{N}$ microstates $|\alpha\rangle$ in the relevant energy window. Importantly, this quantity is independent of everything about the initial state except for its energy. The assumptions of ergodicity are well-motivated in classical mechanics as a result of dynamical chaos, since a chaotic system will in general spend equal time in equal areas of its phase space. If we prepare an isolated, chaotic, classical system in some region of its phase space, then as the system is allowed to evolve in time, it will sample its entire phase space, subject only to a small number of conservation laws (such as conservation of total energy). If one can justify the claim that a given physical system is ergodic, then this mechanism will provide an explanation for why statistical mechanics is successful in making accurate predictions. For example, the hard sphere gas has been rigorously proven to be ergodic. This argument cannot be straightforwardly extended to quantum systems, even ones that are analogous to chaotic classical systems, because time evolution of a quantum system does not uniformly sample all vectors in Hilbert space with a given energy. Given the state at time zero $|\psi(0)\rangle=\sum_\alpha c_\alpha|E_\alpha\rangle$ in a basis of energy eigenstates, the expectation value of any observable $A$ is $\langle A(t)\rangle=\sum_{\alpha,\beta}c_\alpha^* c_\beta\,e^{i(E_\alpha-E_\beta)t/\hbar}A_{\alpha\beta}$. Even if the $E_\alpha$ are incommensurate, so that this expectation value is given for long times by $\langle A(t)\rangle\to\sum_\alpha|c_\alpha|^2A_{\alpha\alpha}$, the expectation value permanently retains knowledge of the initial state in the form of the coefficients $c_\alpha$. In principle it is thus an open question as to whether an isolated quantum mechanical system, prepared in an arbitrary initial state, will approach a state which resembles thermal equilibrium, in which a handful of observables are adequate to make successful predictions about the system. However, a variety of experiments in cold atomic gases have indeed observed thermal relaxation in systems which are, to a very good approximation, completely isolated from their environment, and for a wide class of initial states.
The task of explaining this experimentally observed applicability of equilibrium statistical mechanics to isolated quantum systems is the primary goal of the eigenstate thermalization hypothesis. Statement Suppose that we are studying an isolated, quantum mechanical many-body system. In this context, "isolated" refers to the fact that the system has no (or at least negligible) interactions with the environment external to it. If the Hamiltonian of the system is denoted $H$, then a complete set of basis states for the system is given in terms of the eigenstates of the Hamiltonian, $H|E_\alpha\rangle=E_\alpha|E_\alpha\rangle$, where $|E_\alpha\rangle$ is the eigenstate of the Hamiltonian with eigenvalue $E_\alpha$. We will refer to these states simply as "energy eigenstates." For simplicity, we will assume that the system has no degeneracy in its energy eigenvalues, and that it is finite in extent, so that the energy eigenvalues form a discrete, non-degenerate spectrum (this is not an unreasonable assumption, since any "real" laboratory system will tend to have sufficient disorder and strong enough interactions as to eliminate almost all degeneracy from the system, and of course will be finite in size). This allows us to label the energy eigenstates in order of increasing energy eigenvalue. Additionally, consider some other quantum-mechanical observable $A$, which we wish to make thermal predictions about. The matrix elements of this operator, as expressed in a basis of energy eigenstates, will be denoted by $A_{\alpha\beta}\equiv\langle E_\alpha|A|E_\beta\rangle$. We now imagine that we prepare our system in an initial state for which the expectation value of $A$ is far from its value predicted in a microcanonical ensemble appropriate to the energy scale in question (we assume that our initial state is some superposition of energy eigenstates which are all sufficiently "close" in energy). The eigenstate thermalization hypothesis says that for an arbitrary initial state, the expectation value of $A$ will ultimately evolve in time to its value predicted by a microcanonical ensemble, and thereafter will exhibit only small fluctuations around that value, provided that the following two conditions are met: The diagonal matrix elements $A_{\alpha\alpha}$ vary smoothly as a function of energy, with the difference between neighboring values, $A_{\alpha+1,\alpha+1}-A_{\alpha\alpha}$, becoming exponentially small in the system size. The off-diagonal matrix elements $A_{\alpha\beta}$, with $\alpha\neq\beta$, are much smaller than the diagonal matrix elements, and in particular are themselves exponentially small in the system size. These conditions can be written as $A_{\alpha\beta}=\bar{A}(\bar{E})\,\delta_{\alpha\beta}+D^{-1/2}f(\bar{E},\omega)R_{\alpha\beta}$, with $\bar{E}=(E_\alpha+E_\beta)/2$ and $\omega=E_\alpha-E_\beta$, where $\bar{A}$ and $f$ are smooth functions of energy, $D$ is the many-body Hilbert space dimension, and $R_{\alpha\beta}$ is a random variable with zero mean and unit variance. Conversely, if a quantum many-body system satisfies the ETH, the matrix representation of any local operator in the energy eigenbasis is expected to follow the above ansatz. Equivalence of the diagonal and microcanonical ensembles We can define a long-time average of the expectation value of the operator $A$ according to the expression $\bar{A}\equiv\lim_{T\to\infty}\frac{1}{T}\int_0^T dt\,\langle A(t)\rangle$. If we use the explicit expression for the time evolution of this expectation value, we can write $\bar{A}=\lim_{T\to\infty}\frac{1}{T}\int_0^T dt\sum_{\alpha,\beta}c_\alpha^* c_\beta\,e^{i(E_\alpha-E_\beta)t/\hbar}A_{\alpha\beta}$. The integration in this expression can be performed explicitly, and the result is $\bar{A}=\sum_\alpha|c_\alpha|^2A_{\alpha\alpha}+\lim_{T\to\infty}\sum_{\alpha\neq\beta}c_\alpha^* c_\beta A_{\alpha\beta}\,\frac{e^{i(E_\alpha-E_\beta)T/\hbar}-1}{i(E_\alpha-E_\beta)T/\hbar}$. Each of the terms in the second sum will become smaller as the limit is taken to infinity.
Assuming that the phase coherence between the different exponential terms in the second sum does not ever become large enough to rival this decay, the second sum will go to zero, and we find that the long-time average of the expectation value is given by $\bar{A}=\sum_\alpha|c_\alpha|^2A_{\alpha\alpha}$. This prediction for the time-average of the observable $A$ is referred to as its predicted value in the diagonal ensemble, $\langle A\rangle_{DE}\equiv\sum_\alpha|c_\alpha|^2A_{\alpha\alpha}$. The most important aspect of the diagonal ensemble is that it depends explicitly on the initial state of the system, and so would appear to retain all of the information regarding the preparation of the system. In contrast, the predicted value in the microcanonical ensemble is given by the equally-weighted average over all energy eigenstates within some energy window centered around the mean energy of the system, $\langle A\rangle_{MC}=\frac{1}{\mathcal{N}}\sum_\alpha' A_{\alpha\alpha}$, where $\mathcal{N}$ is the number of states in the appropriate energy window, and the prime on the sum indices indicates that the summation is restricted to this appropriate microcanonical window. This prediction makes absolutely no reference to the initial state of the system, unlike the diagonal ensemble. Because of this, it is not clear why the microcanonical ensemble should provide such an accurate description of the long-time averages of observables in such a wide variety of physical systems. However, suppose that the matrix elements $A_{\alpha\alpha}$ are effectively constant over the relevant energy window, with fluctuations that are sufficiently small. If this is true, this one constant value $A$ can effectively be pulled out of the sum, and the prediction of the diagonal ensemble is simply equal to this value, $\langle A\rangle_{DE}\approx A\sum_\alpha|c_\alpha|^2=A$, where we have assumed that the initial state is normalized appropriately. Likewise, the prediction of the microcanonical ensemble becomes $\langle A\rangle_{MC}\approx\frac{1}{\mathcal{N}}\sum_\alpha' A=A$. The two ensembles are therefore in agreement. This constancy of the values of $A_{\alpha\alpha}$ over small energy windows is the primary idea underlying the eigenstate thermalization hypothesis. Notice that in particular, it states that the expectation value of $A$ in a single energy eigenstate is equal to the value predicted by a microcanonical ensemble constructed at that energy scale. This constitutes a foundation for quantum statistical mechanics which is radically different from the one built upon the notions of dynamical ergodicity. Tests Several numerical studies of small lattice systems appear to tentatively confirm the predictions of the eigenstate thermalization hypothesis in interacting systems which would be expected to thermalize. Likewise, systems which are integrable tend not to obey the eigenstate thermalization hypothesis. Some analytical results can also be obtained if one makes certain assumptions about the nature of highly excited energy eigenstates. The original 1994 paper on the ETH by Mark Srednicki studied, in particular, the example of a quantum hard sphere gas in an insulated box. This is a system which is known to exhibit chaos classically. For states of sufficiently high energy, Berry's conjecture states that energy eigenfunctions in this many-body system of hard sphere particles will appear to behave as superpositions of plane waves, with the plane waves entering the superposition with random phases and Gaussian-distributed amplitudes (the precise notion of this random superposition is clarified in the paper).
Under this assumption, one can show that, up to corrections which are negligibly small in the thermodynamic limit, the momentum distribution function for each individual, distinguishable particle is equal to the Maxwell–Boltzmann distribution $f(\mathbf{p})=(2\pi mkT)^{-3/2}e^{-\mathbf{p}^2/2mkT}$, where $\mathbf{p}$ is the particle's momentum, m is the mass of the particles, k is the Boltzmann constant, and the "temperature" $T$ is related to the energy of the eigenstate according to the usual equation of state for an ideal gas, $E=\frac{3}{2}NkT$, where N is the number of particles in the gas. This result is a specific manifestation of the ETH, in that it results in a prediction for the value of an observable in one energy eigenstate which is in agreement with the prediction derived from a microcanonical (or canonical) ensemble. Note that no averaging over initial states whatsoever has been performed, nor has anything resembling the H-theorem been invoked. Additionally, one can also derive the appropriate Bose–Einstein or Fermi–Dirac distributions, if one imposes the appropriate commutation relations for the particles comprising the gas. Currently, it is not well understood how high the energy of an eigenstate of the hard sphere gas must be in order for it to obey the ETH. A rough criterion is that the average thermal wavelength of each particle be sufficiently smaller than the radius of the hard sphere particles, so that the system can probe the features which result in chaos classically (namely, the fact that the particles have a finite size). However, it is conceivable that this condition may be able to be relaxed, and perhaps in the thermodynamic limit, energy eigenstates of arbitrarily low energies will satisfy the ETH (aside from the ground state itself, which is required to have certain special properties, for example, the lack of any nodes). Alternatives Three alternative explanations for the thermalization of isolated quantum systems are often proposed: For initial states of physical interest, the coefficients $c_\alpha$ exhibit large fluctuations from eigenstate to eigenstate, in a fashion which is completely uncorrelated with the fluctuations of $A_{\alpha\alpha}$ from eigenstate to eigenstate. Because the coefficients and matrix elements are uncorrelated, the summation in the diagonal ensemble is effectively performing an unbiased sampling of the values of $A_{\alpha\alpha}$ over the appropriate energy window. For a sufficiently large system, this unbiased sampling should result in a value which is close to the true mean of the values of $A_{\alpha\alpha}$ over this window, and will effectively reproduce the prediction of the microcanonical ensemble. However, this mechanism may be disfavored for the following heuristic reason. Typically, one is interested in physical situations in which the initial expectation value of $A$ is far from its equilibrium value. For this to be true, the initial state must contain some sort of specific information about $A$, and so it becomes suspect whether or not the initial state truly represents an unbiased sampling of the values of $A_{\alpha\alpha}$ over the appropriate energy window. Furthermore, whether or not this were to be true, it still does not provide an answer to the question of when arbitrary initial states will come to equilibrium, if they ever do. For initial states of physical interest, the coefficients $c_\alpha$ are effectively constant, and do not fluctuate at all. In this case, the diagonal ensemble is precisely the same as the microcanonical ensemble, and there is no mystery as to why their predictions are identical. However, this explanation is disfavored for much the same reasons as the first.
Integrable quantum systems have been proven to thermalize under the condition of simple regular time-dependence of parameters, suggesting that cosmological expansion of the Universe and integrability of the most fundamental equations of motion are ultimately responsible for thermalization. Temporal fluctuations of expectation values The condition that the ETH imposes on the diagonal elements of an observable is responsible for the equality of the predictions of the diagonal and microcanonical ensembles. However, the equality of these long-time averages does not guarantee that the fluctuations in time around this average will be small. That is, the equality of the long-time averages does not ensure that the expectation value of $A$ will settle down to this long-time average value, and then stay there for most times. In order to deduce the conditions necessary for the observable's expectation value to exhibit small temporal fluctuations around its time-average, we study the mean squared amplitude of the temporal fluctuations, defined as $\overline{(\langle A(t)\rangle-\bar{A})^2}\equiv\lim_{T\to\infty}\frac{1}{T}\int_0^T dt\,(\langle A(t)\rangle-\bar{A})^2$, where $\langle A(t)\rangle$ is a shorthand notation for the expectation value of $A$ at time t. This expression can be computed explicitly, and one finds that $\overline{(\langle A(t)\rangle-\bar{A})^2}=\sum_{\alpha\neq\beta}|c_\alpha|^2|c_\beta|^2|A_{\alpha\beta}|^2$. Temporal fluctuations about the long-time average will be small so long as the off-diagonal elements satisfy the conditions imposed on them by the ETH, namely that they become exponentially small in the system size. Notice that this condition allows for the possibility of isolated resurgence times, in which the phases align coherently in order to produce large fluctuations away from the long-time average. The amount of time the system spends far away from the long-time average is guaranteed to be small so long as the above mean squared amplitude is sufficiently small. If a system possesses a dynamical symmetry, however, it will periodically oscillate around the long-time average. Quantum fluctuations and thermal fluctuations The expectation value of a quantum mechanical observable represents the average value which would be measured after performing repeated measurements on an ensemble of identically prepared quantum states. Therefore, while we have been examining this expectation value as the principal object of interest, it is not clear to what extent this represents physically relevant quantities. As a result of quantum fluctuations, the expectation value of an observable is not typically what will be measured during one experiment on an isolated system. However, it has been shown that for an observable satisfying the ETH, quantum fluctuations in its expectation value will typically be of the same order of magnitude as the thermal fluctuations which would be predicted in a traditional microcanonical ensemble. This lends further credence to the idea that the ETH is the underlying mechanism responsible for the thermalization of isolated quantum systems. General validity Currently, there is no known analytical derivation of the eigenstate thermalization hypothesis for general interacting systems. However, it has been verified to be true for a wide variety of interacting systems using numerical exact diagonalization techniques, to within the uncertainty of these methods. It has also been proven to be true in certain special cases in the semi-classical limit, where the validity of the ETH rests on the validity of Shnirelman's theorem, which states that in a system which is classically chaotic, the expectation value of an operator in an energy eigenstate is equal to its classical, microcanonical average at the appropriate energy.
Whether or not it can be shown to be true more generally in interacting quantum systems remains an open question. It is also known to explicitly fail in certain integrable systems, in which the presence of a large number of constants of motion prevents thermalization. It is also important to note that the ETH makes statements about specific observables on a case-by-case basis; it does not make any claims about whether every observable in a system will obey ETH. In fact, this certainly cannot be true. Given a basis of energy eigenstates, one can always explicitly construct an operator which violates the ETH, simply by writing down the operator as a matrix in this basis whose elements explicitly do not obey the conditions imposed by the ETH. Conversely, it is always trivially possible to find operators which do satisfy ETH, by writing down a matrix whose elements are specifically chosen to obey ETH. In light of this, one may be led to believe that the ETH is somewhat trivial in its usefulness. However, the important consideration to bear in mind is that these operators thus constructed may not have any physical relevance. While one can construct these matrices, it is not clear that they correspond to observables which could be realistically measured in an experiment, or bear any resemblance to physically interesting quantities. An arbitrary Hermitian operator on the Hilbert space of the system need not correspond to something which is a physically measurable observable. Typically, the ETH is postulated to hold for "few-body operators," observables which involve only a small number of particles. Examples of this would include the occupation of a given momentum in a gas of particles, or the occupation of a particular site in a lattice system of particles. Notice that while the ETH is typically applied to "simple" few-body operators such as these, these observables need not be local in space; the momentum number operator in the above example does not represent a local quantity. There has also been considerable interest in the case where isolated, non-integrable quantum systems fail to thermalize, despite the predictions of conventional statistical mechanics. Disordered systems which exhibit many-body localization are candidates for this type of behavior, with the possibility of excited energy eigenstates whose thermodynamic properties more closely resemble those of ground states. It remains an open question as to whether a completely isolated, non-integrable system without static disorder can ever fail to thermalize. One intriguing possibility is the realization of "Quantum Disentangled Liquids." It is also an open question whether all eigenstates must obey the ETH in a thermalizing system. The eigenstate thermalization hypothesis is closely connected to the quantum nature of chaos (see quantum chaos). Furthermore, since a classically chaotic system is also ergodic, almost all of its trajectories eventually explore uniformly the entire accessible phase space, which would imply the eigenstates of the quantum chaotic system fill the quantum phase space evenly (up to random fluctuations) in the semiclassical limit $\hbar\to 0$. In particular, there is a quantum ergodicity theorem showing that the expectation value of an operator converges to the corresponding microcanonical classical average as $\hbar\to 0$. However, the quantum ergodicity theorem leaves open the possibility of non-ergodic states such as quantum scars.
In addition to the conventional scarring, there are two other types of quantum scarring, which further illustrate the weak-ergodicity breaking in quantum chaotic systems: perturbation-induced and many-body quantum scars. Since the former arise as a combined effect of special nearly-degenerate unperturbed states and the localized nature of the perturbation (potential bumps), the scarring can slow down the thermalization process in disordered quantum dots and wells, which is further illustrated by the fact that these quantum scars can be used to propagate quantum wave packets in a disordered nanostructure with high fidelity. On the other hand, the latter form of scarring has been speculated to be the culprit behind the unexpectedly slow thermalization of cold atoms observed experimentally. See also Equilibrium thermodynamics Fluctuation dissipation theorem Important Publications in Statistical Mechanics Non-equilibrium thermodynamics Quantum thermodynamics Statistical physics Configuration entropy Chaos Theory Hard spheres Quantum statistical mechanics Microcanonical Ensemble H-theorem Adiabatic theorem Footnotes References External links "Overview of Eigenstate Thermalization Hypothesis" by Mark Srednicki, UCSB, KITP Program: Quantum Dynamics in Far from Equilibrium Thermally Isolated Systems "The Eigenstate Thermalization Hypothesis" by Mark Srednicki, UCSB, KITP Rapid Response Workshop: Black Holes: Complementarity, Fuzz, or Fire? "Quantum Disentangled Liquids" by Matthew P. A. Fisher, UCSB, KITP Conference: From the Renormalization Group to Quantum Gravity Celebrating the science of Joe Polchinski Hypotheses Quantum mechanics Statistical mechanics Thermodynamics
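As a numerical companion to the statements above, the following Python sketch (an illustration under a random-matrix assumption, not a calculation from the cited literature) treats a GOE random matrix as a stand-in for a chaotic Hamiltonian and checks that, for a fixed observable, the eigenstate-to-eigenstate fluctuations of the diagonal matrix elements and the magnitudes of the off-diagonal elements both shrink as the Hilbert space dimension grows, as ETH-like behavior requires.

```python
import numpy as np

rng = np.random.default_rng(0)

def eth_fluctuations(D):
    """Matrix-element statistics of a fixed observable in the eigenbasis
    of a GOE-like random 'Hamiltonian' of dimension D."""
    M = rng.standard_normal((D, D))
    H = (M + M.T) / 2.0                  # real symmetric (GOE-like) Hamiltonian
    _, V = np.linalg.eigh(H)             # columns of V are the energy eigenstates
    a = np.where(np.arange(D) < D // 2, 1.0, -1.0)
    A_eig = V.T @ np.diag(a) @ V         # observable in the energy eigenbasis
    diag = np.diag(A_eig)
    off = A_eig[~np.eye(D, dtype=bool)]
    return diag.std(), off.std()

for D in (100, 400, 1600):
    d, o = eth_fluctuations(D)
    print(f"D = {D:4d}   diagonal fluctuations = {d:.4f}   off-diagonal = {o:.4f}")
# Both columns shrink roughly like 1/sqrt(D), the random-matrix analogue of
# the exponential-in-system-size suppression demanded by the ETH ansatz.
```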
Eigenstate thermalization hypothesis
Physics,Chemistry,Mathematics
4,363
34,917,449
https://en.wikipedia.org/wiki/PRESENT
PRESENT is a lightweight block cipher, developed by Orange Labs (France), Ruhr University Bochum (Germany) and the Technical University of Denmark in 2007. PRESENT was designed by Andrey Bogdanov, Lars R. Knudsen, Gregor Leander, Christof Paar, Axel Poschmann, Matthew J. B. Robshaw, Yannick Seurin, and C. Vikkelsoe. The algorithm is notable for its compact size (about 2.5 times smaller than AES). Overview The block size is 64 bits and the key size can be 80 or 128 bits. The non-linear layer is based on a single 4-bit S-box which was designed with hardware optimizations in mind. PRESENT is intended to be used in situations where low-power consumption and high chip efficiency are desired. The International Organization for Standardization and the International Electrotechnical Commission included PRESENT in an international standard for lightweight cryptographic methods. Cryptanalysis A truncated differential attack on 26 out of 31 rounds of PRESENT was suggested in 2014. Several full-round attacks using biclique cryptanalysis have been introduced on PRESENT. By design, all block ciphers with a block size of 64 bits can have problems with block collisions if they are used with large amounts of data. Therefore, implementations need to make sure that the amount of data encrypted with the same key is limited and rekeying is properly implemented. Performance PRESENT uses bit-oriented permutations and is not software-friendly. It is clearly targeted at hardware, where bit-permutations are possible with simple wiring. The performance of PRESENT in software on microcontrollers has been evaluated using FELICS (Fair Evaluation of Lightweight Cryptographic Systems), a benchmarking framework for software implementations of lightweight cryptographic primitives. Standardization PRESENT is included in the following standards. ISO/IEC 29167-11:2014, Information technology - Automatic identification and data capture techniques - Part 11: Crypto suite PRESENT-80 security services for air interface communications ISO/IEC 29192-2:2019, Information security - Lightweight cryptography - Part 2: Block ciphers References External links PRESENT: An Ultra-Lightweight Block Cipher http://www.lightweightcrypto.org/implementations.php Software Implementations in C and Python https://web.archive.org/web/20160809024354/http://cis.sjtu.edu.cn/index.php/Software_Implementation_of_Block_Cipher_PRESENT_for_8-Bit_Platforms C implementation http://www.emsec.rub.de/media/crypto/veroeffentlichungen/2011/01/29/present_ches2007_slides.pdf Talk slides from Cryptographic Hardware and Embedded Systems Block ciphers Cryptography
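For concreteness, the following is a straight-line Python sketch of PRESENT-80 encryption based on the cipher's published specification (the 4-bit S-box, the bit permutation P(i) = 16i mod 63 with bit 63 fixed, and the 61-bit key-register rotation). It is offered as an illustrative, unoptimized reference rather than production code; if the transcription is faithful, the final print should reproduce the all-zero test vector from the original paper (ciphertext 0x5579c1387b228445).

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    # Apply the 4-bit S-box to each of the sixteen nibbles of the 64-bit state.
    return sum(SBOX[(state >> (4 * i)) & 0xF] << (4 * i) for i in range(16))

def p_layer(state):
    # Bit i of the state moves to position 16*i mod 63 (bit 63 stays in place).
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def round_keys_80(key):
    # 80-bit key schedule: each round key is the top 64 bits of the register;
    # the register is then rotated left by 61 bits, its top nibble is passed
    # through the S-box, and the round counter is XORed into bits 19..15.
    keys = []
    for i in range(1, 33):
        keys.append(key >> 16)
        key = ((key << 61) | (key >> 19)) & ((1 << 80) - 1)
        key = (SBOX[key >> 76] << 76) | (key & ((1 << 76) - 1))
        key ^= i << 15
    return keys

def present80_encrypt(plaintext, key):
    ks = round_keys_80(key)
    state = plaintext
    for i in range(31):                 # 31 full rounds ...
        state = p_layer(sbox_layer(state ^ ks[i]))
    return state ^ ks[31]               # ... plus a final key addition

print(hex(present80_encrypt(0, 0)))    # expected: 0x5579c1387b228445
```

The bit-by-bit loop in p_layer makes the hardware point above concrete: in silicon this permutation is free wiring, while in software it costs dozens of shift-and-mask operations per round.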
PRESENT
Mathematics,Engineering
589
3,504,251
https://en.wikipedia.org/wiki/Transylvania%20lottery
In mathematical combinatorics, the Transylvania lottery is a lottery in which players select three numbers from 1 to 14 for each ticket, and then three numbers are drawn at random. A ticket wins if two of its numbers match the random ones. The problem asks how many tickets the player must buy in order to be certain of winning. An upper bound can be given using the Fano plane with a collection of 14 tickets in two sets of seven. Each set of seven uses every line of a Fano plane, labelled with the numbers 1 to 7, and 8 to 14. At least two of the three randomly chosen numbers must be in one Fano plane set, and any two points on a Fano plane are on a line, so there will be a ticket in the collection containing those two numbers. There is a 6/13 × 5/12 = 5/26 chance that all three randomly chosen numbers are in the same Fano plane set. In this case, there is a 1/5 chance that they are on a line, and hence all three numbers are on one ticket; otherwise the three pairs appear on three different tickets. See also Combinatorial design Lottery Wheeling References Combinatorics
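The Fano-plane construction above is small enough to verify exhaustively. The Python sketch below uses one standard labelling of the Fano plane's seven lines (any valid labelling works, so this particular choice is an assumption of the example) and checks every possible draw against the 14 tickets.

```python
from itertools import combinations
from fractions import Fraction

# One standard labelling of the seven lines of the Fano plane on {1,...,7}.
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
        {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# 14 tickets: the lines on {1..7} plus the same lines shifted onto {8..14}.
tickets = fano + [{n + 7 for n in line} for line in fano]

# Every one of the C(14,3) = 364 possible draws must share at least
# two numbers with some ticket.
assert all(any(len(set(draw) & t) >= 2 for t in tickets)
           for draw in combinations(range(1, 15), 3))
print("14 tickets always win")

# Probability that all three drawn numbers land in the same half:
print(Fraction(6, 13) * Fraction(5, 12))  # 5/26, matching the text
```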
Transylvania lottery
Mathematics
231
33,388,213
https://en.wikipedia.org/wiki/Majid%20Jafar
Majid Hamid Jafar (Arabic: مجيد حميد جعفر; born 1976) is an Emirati businessman of Iraqi descent. He is the CEO of Crescent Petroleum, vice-chairman of the Crescent Group, and managing director of Dana Gas. In 2021 Jafar was named among the 100 inspiring leaders in the Middle East by Arabian Business magazine. Early life and education Majid Jafar is the eldest son of Hamid Jafar, founder of Crescent Petroleum and chairman of the Crescent Group, and Sawsan Al-Fahoum Jafar, who serves as Chairman of the Board of the Friends of Cancer Patients charity in the UAE. He is the grandson of the Iraqi politician and cabinet minister Dhia Jafar, who served in the last decade of Iraq's monarchy, during the reign of King Faisal II until 1958. The Jafar family is a notable Iraqi family that claims agnatic descent from Musa al-Kadhim. Jafar attended Eton College and graduated from the University of Cambridge (Churchill College) with bachelor's and master's degrees in engineering (fluid mechanics and thermodynamics). He holds a master's degree in international studies and diplomacy with distinction from the University of London's School of Oriental and African Studies (SOAS), and an MBA with distinction from Harvard Business School. Career In his early career Jafar worked for Shell International's Exploration & Production and Gas & Power Divisions in London until 2004. In 2004, he joined Crescent Petroleum at their headquarters in Sharjah, UAE. He became CEO of Crescent Petroleum in 2011. Crescent Petroleum's business and exploration focus lies in the MENA region with a special focus on Egypt and Iraq, from where the Jafar family originates. In a 2018 licensing round Crescent Petroleum was awarded three concessions for gas fields in Diyala province as well as the Khidhr Al Mai exploration block in southern Iraq. Jafar was named as one of the 25 most powerful people in the Middle East oil and gas sector according to Oil & Gas Middle East. He is also a frequent commentator on the oil and gas sector and energy policy and has written on the economic challenges in the Arab World, and the geopolitics of oil and gas in the Caspian Region. Jafar is also a member of the board of the International Advisory Council of the Atlantic Council. He is also a trustee of the Arab Forum for Environment and Development (AFED) and a board member at the Iraqi Energy Institute, a member of the Young Presidents Organisation and of the panel of senior advisors of British think tank Chatham House. He has repeatedly stressed the importance of the expansion of the private sector to fully develop the potential of natural resources in the region. Jafar has championed the importance of the oil and gas industry in the low-carbon energy transition, highlighting the important role natural gas will play in tackling carbon emissions particularly in the developing world. In 2017, Jafar co-chaired the WEF MENA Summit together with EU Commission President Ursula von der Leyen and McKinsey managing partner Dominic Barton. He was named a Young Global Leader by the World Economic Forum (WEF). In 2021, Jafar announced that Crescent Petroleum became one of the first companies in the oil and gas industry to achieve carbon neutrality across its operations after completing a series of projects to reduce carbon intensity and offset remaining emissions.
In December 2023, Jafar also committed Crescent Petroleum as a signatory to the Oil and Gas Decarbonization Charter (OGDC), a global industry charter dedicated to achieving net-zero operations by 2050 at the latest, and achieving near-zero upstream methane emissions and ending routine flaring by 2030. In 2021 Jafar was named as one of 100 inspiring leaders in the Middle East, and was listed in Dubai's 100: Most influential people in the Emirate by Arabian Business. In 2023 and 2024, Forbes Middle East magazine included him in its Middle East Sustainable 100 list. Other work Jafar has written columns for the Financial Times and HuffPost, and is regularly interviewed on news channels like CNN and CNBC. Jafar previously served on the Middle East Advisory board of Carnegie Endowment and Harvard Business School, and the board of the Queen Rania Foundation. Jafar authored the opening chapter of Performance and Progress: Essays on Capitalism, Business and Society, published in 2015 by Oxford University Press. In October 2020 Jafar was a signatory to the World Economic Forum's Principles of Stakeholder Capitalism for the Middle East and North Africa. In 2021, Crescent Petroleum partnered with Edraak to launch the Edraak Career Readiness programme to boost the employability skills of half a million young people across the MENA region. In 2023, the company built on the success of the initiative with Edraak by launching the Career Compass Pathway course to teach interviewing and job-hunting skills, targeting 1 million registrations across the Middle East. Philanthropy Jafar is the Co-founder of the Loulou Foundation, which he established together with his wife Lynn to address their eldest daughter Alia's rare disease (CDKL5 Deficiency Disorder). By 2022, the foundation had supported 60 projects at 45 different institutions around the world, enabling the dedicated research of over 180 scientists. Jafar and his wife were honored at the 2017 Finding A Cure for Epilepsy and Seizures (FACES) Gala in New York for their contributions to research towards better treatments for children who have epileptic seizures and other chronic conditions. In 2023, they received honorary degrees from the University of Edinburgh as University Benefactors on behalf of the Loulou Foundation. Separately, Jafar serves as Co-Chair of the Cambridge Children’s Hospital Fundraising Campaign and is a cornerstone donor to the pioneering hospital project, which will be one of the first hospitals designed to bring together treatment of mental and physical health, with an embedded University of Cambridge research institute. In 2020 the Jafars also established an endowed scholarship at Harvard Medical School to support medical students from the Middle East region. Jafar is a member of the Board of Fellows and Co-Chair of the Discovery Council of Harvard Medical School. Jafar's father, Hamid Jafar, supported the Jafar Research Professorship of Petroleum Engineering at Cambridge University. In 2015, the Jafar Hall and the Jafar Gallery at Eton College, supported by the Jafar family, were opened by the Prince of Wales. Personal life Jafar is married to Lynn Barghout Jafar, daughter of businessman and philanthropist Bassam Barghout, who was awarded the Order of the Cedar with rank of Knight by the President and Prime Minister of Lebanon for his services to the country.
Her grandfather Khalil Al-Hibri served as a member of parliament, Chairman of the Water Board of Beirut, Minister of Public Works and then Prime Minister of Lebanon in 1958, heading the transitional government in response to the Lebanon Crisis. Her great-grandfather was Sheikh Toufik El-Hibri, a founder of the Scout movement in Lebanon and across the Arab World. Majid and Lynn Jafar have four children (two girls and two boys). Lynn Barghout Jafar founded and manages High Hopes Dubai, a pediatric therapy center, which was opened in November 2017 by HRH Princess Haya bint Hussein and further expanded in March 2023. Majid and Lynn Jafar are patrons of the Dubai Collection for Art. References 1976 births Living people Alumni of Churchill College, Cambridge Alumni of SOAS University of London Emirati businesspeople Emirati chief executives Emirati engineers Emirati people of Iraqi descent Harvard Business School alumni Harvard Medical School people People educated at Eton College Petroleum engineers
Majid Jafar
Engineering
1,556
52,307,694
https://en.wikipedia.org/wiki/Robert%20Cekuta
Robert Francis Cekuta (born 1954) is a career Foreign Service Officer and served as U.S. Ambassador to Azerbaijan from February 2015 through March 2018. Early life and education Cekuta attended Georgetown University's School of Foreign Service, graduating in 1976 with a B.S. He then went to the Thunderbird School of Global Management, earning a master's degree in international marketing in 1978. He later earned another master's degree in national security strategies from the National War College. Career Cekuta joined the U.S. Foreign Service in 1978 and his early assignments included Vienna, Austria; Baghdad, Iraq; Johannesburg, South Africa; and Sana'a, Yemen. He also directed a task force in Kosovo during the conflict there and served in the Bureau of Near East and South Asian Affairs. From 1996 to 1999, he was deputy chief of mission in the U.S. Embassy in Tirana, Albania. Much of Cekuta's career has focused on business and trade issues. In 1999, he was senior advisor to the Office of the U.S. Trade Representative and in 2000 he was named director of Economic Policy Analysis and Public Diplomacy in the State Department. Cekuta in 2002 was named director of the Iraq Economic Group in the Bureau of Economic and Business Affairs. In 2002, he was also the bureau's special negotiator for biotechnology. Beginning in 2003, Cekuta was economic minister-counselor at the embassy in Berlin and in 2007 he was sent to Tokyo as the minister-counselor for economic affairs. Cekuta returned to the United States in 2010, first as senior advisor for food security in the State Department and later that year as Deputy Assistant Secretary of State for Energy, Sanctions and Commodities. One of his more prominent roles involved working with the jewelry industry on compliance with regulations on conflict diamonds and gold. In 2011, Cekuta became the Principal Deputy Assistant Secretary of State in the Bureau of Energy Resources. In this capacity, he acted as a point man for the State Department's views on the proposed Keystone XL pipeline. Cekuta was nominated by President Barack Obama on July 8, 2014, to be U.S. ambassador to Azerbaijan. Cekuta testified before the Senate Foreign Relations Committee on September 17, 2014, and was confirmed on December 16. Cekuta presented his credentials to President Ilham Aliyev on February 19, 2015, and served in the position until March 31, 2018. Personal life Cekuta and his wife, Anne, have three children. In addition to English he speaks German, Arabic, and Albanian. References External links Official site 1954 births Living people Ambassadors of the United States to Azerbaijan People from Rochester, New York National War College alumni Walsh School of Foreign Service alumni Thunderbird School of Global Management alumni Energy policy of the United States Petroleum politics United States Foreign Service personnel
Robert Cekuta
Chemistry
577
14,680,805
https://en.wikipedia.org/wiki/Blohm%20%26%20Voss%20P%20178
The Blohm & Voss P 178 was a German jet-powered dive bomber/fighter-bomber of unusual asymmetric form, proposed during World War II. Overview This asymmetrically-designed dive bomber had one Junkers Jumo 004B turbojet located under the wing to the starboard side of the fuselage. The pilot sat in a cockpit in the forward fuselage, with a large fuel tank located to the rear of the cockpit. Beneath the fuel tank, there was a deep recess in which an SC 500 bomb could be carried within the fuselage, or an SC 1000 bomb which would protrude slightly out of the fuselage. Two solid-fuel auxiliary rockets extended from the rear, used for take-off. Two 15 mm (.60 in) MG 151 cannons were located in the nose. Specifications See also List of German aircraft projects, 1939–45 References External links Secret Projects; Blohm und Voss P.178 Asymmetrical aircraft P 178 Abandoned military aircraft projects of Germany 1940s German attack aircraft
Blohm & Voss P 178
Physics
208
1,110,017
https://en.wikipedia.org/wiki/Dump%20truck
A dump truck, known also as a dumping truck, dump trailer, dumper trailer, dump lorry or dumper lorry or a dumper for short, is used for transporting materials (such as dirt, gravel, or demolition waste) for construction as well as coal. A typical dump truck is equipped with an open-box bed, which is hinged at the rear and equipped with hydraulic rams to lift the front, allowing the material in the bed to be deposited ("dumped") on the ground behind the truck at the site of delivery. In the UK, Australia, South Africa and India the term applies to off-road construction plants only, and the road vehicle is known as a tip lorry, tipper lorry (UK, India), tipper truck, tip truck, tip trailer or tipper trailer or simply a tipper (Australia, New Zealand, South Africa). History The dump truck is thought to have been first conceived in the farms of late 19th century western Europe. Thornycroft developed a steam dust-cart in 1896 with a tipper mechanism. The first motorized dump trucks in the United States were developed by small equipment companies such as The Fruehauf Trailer Corporation, Galion Buggy Co. and Lauth-Juergens among many others around 1910. Hydraulic dump beds were introduced by Wood Hoist Co. shortly after. Such companies flourished during World War I due to massive wartime demand. August Fruehauf had obtained military contracts for his semi-trailer, invented in 1914, and later created the partner vehicle, the semi-truck, for use in World War I. After the war, Fruehauf introduced hydraulics in his trailers. They offered hydraulic lift gates, hydraulic winches and a dump trailer for sale in the early 1920s. Fruehauf became the premier supplier of dump trailers, and its famed "bathtub dump" was considered to be the best by heavy haulers and road and mining construction firms. Companies like Galion Buggy Co. continued to grow after the war by manufacturing a number of express bodies and some smaller dump bodies that could be easily installed on either stock or converted (heavy-duty suspension and drivetrain) Model T chassis prior to 1920. Galion and Wood Mfg. Co. built all of the dump bodies offered by Ford on their heavy-duty AA and BB chassis during the 1930s. Galion (now Galion Godwin Truck Body Co.) is the oldest known truck body manufacturer still in operation today. The first known Canadian dump truck was developed in Saint John, New Brunswick, when Robert T. Mawhinney attached a dump box to a flatbed truck in 1920. The lifting device was a winch attached to a cable that fed over a sheave (pulley) mounted on a mast behind the cab. The cable was connected to the lower front end of the wooden dump box, which was attached by a pivot at the back of the truck frame. The operator turned a crank to raise and lower the box. From the 1930s Euclid, International-Harvester and Mack contributed to ongoing development. Mack modified its existing trucks with varying success. In 1934 Euclid became the first manufacturer in the world to successfully produce a dedicated off-highway truck. Types Today, virtually all dump trucks operate by hydraulics and they come in a variety of configurations, each designed to accomplish a specific task in the construction material supply chain. Standard dump truck A standard dump truck is a truck chassis with a dump body mounted to the frame.
The bed is raised by a vertical hydraulic ram mounted under the front of the body (known as a front post hoist configuration), or a horizontal hydraulic ram and lever arrangement between the frame rails (known as an underbody hoist configuration), and the back of the bed is hinged at the back of the truck. The tailgate (sometimes referred to as an end gate) can be configured to swing up on top hinges (and sometimes also to fold down on lower hinges), or it can be configured in the "High Lift Tailgate" format wherein pneumatic or hydraulic rams lift the gate open and up above the dump body. Some bodies, typically for hauling grain, have swing-out doors for entering the box and a metering gate/chute in the center for more controlled dumping. In the United States most standard dump trucks have one front steering axle and one (4x2 4-wheeler) or two (6x4 6-wheeler) rear axles which typically have dual wheels on each side. Tandem rear axles are almost always powered; front steering axles are also sometimes powered (4x4, 6x6). Unpowered axles are sometimes used to support extra weight. Most unpowered rear axles can be raised off the ground to minimize wear when the truck is empty or lightly loaded, and are commonly called "lift axles". European Union heavy trucks often have two steering axles. Dump truck configurations are two, three, and four axles. The four-axle eight wheeler has two steering axles at the front and two powered axles at the rear and is limited to gross weight in most EU countries. The largest of the standard European dump trucks is commonly called a "centipede" and has seven axles. The front axle is the steering axle, the rear two axles are powered, and the remaining four are lift axles. The shorter wheelbase of a standard dump truck often makes it more maneuverable than the higher capacity semi-trailer dump trucks. Semi trailer end dump truck A semi end dump is a tractor-trailer combination wherein the trailer itself contains the hydraulic hoist. In the US a typical semi end dump has a 3-axle tractor pulling a 2-axle trailer with dual tires; in the EU trailers often have 3 axles and single tires. The key advantage of a semi end dump is a large payload. A key disadvantage is that they are very unstable when raised in the dumping position, limiting their use in many applications where the dumping location is uneven or off level. Some end dumps make use of an articulated arm (known as a stabilizer) below the box, between the chassis rails, to stabilize the load in the raised position. Frame and Frameless end dump truck Depending on the structure, semi-trailer end dump trucks can also be divided into frame trailers and frameless trailers. The main difference between them is structural. The frame dump trailer has a large beam that runs along the bottom of the trailer to support it. The frameless dump trailer has no frame under the trailer but has ribs that go around the body for support, and the top rail of the trailer serves as a suspension bridge for support. The difference in structure also brings with it a difference in weight: frame dump trailers are heavier. For the same length, a frame dump trailer weighs around 5 tons more than a frameless dump trailer. Transfer dump truck A transfer dump truck is a standard dump truck pulling a separate trailer with a movable cargo container, which can also be loaded with construction aggregate, gravel, sand, asphalt, clinkers, snow, wood chips, triple mix, etc.
The second aggregate container on the trailer ("B" box) is powered by an electric motor, a pneumatic motor or a hydraulic line. It rolls on small wheels, riding on rails from the trailer's frame into the empty main dump container ("A" box). This maximizes payload capacity without sacrificing the maneuverability of the standard dump truck. Transfer dump trucks are typically seen in the western United States due to the peculiar weight restrictions on highways there. Another configuration is called a triple transfer train, consisting of a "B" and "C" box. These are common on Nevada and Utah highways, but not in California. Depending on the axle arrangement, a triple transfer can haul up to with a special permit in certain American states. A triple transfer costs a contractor about $105 an hour, while an A/B configuration costs about $85 per hour. Transfer dump trucks typically haul between of aggregate per load; each truck is capable of 3–5 loads per day, generally speaking. Truck and pup A truck and pup is very similar to a transfer dump. It consists of a standard dump truck pulling a dump trailer. The pup trailer, unlike the transfer, has its own hydraulic ram and is capable of self-unloading. Superdump truck A super dump is a straight dump truck equipped with a trailing axle, a liftable, load-bearing axle rated as high as . Trailing behind the rear tandem, the trailing axle stretches the outer "bridge" measurement—the distance between the first and last axles—to the maximum overall length allowed. This increases the gross weight allowed under the federal bridge formula, which sets standards for truck size and weight. Depending on the vehicle length and axle configuration, Superdumps can be rated as high as GVW and carry of payload or more. When the truck is empty or ready to offload, the trailing axle toggles up off the road surface on two hydraulic arms to clear the rear of the vehicle. Truck owners call their trailing axle-equipped trucks Superdumps because they far exceed the payload, productivity, and return on investment of a conventional dump truck. The Superdump and trailing axle concept were developed by Strong Industries of Houston, Texas. Semi trailer bottom dump truck A semi bottom dump, bottom hopper, or belly dump is a (commonly) 3-axle tractor pulling a 2-axle trailer with a clam shell type dump gate in the belly of the trailer. The key advantage of a semi bottom dump is its ability to lay material in a windrow, a linear heap. In addition, a semi bottom dump is maneuverable in reverse, unlike the double and triple trailer configurations described below. These trailers may be either of the windrow type or of the cross spread type, with the gate opening front to rear instead of left and right. The cross spread type gate will spread cereal grains fairly evenly across the width of the trailer. By comparison, the windrow-type gate leaves a pile in the middle. The cross spread type gate, however, tends to jam and may not work very well with coarse materials. Double and triple trailer bottom dump truck Double and triple bottom dumps consist of a 2-axle tractor pulling one single-axle semi-trailer and an additional full trailer (or two full trailers in the case of triples). These dump trucks allow the driver to lay material in windrows without leaving the cab or stopping the truck. The main disadvantage is the difficulty in backing double and triple units.
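The federal bridge formula mentioned in the Superdump section above can be made concrete with a short calculation. The sketch below is a minimal Python illustration of the published FHWA formula W = 500(LN/(N−1) + 12N + 36), where W is the maximum allowable gross weight in pounds on a group of axles, L is the spacing in feet between the outermost axles of the group, and N is the number of axles in the group; the axle counts and spacings used here are hypothetical examples, not measurements of any particular truck.

```python
def bridge_formula_limit(spacing_ft: float, num_axles: int) -> int:
    """FHWA bridge formula: maximum gross weight (lb) on a group of axles.

    W = 500 * (L*N/(N-1) + 12*N + 36), rounded to the nearest 500 lb,
    where L is the outer-axle spacing in feet and N is the axle count.
    """
    if num_axles < 2:
        raise ValueError("The formula applies to groups of two or more axles.")
    w = 500 * (spacing_ft * num_axles / (num_axles - 1) + 12 * num_axles + 36)
    return int(round(w / 500) * 500)

# Hypothetical comparison: a conventional 5-axle dump truck versus a
# 7-axle Superdump whose trailing axle stretches the outer "bridge" span.
print(bridge_formula_limit(20, 5))   # shorter 5-axle group
print(bridge_formula_limit(34, 7))   # longer 7-axle group -> higher limit
```

Stretching the outer axle span and adding axles both raise the allowable gross weight, which is exactly the effect the trailing axle of a Superdump exploits.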
The specific type of dump truck used in any specific country is likely to be closely keyed to the weight and axle limitations of that jurisdiction. Rock, dirt, and other types of materials commonly hauled in trucks of this type are quite heavy, and almost any style of truck can be easily overloaded. Because of that, this type of truck is frequently configured to take advantage of local weight limitations to maximize the cargo. For example, within the United States, the maximum weight limit is throughout the country, except for specific bridges with lower limits. Individual states, in some instances, are allowed to authorize trucks up to . Most states that do so require that the trucks be very long, to spread the weight over more distance. It is in this context that double and triple bottoms are found within the United States. Bumper Pull Dump Trailer Bumper pull personal and commercial dump trailers come in a variety of sizes, from smaller 6x10 7,000 GVWR models to larger 7x16 high-side 14,000 GVWR models. Dump trailers come with a range of options and features such as tarp kits, high side options, dump/spread/swing gates, remote control, scissor, telescopic, dual or single cylinder lifts, and metal locking toolboxes. They are suited to a variety of applications, including roofing, rock and mulch delivery, general contracting, skid steer grading, trash out, and recycling. Side dump truck A side dump truck (SDT) consists of a 3-axle tractor pulling a 2-axle semi-trailer. It has hydraulic rams that tilt the dump body onto its side, spilling the material to either the left or right side of the trailer. The key advantages of the side dump are that it allows rapid unloading and can carry more weight in the western United States. In addition, it is almost immune to upset (tipping over) while dumping, unlike the semi end dumps which are very prone to tipping over. It is, however, highly likely that a side dump trailer will tip over if dumping is stopped prematurely. Also, when dumping loose materials or cobble sized stone, the side dump can become stuck if the pile becomes wide enough to cover too much of the trailer's wheels. Trailers that dump at the appropriate angle (50° for example) avoid the problem of the dumped load fouling the path of the trailer wheels by dumping their loads further to the side of the truck, in some cases leaving sufficient clearance to walk between the dumped load and the trailer. Winter service vehicles Many winter service vehicles are based on dump trucks, to allow the placement of ballast to weigh the truck down or to hold sodium or calcium chloride salts for spreading on snow and ice-covered surfaces. Plowing is severe service and needs heavy-duty trucks. Roll-off trucks A roll-off has a hoist and subframe but no body; it carries removable containers. The container is loaded on the ground, then pulled onto the back of the truck with a winch and cable. The truck goes to the dumpsite and, after the container has been dumped, the empty container is taken and placed to be loaded or stored. The hoist is raised and the container slides down the subframe so the rear is on the ground. The container has rollers on the rear and can be moved forward or back until the front of it is lowered onto the ground. The containers are usually open-topped boxes used for rubble and building debris, but rubbish compactor containers are also carried.
A newer hook-lift system ("roller container" in the UK) does the same job, but lifts, lowers, and dumps the container with a boom arrangement instead of a cable and hoist. Off-highway dump trucks Off-highway dump trucks are heavy construction equipment and share little resemblance to highway dump trucks. Bigger off-highway dump trucks are used strictly off-road for mining and heavy dirt hauling jobs. There are two primary forms: rigid frame and articulating frame. The term "dump" truck is not generally used by the mining industry, or by the manufacturers that build these machines. The more appropriate U.S. term for this strictly off-road vehicle is "haul truck" and the equivalent European term is "dumper". Haul truck Haul trucks are used in large surface mines and quarries. They have a rigid frame and conventional steering with drive at the rear wheel. As of late 2013, the largest ever production haul truck is the 450 metric ton BelAZ 75710, followed by the Liebherr T 282B, the Bucyrus MT6300AC and the Caterpillar 797F, which each have payload capacities of up to . The previous record holder was the Canadian-built Terex 33-19 "Titan", which held the record for over 25 years. Most large-size haul trucks employ Diesel-electric powertrains, using the Diesel engine to drive an AC alternator or DC generator that sends electric power to electric motors at each rear wheel. The Caterpillar 797 is unique for its size, as it employs a Diesel engine to power a mechanical powertrain, typical of most road-going vehicles and intermediary size haul trucks. Other major manufacturers of haul trucks include SANY, XCMG, Hitachi, Komatsu, DAC, Terex, and BelAZ. Articulated hauler An articulated dumper is an all-wheel-drive, off-road dump truck. It has a hinge between the cab and the dump box but is distinct from a semi-trailer truck in that the power unit is a permanent fixture, not a separable vehicle. Steering is accomplished via hydraulic cylinders that pivot the entire tractor in relation to the trailer, rather than rack and pinion steering on the front axle as in a conventional dump truck. By this way of steering, the trailer's wheels follow the same path as the front wheels. Together with all-wheel drive and a low center of gravity, it is highly adaptable to rough terrain. Major manufacturers include Volvo CE, Terex, John Deere, and Caterpillar. U-shaped dump truck U-shaped dump trucks, also known as tub-body trucks, are used to transport construction waste. The body is bent directly from high-strength, highly wear-resistant steel plate, giving it impact resistance, resistance to alternating stress, corrosion resistance and so on. 1. Cleaner unloading: there is no dead angle at the corners of a U-shaped cargo box, so material is less likely to stick to the box and unloading is cleaner. 2. Lightweight: the U-shaped cargo box reduces its own weight through structural optimization. The most common U-shaped dumps now use high-strength plates; on the premise of ensuring the strength of the body, the plate thickness is reduced by about 20% and the vehicle's own weight by about 1 ton, which effectively improves the utilization factor of the load mass. 3. Strong carrying capacity: high-strength steel plate has a high yield strength and better impact and fatigue resistance. For users transporting ore, it can reduce the damage done by the ore to the container. 4.
Low center of gravity: the U-shaped structure has a lower center of gravity, which makes the ride more stable, especially when cornering, and avoids spilling cargo. 5. Save tires: the U-shaped cargo box keeps the cargo centered, so the tires on both sides are more evenly stressed, which helps extend the life of the tires. Dangers Collisions Dump trucks are normally built for some amount of off-road or construction site driving; as the driver is protected by the chassis and the height of the driver's seat, bumpers are either placed high or omitted for added ground clearance. The disadvantage is that in a collision with a standard car, the entire motor section or luggage compartment goes under the truck. Thus, the passengers in the car could be more severely injured than would be common in a collision with another car. Several countries have made rules that new trucks should have bumpers approximately above ground in order to protect other drivers. There are also rules about how far the load or construction of the truck can extend beyond the rear bumper, to prevent cars that rear-end the truck from going under it. Tipping Another safety consideration is the leveling of the truck before unloading. If the truck is not parked on relatively horizontal ground, the sudden change of weight and balance due to lifting of the body and dumping of the material can cause the truck to slide, or even to tip over. The live bottom trailer is an approach to eliminate this danger. Back-up accidents Because of their size and the difficulty of maintaining visual contact with on-foot workers, dump trucks can be a threat, especially when backing up. Mirrors and back-up alarms provide some level of protection, and having a spotter working with the driver also decreases back-up injuries and fatalities. Manufacturers Ashok Leyland Asia MotorWorks Astra Veicoli Industriali BelAZ BEML Case CE Caterpillar Inc. DAC Daewoo Dart (commercial vehicle) Eicher Motors Euclid Trucks FAP HEPCO Hitachi Construction Machinery Hitachi Construction Machinery (Europe) Iveco John Deere Kamaz Kenworth Kioleides Komatsu KrAZ Leader Trucks Liebherr Group Mack Trucks Mahindra Trucks & Buses Ltd. MAN SE Mercedes-Benz Navistar International New Holland Peterbilt SANY Scania AB ST Kinetics Tata Tatra (company) Terex Corporation Volvo Construction Equipment Volvo Trucks XCMG See also Cement mixer truck Road roller Combine harvester Tractor Crane construction (truck) Bulldozer Forklift Dumper Garbage truck Live bottom trailer Rear-eject haul truck bodies Notes References Canadian inventions Engineering vehicles Trailers
Dump truck
Engineering
4,188
49,190,541
https://en.wikipedia.org/wiki/Cold%20blob
The cold blob in the North Atlantic (also called the North Atlantic warming hole) describes a cold temperature anomaly of ocean surface waters, affecting the Atlantic Meridional Overturning Circulation (AMOC), which is part of the thermohaline circulation, and possibly related to global warming-induced melting of the Greenland ice sheet. General AMOC is driven by ocean temperature and salinity differences. The main proposed mechanism for the cold ocean surface temperature anomaly is that freshwater decreases ocean water salinity and, through this process, prevents colder waters from sinking. The observed freshwater increase probably originates from Greenland ice melt. Research 2015 and earlier Climate scientists Michael Mann of Penn State and Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research suggested that the cold pattern observed during years of record warmth is a sign that the Atlantic Ocean's Meridional overturning circulation (AMOC) may be weakening. They published their findings and concluded that the AMOC circulation shows an exceptional slowdown over the last century, and that Greenland melt is a possible contributor. Tom Delworth of NOAA suggested that natural variability, which includes different modes, namely the North Atlantic Oscillation and the Atlantic Multidecadal Oscillation acting through wind-driven ocean temperatures, is also a factor. A 2014 study by Jon Robson et al. from the University of Reading concluded about the anomaly: "...suggest that a substantial change in the AMOC is unfolding now." Another study by Didier Swingedouw concluded that the slowdown of AMOC in the 1970s may have been unprecedented over the last millennium. 2016 A study published in 2016 by researchers from the University of South Florida, Canada and the Netherlands used GRACE satellite data to estimate freshwater flux from Greenland. They concluded that freshwater runoff is accelerating and could eventually cause a disruption of AMOC in the future, which would affect Europe and North America. Another study published in 2016 found further evidence for a considerable impact from sea level rise on the U.S. East Coast. The study confirms earlier research findings which identified the region as a hotspot for rising seas, with the potential for sea level rise rates 3–4 times higher than the global average. The researchers attribute the possible increase to an ocean circulation mechanism called deep water formation, which is reduced due to AMOC slowdown, leading to more pockets of warm water below the surface. Additionally, the study noted: "Our results suggest that higher carbon emission rates also contribute to increased [sea level rise] in this region compared to the global average". Background In 2005, British researchers noticed that the net flow of the northern Gulf Stream had decreased by about 30% since 1957. Coincidentally, scientists at Woods Hole had been measuring the freshening of the North Atlantic as Earth becomes warmer. Their findings suggested that precipitation increases in the high northern latitudes, and polar ice melts as a consequence. By flooding the northern seas with excessive fresh water, global warming could, in theory, divert the Gulf Stream waters that usually flow northward, past the British Isles and Norway, and cause them to instead circulate toward the equator. Were this to happen, Europe's climate would be seriously impacted.
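The "cold temperature anomaly" at the heart of this topic is, operationally, the departure of observed sea-surface temperature from a climatological baseline. The following Python sketch uses synthetic numbers (not real SST observations) to show the standard monthly-climatology calculation one might apply to a North Atlantic grid cell; the cooling imposed on the final years mimics a persistent cold anomaly.

```python
import numpy as np

# Hypothetical monthly sea-surface temperatures (degC) for one grid cell,
# 30 years x 12 months; a real analysis would use observational datasets.
rng = np.random.default_rng(0)
years, months = 30, 12
annual_cycle = 12 + 4 * np.sin(2 * np.pi * (np.arange(months) - 3) / 12)
sst = annual_cycle + 0.3 * rng.standard_normal((years, months))
sst[-5:, :] -= 0.8  # impose a persistent cooling in the last five years

# Anomaly = observation minus the monthly climatology (mean annual cycle).
climatology = sst.mean(axis=0)
anomalies = sst - climatology

# Annual-mean anomaly series; the final years stand out as a "cold blob".
annual_anomaly = anomalies.mean(axis=1)
print(np.round(annual_anomaly[-8:], 2))
```

Real analyses would use gridded observational datasets and longer baselines, but the anomaly arithmetic is the same.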
Don Chambers from the USF College of Marine Science noted: "The major effect of a slowing AMOC is expected to be cooler winters and summers around the North Atlantic, and small regional increases in sea level on the North American coast." James Hansen and Makiko Sato stated, "AMOC slowdown that causes cooling ~1°C and perhaps affects weather patterns is very different from an AMOC shutdown that cools the North Atlantic several degrees Celsius; the latter would have dramatic effects on storms and be irreversible on the century time scale." A downturn of the Atlantic meridional overturning circulation has been tied to extreme regional sea level rise. Measurements Since 2004, the RAPID program has monitored the ocean circulation. See also Abrupt climate change The Blob (Pacific Ocean) Deglaciation Physical impacts of climate change References External links Extended lecture by Stefan Rahmstorf about AMOC slowdown (May 27, 2016) A Nasty Surprise in the Greenhouse (video about the shutdown of the thermohaline circulation, 2015) Blizzard Jonas and the slowdown of the Gulf Stream System (RealClimate January 24, 2016) Atlantic Ocean Effects of climate change Physical oceanography Chemical oceanography Anomalous weather
Cold blob
Physics,Chemistry
904
4,181,953
https://en.wikipedia.org/wiki/Railway%20Preservation%20Society%20of%20Ireland
The Railway Preservation Society of Ireland (RPSI) is a railway preservation group founded in 1964 and operating throughout Ireland. Mainline steam train railtours are operated from Dublin, while short train rides are operated up and down the platform at Whitehead, County Antrim, and as of 2023, the group sometimes operates mainline trains in Northern Ireland using hired-in NIR diesel trains from Belfast. The RPSI has bases in Dublin and Whitehead, with the latter having a museum. The society owns heritage wagons, carriages, steam engines, diesel locomotives and metal-bodied carriages suitable for mainline use. Bases The society has developed several bases over time, with Whitehead joined by Sallins, then Mullingar, and also Inchicore and Connolly in Dublin. As of 2019, three locations are in operation: Whitehead, Inchicore and Connolly. Current operations Whitehead site and museum Whitehead, near Belfast, has a long history as an excursion station, and the RPSI developed a working steam and engineering depot there. This was added to by the development of a museum. The Whitehead Railway Museum opened without ceremony in early 2017, after a 5-year project to expand the site from a depot to include a rebuilt Whitehead Excursion Station and the museum. The total cost was £3.1m from various funding sources. The museum received 10,000 visitors in 2017, its first year, and 15,000 in 2018. The museum hosts five galleries, and visitors can see various heritage steam and diesel locomotives and observe work on railway carriage restoration. Guides from the society are present. Inchicore, Dublin The RPSI has arrangements for storage of stock at Inchicore Works, with maintenance also being carried out there. Connolly shed In 2015 the RPSI gained an arrangement with Iarnród Éireann to lease the locomotive shed just to the north of Connolly station for the maintenance and storage of mainline diesel locomotives. Historic operations Mullingar The RPSI moved into the loco shed at Mullingar in 1974 and based steam locos 184 and 186 there. Carriages were also restored there. The base has since become derelict, with funding instead being channeled to Whitehead, including a board decision not to spend money on the green carriages based at Mullingar. Generating Van 3173 was the last vehicle to be overhauled. Sallins Prior to Mullingar, Sallins Goods Shed was used as a base. Whitehead and Belfast The Society used to operate mainline steam trains from Whitehead and Belfast. Since 2023, these have ceased, as Northern Ireland Railways is no longer training staff as steam drivers. This leaves Whitehead focused on short steam train rides up and down the platform there. Rolling stock Steam locomotives The Society possesses 9 steam locomotives (plus one more operated by them but owned by the Ulster Folk and Transport Museum); typically only a small number will be operational at any time: Passenger tender locomotives The RPSI has three Great Northern Railway of Ireland 4-4-0s within its fleet. No. 131, a Q class, was built in 1901. The others are S class No. 171 Slieve Gullion and V class No. 85 Merlin, although the latter is owned by the Ulster Folk & Transport Museum and is on loan. These locomotives are suitable for longer distance main line work, but are speed restricted if they need to run tender-first in the event they cannot be turned. Mixed large tank locomotive The RPSI's Northern Counties Committee (NCC) 2-6-4T, WT class No. 4, holds significant records.
It worked the last steam passenger train on Northern Ireland Railways, and with No. 53 operated the last stone goods train on 22 October 1970. Acquired by the RPSI in June 1971, it then went on to work over most of the remaining Irish railway network. They also own an SLNCR Lough class locomotive. Goods tender locomotives The Society possesses three goods tender locomotives, all of which are suitable for slower speed passenger workings. Two of these are from the 101 (J15) class, of which over 100 were built between 1866 and 1903 and which lasted until the end of the steam era on CIÉ in 1963. The RPSI possesses two examples of these simple, reliable and robust engines: No. 184 with a saturated boiler and round-shaped firebox, and No. 186 with a superheated boiler and squarer Belpaire firebox. No. 461, a DSER 15 and 16 Class heavy goods locomotive, is the only Dublin and South Eastern Railway example that has been preserved. Shunting locomotives Shunting locomotives are useful and economical for shunting and short passenger work within Whitehead yard. These include No. 3 'R.H. Smyth', affectionately known as Harvey, which has also been used to pull ballast hoppers for NIR. There is also No. 3BG "Guinness", a Hudswell Clarke engine presented by Guinness to the Society in 1965. Diesel and other locomotives The RPSI has indicated it has a strategy to create a mainline heritage diesel fleet. It has acquired four 65t General Motors Bo-Bos: CIÉ 121 Class number 134 and CIÉ 141 Class numbers 141, 142 and 175. The RPSI used to own two NIR 101 Class Hunslet diesels, numbered 101 and 102. They scrapped 101, and 102 was transferred to the Ulster Folk & Transport Museum. The RPSI also has some small diesel shunters, including a Ruston from Carlow sugar factory, a Planet diesel from Irish Shell and a Unilok diesel from the UTA. Carriages and other stock In the 2000s, with more stringent rail regulations, the RPSI was forced to acquire rakes of metal-bodied carriages for mainline railtours. Freight wagons and other stock Whitehead has a collection of historic wagons, including a GNR brakevan named Ivan, restored by their award-winning youth team, a Guinness van, an NCC handcrane, a GSWR ballast hopper and an oil tanker from Irish Shell. Operations Railtours The main work of the society is in securing and maintaining steam rolling stock, with a view to running rail tours. Mulligan, in "One Hundred and Fifty Years of Irish Railways", noted that the RPSI did "sterling work" in the area of organising such rail tours around the island, following the end of steam as a regular means of service provision on UTA and CIÉ lines. Films The RPSI has been able to assist in the provision of suitable rolling stock for train-related scenes in films made on the island of Ireland. The shooting of The First Great Train Robbery in 1978 was an early significant involvement in film making by the RPSI. Publication Five Foot Three is the RPSI's membership magazine. It is published annually. Incidents On 7 November 2014, an RPSI train chartered by Web Summit blocked a level crossing in Midleton for over 25 minutes. The operation was referred to the Commission for Railway Regulation. The resulting investigation found that the Society had knowingly run a train that was too long for the station's platform and that it would block a level crossing, yet senior IR management overrode their internal safety department by allowing the train to run.
On 7 July 2019, a serious incident occurred at Gorey when No. 85 ran out of water and the fusible plug melted in the firebox. The Civil Defence had to cool down the boiler with hoses while the crew were evacuated from the cab and a rescue diesel was summoned from Dublin. See also List of heritage railways in Northern Ireland List of heritage railways in the Republic of Ireland Irish Steam Preservation Society Irish Traction Group References Footnotes Notes Sources Primary sources External links RPSI website Engineering preservation societies Railway societies All-Ireland organisations Museums in County Antrim Railway museums in Northern Ireland Railway companies of Ireland Railway companies of the Republic of Ireland 1964 establishments in Ireland
Railway Preservation Society of Ireland
Engineering
1,584
55,454,492
https://en.wikipedia.org/wiki/EPrivacy%20Regulation
The ePrivacy Regulation (ePR) is a proposal for the regulation of various privacy-related topics, mostly in relation to electronic communications within the European Union. Its full name is "Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications)." It would repeal the Privacy and Electronic Communications Directive 2002 (ePrivacy Directive) and would be lex specialis to the General Data Protection Regulation. It would particularise and complement the latter in respect of privacy-related topics. Key fields of the proposed regulation are the confidentiality of communications, privacy controls through electronic consent and browsers, and cookies. The history of the regulation goes back to January 2017, when the European Commission proposed the ePrivacy Regulation. The intention was that it would sit alongside the EU GDPR (General Data Protection Regulation) when it was introduced on 25 May 2018. The scope is still under discussion. According to some proposals, it would apply to any business that processes data in relation to any form of online communication service, uses online tracking technologies, or engages in electronic direct marketing. The proposed penalties for noncompliance would be up to €20 million or, in the case of an undertaking, up to 4% of the total worldwide annual turnover, whichever is higher. The ePrivacy Regulation was originally intended to come into effect on 25 May 2018, together with the GDPR, but has still not been adopted. Difference between Regulation and Directive The (new) ePrivacy Regulation would repeal the (current) ePrivacy Directive. In contrast to an EU Directive, an EU Regulation is a legal act of the European Union that becomes immediately effective as law in all member states simultaneously. The current ePrivacy Directive is a legal act of the European Union that requires member states to achieve a particular result without dictating the means of achieving that result. It has therefore been implemented into national laws and regulations. If the proposed ePrivacy Regulation became effective, these laws would be superseded and would (for reasons of clarity) likely be repealed. The ePrivacy Regulation would be self-executing and would not require many implementing measures. Key points of Commission's proposal According to the EU Commission, the proposal includes the following key changes: New players: Privacy rules will also apply to new players providing electronic communications services such as WhatsApp, Facebook Messenger, and Skype. That will ensure that these popular services guarantee the same level of confidentiality of communications as traditional telecoms operators. Stronger rules: All people and businesses in the EU will enjoy the same level of protection of their electronic communications through this directly applicable regulation. Businesses will also benefit from one single set of rules across the EU. Communications content and metadata: Privacy is guaranteed for communications metadata, such as the time and location of a call. Metadata have a high privacy component and must be anonymised or deleted if users did not give their consent, unless the data is needed for billing.
New business opportunities: Once consent is given for communications data (content and/or metadata) to be processed, traditional telecoms operators will have more opportunities to provide additional services and to develop their businesses. For example, they could produce heat maps indicating the presence of individuals, which could help public authorities and transport companies when developing new infrastructure projects. Simpler rules on cookies: The cookie provision, which has resulted in an overload of consent requests for internet users, will be streamlined. The new rule will be more user-friendly, as browser settings will provide an easy way to accept or refuse tracking cookies and other identifiers. The proposal also clarifies that no consent is needed for non-privacy-intrusive cookies that improve the internet experience (such as remembering shopping cart history) or cookies used by a website to count the number of visitors. Protection against spam: The proposal bans unsolicited electronic communications by email, SMS, and automated calling machines. Depending on national law, people will either be protected by default or be able to use a do-not-call list to avoid receiving marketing phone calls. Marketing callers will need to display their phone number or use a special prefix that indicates a marketing call. More effective enforcement: The enforcement of the confidentiality rules in the regulation will be the responsibility of data protection authorities, already in charge of the rules under the General Data Protection Regulation. Reception In February 2021, the German Federal Commissioner for Data Protection and Freedom of Information saw multiple red lines being crossed. Data retention had again become part of the proposal, despite the fact that it had been ruled unlawful by many courts. The regulations concerning the Internet constituted a step back in that cookie walls would again be allowed. Important consumer rights such as the "right to object" and "data protection impact assessment" would be voided. Personal data could be processed for purposes different from the original ones without the person's consent. The "pay-or-allow-to-be-tracked" question to access a website would henceforth be permitted. The directive of 2002 required in its Article 15(1) that data might be retained for an important public interest. The proposal, now in Article 17a, no longer has such a reference to the public interest. In March 2021, France was reported to be leading an effort to modify the ePrivacy initiative to exempt national security agencies from some provisions. On July 6, 2021, the European Parliament approved a derogation to the ePrivacy regulation that enables providers of electronic communication services to scan and report private online messages containing material depicting child sex abuse, and allows companies to apply approved technologies to detect grooming techniques. Three-way negotiations are underway between the EU Commission, the Parliament and the Council of the European Union to reach agreement on the final text of the regulation. It is expected to be finalized and come into effect in 2024. References External links The proposed Regulation on Privacy and Electronic Communications on europa.eu Procedure File: 2017/0003(COD) | Legislative Observatory | European Parliament Draft European Union laws Email Information privacy Information technology organizations based in Europe Privacy legislation Spamming Open digital policy proposals Regulation of technologies
EPrivacy Regulation
Engineering
1,268
1,894,582
https://en.wikipedia.org/wiki/Dielectric%20spectroscopy
Dielectric spectroscopy (which falls into a subcategory of impedance spectroscopy) measures the dielectric properties of a medium as a function of frequency. It is based on the interaction of an external field with the electric dipole moment of the sample, often expressed by permittivity. It is also an experimental method of characterizing electrochemical systems. This technique measures the impedance of a system over a range of frequencies, and therefore the frequency response of the system, including the energy storage and dissipation properties, is revealed. Often, data obtained by electrochemical impedance spectroscopy (EIS) is expressed graphically in a Bode plot or a Nyquist plot. Impedance is the opposition to the flow of alternating current (AC) in a complex system. A passive complex electrical system comprises both energy dissipater (resistor) and energy storage (capacitor) elements. If the system is purely resistive, then the opposition to AC or direct current (DC) is simply resistance. Materials or systems exhibiting multiple phases (such as composites or heterogeneous materials) commonly show a universal dielectric response, whereby dielectric spectroscopy reveals a power law relationship between the impedance (or the inverse term, admittance) and the frequency, ω, of the applied AC field. Almost any physico-chemical system, such as electrochemical cells, mass-beam oscillators, and even biological tissue, possesses energy storage and dissipation properties; EIS examines them. This technique has grown tremendously in stature over the past few years and is now widely employed in a variety of scientific fields such as fuel cell testing, biomolecular interaction, and microstructural characterization. Often, EIS reveals information about the reaction mechanism of an electrochemical process: different reaction steps will dominate at certain frequencies, and the frequency response shown by EIS can help identify the rate limiting step. Dielectric mechanisms There are a number of different dielectric mechanisms, connected to the way a studied medium reacts to the applied field. Each dielectric mechanism is centered around its characteristic frequency, which is the reciprocal of the characteristic time of the process. In general, dielectric mechanisms can be divided into relaxation and resonance processes. The most common, starting from high frequencies, are: Electronic polarization This resonant process occurs in a neutral atom when the electric field displaces the electron density relative to the nucleus it surrounds. This displacement occurs due to the equilibrium between restoration and electric forces. Electronic polarization may be understood by assuming an atom as a point nucleus surrounded by a spherical electron cloud of uniform charge density. Atomic polarization Atomic polarization is observed when the nucleus of the atom reorients in response to the electric field. This is a resonant process. Atomic polarization is intrinsic to the nature of the atom and is a consequence of an applied field. Electronic polarization refers to the electron density and is a consequence of an applied field. Atomic polarization is usually small compared to electronic polarization. Dipole relaxation This originates from permanent and induced dipoles aligning to an electric field.
Their orientation polarisation is disturbed by thermal noise (which mis-aligns the dipole vectors from the direction of the field), and the time needed for dipoles to relax is determined by the local viscosity. These two facts make dipole relaxation heavily dependent on temperature, pressure, and chemical surrounding. Ionic relaxation Ionic relaxation comprises ionic conductivity and interfacial and space charge relaxation. Ionic conductivity predominates at low frequencies and introduces only losses to the system. Interfacial relaxation occurs when charge carriers are trapped at interfaces of heterogeneous systems. A related effect is Maxwell-Wagner-Sillars polarization, where charge carriers blocked at inner dielectric boundary layers (on the mesoscopic scale) or external electrodes (on a macroscopic scale) lead to a separation of charges. The charges may be separated by a considerable distance and therefore make contributions to the dielectric loss that are orders of magnitude larger than the response due to molecular fluctuations. Dielectric relaxation Dielectric relaxation as a whole is the result of the movement of dipoles (dipole relaxation) and electric charges (ionic relaxation) due to an applied alternating field, and is usually observed in the frequency range $10^2$–$10^{10}$ Hz. Relaxation mechanisms are relatively slow compared to resonant electronic transitions or molecular vibrations, which usually have frequencies above $10^{12}$ Hz. Principles Steady-state For a redox reaction R ⇌ O + e, without mass-transfer limitation, the relationship between the current density $j$ and the electrode overpotential $\eta$ is given by the Butler–Volmer equation: $j = j_0 \left( \exp\!\left(\alpha_{\rm o} \frac{F\eta}{RT}\right) - \exp\!\left(-\alpha_{\rm r} \frac{F\eta}{RT}\right) \right)$, where $j_0$ is the exchange current density and $\alpha_{\rm o}$ and $\alpha_{\rm r}$ are the symmetry factors. The curve of $j$ vs. $\eta$ is not a straight line, therefore a redox reaction is not a linear system. Dynamic behavior Faradaic impedance In an electrochemical cell the faradaic impedance of an electrolyte-electrode interface is the joint electrical resistance and capacitance at that interface. Let us suppose that the Butler–Volmer relationship correctly describes the dynamic behavior of the redox reaction. The dynamic behavior of the redox reaction is then characterized by the so-called charge transfer resistance, defined by $R_{\rm ct} = \left( \frac{\partial j}{\partial \eta} \right)^{-1}$. The value of the charge transfer resistance changes with the overpotential. For this simplest example the faradaic impedance is reduced to a resistance. It is worthwhile to notice that $R_{\rm ct} \to \frac{RT}{(\alpha_{\rm o} + \alpha_{\rm r}) F j_0}$ for $\eta \to 0$. Double-layer capacitance An electrode electrolyte interface behaves like a capacitance, called the electrochemical double-layer capacitance $C_{\rm dl}$. The equivalent circuit for the redox reaction includes the double-layer capacitance as well as the charge transfer resistance $R_{\rm ct}$. Another analog circuit commonly used to model the electrochemical double-layer is called a constant phase element. The electrical impedance of this circuit is easily obtained remembering the impedance of a capacitance, which is given by $Z_C(\omega) = \frac{1}{i\omega C}$, where $\omega$ is the angular frequency of a sinusoidal signal (rad/s) and $i^2 = -1$. It is obtained: $Z(\omega) = \frac{R_{\rm ct}}{1 + i\omega R_{\rm ct} C_{\rm dl}}$. The Nyquist diagram of the impedance of this circuit is a semicircle with a diameter $R_{\rm ct}$ and an angular frequency at the apex equal to $\omega_{\rm c} = 1/(R_{\rm ct} C_{\rm dl})$. Other representations, such as Bode plots or Black plots, can be used. Ohmic resistance The ohmic resistance $R_{\Omega}$ appears in series with the electrode impedance of the reaction, and the Nyquist diagram is translated to the right.
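To make the circuit relations above concrete, here is a minimal Python sketch that evaluates the impedance of the charge transfer resistance in parallel with the double-layer capacitance, plus a series ohmic resistance — the circuit whose Nyquist diagram is the semicircle just described. The parameter values are illustrative assumptions, not measured data.

```python
import numpy as np

# Illustrative parameters (assumed, not measured):
R_ohm = 10.0    # ohmic (solution) resistance, ohm
R_ct = 100.0    # charge transfer resistance, ohm
C_dl = 1e-5     # double-layer capacitance, F

omega = np.logspace(0, 6, 200)               # angular frequency, rad/s
Z = R_ohm + R_ct / (1 + 1j * omega * R_ct * C_dl)

# The Nyquist plot (-Im Z vs Re Z) is a semicircle of diameter R_ct,
# translated right by R_ohm; the apex lies at omega = 1/(R_ct * C_dl).
apex = 1 / (R_ct * C_dl)
i = np.argmin(np.abs(omega - apex))
print(f"apex frequency ~ {apex:.0f} rad/s, Z there = {Z[i]:.1f}")
```

At the apex frequency the imaginary part reaches its extremum, with −Im(Z) ≈ R_ct/2 and Re(Z) ≈ R_Ω + R_ct/2, so reading the semicircle off a measured Nyquist plot yields R_Ω and R_ct and, from the apex frequency, C_dl.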
Universal dielectric response Under AC conditions with varying frequency ω, heterogeneous systems and composite materials exhibit a universal dielectric response, in which the overall admittance exhibits a region of power-law scaling with frequency, $Y \propto \omega^{\alpha}$. Measurement of the impedance parameters Plotting the Nyquist diagram with a potentiostat and an impedance analyzer, most often included in modern potentiostats, allows the user to determine charge transfer resistance, double-layer capacitance and ohmic resistance. The exchange current density $j_0$ can be easily determined by measuring the impedance of a redox reaction for $\eta = 0$. Nyquist diagrams are made of several arcs for reactions more complex than redox reactions and with mass-transfer limitations. Applications Electrochemical impedance spectroscopy is used in a wide range of applications. In the paint and coatings industry, it is a useful tool to investigate the quality of coatings and to detect the presence of corrosion. It is used in many biosensor systems as a label-free technique to measure bacterial concentration and to detect dangerous pathogens such as Escherichia coli O157:H7 and Salmonella, and yeast cells. Electrochemical impedance spectroscopy is also used to analyze and characterize different food products. Some examples are the assessment of food–package interactions, the analysis of milk composition, the characterization and the determination of the freezing end-point of ice-cream mixes, the measurement of meat ageing, the investigation of ripeness and quality in fruits, and the determination of free acidity in olive oil. In the field of human health monitoring it is better known as bioelectrical impedance analysis (BIA) and is used to estimate body composition as well as different parameters such as total body water and free fat mass. Electrochemical impedance spectroscopy can be used to obtain the frequency response of batteries and electrocatalytic systems at relatively high temperatures. Biomedical sensors working in the microwave range rely on dielectric spectroscopy to detect changes in the dielectric properties over a frequency range, such as non-invasive continuous blood glucose monitoring. The IFAC database can be used as a resource to get the dielectric properties of human body tissues. For heterogeneous mixtures such as suspensions, impedance spectroscopy can be used to monitor the particle sedimentation process. See also Debye relaxation Dielectric absorption, ultra-low frequency changes Dielectric loss Electrochemistry Ellipsometry Green–Kubo relations Induced polarization (IP) Kramers–Kronig relations Linear response function Potentiostat Spectral induced polarisation (SIP) References Electric and magnetic fields in matter Electrochemistry Impedance measurements Spectroscopy
Dielectric spectroscopy
Physics,Chemistry,Materials_science,Engineering
1,878
22,022,093
https://en.wikipedia.org/wiki/Instinet
Instinet Incorporated is an institutional, agency-model broker that also serves as the independent equity trading arm of its parent, Nomura Group. It executes trades for asset management firms, hedge funds, insurance companies, mutual funds and pension funds. Headquartered in New York City, the company provides sales trading services and trading technologies such as the Newport EMS, algorithms, trade cost analytics, commission management, independent research and dark pools. However, Instinet is best known for being one of the first off-exchange trading alternatives, with its "green screen" terminals prevalent in the 1980s and 1990s, and as the founder of electronic communication networks, Chi-X Europe and Chi-X Global. According to industry research group Markit, in 2015 Instinet was the 3rd-largest cash equities broker in Europe. History Early history Instinet was founded by Jerome M. Pustilnik and Herbert R. Behrens and was incorporated in 1969 as Institutional Networks Corp. The founders aimed to compete with the New York Stock Exchange by means of computer links between major institutions, such as banks, mutual funds, and insurance companies, with no delays or intervening specialists. Through the Instinet system, which went live in December 1969, the company provided computer services and a communications network for the automated buying and selling of equity securities on an anonymous, confidential basis. Uptake of the platform was slow through the 1970s, and in 1983 Instinet turned to William A. "Bill" Lupien, a former Pacific Stock Exchange specialist, to run the company. Bill Lupien decided to market the system more aggressively to the broker community, rather than focus exclusively on the buyside as his predecessors had. To expand its market, Lupien brought on board Fredric W. Rittereiser, formerly of Troster Singer and the Sherwood Group, as president and Chief Operating Officer, and David N. Rosensaft as Vice President (later SVP) of New Products Development. Rittereiser later departed to become CEO of First Jersey and was replaced by Joseph Taussig as COO. Together, they successfully introduced many innovations which made Instinet an integral tool for traders on both the "buy" and "sell" sides of the market. Reuters acquisition As a result of Lupien's refocusing of Instinet (as the business was renamed in 1985), the firm grew rapidly in the mid-1980s. During the Crash of 1987, Instinet's electronic trading system allowed trading to continue when brokers and market makers were unwilling to answer their phones during the free-fall. Reuters, which in 1985 had acquired a portion of the firm, acquired the entire business in May 1987, though under the deal Instinet would remain an independent, New York-based subsidiary. Lupien and then-COO Murray Finebaum resigned shortly thereafter. Alternative Trading Systems regulations Under Reuters, the Instinet platform continued to grow through the late 1980s and into the early 1990s. In the late 1990s, the U.S. Securities and Exchange Commission introduced the Order Handling Rules and alternative trading systems (ATS) regulation. In 1992, Instinet expanded internationally. Douglas Atkin led the effort and by 1998 Instinet was operating in over 20 world markets and had grown revenues to approximately $100 million. Instinet was the dominant electronic communication network. However, these rules also gave rise to new competitors, some of whom employed new pricing schemes.
By the early 2000s, these competitors, helped by missteps at Instinet that included rapid expansion, over-spending and slow uptake of technology, had managed to erode the firm's market share. As a result, Instinet in 2002 merged with the Island ECN, renaming the Island technology platform Inet. Public listing Reuters took Instinet public in 2001, keeping a 62% ownership stake. It held this stake until Instinet's acquisition by NASDAQ in 2005, in which Nasdaq retained the INET ECN and subsequently sold the agency brokerage business to Silver Lake Partners. Nomura acquisition In February 2007, Nomura purchased the firm from Silver Lake for a reported $1.2 billion. Instinet is today operated as an independent subsidiary of Nomura and run by CEO Ralston Roberts. In December 2009, in commemoration of its 40th anniversary, Instinet worked with the Make-a-Wish Foundation to grant wishes to 40 children with life-threatening illnesses. In May 2012, Nomura announced that it would transfer electronic trading in the United States to Instinet, with the goal of eventually making it the electronic trading arm for all of Nomura. However, in September 2012, Nomura announced that it would instead make Instinet its execution services (cash, program and electronic trading) arm in all markets globally excluding Japan. Achievements Instinet is credited with several firsts in electronic trading. In addition to launching one of the first electronic trading platforms in 1969, Instinet developed: 1980: First direct market access system 1986: First after market crossing system 1993: Instinet OMS, one of the first modern execution management system (EMS) platforms 1999: Instinet Helix, one of the first market routing platforms 2007: Chi-X Europe, the first and largest European multilateral trading facility, which in 2011 was acquired by BATS Global Markets 2008: Chi-X Global, operator of Chi-X Australia, Chi-X Canada and Chi-X Japan; in 2015, Nasdaq announced the acquisition of Chi-X Canada from Chi-X Global; in 2016 Chi-X Australia, Chi-X Japan and Chi-Tech were acquired by J.C. Flowers. 2017: Instinet announced the acquisition of BlockCross, State Street's US-equity dark pool. References External links Instinet Incorporated American companies established in 1969 Financial services companies established in 1969 1969 establishments in New York City 2001 initial public offerings 2007 mergers and acquisitions Computer networking Stock market Nomura Holdings Financial services companies based in New York City American subsidiaries of foreign companies
Instinet
Technology,Engineering
1,255
54,681
https://en.wikipedia.org/wiki/NP-hardness
In computational complexity theory, a computational problem H is called NP-hard if, for every problem L which can be solved in non-deterministic polynomial time, there is a polynomial-time reduction from L to H. That is, assuming a solution for H takes 1 unit time, H's solution can be used to solve L in polynomial time. As a consequence, finding a polynomial time algorithm to solve any single NP-hard problem would give polynomial time algorithms for all the problems in the complexity class NP. As it is suspected, but unproven, that P≠NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist. A simple example of an NP-hard problem is the subset sum problem. Informally, if H is NP-hard, then it is at least as difficult to solve as the problems in NP. However, the opposite direction is not true: some problems are undecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP-hard (unless P=NP). Definition A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H. Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time, so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includes search problems and optimization problems. Consequences If P ≠ NP, then NP-hard problems could not be solved in polynomial time. Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS). There are many classes of approximability, each one enabling approximation up to a different level. Examples All NP-complete problems are also NP-hard (see List of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph—commonly known as the travelling salesman problem—is NP-hard. The subset sum problem is another example: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem and happens to be NP-complete. There are decision problems that are NP-hard but not NP-complete, such as the halting problem. That is the problem which asks "given a program and its input, will it run forever?" That is a yes/no question and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments; when it finds one that satisfies the formula, it halts, and otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP, since all problems in NP are decidable in a finite number of operations, but the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE).
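The reduction from Boolean satisfiability to the halting problem described above can be illustrated directly: from a formula, build a program that halts if and only if the formula is satisfiable. The Python sketch below (using a hypothetical clause-list encoding chosen for illustration) mirrors that argument; it is a demonstration of the reduction idea, not an efficient SAT solver.

```python
from itertools import product

def halts_iff_satisfiable(clauses, num_vars):
    """Try all truth assignments; halt (return) on a satisfying one,
    otherwise loop forever -- mirroring the SAT-to-halting reduction.

    A clause is a list of literals; literal +i / -i means variable i
    is true / false (a hypothetical encoding chosen for illustration).
    """
    for assignment in product([False, True], repeat=num_vars):
        satisfied = all(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        if satisfied:
            return assignment  # halt: the formula is satisfiable
    while True:                # loop forever: the formula is unsatisfiable
        pass

# (x1 or not x2) and (x2 or x3): satisfiable, so this call halts.
print(halts_iff_satisfiable([[1, -2], [2, 3]], num_vars=3))
```

Deciding whether this program halts on a given formula is therefore at least as hard as SAT, which is the sense in which the halting problem is NP-hard.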
NP-naming convention NP-hard problems do not have to be elements of the complexity class NP. As NP plays a central role in computational complexity, it is used as the basis of several classes: NP Class of computational decision problems for which any given yes-solution can be verified as a solution in polynomial time by a deterministic Turing machine (or solvable by a non-deterministic Turing machine in polynomial time). NP-hard Class of problems which are at least as hard as the hardest problems in NP. Problems that are NP-hard do not have to be elements of NP; indeed, they may not even be decidable. NP-complete Class of decision problems which contains the hardest problems in NP. Each NP-complete problem has to be in NP. NP-easy At most as hard as NP, but not necessarily in NP. NP-equivalent Decision problems that are both NP-hard and NP-easy, but not necessarily in NP. NP-intermediate If P and NP are different, then there exist decision problems in the region of NP that fall between P and the NP-complete problems. (If P and NP are the same class, then NP-intermediate problems do not exist because in this case every NP-complete problem would fall in P, and by definition, every problem in NP can be reduced to an NP-complete problem.) Application areas NP-hard problems are often tackled with rules-based languages in areas including: Approximate computing Configuration Cryptography Data mining Decision support Phylogenetics Planning Process monitoring and control Rosters or schedules Routing/vehicle routing Scheduling See also Lists of problems List of unsolved problems Reduction (complexity) Unknowability References Complexity classes
NP-hardness
Mathematics
1,089
5,042,212
https://en.wikipedia.org/wiki/Sofya%20Yanovskaya
Sofya Aleksandrovna Yanovskaya (also Janovskaja; 31 January 1896 – 24 October 1966) was a Soviet mathematician, philosopher and historian, specializing in the history of mathematics, mathematical logic, and philosophy of mathematics. She is best known for her efforts in restoring the research of mathematical logic in the Soviet Union and publishing and editing the mathematical works of Karl Marx. Biography Yanovskaya was born in Pruzhany, a town near Brest, into the Jewish family of the accountant Alexander Neimark. From 1915 to 1918, she studied in a women's college in Odessa, during which time she became a communist. She worked as a party official until 1924, when she started teaching at the Institute of Red Professors. With the exception of the war years (1941–1945), she worked at Moscow State University until retirement. Engels had noted in his writings that Karl Marx had written some mathematics. Yanovskaya found Marx's Mathematical Manuscripts and arranged for their first publication in 1933 in Russian. She received her doctoral degree in 1935. Her work on Karl Marx's mathematical manuscripts began in the 1930s and may have had some influence on the study of non-standard analysis in China. In academia she is now most remembered for her work on the history and philosophy of mathematics, as well as for her influence on a younger generation of researchers. When Ludwig Wittgenstein visited the Soviet Union in 1935, she persuaded him to give up his idea of relocating there. She died from diabetes in Moscow; a fuller edition of Marx's mathematical writings, which she had prepared, appeared in 1968. Awards and honours For her work, Yanovskaya received the Order of Lenin and other medals. References Sources Irving Anellis (1987) "The heritage of S.A. Janovskaja". History and Philosophy of Logic 8: 45-56. B.A. Kushner (1996) "Sof'ja Aleksandrovna Janovskaja: a few reminiscences", Modern Logic 6: 67-72. V.A. Bazhanov (2002) Essays on the Social History of Logic in Russia. Simbirsk-Ulyanovsk. Chapter 5 (bibliography of S.A. Yanovskaya's works is presented here). (in Russian). B.V. Biryukov and L.G. Biryukova (2004) "Ludwig Wittgenstein and Sof'ya Aleksandrovna Yanovskaya. The 'Cambridge Genius' becomes acquainted with Soviet mathematicians in the 1930s" (in Russian). Logical Investigations. No. 11 (Russian), 46-94, Nauka, Moscow. Further reading "Sof'ya Aleksandrovna Janovskaja", Biographies of Women Mathematicians, Agnes Scott College Remembrances and more remembrances of S.A. Yanovskaya, by Boris A. Kushner (in Russian). a review of Yanovskaya's Methodological problems in science monograph – an article by B.V. Biryukov and O.A. Borisova (in Russian). 1965 Moscow Interview with Sofya Yanovskaya, Eugene Dynkin Collection of Mathematics Interviews, Cornell University Library (in Russian). Vadim Valilyev on the meeting between Ludwig Wittgenstein and Sophia Yanovskaya (in Russian). 1896 births 1966 deaths Burials at Novodevichy Cemetery People from Pruzhany People from Pruzhansky Uyezd Belarusian Jews Jews from the Russian Empire Soviet Jews Bolsheviks Communist Party of the Soviet Union members Soviet historians Soviet mathematicians Soviet women mathematicians Historians of science Historians of mathematics Jewish scientists Jewish historians Philosophers of mathematics 20th-century women mathematicians Soviet women historians Institute of Red Professors alumni Recipients of the Order of Lenin
Sofya Yanovskaya
Mathematics
774
24,271,629
https://en.wikipedia.org/wiki/Local%20tangent%20space%20alignment
Local tangent space alignment (LTSA) is a method for manifold learning, which can efficiently learn a nonlinear embedding into low-dimensional coordinates from high-dimensional data, and can also reconstruct high-dimensional coordinates from embedding coordinates. It is based on the intuition that when a manifold is correctly unfolded, all of the tangent hyperplanes to the manifold will become aligned. It begins by computing the k-nearest neighbors of every point. It computes the tangent space at every point by computing the first d principal components in each local neighborhood. It then optimizes to find an embedding that aligns the tangent spaces, but it ignores the label information conveyed by data samples, and thus cannot be used for classification directly. See also Isomap References Further reading Dimension reduction Manifolds
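As an illustrative aside (not part of the article), scikit-learn exposes LTSA as a variant of its locally linear embedding estimator; a minimal sketch follows, with the S-curve toy dataset and the neighbor count chosen purely for illustration.

```python
# Minimal sketch: LTSA via scikit-learn's LocallyLinearEmbedding (method="ltsa").
# Assumes scikit-learn is installed; dataset and parameters are illustrative.
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_s_curve(n_samples=1000, random_state=0)  # 3-D points on a curved manifold

# k nearest neighbors define each local patch; d is the embedding dimension.
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
Y = ltsa.fit_transform(X)  # 2-D coordinates of the unfolded manifold
print(Y.shape)  # (1000, 2)
```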
Local tangent space alignment
Mathematics
166
60,361,757
https://en.wikipedia.org/wiki/Fish%20DNA%20barcoding
DNA barcoding methods for fish are used to identify groups of fish based on DNA sequences within selected regions of a genome. These methods can be used to study fish, as genetic material, in the form of environmental DNA (eDNA) or cells, is freely diffused in the water. This allows researchers to identify which species are present in a body of water by collecting a water sample, extracting DNA from the sample and isolating DNA sequences that are specific for the species of interest. Barcoding methods can also be used for biomonitoring and food safety validation, animal diet assessment, assessment of food webs and species distribution, and for detection of invasive species. In fish research, barcoding can be used as an alternative to traditional sampling methods. Barcoding methods can often provide information without damage to the studied animal. Aquatic environments have unique properties that affect how genetic material from organisms is distributed. DNA material diffuses rapidly in aquatic environments, which makes it possible to detect organisms from a large area when sampling a specific spot. Due to rapid degradation of DNA in aquatic environments, detected species represent contemporary presence, without confounding signals from the past. DNA-based identification is fast, reliable and accurate in its characterization across life stages and species. Reference libraries are used to connect barcode sequences to single species and can be used to identify the species present in DNA samples. Libraries of reference sequences are also useful in identifying species in cases of morphological ambiguity, such as with larval stages. eDNA samples and barcoding methods are used in water management, as species composition can be used as an indicator of ecosystem health. Barcoding and metabarcoding methods are particularly useful in studying endangered or elusive fish, as species can be detected without catching or harming the animals. Applications Ecological monitoring Biomonitoring of aquatic ecosystems is required by national and international legislation (e.g. the Water Framework Directive and the Marine Strategy Framework Directive). Traditional methods are time-consuming and include destructive practices that can harm individuals of rare or protected species. DNA barcoding is a relatively cost-effective and quick method for identifying fish species in aquatic environments. Presence or absence of key fish species can be established using eDNA from water samples, and the spatio-temporal distribution of fish species (e.g. timing and location of spawning) can be studied. This can help reveal, for example, the impacts of physical barriers such as dam construction and other human disturbances. DNA tools are also used in dietary studies of fish and the construction of aquatic food webs. Metabarcoding of fish gut contents or feces identifies recently consumed prey species. However, secondary predation must be taken into consideration. Invasive species Early detection is vital for control and removal of non-indigenous, ecologically harmful species (e.g. lionfish (Pterois spp.) in the Atlantic and Caribbean). Metabarcoding of eDNA can be used to detect cryptic or invasive species in aquatic ecosystems. Fisheries management Barcoding and metabarcoding approaches yield rigorous and extensive data on recruitment, ecology and geographic ranges of fisheries resources. The methods also improve knowledge of nursery areas and spawning grounds, with benefits for fisheries management.
Traditional methods for fishery assessment can be highly destructive, such as gillnet sampling or trawling. Molecular methods offer an alternative for non-invasive sampling. For example, barcoding and metabarcoding can help identify fish eggs to species level to ensure reliable data for stock assessment, as this has proven more reliable than identification via phenotypic characters. Barcoding and metabarcoding are also powerful tools in monitoring of fisheries quotas and by-catch. eDNA can detect and quantify the abundance of some anadromous species as well as their temporal distribution. This approach can be used to develop appropriate management measures, of particular importance for commercial fisheries. Food safety Globalisation of food supply chains has led to increased uncertainty about the origin and safety of fish-based products. Barcoding can be used to validate the labelling of products and to trace their origin. “Fish fraud” has been discovered across the globe. A recent study from supermarkets in the state of New York found that 26.92% of seafood purchases with an identifiable barcode were mislabelled. Barcoding can also trace fish species in cases where consumption poses human health hazards. Further, biotoxins can become concentrated as they move up the food chain. One example relates to coral reef species, where predatory fish such as barracuda have been found to cause ciguatera fish poisoning. Such new associations of fish poisoning can be detected by the use of fish barcoding. Protection of endangered species Barcoding can be used in the conservation of endangered species through the prevention of illegal trading of CITES-listed species. There is a large black market for fish-based products, and also in the aquarium and pet trades. To protect sharks from overexploitation, illegal use can be detected by barcoding shark fin soup and traditional medicines. Methodology Sampling in aquatic environments Aquatic environments have special attributes that need to be considered when sampling for fish eDNA metabarcoding. Seawater sampling is of particular interest for assessment of the health of marine ecosystems and their biodiversity. Although the dispersion of eDNA in seawater is large and salinity negatively influences DNA preservation, a water sample can contain high amounts of eDNA from fish up to one week after sampling. Free molecules, intestinal lining and skin cell debris are the main sources of fish eDNA. In comparison to marine environments, ponds have biological and chemical properties that can alter eDNA detection. The small size of ponds compared to other water bodies makes them more sensitive to environmental conditions such as exposure to UV light and changes in temperature and pH. These factors can affect the amount of eDNA. Moreover, trees and dense vegetation around ponds represent a barrier that prevents water aeration by wind. Such barriers can also promote the accumulation of chemical substances that damage eDNA integrity. Heterogeneous distribution of eDNA in ponds may affect detection of fishes. Availability of fish eDNA is also dependent on life stage, activity, seasonality and behavior. The largest amounts of eDNA are obtained from spawning, larval stages and breeding activity. Target regions Primer design is crucial for metabarcoding success. Some studies on primer development have described cytochrome B and 16S as suitable target regions for fish metabarcoding. Evans et al.
(2016) described that the Ac16S and L2513/H2714 primer sets are able to detect fish species accurately in different mesocosms. Another study, performed by Valentini et al. (2016), showed that the L1848/H1913 primer pair, which amplifies a region of the 12S rRNA locus, was able to reach high taxonomic coverage and discrimination even with a short target fragment. This research also showed that at 89% of sampling sites, detection with the metabarcoding approach was similar to or even higher than with traditional methods (e.g. electrofishing and netting methods). Hänfling et al. (2016) performed metabarcoding experiments focused on lake fish communities using the 12S_F1/12S_R1 and CytB_L14841/CytB_H15149 primer pairs, whose targets were located in the mitochondrial 12S and cytochrome B regions respectively. The results demonstrate that detection of fish species was higher when using the 12S primers than CytB. This was due to the persistence of the shorter 12S fragments (~100 bp) in comparison to the larger CytB amplicon (~460 bp). In general, these studies indicate that primer design and selection have to be tailored to the objectives and nature of the experiment. Fish reference databases There are a number of open access databases available to researchers worldwide. The proper identification of fish specimens with DNA barcoding methods relies heavily on the quality and species coverage of available sequence databases. A fish reference database is an electronic database that typically contains DNA barcodes, images, and geospatial coordinates of examined fish specimens. The database can also contain linkages to voucher specimens, information on species distributions, nomenclature, authoritative taxonomic information, collateral natural history information and literature citations. Reference databases may be curated, meaning that the entries are subjected to expert assessment before being included, or uncurated, in which case they may include a large number of reference sequences but with less reliable identification of species. FISH-BOL Launched in 2005, the Fish Barcode of Life Initiative (FISH-BOL) www.fishbol.org is an international research collaboration that is assembling a standardized reference DNA sequence library for all fish species. It is a concerted global research project with the goal of collecting and assembling standardized DNA barcode sequences and associated voucher provenance data in a curated reference sequence library to aid the molecular identification of all fish species. If researchers wish to contribute to the FISH-BOL reference library, clear guidelines are provided for specimen collection, imaging, preservation, and archival, as well as meta-data collection and submission protocols. The FISH-BOL database functions as a portal to the Barcode of Life Data Systems (BOLD). French Polynesia Fish Barcoding Base The French Polynesia Fish Barcoding Database contains all the specimens captured during several field trips organised or participated in by CRIOBE (Centre for Island Research and Environmental Observatory) since 2006 in the Archipelagos of French Polynesia. For each classified specimen, the following information can be available: scientific name, picture, date, GPS coordinates, depth and method of capture, size, and Cytochrome Oxidase c Subunit 1 (CO1) DNA sequence. The database can be searched using a name (genus or species) or using a part of the CO1 DNA sequence.
Aquagene A collaborative product developed by several German institutions, Aquagene provides free access to curated genetic information on marine fish species. The database allows species identification by DNA sequence comparisons. All species are characterized by multiple gene sequences, presently including the standard CO1 barcoding gene together with CYTB, MYH6 and (coming shortly) RHOD, facilitating unambiguous species determination even for closely related species or those with high intraspecific diversity. The genetic data are complemented online with additional data on the sampled specimen, such as digital images, voucher number and geographic origin. Additional resources Other reference databases that are more general, but may also be useful for barcoding fish, are the Barcode of Life Data System and GenBank. Advantages Barcoding/metabarcoding provides quick and usually reliable species identification, meaning that morphological identification, i.e. taxonomic expertise, is not needed. Metabarcoding also makes it possible to identify species when organisms are degraded or only part of an organism is available. It is a powerful tool for detection of rare and/or invasive species, which can be detected despite low abundance. Traditional methods to assess fish biodiversity, abundance and density include the use of gear such as nets, electrofishing equipment, trawls, cages and fyke-nets, which show reliable results of presence only for abundant species. In contrast, rare native species, as well as newly established alien species, are less likely to be detected via traditional methods, leading to incorrect absence/presence assumptions. Barcoding/metabarcoding is also in some cases a non-invasive sampling method, as DNA can be analyzed from eDNA or from non-lethal samples of living organisms. For fish parasites, metabarcoding allows for detection of cryptic or microscopic parasites from aquatic environments, which is difficult with more direct methods (e.g. identifying species from samples with microscopy). Some parasites exhibit cryptic variation, and metabarcoding can be a helpful method in revealing this. The application of eDNA metabarcoding is cost-effective in large surveys or when many samples are required. eDNA can reduce the costs of fishing, transport of samples and time invested by taxonomists, and in most cases requires only small amounts of DNA from target species to reach reliable detection. Steadily decreasing prices for barcoding/metabarcoding due to technical development are another advantage. The eDNA approach is also suitable for monitoring of inaccessible environments. Challenges The results obtained from metabarcoding are limited or biased with respect to frequency of occurrence. It is also problematic that far from all species have reference barcodes attached to them. Even though metabarcoding may overcome some practical limitations of conventional sampling methods, there is still no consensus regarding experimental design and the bioinformatic criteria for application of eDNA metabarcoding. The lack of criteria is due to the heterogeneity of experiments and studies conducted so far, which dealt with different fish diversities and abundances, types of aquatic ecosystems, numbers of markers and marker specificities. Another significant challenge for the method is how to quantify fish abundance from molecular data.
Although there are some cases in which quantification has been possible, there appears to be no consensus on how, or to what extent, molecular data can meet this aim for fish monitoring. See also DNA barcoding DNA barcoding in diet assessment Algae DNA barcoding Microbial DNA barcoding Aquatic macroinvertebrate DNA barcoding References Authentication methods Bioinformatics Biometrics Molecular genetics DNA barcoding
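As an illustrative aside (not part of the article), barcode-based identification ultimately reduces to comparing a query sequence against a reference library. The deliberately simplified Python sketch below assigns a query to the closest reference by raw percent identity; real pipelines use proper sequence alignment (e.g. BLAST) and curated libraries such as BOLD, and the sequences and species assignments here are made up for illustration.

```python
# Toy barcode matching: pick the reference with the highest percent identity.
# Sequences are fabricated stand-ins for CO1 fragments, not real data.
def percent_identity(a: str, b: str) -> float:
    n = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a[:n], b[:n]) if x == y)
    return 100.0 * matches / n

reference_library = {  # hypothetical species -> CO1 fragment
    "Gadus morhua": "ACTTTATACCTACTATTCGGTGCC",
    "Salmo salar":  "ACTCTGTATCTAGTATTTGGTGCT",
}

query = "ACTTTATACCTACTATTCGGTGCT"  # fragment recovered from a water sample

best = max(reference_library, key=lambda sp: percent_identity(query, reference_library[sp]))
print(best, round(percent_identity(query, reference_library[best]), 1))
```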
Fish DNA barcoding
Chemistry,Engineering,Biology
2,739
76,229,435
https://en.wikipedia.org/wiki/Illinois%20Central%20382
Illinois Central No. 382, also known as "Ole' 382" or "The Cannonball", was a 4-6-0 "Ten Wheeler" bought new from the Rogers Locomotive Works in Paterson, New Jersey for the Illinois Central Railroad. Constructed in 1898, the locomotive was used for fast passenger service between Chicago, Illinois and New Orleans, Louisiana. On the night of April 30, 1900, engineer Casey Jones and fireman Simeon "Sim" Webb were traveling with the engine from Memphis, Tennessee to Canton, Mississippi. The train collided with the rear of a freight train stuck on the mainline in Vaughan, Mississippi, the last station before Canton, killing Jones and injuring dozens more. After the accident, the locomotive was rebuilt in Water Valley, Mississippi, and returned to service. The locomotive was believed to be cursed after Jones' death, as it would suffer three more accidents in its career before being retired in July 1935 and scrapped. Today, a stand-in for No. 382, former Clinchfield Railroad No. 99, is on display at the Casey Jones Home & Railroad Museum in Jackson, Tennessee, painted up as Illinois Central No. 382. History No. 382 was bought new from the Rogers Locomotive Works of Paterson, New Jersey. The new 300 series of 4-6-0 locomotives was designed for fast passenger service on the Illinois Central between Chicago, Illinois, and New Orleans, Louisiana. 1900 Wreck There are many accounts of Casey Jones' final journey that led up to his accident in Vaughan, Mississippi, but the agreed-upon facts are that Jones had taken a double shift on April 29 to cover for a sick engineer named Sam Tate. Jones and his fireman, Simeon Webb, had already traveled from Canton, Mississippi northbound to Memphis, Tennessee for their shift, taking the "New Orleans Special" with a sister locomotive of No. 382, No. 384. When Tate called in sick, Jones and Webb agreed to take Tate's "New Orleans Special" from Memphis, Tennessee to Canton, Mississippi. When they departed with their southbound "New Orleans Special" passenger train, it was an hour and a half behind schedule, with No. 382 hauling the five-car train since its departure from Chicago. At 12:30 AM on the night of April 30, the train left Memphis and started its near non-stop journey to Canton, with the only stop being in Goodman, Mississippi to let another train pass. As Jones drove No. 382 down toward Canton, the station and sidings in Vaughan, Mississippi were filled with three trains all at the same time. The crucial train was a doubleheader going southbound, as its train was too long for the siding. As the "New Orleans Special" rounded an S-curve, fireman Simeon Webb spotted the doubleheader stuck on the tracks and yelled a warning; Jones applied the emergency brakes and threw No. 382 into reverse at the same time. Jones told Webb to jump out, and Webb did, getting knocked unconscious as he hit the ground. Jones' train crashed at 3:52 AM and smashed through a caboose, two separate flatcars (one full of hay, the other of corn), and halfway through a flatcar of lumber. Jones was the only fatality of that accident. Post 1900 After the Vaughan wreck, No. 382 was moved to Water Valley, Mississippi for repairs, returning to service that summer. However, the engine had a string of other accidents throughout the rest of its career, resulting in six deaths, including Casey Jones. In 1903, criminals sabotaged the tracks and caused No. 382 to flip on its side. Engineer Harry A.
Norton lost both of his legs and received third-degree burns. His fireman, however, was fatally scalded and died three days later. In 1905, the engine ran over a set of points, derailed, and flipped down an embankment in the Memphis South Yards in Tennessee. Norton was the driver of No. 382 that day too, but he again survived. The locomotive was renumbered 212 in July 1900, then 2012 in July/August 1907, then 5012 in 1922. On January 22, 1912, No. 2012 crashed into the rear of a passenger train in Kinmundy, Illinois, resulting in four deaths, including the former president of the Illinois Central. This also ended up being the engine's deadliest accident. In July 1935, No. 2012 was removed from service and scrapped. Clinchfield No. 99 Carolina, Clinchfield, & Ohio Railroad, or Clinchfield for short, No. 99 is a 4-6-0 built by the Baldwin Locomotive Works in 1905 as South & Western Railway Company No. 1. In 1908, the South & Western became the Carolina, Clinchfield & Ohio Railway. In 1924, the road was incorporated with the Carolina, Clinchfield & Ohio of South Carolina and the Clinchfield & Northern Railway of Kentucky into the new Clinchfield Railroad, and the engine was renumbered to No. 99. In 1953, No. 99 was sold to the Black Mountain Railway in Burnsville, North Carolina, where it was renumbered to No. 3. The company was bought by the Yancey Railroad in 1955. The engine was retired on the Yancey Railroad the following year, in 1956, and was sold to the City of Jackson, Tennessee, which purchased No. 99 for the purpose of putting it on display at a new museum dedicated to Casey Jones' life near his and Jeanie Brady's home. The engine was cosmetically restored as Illinois Central No. 382 and was put on display at the Casey Jones Home & Railroad Museum, which opened later that same year. In 1980, the Casey Jones Village was established, and Jones' home and No. 382 were moved to the new plaza, with the museum reopening a year later in 1981. Current Disposition No. 99, repainted as IC No. 382, is now on static display at the Casey Jones Home & Railroad Museum in Jackson, Tennessee. Legacy No. 382 has been featured and mentioned in several songs along with Casey Jones. No. 382 even served as the basis for the mock-up locomotives No. 29 and Constitution in the 2013 live-action Disney film The Lone Ranger. See also Illinois Central Railroad Illinois Railway Museum The Ballad of Casey Jones References Rogers locomotives Jackson, Tennessee Illinois Central locomotives Scrapped locomotives Train wreck ballads Casey Jones 4-6-0 locomotives Curses Illinois Central Railroad
Illinois Central 382
Technology
1,331
1,423,632
https://en.wikipedia.org/wiki/Integrated%20Motor%20Assist
Integrated Motor Assist (commonly abbreviated as IMA) is Honda's hybrid car technology, introduced in 1999 on the Insight. It is a specific implementation of a parallel hybrid. It uses an electric motor mounted between the internal combustion engine and transmission to act as a starter motor, power generator, engine balancer, and assist traction motor. Overview In its first generation, IMA could not power the car on electricity alone, and could only use the motor to assist or start the engine. The 2006 Civic Hybrid, however, can activate the electric motor while the vehicle is coasting without turning on the internal combustion engine. In contrast to Toyota's Hybrid Synergy Drive (HSD) or General Motors and DaimlerChrysler's Global Hybrid Cooperation, though, the IMA has a less powerful motor/generator, which limits how strongly the car can be decelerated by regeneration alone, and the motor cannot operate without turning over the engine, to which it is directly coupled. Regenerative braking The IMA uses regenerative braking to capture some of the energy that would otherwise be lost as heat during braking and reuse that energy later to help accelerate the vehicle. This has three effects: it increases the rate of acceleration, reduces the work required of the engine, and reduces the frequency of brake hardware replacement. The acceleration boost is important as it allows the engine to be scaled down to a smaller but more fuel-efficient variant without rendering the vehicle unacceptably slow or weak. This smaller engine is the primary reason cars equipped with IMA get better highway mileage than their more conventional counterparts. Starting Additionally, vehicles equipped with IMA can shut off their engine when the vehicle stops and use the electric motor to rapidly spin it back up when the driver releases the brake pedal (or in the manual transmission variant, when the shifter is placed into gear from neutral). They also have a conventional starter as a backup, making IMA the only production hybrid system which can operate with its high-voltage electric system disabled, using only its engine like a traditional vehicle. However, since the IMA also acts as the vehicle's alternator, eventually the 12-volt accessory battery would require an external charge. Other names ISG: Integrated Starter Generator ISA: Integrated Starter Alternator ISAD: Integrated Starter Alternator Damper CAS: Combined Alternator Starter CSA: Crankshaft Starter Alternator or Combined Starter Alternator CISG: Crank-mounted Integrated Starter Generator List of vehicles using IMA Honda J-VX (1997 concept car) Honda Insight (1999-2006, 2010-2014) Honda Dualnote (2001 concept car) Honda Fit Hybrid (2010-2014) Honda Fit Shuttle Hybrid (2010-2015) Honda Civic Hybrid (2003-2015) Honda Accord Hybrid (2005-2007) Honda Freed Hybrid (2008-2016) Honda CR-Z (2010-2016) Acura ILX Hybrid (2013-2014) References External links Honda Official "Green Technology - Hybrid" Automotive technology tradenames Engine technology Honda Hybrid powertrain
Integrated Motor Assist
Technology
619
145,753
https://en.wikipedia.org/wiki/Hydrosphere
The hydrosphere is the combined mass of water found on, under, and above the surface of a planet, minor planet, or natural satellite. Although Earth's hydrosphere has been around for about 4 billion years, it continues to change in shape. This is caused by seafloor spreading and continental drift, which rearrange the land and ocean. It has been estimated that there are 1.386 billion cubic kilometres (333 million cubic miles) of water on Earth. This includes water in gaseous, liquid and frozen forms as soil moisture, groundwater and permafrost in the Earth's crust (to a depth of 2 km); oceans and seas, lakes, rivers and streams, wetlands, glaciers, ice and snow cover on Earth's surface; vapour, droplets and crystals in the air; and part of living plants, animals and unicellular organisms of the biosphere. Saltwater accounts for 97.5% of this amount, whereas fresh water accounts for only 2.5%. Of this fresh water, 68.9% is in the form of ice and permanent snow cover in the Arctic, the Antarctic and mountain glaciers; 30.8% is in the form of fresh groundwater; and only 0.3% of the fresh water on Earth is in easily accessible lakes, reservoirs and river systems. The total mass of Earth's hydrosphere is about 1.4 × 10¹⁸ tonnes, which is about 0.023% of Earth's total mass. At any given time, about 2 × 10¹³ tonnes of this is in the form of water vapor in the Earth's atmosphere (for practical purposes, 1 cubic metre of water weighs 1 tonne). Approximately 71% of Earth's surface, an area of some 361 million square kilometres (139.5 million square miles), is covered by ocean. The average salinity of Earth's oceans is about 35 grams of salt per kilogram of sea water (3.5%). History According to Merriam-Webster, the word hydrosphere was brought into English in 1887, translating the German term Hydrosphäre, introduced by Eduard Suess. Water cycle The water cycle refers to the transfer of water from one state or reservoir to another. Reservoirs include atmospheric moisture (snow, rain and clouds), streams, oceans, rivers, lakes, groundwater, subterranean aquifers, polar ice caps and saturated soil. Solar energy, in the form of heat and light (insolation), and gravity cause the transfer from one state to another over periods from hours to thousands of years. Most evaporation comes from the oceans and is returned to the earth as snow or rain. Sublimation refers to evaporation from snow and ice. Transpiration refers to the expiration of water through the minute pores or stomata of trees. Evapotranspiration is the term used by hydrologists in reference to the three processes together: transpiration, sublimation and evaporation. Marq de Villiers has described the hydrosphere as a closed system in which water exists. The hydrosphere is intricate, complex, interdependent, all-pervading, stable, and "seems purpose-built for regulating life." De Villiers claimed that, "On earth, the total amount of water has almost certainly not changed since geological times: what we had then we still have. Water can be polluted, abused, and misused but it is neither created nor destroyed, it only migrates. There is no evidence that water vapor escapes into space." Every year the turnover of water on Earth involves 577,000 km³ of water. This is water that evaporates from the oceanic surface (502,800 km³) and from land (74,200 km³). The same amount of water falls as atmospheric precipitation, 458,000 km³ on the ocean and 119,000 km³ on land.
The difference between precipitation and evaporation from the land surface (119,000 − 74,200 = 44,800 km³/year) represents the total runoff of the Earth's rivers (42,700 km³/year) and direct groundwater runoff to the ocean (2,100 km³/year). These are the principal sources of fresh water to support life necessities and man's economic activities. Water is a basic necessity of life. Since two thirds of the Earth is covered by water, the Earth is also called the blue planet and the watery planet. The hydrosphere plays an important role in the existence of the atmosphere in its present form. Oceans are important in this regard. When the Earth was formed it had only a very thin atmosphere, rich in hydrogen and helium, similar to the present atmosphere of Mercury. Later the gases hydrogen and helium were expelled from the atmosphere. The gases and water vapor released as the Earth cooled became its present atmosphere. Other gases and water vapor released by volcanoes also entered the atmosphere. As the Earth cooled, the water vapor in the atmosphere condensed and fell as rain. The atmosphere cooled further as atmospheric carbon dioxide dissolved into the rain water. In turn, this further caused water vapor to condense and fall as rain. This rain water filled the depressions on the Earth's surface and formed the oceans. It is estimated that this occurred about 4,000 million years ago. The first life forms began in the oceans. These organisms did not breathe oxygen. Later, when cyanobacteria evolved, the process of conversion of carbon dioxide into food and oxygen began. As a result, Earth's atmosphere has a distinctly different composition from that of other planets and allowed for life to evolve on Earth. Human activity has had an impact on the water cycle. Infrastructure, such as dams, has a clear, direct impact on the water cycle by blocking and redirecting water pathways. Human-caused pollution has changed the biogeochemical cycles of some water systems, and climate change has significantly altered weather patterns. Water withdrawals have increased exponentially because of agriculture, state and domestic use, and infrastructure. Recharging reservoirs According to Igor A. Shiklomanov, it takes 2,500 years for the complete recharge and replenishment of oceanic waters, 10,000 years for permafrost and ice, 1,500 years for deep groundwater and mountainous glaciers, 17 years in lakes, and 16 days in rivers. Specific fresh water availability "Specific water availability is the residual (after use) per capita quantity of fresh water." Fresh water resources are unevenly distributed in terms of space and time and can go from floods to water shortages within months in the same area. In 1998, 76% of the total population had a specific water availability of less than 5.0 thousand m³ per year per capita. Already by 1998, 35% of the global population suffered "very low or catastrophically low water supplies," and Shiklomanov predicted that the situation would deteriorate in the twenty-first century, with "most of the Earth's population living under the conditions of low or catastrophically low water supply" by 2025. Only 2.5% of the water in the hydrosphere is fresh water and only 0.25% of that water is accessible for our use. Human impact The activities of modern humans have drastic effects on the hydrosphere. For instance, water diversion, human development, and pollution all affect the hydrosphere and natural processes within. Humans are withdrawing water from aquifers and diverting rivers at an unprecedented rate.
The Ogallala Aquifer is used for agriculture in the United States; if the aquifer goes dry, more than $20 billion worth of food and fiber will vanish from the world's markets. The aquifer is being depleted so much faster than it is replenished that, eventually, it will run dry. Additionally, only one third of rivers are free-flowing, due to the extensive use of dams, levees, hydropower, and habitat degradation. Excessive water use has also caused intermittent streams to become drier, which is dangerous because they are extremely important for water purification and habitat. Other ways humans impact the hydrosphere include eutrophication, acid rain, and ocean acidification. Humans also rely on the health of the hydrosphere. It is used for water supply, navigation, fishing, agriculture, energy, and recreation. See also Aquatic ecosystem Biosphere Climate system Cryosphere Lithosphere World ocean Pedosphere Water cycle Extraterrestrial liquid water List of largest lakes and seas in the Solar System Ocean world Notes References External links Ground Water - USGS Aquatic ecology Hydrology Physical geography Global natural environment Water Hydrogeology
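As an illustrative aside (not part of the article), the global water-balance figures quoted in the water cycle section above can be cross-checked with a few lines of arithmetic:

```python
# Quick check of the global water-balance figures from the article (km^3/year).
evaporation_ocean, evaporation_land = 502_800, 74_200
precipitation_ocean, precipitation_land = 458_000, 119_000

total_evaporation = evaporation_ocean + evaporation_land        # 577,000
total_precipitation = precipitation_ocean + precipitation_land  # 577,000
land_surplus = precipitation_land - evaporation_land            # 44,800

river_runoff, groundwater_runoff = 42_700, 2_100
assert total_evaporation == total_precipitation == 577_000
assert land_surplus == river_runoff + groundwater_runoff == 44_800
print(land_surplus)  # net land surplus returned to the ocean as runoff
```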
Hydrosphere
Chemistry,Engineering,Biology,Environmental_science
1,772
63,514,425
https://en.wikipedia.org/wiki/Praseodymium%28IV%29%20oxide
Praseodymium(IV) oxide is an inorganic compound with chemical formula PrO₂. Production Praseodymium(IV) oxide can be produced by boiling Pr₆O₁₁ in water or acetic acid: Pr₆O₁₁ + 3 H₂O → 4 PrO₂ + 2 Pr(OH)₃ Chemical reactions Praseodymium(IV) oxide starts to decompose at 320–360 °C, liberating oxygen. References Praseodymium compounds Oxides Fluorite crystal structure
Praseodymium(IV) oxide
Chemistry
108
5,066,661
https://en.wikipedia.org/wiki/Zinc%20fluoride
Zinc fluoride is an inorganic chemical compound with the chemical formula ZnF₂. It is encountered as the anhydrous form and also as the tetrahydrate, ZnF₂·4H₂O (rhombohedral crystal structure). It has a high melting point and has the rutile structure containing six-coordinate zinc, which suggests appreciable ionic character in its chemical bonding. Unlike the other zinc halides, ZnCl₂, ZnBr₂ and ZnI₂, it is not very soluble in water. Like some other metal difluorides, ZnF₂ crystallizes in the rutile structure, which features octahedral Zn cations and trigonal planar fluorides. Preparation and reactions Zinc fluoride can be synthesized several ways: the reaction of zinc metal with fluorine gas, or the reaction of hydrofluoric acid with zinc, to yield hydrogen gas (H₂) and zinc fluoride (ZnF₂). Zinc fluoride can be hydrolysed by hot water to form the zinc hydroxide fluoride, Zn(OH)F. The salt is believed to form both a tetrahydrate and a dihydrate. References External links Zinc compounds Metal halides Fluorides
Zinc fluoride
Chemistry
238
529,912
https://en.wikipedia.org/wiki/Coral%20Reefs
Coral Reefs is a quarterly peer-reviewed scientific journal dedicated to the study of coral reefs. It was established in 1982 and is published by Springer Science+Business Media on behalf of the International Society for Reef Studies, of which it is the official journal. The society is now known as the International Coral Reef Society. The editor-in-chief is Morgan Pratchett (James Cook University). According to the Journal Citation Reports, the journal has a 2017 impact factor of 3.095. According to Springer, the journal has a 2020 impact factor of 3.902 and a five-year impact factor of 3.880, and as of 2021 it had 454,744 downloads. References External links International Coral Reef Society Ecology journals Coral reefs Springer Science+Business Media academic journals English-language journals Academic journals established in 1982 Academic journals associated with international learned and professional societies Quarterly journals
Coral Reefs
Biology,Environmental_science
176
29,717,622
https://en.wikipedia.org/wiki/Foucault%20knife-edge%20test
The Foucault knife-edge test is an optical test to accurately measure the shape of concave curved mirrors. It is commonly used by amateur telescope makers for figuring primary mirrors in reflecting telescopes. It uses a relatively simple, inexpensive apparatus compared to other testing techniques. Overview The Foucault knife-edge test was described in 1858 by French physicist Léon Foucault as a way to measure conic shapes of optical mirrors. It measures mirror surface dimensions by reflecting light onto a knife edge at or near the mirror's centre of curvature. In doing so, it only needs a tester which in its most basic 19th-century form consists of a light bulb, a piece of tinfoil with a pinhole in it, and a razor blade to create the knife edge. The testing device is adjustable along the X-axis (knife cut direction) across the Y-axis (optical axis), and is usually equipped with measurable adjustment to 0.001 inch (25 μm) or better along lines parallel to the optical axis. The test can measure errors in a mirror's curvature to fractions of wavelengths of light (or angstroms, millionths of an inch, or nanometers). Foucault test basics Foucault testing is commonly used by amateur telescope makers for figuring primary mirrors in reflecting telescopes. The mirror to be tested is placed vertically in a stand. The Foucault tester is set up at the distance of the mirror's radius of curvature (radius R is twice the focal length), with the pinhole to one side of the centre of curvature (a short vertical slit parallel to the knife edge can be used instead of the pinhole). The tester is adjusted so that the returning beam from the pinhole light source is interrupted by the knife edge. Viewing the mirror from behind the knife edge shows a pattern on the mirror surface. If the mirror surface is part of a perfect sphere, the mirror appears evenly lighted across the entire surface. If the mirror is spherical but with defects such as bumps or depressions, the defects appear greatly magnified in height. If the surface is paraboloidal, the mirror usually looks like a doughnut or lozenge, although the exact appearance depends on the exact position of the knife edge. It is possible to calculate how closely the mirror surface resembles a perfect parabola by placing a Couder mask, Everest pin stick (after A. W. Everest) or other zone marker over the mirror. A series of measurements is then made with the tester, finding the radii of curvature of the zones along the optical axis of the mirror (Y-axis). These data are then reduced and graphed against an ideal parabolic curve. Other testing techniques A number of other tests are used which measure the mirror at the center of curvature. Some telescope makers use a variant of the Foucault test called a Ronchi test that replaces the knife edge with a grating (similar to a very coarse diffraction grating) comprising fine parallel wires, an etching on a glass plate, a photographic negative or a computer-printed transparency. Ronchi test patterns are matched to those of standard mirrors or generated by computer. Other variants of the Foucault test include the Gaviola or caustic test, which can measure mirrors of fast f/ratio more accurately than the Foucault test, which is limited to about (λ/8) wavelength accuracy on small and medium-sized mirrors.
The caustic test is capable of measuring larger mirrors and achieving a (λ/20) wave peak-to-valley accuracy by using a testing stage which is adjusted from side to side so as to measure each zone of each side of the mirror from the center of its curvature. The Dall null test uses a plano-convex lens placed a short distance in front of the pinhole. With the correct positioning of the lens, a parabolic mirror appears flat under testing instead of doughnut-shaped, so testing is much easier and zonal measurements are not needed. There are a number of interferometric tests which have been used, including the Michelson-Twyman and the Michelson methods, both published in 1918, the Lenouvel method and the Fizeau method. Interferometric testing has been made more affordable in recent years by affordable lasers, digital cameras (such as webcams), and computers, but remains primarily an industrial methodology. See also Schlieren photography Airy disk Amateur telescope making Angular resolution (see Angular resolution#Explanation for discussion of the Rayleigh criterion) Diffraction-limited system Huygens–Fresnel principle#Single slit diffraction Fabrication and testing of optical components Null corrector Strehl ratio References Further reading L. Foucault, "Description des procédés employés pour reconnaître la configuration des surfaces optiques," Comptes rendus hebdomadaires des séances de l'Académie des Sciences, Paris, vol. 47, pages 958-959 (1858). L. Foucault, "Mémoire sur la construction des télescopes en verre argenté," Annales de l'Observatoire impérial de Paris, vol. 5, pages 197-237 (1859). Amateur astronomy Measuring instruments Mirrors Optical devices Telescopes
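As an illustrative aside (not part of the article), the zone measurements described above can be compared against ideal values computed from the mirror geometry. The sketch below uses the standard amateur-telescope-making approximation for a "moving source" tester, where the knife-edge null for a zone of radius h on a paraboloid lies h²/(2R) farther from the mirror than the null for the center (a fixed-source tester would use h²/R instead); the mirror dimensions and zone radii are made up for illustration.

```python
# Ideal Foucault readings for a paraboloid, moving-source tester.
# R = radius of curvature = 2 x focal length; values are illustrative.
R = 2400.0                                # mm, for a 1200 mm focal length mirror
zone_radii = [25.0, 50.0, 75.0, 100.0]    # zone radii h in mm, e.g. from a Couder mask

for h in zone_radii:
    offset = h**2 / (2 * R)               # longitudinal knife-edge offset in mm
    print(f"zone h = {h:6.1f} mm -> ideal offset = {offset:.3f} mm")
```

Measured offsets are compared (graphically or numerically) against these ideal values; the differences, suitably scaled, indicate how far each zone departs from the desired paraboloid.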
Foucault knife-edge test
Materials_science,Astronomy,Technology,Engineering
1,088
34,279,530
https://en.wikipedia.org/wiki/Durcupan
Durcupan is a water-soluble epoxy resin produced by the Fluka subsidiary of Sigma-Aldrich. It is commonly used for embedding electron microscope samples in plastic so they may be sectioned (sliced thin) with a microtome and then imaged. Durcupan is notable for a refractive index (n_D at 20 °C) of 1.654, which is a very high value for epoxy resins. References Electron microscopy Synthetic resins
Durcupan
Chemistry
98
13,478,591
https://en.wikipedia.org/wiki/Thomas%20M.%20Connelly
Thomas M. Connelly Jr. (born June 1952) is an American business executive with a focus on chemical engineering. In February 2015, he succeeded Madeleine Jacobs as chief executive officer and executive director of the American Chemical Society. In November 2014, E. I. du Pont de Nemours and Company announced that Connelly was retiring from his position as executive vice president and chief innovation officer after 36 years with the company. Education Connelly studied at Princeton University, earning degrees in chemical engineering and economics in 1974. He then attended the University of Cambridge as a Winston Churchill Scholar, where he received a Ph.D. in chemical engineering. DuPont Connelly was employed by E. I. du Pont de Nemours and Company for 36 years. He joined the company in 1977 as a research engineer at the DuPont Experimental Station in Wilmington, Delaware. He had assignments in Kentucky and West Virginia before starting his overseas assignments. He held positions in England, Switzerland and China – the final position with responsibility for DuPont's Asia Pacific businesses. He returned to Wilmington in 1999 and was named vice president and general manager of DuPont Fluoroproducts. He was named senior vice president of research and chief science and technology officer in 2001. In 2006 he was promoted to executive vice president and chief innovation officer, and became a member of the Office of the Chief Executive of DuPont. In this position, he had responsibility for DuPont's Applied BioSciences, Nutrition & Health, Performance Polymers and Packaging & Industrial Polymers businesses. He also had responsibility for Integrated Operations, which includes Operations, Sourcing & Logistics and Engineering. DuPont announced he was retiring from the company in 2014. Other positions and honors He is a member of the Department of Chemical Engineering Advisory Committee of Princeton University. As part of the Chemical Heritage Foundation "Heritage Day 2005" ceremonies, Connelly received the 2005 Award for Executive Excellence of the Commercial Development and Marketing Association (CDMA). References External links Engineers from Ohio Businesspeople from Toledo, Ohio American chemical industry businesspeople DuPont people 1952 births Living people American Chemical Society
Thomas M. Connelly
Chemistry
418
5,851,271
https://en.wikipedia.org/wiki/Religious%20goods%20store
A religious goods store, also known as a religious bookstore, religious gifts store or religious supplies shop, is a store specializing in supplying materials used in the practice of a particular religious tradition, such as Buddhism, Taoism, Chinese folk religion, Christianity and Islam, among other religions. These shops are abundant across the Greater China region as well as in Overseas Chinese communities around the world. In Iran, religious goods stores are usually visited to buy the Quran, the Mafatih al-Jinan, goods like the tasbīḥ, and many other things. One related service is adding a page to the Mafatih al-Jinan book for a deceased loved one. In Christendom, religious goods stores are often visited to purchase Christian art, books and devotional material for the home, as well as gifts such as a Bible, daily devotional or cross necklace for occasions such as Baptism, Confirmation and Holy Matrimony. Items for sale Buddhism In Buddhist bookshops, a variety of Buddhist books and chanting CDs are usually available for sale. There is also a wide range of other products, which includes Buddha statues, Buddhist pendants, incense, candles, chanting beads, instruments, Buddhist monastics' robes, meditation cushions and other Buddhist accessories. Christianity In Christendom, "religious goods stores", also known as "Christian bookstores", have Family Bibles, Christian art, daily devotional books, breviaries, catechisms, cross necklaces, Christian music albums, holy cards, home altars, prie-dieus, and prayer beads (such as the Dominican Rosary of Catholicism, the Wreath of Christ of Lutheranism, the Anglican Rosary of Anglicanism, and the Chotki of Eastern Orthodoxy), among other sacramentals. Chinese folk religion and Taoism Statues representing Chinese deities, including Bodhisattva images like Guanyin or Di Zang Wang; the Tong Sheng (通勝), a Chinese divination guide and almanac; all forms of Chinese incense, kemenyan and candles; incense papers, underworld bank notes and various forms of paper offerings; tablets dedicated to Tian Gong (天公), Tu Di Gong (土地公) and Zao Jun (灶君), and ancestral tablets; unconsecrated religious devotional objects like the Pa-Kua, Qian Kun Tai Ji Tu and Shang Hai Zheng; incense urns or incense holders; Chinese teapots, tea cups and Chinese tea leaves; incense paper burners; incense sticks. See also Chinese folk religion Chinese folk religion in Southeast Asia Chinese ancestral worship Ancestral hall & Ancestral tablets Joss paper Dajiao Zhizha Papier-mache offering shops in Hong Kong References Chinese folk religion Religious objects Religion in Taiwan Religion in Hong Kong Religion in Singapore Religion in Malaysia Retailing in China Retailing in Taiwan Retailing in Hong Kong Retailing in Singapore
Religious goods store
Physics
581
66,926,646
https://en.wikipedia.org/wiki/2-Hexyne
2-Hexyne is an organic compound that belongs to the alkyne group. Like its isomers, it has the chemical formula C₆H₁₀. Reactions 2-Hexyne can be semihydrogenated to yield 2-hexene or fully hydrogenated to hexane. With appropriate noble metal catalysts it can selectively form cis-2-hexene. 2-Hexyne can act as a ligand on gold atoms. With strong sulfuric acid, the ketone 2-hexanone is produced. However, this reaction also causes polymerization and charring. Under heat and pressure 2-hexyne polymerizes to linear oligomers and polymers. This can be hastened by catalysts such as molybdenum pentachloride with tetraphenyltin. However, Ziegler–Natta catalysts have no effect, as the triple bond is hindered. References Alkynes
2-Hexyne
Chemistry
201
40,688
https://en.wikipedia.org/wiki/Baud
In telecommunications and electronics, baud (symbol: Bd) is a common unit of measurement of symbol rate, which is one of the components that determine the speed of communication over a data channel. It is the unit for symbol rate or modulation rate in symbols per second or pulses per second. It is the number of distinct symbol changes (signalling events) made to the transmission medium per second in a digitally modulated signal or a line code. Baud is related to gross bit rate, which can be expressed in bits per second (bit/s). If there are precisely two symbols in the system (typically 0 and 1), then baud and bits per second are equivalent. Naming The baud unit is named after Émile Baudot, the inventor of the Baudot code for telegraphy, and is represented according to the rules for SI units. That is, the first letter of its symbol is uppercase (Bd), but when the unit is spelled out, it should be written in lowercase (baud) except when it begins a sentence or is capitalized for another reason, such as in title case. It was defined by the CCITT (now the ITU) in November 1926. The earlier standard had been the number of words per minute, which was a less robust measure since word length can vary. Definitions The symbol duration time, also known as the unit interval, can be directly measured as the time between transitions by looking at an eye diagram of the signal on an oscilloscope. The symbol duration time Ts can be calculated as Ts = 1/fs, where fs is the symbol rate. The terminology can lead to miscommunication and ambiguity. Example: communication at the baud rate 1000 Bd means communication by means of sending 1000 symbols per second. In the case of a modem, this corresponds to 1000 tones per second; similarly, in the case of a line code, this corresponds to 1000 pulses per second. The symbol duration time is 1/1000 second (that is, 1 millisecond). The baud is scaled using standard metric prefixes, so that for example 1 kBd (kilobaud) = 1000 Bd 1 MBd (megabaud) = 1000 kBd 1 GBd (gigabaud) = 1000 MBd Relationship to gross bit rate The symbol rate is related to the gross bit rate expressed in bit/s. The term baud has sometimes incorrectly been used to mean bit rate, since these rates are the same in old modems as well as in the simplest digital communication links using only one bit per symbol, such that binary digit "0" is represented by one symbol, and binary digit "1" by another symbol. In more advanced modems and data transmission techniques, a symbol may have more than two states, so it may represent more than one bit. A bit (binary digit) always represents one of two states. If N bits are conveyed per symbol, and the gross bit rate is R, inclusive of channel coding overhead, the symbol rate fs can be calculated as fs = R/N. By taking the information per pulse N in bit/pulse to be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley constructed a measure of the gross bit rate R as R = fs N, where N = ⌈log₂(M)⌉. Here, ⌈x⌉ denotes the ceiling function of x: for x taken to be any real number greater than zero, the ceiling function rounds x up to the nearest natural number (e.g. ⌈2.11⌉ = 3). In that case, M = 2^N different symbols are used. In a modem, these may be time-limited sinewave tones with unique combinations of amplitude, phase and/or frequency. For example, in a 64QAM modem, M = 64, and so the bit rate is N = log₂(64) = 6 times the baud rate. In a line code, these may be M different voltage levels. The ratio is not necessarily an integer; in 4B3T coding, the bit rate is 4/3 of the baud rate.
(A typical basic rate interface with a 160 kbit/s raw data rate operates at 120 kBd.) Codes with many symbols, and thus a bit rate higher than the symbol rate, are most useful on channels such as telephone lines with a limited bandwidth but a high signal-to-noise ratio within that bandwidth. In other applications, the bit rate is less than the symbol rate. Eight-to-fourteen modulation as used on audio CDs has a bit rate 8/14 of the baud rate. See also Notes References External links Data transmission Units of frequency
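As an illustrative aside (not part of the article), the relationships above reduce to a few lines of arithmetic; the values below are made up for a 64QAM link signalling at 1000 Bd.

```python
# Bits per symbol N = ceil(log2(M)), gross bit rate R = fs * N,
# symbol duration Ts = 1/fs. Values are illustrative.
import math

M = 64                       # number of distinct symbols (64QAM)
fs = 1000                    # symbol rate in baud (symbols per second)

N = math.ceil(math.log2(M))  # bits carried per symbol -> 6
R = fs * N                   # gross bit rate in bit/s -> 6000
Ts = 1 / fs                  # symbol duration in seconds -> 0.001

print(N, R, Ts)
```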
Baud
Mathematics
916
312,308
https://en.wikipedia.org/wiki/Dirac%20sea
The Dirac sea is a theoretical model of the electron vacuum as an infinite sea of electrons with negative energy, now called positrons. It was first postulated by the British physicist Paul Dirac in 1930 to explain the anomalous negative-energy quantum states predicted by the relativistically correct Dirac equation for electrons. The positron, the antimatter counterpart of the electron, was originally conceived of as a hole in the Dirac sea, before its experimental discovery in 1932. In hole theory, the solutions with negative time evolution factors are reinterpreted as representing the positron, discovered by Carl Anderson. The interpretation of this result requires a Dirac sea, showing that the Dirac equation is not merely a combination of special relativity and quantum mechanics, but also implies that the number of particles cannot be conserved. Dirac sea theory has been displaced by quantum field theory, though they are mathematically compatible. Origins Similar ideas on holes in crystals had been developed by Soviet physicist Yakov Frenkel in 1926, but there is no indication the concept was discussed with Dirac when the two met at a Soviet physics congress in the summer of 1928. The origins of the Dirac sea lie in the energy spectrum of the Dirac equation, an extension of the Schrödinger equation consistent with special relativity, an equation that Dirac had formulated in 1928. Although this equation was extremely successful in describing electron dynamics, it possesses a rather peculiar feature: for each quantum state possessing a positive energy E, there is a corresponding state with energy −E. This is not a big difficulty when an isolated electron is considered, because its energy is conserved and negative-energy electrons may be left out. However, difficulties arise when effects of the electromagnetic field are considered, because a positive-energy electron would be able to shed energy by continuously emitting photons, a process that could continue without limit as the electron descends into ever lower energy states. However, real electrons clearly do not behave in this way. Dirac's solution to this was to rely on the Pauli exclusion principle. Electrons are fermions, and obey the exclusion principle, which means that no two electrons can share a single energy state within an atom. Dirac hypothesized that what we think of as the "vacuum" is actually the state in which all the negative-energy states are filled, and none of the positive-energy states. Therefore, if we want to introduce a single electron, we would have to put it in a positive-energy state, as all the negative-energy states are occupied. Furthermore, even if the electron loses energy by emitting photons it would be forbidden from dropping below zero energy. Dirac further pointed out that a situation might exist in which all the negative-energy states are occupied except one. This "hole" in the sea of negative-energy electrons would respond to electric fields as though it were a positively charged particle. Initially, Dirac identified this hole as a proton. However, Robert Oppenheimer pointed out that an electron and its hole would be able to annihilate each other, releasing energy on the order of the electron's rest energy in the form of energetic photons; if holes were protons, stable atoms would not exist. Hermann Weyl also noted that a hole should act as though it has the same mass as an electron, whereas the proton is about two thousand times heavier.
The issue was finally resolved in 1932, when the positron was discovered by Carl Anderson, with all the physical properties predicted for the Dirac hole. Inelegance of Dirac sea Despite its success, the idea of the Dirac sea tends not to strike people as very elegant. The existence of the sea implies an infinite negative electric charge filling all of space. In order to make any sense out of this, one must assume that the "bare vacuum" has an infinite positive charge density which is exactly cancelled by the Dirac sea. Since the absolute energy density is unobservable—the cosmological constant aside—the infinite energy density of the vacuum does not represent a problem. Only changes in the energy density are observable. Geoffrey Landis also notes that Pauli exclusion does not definitively mean that a filled Dirac sea cannot accept more electrons, since, as Hilbert elucidated, a sea of infinite extent can accept new particles even if it is filled. This happens when there is a chiral anomaly and a gauge instanton. The development of quantum field theory (QFT) in the 1930s made it possible to reformulate the Dirac equation in a way that treats the positron as a "real" particle rather than the absence of a particle, and makes the vacuum the state in which no particles exist instead of an infinite sea of particles. This picture recaptures all the valid predictions of the Dirac sea, such as electron-positron annihilation. On the other hand, the field formulation does not eliminate all the difficulties raised by the Dirac sea; in particular, the problem of the vacuum possessing infinite energy. Mathematical expression Upon solving the free Dirac equation, one finds plane-wave solutions of the form $\psi(\mathbf{x},t) = N\,u(\mathbf{p})\,e^{i(\mathbf{p}\cdot\mathbf{x} - Et)/\hbar}$, where $E = \pm\sqrt{m^2c^4 + c^2\mathbf{p}^2}$ for plane wave solutions with 3-momentum $\mathbf{p}$. This is a direct consequence of the relativistic energy-momentum relation upon which the Dirac equation is built. The quantity $u(\mathbf{p})$ is a constant column vector and $N$ is a normalization constant. The quantity $e^{-iEt/\hbar}$ is called the time evolution factor, and its interpretation in similar roles in, for example, the plane wave solutions of the Schrödinger equation, is the energy of the wave (particle). This interpretation is not immediately available here, since $E$ may acquire negative values. A similar situation prevails for the Klein–Gordon equation. In that case, the absolute value of $E$ can be interpreted as the energy of the wave, since in the canonical formalism, waves with negative $E$ actually have positive energy $|E|$. But this is not the case with the Dirac equation. The energy in the canonical formalism associated with negative $E$ is itself negative, $-|E|$. Modern interpretation The Dirac sea interpretation and the modern QFT interpretation are related by what may be thought of as a very simple Bogoliubov transformation, an identification between the creation and annihilation operators of two different free field theories. In the modern interpretation, the field operator for a Dirac spinor is a sum of creation operators and annihilation operators; in a schematic notation that suppresses momentum and spin labels, $\psi \sim \sum_{\mathbf{k}} \big( a_{\mathbf{k}}\, e^{-i\omega_{\mathbf{k}} t} + b_{\mathbf{k}}^{\dagger}\, e^{+i\omega_{\mathbf{k}} t} \big)$ with $\omega_{\mathbf{k}} > 0$. An operator with negative frequency lowers the energy of any state by an amount proportional to the frequency, while operators with positive frequency raise the energy of any state. In the modern interpretation, the positive frequency operators add a positive energy particle, adding to the energy, while the negative frequency operators annihilate a positive energy particle, and lower the energy. 
For a fermionic field, the creation operator gives zero when the state with momentum k is already filled, while the annihilation operator gives zero when the state with momentum k is empty. But then it is possible to reinterpret the annihilation operator as a creation operator for a negative energy particle. It still lowers the energy of the vacuum, but from this point of view it does so by creating a negative energy object. This reinterpretation only affects the philosophy. To reproduce the rules for when annihilation in the vacuum gives zero, the notion of "empty" and "filled" must be reversed for the negative energy states. Instead of being states with no antiparticle, these are states that are already filled with a negative energy particle. The price is that there is a nonuniformity in certain expressions, because replacing annihilation with creation adds a constant to the negative energy particle number. The number operator for a Fermi field is $N = a^{\dagger}a$, which means that if one replaces N by 1−N for negative energy states, there is a constant shift in quantities like the energy and the charge density, quantities that count the total number of particles. The infinite constant gives the Dirac sea an infinite energy and charge density. The vacuum charge density should be zero, since the vacuum is Lorentz invariant, but this is artificial to arrange in Dirac's picture. The way it is done is by passing to the modern interpretation. Dirac's idea is more directly applicable to solid state physics, where the valence band in a solid can be regarded as a "sea" of electrons. Holes in this sea indeed occur, and are extremely important for understanding the effects of semiconductors, though they are never referred to as "positrons". Unlike in particle physics, there is an underlying positive charge—the charge of the ionic lattice—that cancels out the electric charge of the sea. Revival in the theory of causal fermion systems Dirac's original concept of a sea of particles was revived in the theory of causal fermion systems, a recent proposal for a unified physical theory. In this approach, the problems of the infinite vacuum energy and infinite charge density of the Dirac sea disappear because these divergences drop out of the physical equations formulated via the causal action principle. These equations do not require a preexisting space-time, making it possible to realize the concept that space-time and all structures therein arise as a result of the collective interaction of the sea states with each other and with the additional particles and "holes" in the sea. See also Fermi sea Positronium Vacuum polarization Virtual particle Remarks Notes References (Chapter 12 is dedicated to hole theory.) Quantum field theory Vacuum Sea
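The sign bookkeeping behind the hole reinterpretation can be made concrete. The following is a minimal sketch in illustrative single-mode notation (the operator labels are assumptions chosen for this example, not taken from the article):

% Hole reinterpretation as a Bogoliubov-type substitution (illustrative).
% For a negative-energy sea mode, trade the electron annihilator a for a
% positron creator b^\dagger (up to reversal of momentum and spin labels,
% suppressed here):
\[
  b^{\dagger} \equiv a, \qquad b \equiv a^{\dagger},
\]
% so, using the anticommutator \{a, a^\dagger\} = 1, the electron number
% operator of the mode becomes
\[
  N^{(e)} = a^{\dagger} a = b\, b^{\dagger} = 1 - b^{\dagger} b = 1 - N^{(p)}.
\]
% The leftover constant 1 per mode, summed over the infinitely many sea
% modes, is exactly the infinite energy and charge shift discussed above.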
Dirac sea
Physics
1,936
23,718,280
https://en.wikipedia.org/wiki/Variable%20data%20publishing
Variable-data publishing (VDP) (also known as database publishing) is a term referring to the output of a variable composition system. While these systems can produce both electronically viewable and hard-copy (print) output, the term "variable-data publishing" today often distinguishes output destined for electronic viewing from that destined for hard-copy print (e.g. variable data printing). Essentially the same techniques are employed to perform variable-data publishing as those utilized with variable data printing. The difference is in the interpretation for output. While variable-data printing may be interpreted to produce various print streams or page-description files (e.g. AFP/IPDS, PostScript, PCL), variable-data publishing produces electronically viewable files, most commonly seen in the forms of PDF, HTML, or XML. Variable-data composition involves the use of data to conditionally: exhibit text (static blocks and/or variable content) exhibit images select fonts select colors format page layouts & flows Variable data may be as simple as an address block or salutation. However, it can be any or all of the document's textual content—including words, sentences, paragraphs, pages, or the entire document. In other words, it can make up as little or as much of the document as the composer desires. Variable data may also be used to exhibit various images, such as logos, products, or membership photos. Further, variable data can be used to build rule-based design schemes, including fonts, colors, and page formats. The possibilities are vast. The variable-data tools available today make it possible to perform variable-data composition at nearly every stage of document production. However, the level of control that can be achieved varies, based upon how far into the document production process a variable-data tool is deployed. For example, if variable-data insertion occurs just prior to output, it is not likely that the text flow or layout can be altered with nearly as much control as would be available at the time of initial document composition. Many organizations will produce multiple forms of output (also known as multi-channel output) for the same document. This ensures that the published content is available to recipients via any form of access method they might require. When multi-channel output is utilized, integrity between those output channels often becomes important. Variable-data publishing may be performed on everything from a personal computer to a mainframe system. However, the speed and practical output volumes which can be achieved are directly affected by the computing power utilized. Origin of the concept The term variable-data publishing was likely an offshoot of the term "variable-data printing", first introduced to the printing industry by Frank Romano, Professor Emeritus, School of Print Media, at the College of Imaging Arts and Sciences at Rochester Institute of Technology. However, the concept of merging static document elements and variable document elements predates the term and has seen various implementations ranging from simple desktop "mail merge" to complex mainframe applications in the financial and banking industry. In the past, the term VDP has been most closely associated with digital printing machines. However, in the past three years the application of this technology has spread to web pages, emails, and mobile messaging. 
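To make the composition model concrete, here is a minimal, hypothetical sketch of data-driven composition in Python. The record fields, style rules, and template are invented for illustration; real VDP systems draw on databases and full layout engines rather than string templates:

# Minimal sketch of variable-data composition: one template, many records.
# Each record conditionally selects text, an image, and a colour scheme.
records = [
    {"name": "Ada", "segment": "premium", "photo": "ada.jpg"},
    {"name": "Ben", "segment": "basic",   "photo": "ben.jpg"},
]

# Rule-based design scheme: the data drives fonts/colours, not just text.
styles = {
    "premium": {"color": "#8a6d1d", "offer": "20% off our premium range"},
    "basic":   {"color": "#2a6d8a", "offer": "Welcome offer: free shipping"},
}

def compose(record: dict) -> str:
    """Merge one data record with the template into viewable HTML output."""
    style = styles[record["segment"]]
    return (
        f'<html><body style="color:{style["color"]}">'
        f'<img src="{record["photo"]}" alt="member photo">'
        f'<h1>Dear {record["name"]},</h1>'
        f'<p>{style["offer"]}</p>'
        "</body></html>"
    )

for rec in records:
    html = compose(rec)        # one personalised document per record
    print(html[:60], "...")    # a real system would emit PDF/HTML files

The same merge, interpreted by a print driver instead of an HTML writer, would be variable-data printing.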
See also Variable data printing Mass customization Dynamic publishing Print on demand Personalization Databases Data publishing Digital press Documents Publishing Reporting software Rochester Institute of Technology Workflow technology
Variable data publishing
Technology
709
4,038,861
https://en.wikipedia.org/wiki/Rubidium%20chloride
Rubidium chloride is the chemical compound with the formula RbCl. This alkali metal halide salt is composed of rubidium and chlorine, and finds diverse uses ranging from electrochemistry to molecular biology. Structure In its gas phase, RbCl is diatomic with a bond length estimated at 2.7868 Å. This distance increases to 3.285 Å for cubic RbCl, reflecting the higher coordination number of the ions in the solid phase. Depending on conditions, solid RbCl exists in one of three arrangements or polymorphs as determined with holographic imaging: Sodium chloride (octahedral 6:6) The sodium chloride (NaCl) polymorph is most common. A cubic close-packed arrangement of chloride anions with rubidium cations filling the octahedral holes describes this polymorph. Both ions are six-coordinate in this arrangement. The lattice energy of this polymorph is only 3.2 kJ/mol less than the following structure's. Caesium chloride (cubic 8:8) At high temperature and pressure, RbCl adopts the caesium chloride (CsCl) structure (NaCl and KCl undergo the same structural change at high pressures). Here, the chloride ions form a simple cubic arrangement with chloride anions occupying the vertices of a cube surrounding a central Rb+. This is RbCl's densest packing motif. Because a cube has eight vertices, both ions' coordination numbers equal eight. This is RbCl's highest possible coordination number. Therefore, according to the radius ratio rule, cations in this polymorph will reach their largest apparent radius because the anion-cation distances are greatest. Sphalerite (tetrahedral 4:4) The sphalerite polymorph of rubidium chloride has not been observed experimentally. This is consistent with theory; the lattice energy is predicted to be nearly 40.0 kJ/mol smaller in magnitude than those of the preceding structures. Synthesis and reaction The most common preparation of pure rubidium chloride involves the reaction of its hydroxide with hydrochloric acid, followed by recrystallization: RbOH + HCl → RbCl + H2O Because RbCl is hygroscopic, it must be protected from atmospheric moisture, e.g. using a desiccator. RbCl is primarily used in laboratories. Therefore, numerous suppliers (see below) produce it in smaller quantities as needed. It is offered in a variety of forms for chemical and biomedical research. Rubidium chloride reacts with sulfuric acid to give rubidium hydrogen sulfate. Radioactivity Every 18 mg of rubidium chloride is equivalent to approximately one banana equivalent dose, due to the large fraction (27.8%) of the naturally occurring radioactive isotope rubidium-87. Uses Rubidium chloride is used as a gasoline additive to improve its octane number. Rubidium chloride has been shown to modify coupling between circadian oscillators via reduced photic input to the suprachiasmatic nuclei. The outcome is a more equalized circadian rhythm, even for stressed organisms. Rubidium chloride is an excellent non-invasive biomarker. The compound dissolves well in water and can readily be taken up by organisms. Once dissolved in the body, Rb+ replaces K+ in tissues, because they are from the same chemical group. An example of this is the use of a radioactive isotope to evaluate the perfusion of heart muscle. Rubidium chloride transformation of competent cells is arguably the compound's most widespread use. Cells treated with a hypotonic solution containing RbCl expand. As a result, the expulsion of membrane proteins allows negatively charged DNA to bind. 
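The banana-equivalent-dose figure quoted in the Radioactivity section can be sanity-checked from first principles. A small Python sketch follows; the isotope data are standard tabulated values, and the comparison activity for a banana is an assumed round number rather than a figure from this article:

import math

# Activity of the Rb-87 contained in 18 mg of RbCl.
molar_mass_rbcl = 120.92          # g/mol (Rb ~85.47 + Cl ~35.45)
mass = 0.018                      # g
avogadro = 6.022e23               # 1/mol
abundance_rb87 = 0.278            # natural fraction of Rb-87
half_life_s = 4.97e10 * 3.156e7   # ~4.97e10 years, in seconds

n_rb87 = mass / molar_mass_rbcl * avogadro * abundance_rb87
activity_bq = math.log(2) / half_life_s * n_rb87   # A = lambda * N

print(f"Rb-87 atoms: {n_rb87:.2e}")
print(f"Activity: {activity_bq:.1f} Bq")   # ~11 Bq
# A typical banana carries on the order of 15 Bq of K-40, so 18 mg of RbCl
# is indeed in the neighbourhood of one banana equivalent dose.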
Rubidium chloride has shown antidepressant effects in experimental human studies, in doses ranging from 180 to 720 mg. It purportedly works by elevating dopamine and norepinephrine levels, resulting in a stimulating effect, which would be useful for anergic and apathetic depression. References Rubidium compounds Chlorides Metal halides Antidepressants Stimulants Alkali metal chlorides Rock salt crystal structure
Rubidium chloride
Chemistry
857
28,201,409
https://en.wikipedia.org/wiki/Transforming%20growth%20interacting%20factor
Transforming growth interacting factor (TGIF) is a potential repressor of TGF-β pathways in myometrial cells. Expression of TGIF is increased in uterine leiomyoma compared with myometrium. References TGFβ domain
Transforming growth interacting factor
Chemistry
55
72,912,618
https://en.wikipedia.org/wiki/HD%2021693
HD 21693 is a star in the constellation Reticulum. It has an apparent visual magnitude of 7.94, so it is not visible to the naked eye. From its parallax measured by the Gaia spacecraft, it is located at a distance of 108.6 light-years (33.3 parsecs) from Earth. This is a G-type star with a spectral type of G9IV-V, with features intermediate between main sequence and subgiant. In 2011, the discovery of two Neptune-mass exoplanets around HD 21693 was announced. Star This star is classified with a spectral type of G9IV-V, indicating it is a slightly evolved star that is between the main sequence and the subgiant branch. Stellar evolution models suggest that it is right at the end of the main sequence, on the hook before the subgiant turnoff, with a mass of and an age of around 7 billion years, although with a high uncertainty of plus or minus 4 billion years. From its Gaia-measured distance and brightness, it is calculated to have a radius of and a luminosity of . Its effective temperature is 5,430 K, and its metallicity, the proportion of elements heavier than helium, is approximately equal to that of the Sun. HD 21693 exhibits a stellar activity cycle with a period of 10 years, similar to the solar cycle, evidenced by long-term variations in various spectral activity indicators. Its chromospheric activity index varies between −5.02 and −4.83 during the cycle, an amplitude similar to that of the Sun's magnetic cycle. This index also shows a weaker variation with a period of 33.5 days, which may correspond to the star's rotation period. The activity cycle also affects the radial velocity of the star, which had to be taken into account when creating the orbital solution of the planets in the system. HD 21693 has no known companion stars. One observation by the NACO instrument at the Very Large Telescope failed to detect other stars in the system, with a detection limit of at 0.5 arcseconds (16.7 AU). Planetary system In 2011 the discovery of two exoplanets orbiting HD 21693 was announced, detected by the radial velocity method using observations taken by the HARPS spectrograph at the La Silla Observatory. The detailed analysis of the discovery was only published in 2019. The HARPS instrument made 210 measurements of the star's radial velocity between 2003 and 2015, revealing two periodic signals caused by the gravitational influence of orbiting planets, plus a 10-year signal caused by the star's activity cycle. The planetary signals have no equivalent in the star's spectral activity indicators, which confirms their planetary nature. The radial velocity residuals, after removing all periodic signals, still show higher variability than expected, which may be caused by strong granulation on the star's surface. The inner planet, HD 21693 b, has a minimum mass of and is in the transition regime between super-Earths and Neptune-mass planets. Since the radial velocity method used in its discovery cannot determine the inclination of its orbit, the planet's true mass cannot be determined, although the true mass is usually close to the minimum value. This planet orbits the star at a distance of 0.15 AU with a period of 22.7 days. The outer planet, HD 21693 c, has a minimum mass of , similar to the mass of Neptune. It is located at a distance of 0.26 AU from the star and has an orbital period of 53.7 days. The planets in the system have a period ratio of 2.37, which is close to a 5:2 commensurability. 
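The near-commensurability is easy to check from the quoted periods; the short calculation below is illustrative and not from the source:

# Period ratio of HD 21693 c and b, compared with the 5:2 commensurability.
p_inner = 22.7   # days, HD 21693 b
p_outer = 53.7   # days, HD 21693 c

ratio = p_outer / p_inner
print(f"P_c / P_b = {ratio:.3f}")                         # ~2.366
print(f"5:2 = {5/2:.3f}, offset = {abs(ratio - 2.5)/2.5:.1%}")
# The ratio sits about 5% inside the exact 5:2 value, consistent with a
# resonance that was established and later broken, as described below.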
In one possible formation scenario, they experienced convergent migration shortly after their formation, which trapped them in a 5:2 resonance, but this resonance was lost shortly after the dissipation of the protoplanetary disk. See also Stars with planets discovered in the same paper: HD 20003, HD 20781, HD 31527, HD 45184, HD 51608, HD 134060, HD 136352 References G-type main-sequence stars G-type subgiants Reticulum Durchmusterung objects 021693 016085 Planetary systems with two confirmed planets
HD 21693
Astronomy
898
11,439,482
https://en.wikipedia.org/wiki/Sphaerulina%20rehmiana
Sphaerulina rehmiana is a fungal plant pathogen infecting roses. References Fungal plant pathogens and diseases Rose diseases rehmiana Fungi described in 1910 Fungus species
Sphaerulina rehmiana
Biology
37
27,178,529
https://en.wikipedia.org/wiki/Sleeping%20positions
The sleeping position is the body configuration assumed by a person during or prior to sleeping. It has been shown to have health implications, particularly for babies. Sleeping preferences A Canadian survey found that 39% of respondents preferred the "log" position (lying on one's side with the arms down the side) and 28% preferred to sleep on their side with their legs bent. A Travelodge survey found that 50% of heterosexual British couples prefer sleeping back-to-back, either not touching (27%) or touching (23%). Spooning was next, with the man on the outside 20% of the time vs. 8% with the woman on the outside. 10% favoured the "lovers' knot" (facing each other with legs intertwined), though all but 2% separated before going to sleep. The "Hollywood pose" of the woman with her head and arm on the man's chest was chosen by 4%. Health issues Sleep position in babies In the 1958 edition of his best-selling book The Common Sense Book of Baby and Child Care, paediatrician Dr Benjamin Spock warned against placing a baby on its back, writing, "if [an infant] vomits, he's more likely to choke on the vomitus." However, later studies have shown that placing a young baby in a face-down prone position increases the risk of sudden infant death syndrome (SIDS). A 2005 study concluded that "systematic review of preventable risk factors for SIDS from 1970 would have led to earlier recognition of the risks of sleeping on the front and might have prevented over 10,000 infant deaths in the UK and at least 50,000 in Europe, the USA, and Australasia." Glymphatic system clearance and sleeping position The brain parenchyma rids itself of harmful proteins through the glymphatic system, especially during sleep. Sleep position and snoring Snoring, which may be (but is not necessarily) an indicator of obstructive sleep apnea, may also be alleviated by sleeping on one's side. Sleep position and gastroesophageal reflux The right lateral sleeping position results in much more reflux during the night than the left lateral and prone positions. Sleep position and sleep paralysis Sleeping in the supine position has been linked to an increased occurrence of sleep paralysis. See also Lying (position) Human positions Sex positions References Sleep Human positions
Sleeping positions
Biology
505
18,104,627
https://en.wikipedia.org/wiki/Ergodic%20process
In physics, statistics, econometrics and signal processing, a stochastic process is said to be in an ergodic regime if an observable's ensemble average equals the time average. In this regime, any collection of random samples from a process must represent the average statistical properties of the entire regime. Conversely, a regime of a process that is not ergodic is said to be in a non-ergodic regime. A regime implies a time window of a process over which the ergodicity measure is applied. Specific definitions One can discuss the ergodicity of various statistics of a stochastic process. For example, a wide-sense stationary process $X(t)$ has a constant mean $\mu_X = E[X(t)]$ and an autocovariance $r_X(\tau) = E[(X(t)-\mu_X)(X(t+\tau)-\mu_X)]$ that depends only on the lag $\tau$ and not on time $t$. The properties $\mu_X$ and $r_X(\tau)$ are ensemble averages (calculated over all possible sample functions $X$), not time averages. The process $X(t)$ is said to be mean-ergodic or mean-square ergodic in the first moment if the time-average estimate $\hat{\mu}_X = \frac{1}{T}\int_0^T X(t)\,dt$ converges in squared mean to the ensemble average $\mu_X$ as $T \to \infty$. Likewise, the process is said to be autocovariance-ergodic or ergodic in the second moment if the time-average estimate $\hat{r}_X(\tau) = \frac{1}{T}\int_0^T [X(t+\tau)-\mu_X][X(t)-\mu_X]\,dt$ converges in squared mean to the ensemble average $r_X(\tau)$, as $T \to \infty$. A process which is ergodic in the mean and autocovariance is sometimes called ergodic in the wide sense. Discrete-time random processes The notion of ergodicity also applies to discrete-time random processes $X[n]$ for integer $n$. A discrete-time random process $X[n]$ is ergodic in mean if $\hat{\mu}_X = \frac{1}{N}\sum_{n=1}^{N} X[n]$ converges in squared mean to the ensemble average $E[X]$, as $N \to \infty$. Examples Ergodicity means the ensemble average equals the time average. Following are examples to illustrate this principle. Call centre Each operator in a call centre spends time alternately speaking and listening on the telephone, as well as taking breaks between calls. Each break and each call are of different length, as are the durations of each 'burst' of speaking and listening, and indeed so is the rapidity of speech at any given moment, which could each be modelled as a random process. Take N call centre operators (N should be a very large integer) and plot the number of words spoken per minute for each operator over a long period (several shifts). For each operator you will have a series of points, which could be joined with lines to create a 'waveform'. Calculate the average value of those points in the waveform; this gives you the time average. There are N waveforms and N operators. These N waveforms are known as an ensemble. Now take a particular instant of time in all those waveforms and find the average value of the number of words spoken per minute. That gives you the ensemble average for that instant. If the ensemble average always equals the time average, then the system is ergodic. Electronics Each resistor has an associated thermal noise that depends on the temperature. Take N resistors (N should be very large) and plot the voltage across those resistors for a long period. For each resistor you will have a waveform. Calculate the average value of that waveform; this gives you the time average. There are N waveforms as there are N resistors. These N plots are known as an ensemble. Now take a particular instant of time in all those plots and find the average value of the voltage. That gives you the ensemble average for that instant. If the ensemble average and the time average are the same, then it is ergodic. Examples of non-ergodic random processes An unbiased random walk is non-ergodic. Its expectation value is zero at all times, whereas its time average is a random variable with divergent variance. 
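The random-walk claim can be seen numerically. A small sketch with arbitrary parameters:

import random

def time_average_of_walk(steps: int) -> float:
    """Time average of one unbiased random-walk sample path."""
    pos, total = 0, 0
    for _ in range(steps):
        pos += random.choice((-1, 1))
        total += pos
    return total / steps

random.seed(0)
averages = [time_average_of_walk(10_000) for _ in range(8)]
print([round(a, 1) for a in averages])
# The ensemble average is 0 at every time, but the time averages above
# scatter widely from path to path and do not settle near 0: the walk's
# time average is itself a random variable, so the process is non-ergodic.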
Suppose that we have two coins: one coin is fair and the other has two heads. We choose (at random) one of the coins first, and then perform a sequence of independent tosses of our selected coin. Let X[n] denote the outcome of the nth toss, with 1 for heads and 0 for tails. Then the ensemble average is $E[X[n]] = \tfrac{1}{2}(\tfrac{1}{2} + 1) = \tfrac{3}{4}$; yet the long-term time average is $\tfrac{1}{2}$ for the fair coin and 1 for the two-headed coin. So the long-term time average is either 1/2 or 1. Hence, this random process is not ergodic in mean. See also Ergodic hypothesis Ergodicity Ergodic theory, a branch of mathematics concerned with a more general formulation of ergodicity Loschmidt's paradox Poincaré recurrence theorem Notes References Ergodic theory Signal processing
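A quick simulation of the two-coin example above (illustrative only; the run counts are arbitrary):

import random

def run(n_tosses: int) -> float:
    """Pick a coin at random, toss it n times, return the time average."""
    p_heads = random.choice((0.5, 1.0))   # fair coin or two-headed coin
    tosses = [1 if random.random() < p_heads else 0 for _ in range(n_tosses)]
    return sum(tosses) / n_tosses

random.seed(1)
time_averages = [run(100_000) for _ in range(10)]
print([round(t, 2) for t in time_averages])     # each is ~0.5 or ~1.0
print(sum(time_averages) / len(time_averages))  # averaging over runs -> ~0.75
# The ensemble average 3/4 is only recovered by averaging over many runs,
# never by lengthening a single realisation: ergodicity in mean fails.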
Ergodic process
Mathematics,Technology,Engineering
929
54,405,236
https://en.wikipedia.org/wiki/Miro%20Analytical
MIRO Analytical is a Swiss manufacturer of laser-based gas analyzers and isotope analyzers. The company is based in Zurich, Switzerland, and was founded in 2018. History MIRO Analytical is a spin-off of Empa, a Swiss research institute of the ETH domain. It has know-how in laser spectroscopy and, in particular, in the combination of several quantum-cascade lasers (QCLs) into compact laser-based gas analyzers. The company's first instrument, introduced in 2018, was the MGA-9, a nine-gas analyzer. By 2019 the MGA-10, a ten-gas analyzer that measures greenhouse gases and air pollutants, was introduced. Technology The gas analyzers directly measure concentrations of multiple gas species using mid-infrared laser absorption spectroscopy with QCLs as light sources. This allows for highly specific and accurate gas detection along with maximum measurement sensitivity. See also Infrared spectroscopy References Companies based in Zurich Spectroscopy Technology companies of Switzerland
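The principle behind laser absorption spectroscopy can be sketched with the Beer–Lambert law: concentration follows from how much laser light the gas absorbs. All numbers in the example below are invented for illustration and are not MIRO specifications:

import math

sigma = 3.0e-19       # absorption cross-section at the laser line, cm^2
path_cm = 2000.0      # effective optical path length, cm
i0, i = 1.00, 0.925   # laser intensity without / with the absorbing gas

# Beer-Lambert law: I = I0 * exp(-sigma * n * L)  =>  n = ln(I0/I)/(sigma*L)
n = math.log(i0 / i) / (sigma * path_cm)      # molecules per cm^3
n_air = 2.5e19                                # air number density at ~1 atm
print(f"number density: {n:.2e} cm^-3  ->  {n / n_air * 1e6:.1f} ppm")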
Miro Analytical
Physics,Chemistry
189
6,991,065
https://en.wikipedia.org/wiki/Ammonium%20sulfamate
Ammonium sulfamate (or ammonium sulphamate) is a white crystalline solid, readily soluble in water. It is commonly used as a broad-spectrum herbicide, with additional uses as a compost accelerator, flame retardant and in industrial processes. Manufacture and distribution It is a salt formed from ammonia and sulfamic acid. Ammonium sulfamate is distributed under the following tradenames, which are principally herbicidal product names: Amicide, Amidosulfate, Ammate, Amcide, Ammate X-NI, AMS, Fyran 206k, Ikurin, Sulfamate, AMS and Root-Out. Uses Herbicide Ammonium sulfamate is considered to be particularly useful in controlling tough woody weeds, tree stumps and brambles. Ammonium sulfamate has been successfully used in several major UK projects by organisations like the British Trust for Conservation Volunteers, English Heritage, the National Trust, and various railway, canal and waterways authorities. Several years ago the Henry Doubleday Research Association (HDRA), now known as Garden Organic, published an article on ammonium sulfamate after a successful set of herbicide trials. Though not approved for use by organic growers, it does provide an option when alternatives have failed. The following problem weeds/plants can be controlled: Japanese Knotweed (Reynoutria japonica, syn. Fallopia japonica), Marestail/Horsetail (Equisetum), Ground-elder (Aegopodium podagraria), Rhododendron ponticum, brambles, brushwood, ivy (Hedera species), Senecio/ragwort, honey fungus (Armillaria), and felled tree stumps and most other tough woody specimens. Compost accelerator Ammonium sulfamate is used as a composting accelerator in horticultural settings. It is especially effective in breaking down the tougher and woodier weeds put onto the compost heap. Flame retardant Ammonium sulfamate (like other ammonium salts, e.g. ammonium dihydrogen phosphate and ammonium sulfate) is a useful flame retardant. These salt-based flame retardants offer advantages over other metal/mineral-based flame retardants in that they are water-processable. Their relatively low decomposition temperature makes them suitable for flame-retarding cellulose-based materials (paper/wood). Ammonium sulfamate (like ammonium dihydrogen phosphate) is sometimes used in conjunction with magnesium sulfate or ammonium sulfate (in ratios of approximately 2:1) for enhanced flame-retardant properties. Other uses Within industry ammonium sulfamate is used as a flame retardant and a plasticiser, and in electroplating. Within the laboratory it is used as a reagent. Safety Ammonium sulfamate is considered to be only slightly toxic to humans and other animals, making it appropriate for amateur home-garden, professional and forestry uses. It is generally accepted to be safe for use on plots of land that will be used for growing fruit and vegetables intended for consumption. It corrodes brass, copper, and iron. Its contact with eyes or skin can be harmful unless it is quickly washed off. In the United States, the Occupational Safety and Health Administration has set a permissible exposure limit at 15 mg/m3 over an eight-hour time-weighted average, while the National Institute for Occupational Safety and Health recommends exposures no greater than 10 mg/m3 over an eight-hour time-weighted average. These occupational exposure limits are protective values, given that the IDLH concentration is set at 1500 mg/m3. It is also considered to be environmentally friendly due to its degradation to non-harmful residues. 
European Union licensing The pesticides review by the European Union led to herbicides containing ammonium sulfamate becoming unlicensed, and therefore effectively banned, from 2008. Its availability and use as a compost accelerator is unaffected by the EU's pesticide legislation. See also Sulfamide References Herbicides Ammonium compounds Sulfamates
Ammonium sulfamate
Chemistry,Biology
846
11,459,990
https://en.wikipedia.org/wiki/Alternaria%20senecionis
Alternaria senecionis is a fungal plant pathogen that can cause leaf spot on Cineraria species, such as Senecio cruentus, in Denmark. References senecionis Fungal plant pathogens and diseases Fungi described in 1946 Fungus species
Alternaria senecionis
Biology
54
1,677,334
https://en.wikipedia.org/wiki/Rate%20equation
In chemistry, the rate equation (also known as the rate law or empirical differential rate equation) is an empirical differential mathematical expression for the reaction rate of a given reaction in terms of concentrations of chemical species and constant parameters (normally rate coefficients and partial orders of reaction) only. For many reactions, the initial rate is given by a power law such as $r = k[\mathrm{A}]^x[\mathrm{B}]^y$, where $[\mathrm{A}]$ and $[\mathrm{B}]$ are the molar concentrations of the species A and B, usually in moles per liter (molarity, M). The exponents $x$ and $y$ are the partial orders of reaction for A and B, and the overall reaction order is the sum of the exponents, $x + y$. These are often positive integers, but they may also be zero, fractional, or negative. The order of reaction is a number which quantifies the degree to which the rate of a chemical reaction depends on concentrations of the reactants. In other words, the order of reaction is the exponent to which the concentration of a particular reactant is raised. The constant $k$ is the reaction rate constant or rate coefficient, and in a very few places the velocity constant or specific rate of reaction. Its value may depend on conditions such as temperature, ionic strength, surface area of an adsorbent, or light irradiation. If the reaction goes to completion, the rate equation for the reaction rate applies throughout the course of the reaction. Elementary (single-step) reactions and reaction steps have reaction orders equal to the stoichiometric coefficients for each reactant. The overall reaction order, i.e. the sum of stoichiometric coefficients of reactants, is always equal to the molecularity of the elementary reaction. However, complex (multi-step) reactions may or may not have reaction orders equal to their stoichiometric coefficients. This implies that the order and the rate equation of a given reaction cannot be reliably deduced from the stoichiometry and must be determined experimentally, since an unknown reaction mechanism could be either elementary or complex. When the experimental rate equation has been determined, it is often of use for deduction of the reaction mechanism. The rate equation of a reaction with an assumed multi-step mechanism can often be derived theoretically using quasi-steady-state assumptions from the underlying elementary reactions, and compared with the experimental rate equation as a test of the assumed mechanism. The equation may involve a fractional order, and may depend on the concentration of an intermediate species. A reaction can also have an undefined reaction order with respect to a reactant if the rate is not simply proportional to some power of the concentration of that reactant; for example, one cannot talk about reaction order in the rate equation for a bimolecular reaction between adsorbed molecules, whose rate law takes a Langmuir–Hinshelwood form rather than a simple power law. Definition Consider a typical chemical reaction in which two reactants A and B combine to form a product C: $\mathrm{A} + 2\,\mathrm{B} \longrightarrow 3\,\mathrm{C}$. This can also be written $0 = -\mathrm{A} - 2\,\mathrm{B} + 3\,\mathrm{C}$. The prefactors −1, −2 and 3 (with negative signs for reactants because they are consumed) are known as stoichiometric coefficients. One molecule of A combines with two of B to form 3 of C, so if we use the symbol [X] for the molar concentration of chemical X, $-\frac{d[\mathrm{A}]}{dt} = -\frac{1}{2}\frac{d[\mathrm{B}]}{dt} = \frac{1}{3}\frac{d[\mathrm{C}]}{dt}$. If the reaction takes place in a closed system at constant temperature and volume, without a build-up of reaction intermediates, the reaction rate is defined as $r = \frac{1}{\nu_i}\frac{d[X_i]}{dt}$, where $\nu_i$ is the stoichiometric coefficient for chemical $X_i$, with a negative sign for a reactant. 
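As a quick numerical illustration of this definition (the concentration changes below are synthetic values invented for the example):

# For A + 2B -> 3C, the uniquely defined rate r = (1/nu_i) d[X_i]/dt must
# agree whichever species is monitored. Synthetic data over a 1 s interval:
d_t = 1.0                                           # s
d_conc = {"A": -0.010, "B": -0.020, "C": +0.030}    # mol/L changes
nu = {"A": -1, "B": -2, "C": 3}                     # stoichiometric coefficients

rates = {sp: d_conc[sp] / nu[sp] / d_t for sp in d_conc}
print(rates)   # {'A': 0.01, 'B': 0.01, 'C': 0.01} -- one rate, three probes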
The initial reaction rate has some functional dependence on the concentrations of the reactants, and this dependence is known as the rate equation or rate law. This law generally cannot be deduced from the chemical equation and must be determined by experiment. Power laws A common form for the rate equation is a power law: $r = k[\mathrm{A}]^x[\mathrm{B}]^y\cdots$ The constant $k$ is called the rate constant. The exponents, which can be fractional, are called partial orders of reaction and their sum is the overall order of reaction. In a dilute solution, an elementary reaction (one having a single step with a single transition state) is empirically found to obey the law of mass action. This predicts that the rate depends only on the concentrations of the reactants, raised to the powers of their stoichiometric coefficients. The differential rate equation for an elementary reaction using mathematical product notation is: $-\frac{d[\mathrm{A}]}{dt} = k\prod_i [X_i]^{m_i}$ where: $-\frac{d[\mathrm{A}]}{dt}$ is the rate of change of reactant concentration with respect to time, $k$ is the rate constant of the reaction, and $\prod_i [X_i]^{m_i}$ represents the concentrations of the reactants, raised to the powers $m_i$ of their stoichiometric coefficients and multiplied together. Determination of reaction order Method of initial rates The natural logarithm of the power-law rate equation is $\ln r = \ln k + x\ln[\mathrm{A}] + y\ln[\mathrm{B}] + \cdots$ This can be used to estimate the order of reaction of each reactant. For example, the initial rate can be measured in a series of experiments at different initial concentrations of reactant A with all other concentrations kept constant, so that $\ln r = x\ln[\mathrm{A}] + \text{constant}$. The slope of a graph of $\ln r$ as a function of $\ln[\mathrm{A}]$ then corresponds to the order $x$ with respect to reactant A. However, this method is not always reliable because measurement of the initial rate requires accurate determination of small changes in concentration in short times (compared to the reaction half-life) and is sensitive to errors, and the rate equation will not be completely determined if the rate also depends on substances not present at the beginning of the reaction, such as intermediates or products. Integral method The tentative rate equation determined by the method of initial rates is therefore normally verified by comparing the concentrations measured over a longer time (several half-lives) with the integrated form of the rate equation; this assumes that the reaction goes to completion. For example, the integrated rate law for a first-order reaction is $\ln[\mathrm{A}] = \ln[\mathrm{A}]_0 - kt$, where $[\mathrm{A}]$ is the concentration at time $t$ and $[\mathrm{A}]_0$ is the initial concentration at zero time. The first-order rate law is confirmed if $\ln[\mathrm{A}]$ is in fact a linear function of time. In this case the rate constant $k$ is equal to the slope with sign reversed. Method of flooding The partial order with respect to a given reactant can be evaluated by the method of flooding (or of isolation) of Ostwald. In this method, the concentration of one reactant is measured with all other reactants in large excess so that their concentration remains essentially constant. For a reaction with rate law $r = k[\mathrm{A}]^x[\mathrm{B}]^y$, the partial order $x$ with respect to A is determined using a large excess of B. In this case $r = k'[\mathrm{A}]^x$ with $k' = k[\mathrm{B}]^y$, and $x$ may be determined by the integral method. The order $y$ with respect to B under the same conditions (with B in excess) is determined by a series of similar experiments with a range of initial concentration $[\mathrm{B}]_0$ so that the variation of $k'$ can be measured. Zero order For zero-order reactions, the reaction rate is independent of the concentration of a reactant, so that changing its concentration has no effect on the rate of the reaction. Thus, the concentration changes linearly with time. 
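This linear decrease can be written out explicitly. For reference, a sketch assuming the standard mass-action forms (these are textbook results, not expressions recovered from this article):

% Standard zero-order forms (reference sketch).
\[
  r = k, \qquad
  [\mathrm{A}]_t = [\mathrm{A}]_0 - kt
  \quad\text{for } t \le [\mathrm{A}]_0/k,
\]
% so the concentration falls linearly until the reactant is exhausted,
% with half-life
\[
  t_{1/2} = \frac{[\mathrm{A}]_0}{2k}.
\]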
The rate law for zero order reaction is The unit of k is mol dm−3 s−1. This may occur when there is a bottleneck which limits the number of reactant molecules that can react at the same time, for example if the reaction requires contact with an enzyme or a catalytic surface. Many enzyme-catalyzed reactions are zero order, provided that the reactant concentration is much greater than the enzyme concentration which controls the rate, so that the enzyme is saturated. For example, the biological oxidation of ethanol to acetaldehyde by the enzyme liver alcohol dehydrogenase (LADH) is zero order in ethanol. Similarly reactions with heterogeneous catalysis can be zero order if the catalytic surface is saturated. For example, the decomposition of phosphine () on a hot tungsten surface at high pressure is zero order in phosphine, which decomposes at a constant rate. In homogeneous catalysis zero order behavior can come about from reversible inhibition. For example, ring-opening metathesis polymerization using third-generation Grubbs catalyst exhibits zero order behavior in catalyst due to the reversible inhibition that occurs between pyridine and the ruthenium center. First order A first order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but their concentration has no effect on the rate. The rate law for a first order reaction is The unit of k is s−1. Although not affecting the above math, the majority of first order reactions proceed via intermolecular collisions. Such collisions, which contribute the energy to the reactant, are necessarily second order. However according to the Lindemann mechanism the reaction consists of two steps: the bimolecular collision which is second order and the reaction of the energized molecule which is unimolecular and first order. The rate of the overall reaction depends on the slowest step, so the overall reaction will be first order when the reaction of the energized reactant is slower than the collision step. The half-life is independent of the starting concentration and is given by . The mean lifetime is τ = 1/k. Examples of such reactions are: 2N2O5 -> 4NO2 + O2 [CoCl(NH3)5]^2+ + H2O -> [Co(H2O)(NH3)5]^3+ + Cl- H2O2 -> H2O + 1/2O2 In organic chemistry, the class of SN1 (nucleophilic substitution unimolecular) reactions consists of first-order reactions. For example, in the reaction of aryldiazonium ions with nucleophiles in aqueous solution, , the rate equation is where Ar indicates an aryl group. Second order A reaction is said to be second order when the overall order is two. The rate of a second-order reaction may be proportional to one concentration squared, or (more commonly) to the product of two concentrations, As an example of the first type, the reaction is second-order in the reactant and zero order in the reactant CO. The observed rate is given by and is independent of the concentration of CO. For the rate proportional to a single concentration squared, the time dependence of the concentration is given by The unit of k is mol−1 dm3 s−1. The time dependence for a rate proportional to two unequal concentrations is if the concentrations are equal, they satisfy the previous equation. 
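The integrated forms just discussed give a practical way to tell first- and second-order data apart, as in the integral method above. A sketch with synthetic data (all numbers invented for illustration):

import math

# Synthetic first-order decay [A](t) = [A]0 * exp(-k t), with k = 0.30 s^-1.
k_true, a0 = 0.30, 1.0
times = [i * 0.5 for i in range(12)]
conc = [a0 * math.exp(-k_true * t) for t in times]

def slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# First-order test: ln[A] vs t should be linear with slope -k.
print("first-order k =", round(-slope(times, [math.log(c) for c in conc]), 3))
# Second-order test: 1/[A] vs t would be linear only for second-order data;
# here it curves upward, so the first-order description is the right one.
print("1/[A] values:", [round(1 / c, 2) for c in conc[:4]])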
The second type includes nucleophilic addition-elimination reactions, such as the alkaline hydrolysis of ethyl acetate: CH3COOC2H5 + OH- -> CH3COO- + C2H5OH This reaction is first-order in each reactant and second-order overall: If the same hydrolysis reaction is catalyzed by imidazole, the rate equation becomes The rate is first-order in one reactant (ethyl acetate), and also first-order in imidazole, which as a catalyst does not appear in the overall chemical equation. Another well-known class of second-order reactions are the SN2 (bimolecular nucleophilic substitution) reactions, such as the reaction of n-butyl bromide with sodium iodide in acetone: CH3CH2CH2CH2Br + NaI -> CH3CH2CH2CH2I + NaBr(v) This same compound can be made to undergo a bimolecular (E2) elimination reaction, another common type of second-order reaction, if the sodium iodide and acetone are replaced with sodium tert-butoxide as the salt and tert-butanol as the solvent: CH3CH2CH2CH2Br + NaO\mathit{t}-Bu -> CH3CH2CH=CH2 + NaBr + HO\mathit{t}-Bu Pseudo-first order If the concentration of a reactant remains constant (because it is a catalyst, or because it is in great excess with respect to the other reactants), its concentration can be included in the rate constant, leading to a pseudo–first-order (or occasionally pseudo–second-order) rate equation. For a typical second-order reaction with rate equation if the concentration of reactant B is constant then where the pseudo–first-order rate constant The second-order rate equation has been reduced to a pseudo–first-order rate equation, which makes the treatment to obtain an integrated rate equation much easier. One way to obtain a pseudo-first order reaction is to use a large excess of one reactant (say, [B]≫[A]) so that, as the reaction progresses, only a small fraction of the reactant in excess (B) is consumed, and its concentration can be considered to stay constant. For example, the hydrolysis of esters by dilute mineral acids follows pseudo-first order kinetics, where the concentration of water is constant because it is present in large excess: CH3COOCH3 + H2O -> CH3COOH + CH3OH The hydrolysis of sucrose () in acid solution is often cited as a first-order reaction with rate The true rate equation is third-order, however, the concentrations of both the catalyst and the solvent are normally constant, so that the reaction is pseudo–first-order. Summary for reaction orders 0, 1, 2, and n Elementary reaction steps with order 3 (called ternary reactions) are rare and unlikely to occur. However, overall reactions composed of several elementary steps can, of course, be of any (including non-integer) order. Here stands for concentration in molarity (mol · L−1), for time, and for the reaction rate constant. The half-life of a first-order reaction is often expressed as t1/2 = 0.693/k (as ln(2)≈0.693). Fractional order In fractional order reactions, the order is a non-integer, which often indicates a chemical chain reaction or other complex reaction mechanism. For example, the pyrolysis of acetaldehyde () into methane and carbon monoxide proceeds with an order of 1.5 with respect to acetaldehyde: The decomposition of phosgene () to carbon monoxide and chlorine has order 1 with respect to phosgene itself and order 0.5 with respect to chlorine: The order of a chain reaction can be rationalized using the steady state approximation for the concentration of reactive intermediates such as free radicals. 
For the pyrolysis of acetaldehyde, the Rice-Herzfeld mechanism is Initiation CH3CHO -> .CH3 + .CHO Propagation .CH3 + CH3CHO -> CH3CO. + CH4 CH3CO. -> .CH3 + CO Termination 2 .CH3 -> C2H6 where • denotes a free radical. To simplify the theory, the reactions of the to form a second are ignored. In the steady state, the rates of formation and destruction of methyl radicals are equal, so that so that the concentration of methyl radical satisfies [.CH3] \quad\propto \quad[CH3CHO]^{1/2}. The reaction rate equals the rate of the propagation steps which form the main reaction products and CO: in agreement with the experimental order of 3/2. Complex laws Mixed order More complex rate laws have been described as being mixed order if they approximate to the laws for more than one order at different concentrations of the chemical species involved. For example, a rate law of the form represents concurrent first order and second order reactions (or more often concurrent pseudo-first order and second order) reactions, and can be described as mixed first and second order. For sufficiently large values of [A] such a reaction will approximate second order kinetics, but for smaller [A] the kinetics will approximate first order (or pseudo-first order). As the reaction progresses, the reaction can change from second order to first order as reactant is consumed. Another type of mixed-order rate law has a denominator of two or more terms, often because the identity of the rate-determining step depends on the values of the concentrations. An example is the oxidation of an alcohol to a ketone by hexacyanoferrate (III) ion [Fe(CN)63−] with ruthenate (VI) ion (RuO42−) as catalyst. For this reaction, the rate of disappearance of hexacyanoferrate (III) is This is zero-order with respect to hexacyanoferrate (III) at the onset of the reaction (when its concentration is high and the ruthenium catalyst is quickly regenerated), but changes to first-order when its concentration decreases and the regeneration of catalyst becomes rate-determining. Notable mechanisms with mixed-order rate laws with two-term denominators include: Michaelis–Menten kinetics for enzyme-catalysis: first-order in substrate (second-order overall) at low substrate concentrations, zero order in substrate (first-order overall) at higher substrate concentrations; and the Lindemann mechanism for unimolecular reactions: second-order at low pressures, first-order at high pressures. Negative order A reaction rate can have a negative partial order with respect to a substance. For example, the conversion of ozone (O3) to oxygen follows the rate equation in an excess of oxygen. This corresponds to second order in ozone and order (−1) with respect to oxygen. When a partial order is negative, the overall order is usually considered as undefined. In the above example, for instance, the reaction is not described as first order even though the sum of the partial orders is , because the rate equation is more complex than that of a simple first-order reaction. Opposed reactions A pair of forward and reverse reactions may occur simultaneously with comparable speeds. 
For example, A and B react into products P and Q and vice versa (a, b, p, and q are the stoichiometric coefficients): {\mathit{a}A} + {\mathit{b}B} <=> {\mathit{p}P} + {\mathit{q}Q} The reaction rate expression for the above reactions (assuming each one is elementary) can be written as: where: k1 is the rate coefficient for the reaction that consumes A and B; k−1 is the rate coefficient for the backwards reaction, which consumes P and Q and produces A and B. The constants k1 and k−1 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set v=0 in balance): Simple example In a simple equilibrium between two species: A <=> P where the reaction starts with an initial concentration of reactant A, [A]0, and an initial concentration of 0 for product P at time t=0. Then the equilibrium constant K is expressed as: where and are the concentrations of A and P at equilibrium, respectively. The concentration of A at time t, , is related to the concentration of P at time t, , by the equilibrium reaction equation: [A]_\mathit{t} = [A]0 - [P]_\mathit{t} The term [P]0 is not present because, in this simple example, the initial concentration of P is 0. This applies even when time t is at infinity; i.e., equilibrium has been reached: [A]_\mathit{e} = [A]0 - [P]_\mathit{e} then it follows, by the definition of K, that and, therefore, These equations allow us to uncouple the system of differential equations, and allow us to solve for the concentration of A alone. The reaction equation was given previously as: For A <=> P this is simply The derivative is negative because this is the rate of the reaction going from A to P, and therefore the concentration of A is decreasing. To simplify notation, let x be , the concentration of A at time t. Let be the concentration of A at equilibrium. Then: Since: the reaction rate becomes: which results in: . A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope k1 + k−1. By measurement of [A]e and [P]e the values of K and the two reaction rate constants will be known. Generalization of simple example If the concentration at the time t = 0 is different from above, the simplifications above are invalid, and a system of differential equations must be solved. However, this system can also be solved exactly to yield the following generalized expressions: When the equilibrium constant is close to unity and the reaction rates very fast for instance in conformational analysis of molecules, other methods are required for the determination of rate constants for instance by complete lineshape analysis in NMR spectroscopy. Consecutive reactions If the rate constants for the following reaction are and ; A -> B -> C , then the rate equation is: For reactant A: For reactant B: For product C: With the individual concentrations scaled by the total population of reactants to become probabilities, linear systems of differential equations such as these can be formulated as a master equation. The differential equations can be solved analytically and the integrated rate equations are The steady state approximation leads to very similar results in an easier way. Parallel or competitive reactions When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place. 
Two first order reactions A -> B and A -> C , with constants and and rate equations ; and The integrated rate equations are then ; and . One important relationship in this case is One first order and one second order reaction This can be the case when studying a bimolecular reaction and a simultaneous hydrolysis (which can be treated as pseudo order one) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction. For example, A reacts with R to give our product C, but meanwhile the hydrolysis reaction takes away an amount of A to give B, a byproduct: A + H2O -> B and A + R -> C . The rate equations are: and , where is the pseudo first order constant. The integrated rate equation for the main product [C] is , which is equivalent to . Concentration of B is related to that of C through The integrated equations were analytically obtained but during the process it was assumed that . Therefore, previous equation for [C] can only be used for low concentrations of [C] compared to [A]0 Stoichiometric reaction networks The most general description of a chemical reaction network considers a number of distinct chemical species reacting via reactions. The chemical equation of the -th reaction can then be written in the generic form which is often written in the equivalent form Here is the reaction index running from 1 to , denotes the -th chemical species, is the rate constant of the -th reaction and and are the stoichiometric coefficients of reactants and products, respectively. The rate of such a reaction can be inferred by the law of mass action which denotes the flux of molecules per unit time and unit volume. Here ([\mathbf X])=([X1], [X2], \ldots ,[X_\mathit{N}]) is the vector of concentrations. This definition includes the elementary reactions: zero order reactions for which for all , first order reactions for which for a single , second order reactions for which for exactly two ; that is, a bimolecular reaction, or for a single ; that is, a dimerization reaction. Each of these is discussed in detail below. One can define the stoichiometric matrix denoting the net extent of molecules of in reaction . The reaction rate equations can then be written in the general form This is the product of the stoichiometric matrix and the vector of reaction rate functions. Particular simple solutions exist in equilibrium, , for systems composed of merely reversible reactions. In this case, the rate of the forward and backward reactions are equal, a principle called detailed balance. Detailed balance is a property of the stoichiometric matrix alone and does not depend on the particular form of the rate functions . All other cases where detailed balance is violated are commonly studied by flux balance analysis, which has been developed to understand metabolic pathways. General dynamics of unimolecular conversion For a general unimolecular reaction involving interconversion of different species, whose concentrations at time are denoted by through , an analytic form for the time-evolution of the species can be found. Let the rate constant of conversion from species to species be denoted as , and construct a rate-constant matrix whose entries are the . Also, let be the vector of concentrations as a function of time. Let be the vector of ones. Let be the identity matrix. Let be the function that takes a vector and constructs a diagonal matrix whose on-diagonal entries are those of the vector. Let be the inverse Laplace transform from to . 
Then the time-evolved state is given by $\mathbf{x}(t) = \mathcal{L}^{-1}\!\left[\left(s\mathbf{I} - \mathbf{K}^{\mathsf T} + \operatorname{diag}(\mathbf{K}\mathbf{1})\right)^{-1}\right]\mathbf{x}(0) = e^{\left(\mathbf{K}^{\mathsf T} - \operatorname{diag}(\mathbf{K}\mathbf{1})\right)t}\,\mathbf{x}(0)$, thus providing the relation between the initial conditions of the system and its state at time $t$. See also Michaelis–Menten kinetics Molecularity Petersen matrix Reaction–diffusion system Reactions on surfaces: rate equations for reactions where at least one of the reactants adsorbs onto a surface Reaction progress kinetic analysis Reaction rate Reaction rate constant Steady state approximation Gillespie algorithm Balance equation Belousov–Zhabotinsky reaction Lotka–Volterra equations Chemical kinetics References Books cited External links Chemical kinetics, reaction rate, and order (needs flash player) Reaction kinetics, examples of important rate laws (lecture with audio). Rates of Reaction Chemical kinetics Chemical reaction engineering
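The network formalism above lends itself to direct computation. A minimal sketch, assuming mass-action kinetics and arbitrary illustrative rate constants (none of the numbers come from the article), integrating the consecutive scheme A → B → C both with its analytic solution and with the general stoichiometric form dx/dt = Sᵀf(x):

import math

k1, k2, a0 = 1.0, 0.5, 1.0   # rate constants (s^-1) and initial [A], arbitrary

def analytic(t):
    """Closed-form concentrations for A -> B -> C with first-order steps."""
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return a, b, a0 - a - b

# Stoichiometric matrix S (one row per reaction) and mass-action fluxes f.
S = [[-1, 1, 0],    # A -> B
     [0, -1, 1]]    # B -> C
def f(x):
    return [k1 * x[0], k2 * x[1]]

x, dt = [a0, 0.0, 0.0], 1e-4
for _ in range(int(2.0 / dt)):   # explicit Euler integration to t = 2 s
    flux = f(x)
    x = [xi + dt * sum(S[j][i] * flux[j] for j in range(2))
         for i, xi in enumerate(x)]

print("numeric :", [round(v, 4) for v in x])
print("analytic:", [round(v, 4) for v in analytic(2.0)])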
Rate equation
Chemistry,Engineering
5,361
72,051,208
https://en.wikipedia.org/wiki/Phaeocalicium%20polyporaeum
Phaeocalicium polyporaeum, the fairy pin or common pin, is a species of non-lichenized fungus in the genus Phaeocalicium. They grow to a maximum size of 2.5 mm and resemble black matchsticks, with thin stalks and wider caps, in groups or rows primarily on the caps of Trichaptum biforme. Fairy pins are parasitic fungi that grow primarily on the caps of Trichaptum biforme, but they have also been reported on Trametes versicolor. They often co-occur with green algae on the upper side of the caps of host fungi. Fairy pins can be distinguished from other species of Phaeocalicium by their spores, which are very pale brown. Distribution Fairy pins are found in Europe and Siberia, and are common in the eastern United States as well as other parts of North America. The distribution is limited by the substrate distribution, rather than by other factors. Description Fairy pins can vary in appearance considerably, particularly their apothecia. The apothecia can be rounded in the form of an inverted cone and quite small, but other times they are biconvex and significantly larger in diameter than the stalk. Typically, the apothecia appear individually and are between 0.5 and 0.8 mm, but are scattered over the upper surface of shared host fungi. The stalks are greenish brown and unbranched, with hyphae arranged parallel to the surface of the host. References Eurotiomycetes Fungi described in 1875 Taxa named by William Nylander (botanist) Fungi of North America Fungi of Europe Fungi of Asia Fungus species
Phaeocalicium polyporaeum
Biology
428
2,164,583
https://en.wikipedia.org/wiki/State%20Research%20Center%20of%20Virology%20and%20Biotechnology%20VECTOR
The State Research Center of Virology and Biotechnology VECTOR, also known as the Vector Institute, is a biological research center in Koltsovo, Novosibirsk Oblast, Russia. It has research facilities and capabilities for all levels of biological hazard, CDC levels 1–4. It is one of two official repositories for the now-eradicated smallpox virus, and was part of the system of laboratories known as the Biopreparat. The facility was upgraded and secured using modern cameras, motion sensors, fences and biohazard containment systems. Its relative seclusion makes security an easier task. Since its inception there has been an army regiment guarding the facility. At least in Soviet times the facility was a nexus for biological warfare research (see Soviet biological weapons program), though the nature of any ongoing research in this area is uncertain. As of April 2022 the Vector Institute is the Russian site for the WHO H5 Reference Laboratory Network, which responds "to the public health needs arising from avian influenza A (H5N1) infection in humans and influenza pandemic preparedness."

History
Organized in 1974, the center has a long history of virology and made impressive Soviet contributions to smallpox research. Genetic engineering projects included the creation of viruses that manufacture toxins, as well as research on bioregulators and various peptides that function in the nervous system. In post-Soviet times the center has contributed research and development to many projects, including a vaccine for hepatitis A, influenza vaccines, vaccines for the Ebola virus, antiviral drugs based on nucleotide analogs, and test systems for the diagnosis of HIV and hepatitis B, among other developments. It is one of the two laboratories worldwide that are authorized to keep smallpox.

COVID-19 vaccine development
In March 2020 it was reported that Russian scientists had begun to test vaccine prototypes for the new coronavirus disease (COVID-19), with the plan of presenting the most effective one in June, according to a laboratory chief at the Vector Institute. The prototypes had been created and testing on animals had begun. In July 2020, research by the centre found that the SARS-CoV-2 virus that causes COVID-19 can be killed in room-temperature water within 72 hours, helping further research about the disease during the pandemic.

Tasks
The main tasks of the centre, according to VECTOR, are:
Basic research on the causative agents of especially dangerous and socially important viral infections, their genetic variability and diversity, and the pathogenesis of viral infections.
Ensuring constant readiness to carry out diagnostics of especially dangerous infectious agents.
The development and introduction into health practice of diagnostic, curative and preventive medicines.
Post-graduate education and advanced scientific training in the fields of virology, molecular biology and biotechnology, through graduate school and higher education.

Accidents
On 30 April 1988, a doctor died two weeks after accidentally pricking himself through two layers of rubber gloves with a needle contaminated with the Marburg virus. In 2004, a researcher at VECTOR died after accidentally pricking herself with a needle contaminated with the Ebola virus. On 17 September 2019, a gas explosion occurred at Vector. One worker suffered third-degree burns, and the blast blew out window panes. The lab has highly contagious forms of bird flu and strains of hepatitis. The explosion happened in a decontamination room that was being renovated by a contractor.
See also
Smallpox

Notes

Citations

External links
State Research Center of Virology and Biotechnology VECTOR homepage
About the center
NPO Vector at Globalsecurity.org

1974 establishments in the Soviet Union
Biological warfare facilities
Biosafety level 4 laboratories
COVID-19 vaccine producers
Medical research institutes in the Soviet Union
National public health agencies
Research institutes established in 1974
Soviet biological weapons program
State Research Center of Virology and Biotechnology VECTOR
Biology
765
3,528,567
https://en.wikipedia.org/wiki/Social%20disruption
Social disruption is a term used in sociology to describe the alteration, dysfunction or breakdown of social life, often in a community setting. Social disruption implies a radical transformation, in which the old certainties of modern society are falling away and something quite new is emerging. Social disruption might be caused by natural disasters, massive human displacements, or rapid economic, technological and demographic change, but also by controversial policy-making. An example of social disruption on a global scale is rising sea levels, which are creating new landscapes and drawing new world maps whose key lines are not traditional boundaries between nation-states but elevations above sea level. On the local level, an example would be the closing of a community grocery store, which might cause social disruption by removing a "meeting ground" where community members develop interpersonal relationships and community solidarity.

Results of social disruption
"We are wandering aimlessly and dispassionately, arguing for and against, but the one statement on which we, beyond all differences and over many continents, are able to agree is: 'I can no longer understand the world'."

Social disruptions often lead to five social symptoms: frustration, democratic disconnection, fragmentation, polarization and escalation. Studies from the last decade show that our societies have become more fragmented and less coherent (e.g. Bishop 2008), with neighbourhoods turning into little states, organizing themselves to defend local politics and culture against outsiders (Walzer 1983; Bauman 2017) and increasingly identifying through ways of voting, lifestyle or wellbeing (e.g. Schäfer 2015). Especially people on the further right and left of the political spectrum are more likely to say it is important to them to live in a place where most people share their political views and have similar interests (Pew 2014). Hence, citizens become alienated from democratic consensus (Foa and Mounk 2016; Levitsky and Ziblatt 2018) and tend to assume that their opponents believe more extreme things than they really do (Iyengar et al. 2012). Moreover, fear of being identified as unqualified, denied value and dignity, and for that reason marginalized, excluded or outcast, is giving rise to a widespread disenchantment with the idea that the future will improve the human condition, and a mistrust in the ability of nation-states to make this happen (Pew 2015; Bauman 2017). At the same time, accelerations in liberal progression, globalization and migration flows have led to increasingly polarized contestations about national identities, a volatile and critical social state prone to conflict escalation (e.g. hate crimes after the Brexit vote, and the incident at a far-right rally in Charlottesville, USA).

Policy making
"It is unclear how to achieve policy changes of any kind in a polarized society that has few shared facts and whose civic muscles are atrophying."

International and local challenges alike force our societies to find solutions and make decisions on controversial issues in an accelerated manner. The complexity of such decisions is not only mirrored in the aim to tackle a multi-causality of root causes; such decisions also face a high degree of uncertainty as regards their impact. Hence, due to the growing separation between the world of public opinion on the one hand, and the world of problem solving on the other (Mair 2009), it is very likely that political decisions further polarize our societies.
The explanation is that citizens evaluate disruptive developments and related policy changes on two levels: their personal interests and comfort, and the perceived impact on their social identity and community (Ryan and Deci 2000; Haidt 2012). Whether a policy change reflects the substantive representation of the median voter is something that simply does not matter to citizens with regard to their acceptance of decisions (Esaiasson et al. 2017). This can produce multifaceted conflicts over interests, facts and norms between supporters and opponents (Itten 2017). Simultaneously, the capacity of political parties and actors of civil society to bridge that divide is declining (Mair 2009). In such situations, social psychology tells us that citizens who feel uncomfortable will hold tighter to the assumptions that make them feel secure (Podziba 2014). Especially in public policy disputes, parties hardly give up their assumptions voluntarily, and citizens begin to masquerade their true individual conflicts of interest (e.g. devaluation of property; insecurity) as more normative conflicts of interest (e.g. protection of nature; protection of culture). Such distorted behaviour increases markedly when citizens or communities feel that a policy change threatens their way of living.

Bridging social capital
In the light of increasing social divisions and democratic disconnection, Putnam and Feldstein (2004) foresaw the importance of creating "bridging social capital", that is, ties that link groups across a greater social distance. As the authors elaborate, the creation of robust social capital takes time and effort. It develops largely through extensive and time-consuming face-to-face conversation between two individuals or small groups of people. Only then is there the chance to build the trust and mutual understanding that characterize the foundation of social capital. In no way, Putnam and Feldstein write, is it possible to create social capital instantaneously, anonymously or en masse. Furthermore, building social capital among people who already share a reservoir of similar cultural referents, ethnicity, personal experience or moral identity is qualitatively different. Homogeneity makes connective strategies easier; however, a society with only homogeneous social capital risks looking like Bosnia or Belfast. Hence, bridging social capital is especially important for reconciling democracy and diversity. Yet bridging social capital among diverse social groups is intrinsically less likely to develop automatically.

See also
Sociology:
Boomtown
Gillette syndrome
Social problem
Social capital
Social transformation

Organisations:
Civil Politics
Disrupted Societies Institute

References
Bauman, Z. (2017). Symptoms in search of an object and a name, in Geiselberger, H. (Ed.) (2017). The Great Regression. Cambridge: Polity Press, 13-26.
Beck, Ulrich (2017). The Metamorphosis of the World. Polity Press.
Bishop, B. (2008). The big sort: Why the clustering of like-minded America is tearing us apart. Houghton Mifflin Harcourt.
Esaiasson, P., Gilljam, M., and Persson, M. (2017). Responsiveness Beyond Policy Satisfaction: Does It Matter to Citizens? Comparative Political Studies 50(6): 739-765.
Foa, R. S. and Mounk, Y. (2016). The democratic disconnect. Journal of Democracy, 27(3): 5-17.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Vintage.
Itten, A. (2017). Context and Content toward Consensus in Public Mediation.
Negotiation Journal, 33(3): 185-211.
Iyengar, S., Sood, G., and Lelkes, Y. (2012). Affect, Not Ideology: A Social Identity Perspective on Polarization. Public Opinion Quarterly, 76(3): 405-431.
Krannich, Richard S., and Thomas Greider (1984). Personal Well-Being in Rapid Growth and Stable Communities: Multiple Indicators and Contrasting Results. Rural Sociology 49(4): 541-552.
Levitsky, S. and Ziblatt, D. (2018). How Democracies Die. Crown.
Mair, P. (2009). Representative versus Responsible Government. MPIfG Working Paper 09/8.
Pew Research Center (2014). Political Polarization in the American Public. June 12.
Pew Research Center (2015). Beyond Distrust: How Americans View Their Government. November 23.
Podziba, S. L. (2014). Civic fusion: Moving from certainty through not knowing to curiosity. Negotiation Journal, 30(3): 243-254.
Putnam, R. D. and Feldstein, L. (2004). Better Together: Restoring the American Community. New York: Simon and Schuster.
Ryan, R. M., and Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1): 68-92.
Schäfer, A. (2015). Demokratie? Mehr oder weniger, in Frankfurter Allgemeine Zeitung, 9.11.2015.
Walzer, M. (1983). Spheres of Justice: A Defense of Pluralism and Equality. New York: Basic Books.
W. David Pierce and Carl D. Cheney, Behavior Analysis and Learning, 3rd ed.

Behaviorism
Social disruption
Biology
1,851
49,143,122
https://en.wikipedia.org/wiki/List%20of%20women%20anthropologists
This is a list of women anthropologists.

See also
List of anthropologists

Anthropologists
List of women anthropologists
Technology
23
13,090,363
https://en.wikipedia.org/wiki/Error-correcting%20codes%20with%20feedback
In mathematics, computer science, telecommunication, information theory, and searching theory, error-correcting codes with feedback are error-correcting codes designed to work in the presence of feedback from the receiver to the sender.

Problem
Alice (the sender) wishes to send a value x to Bob (the receiver). The communication channel between Alice and Bob is imperfect and can introduce errors.

Solution
An error-correcting code is a way of encoding x as a message such that Bob will successfully understand the value x as intended by Alice, even if the message Alice sends and the message Bob receives differ. In an error-correcting code with feedback, the channel is two-way: Bob can send feedback to Alice about the message he received.

Noisy feedback
In an error-correcting code with noiseless feedback, the feedback received by the sender is always free of errors. In an error-correcting code with noisy feedback, errors can occur in the feedback, as well as in the message. An error-correcting code with noiseless feedback is equivalent to an adaptive search strategy with errors. (A toy sketch of a noiseless-feedback scheme appears at the end of this entry.)

History
In 1956, Claude Shannon introduced the discrete memoryless channel with noiseless feedback. In 1961, Alfréd Rényi introduced the Bar-Kochba game (also known as Twenty questions) with a given percentage of wrong answers, and calculated the minimum number of randomly chosen questions needed to determine the answer. In his 1964 dissertation, Elwyn Berlekamp considered error-correcting codes with noiseless feedback. In Berlekamp's scenario, the receiver chose a subset of possible messages and asked the sender whether the given message was in this subset, expecting a 'yes' or 'no' answer. Based on this answer, the receiver then chose a new subset and repeated the process. The game is further complicated by noise; some of the answers will be wrong.

See also
Noisy channel coding theorem

References

Error detection and correction
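To make the role of feedback concrete, here is a minimal sketch (written for this rewrite; it is not Berlekamp's subset-question strategy, and the 10% error rate is an illustrative assumption): with noiseless feedback, Alice can simply retransmit each bit until Bob's echoed copy matches what she sent, so every error on the noisy forward channel is eventually corrected:

```python
# Sketch: error-free transmission over a noisy forward channel using noiseless feedback.
import random

def noisy_channel(bit: int, p_err: float = 0.1) -> int:
    """Forward channel: flips the bit with probability p_err."""
    return bit ^ (random.random() < p_err)

def send_with_feedback(message):
    received = []
    for bit in message:
        while True:
            got = noisy_channel(bit)  # noisy forward transmission
            # Noiseless feedback: Bob echoes 'got' back to Alice without error.
            if got == bit:            # echo matches, so the bit was delivered
                received.append(got)
                break                 # otherwise Alice retransmits the same bit
    return received

message = [random.randint(0, 1) for _ in range(16)]
assert send_with_feedback(message) == message
```

With noisy feedback, Alice can no longer trust the echo, which is what makes the problem treated in this article substantially harder.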
Error-correcting codes with feedback
Engineering
393
14,672,508
https://en.wikipedia.org/wiki/Hexachlorocyclohexa-2%2C5-dien-1-one
Hexachlorocyclohexa-2,5-dien-1-one, sometimes informally called hexachlorophenol (HCP), is an organochlorine compound. It can be prepared from phenol. Despite the informal name, the compound is not a phenol but a ketone. The informal name derives from its method of preparation, which includes phenol as a reagent.

Preparation
HCP is normally produced by chlorination of phenol with chlorine in the presence of a metal chloride catalyst, such as ferric chloride. It can also be produced by alkaline hydrolysis of polychlorinated benzenes at high temperature and pressure, by conversion of diazonium salts of chlorinated anilines, or by chlorination of phenolsulfonic acids and benzenesulfonic acids followed by removal of the sulfonic acid group. The hydrolysis of HCP gives chloranil.

References

See also
Pentachlorophenol
Hexachlorobenzene

Disinfectants
Fungicides
Organochlorides
Ketones
Hexachlorocyclohexa-2,5-dien-1-one
Chemistry,Biology
239
7,765,817
https://en.wikipedia.org/wiki/Borsuk%27s%20conjecture
The Borsuk problem in geometry, for historical reasons incorrectly called Borsuk's conjecture, is a question in discrete geometry. It is named after Karol Borsuk.

Problem
In 1932, Karol Borsuk showed that an ordinary 3-dimensional ball in Euclidean space can be easily dissected into 4 solids, each of which has a smaller diameter than the ball, and generally the $n$-dimensional ball can be covered with $n + 1$ compact sets of diameters smaller than the ball. At the same time he proved that $n$ subsets are not enough in general. The proof is based on the Borsuk–Ulam theorem. That led Borsuk to a general question: can every bounded subset $E$ of the space $\mathbb{R}^n$ be partitioned into $n + 1$ sets, each of which has a smaller diameter than $E$?

The question was answered in the positive in the following cases:
$n = 2$ — which is the original result by Karol Borsuk (1932).
$n = 3$ — shown by Julian Perkal (1947), and independently, 8 years later, by H. G. Eggleston (1955). A simple proof was found later by Branko Grünbaum and Aladár Heppes.
For all $n$ for smooth convex bodies — shown by Hugo Hadwiger (1946).
For all $n$ for centrally-symmetric bodies — shown by A. S. Riesling (1971).
For all $n$ for bodies of revolution — shown by Boris Dekster (1995).

The problem was finally solved in 1993 by Jeff Kahn and Gil Kalai, who showed that the general answer to Borsuk's question is no. They claim that their construction shows that $n + 1$ pieces do not suffice for $n = 1325$ and for each $n > 2014$. However, as pointed out by Bernulf Weißbach, the first part of this claim is in fact false. But after improving a suboptimal conclusion within the corresponding derivation, one can indeed verify one of the constructed point sets as a counterexample for $n = 1325$ (as well as all higher dimensions up to 1560).

Their result was improved in 2003 by Hinrichs and Richter, who constructed finite sets for $n \geq 298$ which cannot be partitioned into $n + 11$ parts of smaller diameter.

In 2013, Andriy V. Bondarenko showed that Borsuk's conjecture is false for all $n \geq 65$. Shortly after, Thomas Jenrich derived a 64-dimensional counterexample from Bondarenko's construction, giving the best bound up to now.

Apart from finding the minimum number $n$ of dimensions such that the number of pieces $\alpha(n)$ exceeds $n + 1$, mathematicians are interested in finding the general behavior of the function $\alpha(n)$. Kahn and Kalai show that in general (that is, for $n$ sufficiently large), one needs $\alpha(n) \geq (1.2)^{\sqrt{n}}$ many pieces. They also quote the upper bound by Oded Schramm, who showed that for every $\varepsilon$, if $n$ is sufficiently large, $\alpha(n) \leq \left(\sqrt{3/2} + \varepsilon\right)^n$. The correct order of magnitude of $\alpha(n)$ is still unknown. However, it is conjectured that there is a constant $c > 1$ such that $\alpha(n) > c^{\sqrt{n}}$ for all $n$.

Oded Schramm also worked on a related question. A body $K$ of constant width 2 is said to have effective radius $r$ if $\operatorname{Vol}(K) = \operatorname{Vol}(rB)$, where $B$ is the unit ball in $\mathbb{R}^n$. He proved the lower bound $r_n \geq \sqrt{3 + 2/(n+1)} - 1$, where $r_n$ is the smallest effective radius of a body of constant width 2 in $\mathbb{R}^n$, and asked if there exists $\varepsilon > 0$ such that $r_n \leq 1 - \varepsilon$ for all $n$, that is, if the gap between the volumes of the smallest and largest constant-width bodies grows exponentially. In 2024 a preprint by Arman, Bondarenko, Nazarov, Prymak and Radchenko reported to have answered this question in the affirmative, giving a construction that satisfies $\operatorname{Vol}(K) \leq (0.9)^n \operatorname{Vol}(B)$.

See also
Hadwiger's conjecture on covering convex bodies with smaller copies of themselves
Kahn–Kalai conjecture

Note

References

Further reading
Oleg Pikhurko, Algebraic Methods in Combinatorics, course notes.
Andrei M. Raigorodskii, The Borsuk partition problem: the seventieth anniversary, Mathematical Intelligencer 26 (2004), no. 3, 4–12.

External links

Disproved conjectures
Discrete geometry
Borsuk's conjecture
Mathematics
785
13,400,209
https://en.wikipedia.org/wiki/Sullivan%20conjecture
In mathematics, Sullivan conjecture or Sullivan's conjecture on maps from classifying spaces can refer to any of several results and conjectures prompted by homotopy theory work of Dennis Sullivan. A basic theme and motivation concerns the fixed point set in group actions of a finite group $G$. The most elementary formulation, however, is in terms of the classifying space $BG$ of such a group. Roughly speaking, it is difficult to map such a space $BG$ continuously into a finite CW complex $X$ in a non-trivial manner. Such a version of the Sullivan conjecture was first proved by Haynes Miller. Specifically, in 1984, Miller proved that the function space, carrying the compact-open topology, of base point-preserving mappings from $BG$ to $X$ is weakly contractible.

This is equivalent to the statement that the map $X \to F(BG, X)$ from $X$ to the function space $F(BG, X)$ of maps $BG \to X$, not necessarily preserving the base point, given by sending a point $x$ of $X$ to the constant map whose image is $x$, is a weak equivalence. The mapping space $\operatorname{Map}(BG, X)$ is an example of a homotopy fixed point set. Specifically, $\operatorname{Map}(BG, X)$ is the homotopy fixed point set of the group $G$ acting by the trivial action on $X$. In general, for a group $G$ acting on a space $X$, the homotopy fixed points are the fixed points $\operatorname{Map}(EG, X)^G$ of the mapping space $\operatorname{Map}(EG, X)$ of maps from the universal cover $EG$ of $BG$ to $X$, under the $G$-action on $\operatorname{Map}(EG, X)$ given by letting $g$ in $G$ act on a map $f$ by sending it to $gfg^{-1}$. The $G$-equivariant map from $EG$ to a single point induces a natural map η: $X^G \to \operatorname{Map}(EG, X)^G$ from the fixed points to the homotopy fixed points of $G$ acting on $X$. Miller's theorem is that η is a weak equivalence for trivial $G$-actions on finite-dimensional CW complexes. An important ingredient and motivation for his proof is a result of Gunnar Carlsson on the homology of $BG$ as an unstable module over the Steenrod algebra.

Miller's theorem generalizes to a version of Sullivan's conjecture in which the action on $X$ is allowed to be non-trivial. Sullivan conjectured that η is a weak equivalence after a certain p-completion procedure due to A. Bousfield and D. Kan. This conjecture was incorrect as stated, but a correct version was given by Miller, and proven independently by Dwyer–Miller–Neisendorfer, Carlsson, and Jean Lannes, showing that the natural map $(X^G)^{\wedge}_p \to \operatorname{Map}(EG, X^{\wedge}_p)^G$ is a weak equivalence when the order of $G$ is a power of a prime p, and where $X^{\wedge}_p$ denotes the Bousfield–Kan p-completion of $X$. Miller's proof involves an unstable Adams spectral sequence, Carlsson's proof uses his affirmative solution of the Segal conjecture and also provides information about the homotopy fixed points before completion, and Lannes's proof involves his T-functor.

References

External links
Book extract
J. Lurie's course notes

Conjectures that have been proved
Fixed points (mathematics)
Homotopy theory
Sullivan conjecture
Mathematics
581
10,870,301
https://en.wikipedia.org/wiki/Metrifonate
Metrifonate (INN) or trichlorfon (USAN) is an irreversible organophosphate acetylcholinesterase inhibitor. It is a prodrug which is activated non-enzymatically into the active agent dichlorvos. It is used as an insecticide.

According to the US Environmental Protection Agency, trichlorfon has been used on golf course turf, home lawns, non-food contact areas of food and meat processing plants, ornamental shrubs and plants, and ornamental and baitfish ponds. It has been used to control caterpillars, white grubs, mole crickets, cattle lice, sod webworms, leaf miners, stink bugs, flies, ants, cockroaches, earwigs, crickets, diving beetles, water scavenger beetles, water boatmen, backswimmers, water scorpions, giant water bugs and pillbugs. After reregistration, a number of its uses were voluntarily restricted, and currently it is used in nonfood areas to control flies, roaches, and ants among other pests. Outdoors it is used on ornamental plants, golf courses, and lawn grass to treat lepidopteran larvae pests; it is also used to treat flies in animal husbandry in areas that are not accessible to animals, and to control harvester ants.

It can be used to treat schistosomiasis caused by Schistosoma haematobium, but is no longer commercially available. It has been proposed for use in treatment of Alzheimer's disease, but use for that purpose is not currently recommended.

Bans and restrictions
In the United States, trichlorfon/metrifonate may only be used on nonfood and nonfeed sites. Trichlorfon/metrifonate was banned in the EU in 2008 (Regulation (EC) 689/2008) and in Brazil in 2010. It was banned in Argentina in 2018, with regulators noting that trichlorfon converts to dichlorvos through metabolism in plants as well as through biodegradation in soil. It was banned in New Zealand in 2011, and in India from 2020.

References

Acetylcholinesterase inhibitors
Insecticides
Antiparasitic agents
Organophosphate insecticides
Phosphonate esters
Trichloromethyl compounds
Prodrugs
Metrifonate
Chemistry,Biology
527
14,832,515
https://en.wikipedia.org/wiki/Joint%20Interface%20Control%20Officer
The Joint Interface Control Officer (JICO) is the senior multi-tactical data link interface control officer in support of joint task force operations. The JICO is responsible for effecting planning and management of the joint tactical data link network within a theater of operations.

Notes

References
CJCSI 6240.01C
CJCSM 3115.01A
6120.01 (series) Joint Multi-TDL Operation Procedure (JMTOP)
MIL-STD-6016 (series) Tactical Data Link (TDL) 16 Message Standard
STANAG 5516 Allied Tactical Data Link (TDL) 16 Message Standard

Information systems
Joint Interface Control Officer
Technology
136
57,852,827
https://en.wikipedia.org/wiki/Ethnostatistics
Ethnostatistics is the study of the social activity of producing and using statistics. The premise of the field is that statistics are not neutral facts, but are themselves influenced by the social biases of the persons involved in their production. The concept was suggested by John Kitsuse and Aaron Cicourel in their article "A Note on the Uses of Official Statistics", published in Social Problems, where they suggested that criminal statistics are indicative of the social organization of the agencies responsible for assembling them. The concept was developed by sociologist Robert Gephart in his 1988 book, Ethnostatistics. The field of study "uses concepts from ethnomethodology to study sensemaking practices that social scientists employ in the production, interpretation, and display of statistics created in social research". As of the early 2000s, there were three "levels" of ethnostatistics: the first examines the social production of statistics, the second uses computer simulations to examine the degree to which methods of gathering statistics may distort data, and the third examines the persuasive effect of statistics on their end consumers.

References

Ethnography
Philosophy of statistics
Ethnostatistics
Mathematics
241
63,415,758
https://en.wikipedia.org/wiki/Rupintrivir
Rupintrivir (AG-7088, Rupinavir) is a peptidomimetic antiviral drug which acts as a 3C and 3CL protease inhibitor. It was developed for the treatment of rhinoviruses, and has subsequently been investigated for the treatment of other viral diseases including those caused by picornaviruses, norovirus, and coronaviruses, such as SARS and COVID-19. See also 3CLpro-1 Carmofur Ebselen GC376 Iscartrelvir Theaflavin digallate References Antiviral drugs SARS-CoV-2 main protease inhibitors
Rupintrivir
Biology
141
74,660,899
https://en.wikipedia.org/wiki/3%2C4-Difluoroamphetamine
3,4-Difluoroamphetamine (DFA) is a substituted amphetamine which has been sold as a designer drug. It has relatively weak activity as a serotonin releasing agent with only around 1/4 of the affinity for the serotonin transporter compared to MDA, but its activity at other targets has not been studied. See also 3-Fluoroamphetamine 3-Fluoromethamphetamine 4-Fluoroamphetamine 4-Fluoromethamphetamine 3,5-Difluoromethcathinone DFMDA DODC Xylopropamine References Designer drugs Serotonin-norepinephrine-dopamine releasing agents Fluoroarenes Amines
3,4-Difluoroamphetamine
Chemistry
156
723,297
https://en.wikipedia.org/wiki/Centered%20square%20number
In elementary number theory, a centered square number is a centered figurate number that gives the number of dots in a square with a dot in the center and all other dots surrounding the center dot in successive square layers. That is, each centered square number equals the number of dots within a given city block distance of the center dot on a regular square lattice. While centered square numbers, like figurate numbers in general, have few if any direct practical applications, they are sometimes studied in recreational mathematics for their elegant geometric and arithmetic properties.

The figures for the first four centered square numbers are shown below:

[figure: dot diagrams of the centered square numbers 1, 5, 13 and 25]

Each centered square number is the sum of successive squares. Example: as shown in the following figure of Floyd's triangle, 25 is a centered square number, and is the sum of the square 16 (yellow rhombus formed by shearing a square) and of the next smaller square, 9 (sum of two blue triangles).

Relationships with other figurate numbers
Let $C_{k,n}$ generally represent the $n$th centered $k$-gonal number. The $n$th centered square number is given by the formula

$C_{4,n} = n^2 + (n - 1)^2.$

That is, the $n$th centered square number is the sum of the $n$th and the $(n - 1)$th square numbers. The following pattern demonstrates this formula:

[figure: dot diagrams splitting each centered square number into two successive squares]

The formula can also be expressed as

$C_{4,n} = \frac{(2n - 1)^2 + 1}{2}.$

That is, the $n$th centered square number is half of the $n$th odd square number plus 1, as illustrated below:

[figure: dot diagrams pairing each odd square with a centered square number]

Like all centered polygonal numbers, centered square numbers can also be expressed in terms of triangular numbers:

$C_{4,n} = 1 + 4T_{n-1},$

where $T_n = \frac{n(n+1)}{2}$ is the $n$th triangular number. This can be easily seen by removing the center dot and dividing the rest of the figure into four triangles, as below:

[figure: dot diagrams dividing each centered square number into a center dot and four triangles]

The difference between two consecutive octahedral numbers is a centered square number (Conway and Guy, p. 50).

Another way the centered square numbers can be expressed is

$C_{4,n} = 2n^2 - 2n + 1.$

Yet another way the centered square numbers can be expressed is in terms of the centered triangular numbers:

$C_{4,n} = \frac{4C_{3,n} - 1}{3},$

where $C_{3,n} = \frac{3n^2 - 3n + 2}{2}$ is the $n$th centered triangular number.

List of centered square numbers
The first centered square numbers (C4,n < 4500) are:
1, 5, 13, 25, 41, 61, 85, 113, 145, 181, 221, 265, 313, 365, 421, 481, 545, 613, 685, 761, 841, 925, 1013, 1105, 1201, 1301, 1405, 1513, 1625, 1741, 1861, 1985, 2113, 2245, 2381, 2521, 2665, 2813, 2965, 3121, 3281, 3445, 3613, 3785, 3961, 4141, 4325, …
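The closed forms above are easy to check numerically; the sketch below (plain Python, written for this rewrite rather than taken from any source) regenerates the start of this list and spot-checks three identities stated earlier:

```python
# Sketch: generate centered square numbers and verify identities from this article.
def centered_square(n: int) -> int:
    return n * n + (n - 1) * (n - 1)   # C(4, n) = n^2 + (n - 1)^2

print([centered_square(n) for n in range(1, 11)])
# [1, 5, 13, 25, 41, 61, 85, 113, 145, 181]

for n in range(1, 1000):
    c = centered_square(n)
    assert c == ((2 * n - 1) ** 2 + 1) // 2   # half the nth odd square, plus 1
    assert c == 1 + 4 * (n - 1) * n // 2      # 1 + 4 T(n-1), with T the triangular numbers
    assert c % 4 == 1                         # every centered square number is 1 mod 4
```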
Properties
All centered square numbers are odd, and in base 10 one can notice the ones digit follows the pattern 1-5-3-5-1.

All centered square numbers and their divisors have a remainder of 1 when divided by 4. Hence all centered square numbers and their divisors end with digit 1 or 5 in base 6, 8, and 12.

Every centered square number except 1 is the hypotenuse of a Pythagorean triple (3-4-5, 5-12-13, 7-24-25, ...). This is exactly the sequence of Pythagorean triples where the two longest sides differ by 1. (Example: $5^2 + 12^2 = 13^2$.) This is not to be confused with the relationship $(n - 1)^2 + n^2 = C_{4,n}$. (Example: $2^2 + 3^2 = 13$.)

Generating function
The generating function that gives the centered square numbers is

$\frac{x(1 + x)^2}{(1 - x)^3} = x + 5x^2 + 13x^3 + 25x^4 + \cdots.$

References

Figurate numbers
Quadrilaterals
Centered square number
Mathematics
1,002
65,568,496
https://en.wikipedia.org/wiki/Invertebrate%20drift
Invertebrate drift is the downstream transport of invertebrate organisms in lotic freshwater systems such as rivers and streams. The term lotic comes from the Latin word lotus, meaning "washing", and is used to describe moving freshwater systems. This is in contrast with lentic, from the Latin word lentus, meaning slow or motionless, which typically describes still or standing waters such as lakes, ponds, and swamps.

Drift can serve freshwater invertebrates by giving them an escape route from predation, or a means of using the current to disperse progeny downstream. On occasion, however, invertebrates will inadvertently lose their footing and drift downstream. To avoid this, invertebrates counter a stream's flow through physical and behavioral adaptations. And just as invertebrates have adapted to stabilize themselves in the water column, or to use the stream's energy to their advantage, so too have predators adapted to catch invertebrates as they drift. Species of fish, commonly salmonids, catch drifting insects during the peak times after dusk and before dawn. Fishermen can exploit this relationship using fly fishing techniques and lures that mimic drifting insects to catch these fishes.

Researchers have developed sampling techniques for lotic systems, and data on the phenomenon of drift have been collected since as early as 1928. The study of invertebrate drift has advanced the field of stream ecology: drift has been documented to affect community structure, benthic production, and the flow of energy through trophic levels.

Mechanisms of drift

Types of drift
Invertebrate drift can be categorized by the conditions that caused the drift to occur.
Catastrophic drift: Disturbances such as floods physically dislodge animals.
Behavioral drift: Behavior such as escaping, or inadvertently losing foothold in the water column, causes animals to drift downstream. Active drift describes animals choosing to enter drift.
Distributional drift: Used by animals to disperse progeny downstream.
Constant drift: Also known as background drift, describes a low, consistent rate of drifting invertebrates between temporal peaks.
Emergence drift: Nymphs and pupae drift as they swim to the surface to emerge into their adult stage.
Surface drift: Adult insects drift as they emerge on the surface of the river, and again when they return to the surface to lay eggs.

Species associated with drift
Invertebrate species adapt to a stream's current through organs or appendages that physically attach them to the substrate, or through association with large boulders or thick plant growth that buffer the disturbances associated with flow. An example of the former is the family Heptageniidae in the order Ephemeroptera: larvae within this family have modified gills forming a friction disc that allows them to cling to the substrate in rapidly moving waters. An example of the friction disc can be seen in the image to the right, which shows the ventral side of a species within the genus Epeorus. Müller (1954) found that water mites (Hydracarina) and aquatic beetles (Coleoptera) made up a large portion of the benthos population in the stream Skravelbäcken of Sweden, but since they associated with boulders and thick plant growth, they avoided being dislodged by water currents into drift.

Drift-feeding predation
Many predators of insects and invertebrates found in streams feed on those found in stream drift, and many of these predators have adapted or become specialized for feeding on drifting invertebrates.
Predators that use this as their main source of food, typically fish, are called drift-feeders. The most common examples of drift-feeding predators are stream salmonids, especially trout. These fish catch much of their prey during dusk and dawn, which has led studies to conclude that many invertebrates have adapted to drift at night, when they can avoid predation because these fish are mainly visual hunters. Other fish, such as the sculpin, have evolved highly developed lateral lines, giving them better nocturnal predation skills; sculpins have accordingly been found to catch a majority of their prey at night as well as during the day. Fish predation on invertebrates has been seen to alter prey densities in streams, both through direct feeding on insects and by affecting insect dispersal behavior.

Avoidance
Common invertebrate species have adapted drifting behaviors that help them avoid predation. The biggest example, as mentioned before, is drifting at night. However, invertebrates have also adapted to change their drift behavior to avoid predation after receiving certain signals and indicators. For example, the mayfly Baetis bicaudatis was shown to change its behavior based on odors chemically released into the water by fish predators.

Non-piscine predation
Although fish are the main predators of invertebrates in stream drift, there are others as well, such as birds and large insects. For example, the white-throated dipper Cinclus cinclus is an aquatic bird that feeds on invertebrates in stream drift. Another example is the stonefly, a large insect that has been found to prey on other small drifting insects, such as the mayfly.

Environmental factors affecting invertebrate drift
Changes to the environment as a result of abiotic factors can lead to both increases and decreases in invertebrate drift. Factors such as a reduction of stream flow can lead to an increase in invertebrate drift, as observed by Minshall and Winger in their 1968 study: they found that stream flow and the frequency of drift had an inverse relationship over the course of July, August, and September in the Rocky Mountains of Idaho. Koetsier and Bryan sought to assess the effect of abiotic factors on invertebrate drift in the lower Mississippi River. Just as with Minshall and Winger, they found a negative correlation between stream discharge and the frequency of invertebrate drift. According to their 1995 study, river discharge could account for approximately 40% of the variation in the taxa of invertebrates most prone to drift.

Invertebrate drift is also affected by the day/night cycle. At night, invertebrate drift can be up to 10 times higher than during the day. Benke et al. found that all of the invertebrates they sampled were consistently more active in the drift at night, especially during the summer. They found that this pattern of nighttime drift continued throughout the year, but that the difference in drift between day and night was not as pronounced outside the summer. Benke et al. also found that in southern states invertebrate drift is more abundant and consistent, largely because temperatures there do not decline sharply during winter as they do in more northern states.

Drift research

History of drift research
The concept of drift can be traced back to 1928, when an experiment conducted by P. R.
Needham of Cornell University sought to quantify allochthonous animal material in various stream environments. Needham used drift and stop nets to collect drifting material and organisms, and calculated the collection based on the length of the stream reaches sampled. The study served as a proof of concept for the use of these methods in future quantitative and qualitative ecological studies of drifting organisms.

After a period of dormancy, research on drift saw a resurgence in the 1950s and 1960s. A prominent paper of the time was Müller's 1954 Investigations on the organic drift in North Swedish streams. Müller proposed the term "colonization cycle" after observing that the upper stream reaches in Sweden were recolonizing quickly despite their progeny's physical inability to migrate against a current. To counter competition, immature organisms dispersed downstream and then migrated back upstream as adults to spawn, thus replenishing populations.

In the early 1960s, research done by Hikaru Tanaka, Thomas F. Walters, and Karl Müller discovered that invertebrate drift follows a distinct diel periodicity. In Walters' 1962 paper "Diurnal Periodicity in the Drift of Stream Invertebrates", Walters measured the high volume of drifting scuds (Gammarus limnaeus) over 24 hours in four different months spanning each of the seasons. Uniformly, across August, October, February, and May, there was a notable increase in drift an hour after sunset and a notable decrease an hour before sunrise. In August specifically, there was a significant spike in scud volume after sunset, reaching 100 times the numbers caught a few hours prior. Walters measured other species caught, such as mayflies (Baetis vagans), caddisflies (Glossosoma intermedium), and water boatman adults (Hesperocorixa sp.), and they all displayed a similar diurnal periodicity. Walters hypothesized that the higher drift rates coincided with higher invertebrate activity during the night: as the invertebrates moved around freely, they were swept downstream by the current.

Methods of sampling
There are three widely used methods for sampling invertebrate drift: samplers with flow meters, samplers without flow meters, and tube samplers. Invertebrate drift is typically observed over 24-hour intervals.

Samplers with flow meters simultaneously measure the volume of water entering the sampler. This sampler can be enclosed in a metal cylinder and held parallel to the stream bed with a support. A disadvantage of this particular sampler is that it cannot capture drifting invertebrates on the water's surface, which usually enter the stream from terrestrial land.

Samplers without flow meters are long nets with a square mouth. The top of the net sits above the water surface while the bottom edge rests at the bottom of the stream; two iron rods placed in the stream bed hold the net in place. A disadvantage of this type of sampler is that the net can become clogged, which causes back-flow. This decreases efficiency, and because the net is located close to the stream bed, non-drift organisms have the chance to enter it. The image to the left is an example of a sampler without a flow meter.

Tube samplers pass stream discharge through a tube ending in the air above a filtering net. The tube extends out of the water, allowing water to exit and flow through a net that filters out all invertebrates.
An advantage of this method is that back-flow is rare. A disadvantage is that some invertebrates can survive in the tube without being transferred through the filter; this can be addressed by cleaning the tube after gathering sufficient data.

The efficiency of these methods has been confirmed. Although there are many factors that affect samplers, it is believed that samplers maintain "laminar flow and do not significantly affect the velocity of water at the mouth." These models sample drift at close to maximum efficiency.

Human use of invertebrate drift

Fly fishing
Fly fishing is a method of angling that uses lures composed of hair, feathers, and synthetic materials to mimic a fly, bug, or other prey item. Using a long rod, typically between 7 and 11 feet (2 to 3.5 meters), the angler snaps the rod back and forth, allowing the lure to rest just above the water's surface before flicking back. The method described is referred to as dry-fly fishing, as the lure stays on or above the water. In contrast, in wet-fly fishing the lure sits at or beneath the water's surface. In wet-fly fishing, the angler casts the lure upstream and allows the current to carry the fly, whether submerged or on the surface, downstream to the target trout. A wet-fly technique known as nymph fishing (or nymphing) is commonly used to catch trout that feed on drifting nymphs in shallow riffles. Anglers take advantage of invertebrate drift by casting a mimic nymph fly upstream and allowing the river's current to carry the submerged lure downstream to where the trout are waiting to catch their prey.

References

Invertebrates
Biogeography
Freshwater ecology
Invertebrate drift
Biology
2,457
2,463,718
https://en.wikipedia.org/wiki/Herbicidal%20warfare
Herbicidal warfare is the use of substances primarily designed to destroy the plant-based ecosystem of an area. Although herbicidal warfare uses chemical substances, its main purpose is to disrupt agricultural food production and/or to destroy plants which provide cover or concealment to the enemy, not to asphyxiate or poison humans and/or destroy human-made structures. Herbicidal warfare has been forbidden by the Environmental Modification Convention since 1978, which bans "any technique for changing the composition or structure of the Earth's biota".

History
Modern-day herbicidal warfare resulted from military research discoveries of plant growth regulators during World War II, and is therefore a technological advance on the scorched-earth practices used by armies throughout history to deprive the enemy of food and cover. Work on military herbicides began in England in 1940, and by 1944 the United States had joined the effort. Even though herbicides are chemicals, due to their mechanism of action (growth regulators) they are often considered a means of biological warfare. Over 1,000 substances were investigated for phytotoxic properties by the war's end, and the Allies envisioned using herbicides to destroy Axis crops. British planners did not believe herbicides were logistically feasible against Nazi Germany. In May 1945, United States Army Air Force commander General Victor Betrandias advanced a proposal to his superior, General Henry H. Arnold, to use ammonium thiocyanate against Japanese rice crops as part of Allied air raids on Japan. This was part of a larger set of proposed measures to starve the Japanese into submission. The plan calculated that ammonium thiocyanate would not be seen as "gas warfare" because the substance was not particularly dangerous to humans. On the other hand, the same plan envisaged that if the U.S. were to engage in "gas warfare" against Japan, then mustard gas would be an even more effective rice crop killer. The Joint Target Group rejected the plan as tactically unsound, but expressed no moral reservations.

Malaya
During the Malayan Emergency, British Commonwealth forces deployed herbicides and defoliants in the Malayan countryside in order to deprive Malayan National Liberation Army (MNLA) insurgents of cover and potential sources of food, and to flush them out of the jungle. The herbicides and defoliants they used contained Trioxone, an ingredient which also formed part of the chemical composition of the Agent Orange herbicide used by the U.S. military during the Vietnam War. Deployment of herbicides and defoliants served the dual purpose of thinning jungle trails to prevent ambushes and destroying crop fields in regions where the MNLA was active, to deprive the insurgents of potential sources of food. In the summer of 1952, 500 hectares were sprayed with 90,000 liters of Trioxone from fire engines; British Commonwealth forces found it difficult to operate the machinery in jungle conditions while wearing full protective gear. Herbicides and defoliants were also sprayed from Royal Air Force aircraft. Historical records of Dow Chemical show that "Super Agent Orange", also called Dow Herbicide M-3393, was Agent Orange mixed with picloram. Super Orange is known to have been tested by representatives from Fort Detrick and Dow Chemical in Texas, Puerto Rico, and Hawaii, and later in Malaysia in a cooperative project with the International Rubber Research Institute.
Discussions in the British government centered on avoiding the thorny issue of whether herbicidal warfare in Malaya violated the spirit of the 1925 Geneva Protocol, which only prohibited chemical warfare among signatory states in international armed conflicts. The British were keen to avoid accusations like the allegations of biological warfare in the Korean War leveled against the United States. The British government found that the simplest solution was to deny that a conflict was going on in Malaya: they declared the insurgency to be an internal security matter, and thus the use of herbicidal agents was a matter of police action, much like the use of CS gas for riot control.

Many Commonwealth personnel who handled herbicides and defoliants during, and in the decades after, the conflict suffered from serious exposure to dioxin; the spraying also led to soil erosion in areas of Malaysia. Roughly 10,000 civilians and insurgents in Malaysia also suffered from the effects of the defoliants, though many historians argue the true number was higher, given that herbicides and defoliants were used on a large scale in the Malayan Emergency and that the British government manipulated data and kept its deployment of herbicidal warfare secret for fear of a diplomatic backlash.

Vietnam War
The United States used herbicides in Southeast Asia during the Vietnam War. Success with Project AGILE field tests of herbicides in South Vietnam in 1961, and the inspiration of the British use of herbicides and defoliants during the Malayan Emergency, led to the formal herbicidal program Operation Trail Dust (1961–1971). Operation Ranch Hand, a U.S. Air Force program that used C-123K aircraft to spray herbicides over large areas, was one of many programs under Trail Dust. The aircrews charged with spraying the defoliant used a sardonic motto, "Only you can prevent forests", a shortening of the U.S. Forest Service's famous warning to the general public, "Only you can prevent forest fires". The United States and its allies officially claimed that herbicidal and incendiary agents like napalm fall outside the definition of "chemical weapons", and that Britain set the precedent by using them during the Malayan Emergency.

Ranch Hand started as a limited program of defoliation of border areas, security perimeters, and crop destruction. As the conflict continued, the anti-crop mission took on more prominence, and (along with other agents) defoliants came to be used to compel civilians to leave Viet Cong-controlled territories for government-controlled areas. Defoliants were also used experimentally for large-area forest-burning operations, which failed to produce the desired results. Defoliation was judged in 1963 to improve visibility in jungles by 30–75% horizontally and 40–80% vertically. Improvements in delivery systems by 1968 increased this to 50–70% horizontally and 60–90% vertically. Ranch Hand pilots were the first to make an accurate 1:125,000 scale map of the Ho Chi Minh trail south of Tchepone, Laos, by defoliating swaths perpendicular to the trail every half mile or so.

Use of herbicides in Vietnam caused a shortage of commercial pesticides in mid-1966, when the Defense Department had to use powers under the Defense Production Act of 1950 to secure supplies. The concentration of herbicides sprayed in Operation Ranch Hand was more than an order of magnitude greater than that in domestic use. Approximately 10% of the land surface of South Vietnam, about 17,000 square kilometers, was sprayed.
About 85% of the spraying was for defoliation and about 15% was for crop destruction.

War on drugs in South America and Afghanistan

Types of herbicides
The United States had technical military symbols for herbicides that have since been replaced by the more common color code names derived from the banding on shipping drums. The US further distinguished between tactical herbicides, which were to be used in combat operations, and commercial herbicides, which were used in and around military bases. In 1966 the United States Defense Department claimed that herbicides used in Vietnam were not harmful to people or the environment. In 1972 it was advised that a known impurity precluded the use of these herbicides in Vietnam, and that all remaining stocks should be returned home. In 1977 the United States Air Force destroyed its stocks of Agent Orange 200 miles west of Johnston Island on the incinerator ship M/T Vulcanus.

The impurity, 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), was a suspected carcinogen that may have affected the health of over 17,000 United States servicemen, 4,000 Australians, 1,700 New Zealanders, Koreans, and countless Vietnamese soldiers and civilians; over 40,000 children of veterans may have suffered birth defects from herbicidal warfare. Decades later, the lingering problem of herbicidal warfare remains a dominant issue in United States–Vietnam relations. In 2003, a coalition of Vietnamese survivors and long-term victims of Agent Orange sued a number of American-based and multinational chemical corporations for damages related to the manufacture and use of the chemical. A federal judge rejected the suit, ruling that the plaintiffs' claim of direct responsibility was invalid.

See also
E14 munition
E77 balloon bomb
Enterotoxin
M115 bomb
Mycotoxin
United States Army Biological Warfare Laboratories
War on drugs#Aerial herbicide application

References

Further reading

Herbicides
Military tactics
Chemical warfare
Environmental impact of war
Environmental racism
Herbicidal warfare
Chemistry,Biology
1,791
334,820
https://en.wikipedia.org/wiki/Subcutaneous%20administration
Subcutaneous administration is the insertion of medications beneath the skin either by injection or infusion. A subcutaneous injection is administered as a bolus into the subcutis, the layer of skin directly below the dermis and epidermis, collectively referred to as the cutis. The instruments are usually a hypodermic needle and a syringe. Subcutaneous injections are highly effective in administering medications such as insulin, morphine, diacetylmorphine and goserelin. Subcutaneous administration may be abbreviated as SC, SQ, subcu, sub-Q, SubQ, or subcut. Subcut is the preferred abbreviation to reduce the risk of misunderstanding and potential errors. Subcutaneous tissue has few blood vessels and so drugs injected into it are intended for slow, sustained rates of absorption, often with some amount of depot effect. Compared with other routes of administration, it is slower than intramuscular injections but still faster than intradermal injections. Subcutaneous infusion (as opposed to subcutaneous injection) is similar but involves a continuous drip from a bag and line, as opposed to injection with a syringe. Medical uses A subcutaneous injection is administered into the fatty tissue of the subcutaneous tissue, located below the dermis and epidermis. They are commonly used to administer medications, especially those which cannot be administered by mouth as they would not be absorbed from the gastrointestinal tract. A subcutaneous injection is absorbed slower than a substance injected intravenously or into a muscle, but faster than a medication administered by mouth. Medications Medications commonly administered via subcutaneous injection or infusion include insulin, live vaccines, monoclonal antibodies, and heparin. These medications cannot be administered orally as the molecules are too large to be absorbed in the intestines. Subcutaneous injections can also be used when the increased bioavailability and more rapid effects over oral administration are preferred. They are also the easiest form of parenteral administration of medication to perform by lay people, and are associated with less adverse effects such as pain or infection than other forms of injection. Insulin Perhaps the most common medication administered subcutaneously is insulin. While attempts have been made since the 1920s to administer insulin orally, the large size of the molecule has made it difficult to create a formulation with absorption and predictability that comes close to subcutaneous injections of insulin. People with type 1 diabetes almost all require insulin as part of their treatment regimens, and a smaller proportion of people with type 2 diabetes do as well — with tens of millions of prescriptions per year in the United States alone. Insulin historically was injected from a vial using a syringe and needle, but may also be administered subcutaneously using devices such as injector pens or insulin pumps. An insulin pump consists of a catheter which is inserted into the subcutaneous tissue, and then secured in place to allow insulin to be administered multiple times through the same injection site. Recreational drug use Subcutaneous injection may also be used by people to (self-) administer recreational drugs. This can be referred to as skin popping. In some cases, the administration of illicit drugs in this way is associated with unsafe practices leading to infections and other adverse effects. In rare cases, this results in serious side effects such as AA amyloidosis. 
Recreational drugs reported to be administered subcutaneously have included cocaine, mephedrone, and amphetamine derivatives such as PMMA. Contraindications Contraindications to subcutaneous injections primarily depend on the specific medication being administered. Doses which would require more than 2 mL to be injected at once are not administered subcutaneously. Medications which may cause necrosis or otherwise be damaging or irritating to tissues should also not be administered subcutaneously. An injection should not be given at a specific site if there is inflammation or skin damage in the area. Risks and complications With normal doses of medicine (less than 2 mL in volume), complications or adverse effects are very rare. The most common adverse reactions after subcutaneous injections are administered are termed "injection site reactions". This term encompasses any combination of redness, swelling, itching, bruising, or other irritation that does not spread beyond the immediate vicinity of the injection. Injection site reactions may be minimized if repeated injections are necessary by moving the injection site at least one inch from previous injections, or using a different injection location altogether. There may also be specific complications associated with the specific medication being administered. Medication-specific Due to the frequency of injections required for the administration of insulin products via subcutaneous injection, insulin is associated with the development of lipohypertrophy and lipoatrophy. This can lead to slower or incomplete absorption from the injection site. Rotating the injection site is the primary method of preventing changes in tissue structure from insulin administration. Heparin-based anticoagulants injected subcutaneously may cause hematoma and bruising around the injection site due to their anticoagulant effect. This includes heparin and low molecular weight heparin products such as enoxaparin. There is some low certainty evidence that administering the injection more slowly may decrease the pain from heparin injections, but not the risk of or extent of bruising. Subcutaneous heparin-based anticoagulation may also lead to necrosis of the surrounding skin or lesions, most commonly when injected in the abdomen. Many medications have the potential to cause local lesions or swelling due to the irritating effect the medications have on the skin and subcutaneous tissues. This includes medications such as apomorphine and hyaluronic acid injected as a filler, which may cause the area to appear bruised. Hyaluronic acid "bruising" may be treated using injections of hyaluronidase enzyme around the location. Other common medication-specific side effects include pain, burning or stinging, warmth, rash, flushing, or multiple of these reactions at the injection site, collectively termed "injection site reactions". This is seen with the subcutaneous injection of triptans for migraine headache, medroxyprogesterone acetate for contraception, as well as many monoclonal antibodies. In most cases, injection site reactions are self-limiting and resolve on their own after a short time without treatment, and do not require the medication to be discontinued. The administration of vaccines subcutaneously is also associated with injection site reactions. This includes the BCG vaccine which is associated with a specific scar appearance which can be used as evidence of prior vaccination. 
Other subcutaneous vaccines, many of which are live vaccines including the MMR vaccine and the varicella vaccine, may cause fever and rash, as well as a feeling of general malaise for a day or two following the vaccination. Technique Subcutaneous injections are performed by cleaning the area to be injected followed by an injection, usually at a 45-degree angle to the skin when using a syringe and needle, or at a 90-degree angle (perpendicular) if using an injector pen. The appropriate injection angle is based on the length of needle used, and the depth of the subcutaneous fat in the skin of the specific person. A 90-degree angle is always used for medications such as heparin. If administered at an angle, the skin and underlying tissue may be pinched upwards prior to injection. The injection is administered slowly, lasting about 10 seconds per milliliter of fluid injected, and the needle may be left in place for 10 seconds following injection to ensure the medicine is fully injected. Equipment The gauge of the needle used can range from 25 gauge to 27 gauge, while the length can vary between 1/2-inch and 5/8-inch for injections using a syringe and needle. For subcutaneous injections delivered using devices such as injector pens, the needle used may be as thin as 34 gauge (commonly 30–32 gauge), and as short as 3.5 mm (commonly 3.5 mm to 5 mm). Subcutaneous injections can also be delivered via a pump system which uses a cannula inserted under the skin. The specific needle size/length, as well as appropriateness of a device such as a pen or pump, is based on the characteristics of a person's skin layers. Locations Commonly used injection sites include: The outer area of the upper arm. The abdomen, avoiding a 2-inch circle around the navel. The front of the thigh, between 4 inches from the top of the thigh and 4 inches above the knee. The upper back. The upper area of the buttock, just behind the hip bone. The choice of specific injection site is based on the medication being administered, with heparin almost always being administered in the abdomen, as well as on personal preference. Injections administered frequently or repeatedly should be administered in a different location each time, either within the same general site or a different site, but at least one inch away from recent injections. Self-administration As opposed to intramuscular or intravenous injections, subcutaneous injections can be easily performed by people, with only minimal skill and training required. The injection sites for self-injection of medication are the same as for injection by a healthcare professional, and the skill can be taught to patients using pictures, videos, or models of the subcutaneous tissue for practice. People who are to self-inject medicine subcutaneously should be trained how to evaluate and rotate the injection site if complications or contraindications arise. Self-administration by subcutaneous injection generally does not require disinfection of the skin outside of a hospital setting as the risk of infection is extremely low, but instead it is recommended to ensure that the site and person's hands are simply clean prior to administration. Infusion Subcutaneous infusion, also known as interstitial infusion or hypodermoclysis, is a form of subcutaneous (under the skin) administration of fluids to the body, often saline or glucose solutions. It is the infusion counterpart of subcutaneous injection with a syringe. 
Subcutaneous infusion can be used where a slow rate of fluid uptake is required compared to intravenous infusion. Typically, it is limited to 1 mL per minute, although it is possible to increase this by using two sites simultaneously. The chief advantages of subcutaneous infusion over intravenous infusion are that it is cheap and can be administered by non-medical personnel with minimal supervision. It is therefore particularly suitable for home care. The enzyme hyaluronidase can be added to the fluid to improve absorption during the infusion. Subcutaneous infusion can be sped up by applying it to multiple sites simultaneously. The technique was pioneered by Evan O'Neill Kane in 1900. Kane was looking for a technique that was as fast as intravenous infusion but not so risky to use on trauma patients in unhygienic conditions in the field. See also Intramuscular injection Intravenous injection Intradermal injection References Dosage forms Routes of administration Injection (medicine)
Subcutaneous administration
Chemistry
2,310
17,947,077
https://en.wikipedia.org/wiki/Heinz%20Falk
Heinz Falk (born April 29, 1939, in Sankt Pölten, Lower Austria) is professor emeritus of organic chemistry at Johannes Kepler University of Linz and editor of "Progress in the Chemistry of Organic Natural Compounds". His research is focused on structural analysis, synthesis, stereochemistry and photochemistry of plant and animal photosensitizing and photosensory pigments, such as hypericin. Biography Early life Heinz Falk was born April 29, 1939, in Sankt Pölten, Austria, went to elementary school in Statzendorf and completed middle school in Krems an der Donau. After moving to Vienna in 1953 he completed a three-year program at the HBLVA for Chemical Industry, Rosensteingasse, and earned his high-school diploma in 1959 through classes at an evening school, where he met his future wife, Rotraud Falk (née Strohbach). Marriage and children Heinz Falk has been married to Rotraud Falk since 1966, and they have one son, Alexander Falk (born August 13, 1967), who is the CEO of Altova. Education Heinz Falk studied chemistry at the University of Vienna starting in 1959 and completed his dissertation under his doctoral advisor, Karl Schlögl, in 1966. In 1971 Falk spent a year abroad to study at ETH Zürich. Upon his return to Vienna in 1972 he attained habilitation for organic chemistry at the University of Vienna. Career 1966-1979: University of Vienna Starting in 1966 Falk was an assistant at the Institute of Organic Chemistry at the University of Vienna. In 1975 he was promoted to associate professor of physical organic chemistry at the University of Vienna. In the summer of 1978 Falk was invited to speak at the Gordon Research Conference in Wolfeboro. 1979–present: Johannes Kepler University of Linz In 1979 Falk received a call to become full professor of organic chemistry at Johannes Kepler University of Linz, where he founded the new Institute of Organic Chemistry. From 1989 through 1991 he served as Dean of the Faculty of Engineering and Natural Sciences (TNF) at Johannes Kepler University of Linz. In 2005 Falk was ranked #3 among the "Top 10" scientists in Upper Austria by the newspaper "OÖ Nachrichten". In 2008 he retired as Professor emeritus at the Institute of Organic Chemistry of the JKU. Research interests Falk's main research area is the structural analysis, synthesis, stereochemistry and photochemistry of plant and animal photosensitizing and photosensory pigments. The main group of compounds covered in his work are pigments derived from the fundamental phenanthro[1,10,9,8-opqra]perylene-7,14-dione chromophore with natural pigments like hypericin, stentorin, the fringelites, the gymnochromes, and blepharismin. In addition, he is focusing on hemin-analogous corrphycene derivatives (e.g. as potential blood substitutes and heme oxygenase blocker) as well as on other natural compounds such as the natural sun blocker urocanic acid. Furthermore, research on applied problems of industrial relevance, such as oxidation, ozonization, non-natural amino acids, and catalysis has also been pursued. Published works Books Scientific articles H. Marko, N. Müller and H. Falk, Nuclear Magnetic Resonance Investigations of the Biliverdin/Apomyoglobin Complex. Eur. J. Biochem. 193, 573 (1990) H. Falk, H. Marko, N. Müller, W. Schmitzberger and H. Stumpe, Reconstitution of Apomyoglobin with Bile Pigments. Monatsh. Chem. 121, 893 (1990) H. Falk, H. Marko, N. Müller and W. Schmitzberger, On the Chemistry of Pyrrole Pigments, 87. Mitt.: The Apomyoglobin Heme Pocket as a Reaction Vessel in Bile Pigment Chemistry. Monatsh. Chem. 
121, 903 (1990) H. Falk and H. Marko, Reduction of a Bilindione-10-Thiol-Adduct as a Model for the Reduction Step of the Biliverdin Reductase System. Monatsh. Chem. 122, 319 (1991) U. Wagner, C. Kratky, H. Falk and H. Woess, Crystal Structure and Conformation of 10-Aryl-bilatrienes-abc. Monatsh. Chem. 122, 749 (1991) H. Falk and G. Schoppel, A Synthesis of Emodin Anthrone. Monatsh. Chem. 122, 739 (1991) H. Falk and W. Schmitzberger, On the Nature of "Soluble" Hypericin in Hypericum Species. Monatsh. Chem., 123, 731 (1992) H. Falk and D. Hemmer, On the Chemistry of Pyrrole Pigments, 88. Mitt.: Nonlinear Optical Properties of Linear Oligopyrroles. Monatsh. Chem. 123, 779 (1992) H. Falk and G. Schoppel, On the Synthesis of Hypericin by Oxidative Trimethylemodin Anthrone and Emodin Anthrone Dimerization: Isohypericin. Monatsh. Chem. 123, 931 (1992) H. Falk and W. Schmitzberger, On the Bromination of Hypericin: The Gymnochrome Chromophores. Monatsh. Chem. 124, 77 (1993) C. Etzlstorfer, H. Falk, N. Müller, W. Schmitzberger and U. Wagner, Tautomerism and Stereochemistry of Hypericin: Force Field, NMR, and X-ray Crystallographic Investigations. Monatsh. Chem., 124, 751 (1993) H. Falk and A. Suste, On the Chemistry of Pyrrole Pigments, XC: Pyridinologous Linear Tri- and Tetrapyrroles. Monatsh. Chem., 124, 881 (1993) H. Falk, J. Meyer and M. Oberreiter, A Convenient Semisynthetic Route to Hypericin. Monatsh. Chem., 124, 339 (1993) C. Etzlstorfer and H. Falk, Tautomerism and Stereochemistry of Isohypericin, Bromo-hypericines, and Gymnochromes: Force Field Investigations. Monatsh. Chem., 124, 1031 (1993) H. Pschierer, J. Friedrich, H. Falk and W. Schmitzberger, On the Correlation Between Pressure Shift and Solvent Shift: A Spectral Hole Burning Study. J. Phys. Chem., 97, 6902 (1993) A. Angerhofer, H. Falk, J. Meyer and G. Schoppel, The Lowest Triplet States of Hypericin and Isohypericin. J. Photochem. Photobiol., B20, 133 (1993) H. Falk and A. Suste, On the Chemistry of Pyrrole Pigments, XCI: Copper Complexes of Pyridinologous Linear Tri- and Tetrapyrroles as Cyclopropanation Catalysts. Monatsh. Chem., 125, 325 (1994) H. Falk and J. Meyer, On the Homo- and Heteroassociation of Hypericin. Monatsh. Chem., 125, 753 (1994) C. Etzlstorfer and H. Falk, Stereochemistry and Tautomerism of Stentorin, Isostentorin, and Fringelit D: Force Field Investigations. Monatsh. Chem., 125, 955 (1994) H. Falk, E. Mayr and A. Richter, Simple Diffuse Reflectance UV-Vis Spectroscopic Determination of Organic Pigments (Fringelites) in Fossils. Microchim. Acta, 117, 1 (1994) H. Falk and J. Leimhofer, Ozone as an Oxygen Source for Alkene Ene-Reactions. Monatsh. Chem., 126, 85 (1995) Q.-Q. Chen, H. Falk and R. Micura, On the Chemistry of Pyrrole Pigments, XCII: Syntheses of 1,2-Bis-pyrrolylethanes. Monatsh. Chem., 126, 473 (1995) H. Falk, C. Kratky, N. Müller, W. Schmitzberger and U. Wagner, Structure Determination of the Biliverdin Apomyoglobin Complex. Crystal Structure Analysis of Two Crystal Forms at 1.4 and 1.5 Å Resolution. J. Mol. Biol., 247, 326 (1995) N. H. Tran-Thi and H. Falk, An Efficient Synthesis of the Plant Growth Hormone 1-Triacontanol. Monatsh. Chem., 126, 565 (1995) H. Falk and A. F. Vaisburg, Concerning the Absorption and Emission Properties of Phenanthro[1,10,9,8-opqra]perylene-7,14-dione. Monatsh. Chem., 126, 361 (1995) H. Falk and E. Mayr, Syntheses and Properties of Fringelite D (1,3,4,6,8,10,11,13-octahydroxy-phenanthro[1,10,9,8-opqra]perylene-7,14-dione). Monatsh. Chem., 126, 699 (1995) H. 
Falk, A.F. Vaisburg and A.M. Amer, On the Synthesis of ω-Appended Hypericin Derivatives. Monatsh. Chem., 126, 993 (1995) R. Altmann and H. Falk, On the Syntheses and Chiroptical Properties of the Tri- and Tetragalloylquinic Acids. Monatsh. Chem. 126, 1225 (1995) D. Shemin and H. Falk, Porphyrins and Bile Pigments, Metabolism Encyclopedia of Human Biology, 2nd Ed., Academic Press, 177 (1996) H. Falk and Q.-Q. Chen, On the Chemistry of Pyrrole Pigments, XCVI: An Efficient Synthesis of Corrphycenes. Mh. Chem., 127, 69 (1996) H. Falk and E. Mayr, Syntheses, Constitutions, and Properties of Stentorin and Isostentorin. Mh. Chem., 126, 1311 (1995) H. Falk and T.N.H. Tran, Synthesis and Properties of an ω,ω′-Appended Eighteen Carbon Chains Hypericin Derivative. Mh. Chem., 127, 717 (1996) C. Etzlstorfer, H. Falk, N. Müller and T.N.H. Tran, Structural Aspects and Electronic Absorption of the Hydroxyphenanthroperylene Quinones Fringelit D, Hypericin, and Stentorin. Mh. Chem., 127, 659 (1996) C. Etzlstorfer, H. Falk, E. Mayr and S. Schwarzinger, Concerning the Acidity and Hydrogen Bonding of Hydroxyphenanthroperylene Quinones, like Fringelite D, Hypericin, and Stentorin. Mh. Chem., 127, 1229 (1996) R. Altmann, C. Etzlstorfer and H. Falk, Chiroptical Properties and Absolute Configurations of the Hypericin Chromophore Propeller Enantiomers. Mh. Chem., 128, 785 (1997) H. Falk and M. Stanek, Two-Dimensional 1H and 13C NMR Spectroscopy and the Structural Aspects of Amylose and Amylopectin. Mh. Chem., 128, 777 (1997) H. Falk, A.A.O. Sarhan, H.T.N. Tran and R. Altmann, Synthesis and Properties of Hypericins Substituted with Acidic and Basic Residues: Hypericin Tetrasulfonic Acid – a Water Soluble Hypericin Derivative. Mh. Chem., 129, 309 (1998) E.I. Kapinus, H. Falk and T.N.H. Tran, Spectroscopic Investigation of the Molecular Structures of Hypericin and its Salts. Mh. Chem., 130, 1237–1244 (1999) A.M. Amer, H. Falk, H.N.T. Tran, The Dissociation and Tautomerization Equilibria of Hypericin: Alkyl Protected Hydroxyl Derivatives. Mh. Chem., 130, 623–635 (1999) R. Obermüller, G. Schütz, H. Gruber and H. Falk, Concerning Regioselective Photochemical Intermolecular Proton Transfer from Hypericin. Mh. Chem., 130, 275–281 (1999) G. Kada, H. Falk and H. Gruber, Accurate Measurement of Avidin and Streptavidin in Crude Biofluids with a New, Optimized Biotin–Fluorescein Conjugate. Biochim. Biophys. Acta, 1427, 33–43 (1999) C. Etzlstorfer, I. Gutman and H. Falk, Concerning the Deprotonation of the Photooxidized 3-Hypericinate Ion. Mh. Chem., 130, 1333–1339 (1999) T. Dax, H. Falk and E. Kapinus, A Structural Proof for the Hypericin 1,6-Dioxo Tautomer. Mh. Chem., 130, 827–831 (1999) H. Falk, Gosau Schleifsteine für den Fossiliensammler. Fossilien, 4, 248–250 (1999) H. Falk, Vom Photosensibilisator Hypericin zum Photorezeptor Stentorin - die Chemie der Phenanthroperylenchinone. Angew. Chemie, 111, 3306–3326 (1999) H. Falk, From the Photosensitizer Hypericin to the Photoreceptor Stentorin - the Chemistry of the Phenanthroperylene Quinones. Angew. Chemie Int. Ed., 38, 3134–3154 (1999) S. Baumgartner, T. Dax, W. Praznik and H. Falk, Characterization of the high-molecular weight fructan isolated from garlic (Allium sativum L.). Carbohydrate Res., 328, 177-183 (2000) C. Etzlstorfer and H. Falk, Concerning the Association of Hypericin Tautomers and their Hypericinate Ions. Mh. Chem., 131, 333-340 (2000) T. Dax, E. Kapinus and H. Falk, A Remarkable Photoreaction of 3-O-Benzylhypericin. 
Helvetica Chimica Acta, 83, 1744-1752 (2000) B. Immitzer, C. Etzlstorfer, R. Obermüller, M. Sonnleitner, G. Schütz, and H. Falk, On the Photochemical Proton Expulsion Capability of Fringelite D — A Model of the Protist Photosensory Pigments of the Stentorin and Blepharismin Types. Mh. Chem., 131, 1039-1045 (2000) T. Dax, C. Etzlstorfer, and H. Falk, On the Ground State Energy Hypersurface of Blepharismins and Oxyblepharismins. Mh. Chem., 131, 1115-1122 (2000) B. Immitzer and H. Falk, Fringelite D, a Model of the Protist Photosensory Pigments of the Stentorin and Blepharismin Types: the Hypericin and Fringelite D Photosensitized Destruction of Bilirubin. Mh. Chem., 131, 1167-1171 (2000) T. Dax and H. Falk, An Unusual Photoreaction of 3,4-Di-O-benzyl-hypericin. Mh. Chem., 131, 1217-1219 (2000) E. Delaey, R. Obermüller, I. Zupko, H. Falk, and P. de Witte, In vitro Study of the Photocytotoxicity of some Hypericin analogs on different Cell Lines. Photochem. Photobiol., 74, 164-171 (2001) R.A. Obermüller, K. Hohenthanner, and H. Falk, Towards Hypericin-Derived Potential Photodynamic Therapy Agents. Photochem. Photobiol., 74, 211-215 (2001) B. Tu, Q. Chen, F. Yan, J. Ma, K. Grubmayr, and H. Falk, Efficient Routes to ω-Chloroalkyl Bilirubins and C12-N22 Bridged Biliverdins. Mh. Chem., 132, 693-705 (2001) R.A. Obermüller, T. Dax and H. Falk, Replacement of Methoxy- to tert-Butyl-Substitution on a Naphthalene Residue – An Unexpected Reaction Observed During a Snieckus ortho-Lithiation. Mh. Chem., 132, 1057-1062 (2001) R.A. Obermüller and H. Falk, Concerning the Absorption and Photochemical Properties of an ω-4-Dimethylaminobenzal Hypericin Derivative. Mh. Chem., 132, 1519-1526 (2001) R.A. Obermüller, C. Etzlstorfer and H. Falk, On the Chemistry of a Dibenzohypericin Derivative. Mh. Chem., 133, 89-96 (2002) J. Leonhartsberger and H. Falk, The Protonation and Deprotonation Equilibria of Hypericin Revisited. Mh. Chem., 133, 167-172 (2002) B. Lackner and H. Falk, Concerning the Diastereomerization of Stilbenoid Hypericin Derivatives. Mh. Chem., 133, 717-721 (2002) T.N. Tran and H. Falk, Concerning the Chiral Discrimination and Helix Inversion Barrier in Hypericinates and Hypericin Derivatives. Mh. Chem., 133, 1231-1237 (2002) M. Emsenhuber, P. Pöchlauer, J.-M. Aubry, V. Nardello and H. Falk, Evidence for the Generation of Singlet Oxygen (1O2, 1Δg) from Ozone Promoted by Inorganic Salts. Mh. Chem., 133, 387-391 (2003) M. Deak and H. Falk, On the Chemistry of the Resveratrol Diastereomers. Mh. Chem., 134, 883-888 (2003) T.A. Salama, B. Lackner and H. Falk, An Efficient Synthesis of O-Methyl Protected Emodin Aldehyde and Emodin Nitrile. Mh. Chem., 134, 1113-1119 (2003) Bettina Schwarzinger and Heinz Falk, A Unique Photoreaction of Hypericinate Bound to Human Serum Albumin, Lipids, or Vesicles. Mh. Chem., 134, 1353-1358 (2003) Beate Hager, Mario Alva-Astudillo, and Heinz Falk, A Hemin-Analogous Corrphycene Derivative: Suppression of Heme Oxygenase and Reconstitution with Apomyoglobin. Mh. Chem., 134, 1499-1507 (2003) Tarek A. Salama, Bernd Lackner, and Heinz Falk, Synthesis of 6-Heterocyclically Appended Tri-O-Methyl Protected 6-Desmethyl Emodin Derivatives. Mh. Chem., 135, 735-742 (2004) Thorsten Ganglberger, Walther G. Jary, Peter Pöchlauer, Jean-Marie Aubry, Veronique Nardello, and Heinz Falk, A Chemical (Dark) Source of Singlet Oxygen: Ozone Splitting Promoted by Tin(II) Salts. Mh. Chem., 135, 501-507 (2004) Walther G. 
Jary, Thorsten Ganglberger, Peter Pöchlauer, and Heinz Falk, Generation of Singlet Oxygen from Ozone Catalysed by Phosphinoferrocenes. Mh. Chem., 136, 537-541 (2005) Bernd Lackner, Christoph Etzlstorfer and Heinz Falk, Synthesis and Properties of 10,11-Dibenzimidazolyl-10,11-didesmethyl-hypericin – The First Heterocyclically Substituted Hypericin Derivative. Mh. Chem., 135, 1157-1166 (2004) Bettina Schwarzinger and Heinz Falk, Concerning the Photodiastereomerization and Protic Equilibria of Urocanic Acid and its Complex with Human Serum Albumin. Mh. Chem., 135, 1297-1304 (2004) Mario Waser, Heinz Falk, Peter Pöchlauer and Walther G. Jary, Concerning Chemistry, Reactivity, and Mechanism of Transition Metal Catalysed Oxidation of Benzylic Compounds by Means of Ozone. Journal of Molecular Catalysis A - Chemical, 236, 187-193 (2005) Mario Waser and Heinz Falk, Intramolecularly Friedel-Crafts Acylated Emodin Derivatives: An Access to the Cores of Angucyclinones, Anthracyclinones, and to Hypericin Analogues. Mh. Chem., 136, 609-618 (2005) Bernd Lackner, Yulita Popova, Christoph Etzlstorfer, Andrija A. Smelcerovic, Christian W. Klampfl, and Heinz Falk, Syntheses and Properties of Two Heterocyclically Substituted Hypericin Derivatives: 10,11-Dibenzothiazolyl-10,11-didesmethyl-hypericin and 10,11-Dibenzoxazolyl-10,11-didesmethylhypericin. Mh. Chem., 136, 777-793 (2005) Mario Waser, Bernd Lackner, Joachim Zuschrader, Norbert Müller, and Heinz Falk, An efficient regioselective synthesis of endocrocin and structurally related natural anthraquinones starting from emodin. Tetrahedron Lett., 46, 2377-2380 (2005) Bernd Lackner, Klaus Bretterbauer, and Heinz Falk, An Efficient Route to Emodic Amine and Analogous O-Methyl Protected Derivatives Starting from Emodin. Mh. Chem., 136, 1629-1639 (2005) Mario Waser, Yulita Popova, Christoph Etzlstorfer, Werner F. Huber, and Heinz Falk, Syntheses, Photochemical Properties, and Tautomerism of Intramolecularly Friedel-Crafts Acylated Hypericin Derivatives. Mh. Chem., 136, 1221-1231 (2005) David Geißlmeir, Walther G. Jary and Heinz Falk, The TEMPO/Copper Catalyzed Oxidation of Primary Alcohols to Aldehydes Using Oxygen as Stoichiometric Oxidant. Mh. Chem., 136, 1591-1599 (2005) Mario Waser, Yulita Popova, Christian W. Klampfl, and Heinz Falk, 9,12-Dibenzothiazolylhypericin and 10,11-Dibenzothiazolyl-10,11-Didemethylhypericin: Photochemical Properties of Hypericin Derivatives Depending on the Substitution Site. Mh. Chem., 136, 1791-1797 (2005) Klaus Wolkenstein, Jürgen H. Gross, Heinz Falk, and Heinz F. Schöler, Preservation of hypericin and related polycyclic quinone pigments in fossil crinoids. Proceedings of the Royal Society B, 273, 451-456 (2006) Beate Hager, Bettina Schwarzinger and Heinz Falk, Concerning the Thermal Diastereomerization of the Green Fluorescent Protein Chromophore. Mh. Chem., 137, 163-168 (2006) Mario Waser and Heinz Falk, Condensed Emodin Derivatives and Their Applicability for the Synthesis of a Fused Heterocyclic Hypericin Derivative. Eur. J. Org. Chem., 1200-1206 (2006) Mario Waser and Heinz Falk, Towards Second Generation Hypericin Based Photosensitizers for Photodynamic Therapy. Curr. Org. Chem. 11: 547-558 (2007) Heinz Falk: Karl Schlögl. Obituary. Mh. Chem., 138 (2007) Karoline Fendler, Beate Hager, and Heinz Falk, The Thermal Diastereomerization of the Tryptophane-Derived Green Fluorescent Protein Chromophore. Mh. Chem., 138, 859-862 (2007) Mieke Roelants, Heinz Falk, Bernd Lackner, Mario Waser, Peter A.M. 
de Witte, OC222 Bathochromically shifted hypericin derivatives: photosensitizing properties. Abstr. of 12th Congress of the European Society for Photobiology 2007, University of Bath, UK, Sept 1/6, (2007) Heinz Falk, Die 44. Mineralientage München - ein Rückblick. Fossilien, 25, 2-4 (2008) Heinz Falk. Karl Schlögl, Nachruf. Almanach d. Öst. Akademie der Wiss., 157, 469-477 (2008) S. Aigner and H. Falk: A microwave-assisted synthesis of phenanthroperylene quinones as exemplified with hypericin. Monatsh. Chem. 139 (2008) 991–993. J. Zuschrader, G. Reiter and H. Falk: ω,ω’-Urea- and dithioacetal-derivatives of hypericin. Monatsh. Chem. 139 (2008) 995–998. D. Geißlmeir and H. Falk: ω,ω’-Appended nucleobase derivatives of hypericin. Monatsh. Chem. 139 (2008) 1127-1136. J. Zuschrader, W. Schöfberger, and H. Falk: A carbohydrate-linked hypericinic photosensitizing agent. Monatsh. Chem. 139 (2008) 1387–1390. S. Aigner and H. Falk: On synthesis and properties of hypericin-porphyrin hybrids. Monatsh. Chem. 139 (2008) 1513–1518. M. Roelants, B. Lackner, M. Waser, H. Falk, P. Agostinis, H. Van Poppel, and P. A. M. de Witte: In vitro study of the phototoxicity of bathochromically-shifted hypericin derivatives. Photochem. Photobiol. Sci. 8 (2009) 822–829. B. Hager, W. S. L. Strauss, and H. Falk: Cationic Hypericin Derivatives as Novel Agents with Photobactericidal Activity: Synthesis and Photodynamic Inactivation of Propionibacterium acnes. Photochem. Photobiol. 85 (2009) 1201–1206. H. Falk: Die 46. Mineralientage München: ein Rückblick. Fossilien 27 (2010) 3–5. H. Falk: Museumsportrait: Die Dauerausstellung „Natur“ im Schlossmuseum Linz. Fossilien 27 (2010) 300–303. H. Falk: Ein riesiger Mondfisch aus Österreich. Fossilien 27 (2010) 304–307. K. Wolkenstein, J. H. Gross, and H. Falk: Boron-containing organic pigments from a Jurassic red alga. Proc. Natl. Acad. Sci. USA 107 (2010) 19374–19378. H. Falk: Die 47. Mineralientage München: ein Rückblick. Fossilien 28 (2011) 3–5. H. Falk: Museumsportrait: Das Kotsiomitis-Museum in Ligurio bei Epidauros. Fossilien 28 (2011) 57–59. K. Wolkenstein and H. Falk: Spuren des Lebens: Organische Verbindungen im Stein. Nachr. Chem. 59(5) (2011) 517–520. M. Waser and H. Falk: Progress in the Chemistry of Second Generation Hypericin Based Photosensitizers. Curr. Org. Chem. (2011) 3894–3907. H. Falk: Die 48. Mineralientage München: ein Rückblick. Fossilien 29 (2012) 3–6. I. Teasdale, M. Waser, S. Wilfert, H. Falk, and O. Brüggemann: Photoreactive, water-soluble conjugates of hypericin with polyphosphacenes. Monatsh. Chem./Chem. Monthly 147 (2012) 355–360. H. Falk: Emanuel Vogel, Nachruf. Almanach d. Österr. Akademie d. Wiss. 161 (2012) 547–552. H. Falk: Das Neueste aus der Welt der Mikro-Kameras: DigiMicro Mobile. Leitfossil.de (Mikromania) (2012) 28. 5. 2012. H. Falk: Der neue Sauriersaal des Naturhistorischen Museums Wien. Fossilien 29 (2012) 286–290. H. Falk: Naturhistorisches Museum Wien: Der Neue Meteoritensaal. Leitfossil.de (2012) 3.12.2012. H. Falk: Naturhistorisches Museum Wien: Die Neuen Anthropologiesäle. Leitfossil.de (2013) 28.4.2013. H. Falk: Heinz A. Staab, Nachruf. Almanach d. Österr. Akademie d. Wiss. 162 (2012) 503–510. H. Falk: American Museum of Natural History New York. Leitfossil.de (2013) 12.9.2013 H. Falk: Friedrich Simony zum 200sten Geburtstag. Leitfossil.de (2013) 2.11.2013 H. Falk: Die 50. Mineralientage München – ein Rückblick. Fossilien 31 (2014) 60–62. H. 
Falk: Ausstellung im NHM Wien: Gabonionta — mehrzellige Organismen vor 2,1 Milliarden Jahren! Leitfossil.de (2014) 17.3.2014. H. Falk: „Tintenfisch und Ammonit“ Ausstellung im Biologiezentrum des Oberösterreichischen Landesmuseums in Linz. Leitfossil.de (2014) 25. 4. 2014. W. P. Pfeiffer, S. K. Dey, D. A. Lightner, H. Falk: Homorubins and homoverdins. Monatsh. Chem./Chem. Monthly 145 (2014) 963-981. K. Wolkenstein, H. Sun, C. Griesinger, H. Falk: Identification of organic pigments in macrofossils: analytical challenges and recent advances. Abstr. of 2014 The Geological Society of America Meeting, Vancouver, B.C. (10–22 Oct. 2014), paper No. 108-14. H. Falk: Mammut-Eismumie aus Sibirien zu Gast im Naturhistorischen Museum Wien. Leitfossil.de (2015) 2. 1. 2015. H. Falk: Die 51. Mineralientage München – ein Rückblick. Fossilien 32 (2015) 59–61. H. Falk: Chemofossilien. Leitfossil.de (2015) 3. 3. 2015. H. Falk, A. D. Kinghorn: Foreword. Prog. Chem. Org. Nat. Prod. 100 (2015) v-vi. K. Wolkenstein, H. Sun, C. Griesinger, H. Falk: Exceptional preservation of polyketide secondary metabolites in macrofossils. Abstr. 27th Intern. Meeting on Org. Geochem. Sept. 13–18, Prague, Cz, 226. H. Falk: Paratethys-Stromatolithen aus Ritzing (Burgenland, Österreich) als Zeugen einer Klimakrise im Mittelmiozän. Leitfossil.de (2015) 14. 10. 2015. H. Falk: Naturhistorisches Museum Wien: Die neuen Säle der Prähistorie. Leitfossil.de (2015) 14. 10. 2015. K. Wolkenstein, H. Sun, C. Griesinger, H. Falk: Structure and Absolute Configuration of Jurassic Polyketide-Derived Spiroborate Pigments Obtained from Microgram Quantities. J. Am. Chem. Soc. 137 (2015) 13460-13463. H. Falk: Der Specht klopft im Biologiezentrum Linz. Leitfossil.de (2016) 20. 1. 2016. H. Falk: Wo die Wiener Mammuts grasten — Naturwissenschaftliche Entdeckungsreisen durch das heutige Wien. Leitfossil.de (2016) 10. 5. 2016 H. Falk: Ein Ammoniten-Denkmal auf der Rossmoosalm. Leitfossil.de (2016) 9. 6. 2016. H. Falk: Augensteine — Zeugen der großen Umbrüche in den Ostalpen in den letzten 35 Millionen Jahren. Leitfossil.de (2016) 13. 8. 2016. D. Kinghorn, H. Falk, S. Gibbons, J. Kobayashi: Phytocannabinoids — Unraveling the Complex Chemistry and Pharmacology of Cannabis sativa, Preface. Prog. Chem. Org. Nat. Prod. 103 (2017) v-vi. H. Falk, K. Wolkenstein: Natural Product Molecular Fossils. Prog. Chem. Org. Nat. Prod. 104 (2017) 1–126. Patents Process for the N-alkylation of ureas US Pat. 5124451 - Filed Jul 10, 1991 - Chemie Linz GmbH Process for the N-alkylation of ureas US Pat. 5169954 - Filed Dec 16, 1991 - Chemie Linz GmbH Process for the preparation of pure N,N'-asymmetrically substituted phenylureas US Pat. 5283362 - Filed Jul 31, 1992 - Chemie Linz GmbH Process for the preparation of Isocyanic Acid by Decomposition of N,N-trisubstituted Ureas Eur. Pat. EP 0582863A2 - Filed Feb 16, 1994 - US Pat. Nr. 5360601 Filed Nov 1, 1994 - Chemie Linz GmbH Isocyanates by Decomposition of N,N,N-trisubstituted Ureas Eur. Pat. EP 0583637A1 - Filed Feb 23, 1994 - Chemie Linz GmbH Amine-oxides US Pat. 
5409532 - Filed Jan 21, 1993 - Lenzing AG Awards Theodor Körner Prize for Science and Art in Austria, 1970 Ernst Späth Prize of the Austrian Academy of Sciences, 1976 Sandoz Prize, 1977 Election to corresponding member of the New York Academy of Sciences, 1989 Election to corresponding member of the Mathematical and Natural Sciences Class of the Austrian Academy of Sciences, 1992 Upper Austrian Prize for Science, 1993 Election to full member of the Mathematical and Natural Sciences Class of the Austrian Academy of Sciences, 1997 Josef Loschmidt Medal of the Austrian Chemical Society, 1998 Scientific award of the Rudolf Trauner Stiftung, 2003 Silver medal of the government of Upper Austria, 2009 References General Hypericin Group at the Institute of Organic Chemistry, Johannes-Kepler University Linz Specific External links Hypericin Group at the Institute of Organic Chemistry @ JKU Deprecated web site of the Institute of Organic Chemistry @ JKU Current web site of the Institute of Organic Chemistry @ JKU Johannes Kepler University (JKU), Linz, Austria The Chemistry of Linear Oligopyrroles and Bile Pigments Austrian Academy of Sciences, member profile Academic Tree 1939 births Living people People from Sankt Pölten 20th-century Austrian chemists Organic chemists Austrian chemists Academic staff of Johannes Kepler University Linz
Heinz Falk
Chemistry
8,307
54,817,593
https://en.wikipedia.org/wiki/Anti%20inflammatory%20agents%20in%20breast%20milk
The anti-inflammatory components in breast milk are those bioactive substances that confer or increase the anti-inflammatory response in a breastfeeding infant. References Bibliography Breastfeeding Infant feeding Immune system Breast milk
Anti inflammatory agents in breast milk
Biology
43
23,829,996
https://en.wikipedia.org/wiki/Void%20safety
Void safety (also known as null safety) is a guarantee within an object-oriented programming language that no object references will have null or void values. In object-oriented languages, access to objects is achieved through references (or, equivalently, pointers). A typical call is of the form: x.f(a, ...) where f denotes an operation and x denotes a reference to some object. At execution time, however, a reference can be void (or null). In such cases, the call above will be a void call, leading to a run-time exception, often resulting in abnormal termination of the program. Void safety is a static (compile-time) guarantee that a void call will never arise. History In a 2009 talk, Tony Hoare traced the invention of the null pointer to his design of the ALGOL W language and called it his "billion-dollar mistake". Bertrand Meyer introduced the term "void safety". In programming languages An early attempt to guarantee void safety was the design of the Self programming language. The Eiffel language is void-safe according to its ISO-ECMA standard; the void-safety mechanism is implemented in EiffelStudio starting with version 6.1 and using a modern syntax starting with version 6.4. The Spec# language, a research language from Microsoft Research, has a notion of "non-nullable type" addressing void safety. The F# language, a functional-first language from Microsoft Research running on the .NET framework, is void-safe except when interoperating with other .NET languages. Null safety based on union types Since 2011, several languages have supported union types and intersection types, which can be used to detect possible null pointers at compile time, using a special class Null of which the value null is the unique instance. Null safety based on union types first appeared in Ceylon, followed soon by TypeScript. The C# language has implemented compile-time null safety checks since version 8.0. However, to stay compatible with older versions of the language, the feature is opt-in on a per-project or per-file basis. Google's Dart language has implemented it since version 2.0, released in August 2018. Other languages that use null-safe types by default include JetBrains' Kotlin, Rust, and Apple's Swift. See also Nullable type Option type Safe navigation operator References Object-oriented programming
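As an illustration of the union-type mechanism described above, here is a minimal TypeScript sketch (assuming the strictNullChecks compiler option is enabled; findUser is a hypothetical function invented for the example):

// With strictNullChecks, the union type "string | null" makes nullability explicit.
function findUser(id: number): string | null {
  return id === 1 ? "Ada" : null; // hypothetical lookup
}

const userName = findUser(2);
// userName.length;              // rejected at compile time: 'userName' is possibly 'null'
if (userName !== null) {
  console.log(userName.length); // accepted: 'userName' is narrowed to 'string' here
}

The null test narrows the union type, so the dereference inside the branch is statically guaranteed to be void-safe without any run-time check beyond the comparison itself.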
Void safety
Technology
491
2,863,360
https://en.wikipedia.org/wiki/Trembling%20hand%20perfect%20equilibrium
In game theory, trembling hand perfect equilibrium is a type of refinement of a Nash equilibrium that was first proposed by Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability. Definition First define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy in an n-player strategic game where every pure strategy is played with positive probability. This is the "trembling hands" of the players; they sometimes play a different strategy, other than the one they intended to play. Then define a mixed strategy profile σ as being trembling hand perfect if there is a sequence σ(1), σ(2), ... of totally mixed strategy profiles that converges to σ such that, for every k and every player i, the strategy σ_i is a best reply to the other players' strategies in σ(k). Note: All completely mixed Nash equilibria are perfect. Note 2: The mixed strategy extension of any finite normal-form game has at least one perfect equilibrium. Example The game represented in the following normal-form matrix (payoffs listed as row player, column player) has two pure strategy Nash equilibria, namely <Up, Left> and <Down, Right>:

        Left   Right
Up      1, 1   2, 0
Down    0, 2   2, 2

However, only <Up, Left> is trembling-hand perfect. Assume player 1 (the row player) is playing the mixed strategy (1 − ε)U + εD, for 0 < ε < 1. Player 2's expected payoff from playing L is: 1·(1 − ε) + 2·ε = 1 + ε. Player 2's expected payoff from playing the strategy R is: 0·(1 − ε) + 2·ε = 2ε. For small values of ε, player 2 maximizes his expected payoff by placing a minimal weight on R and maximal weight on L. By symmetry, player 1 should place a minimal weight on D and maximal weight on U if player 2 is playing the mixed strategy (1 − ε)L + εR. Hence <Up, Left> is trembling-hand perfect. However, similar analysis fails for the strategy profile <Down, Right>. Assume player 2 is playing the mixed strategy εL + (1 − ε)R. Player 1's expected payoff from playing U is: 1·ε + 2·(1 − ε) = 2 − ε. Player 1's expected payoff from playing D is: 0·ε + 2·(1 − ε) = 2 − 2ε. For all positive values of ε, player 1 maximizes his expected payoff by placing a minimal weight on D and maximal weight on U. Hence <Down, Right> is not trembling-hand perfect because player 2 (and, by symmetry, player 1) maximizes his expected payoff by deviating most often to L if there is a small chance of error in the behavior of player 1. Equilibria of two-player games For 2×2 games, the set of trembling-hand perfect equilibria coincides with the set of equilibria consisting of two undominated strategies. In the example above, we see that the equilibrium <Down,Right> is imperfect, as Left (weakly) dominates Right for Player 2 and Up (weakly) dominates Down for Player 1. Equilibria of extensive form games There are two possible ways of extending the definition of trembling hand perfection to extensive form games. One may interpret the extensive form as being merely a concise description of a normal form game and apply the concepts described above to this normal form game. In the resulting perturbed games, every strategy of the extensive-form game must be played with non-zero probability. This leads to the notion of a normal-form trembling hand perfect equilibrium. Alternatively, one may recall that trembles are to be interpreted as modelling mistakes made by the players with some negligible probability when the game is played. Such a mistake would most likely consist of a player making another move than the one intended at some point during play. 
It would hardly consist of the player choosing another strategy than intended, i.e. a wrong plan for playing the entire game. To capture this, one may define the perturbed game by requiring that every move at every information set is taken with non-zero probability. Limits of equilibria of such perturbed games as the tremble probabilities go to zero are called extensive-form trembling hand perfect equilibria. The notions of normal-form and extensive-form trembling hand perfect equilibria are incomparable, i.e., an equilibrium of an extensive-form game may be normal-form trembling hand perfect but not extensive-form trembling hand perfect and vice versa. As an extreme example of this, Jean-François Mertens has given an example of a two-player extensive form game where no extensive-form trembling hand perfect equilibrium is admissible, i.e., the sets of extensive-form and normal-form trembling hand perfect equilibria for this game are disjoint. An extensive-form trembling hand perfect equilibrium is also a sequential equilibrium. A normal-form trembling hand perfect equilibrium of an extensive form game may be sequential but is not necessarily so. In fact, a normal-form trembling hand perfect equilibrium does not even have to be subgame perfect. Problems with perfection Myerson (1978) pointed out that perfection is sensitive to the addition of a strictly dominated strategy, and instead proposed another refinement, known as proper equilibrium. References Further reading Game theory equilibrium concepts Non-cooperative games
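The tremble computations in the example above can be checked numerically; the following TypeScript sketch is purely illustrative and uses the payoffs from the example matrix:

// Payoff matrices indexed as [rowStrategy][columnStrategy]; U = 0, D = 1, L = 0, R = 1.
const p1 = [[1, 2], [0, 2]]; // player 1's payoffs
const p2 = [[1, 0], [2, 2]]; // player 2's payoffs
const eps = 0.01;            // a small tremble probability

// Player 1 intends U but trembles to D with probability eps:
const evL = (1 - eps) * p2[0][0] + eps * p2[1][0]; // = 1 + eps
const evR = (1 - eps) * p2[0][1] + eps * p2[1][1]; // = 2 * eps
console.log(evL > evR); // true: L remains a best reply, so <Up, Left> survives trembles

// Player 2 intends R but trembles to L with probability eps:
const evU = eps * p1[0][0] + (1 - eps) * p1[0][1]; // = 2 - eps
const evD = eps * p1[1][0] + (1 - eps) * p1[1][1]; // = 2 - 2 * eps
console.log(evU > evD); // true: player 1 strictly prefers U, so <Down, Right> is not perfect

Both inequalities hold for every positive eps, matching the conclusion that only <Up, Left> is trembling-hand perfect.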
Trembling hand perfect equilibrium
Mathematics
1,044
18,567,298
https://en.wikipedia.org/wiki/Digital%20pattern%20generator
A digital pattern generator is a piece of electronic test equipment or software used to generate digital electronic stimuli. Digital electronic stimuli are a specific kind of electrical waveform varying between two conventional voltages that correspond to two logic states ("low state" and "high state", "0" and "1"). The main purpose of a digital pattern generator is to stimulate the inputs of a digital electronic device. For that reason, the voltage levels generated by a digital pattern generator are often compatible with digital electronics I/O standards – TTL, LVTTL, LVCMOS and LVDS, for instance. Digital pattern generators are sometimes referred to as "pulse generators" or "pulse pattern generators", and some pulse generators can function as digital pattern generators as well; hence, the distinction between the two types of equipment may not be clear. A digital pattern generator is a source of synchronous digital stimulus; the generated signal is intended for testing digital electronics at the logic level, which is why such equipment is also called a "digital logic source" or simply a "logic source". A pulse generator, by contrast, is designed to generate electrical pulses of different shapes; pulse generators are mostly used for tests at the electrical or analog level. Digital pattern generators can produce either repetitive or single-shot signals; in the single-shot case, some kind of triggering source (internal or external) is required. Types of digital pattern generators Digital pattern generators are available today as stand-alone units, as add-on hardware modules for other equipment such as a logic analyzer, or as PC-based equipment. Stand-alone units are self-contained devices that include everything from the user interface for defining the patterns to be generated to the electronic equipment that actually generates the output signal. Some test equipment manufacturers offer pattern generators as add-on modules for logic analyzers (see for example the PG3A module for Tektronix' TLA7000 series of logic analyzers or Hewlett-Packard 16520A/16522A modules for 16500-series of logic analyzers). In this case, the pattern generator is the "generation counterpart" to the analysis functionality offered by logic analyzers. PC-based digital pattern generators are connected to a PC through peripheral ports such as PCI, USB, and/or Ethernet (see, for example, the "Wave Generator Xpress" from Byte Paradigm, connected through USB). They use the PC as a user interface for defining and storing the digital patterns to be sent. Features Digital pattern generators are primarily characterized by a number of digital channels, a maximum rate, and the supported voltage standards. The number of digital channels defines the maximum width of any pattern generated; typical widths are 8, 16, or 32 bits. A 16-bit pattern generator is able to generate arbitrary digital samples on any number of bits from 1 to 16. The maximum rate defines the minimum time interval between two successive patterns. For instance, a 50 MHz (50 MSample/s) digital pattern generator is able to output a new pattern every 20 nanoseconds. The supported voltage standards ultimately define the set of electronic devices a digital pattern generator can be used with. Concretely, the voltages and the transition characteristics of the signal at the output of the digital pattern generator will comply with these voltage standards. Examples of supported voltage standards: TTL, LVTTL, LVCMOS, LVDS. 
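As a minimal illustration of the rate specification (simple reciprocal arithmetic, not tied to any particular instrument), in TypeScript:

// Minimum time between two successive patterns, in nanoseconds, for a given sample rate.
function patternIntervalNs(sampleRateHz: number): number {
  return 1e9 / sampleRateHz;
}
console.log(patternIntervalNs(50e6)); // 20 ns for a 50 MHz (50 MSample/s) generator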
Most digital pattern generators add features such as the ability to generate a repetitive sequence or a digital clock signal at a specified frequency, the ability to use an external clock input, and triggering options to start pattern generation upon the reception of an event from an external input. Common applications Digital electronics and embedded system testing and debugging Stimulation of digital signal processing hardware Digital-to-analog converter stimulation Special purpose digital pattern generators Video digital pattern generators are digital pattern generators dedicated to the generation of a specific test pattern in one particular digital video format, such as DVI or HDMI. In safety-critical technology such as automotive systems, specialized electronic systems are involved in maintaining correct functioning; these can be used to monitor brakes, motors, and airbags. Internal testing of these electronic systems is undertaken by pattern generators, e.g., a linear-feedback shift register (LFSR) that feeds the circuit under test, after which the outputs are checked for correctness. Manufacturers of digital pattern generators Berkeley Nucleonics Chroma Keysight (formerly Agilent and HP) National Instruments Tektronix Active Technologies See also Arbitrary waveform generator Pulse generator Signal generator References Witte, Robert A.: "Electronic Test Instruments: Analog and Digital Measurements, 2nd Edition", Prentice Hall, 2002 Leens, F.: "The Digital Pattern Generator - An essential instrument for digital system development", Byte Paradigm, 2010 White Paper Electronic test equipment
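To make the LFSR-style test stimulus mentioned above concrete, here is a minimal illustrative TypeScript sketch of a 16-bit Fibonacci LFSR; the feedback polynomial x^16 + x^14 + x^13 + x^11 + 1 and the seed are conventional maximal-length choices, not values taken from this article:

// A 16-bit Fibonacci LFSR producing a repeatable pseudo-random sequence of test patterns.
function makeLfsr(seed: number): () => number {
  let lfsr = seed & 0xffff;
  if (lfsr === 0) lfsr = 0xace1; // the all-zero state is a fixed point, so avoid it
  return () => {
    // Feedback bit: XOR of the taps (bits 0, 2, 3, 5 of a right-shifting register).
    const bit = (lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1;
    lfsr = ((lfsr >> 1) | (bit << 15)) & 0xffff;
    return lfsr;
  };
}

const nextPattern = makeLfsr(0xace1);
for (let i = 0; i < 4; i++) {
  console.log(nextPattern().toString(2).padStart(16, "0")); // successive 16-bit patterns
}

Because the sequence is fully determined by the seed and the taps, the same generator can be rebuilt on the tester side to predict the stimulus and check the circuit's responses.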
Digital pattern generator
Technology,Engineering
988
66,215,909
https://en.wikipedia.org/wiki/Wenxian%20Shen
Wenxian Shen is a Chinese-American mathematician known for her work in topological dynamics, almost-periodicity, waves and other spatial patterns in dynamical systems. She is the Don Logan Chair of Mathematics at Auburn University. Education Shen graduated from Zhejiang Normal University in 1982, and earned a master's degree at Peking University in 1987. She completed a Ph.D. in mathematics at the Georgia Institute of Technology in 1992, with the dissertation Stability and Bifurcation of Traveling Wave Solutions supervised by Shui-Nee Chow. Books Shen is the coauthor of two monographs, Almost Automorphic and Almost Periodic Dynamics in Skew-Product Semiflows (with Yingfei Yi, American Mathematical Society, 1998), and Spectral Theory for Random and Nonautonomous Parabolic Equations and Applications (with Janusz Mierczyński, CRC Press, 2008). References External links Home page Year of birth missing (living people) Living people 20th-century American mathematicians 21st-century American mathematicians Chinese mathematicians Chinese women mathematicians Dynamical systems theorists Zhejiang Normal University alumni Peking University alumni Georgia Tech alumni Auburn University faculty 20th-century American women mathematicians 21st-century American women mathematicians
Wenxian Shen
Mathematics
241
37,856
https://en.wikipedia.org/wiki/Alcubierre%20drive
The Alcubierre drive is a speculative warp drive idea according to which a spacecraft could achieve apparent faster-than-light travel by contracting space in front of it and expanding space behind it, under the assumption that a configurable energy-density field lower than that of vacuum (that is, negative mass) could be created. Proposed by theoretical physicist Miguel Alcubierre in 1994, the Alcubierre drive is based on a solution of Einstein's field equations. Since those solutions are metric tensors, the Alcubierre drive is also referred to as the Alcubierre metric. Objects cannot accelerate to the speed of light within normal spacetime; instead, the Alcubierre drive shifts space around an object so that the object would arrive at its destination more quickly than light would in normal space without breaking any physical laws. Although the metric proposed by Alcubierre is consistent with the Einstein field equations, construction of such a drive is not necessarily possible. The proposed mechanism of the Alcubierre drive implies a negative energy density and therefore requires exotic matter or manipulation of dark energy. If exotic matter with the correct properties does not exist, then the drive cannot be constructed. At the close of his original article, however, Alcubierre argued (following an argument developed by physicists analyzing traversable wormholes) that the Casimir vacuum between parallel plates could fulfill the negative-energy requirement for the Alcubierre drive. Another possible issue is that, although the Alcubierre metric is consistent with Einstein's equations, general relativity does not incorporate quantum mechanics. Some physicists have presented arguments to suggest that a theory of quantum gravity (which would incorporate both theories) would eliminate those solutions in general relativity that allow for backward time travel (see the chronology protection conjecture) and thus make the Alcubierre drive invalid. History In 1994, Miguel Alcubierre proposed a method for changing the geometry of space by creating a wave that would cause the fabric of space ahead of a spacecraft to contract and the space behind it to expand. The ship would then ride this wave inside a region of flat space, known as a warp bubble, and would not move within this bubble but instead be carried along as the region itself moves due to the actions of the drive. The local velocity relative to the deformed spacetime would be subluminal, but the speed at which a spacecraft could move would be superluminal, thereby rendering possible interstellar flight, such as a visit to Proxima Centauri within a few days. 
Because objects within the bubble are not moving (locally) more quickly than light, the mathematical formulation of the Alcubierre metric is consistent with the conventional claims of the laws of relativity (namely, that an object with mass cannot attain or exceed the speed of light) and conventional relativistic effects such as time dilation would not apply as they would with conventional motion at near-light speeds. An extension of the Alcubierre metric that eliminates the expansion of the volume elements and instead relies on the change in distances along the direction of travel is that of mathematician José Natário. In his metric, spacetime contracts towards the prow of the ship and expands in the direction perpendicular to the motion, meaning that the bubble actually "slides" through space, roughly speaking by "pushing space aside". The Alcubierre drive remains a hypothetical concept with seemingly difficult problems, although the amount of energy required is no longer thought to be unobtainably large. Furthermore, Alexey Bobrick and Gianni Martire claim that, in principle, a class of subluminal, spherically symmetric warp drive spacetimes can be constructed based on physical principles presently known to humanity, such as positive energy. Mathematics Using the ADM formalism of general relativity, the spacetime is described by a foliation of space-like hypersurfaces of constant coordinate time t, with the metric taking the following general form: ds² = −(α² − β_i β^i) dt² + 2 β_i dx^i dt + γ_ij dx^i dx^j, where α is the lapse function that gives the interval of proper time between nearby hypersurfaces, β^i is the shift vector that relates the spatial coordinate systems on different hypersurfaces, and γ_ij is a positive-definite metric on each of the hypersurfaces. The particular form that Alcubierre studied is defined by: α = 1, β^x = −v_s(t) f(r_s(t)), β^y = β^z = 0, γ_ij = δ_ij, where v_s(t) = dx_s(t)/dt, r_s(t) = [(x − x_s(t))² + y² + z²]^(1/2), and f(r_s) = [tanh(σ(r_s + R)) − tanh(σ(r_s − R))] / [2 tanh(σR)], with arbitrary parameters R > 0 and σ > 0. Alcubierre's specific form of the metric can thus be written: ds² = −dt² + [dx − v_s f(r_s) dt]² + dy² + dz². With this particular form of the metric, it can be shown that the energy density measured by observers whose 4-velocity is normal to the hypersurfaces is given by: −(c⁴ / 8πG) · (v_s² ρ² / 4 g² r_s²) · (df/dr_s)², where g is the determinant of the metric tensor and ρ = (y² + z²)^(1/2). Thus, because the energy density is negative, one needs exotic matter to travel more quickly than the speed of light. The existence of exotic matter is not theoretically ruled out; however, generating and sustaining enough exotic matter to perform feats such as faster-than-light travel (and to keep open the "throat" of a wormhole) is thought to be impractical. According to writer Robert Low, within the context of general relativity it is impossible to construct a warp drive in the absence of exotic matter. Connection to dark energy and dark matter Astrophysicist Jamie Farnes from the University of Oxford has proposed a theory, published in the peer-reviewed scientific journal Astronomy & Astrophysics, that unifies dark energy and dark matter into a single dark fluid, and which is expected to be testable by the Square Kilometre Array around 2030. Farnes found that Albert Einstein had explored the idea of gravitationally repulsive negative masses while developing the equations of general relativity, an idea which leads to a "beautiful" hypothesis where the cosmos has equal amounts of positive and negative qualities. Farnes' theory relies on negative masses that behave identically to the physics of the Alcubierre drive, providing a natural solution for the current "crisis in cosmology" due to a time-variable Hubble parameter. As Farnes' theory allows a positive mass (i.e. 
a ship) to reach a speed equal to the speed of light, it has been dubbed "controversial". If the theory is correct, which has been highly debated in the scientific literature, it would explain dark energy and dark matter, allow closed timelike curves (see time travel), and suggest that an Alcubierre drive is physically possible with exotic matter. Physics With regard to certain specific effects of special relativity, such as Lorentz contraction and time dilation, the Alcubierre metric has some apparently peculiar aspects. In particular, Alcubierre has shown that a ship using an Alcubierre drive travels on a free-fall geodesic even while the warp bubble is accelerating: its crew would be in free fall while accelerating without experiencing accelerational g-forces. Enormous tidal forces, however, would be present near the edges of the flat-space volume because of the large space curvature there, but a suitable specification of the metric would keep the tidal forces very small within the volume occupied by the ship. The original warp-drive metric and simple variants of it happen to have the ADM form, which is often used in discussing the initial-value formulation of general relativity. This might explain the widespread misconception that this spacetime is a solution of the field equation of general relativity. Metrics in ADM form are adapted to a certain family of inertial observers, but these observers are not really physically distinguished from other such families. Alcubierre interpreted his "warp bubble" in terms of a contraction of space ahead of the bubble and an expansion behind, but this interpretation could be misleading, since the contraction and expansion actually refer to the relative motion of nearby members of the family of ADM observers. In general relativity, one often first specifies a plausible distribution of matter and energy, and then finds the geometry of the spacetime associated with it; but it is also possible to run the Einstein field equations in the other direction, first specifying a metric and then finding the energy–momentum tensor associated with it, and this is what Alcubierre did in building his metric. This practice means that the solution can violate various energy conditions and require exotic matter. The need for exotic matter raises questions about whether one can distribute the matter in an initial spacetime that lacks a warp bubble in such a way that the bubble is created at a later time, although some physicists have proposed models of dynamical warp-drive spacetimes in which a warp bubble is formed in a previously flat space. Moreover, according to Serguei Krasnikov, generating a bubble in a previously flat space for a one-way faster-than-light trip requires forcing the exotic matter to move at local faster-than-light speeds, something that would require the existence of tachyons, although Krasnikov also notes that when the spacetime is not flat from the outset, a similar result could be achieved without tachyons by placing in advance some devices along the travel path and programming them to come into operation at preassigned moments and to operate in a preassigned manner. Some suggested methods avoid the problem of tachyonic motion, but would probably generate a naked singularity at the front of the bubble. Allen Everett and Thomas Roman comment on Krasnikov's finding (Krasnikov tube): [The finding] does not mean that Alcubierre bubbles, if it were possible to create them, could not be used as a means of superluminal travel. 
It only means that the actions required to change the metric and create the bubble must be taken beforehand by some observer whose forward light cone contains the entire trajectory of the bubble. For example, if one wanted to travel to Deneb (2,600 light-years away) and arrive less than 2,600 years in the future according to external clocks, it would be required that someone had already begun work on warping the space from Earth to Deneb at least 2,600 years ago: A spaceship appropriately located with respect to the bubble trajectory could then choose to enter the bubble, rather like a passenger catching a passing trolley car, and thus make the superluminal journey ... as Krasnikov points out, causality considerations do not prevent the crew of a spaceship from arranging, by their own actions, to complete a round trip from Earth to a distant star and back in an arbitrarily short time, as measured by clocks on Earth, by altering the metric along the path of their outbound trip. Difficulties Mass–energy requirement The metric of this form has significant difficulties because all known warp-drive spacetime theories violate various energy conditions. Nevertheless, an Alcubierre-type warp drive might be realized by exploiting certain experimentally verified quantum phenomena, such as the Casimir effect, that lead to stress–energy tensors that also violate the energy conditions, such as negative mass–energy, when described in the context of the quantum field theories. If certain quantum inequalities conjectured by Ford and Roman hold, the energy requirements for some warp drives may be unfeasibly large as well as negative. For example, the energy equivalent of −10⁶⁴ kg might be required to transport a small spaceship across the Milky Way—an amount orders of magnitude greater than the estimated mass of the observable universe. Counterarguments to these apparent problems have also been offered, although the energy requirements still generally require a Type III civilization on the Kardashev scale. Chris Van Den Broeck of the Katholieke Universiteit Leuven in Belgium, in 1999, tried to address the potential issues. By contracting the 3+1-dimensional surface area of the bubble being transported by the drive, while at the same time expanding the three-dimensional volume contained inside, Van Den Broeck was able to reduce the total energy needed to transport small atoms to less than three solar masses. Later in 2003, by slightly modifying the Van den Broeck metric, Serguei Krasnikov reduced the necessary total amount of negative mass to a few milligrams. Van Den Broeck detailed this by saying that the total energy can be reduced dramatically by keeping the surface area of the warp bubble itself microscopically small, while at the same time expanding the spatial volume inside the bubble. However, Van Den Broeck concludes that the energy densities required are still unachievable, as is the small size (a few orders of magnitude above the Planck scale) of the spacetime structures needed. In 2012, physicist Harold White and collaborators announced that modifying the geometry of exotic matter could reduce the mass–energy requirements for a macroscopic space ship from the equivalent of the planet Jupiter to that of the Voyager 1 spacecraft (c. 700 kg) or less, and stated their intent to perform small-scale experiments in constructing warp fields. 
White proposed to thicken the extremely thin wall of the warp bubble, so that the energy is spread over a larger volume while the overall peak energy density is smaller. In a flat 2D representation, the ring of positive and negative energy, initially very thin, becomes a larger, fuzzy torus (donut shape). However, as this less energetic warp bubble also thickens toward the interior region, it leaves less flat space to house the spacecraft, which therefore has to be smaller. Furthermore, if the intensity of the space warp can be oscillated over time, the energy required is reduced even more. According to White, a modified Michelson–Morley interferometer could test the idea: one of the legs of the interferometer would appear to have a slightly different length when the test devices were energised. Alcubierre has expressed skepticism about the experiment, saying: "from my understanding there is no way it can be done, probably not for centuries if at all". In 2021, physicist Erik Lentz described a way in which warp drives sourced from known, purely positive energy could exist—warp bubbles based on superluminal self-reinforcing "soliton" waves. The claim is controversial, with other physicists arguing that all physically reasonable warp drives violate the weak energy condition, as well as both the strong and dominant energy conditions. Placement of matter Krasnikov proposed that if tachyonic matter cannot be found or used, then a solution might be to arrange for masses along the path of the vessel to be set in motion in such a way that the required field was produced. But in this case, the Alcubierre drive vessel can only travel routes that, like a railroad, have first been equipped with the necessary infrastructure. The pilot inside the bubble is causally disconnected from its walls and cannot carry out any action outside the bubble: the bubble cannot be used for the first trip to a distant star because the pilot cannot place infrastructure ahead of the bubble while "in transit". For example, traveling to Vega (which is 25 light-years from Earth) requires arranging everything so that a bubble moving toward Vega with superluminal velocity would appear; such arrangements will always take more than 25 years. Coule has argued that schemes such as the one proposed by Alcubierre are infeasible because the matter placed along the intended path of a craft must itself be placed at superluminal speed—that is, constructing an Alcubierre drive requires an Alcubierre drive, even if the metric that allows it is physically meaningful. Coule further argues that an analogous objection will apply to any proposed method of constructing an Alcubierre drive. Survivability inside the bubble An article by José Natário (2002) argues that crew members could not control, steer or stop the ship in its warp bubble because the ship could not send signals to the front of the bubble. A 2009 article by Carlos Barceló, Stefano Finazzi, and Stefano Liberati uses quantum theory to argue that the Alcubierre drive at faster-than-light velocities is impossible mostly because extremely high temperatures caused by Hawking radiation would destroy anything inside the bubble at superluminal velocities and destabilize the bubble itself; the article also argues that these problems are absent if the bubble velocity is subluminal, although the drive still requires exotic matter. Damaging effect on destination Brendan McMonigal, Geraint F. 
Lewis, and Philip O'Byrne have argued that were an Alcubierre-driven ship to decelerate from superluminal speed, the particles that its bubble had gathered in transit would be released in energetic outbursts akin to the infinitely blueshifted radiation hypothesized to occur at the inner event horizon of a Kerr black hole; forward-facing particles would thereby be energetic enough to destroy anything at the destination directly in front of the ship. Wall thickness The amount of negative energy required for such a propulsion is not yet known. Pfenning and Allen Everett of Tufts hold that a warp bubble traveling at ten times the speed of light must have a wall thickness of no more than 10⁻³² meters—close to the limiting Planck length, 1.6 × 10⁻³⁵ meters. In Alcubierre's original calculations, a bubble macroscopically large enough to enclose a ship of 200 meters would require a total amount of exotic matter greater than the mass of the observable universe, and straining the exotic matter into an extremely thin band of 10⁻³² meters is considered impractical. Similar constraints apply to Krasnikov's superluminal subway. Chris Van Den Broeck constructed a modification of Alcubierre's model that requires much less exotic matter but places the ship in a curved spacetime "bottle" whose neck is about 10⁻³² meters. Causality violation and semiclassical instability Calculations by physicist Allen Everett show that warp bubbles could be used to create closed timelike curves in general relativity, meaning that the theory predicts that they could be used for backwards time travel. While it is possible that the fundamental laws of physics might allow closed timelike curves, the chronology protection conjecture hypothesizes that in all cases where the classical theory of general relativity allows them, quantum effects would intervene to eliminate the possibility, making these spacetimes impossible to realize. A possible type of effect that would accomplish this is a buildup of vacuum fluctuations on the border of the region of spacetime where time travel would first become possible, causing the energy density to become high enough to destroy the system that would otherwise become a time machine. Some results in semiclassical gravity appear to support the conjecture, including a calculation dealing specifically with quantum effects in warp-drive spacetimes that suggested that warp bubbles would be semiclassically unstable, but ultimately the conjecture can only be decided by a full theory of quantum gravity. Alcubierre briefly discusses some of these issues in a series of lecture slides posted online, where he writes: "beware: in relativity, any method to travel faster than light can in principle be used to travel back in time (a time machine)". In the next slide, he brings up the chronology protection conjecture and writes: "The conjecture has not been proven (it wouldn't be a conjecture if it had), but there are good arguments in its favor based on quantum field theory. The conjecture does not prohibit faster-than-light travel. It just states that if a method to travel faster than light exists, and one tries to use it to build a time machine, something will go wrong: the energy accumulated will explode, or it will create a black hole." Relation to Star Trek warp drive The Star Trek television series and films use the term "warp drive" to describe their method of faster-than-light travel. 
Neither the Alcubierre theory nor anything similar existed when the series was conceived—the term "warp drive" and the general concept originated with John W. Campbell's 1931 science fiction novel Islands of Space. Alcubierre stated in an email to William Shatner that his theory was directly inspired by the term used in the show, and he cites the "'warp drive' of science fiction" in his 1994 article. A USS Alcubierre appears in the Star Trek tabletop RPG Star Trek Adventures. Since the release of Star Trek: The Original Series, more recent Star Trek spin-off series have made closer use of the theory behind the Alcubierre drive, incorporating warp bubbles and fields into the in-universe science. See also EmDrive Exact solutions in general relativity (for more on the sense in which the Alcubierre spacetime is a solution) IXS Enterprise Quantum vacuum thruster Reactionless drive Spacecraft propulsion Unruh effect Notes References External links A video explaining the concept in layman's terms (hosted by John Michael Godier). A short video clip of the hypothetical effects of the warp drive. Marcelo B. Ribeiro's Page on Warp Drive Theory. Interstellar travel Warp drive theory Lorentzian manifolds Science fiction themes Hypothetical technology 1994 introductions Exact solutions in general relativity
Alcubierre drive
Astronomy,Mathematics
4,403
14,340,833
https://en.wikipedia.org/wiki/Nucleolin
Nucleolin is a protein that in humans is encoded by the NCL gene. Gene The human NCL gene is located on chromosome 2, consists of 14 exons with 13 introns, and spans approximately 11 kb. Intron 11 of the NCL gene encodes a small nucleolar RNA, termed U20. Function Nucleolin is the major nucleolar protein of growing eukaryotic cells. It is found associated with intranucleolar chromatin and pre-ribosomal particles. It induces chromatin decondensation by binding to histone H1 and is thought to play a role in pre-rRNA transcription and ribosome assembly. It may also play a role in transcriptional elongation. Nucleolin binds RNA oligonucleotides with 5'-UUAGGG-3' repeats more tightly than it binds the telomeric single-stranded DNA repeats 5'-TTAGGG-3'. Nucleolin is also able to act as a transcriptional coactivator with Chicken Ovalbumin Upstream Promoter Transcription Factor II (COUP-TFII). Clinical significance Midkine and pleiotrophin bind to cell-surface nucleolin as a low-affinity receptor. This binding can inhibit HIV infection. Nucleolin at the cell surface is the receptor for the respiratory syncytial virus (RSV) fusion protein. Interference with the nucleolin–RSV fusion protein interaction has been shown to be therapeutic against RSV infection in cell cultures and animal models. Interactions Nucleolin has been shown to interact with: MTDH, CSNK2A2, Centaurin, alpha 1, HuR, NPM1, p53, PPP1CB, S100A11, Sjögren syndrome antigen B, TOP1, and telomerase reverse transcriptase. References Further reading Proteins
Nucleolin
Chemistry
385
40,921,638
https://en.wikipedia.org/wiki/Carboceric%20acid
Carboceric acid, or heptacosanoic acid or heptacosylic acid, is a 27-carbon long-chain saturated fatty acid with the chemical formula CH3(CH2)25COOH (C27H54O2). Its name derives from a combination of the word "carbon" and κηρός (keros), meaning beeswax or honeycomb in Ancient Greek, since the acid can be found in the mineral ozokerite, also known as ozocerite. See also List of saturated fatty acids Very long chain fatty acids List of carboxylic acids References Fatty acids Alkanoic acids
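As a quick arithmetic check of the molecular formula given above, the molar mass can be summed from standard atomic masses. This is a sketch: the atomic-mass values are standard reference numbers, not figures from this article.

    # Molar mass of heptacosanoic acid, C27H54O2, from standard atomic masses.
    atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol
    formula = {"C": 27, "H": 54, "O": 2}                  # CH3(CH2)25COOH
    molar_mass = sum(atomic_mass[el] * n for el, n in formula.items())
    print(f"{molar_mass:.2f} g/mol")  # ~410.73 g/mol

The hydrogen count also follows the CnH2nO2 pattern for saturated fatty acids: with n = 27 carbons, 2n = 54 hydrogens.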
Carboceric acid
Chemistry
119
8,384,990
https://en.wikipedia.org/wiki/Network%20DVR
Network DVR (NDVR), network personal video recorder (NPVR), or remote storage digital video recorder (RS-DVR) is a network-based digital video recorder (DVR) stored at the provider's central location rather than at the consumer's private home. Traditionally, media content was stored on a subscriber's set-top box hard drive, but with NDVR the service provider owns a large number of servers on which the subscribers' media content is stored. The term RS-DVR is used by Cablevision for its version of this technology. Overview NDVR is a consumer service in which real-time broadcast television is captured in the network on a server, allowing the end user to access the recorded programs at will rather than being tied to the broadcast schedule. The NDVR system provides time-shifted viewing of broadcast programs, allowing subscribers to record and watch programs at their convenience, without the requirement of a local PVR device. It can be considered a "PVR that is built into the network". NDVR subscribers can choose from the programmes available in the network-based library, whenever they want, without needing yet another device or remote control. However, many people would still prefer to have their own PVR device, as it would allow them to choose exactly what they want to record. A local PVR also bypasses the strict rights and licensing regulations, as well as other limitations, that often prevent the network itself from providing "on demand" access to certain programmes. In contrast, RS-DVR (remote storage digital video recorder) refers to a service where a subscriber can record a program and store it on the network. A stored program is only available to the person who recorded it: should any two persons record the same program, it must for legal reasons be recorded and stored as separate copies. In essence, this implements a traditional DVR with network-based storage. In Greece, On Telecoms offers an NPVR service to all subscribers in their basic package, covering all the programming of all major national Greek TV channels for the last 72 hours. Users must agree in their contract that the company will record the national programming of the last 72 hours on their behalf, so that the company can avoid the legal complications mentioned above, since the service then works like a personal PVR. Cablevision litigation in the U.S. After Cablevision announced the RS-DVR in March 2006, several content providers including 20th Century Fox, Universal Studios, and Walt Disney sued Cablevision in federal district court. The content providers sought a permanent injunction that would effectively prevent Cablevision from implementing the system. The content providers prevailed at the district court level, and Cablevision appealed. On August 5, 2008, the 2nd U.S. Circuit Court of Appeals, in Cartoon Network, LP v. CSC Holdings, Inc., reversed the lower court decision that had found the use of RS-DVRs in violation of copyright law. It agreed with Cablevision's argument that an RS-DVR should be treated essentially the same as a customer-owned DVR; only the location of the DVR really differs. Certain content providers began the process of appealing to the U.S. Supreme Court, seeking cert in late 2008. The Supreme Court delayed hearing the case and instead referred it to the United States Solicitor General's office for the federal government's opinion on the case. 
In June 2009 the US Supreme Court refused to hear a final appeal in the Cablevision remote DVR case, thereby bringing the years-long litigation to a close. Future of RS-DVRs Because the Cablevision litigation ended favorably for the operator, the prospects for RS-DVR systems have improved. Many major U.S. cable companies are expected to implement their own RS-DVR systems, as RS-DVRs allow wider access to DVRs at a lower cost to subscribers, along with innovative new methods of advertising that appeal to advertisers. NDVRs have been launched in countries such as Hong Kong (Now TV), Singapore (recordTV.com), Italy (Vcast - Faucet PVR), Germany (shift.tv), Finland (tvkaista.fi), Lithuania (teo.lt) and other European countries. While Cablevision provided an RS-DVR that allowed an in-home set-top box to remotely record a TV show, cloud DVRs required no such in-home technology infrastructure. As such, cloud DVRs that have not sought licenses from broadcasters have been challenged and have had mixed results in various jurisdictions, as analysed by the law firm Olswang. Litigation against RecordTV.com's Cloud DVR Cloud DVRs were launched in numerous countries but were inevitably challenged in court. In Singapore, RecordTV.com became the first cloud DVR in the world to be declared legal, in a landmark lawsuit in which RecordTV.com, led by its CEO Carlos Nicholas Fernandes, fought Singapore's state-owned broadcaster, MediaCorp. MediaCorp was represented by Drew and Napier CEO and Senior Counsel Davinder Singh, who cross-examined Carlos Fernandes over 3.5 days during the trial. RecordTV.com lost the lawsuit in the High Court of Singapore, only to have the ruling overturned at the Court of Appeal, where the Court ruled in favour of RecordTV and awarded costs and damages. The litigation was described as a David-versus-Goliath battle by the Business Times, the main business newspaper owned by Singapore Press Holdings. Carlos Nicholas Fernandes was subsequently named a Young Global Leader at the World Economic Forum. The case has become one of the most seminal cases in copyright law. William Patry and David Post, both well-known legal scholars in the area of copyright, wrote about the case on their respective blogs prior to the appeal. The ruling was even cited at WIPO. See also Digital video recorder Cablevision RecordTV.com References Digital video recorders
Network DVR
Technology
1,237
9,861,601
https://en.wikipedia.org/wiki/Alexander%20Nesmeyanov
Alexander Nikolayevich Nesmeyanov (9 September 1899 – 17 January 1980) was a Soviet chemist and academician (1943) specializing in organometallic chemistry. Biography He was born in Moscow. He had two brothers, Vasily (born 1904) and Andrei (born 1911), and a sister, Tatyana (born 1908); two other sisters died in infancy. His father, Nikolai Vasilyevich Nesmeyanov, graduated with distinction from the Vladimir Gymnasium and then from the Faculty of Law of Moscow University. He became interested in public education and worked as a village teacher in Bushovo (Tula province) for 10 years. He married in 1898 and worked for the Moscow city government, then served as director of the Bakhrushinsky orphanage in Moscow (1901–1917). Alexander's mother, Lyudmila Danilovna (1878–1958), was a multi-talented teacher. At the age of ten Alexander became a vegetarian, and in 1913 he also stopped eating fish. It was not easy to keep to this conviction, especially in the famine years of 1918–1921, when roach and herring were essential food products. He became interested in various branches of biology—entomology, hydrobiology, ornithology—and from the age of thirteen was drawn to chemistry. Education In 1909, his parents sent Alexander to P. N. Strakhov's private Moscow gymnasium, from which he graduated with honors. In 1917, he entered the natural sciences department of the Faculty of Physics and Mathematics at Moscow University; because of the revolution there were no entrance exams. Studying in this difficult time required great self-sacrifice and fanatical dedication: students worked in unheated rooms with insufficient laboratory equipment, and, as transport was poor, Alexander sometimes had to walk to the university from Sokolniki. In 1920, classes at Moscow State University were suspended because of problems with heating, and Nesmeyanov entered the Military Pedagogical Academy on Bolshaya Gruzinskaya Street, while also working in the laboratories of the Shanyavsky Moscow City People's University. By the end of 1920, he had returned to his studies at the academy and at Moscow University, where heating had been restored. There he met his future scientific mentor, Professor N. D. Zelinsky. While working as a night watchman at the faculty, Nesmeyanov lived in Zelinsky's laboratory, devoting all his time to scientific experiments. After university After graduating from the university in 1922, Nesmeyanov remained in the department of Academician N. D. Zelinsky. He held the positions of assistant (1924–1938), associate professor, and professor (from 1935), and headed the Department of Organic Chemistry at the Institute of Fine Chemical Technology from 1938. From 1939 to 1954 Nesmeyanov was director of the Institute of Organic Chemistry of the Academy of Sciences of the Soviet Union. In 1939 he was elected a corresponding member of the Academy of Sciences of the Soviet Union, and in 1943 a full academician. From 1946 to 1951 he was Academician-Secretary of the Department of Chemical Sciences of the Academy of Sciences of the Soviet Union, and he was a member of the VKP(b) from 1944. At the end of the Great Patriotic War, Nesmeyanov returned to his native university: he headed the Department of Organic Chemistry (1944–1958), was dean of the Faculty of Chemistry (1944–1948), and then became rector of Moscow State University (1948–1951). 
Thanks to Nesmeyanov's research in the field of organometallic compounds during the war and post-war years, a number of results of great theoretical and practical importance were obtained. These studies were aimed at developing methods for the synthesis, and at studying the chemical properties, of various representatives of an important and extensive class of compounds located at the junction of inorganic and organic chemistry. During Nesmeyanov's rectorship, construction began on a large complex of university buildings on the Sparrow (Lenin) Hills. Under his leadership, commissions were created to develop technical specifications for housing university departments at the new location. They worked in close creative contact with the group of architects (full members of the Academy of Architecture of the USSR L. V. Rudnev and S. E. Chernyshev, and architects A. F. Khryakov and P. V. Abrosimov) and with the builders (A. N. Komarovsky, A. V. Voronkov). Alongside this colossal construction, the university's structure was developed and its curricula were improved: courses on the history of science were introduced into the curricula of the natural-science faculties. In 1948, the Faculty of Biology was reorganized into the Faculty of Biology and Soil Science. In 1949 construction began on an agrobiological station in Chashnikovo, and the Faculty of Geology was created, with new departments of crystallography and crystal chemistry and of the history of geological sciences. In 1950, the university assisted the University of Chisinau with literature and equipment. In 1951, after the death of the President of the Academy of Sciences of the USSR, S. I. Vavilov, Nesmeyanov was summoned by Politburo member G. M. Malenkov, who offered him the vacant post. On February 16, 1951, at an extraordinary session of the general meeting of the Academy of Sciences, Nesmeyanov was elected its president. In 1952, he founded the Institute for Scientific Information. In 1954, he opened the Institute of Organoelement Compounds of the Academy of Sciences of the USSR, which he led until his death (the institute now bears the name of A. N. Nesmeyanov). On May 19, 1961, Nesmeyanov resigned as president of the Academy of Sciences of the Soviet Union of his own free will. In May 1969, at a meeting of the Academic Council of the Institute of Organoelement Compounds, Nesmeyanov spoke out against the election of Rokhlin, a candidate of chemical sciences, as a senior researcher, stating: "I am a vindictive person. Last year, Rokhlin was among those who, at an institute rally, spoke out against the introduction of Soviet troops into Czechoslovakia." This speech did not affect the results of the vote, and Rokhlin was elected a senior researcher. Nesmeyanov was a deputy of the Supreme Soviet of the USSR of the 3rd–5th convocations (1950–1962). He was one of the academicians of the Academy of Sciences of the USSR who in 1973 signed a letter from scientists to the newspaper Pravda condemning "the behavior of Academician A. D. Sakharov". In the letter, Sakharov was accused of having "made a number of statements discrediting the state system, the foreign and domestic policy of the Soviet Union", and the academicians assessed his human rights activities as "defaming the honor and dignity of the Soviet scientist". Nesmeyanov was interested in literature and painting, wrote poetry and sketches, and was an avid mushroom picker. He died on January 17, 1980. 
He was buried in Moscow at the Novodevichy Cemetery. Family He was married twice. His first wife, Nina Vladimirovna Koperina (1900–1986), was a chemist who worked at Moscow State University. Children from the first marriage: Olga (1930–2014), candidate of chemical sciences; Nikolai (1932–1992), Doctor of Chemical Sciences, professor. His second wife, Marina Anatolyevna Vinogradova (1921–2013), was a philologist and writer. Nesmeyanov had two brothers and a sister: Andrey Nesmeyanov (1911–1983), radiochemist, head of the radiochemistry department of Moscow State University, professor, corresponding member of the Academy of Sciences of the Soviet Union; Vasily Nesmeyanov (1904–1941), deputy head of the Topographic and Geodetic Service of the Main Directorate of Geodesy and Cartography under the Council of People's Commissars of the USSR, who was shot on July 28, 1941 on charges of espionage and rehabilitated on September 17, 1955 by decision of the Military Collegium of the Supreme Court of the USSR; and Tatyana Nesmeyanova (1908–1991). Scientific activity Nesmeyanov was one of the greatest organic chemists of the 20th century. He carried out a number of fundamental studies on the theory of the structure and reactivity of organic compounds, and he created a new discipline lying on the border of inorganic and organic chemistry, which came to be called the "chemistry of organoelement compounds". He also researched the production of synthetic food, the creation of new drugs, and the synthesis of a number of technical materials. Nesmeyanov's diazo method (Nesmeyanov's reaction) Studying the decomposition, by copper powder, of double salts of aryldiazonium halides with mercury halides, Nesmeyanov proposed in 1929 a new method for obtaining arylmercury halides. Later, the diazo method was extended to the synthesis of organometallic compounds of thallium, germanium, tin, lead, arsenic, antimony, and bismuth. The features of the diazo method that distinguish it from direct metalation methods are the possibility of obtaining organometallic compounds with different functional groups on the carbon radical and the possibility of selectively introducing a metal atom into a particular position. In 1935–1948, Nesmeyanov, together with K. A. Kocheshkov, obtained organic derivatives of tin, lead, antimony and other metals. Owing to mutual transitions from the organic derivatives of some elements to the organic compounds of others, organometallic compounds obtained by the diazo method have found new applications in synthesis. Stereochemistry of unsaturated organometallic compounds Nesmeyanov's study of the products of the addition of mercuric chloride to ethylene, acetylene and their derivatives led to the concept of the "dual reactivity" of a substance and the "transfer of the reaction center" along chains of π,π-, σ,π-, σ,σ- and p,π-conjugation in a molecule. Further research showed that these phenomena are fundamentally different from tautomerism. With the participation of his colleague A. E. Borisov, Nesmeyanov formulated the rule according to which electrophilic and homolytic substitution at the olefinic carbon atom occurs with preservation of the geometric configuration (the Nesmeyanov–Borisov rule). Through research that demonstrated the enolate structure of ketone derivatives with alkali metals and magnesium, that is, the existence of O-derivatives of ketones, Nesmeyanov refuted Knorr's concept of "pseudomerism". 
Metallotropy In studying the structure of organomercury, organolead and organotin derivatives of nitrosophenols, Nesmeyanov discovered the phenomenon of metallotropy, that is, a special tautomerism in which a reversible transfer of an organometallic group occurs. Joint studies by A. N. Nesmeyanov and I. F. Lutsenko revealed heteroatomic tautomerism (between carbon and oxygen atoms) in keto-enol systems of organotin and organogermanium compounds. Nesmeyanov, together with Yu. A. Ustynyuk and N. A. Ustynyuk, discovered a new type of metallotropy: it was found that in fluorenylchromium tricarbonyl anions the η6-complexes reversibly isomerize, in equilibrium, into η5-complexes. Research on ferrocene and its derivatives In 1954, research into the chemistry of ferrocene began at the Department of Organic Chemistry at Moscow State University and at INEOS under the direction of Nesmeyanov. It turned out that the functional derivatives of ferrocene react similarly to aromatic compounds. However, it was shown that the electronic effects of the substituents are transmitted through the metallocene core by an inductive mechanism, and therefore have a smaller effect than in benzene derivatives. Research on ferrocene and its derivatives made it possible to create a number of photosensitive compositions that allow a stable image to be obtained on paper, fabric, plastics and metals, and also led to the creation of a new drug, ferrocerone, for treating diseases associated with iron deficiency. On the basis of cymantrene, Nesmeyanov proposed a new antiknock agent for motor gasoline. Research in organic chemistry Nesmeyanov, together with N. K. Kochetkov and M. I. Rybinskaya, developed a method for the synthesis of various five- and six-membered heterocycles, based on the high activity of the carbonyl group and the mobility of the β-substituent in compounds of the type RCOCH=CHX. The same group of scientists developed the method of "β-ketovinylation", which consists in introducing an RCOCH=CH group into a molecule. The reaction of β-substituted vinyl ketones with the azide ion made it possible to study the stereochemistry, and propose a mechanism, of nucleophilic substitution at the activated double bond. In collaboration with other scientists, Nesmeyanov carried out a number of works in the field of radical telomerization and radical rearrangements. In addition to studies of already known reactions, the thermal telomerization of ethylene and propylene with silicon hydrides and other new telomerization reactions were developed. New routes were also found for the synthesis of compounds containing groups such as CCl3, CCl3CHCl, CCl3C<, CCl2=CH, CCl2=CHX and others. The study of compounds containing the CCl3-C=C< group showed rearrangement during reactions in which the terminal atoms of the system are attacked, which confirmed the presence of σ,π-conjugation. Nesmeyanov, together with R. H. Freidlina and V. N. Kost, discovered the chain radical isomerization of CCl3CBr=CH2 to CCl2=CClCH2Br under ultraviolet illumination. In continuation of the work based on the diazo method created earlier, Nesmeyanov and L. G. Makarova investigated the mechanism of decomposition of aryldiazonium and diaryliodonium salts. This made it possible to synthesize new types of onium compounds: diphenylbromonium, diphenylchloronium and triphenyloxonium salts. Together with T. P. 
Tolstaya and other scientists, Nesmeyanov showed that double salts of diphenylbromonium and diphenylchloronium halides with heavy-metal halides are decomposed by powders of the corresponding metals with the formation of organometallic compounds. Thus, the diazo method began to be used to obtain σ-aryl complexes of transition metals and other organometallic compounds. The scientific basis for obtaining new forms of food In 1961, Nesmeyanov formulated the idea of obtaining food by synthetic methods, bypassing agriculture. The idea was based on the works of D. I. Mendeleev and M. Berthelot, as well as on an awareness of the modern possibilities of organic synthesis, the problems of preserving the environment, and the efficiency of food production. The main areas of work were: the development of highly efficient methods for obtaining nutrients; and the reproduction of the appearance, taste, smell, color, shape, consistency and other properties of natural products in synthetic food substances. As a result of research at INEOS, processes were developed for obtaining black caviar, new forms of potato products, pasta and cereals, and combined meat products based on vegetable and animal proteins. Recognition Nesmeyanov's work on the chemistry of organoelement compounds brought him fame and recognition not only in the Soviet Union but also worldwide. He was elected an honorary member of several dozen foreign national academies and scientific societies. Awards and prizes Stalin Prize, first degree (1943), for research in the field of organometallic compounds, the results of which were published in 1941 and 1942 in a series of articles ("On the interaction of diazoacetic ether with tin chloride and ferric chloride", "From the field of organomercuric compounds", "On the reaction of nitroso compounds with nitric oxide") and in the monograph "Synthetic Methods in the Field of Organometallic Compounds of Mercury" (1942) Lenin Prize (1966), for a cycle of research in the field of organoelement compounds Twice Hero of Socialist Labour (1969, 1979) Gold Medal named after D. I. Mendeleev (1977), for a series of works in the field of organometallic compounds and on obtaining food from non-traditional sources 5th Mendeleev Reading Large Gold Medal named after M. V. 
Lomonosov of the Academy of Sciences of the Soviet Union (1962) Seven Orders of Lenin (11/04/1944; 06/10/1945; 09/19/1953; 09/08/1959; 04/27/1967; 03/13/1969; 09/07/1979) Order of the October Revolution (09/13/1974) Order of the Red Banner of Labour (09/14/1949) Silver medal of the World Peace Council (1959) Academies and societies Honorary Member of the Academy of Sciences of the Armenian SSR Honorary Member of the Academy of Sciences of the Tajik SSR Honorary Member of the Academy of Sciences of the Turkmen SSR Honorary Member of the Bulgarian Academy of Sciences (1952) Honorary Member of the Hungarian Academy of Sciences (1953) Honorary Member of the Romanian Academy of Sciences (1957) Honorary Member of the New York Academy of Sciences, USA (1958) Honorary Member of the American Academy of Arts and Sciences in Boston (1960) Honorary Member of the London Chemical Society Honorary Member of the Society of Chemical Industry of Great Britain Honorary Member of the Polish Chemical Society Honorary Member of the National Institute of Sciences of India Honorary Member of the Royal Society of Edinburgh Full member of the German Academy of Naturalists "Leopoldina" (1959) Full member of the International Academy of Astronautics (1966) Member of the Polish Academy of Sciences (1954) Full member of the Czechoslovak Academy of Sciences (1957) Foreign Member of the Royal Society of London (1961) Foreign member of the GDR Academy of Sciences (1950) Member of the European Society of Culture Doctor honoris causa of the University of Paris (1964) Doctor honoris causa of the University of Bordeaux (1966) Doctor honoris causa of the University of Jena Doctor honoris causa of the University of Calcutta Doctor honoris causa of the Iasi Polytechnic Institute Member of the World Peace Council (1950) Memory The A. N. Nesmeyanov Institute of Organoelement Compounds of the Russian Academy of Sciences bears his name. In front of the Institute building a memorial bust was installed (sculptor Oleg Komov), and the Institute holds an annual day of remembrance of A. N. Nesmeyanov attended by his relatives and former graduate students. On September 26, 1980, one of the streets of the Gagarinsky district of Moscow was named after him. The Russian Academy of Sciences established the A. N. Nesmeyanov Prize, awarded since 1994 for outstanding work in the chemistry of organoelement compounds. In December 1980, a stamp in memory of A. N. Nesmeyanov was issued in the USSR. Alexander Petrovich Kazantsev dedicated his novel The Dome of Hope to him with the words: "To the bright memory of the Hero of Socialist Labor, Academician Alexander Nikolayevich NESMEYANOV, as a token of admiration for his life and work, I dedicate this novel-dream. The author." In 1981, a memorial plaque with his name was unveiled at the Faculty of Chemistry of Moscow State University. A Vityaz-class research ship bore the name of Academician Nesmeyanov from 1980 to 2009. Main works Nesmeyanov A. N. D. I. Mendeleev's Periodic Table of Elements and Organic Chemistry. Series: Reports at the plenary session / VIII Mendeleev Congress on General and Applied Chemistry. Moscow: Publishing House of the Academy of Sciences of the USSR, 1959. Nesmeyanov A. N. Selected Works. Edited by Acad. A. V. Topchiev. Moscow: Publishing House of the Academy of Sciences of the USSR, 1959. Ioffe S. T. and Nesmeyanov A. N. Magnesium, Beryllium, Calcium, Strontium, Barium. Edited by A. N. Nesmeyanov and K. A. Kocheshkov. Series: Methods of elemental organic chemistry. Moscow: Publishing House of the Academy of Sciences of the USSR, 1963. Nesmeyanov A. N. and Sokolik R. A. Boron. Aluminum. Gallium. Indium. Thallium. Edited by A. N. Nesmeyanov and K. A. Kocheshkov. 
Series: Methods of elemental organic chemistry. Moscow: Publishing House of the Academy of Sciences of the USSR, 1964. Makarova L. G. and Nesmeyanov A. N. Mercury. Edited by A. N. Nesmeyanov and K. A. Kocheshkov. Series: Methods of elemental organic chemistry. Moscow: Publishing House of the Academy of Sciences of the USSR, 1965. Nesmeyanov A. N. and Belikov V. M. The Problem of Food Synthesis. Series: Report at the plenary session / XI Mendeleev Congress on General and Applied Chemistry. Moscow: Nauka, 1965. Nesmeyanov A. N. Research in Organic Chemistry. Selected works 1959–1969. Moscow: Nauka, 1971. Nesmeyanov A. N. and Nesmeyanov N. A. The Beginnings of Organic Chemistry. In two books. Moscow: Chemistry, 1969. Nesmeyanov A. N. and Nesmeyanov N. A. The Beginnings of Organic Chemistry. In two books. Moscow: Chemistry, 1970. References Literature Moscow University in the Great Patriotic War, 4th edition, revised and supplemented. Moscow: Moscow State University, 2020. pp. 65, 116–118; 551 pp. ISBN 978-5-19-011499-7. Great Soviet Encyclopedia. Article: Nesmeyanov Alexander Nikolayevich. Levchenkov S. I. Great Russian Encyclopedia. Article: Nesmeyanov Alexander Nikolayevich. Nesmeyanov M. A. The Light of Love: Memories of Alexander Nikolayevich Nesmeyanov. Moscow: Nauka Publishing House, 1999. ISBN 5-02-008355-0. Goryacheva R. I., Orlova V. Ya., Fokin A. V., et al. Alexander Nikolayevich Nesmeyanov: 1899–1980. Moscow: Nauka Publishing House, 1992. ISBN 5-02-001607-1. Ilchenko E. V., Ilchenko V. I. Academician Alexander Nikolayevich Nesmeyanov, Rector of Moscow University and President of the Academy of Sciences of the Soviet Union. Moscow: Moscow State University, 2014. 440 p. ISBN 978-5-19-010865-1. External links V. N. Zagrebaeva, Alexander Nikolayevich Nesmeyanov // website of the Russian Academy of Sciences electronic library "Scientific Heritage of Russia". Nesmeyanov Alexander Nikolayevich. Monument to Academician Nesmeyanov near INEOS RAS. 1899 births 1980 deaths 20th-century Russian chemists Scientists from Moscow Foreign members of the Royal Society Foreign fellows of the Indian National Science Academy Full Members of the USSR Academy of Sciences Moscow State University alumni Presidents of the Russian Academy of Sciences Presidents of the USSR Academy of Sciences Rectors of Moscow State University Members of the Central Committee of the 19th Congress of the Communist Party of the Soviet Union Members of the Central Committee of the 20th Congress of the Communist Party of the Soviet Union Members of the Supreme Soviet of the Russian Soviet Federative Socialist Republic, 1947–1951 Third convocation members of the Soviet of the Union Fourth convocation members of the Soviet of the Union Fifth convocation members of the Soviet of the Union Heroes of Socialist Labour Recipients of the Lenin Prize Recipients of the Lomonosov Gold Medal Recipients of the Order of Lenin Recipients of the Order of the October Revolution Recipients of the Order of the Red Banner of Labour Recipients of the Stalin Prize Russian organic chemists Russian vegetarianism activists Soviet organic chemists Burials at Novodevichy Cemetery
Alexander Nesmeyanov
Chemistry,Technology
5,253
16,796,781
https://en.wikipedia.org/wiki/HD%2027894%20b
HD 27894 b is a gas giant with a mass at least two thirds that of Jupiter, or about twice that of Saturn. The planet orbits its star at about one third of Mercury's distance from the Sun, and it takes almost exactly 18 days to complete one roughly circular orbit. References External links Exoplanets discovered in 2005 Giant planets Reticulum Exoplanets detected by radial velocity
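As a quick plausibility check, Kepler's third law links the 18-day period quoted above to the one-third-of-Mercury orbital distance. This is only a sketch: the stellar mass of roughly 0.8 solar masses is an assumed value typical of a K dwarf like HD 27894, not a number from this article.

    # Kepler's third law in solar units: a^3 = M_star * T^2 (a in AU, T in years).
    M_star = 0.8                  # assumed stellar mass in solar masses (not from the article)
    T_years = 18.0 / 365.25       # ~18-day period converted to years
    a_au = (M_star * T_years**2) ** (1.0 / 3.0)
    print(f"a = {a_au:.3f} AU")                  # ~0.125 AU
    print(f"a / Mercury = {a_au / 0.387:.2f}")   # ~0.32, i.e. about one third

Under the assumed stellar mass, the 18-day period indeed corresponds to a semi-major axis of roughly 0.125 AU, about a third of Mercury's 0.387 AU.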
HD 27894 b
Astronomy
91
74,259,977
https://en.wikipedia.org/wiki/Geoffrey%20Brooks
Geoffrey Brooks (born 9 November 1962) is a Professor of Engineering at the Swinburne University of Technology, known for his work on the fundamentals of steelmaking and non-ferrous metallurgy. His research in these fields has earned him awards from organizations such as the Association for Iron and Steel Technology (AIST), the Minerals, Metals and Materials Society (TMS) and the Institute of Materials, Minerals and Mining (IOM3), as well as several best-paper awards with his co-workers in the field of pyrometallurgy. Research Brooks' key work includes modelling of steelmaking, leading teams working on the interaction of jets and liquids in steelmaking, heat transfer in steelmaking, and the reaction kinetics of the steelmaking process. Collaborating with researchers at McMaster University and Swinburne University of Technology, he developed the Bloated Droplet Theory in oxygen steelmaking, correlating steelmaking kinetics with iron droplets that bloat as they react with FeO-rich slag. Media recognition Brooks has been interviewed on several occasions in the Australian media on matters relating to the Australian steel industry and on his research into processing minerals on the Moon. He has been a regular contributor to The Conversation, commenting on a range of issues relating to the metallurgical industry. References External links Steelmaking Swinburne University of Technology Australian scientists 1962 births Living people
Geoffrey Brooks
Chemistry
280
4,229,296
https://en.wikipedia.org/wiki/NGC%205164
NGC 5164 is a barred spiral galaxy in the constellation Ursa Major. It was discovered by William Herschel on April 14, 1789. References External links Barred spiral galaxies Ursa Major
NGC 5164
Astronomy
48
72,395,982
https://en.wikipedia.org/wiki/KIF25
Kinesin family member 25 (KIF25), also known as kinesin-14, is a human protein encoded by the KIF25 gene. It is part of the kinesin family of motor proteins. Function KIF25 is a minus-end directed microtubule motor protein, and its activity delays the separation of chromosomes during mitosis. References
KIF25
Chemistry
77
48,204,802
https://en.wikipedia.org/wiki/TOPCIT
Test Of Practical competency in ICT (TOPCIT) is a performance-evaluation-centered test designed to diagnose and assess the competency of information technology specialists and software developers that is critically needed to perform jobs on the professional frontier. TOPCIT was developed and is administered by Korea's Ministry of Science, ICT and Future Planning and the Institute for Information and Communications Technology Planning and Evaluation, the government agencies that oversee and manage ICT-related R&D, policy, and HR development. Background Companies and higher education institutions voiced the need for a standardized and objective competency index that could reinforce the on-site competency of ICT/SW college students and narrow the gap between the viewpoints of industrial and academic circles regarding the qualifications of a competent specialist in the field. Objective TOPCIT was developed to objectively assess the competency of those planning to enter the ICT field, with the aims of improving the quality of ICT/SW education at universities, easing the manpower shortage experienced by ICT companies, and expanding the growth potential of ICT/SW industries and the education system. The resulting data assist universities in admitting students and industries in hiring new recruits. TOPCIT measures competency by evaluating test-takers' answers to a series of creative problem-solving questions and by assessing their executive ability. Participating companies and universities A total of 269 people from 231 companies and educational institutions participated in founding TOPCIT in August 2013. Through the joint efforts of companies and academia, TOPCIT was created with the objective of closing the gap between industry and academic circles regarding the practical qualifications of a competent specialist in this field. Through this systematic industry-academia network, the gap between the skilled workers that companies demand and the graduates that educational institutions produce is also expected to narrow. Contents TOPCIT has a total of 65 questions worth up to 1,000 points. There are four types of questions: multiple choice, short answer, descriptive writing, and critical thinking. TOPCIT comprises a technical field and a business field. Technical field The technical field tests the ability to develop software, design and operate databases, and understand and utilize network security. Software: The software module tests understanding of software and the ability to analyze and design software, develop and test software, manage software, and implement integrated technology. Database: The database module tests knowledge of the concepts and structure of databases, the ability to design, program, and operate databases, and the understanding of database applications. Network and security: this module tests knowledge of network concepts, network infrastructure technology, network application technology, and IT security, the ability to operate IT security, and knowledge of the latest IT security technology and standards. Business field The business field tests understanding of IT business, technical communication skills, and project management. IT business: this module consists of understanding IT business and utilizing IT business. Technical communication: this module consists of understanding business communications and utilizing technical documentation. 
Project management: this module consists of understanding of projects, project management, and project tools and evaluation. Test scores There are five TOPCIT competency levels. Notes and references External links Test of practical competency in ICT Information technology qualifications
TOPCIT
Technology
634
3,259,030
https://en.wikipedia.org/wiki/Polarimeter
A polarimeter is a scientific instrument used to measure optical rotation: the angle of rotation caused by passing linearly polarized light through an optically active substance. Some chemical substances are optically active, and linearly polarized (uni-directional) light will rotate either to the left (counterclockwise) or the right (clockwise) when passed through these substances. The amount by which the light is rotated is known as the angle of rotation. The direction (clockwise or counterclockwise) and magnitude of the rotation reveal information about the sample's chiral properties, such as the relative concentration of enantiomers present in the sample. History Polarization by reflection was discovered in 1808 by Étienne-Louis Malus (1775–1812). Measuring principle The ratio, the purity, and the concentration of two enantiomers can be measured via polarimetry. Enantiomers are characterized by their property of rotating the plane of linearly polarized light; such compounds are therefore called optically active, and this property is referred to as optical rotation. Light sources such as a light bulb, a tungsten halogen lamp, or the sun emit electromagnetic waves at the frequencies of visible light. Their electric field oscillates in all possible planes relative to their direction of propagation. In contrast, the waves of linearly polarized light oscillate in parallel planes. If light encounters a polarizer, only the part of the light that oscillates in the defined plane of the polarizer may pass through. That plane is called the plane of polarization, and it is turned by optically active compounds. According to the direction in which the light is rotated, the enantiomer is referred to as dextrorotatory or levorotatory. The optical activity of enantiomers is additive: if different enantiomers exist together in one solution, their optical activities add up. That is why racemates are optically inactive, as their clockwise and counterclockwise optical activities cancel. The optical rotation is proportional to the concentration of the optically active substances in solution. Polarimeters may therefore be applied for concentration measurements of enantiomer-pure samples. With a known concentration of a sample, polarimeters may also be applied to determine the specific rotation when characterizing a new substance. The specific rotation is a physical property, defined via the optical rotation α at a path length l of 1 dm, a concentration c of 10 g/L, a temperature T (usually 20 °C) and a light wavelength λ (usually the sodium D line at 589.3 nm): [α]_λ^T = α / (l·c). This tells us how much the plane of polarization is rotated when the ray of light passes through a specific amount of optically active molecules of a sample. Therefore, the optical rotation depends on temperature, concentration, wavelength, path length, and the substance being analyzed. Construction The polarimeter is made up of two Nicol prisms (the polarizer and the analyzer). The polarizer is fixed and the analyzer can be rotated. The prisms may be thought of as slits S1 and S2, and the light waves may be considered to correspond to waves in a string. The polarizer S1 allows only those light waves which move in a single plane, causing the light to become plane polarized. When the analyzer is placed in a similar orientation, it allows the light waves coming from the polarizer to pass through it. When it is rotated through a right angle, no waves can pass through and the field appears dark. 
If a glass tube containing an optically active solution is now placed between the polarizer and analyzer, the solution rotates the plane of polarization through a certain angle, and the analyzer will have to be rotated through the same angle. Operation Polarimeters measure this by passing monochromatic light through the first of two polarising plates, creating a polarized beam. This first plate is known as the polarizer. The plane of polarization is then rotated as the beam passes through the sample. After passing through the sample, a second polarizer, known as the analyzer, rotates either via manual rotation or automatic detection of the angle. When the analyzer is rotated such that all the light or no light can pass through, one can find the angle of rotation, which is equal to the angle θ by which the analyzer was rotated in the former case, or 90° − θ in the latter case. Types of polarimeter Laurent's half-shade polarimeter When plane-polarised light passes through some crystals, the velocity of left-polarized light differs from that of right-polarized light; such crystals are said to have two refractive indices, i.e. to be doubly refracting. Construction: The polarimeter consists of a monochromatic source S placed at the focal point of a convex lens L. Just after the convex lens there is a Nicol prism P which acts as a polariser. H is a half-shade device which divides the field of polarized light emerging from the Nicol P into two halves, generally of unequal brightness. T is a glass tube filled with an optically active solution. The light, after passing through T, is allowed to fall on the analyzing Nicol A, which can be rotated about the axis of the tube. The rotation of the analyzer can be measured with the help of a scale C. Working principle: To understand the need for a half-shade device, suppose first that it is not present. The position of the analyzer is adjusted so that the field of view is dark when the tube is empty, and this position is noted on the circular scale. Now the tube is filled with the optically active solution and set in its proper position. The optically active solution rotates the plane of polarization of the light emerging from the polarizer P by some angle, so the light is transmitted by analyzer A and the field of view of the telescope becomes bright. The analyzer is then rotated by a finite angle so that the field of view of the telescope again becomes dark. This happens only when the analyzer is rotated by the same angle by which the plane of polarization was rotated by the optically active solution. The position of the analyzer is again noted; the difference between the two readings gives the angle of rotation of the plane of polarization. A difficulty with this procedure is that, as the analyzer is rotated toward total darkness, the darkness is approached gradually, and it is therefore difficult to locate the exact position of complete darkness. To overcome this difficulty, the half-shade device is introduced between the polarizer P and the glass tube T. Half-shade device: It consists of two semicircular plates ACB and ADB. One half, ACB, is made of glass, while the other half is made of quartz; both halves are cemented together. The quartz is cut parallel to the optic axis. The thickness of the quartz is selected in such a way that it introduces a path difference of λ/2 between the ordinary and extraordinary rays. 
The thickness of the glass is selected in such a way that it absorbs the same amount of light as the quartz half. Consider that the vibration of polarization is along OP. On passing through the glass half the vibrations remain along OP, but on passing through the quartz half these vibrations split into O- and E-components. The E-component is parallel to the optic axis, while the O-component is perpendicular to it. The O-component travels faster in quartz, and hence on emergence the O-component will be along OD instead of along OC. The components along OA and OD then combine to form a resultant vibration along OQ, which makes the same angle with the optic axis as OP. Now if the principal plane of the analyzing Nicol is parallel to OP, the light will pass through the glass half unobstructed; the glass half will then be brighter than the quartz half, or we can say that the glass half will be bright and the quartz half dark. Similarly, if the principal plane of the analyzing Nicol is parallel to OQ, the quartz half will be bright and the glass half dark. When the principal plane of the analyzer is along AOB, both halves are equally bright; on the other hand, if it is along DOC, both halves are equally dark. Thus it is clear that if the analyzing Nicol is slightly disturbed from DOC, one half becomes brighter than the other. Hence, by using the half-shade device, one can measure the angle of rotation more accurately. Determination of specific rotation: In order to determine the specific rotation of an optically active substance (say, sugar), the polarimeter tube is first filled with pure water and the analyzer is adjusted for the equal-darkness point (both halves equally dark). The position of the analyzer is noted with the help of the scale. The polarimeter tube is then filled with a sugar solution of known concentration, and the analyzer is again adjusted so that the equal-darkness point is reached. The position of the analyzer is again noted, and the difference between the two readings gives the angle of rotation θ. Hence the specific rotation S is determined as S = θ/(L·C), where L is the optical path length and C is the concentration of the substance (a short numerical example appears below). Biquartz polarimeter A biquartz polarimeter uses a biquartz plate, consisting of two semicircular plates of quartz, each of thickness 3.75 mm. One half consists of right-handed optically active quartz, while the other is left-handed optically active quartz. Manual The earliest polarimeters, which date back to the 1830s, required the user to physically rotate one polarizing element (the analyzer) whilst viewing through another static element (the detector). The detector was positioned at the opposite end of a tube containing the optically active sample, and the user used his or her eye to judge the "alignment" at which the least light was observed. The angle of rotation was then read from a simple scale fixed to the moving polariser, to within a degree or so. Although most manual polarimeters produced today still adopt this basic principle, the many developments applied to the original opto-mechanical design over the years have significantly improved measurement performance. The introduction of a half-wave plate increased "distinction sensitivity", whilst a precision glass scale with vernier drum facilitated the final reading to within ca. ±0.05°. 
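Returning to the specific-rotation relation S = θ/(L·C) given above, here is the promised minimal numeric sketch. The sucrose value of about +66.5°·mL·g⁻¹·dm⁻¹ is a commonly quoted specific rotation, and the tube length and observed angle are invented for illustration; none of these numbers come from this article.

    # Concentration of a sucrose solution from an observed optical rotation,
    # using the specific rotation S = theta / (L * C)  =>  C = theta / (L * S).
    S_sucrose = 66.5    # assumed specific rotation, deg*mL/(g*dm), ~20 C, sodium D line
    L_dm = 2.0          # hypothetical tube length in dm (a common 200 mm cell)
    theta_deg = 13.3    # hypothetical observed rotation, degrees

    C_g_per_ml = theta_deg / (L_dm * S_sucrose)
    print(f"C = {C_g_per_ml:.3f} g/mL")  # 0.100 g/mL, i.e. 10 g per 100 mL

The same rearrangement is what automatic instruments apply internally when the user enters the cell length and the known specific rotation of the analyte.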
Most modern manual polarimeters also incorporate a long-life yellow LED in place of the more costly sodium arc lamp as a light source. Semi-automatic Today, semi-automatic polarimeters are available. The operator views the image on a digital display and adjusts the analyzer angle with electronic controls. Fully automatic Fully automatic polarimeters are now widely used and simply require the user to press a button and wait for a digital readout. Fast automatic digital polarimeters yield an accurate result within a few seconds, regardless of the rotation angle of the sample. In addition, they provide continuous measurement, facilitating high-performance liquid chromatography and other kinetic investigations. Another feature of modern polarimeters is the Faraday modulator, which creates an alternating magnetic field. It oscillates the plane of polarization, allowing the point of maximal darkness to be passed through repeatedly and thus determined with greater accuracy. As the temperature of the sample has a significant influence on its optical rotation, modern polarimeters include Peltier elements to actively control the temperature. Special techniques such as temperature-controlled sample tubes reduce measuring errors and ease operation. Results can be transferred directly to computers or networks for automatic processing. Traditionally, accurate filling of the sample cell had to be checked outside the instrument, as an appropriate check from within the device was not possible. Nowadays a camera system can help to monitor the sample and the filling conditions in the sample cell, and features for automatic filling, introduced by a few companies, are available on the market. When working with caustic chemicals, acids, and bases it can be beneficial not to load the polarimeter cell by hand. Both of these options help to avoid potential errors caused by bubbles or particles. Sources of error The angle of rotation of an optically active substance can be affected by: Concentration of the sample Wavelength of light passing through the sample (generally, angle of rotation and wavelength tend to be inversely proportional) Temperature of the sample (generally the two are directly proportional) Length of the sample cell (input by the user into most automatic polarimeters to ensure better accuracy) Filling conditions (bubbles, temperature and concentration gradients) Most modern polarimeters have methods for compensating for and/or controlling these errors. Calibration Traditionally, a sucrose solution with a defined concentration was used to calibrate polarimeters, relating the amount of sugar molecules to the rotation of the light's polarization. The International Commission for Uniform Methods of Sugar Analysis (ICUMSA) played a key role in unifying analytical methods for the sugar industry, and set the standards for the International Sugar Scale (ISS) and the specifications for polarimeters in the sugar industry. However, sugar solutions are prone to contamination and evaporation, and the optical rotation of a substance is very sensitive to temperature. A more reliable and stable standard was found: crystalline quartz, oriented and cut in a way that matches the optical rotation of a normal sugar solution but without the disadvantages mentioned above. Quartz (silicon dioxide, SiO2) is a common mineral, a trigonal chemical compound of silicon and oxygen. 
Nowadays, quartz plates or quartz control plates of different thicknesses serve as standards to calibrate polarimeters and saccharimeters. In order to ensure reliable and comparable results, quartz plates can be calibrated and certified by metrology institutes. Alternatively, calibration may be checked using a polarization reference standard, which consists of a plate of quartz mounted in a holder perpendicular to the light path; such standards, traceable to NIST, are commercially available (for example from Rudolph Research Analytical). A calibration first consists of a preliminary test in which the fundamental calibration capability is checked: the quartz control plates must meet minimum requirements with respect to their dimensions, optical purity, flatness, parallelism of the faces, and optical-axis errors. After that, the actual measurement value, the optical rotation, is measured with the precision polarimeter. The measurement uncertainty of the polarimeter amounts to 0.001° (k=2). Applications Because many optically active chemicals, such as tartaric acid, are stereoisomers, a polarimeter can be used to identify which isomer is present in a sample – if it rotates polarized light to the left, it is a levo-isomer, and to the right, a dextro-isomer. It can also be used to measure the ratio of enantiomers in solutions. The optical rotation is proportional to the concentration of the optically active substances in solution. Polarimetry may therefore be applied for concentration measurements of enantiomer-pure samples. With a known concentration of a sample, polarimetry may also be applied to determine the specific rotation (a physical property) when characterizing a new substance. Chemical industry Many chemicals exhibit a specific rotation as a unique property (an intensive property, like refractive index or specific gravity) which can be used to distinguish them. Polarimeters can identify unknown samples on this basis if other variables, such as concentration and sample cell length, are controlled or at least known. This is used in the chemical industry. By the same token, if the specific rotation of a sample is already known, then the concentration and/or purity of a solution containing it can be calculated. Most automatic polarimeters make this calculation automatically, given input on the variables from the user. Food, beverage and pharmaceutical industries Concentration and purity measurements are especially important for determining product or ingredient quality in the food, beverage and pharmaceutical industries. Samples whose purity can be calculated from their specific rotation with a polarimeter include: Steroids Diuretics Antibiotics Narcotics Vitamins Analgesics Amino acids Essential oils Polymers Starches Sugars Polarimeters are used in the sugar industry for determining the quality of both juice from sugar cane and refined sucrose. Sugar refineries often use a modified polarimeter with a flow cell, used in conjunction with a refractometer, called a saccharimeter. These instruments use the International Sugar Scale, as defined by the International Commission for Uniform Methods of Sugar Analysis (ICUMSA). See also Optical rotation Polarimetry Polarization Chirality Enantiomers References Polarization (waves) Optical instruments French inventions
Polarimeter
Physics
3,476