| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
49,994,085 | https://en.wikipedia.org/wiki/MY%20Apodis | MY Apodis, also known as L 19-2, GJ 2108, or WD 1425-811, is a single white dwarf star located in the far southern constellation Apus. It is a low-amplitude variable star with an average apparent visual magnitude of 13.75 and thus is much too faint to be visible to the naked eye. Based on parallax measurements, this star is located at a distance of 68.3 light-years from the Sun. It is drifting further away with a radial velocity of 58.0 km/s.
This compact stellar remnant has a class of DA4.1, which indicates a hydrogen-rich outer atmosphere. It is a pulsating white dwarf (ZZ Ceti star) that varies photometrically with an amplitude of 0.05 in visual magnitude. The low-amplitude variability of this ZZ Ceti analog was discovered by James E. Hesser and associates in 1974, who found it showed periods of 192.75 and 113.77 seconds. By 2015, ten different pulsation modes had been identified, and it remained stable over four decades of observation.
MY Apodis has 70.5% of the mass of the Sun compressed down into 1.1% of the Sun's radius. It is spinning rapidly with a rotation period of 13 hours. The star is radiating just 0.35% of the luminosity of the Sun at an effective temperature of 12,330 K. Asteroseismological models suggest the star has a thin outer hydrogen shell with a mass of , an intermediate helium layer of 1.5 to , and a core of 20% carbon and 80% oxygen that extends out to 60% of the stellar radius.
References
Pulsating white dwarfs
Apus
Apodis, MY | MY Apodis | Astronomy | 362 |
1,306,265 | https://en.wikipedia.org/wiki/Chomp | Chomp is a two-player strategy game played on a rectangular grid made up of smaller square cells, which can be thought of as the blocks of a chocolate bar. The players take it in turns to choose one block and "eat it" (remove from the board), together with those that are below it and to its right. The top left block is "poisoned" and the player who eats this loses.
The chocolate-bar formulation of Chomp is due to David Gale, but an equivalent game expressed in terms of choosing divisors of a fixed integer was published earlier by Frederik Schuh.
Chomp is a special case of a poset game where the partially ordered set on which the game is played is a product of total orders with the minimal element (poisonous block) removed.
Example game
The following shows the sequence of moves in a typical game starting with a 5 × 4 bar:
Player A eats two blocks from the bottom right corner; Player B eats three from the bottom row; Player A picks the block to the right of the poisoned block and eats eleven blocks; Player B eats three blocks from the remaining column, leaving only the poisoned block. Player A must eat the last block and so loses.
Note that since it is provable that player A can win when starting from a 5 × 4 bar, at least one of A's moves is a mistake.
Positions of the game
The intermediate positions in an m × n Chomp are integer partitions (non-increasing sequences of positive integers) λ1 ≥ λ2 ≥ ··· ≥ λr, with λ1 ≤ n and r ≤ m. Their number is the binomial coefficient C(m + n, n), which grows exponentially with m and n.
Winning the game
Chomp belongs to the category of impartial two-player perfect information games, so by the Sprague–Grundy theorem it can be analyzed with the same tools as Nim.
For any rectangular starting position, other than 1×1, the first player can win. This can be shown using a strategy-stealing argument: assume that the second player has a winning strategy against any initial first-player move. Suppose then, that the first player takes only the bottom right hand square. By our assumption, the second player has a response to this which will force victory. But if such a winning response exists, the first player could have played it as their first move and thus forced victory. The second player therefore cannot have a winning strategy.
Computers can easily calculate winning moves for this game on two-dimensional boards of reasonable size. However, as the number of positions grows exponentially, this is infeasible for larger boards.
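As an illustrative sketch of such a brute-force calculation (the board size and the position encoding are example choices, not a reference implementation), a memoized search in Python can label every position as a first- or second-player win and list the winning first moves:

```python
from functools import lru_cache

def moves(pos):
    """Yield (next_position, move) pairs from `pos`, a tuple of non-increasing
    row lengths; row 0 is the top row and contains the poisoned block at column 0."""
    for i in range(len(pos)):
        for j in range(pos[i]):
            if (i, j) == (0, 0):
                continue                      # eating the poison is never a useful move
            new = list(pos)
            for k in range(i, len(pos)):
                new[k] = min(new[k], j)       # remove the chosen block and everything below/right of it
            yield tuple(r for r in new if r > 0), (i, j)

@lru_cache(maxsize=None)
def first_player_wins(pos):
    if pos == (1,):                           # only the poisoned block remains: the player to move loses
        return False
    return any(not first_player_wins(nxt) for nxt, _ in moves(pos))

def winning_moves(pos):
    return [mv for nxt, mv in moves(pos) if not first_player_wins(nxt)]

start = (5, 5, 5, 5)                          # the 5 × 4 bar of the example game, as four rows of five
print(first_player_wins(start))               # True, as the strategy-stealing argument guarantees
print(winning_moves(start))                   # the winning first moves as (row, column) pairs
```

Because each position is a partition fitting in the m × n box, the memoization table has at most C(m + n, n) entries, which is why this approach is practical only for boards of modest size.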
For a square starting position (i.e., n × n for any n ≥ 2), the winning strategy can easily be given explicitly. The first player should present the second with an L shape of one row and one column only, of the same length, connected at the poisonous square. Then, whatever the second player does on one arm of the L, the first player replies with the same move on the second arm, always presenting the second player again with a symmetric L shape. Finally, this L will degenerate into the single poisonous square, and the second player would lose.
Generalisations of Chomp
Three-dimensional Chomp has an initial chocolate bar of a cuboid of blocks indexed as (i,j,k). A move is to take a block together with every block all of whose indices are greater than or equal to the corresponding indices of the chosen block. In the same way Chomp can be generalised to any number of dimensions.
Chomp is sometimes described numerically. An initial natural number is given, and players alternate choosing positive divisors of the initial number, but may not choose 1 or a multiple of a previously chosen divisor. This game models n-dimensional Chomp, where the initial natural number has n prime factors and the dimensions of the Chomp board are given by the exponents of the primes in its prime factorization.
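A small sketch of that correspondence, using the arbitrary example N = 2³ × 3⁴ = 648 (any number with two prime factors would do): each divisor 2^i × 3^j corresponds to the cell (i, j) of a 4 × 5 board, the divisor 1 plays the role of the poisoned block, and choosing a divisor knocks out exactly the cells corresponding to its multiples.

```python
N = 2**3 * 3**4          # 648: two prime factors, so this models two-dimensional Chomp

def exponent(n, p):
    """Return the exponent of prime p in n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

divisors = [d for d in range(1, N + 1) if N % d == 0]

# Each divisor 2**i * 3**j is identified with the board cell (i, j).
cells = {d: (exponent(d, 2), exponent(d, 3)) for d in divisors}

print(len(divisors))     # 20 divisors = the 4 × 5 = 20 cells of the board
print(cells[1])          # (0, 0): the divisor 1 corresponds to the poisoned block
print(cells[12])         # (2, 1): choosing 12 knocks out 12, 24, 36, 72, 108, ...,
                         # i.e. exactly the cells whose indices are both >= (2, 1)
```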
Ordinal Chomp is played on an infinite board with some of its dimensions ordinal numbers: for example a 2 × (ω + 4) bar. A move is to pick any block and remove all blocks with both indices greater than or equal to the corresponding indices of the chosen block. The case of ω × ω × ω Chomp is a notable open problem; a $100 reward has been offered for finding a winning first move.
More generally, Chomp can be played on any partially ordered set with a least element. A move is to remove any element along with all larger elements. A player loses by taking the least element.
All varieties of Chomp can also be played without resorting to poison by using the misère play convention: The player who eats the final chocolate block is not poisoned, but simply loses by virtue of being the last player. This is identical to the ordinary rule when playing Chomp on its own, but differs when playing the disjunctive sum of Chomp games, where only the final chocolate block of the whole sum loses.
See also
Nim
Hackenbush
References
External links
More information about the game
A freeware version for Windows
Play Chomp online
All the winning bites for size up to 14
Abstract strategy games
Mathematical games
Combinatorial game theory
Paper-and-pencil games | Chomp | Mathematics | 1,089 |
29,516,819 | https://en.wikipedia.org/wiki/Bar%20%28tropical%20cyclone%29 | The bar of a mature tropical cyclone is a very dark gray-black layer of cloud that appears to be near to the horizon as seen from an observer preceding the approach of the storm, and is composed primarily of dense stratocumulus clouds. Cumulus and cumulonimbus clouds bearing precipitation follow immediately after the passage of the wall-like bar. Altostratus, cirrostratus and cirrus clouds are usually visible in ascending order above the top of the bar, while the wind direction for an observer facing toward the bar is typically from the left and slightly behind the observer.
History
The dark layer of clouds on the horizon seen prior to a tropical cyclone's passage over a location was first described by William Dampier, who observed a typhoon in the South China Sea in 1687 during a circumnavigation aboard a pirate ship and published his account in 1697. These observations led to an improved understanding of the nature of tropical cyclones. The use of "bar" as a term to describe this cloud layer first appeared in the 19th century.
Inside the bar
When the storm is heading directly toward an observer, the bar appears stationary in azimuth, whereas the bar of a storm moving at an angle to or perpendicular to the observer's position will appear to drift along the horizon. A reddish tint sometimes appears toward the top of the bar, while the darkness often varies depending on the storm's intensity. Tints of indigo, green, yellow or violet may also be present depending on the time of day due to the prismatic effect of water droplets in the cloud. Cumulus clouds first appear at the outflow boundary of the lower part of the bar. As the bar passes overhead, barometric pressure at the observer's location falls steadily. The first rainband arrives, the clouds moving from left to right roughly along isobar lines, but this is often followed by short periods of relative calm. During the onset of the bar's arrival, winds continuously increase in intensity leading toward the eyewall. Navigators of ships at sea often use the first appearance of a bar to steer clear of the approaching tropical cyclone.
Closer to the center of the tropical cyclone, the eyewall also exhibits the appearance of a bar, and high lightning activity occurs within this central bar. After the bar passes through, the strongest winds and often the heaviest precipitation abruptly change to calmer conditions within the eye, before the eyewall passes over again and the strongest winds arrive from the opposite direction.
At sea, wind speeds typically reach level 8 on the Beaufort scale while waves become drastically higher when the bar reaches overhead and squall lines begin to arrive.
References
Clouds
Marine meteorology
Navigation
Severe weather and convection
Tropical cyclones
Weather hazards
Wind | Bar (tropical cyclone) | Physics | 551 |
6,988,897 | https://en.wikipedia.org/wiki/Defense%20physiology | Defense physiology is a term used to refer to the symphony of body function (physiology) changes which occur in response to a stress or threat.
When the body executes the "fight-or-flight" reaction or stress response, the nervous system initiates, coordinates and directs specific changes in how the body is functioning, preparing the body to deal with the threat. (See also General adaptation syndrome.)
Definitions
Stress: As it pertains to defense physiology, the term stress refers to a perceived threat to the continued functioning of the body or life according to its current state.
Threat: A threat may be consciously recognized or not. A physical event (a loud noise or car collision or a coming attack), a chemical or a biological agent which alters (or has the possibility to alter) body function (physiology) away from optimum or healthy functioning (or away from its current state of functioning) may be perceived as a threat (also called a stressor).
Life circumstances, though posing no immediate physical danger, could be perceived as a threat. Anything that could change the continuation of the person's life as they are currently experiencing it could be perceived as a threat.
Physiological reactions to threat (or perceived threat)
A threat may be either empirical (an outside observer may agree that the event or circumstance poses a threat) or a priori (an outside observer would not agree that the event or circumstance poses a threat). What is important to the individual, in terms of the body’s response, is that a threat is perceived.
The perception of a threat may also trigger an associated ‘feeling of distress’.
The physiological reactions triggered by the mind do not differentiate between physical and mental threats; the "fight-or-flight" response is the same in both cases.
Duration of threat and its different physiological effects on the nervous system
Acute Stress Reaction - The body executes the "fight-or-flight" reaction to get the body out of danger quickly. When the timing between the threat and the resolution of the threat is close, the "fight-or-flight" reaction is executed, the threat is handled, and the body returns to its previous state (taking care of the business of life - digestion, relaxation, tissue repair, etc.). The body has evolved to stay in this mode for only a short time.
Chronic Stress State - When the timing between the threat and the resolution of the threat is more distant (the threat or the perception of threat is prolonged, or other threats occur before the body has recovered), the "fight-or-flight" reaction continues and becomes the new "standard operating condition" of the body, "chronic defense physiology". Continuing in this mode produces significant negative effects (distress) in many aspects of body functioning (physical, mental and emotional distress).
See also
Hypothalamic–pituitary–adrenal axis
References
Physiology
Stress (biology)
Endocrine system | Defense physiology | Biology | 602 |
29,344,406 | https://en.wikipedia.org/wiki/Pseudolinkage | In genetics, pseudolinkage is a characteristic of a heterozygote for a reciprocal translocation, in which genes located near the translocation breakpoint behave as if they are linked even though they originated on nonhomologous chromosomes.
Linkage is the proximity of two or more markers on a chromosome; the closer together the markers are, the lower the probability that they will be separated by recombination. Genes are said to be linked when the frequency of parental type progeny exceeds that of recombinant progeny.
Absence in translocation homozygotes
During meiosis in a translocation homozygote, chromosomes segregate normally according to Mendelian principles. Even though the genes have been rearranged by the translocation, both haploid sets of chromosomes in the individual carry the same rearrangement. As a result, all chromosomes will find a single partner with which to pair at meiosis, and there will be no deleterious consequences for the progeny.
In translocation heterozygotes
In a translocation heterozygote, however, certain patterns of chromosome segregation during meiosis produce genetically unbalanced gametes that are deleterious to the zygote at fertilization. In a translocation heterozygote, the two haploid sets of chromosomes do not carry the same arrangement of genetic information. As a result, during prophase of the first meiotic division, the translocated chromosomes and their normal homologs assume a crosslike configuration in which four chromosomes, rather than the normal two, pair to achieve a maximum of synapsis between similar regions. We denote the chromosomes carrying translocated material with a T and the chromosomes with a normal order of genes with an N. Chromosomes N1 and T1 have homologous centromeres found in wild type on chromosome 1; N2 and T2 have centromeres found in wild type on chromosome 2.
During anaphase of meiosis I, the mechanisms that attach the spindle to the chromosomes in this crosslike configuration still usually ensure the disjunction of homologous centromeres, bringing homologous chromosomes to opposite spindle poles. Depending on the arrangement of the four chromosomes on the metaphase plate, this normal disjunction of homologs produces one of two equally likely patterns of segregation.
Alternate segregation pattern
In the alternate segregation pattern, the two translocation chromosomes (T1 and T2) go to one pole, while the two normal chromosomes (N1 and N2) move to the opposite pole. Both kinds of gametes resulting from this segregation (T1, T2 and N1, N2) carry the correct haploid number of genes, and the zygotes formed by union of these gametes with normal gametes will be viable.
Adjacent-1 segregation pattern
In the adjacent-1 segregation pattern, homologous centromeres disjoin so that T1 and N2 go to one pole, while N1 and T2 go to the opposite pole. Consequently, each gamete contains a large duplication (of the region found in both the normal and the translocated chromosome in that gamete) and a correspondingly large deletion (of the region found in neither of the chromosomes in that gamete), which make them genetically unbalanced. Zygotes formed by union of these gametes with normal gametes are usually not viable.
Adjacent-2 segregation pattern
Because of the unusual cruciform pairing configuration in translocation heterozygotes, nondisjunction of homologous centromeres occurs at a measurable but low rate. This nondisjunction produces an adjacent-2 segregation pattern in which the homologous centromeres N1 and T1 go to the same spindle pole while the homologous centromeres N2 and T2 go to the other spindle pole. The resulting genetic imbalances are lethal after fertilization to the zygotes containing them.
Thus, in a translocation heterozygote, only the alternate segregation pattern yields viable progeny in outcrosses; the equally likely adjacent-1 pattern and the rare adjacent-2 pattern do not.
Because of this, genes near the translocation breakpoints on the nonhomologous chromosomes participating in a reciprocal translocation exhibit pseudolinkage: They behave as if they are linked.
References
Genetics | Pseudolinkage | Biology | 931 |
7,088,707 | https://en.wikipedia.org/wiki/Halocarban | Halocarban (INN; also known as cloflucarban (USAN) and trifluoromethyldichlorocarbanilide; brand name ) is a chemical with antibacterial properties sometimes used in deodorant and soap.
References
Disinfectants
Ureas
Trifluoromethyl compounds
Chloroarenes
4-Chlorophenyl compounds | Halocarban | Chemistry | 85 |
47,551,794 | https://en.wikipedia.org/wiki/Breast%20crawl | Breast crawl is the instinctive movement of a newborn mammal toward the nipple of its mother for the purpose of latching on to initiate breastfeeding. In humans, if the newborn is laid on its mother's abdomen, movements commence at 12 to 44 minutes after birth, with spontaneous suckling being achieved roughly 27 to 71 minutes after birth.
Background
The Baby Friendly Hospital Initiative, developed by the World Health Organization and UNICEF, recommends that all babies have access to immediate skin-to-skin contact (SSC) following vaginal or Caesarean section birth. Immediate SSC after a Caesarean that used spinal or epidural anesthesia is achievable because the mother remains alert; however, after the use of general anesthesia, the newborn should be placed skin to skin as soon as the mother becomes alert and responsive.
If the mother is not immediately able to begin SSC, her partner or other helper can assist or place the infant SSC on their chest or breast. It is recommended that SSC be facilitated immediately after birth, as this is the time when the newborn is most likely to follow its natural instincts to find and attach to the breast and then breastfeed.
To find the nipple, the newborn uses a variety of sensory stimuli: visual (the sight of the mother's face and areola); auditory (the sound of its mother's voice); and olfactory (the scent of the areola, which resembles that of amniotic fluid).
Nine stages of breast crawl
Newborn babies go through nine distinct stages within the first hour or so after birth:
Birth cry: Intense crying just after birth
Relaxation phase: Infant resting and recovering. No activity of mouth, head, arms, legs or body
Awakening phase: Infant begins to show signs of activity. Small thrusts of head: up, down, from side-to-side. Small movements of limbs and shoulders
Active phase: Infant moves limbs and head, is more determined in movements. Rooting activity, ‘pushing’ with limbs without shifting body
Crawling phase: ‘Pushing’ which results in shifting body
Resting phase: Infant rests, with some activity, such as mouth activity, sucks on hand
Familiarization: Infant has reached areola/nipple with mouth positioned to brush and lick areola/nipple
Suckling phase: Infant has taken nipple in mouth and commences suckling
Sleeping phase: The baby has closed its eyes. Mother may also fall asleep.
References
Breastfeeding
Ethology
Babycare | Breast crawl | Biology | 503 |
331,821 | https://en.wikipedia.org/wiki/Adipocere | Adipocere (), also known as corpse wax, grave wax or mortuary wax, is a wax-like organic substance formed by the anaerobic bacterial hydrolysis of fat in tissue, such as body fat in corpses. In its formation, putrefaction is replaced by a permanent firm cast of fatty tissues, internal organs, and the face.
History
Adipocere was first described by Sir Thomas Browne in his discourse Hydriotaphia, Urn Burial (1658).
The chemical process of adipocere formation, saponification, came to be understood in the 17th century when microscopes became widely available.
In 1825, physician and lecturer Augustus Granville is believed to have (somewhat unwittingly) made candles from the adipocere of a mummy and used them to light the public lecture he gave to report on the mummy's dissection. Granville apparently thought that the waxy material from which he made the candles had been used to preserve the mummy, rather than its being a product of the saponification of the mummified body.
The body of the "Soap Lady", whose corpse turned into adipocere, is displayed in the Mütter Museum in Philadelphia, Pennsylvania.
Probably the most famous known case of adipocere is that of Scotland's Higgins brothers, murdered by their father in 1911 but whose bodies were not found until 1913. The bodies had been left floating in a flooded quarry, resulting in an almost complete transformation into adipocere. Pathologists Sydney Smith and Professor Littlejohn were able to find more than enough evidence from the preserved remains for police to identify the victims and charge the killer, who was hanged. At the same time, the pathologists secretly took some of the remains back to Edinburgh University for further study; nearly a century later, a relative requested the return of those remains so they could be given a Christian burial. The university agreed to do so if the claimant could prove her relationship to the boys and if other relatives agreed to her plan, and the remains were eventually cremated in 2009.
Appearance
Adipocere is a crumbly, waxy, water-insoluble material consisting mostly of saturated fatty acids. Depending on whether it was formed from white or brown body fat, adipocere is either grayish white or tan in color.
In corpses, the firm cast of adipocere allows some estimation of body shape and facial features, and injuries are often well-preserved.
Formation
Adipocere is formed by the anaerobic bacterial hydrolysis of fat in tissue. The transformation of fats into adipocere occurs best in an environment that has high levels of moisture and an absence of oxygen, such as in wet ground or mud at the bottom of a lake or a sealed casket, and it can occur with both embalmed and untreated bodies. Adipocere formation begins within a month of death, and, in the absence of air, it can persist for centuries. Adipocerous formation preserved the left hemisphere of the brain of a 13th-century infant such that sulci, gyri, and even Nissl bodies in the motor cortex could be distinguished in the 20th century. An exposed, insect-infested body or a body in a warm environment is unlikely to form deposits of adipocere.
Corpses of women, infants and overweight persons are particularly prone to adipocere transformation because they contain more body fat. In forensic science, the utility of adipocere formation to estimate the postmortem interval is limited because the speed of the process is temperature-dependent. It is accelerated by warmth, but temperature extremes impede it.
The degradation of adipocere continues after exhumation at the microscopic level resulting from the combination of exposure to air, handling, dissection and the enzymatic activity of microbiota.
See also
Bog body
Putrefaction
Saponification
Footnotes
Lipids
Forensic phenomena | Adipocere | Chemistry | 818 |
25,068,812 | https://en.wikipedia.org/wiki/Intelligent%20workload%20management | Intelligent workload management (IWM) is a paradigm for IT systems management arising from the intersection of dynamic infrastructure, virtualization, identity management, and the discipline of software appliance development. IWM enables the management and optimization of computing resources in a secure and compliant manner across physical, virtual and cloud environments to deliver business services for end customers.
The IWM paradigm builds on the traditional concept of workload management whereby processing resources are dynamically assigned to tasks, or "workloads," based on criteria such as business process priorities (for example, in balancing business intelligence queries against online transaction processing), resource availability, security protocols, or event scheduling, but extends the concept into the structure of individual workloads themselves.
Definition of "workload"
In the context of IT systems and data center management, a "workload" can be broadly defined as "the total requests made by users and applications of a system." However, it is also possible to break down the entire workload of a given system into sets of self-contained units. Such a self-contained unit constitutes a "workload" in the narrow sense: an integrated stack consisting of application, middleware, database, and operating system devoted to a specific computing task. Typically, a workload is "platform agnostic," meaning that it can run in physical, virtual or cloud computing environments. Finally, a collection of related workloads which allow end users to complete a specific set of business tasks can be defined as a "business service."
Making workloads "intelligent"
A workload is considered "intelligent" when it a) understands its security protocols and processing requirements so it can self-determine whether it can deploy in the public cloud, the private cloud or only on physical machines; b) recognizes when it is at capacity and can find alternative computing capacity as required to optimize performance; c) carries identity and access controls as well as log management and compliance reporting capabilities with it as it moves across environments; and d) is fully integrated with the business service management layer, ensuring that end user computing requirements are not disrupted by distributed computing resources, and working with current and emergent IT management frameworks.
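As a purely illustrative sketch of criteria (a)–(c), the following hypothetical Python logic shows how a self-describing workload might carry this information with it; the attribute names, environment labels, and thresholds are invented for this example and do not correspond to any specific IWM product or API.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """Hypothetical attributes an 'intelligent' workload might carry with it."""
    name: str
    data_classification: str     # e.g. "public", "internal", "regulated"
    cpu_utilization: float       # current utilization, 0.0 - 1.0
    allowed_identities: set      # identities permitted to access the workload

def choose_environment(w: Workload) -> str:
    """Criterion (a): self-determine where the workload may deploy."""
    if w.data_classification == "regulated":
        return "physical"        # security protocol forbids any cloud deployment
    if w.data_classification == "internal":
        return "private-cloud"
    return "public-cloud"

def needs_rebalancing(w: Workload, threshold: float = 0.85) -> bool:
    """Criterion (b): recognize when the workload is at capacity."""
    return w.cpu_utilization >= threshold

def may_access(w: Workload, identity: str) -> bool:
    """Criterion (c): identity and access controls travel with the workload."""
    return identity in w.allowed_identities

payroll = Workload("payroll", "regulated", 0.91, {"hr-admin"})
print(choose_environment(payroll))       # "physical"
print(needs_rebalancing(payroll))        # True -> find alternative computing capacity
print(may_access(payroll, "intern"))     # False
```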
Intelligent workloads and security in the cloud
The deployment of individual workloads and workload-based business services in the "hybrid distributed data center," - including physical machines, data centers, private clouds, and the public cloud - raises a host of issues for the efficient management of provisioning, security, and compliance. By making workloads "intelligent" so that they can effectively manage themselves in terms of where they run, how they run, and who can access them, intelligent workload management addresses these issues in a way that is efficient, flexible, and scalable. The 1989 seminal work by D.F. Ferguson, Y. Yemini, and C. Nikolaou "Microeconomic Algorithms for Load Balancing in Distributed Computing Systems" developed a theory by which workloads could be made "intelligent" to manage themselves. This theory has since been patented and was commercialized by the Boston-based company, VMTurbo, in 2009.
See also
Cloud computing
Dynamic infrastructure
Identity management
Portable application
Software appliance
Virtual appliance
References
Information technology management | Intelligent workload management | Technology | 666 |
41,604,835 | https://en.wikipedia.org/wiki/Caramboxin | Caramboxin (CBX) is a toxin found in star fruit (Averrhoa carambola) and the related bilimbi fruit (Averrhoa bilimbi). Individuals with some types of kidney disease are susceptible to adverse neurological effects including intoxication, seizures and even death after eating star fruit and bilimbi fruit. In 2013, caramboxin was identified as the neurotoxin responsible for these effects.
Caramboxin is a non-proteinogenic amino acid, with a chemical structure similar to the amino acid phenylalanine, but with extra hydroxyl, carboxyl and methoxy substituents, making it also a phenol, a benzoic acid, and a phenol ether. Caramboxin stimulates the glutamate receptors in neurons, being an agonist of both NMDA and AMPA glutamatergic ionotropic receptors with potent excitatory, convulsant, and neurodegenerative properties, resulting in symptoms of central nervous system disorder, including mental confusion, seizures, and status epilepticus.
A possible interaction between caramboxin and oxalic acid in starfruit can lead to both neurotoxic and nephrotoxic effects. Consuming large amounts of starfruit or its juice on an empty stomach is not recommended, even for individuals with normal kidney function. As caramboxin is water soluble, intense hemodialysis has often been used to improve the outcome for patients.
An enantioselective total synthesis of caramboxin was first published in 2024. It involves a catalytic phase-transfer alkylation of a glycine imine by ethyl acetoacetate.
References
Neurotoxins
Alpha-Amino acids
Phenols
Benzoic acids
Toxic amino acids
Plant toxins
NMDA receptor agonists
AMPA receptor agonists | Caramboxin | Chemistry | 395 |
72,154,418 | https://en.wikipedia.org/wiki/Oberheim%20DS-2 | The Oberheim DS-2 is a pre-MIDI digital music sequencer. Designed and built in 1974 by Tom Oberheim, it is considered one of the first ever digital musical sequencers.
Features
The DS-2 is capable of storing and sequencing 48 notes and provides a single channel of CV/Gate input and output. It can also be clocked externally. A later model, the DS-2a, was capable of storing up to 144 notes.
Music sequencers
Oberheim synthesizers | Oberheim DS-2 | Engineering | 101 |
77,393,973 | https://en.wikipedia.org/wiki/NGC%201340 | NGC 1340 is an elliptical galaxy located in the constellation Fornax. Its speed relative to the cosmic microwave background is 1,126 ± 17 km/s, which corresponds to a Hubble distance of 16.6 ± 1.2 Mpc (∼54.1 million ly). It was discovered by the German-British astronomer William Herschel in 1790, but it was later added to the New General Catalogue under the designation NGC 1344.
This galaxy was later observed by the British astronomer John Herschel on November 19, 1835, and it is this observation that was added to the New General Catalogue under the designation NGC 1340.
To date, 34 non-redshift measurements yield a distance of 18.688 ± 3.160 Mpc (∼61 million ly), which is within the Hubble distance range.
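As a quick consistency check on the quoted figures, the Hubble distance is just the recession speed divided by the Hubble constant; the H0 value below (about 67.8 km/s/Mpc) is an assumption inferred from those figures rather than a value stated in the article.

```python
# Hubble distance: d = v / H0
v_kms = 1126          # speed relative to the CMB, in km/s (from the text)
H0 = 67.8             # assumed Hubble constant, km/s per Mpc (not stated in the text)

d_mpc = v_kms / H0
print(round(d_mpc, 1))            # ~16.6 Mpc, matching the quoted Hubble distance
print(round(d_mpc * 3.2616, 1))   # ~54.2 million light-years (1 Mpc ≈ 3.2616 Mly)
```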
NGC 1399 group
NGC 1340 (NGC 1344 in Garcia's article) is part of the NGC 1399 group. This group is part of the Fornax cluster and it includes at least 42 galaxies, including NGC 1326, NGC 1336, NGC 1339, NGC 1351, NGC 1366, NGC 1369, NGC 1373, NGC 1374, NGC 1379, NGC 1387, NGC 1399, NGC 1406, NGC 1419, NGC 1425, NGC 1427, NGC 1428, NGC 1436 (NGC 1437), NGC 1460, IC 1913 and IC 1919.
See also
List of NGC objects (1001–2000)
External links
NGC 1340 at NASA/IPAC
NGC 1340 at SIMBAD
NGC 1340 at LEDA
References
Elliptical galaxies
Fornax Cluster
Fornax
Astronomical objects discovered in 1790
Discoveries by William Herschel
1340
12923
ESO objects
-05-09-005 | NGC 1340 | Astronomy | 367 |
24,432,837 | https://en.wikipedia.org/wiki/History%20of%20the%20Tokyo%20Game%20Show | The history of the Tokyo Game Show began with its creation in 1996 and has continued through the most recent expo in 2024. The show has been held annually in Chiba, Japan, since 1996 by the Computer Entertainment Supplier's Association (CESA) and Nikkei Business Publications.
History
1996 (August 22–24)
The first Tokyo Game Show was held on August 22 to 24, 1996. The attendance was over 109,000, and the 87 participating companies displayed a total of 365 games. Originally, the show was held twice a year, once in the spring and once in the autumn (at the Tokyo Big Sight) starting in 1997, but this format was discontinued in 2002, when the show was held only in the autumn. Since then, the show has been held once a year.
1997 (April 4–6) (September 5–7)
Tokyo Game Show 1997 was held April 4 to 6 in spring and September 5 to 7 in autumn. This was the first show to function with the spring/autumn format. Attendance at the spring show was over 120,000, and over 100,000 at the autumn show. Nintendo had no presence at the show, opting to support their own Shoshinkai show instead. High-profile software unveiled included Sonic Jam, Panzer Dragoon Saga, Ghost in the Shell, Resident Evil 2, and Tobal 2. A PaRappa the Rapper stage show drew massive crowds.
1998 (March 20–22) (October 9–11)
Tokyo Game Show 1998 was held March 20 to 22 in spring and October 9 to 11 in autumn.
1999 (March 19–21) (September 17–19)
Tokyo Game Show 1999 was held March 19 to 21 in spring and September 17 to 19 in autumn. Sony showcased the PlayStation 2 and many games ahead of its release in March. Many PlayStation and PlayStation 2 games were presented, and playable demos and booths were available for guests to play. This included many launch titles for the PlayStation 2, and projectors played movies of the games on stage such as Tekken Tag Tournament, Gran Turismo 2000, and Dark Cloud.
2000 (March 31–April 2) (September 22–24)
Tokyo Game Show 2000 was held March 31 to April 2 in spring and September 22 to 24 in autumn.
2001 (March 30–April 1) (October 12–14)
Tokyo Game Show 2001 was held March 30 to April 1 in spring and October 12 to 14 in autumn. This was the last show to function with the spring/autumn format.
2002 (September 20–22)
Tokyo Game Show 2002 was held September 20 to 22 in autumn. This was the first show to abandon the spring/autumn format and started only being held once a year within autumn.
2003 (September 26–27)
Tokyo Game Show 2003 was held September 26 to 27.
2004 (September 24–26)
Tokyo Game Show 2004 was held on September 24 to 26, 2004. It featured 117 exhibitors showing off more than 500 computer and video game-related products to the 160,000 visitors.
2005 (September 16–18)
Tokyo Game Show 2005 was held from September 16 to 18 in 2005. Microsoft held its own press event on 15 September 2005, one day before the opening of Tokyo Game Show. The show was opened with two keynote speeches on September 16. The first was given by Robert J. Bach, senior Vice President for the Home and Entertainment Division and chief Xbox officer at Microsoft. While traditionally Nintendo does not participate in Tokyo Game Show, its president, Satoru Iwata held a keynote speech there in 2005, where he revealed the controller for Nintendo's next generation video game console Wii, then known as the Revolution. There were hints by Ken Kutaragi that the PlayStation 3 would be playable at Tokyo Game Show, but this was not the case. Metal Gear Solid 4: Guns of the Patriots was shown publicly for the first time in trailer form. The MGS4 demo was also demonstrated by Hideo Kojima on the Konami stage, running in real time on a PS3 devkit.
2006 (September 22–24)
Tokyo Game Show 2006 was held September 22 to 24.
2007 (September 20–23)
Tokyo Game Show 2007 was held on September 20 to 23. During TGS 2007, three Kingdom Hearts games; Birth by Sleep (PSP), 358/2 Days (DS) and coded (Mobile) were revealed by Square Enix. Sony announced the PSP game Secret Agent Clank and the rumble PS3 controller by the name "DualShock 3", which was released in Japan in November 2007, and in North America and Europe in spring 2008. With the announcement of a PlayStation Store service launched for PlayStation Portable in Japan, PlayStation Home was delayed until the spring of 2008. Also, Microsoft announced Ninja Gaiden II would be released exclusively for the Xbox 360.
2008 (October 9–12)
Tokyo Game Show 2008 was held from October 9 to 12. Days 1 and 2 were open only to the press while days 3 and 4 were open to the general public. The CESA reports the total visitors for TGS 2008 exceeded 195,000, breaking all attendance records of the time. The most popular game shown was Ace Attorney Investigations: Miles Edgeworth.
2009 (September 24–27)
Tokyo Game Show 2009 was held from September 24 to September 27 following the same business and public days format as the last 2 years. According to Nikkei, 185,030 people came to the 2009 show.
2010 (September 26–29)
Tokyo Game Show 2010 was held from September 26 to 29. The show continued with the same format and included new features such as the "Family games" and "Gadgets" areas. The 2010 show had 207,647 visitors in total.
2011 (September 16–18)
Tokyo Game Show 2011 was held September 16 to 18. By this time the show had become overshadowed in Europe and North America by the Los Angeles-based Electronic Entertainment Expo, and there were few revelations strong enough to compete with other video game conventions. TGS 2011 attendance was 222,668.
2012 (September 20–23)
Tokyo Game Show 2012 was held September 20–23 and saw a slight increase to 223,753 attendees.
2013 (September 19–22)
Tokyo Game Show 2013 was held from September 19 to 22. As the new generation of gaming anchored its fresh wave of hardware, software, and accessories into the market, Sony and Microsoft appeared to demonstrate new products to consumers and media. Nintendo did not attend the show, though third parties did show their own 3DS and Wii U software. TGS attendance increased nearly every year that the show had been in its modern format, including 2013, when it reached a record-high 270,197 total attendees. At the time, it was the most attended Tokyo Game Show in history.
2014 (September 18–21)
Tokyo Game Show 2014 was held from September 18 to 21. TGS 2014 marked the first time of the modern era that attendance did not increase over the previous year. Still, the 2014 show brought in 251,832 visitors, the second highest total in its history.
2015 (September 17–20)
Tokyo Game Show 2015 was held from September 17 to 20.
2016 (September 15–18)
Tokyo Game Show 2016 was held from September 15 to 18.
2017 (September 21–24)
Tokyo Game Show 2017 was held September 21 to 24.
2018 (September 20–23)
Tokyo Game Show 2018 was held from September 20 to 23. This year saw a record attendance of 298,690 people.
2019 (September 12–15)
Tokyo Game Show 2019 was held from September 12 to 15.
2020 Online (September 24–27)
Tokyo Game Show 2020 was held from September 24 to 27, but due to the COVID-19 pandemic the physical event was canceled. The organizers instead opted to host online events, known as Tokyo Game Show 2020 Online.
2021 Online (September 30–October 3)
Tokyo Game Show 2021 was held from September 30 to October 3. It was originally planned to return to its physical format, but it was later announced that the event would switch to a virtual online format.
2022 (September 15–18)
Tokyo Game Show 2022 was held from September 15 to 18. The show was a combined physical and virtual event.
2023 (September 21–24)
Tokyo Game Show 2023 was held from September 21 to 24.
2024 (September 26–29)
Tokyo Game Show 2024 was held from September 26 to 29.
See also
History of the Electronic Entertainment Expo
References
Video game trade shows
Video gaming in Japan | History of the Tokyo Game Show | Technology | 1,740 |
3,781,565 | https://en.wikipedia.org/wiki/Solomonic%20column | The Solomonic column, also called barley-sugar column, is a helical column, characterized by a spiraling twisting shaft like a corkscrew. It is not associated with a specific classical order, although most examples have Corinthian or Composite capitals. But it may be crowned with any design, for example, making a Roman Doric solomonic or Ionic solomonic column.
Perhaps originating in the Near East, it is a feature of Late Roman architecture, which was revived in Baroque architecture, especially in the Spanish and Portuguese-speaking worlds. Two sets of columns, both in the very prestigious setting of St. Peter's Basilica in Rome, were probably important in the wide diffusion of the style. The first were relatively small, and given by Constantine the Great in the 4th century. These were soon believed to have come from the Temple in Jerusalem, hence the style's naming after the biblical Solomon. The second set are those of Bernini's St. Peter's Baldacchino, finished in 1633.
Etymology and origin
Unlike the classical example of Trajan's Column of ancient Rome, which has a turned shaft decorated with a single continuous helical band of low-reliefs depicting Trajan's military might in battle, the twisted column is known to be an eastern motif taken into Byzantine architecture and decoration. Twist-fluted columns were a feature of some eastern architecture of Late Antiquity.
In the 4th century, Constantine the Great brought a set of columns to Rome and gave them to the original St. Peter's Basilica for reuse in the high altar and presbytery; The Donation of Constantine, a painting from Raphael's workshop, shows these columns in their original location. According to tradition, these columns came from the "Temple of Solomon", even though Solomon's temple was the First Temple, built in the 10th century BC and destroyed in 586 BC, not the Second Temple, destroyed in 70 AD. These columns, now considered to have been made in the 2nd century AD, became known as "Solomonic". In actuality, the columns probably came from neither temple. Constantine is recorded as having brought them de Grecias i.e., from Greece, and they are archaeologically documented as having been cut from Greek marble. A small number of Roman examples of similar columns are known. All that can firmly be said is that they are early and, because they have no Christian iconography in the carving and their early date (before the construction of elaborate churches), are presumably reused from some non-church building. The columns have distinct sections that alternate from ridged to smooth with sculpted grape leaves.
Some of these columns remained on the altar until the old structure of St. Peter's was torn down in the 16th century. While removed from the altar, eight of these columns remain part of the structure of St. Peter's. Two columns were placed below the pendentives on each of the four piers beneath the dome. Another column can now be observed up close in the St. Peter's Treasury Museum. Other columns from this set of twelve have been lost over the course of time.
If these columns really were from one of the Temples in Jerusalem, the spiral pattern may have represented the oak tree which was the first Ark of the Covenant, mentioned in Joshua 24:26. These columns have sections of twist-fluting alternating with wide bands of foliated reliefs.
From Byzantine examples, the Solomonic column passed to Western Romanesque architecture. In Romanesque architecture some columns also featured spiraling elements twisted round each other like a hawser. Such variety, adding life to an arcade, is combined with Cosmatesque spiralling inlays in the cloister of St. John Lateran. These arcades were prominent in Rome and may have influenced the baroque Solomonic column.
In Baroque architecture
The Solomonic column was revived as a feature of Baroque architecture. The twisted S-curve shaft gives energy and dynamism to the traditional column form which fits these qualities that are characteristically Baroque.
Easily the best-known Solomonic columns are the colossal bronze Composite columns by Bernini in his Baldacchino at St. Peter's Basilica. The construction of the baldachin, actually a ciborium, which was finished in 1633, required that the original ones of Constantine be moved.
During the succeeding century, Solomonic columns were commonly used in altars, furniture, and other parts of design. Sculpted vines were sometimes carved into the spiralling cavetto of the twisting columns, or made of metal, such as gilt bronze. In an ecclesiastical context such ornament may be read as symbolic of the wine used in the Eucharist.
In the 16th century Raphael depicted these columns in his tapestry cartoon The Healing of the Lame at the Beautiful Gate, and Anthony Blunt noticed them in Bagnacavallo's Circumcision at the Louvre and in some Roman altars, such as one in Santo Spirito in Sassia, but their full-scale use in actual architecture was rare: Giulio Romano employed a version as half-columns decoratively superimposed against a wall in the Cortile della Cavallerizza of the Palazzo Ducale, Mantua (1538-39).
Peter Paul Rubens employed Solomonic columns in tapestry designs, ca. 1626, where he provided a variant of an Ionic capital for the columns as Raphael had done, and rusticated and Solomonic columns appear in the architecture of his paintings with such consistency and in such variety that Anthony Blunt thought it would be pointless to give a complete list.
The columns became popular in Catholic Europe including southern Germany. The Solomonic column spread to Spain at about the same time as Bernini was making his new columns, and from Spain to Spanish colonies in the Americas, where the salomónica was often used in churches as an indispensable element of the Churrigueresque style. The design was only infrequently used in Britain, the south porch of St Mary the Virgin, Oxford, being the only exterior example found by Robert Durman, and it remained rare in English interior design, where the funerary monument for Helena, Lady Gorges (died 1635) at Salisbury, also noted by Durman, is perhaps the sole use.
After 1660, such twist-turned columns became a familiar feature in the legs of French, Dutch and English furniture, and on the glazed doors that protected the dials of late 17th- and early 18th-century bracket and longcase clocks. English collectors and dealers sometimes call these twist-turned members "barley sugar twists" after the type of sweet traditionally sold in this shape.
Gallery
See also
Boaz and Jachin
Solomon
References
External links
Two of the columns as presently used in St. Peter's
Raphael's Healing of the Lame Man, tapestry cartoon, 1515–16
Rubens' The Gathering of Manna, oil sketch for tapestry, ca. 1626, for a tapestry at the Convent of the Descalzas Reales, Madrid
Baroque architectural features
Orders of columns
Columns and entablature | Solomonic column | Technology | 1,445 |
40,880,508 | https://en.wikipedia.org/wiki/Blavatnik%20Awards%20for%20Young%20Scientists | Blavatnik Awards for Young Scientists was established in 2007 through a partnership between the Blavatnik Family Foundation, headed by Leonard Blavatnik (Russian: Леонид Валентинович Блаватник), chairman of Access Industries, and the New York Academy of Sciences, headed by president Nicholas Dirks.
These cash grant awards are given annually to selected faculty and postdoctoral researchers age 42 years and younger who work in the life and physical sciences and engineering at institutions in New York, New Jersey, and Connecticut. The first Blavatnik Awards were given in New York City on Monday, November 12, 2007. On June 3, 2013, the Blavatnik Family Foundation and the New York Academy of Sciences announced the expansion of the faculty competition to include young scientists from institutions throughout the United States. In April 2017, the Blavatnik Awards program was expanded to the United Kingdom (UK) and Israel. By the end of 2022, the Blavatnik Awards for Young Scientists will have awarded prizes totaling US$13.6 million; Blavatnik Award recipients have hailed from 48 countries across six continents.
Blavatnik National Awards are for faculty-rank scientists and engineers in Chemistry, Physical Sciences and Engineering, and Life Sciences.
Blavatnik Regional Awards are for postdoctoral scientists working in the fields of Chemistry, Physical Sciences and Engineering, and Life Sciences in New York, New Jersey, and Connecticut.
Blavatnik Awards for Young Scientists in the United Kingdom are for young, faculty-rank scientists and engineers from Scotland, Wales, Northern Ireland, and England.
Blavatnik Awards for Young Scientists in Israel are for young faculty-rank scientists and engineers early in their independent research careers.
U.S. Regional Postdoctoral Competition
The regional program recognizes postdoctoral researchers working at institutions in New York, New Jersey, and Connecticut. The regional program accepts nominations for scientists working in the life sciences, physical sciences, mathematics, and engineering. Nominations are accepted from institutions in New York, New Jersey, and Connecticut. Submissions for the regional program are reviewed by a Judging Panel of senior scientists, science editors, and past Blavatnik winners from the Mid-Atlantic area. As of 2013, winners of the postdoctoral competition receive US$30,000 and finalists receive US$10,000, each in unrestricted cash prizes.
Past U.S. Regional Winners and Finalists
U.S. National Faculty Competition
Beginning with the 2014 awards cycle, the national faculty competition accepts nominations for scientists working in three disciplinary categories: Life Sciences, Physical Sciences & Engineering, and Chemistry. Nominations are accepted from institutions throughout the United States. Members of the Awards’ Scientific Advisory Council may also submit nominations. Submissions are reviewed by a Judging Panel of senior scientists and past Blavatnik Awards winners. The awards are conferred annually with one winner (“Laureate”) from each disciplinary category selected each year (for a total of three Laureates per year). Each Laureate will receive a US$250,000 unrestricted cash prize and is honored at a ceremony in New York City every fall.
Past U.S. National Laureates and Finalists
Israel Faculty Competition
In 2017 the Blavatnik Awards launched a national competition in Israel modeled on the U.S. Faculty awards. The Blavatnik Awards in Israel are administered by The New York Academy of Sciences in collaboration with the Israel Academy of Sciences and Humanities. Three Laureates from Israel are chosen each awards cycle and receive US$100,000 in unrestricted funds. The first awards were granted during a ceremony held at the Israel Museum in Jerusalem on February 4, 2018.
Past Israel Laureates
United Kingdom (UK) Faculty Competition
In 2017 the Blavatnik Awards launched a national competition across the United Kingdom modeled after the U.S. Faculty awards. A laureate and two finalists in each of three categories (Chemistry, Life Sciences, and Physical Sciences and Engineering) are chosen in the UK every awards cycle. In 2022, prize monies were increased for the UK competition and Laureates are awarded £100,000 and finalists receive £30,000. The first awards were granted during a ceremony held at the Victoria and Albert Museum in London on March 7, 2018.
Past United Kingdom (UK) Laureates and Finalists
See also
List of general science and technology awards
References
Science and technology awards | Blavatnik Awards for Young Scientists | Technology | 893 |
20,759,874 | https://en.wikipedia.org/wiki/Falipamil | Falipamil is a calcium channel blocker.
Research
Falipamil is a bradycardic drug intended to decrease the heart rate of animals. The drug acts on the sinus (sinoatrial) rate in some animals, with the most common experiments being conducted in dogs. Falipamil is commonly administered to reduce the sinus rate, and different dosages have produced different results. When given in small doses, the drug is effective in reducing the sinus rate, but when given in high doses the drug increases the sinus rate, as it increases the atrial pumping rate and thus the amount of body fluid pumped through the body. When falipamil is administered, the drug decreases the ventricular rate of the heart, which in turn helps reduce the sinus rate in an organism.
Falipamil has different effects on the electrophysiology of the heart, where different dosages result in different heart activity rates with diverse vagolytic actions. Recent studies have been carried out on dogs to determine the effectiveness of the drug in reducing the sinus rate. When administered to a conscious dog, the sinus heart rate of the dog increases, whereas when administered to a stale dog, the animal experiences a lessened heart rate. The electrophysiological results show that the drug decreases the maximal atrial driving frequency when administered to a conscious dog, an effective measure in reducing the sinus rate in a living organism. Administration of the drug also increases the body's action potential while exerting lesser bradycardic effects that are effective in reducing the sinus rate. Falipamil does have different recovery times when administered to dogs involved in different activities: intact dogs are likely to have shorter sinus recovery times than conscious dogs. Falipamil has a positive effect on the heart's refractory period, prolonging the atrial refractory period.
References
Calcium channel blockers
Phenol ethers
Isoindolines
Lactams
Amines | Falipamil | Chemistry | 418 |
41,557,799 | https://en.wikipedia.org/wiki/Kepler-90e | Kepler-90e is an exoplanet orbiting the star Kepler-90, located in the constellation Draco. It was discovered by the Kepler telescope in October 2013. It orbits its parent star at only 0.42 astronomical units away, and at its distance it completes an orbit once every 91.94 days.
Host star
The planet orbits a G-type star named Kepler-90, its host star. The star is 1.2 times as massive as the Sun and is 1.2 times as large as the Sun. It is estimated to be 2 billion years old, with a surface temperature of 6080 K. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5778 K.
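As a rough consistency check on the figures above, Kepler's third law (P² = a³/M, with P in years, a in AU and M in solar masses) relates the quoted orbital distance and stellar mass to the orbital period; the sketch below assumes a circular orbit and uses the rounded values from the text.

```python
import math

a_au = 0.42      # orbital distance, in AU (from the text)
m_star = 1.2     # stellar mass, in solar masses (from the text)

period_years = math.sqrt(a_au**3 / m_star)   # Kepler's third law in solar-system units
period_days = period_years * 365.25
print(round(period_days, 1))   # ~90.8 days, close to the reported 91.94-day orbit;
                               # the small gap comes from the rounded input values
```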
References
Transiting exoplanets
Exoplanets discovered by the Kepler space telescope
Draco (constellation)
Exoplanets discovered in 2013 | Kepler-90e | Astronomy | 181 |
39,066,649 | https://en.wikipedia.org/wiki/Doubletime%20%28gene%29 | Doubletime (DBT), also known as discs overgrown (DCO), is a gene that encodes the doubletime protein in fruit flies (Drosophila melanogaster). Michael Young and his team at Rockefeller University first identified and characterized the gene in 1998.
The DBT-encoded protein is a kinase that phosphorylates the period (PER) protein, which is crucial in controlling the biological clock that regulates circadian rhythms. Various mutations in the DBT gene have been observed to cause alterations in the period of locomotor activity in flies, including lengthening, shortening, or complete loss of the period. In mammals, the homolog of DBT is casein kinase I epsilon, which has a similar role in regulating the circadian rhythm.
The circadian function of Drosophila and certain vertebrate Casein kinase 1 enzymes has been conserved over a long evolutionary timescale, making DBT and its homologs essential targets for research into the molecular mechanisms that underlie circadian rhythm regulation in various organisms.
Discovery
The doubletime gene (DBT) was first discovered and characterized in 1998 by Michael Young and his team at Rockefeller University. Young's research group, headed by Jeffrey Price, published their findings in a paper which characterized three alleles of DBT in fruit flies. It was reported that two mutant alleles, named short and long (DBTS and DBTL, respectively), were able to disrupt the normal cycling of the genes Period (per) and Timeless (TIM).
The team suspected that the delay between the rise in mRNA levels of per and TIM and the rise of PER and TIM protein was due to the effects of another protein.
Young suspected that this protein postponed the intercellular accumulation of PER protein by destroying it. Only when PER was paired with TIM was this breakdown not possible. This work showed that DBT regulated the break-down of PER.
Young named the novel gene "doubletime" due to its effect on the normal period of Drosophila. Mutant flies that only expressed DBTS had an 18-hour period, while those expressing DBTL had a 28-hour period. Young's team also identified a third allele, DBTP, which is lethal to pupae while ablating any per or TIM products in larvae. DBTP mutants are important because they provided clues as to how the gene product functioned.
Without functional DBT protein, flies accumulate high levels of PER. These PER proteins do not disintegrate even without pairing with TIM proteins. These mutants expressed higher cytosolic levels of PER than cells in which PER protein was associated with TIM protein. The doubletime gene regulates the expression of PER, which in turn controls circadian rhythm. Young's team later cloned the DBT gene and found that the DBT protein was a kinase that specifically phosphorylated PER proteins; they concluded that PER proteins were not phosphorylated by DBT protein in DBTP mutants.
Gene
The gene is located on the right arm of chromosome 3. The mRNA transcript for DBT is 3.2 kilobase pairs long and contains four exons and three introns.
Protein
The DBT protein is composed of 440 amino acids. The protein has an ATP binding site, serine/threonine kinase catalytic domains, and several potential phosphorylation sites, including a site for autophosphorylation.
Function
Regulation of circadian rhythm
In Drosophila, a molecularly-driven clock mechanism works to regulate circadian rhythms such as locomotor activity and eclosion by oscillating the levels of the proteins PER and TIM via positive and negative feedback loops. Dbt produces a kinase that phosphorylates PER to regulate its accumulation in the cytoplasm and its degradation in the nucleus. In the cytoplasm, PER and TIM levels rise during the night, and DBT binds to PER while levels of TIM are still low. DBT phosphorylates the cytoplasmic PER, which leads to its degradation. When TIM accumulates, PER and TIM bind, which inhibits the degradation of PER. This cytoplasmic PER degradation, followed by accumulation, causes a four to six hour delay between the levels of per mRNA and PER protein. The PER/TIM complex, still bound to DBT, migrates into the nucleus, where it suppresses the transcription of per and tim. TIM is lost from the complex, following which DBT phosphorylates PER, degrading it. This degradation relieves the repression, allowing transcription of the genes controlled by CLOCK to resume under circadian control.
Levels of DBT mRNA and DBT protein are constant throughout the day and are not controlled by PER/TIM levels; however, the location and concentration of the DBT protein within the cell change throughout the day. It is consistently present in the nucleus at varying levels, but in the cytoplasm it is predominantly present in the late day and early night, when PER and TIM levels peak.
Before DBT begins phosphorylating PER, a different protein called NEMO/NLK kinase begins phosphorylating PER at its per-short domain. The phosphorylation stimulates DBT to begin phosphorylating PER at multiple nearby sites. In total, there are about 25-30 phosphorylation sites on PER. The phosphorylated PER binds to the F-box protein SLIMB, and it is then targeted for degradation through the ubiquitin-proteasome pathway; Syed and Saez conclude the phosphorylation of PER by DBT leads to a decrease in PER abundance, which is a necessary step in the function of the organism's internal clock.
The activity of DBT on PER is aided by the activity of the proteins CKII and SHAGGY (SGG), as well as a rhythmically expressed protein phosphatase that acts as an antagonist. It is possible, but currently unknown, if DBT regulates other functions of PER or other circadian proteins. There has been no evidence that suggests that DBT binds directly to TIM. The only kinase known to directly phosphorylate TIM is the SGG kinase protein, but this does not majorly affect TIM stability, suggesting the presence of a different kinase or phosphatase. DBT is involved in recruiting other kinases into PER repression complexes. These kinases phosphorylate the transcription factor CLK, which releases the CLK-CYC complex from the E-Box and represses transcription.
Mutant alleles
There are three primary mutant alleles of DBT: DBTS, which shortens the organism's free-running period (its internal period under constant conditions); DBTL, which lengthens the free-running period; and DBTP, which causes pupal lethality and eliminates circadian cycling proteins and the transcription of per and tim. All mutants except for DBTS produce differential PER degradation that directly corresponds with their phenotypic behavior. DBTS PER degradation resembles wild-type DBT, suggesting that DBTS does not affect the clock through this degradation mechanism. It has been suggested that DBTS works by acting as a repressor or producing a different phosphorylation pattern of the substrate. DBTS causes early termination of per transcription.
The DBTL mutation causes the period of PER and TIM oscillations and animal behavioral activity to lengthen to about 27 hours. This extended rhythm is caused by a decreased rate of phosphorylation of PER due to lower DBT kinase activity levels. This mutation is caused by a substitution in the protein sequence (Met-80→Ile mutation).
The DBTS mutation causes a PER/TIM oscillation period of 18–20 hours. There is no current evidence for the mechanism affected by the mutation, but it is caused by a substitution in the protein sequence (Pro-47→ Ser mutation).
Another DBT mutation is DBTAR, which causes arrhythmic activities in Drosophila. It is a hypomorphic allele resulting from a His 126→Tyr mutation. Homozygous flies with this mutation are viable but arrhythmic, whereas DBTAR/+ heterozygotes have extra-long periods of about 29 hours, and their DBT kinase activity is reduced to the lowest rate of all the DBT alleles.
Noncircadian roles
Clock gene mutations, including those in Drosophila's DBT, alter the sensitization of drug-induced locomotor activity after repeated exposure to psychostimulants. Drosophila with mutant alleles of DBT failed to display locomotor sensitization in response to repeated cocaine exposure. Additionally, there is experimental evidence that this gene functions in 13 unique biological processes: biological regulation, phosphorus metabolic process, the establishment of planar polarity, positive regulation of the biological process, cellular process, single-organism developmental process, response to stimulus, response to an organic substance, sensory organ development, macromolecule modification, growth, cellular component organization or biogenesis, and rhythmic process. The gene's alternative name, discs overgrown, refers to its role as a cell growth-regulating gene that has strong effects on cell survival and growth control in imaginal discs during the larval stage of the fly. The protein is necessary in the mechanism linking cell survival during proliferation and growth arrest.
Noncatalytic role
The DBT protein may play a noncatalytic role in attracting kinases that phosphorylate CLOCK (CLK), an activator of transcription. DBT has a noncatalytic role in recruiting kinases, some of which have not yet been discovered, into the transcription-translation feedback loop. DBT's catalytic activity is not associated with the phosphorylation of CLK or its transcriptional repression. PER phosphorylation by DBT is integral to repressing CLK-dependent transcription. The DBT protein is noncatalytic in recruiting additional kinases that indirectly phosphorylate CLK, which downregulates transcription. A similar pathway exists in mammals due to the mechanistic conservation of the CKI homolog. In 2004, Drosophila cells were observed to have reduced CKI-7 activity in DBTs and DBTl mutants.
Mammalian homologs
Casein kinase I
The casein kinase 1 (CK1) family of kinases comprises a highly-conserved group of proteins found in organisms ranging from Arabidopsis to Drosophila to humans. Since DBT is a member of this family, it has prompted questions regarding the roles of these related genes in other model systems. Within mammals, there are seven CK1 isoforms, each with distinct roles surrounding protein phosphorylation. CK1ε was found to be the most homologous to DBT with a similarity of 86%. This genetic similarity extends to functional homology; for instance, while phosphorylation by DBT in Drosophila targets PER proteins for proteasome degradation, CK1ε phosphorylation marks mammalian PER proteins for degradation by reducing their stability. Although DBT and CK1ε play similar roles in their respective organisms, studies examining the effectiveness of CK1ε in Drosophila have revealed they are not completely functionally interchangeable, though their functions are highly analogous; for example, CK1ε has been shown to reduce the half-life of mPER1, one of the three mammalian PER homologs. The nuclear localization of mPER proteins is associated with phosphorylation, underscoring another vital function of the CK1ε protein.
Role of CKIε
Initially, the role of CKIε within the circadian clock of mammals was discovered due to a mutation in hamsters. The tau mutation in the Syrian golden hamster was the first to show a heritable abnormality of circadian rhythms in mammals. Hamsters with the mutation exhibit a shorter period than the wild-type. Heterozygotes have a period of about 22 hours, whereas the period of homozygous mutants is about 20 hours. Because of previous research investigating the role of DBT in establishing periods, the tau mutation was found to be at the same locus as the CKIε gene. The mutation is similar to the mutations DBTS and DBTL, which both affect the internal period of Drosophila. However, the forces driving these changes in the period seem different. It was found that the point mutation resulting in the tau mutant decreased the activity of the CKIε kinase in vitro. In flies, the DBTL mutation is associated with a decrease in DBT activity and a longer period, which is consistent with another experiment done on hamsters that showed a lengthening of the period caused by CKI inhibition. To investigate this discrepancy, researchers studied the half-life of PER2 in relation to wild-type CKIε, CKIεtau, and CKIε (K38A), which is a kinase-inactive mutant. The results indicated that the tau mutation was actually a gain-of-function mutation that caused the more rapid degradation of the PER proteins.
Importance of Rhythmic Phosphorylation
CKIε also plays a role in humans concerning Familial Advanced Sleep Phase Syndrome, where individuals exhibit a significantly shorter circadian period compared to the general population. The anomaly does not appear to be due to a mutation in the CKIε protein, but rather in the binding site for phosphorylation on the PER2 protein.
Kinase activity is implicated in the nuclear localization of PER and other genes pivotal to circadian rhythmicity.
There is a proposition that rhythmic phosphorylation could be a fundamental driver of circadian clocks. Traditionally, the transcription-translation negative feedback loop has been recognized as the source of oscillations and rhythms in biological clocks. However, in vitro experiments showcasing the phosphorylation of the cyanobacterial protein KaiC demonstrated that rhythmic oscillations could persist even in the absence of transcription or translation processes.
References
External links
circadian rhythm
Protein kinases | Doubletime (gene) | Biology | 2,927 |
2,482,408 | https://en.wikipedia.org/wiki/Photochromism | Photochromism is the reversible change of color upon exposure to light. It is a transformation of a chemical species (photoswitch) between two forms by the absorption of electromagnetic radiation (photoisomerization), where the two forms have different absorption spectra.
Applications
Sunglasses
One of the most famous reversible photochromic applications is color-changing lenses for sunglasses. The largest limitation in using photochromic technology is that the materials cannot be made stable enough to withstand thousands of hours of outdoor exposure, so long-term outdoor applications are not appropriate at this time.
The switching speed of photochromic dyes is highly sensitive to the rigidity of the environment around the dye. As a result, they switch most rapidly in solution and slowest in a rigid environment such as a polymer lens. In 2005 it was reported that attaching flexible polymers with low glass transition temperature (for example siloxanes or polybutyl acrylate) to the dyes allows them to switch much more rapidly in a rigid lens. Some spirooxazines with siloxane polymers attached switch at near solution-like speeds even though they are in a rigid lens matrix.
Supramolecular chemistry
Photochromic units have been employed extensively in supramolecular chemistry. Their ability to give a light-controlled reversible shape change means that they can be used to make or break molecular recognition motifs, or to cause a consequent shape change in their surroundings. Thus, photochromic units have been demonstrated as components of molecular switches. The coupling of photochromic units to enzymes or enzyme cofactors even provides the ability to reversibly turn enzymes "on" and "off", by altering their shape or orientation in such a way that their functions are either "working" or "broken".
Data storage
The possibility of using photochromic compounds for data storage was first suggested in 1956 by Yehuda Hirshberg. Since that time, there have been many investigations by various academic and commercial groups, particularly in the area of 3D optical data storage which promises discs that can hold a terabyte of data. Initially, issues with thermal back-reactions and destructive reading dogged these studies, but more recently more stable systems have been developed.
Novelty items
Reversible photochromics are also found in applications such as toys, cosmetics, clothing and industrial applications. If necessary, they can be made to change between desired colors by combination with a permanent pigment.
Solar energy storage
Researchers at the Center for Exploitation of Solar Energy at the University of Copenhagen Department of Chemistry are studying the photochromic dihydroazulene–vinylheptafulvene system as a method to harvest and store solar energy.
History
Photochromism was discovered in the late 1880s, including work by Markwald, who studied the reversible change of color of 2,3,4,4-tetrachloronaphthalen-1(4H)-one in the solid state. He labeled this phenomenon "phototropy", and this name was used until the 1950s when Yehuda Hirshberg, of the Weizmann Institute of Science in Israel proposed the term "photochromism". Photochromism can take place in both organic and inorganic compounds, and also has its place in biological systems (for example retinal in the vision process).
Overview
Photochromism does not have a rigorous definition, but is usually used to describe compounds that undergo a reversible photochemical reaction where an absorption band in the visible part of the electromagnetic spectrum changes dramatically in strength or wavelength. In many cases, an absorbance band is present in only one form. The degree of change required for a photochemical reaction to be dubbed "photochromic" is that which appears dramatic by eye, but in essence there is no dividing line between photochromic reactions and other photochemistry. Therefore, while the trans-cis isomerization of azobenzene is considered a photochromic reaction, the analogous reaction of stilbene is not. Since photochromism is just a special case of a photochemical reaction, almost any photochemical reaction type may be used to produce photochromism with appropriate molecular design. Some of the most common processes involved in photochromism are pericyclic reactions, cis-trans isomerizations, intramolecular hydrogen transfer, intramolecular group transfers, dissociation processes and electron transfers (oxidation-reduction).
Another requirement of photochromism is that the two states of the molecule should be thermally stable under ambient conditions for a reasonable time. All the same, nitrospiropyran (which back-isomerizes in the dark over ~10 minutes at room temperature) is considered photochromic. All photochromic molecules back-isomerize to their more stable form at some rate, and this back-isomerization is accelerated by heating. There is therefore a close relationship between photochromic and thermochromic compounds. The timescale of thermal back-isomerization is important for applications, and may be molecularly engineered. Photochromic compounds considered to be "thermally stable" include some diarylethenes, which do not back isomerize even after heating at 80 °C for 3 months.
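For a photochrome whose coloured form fades by a simple first-order thermal process (an assumption that holds for many, though not all, systems), the fading timescale and its temperature dependence follow the standard first-order and Arrhenius expressions:

    [B](t) = [B]_0 \, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}, \qquad k = A \, e^{-E_a / RT}

The exponential dependence of the rate constant k on temperature is why heating accelerates the back-isomerization described above, and why engineering the activation energy is one route to tuning the thermal stability of the coloured form.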
Since photochromic chromophores are dyes, and operate according to well-known reactions, their molecular engineering to fine-tune their properties can be achieved relatively easily using known design models, quantum mechanics calculations, and experimentation. In particular, the tuning of absorbance bands to particular parts of the spectrum and the engineering of thermal stability have received much attention.
Sometimes, and particularly in the dye industry, the term irreversible photochromic is used to describe materials that undergo a permanent color change upon exposure to ultraviolet or visible light radiation. Because by definition photochromics are reversible, there is technically no such thing as an "irreversible photochromic"—this is loose usage, and these compounds are better referred to as "photochangeable" or "photoreactive" dyes.
Apart from the qualities already mentioned, several other properties of photochromics are important for their use. These include quantum yield, fatigue resistance, photostationary state, and polarity and solubility. The quantum yield of the photochemical reaction determines the efficiency of the photochromic change with respect to the amount of light absorbed. The quantum yield of isomerization can be strongly dependent on conditions. In photochromic materials, fatigue refers to the loss of reversibility by processes such as photodegradation, photobleaching, photooxidation, and other side reactions. All photochromics suffer fatigue to some extent, and its rate is strongly dependent on the activating light and the conditions of the sample. Photochromic materials have two states, and their interconversion can be controlled using different wavelengths of light. Excitation with any given wavelength of light will result in a mixture of the two states at a particular ratio, called the photostationary state. In a perfect system, there would exist wavelengths that can be used to provide 1:0 and 0:1 ratios of the isomers, but in real systems this is not possible, since the active absorbance bands always overlap to some extent. In order to incorporate photochromics in working systems, they suffer the same issues as other dyes. They are often charged in one or more state, leading to very high polarity and possible large changes in polarity. They also often contain large conjugated systems that limit their solubility.
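The composition of the photostationary state can be estimated from the absorption coefficients and quantum yields of the two forms at the irradiation wavelength. A commonly used approximation, assuming the thermal back-reaction is negligible during irradiation, is:

    \frac{[B]_{\mathrm{pss}}}{[A]_{\mathrm{pss}}} = \frac{\varepsilon_A(\lambda)\,\Phi_{A \to B}}{\varepsilon_B(\lambda)\,\Phi_{B \to A}}

Here ε_A(λ) and ε_B(λ) are the molar absorption coefficients of forms A and B at the irradiation wavelength and Φ are the quantum yields of the forward and reverse photoreactions. Because the active absorbance bands of real photochromes overlap, neither ratio is ever strictly 1:0 or 0:1, as noted above.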
Tenebrescence
Tenebrescence, also known as reversible photochromism, is the ability of minerals to change color when exposed to light. The effect can be repeated indefinitely, but is destroyed by heating.
Tenebrescent minerals include hackmanite, spodumene and tugtupite.
Photochromic complexes
A photochromic complex is a kind of chemical compound that has photoresponsive parts on its ligand. These complexes have a specific structure: photoswitchable organic compounds are attached to metal complexes. For the photocontrollable parts, thermally and photochemically stable chromophores (azobenzene, diarylethene, spiropyran, etc.) are usually used. And for the metal complexes, a wide variety of compounds that have various functions (redox response, luminescence, magnetism, etc.) are applied.
The photochromic parts and metal parts are so close that they can affect each other's molecular orbitals. The physical properties shown by one part of these compounds (i.e., the chromophore or the metal) can thus be controlled by switching the other part with external stimuli. For example, photoisomerization behaviors of some complexes can be switched by oxidation and reduction of their metal parts. Some other compounds can be changed in their luminescence behavior, magnetic interaction of metal sites, or stability of metal-to-ligand coordination by photoisomerization of their photochromic parts.
Classes of photochromic materials
Photochromic molecules can belong to various classes: triarylmethanes, stilbenes, azastilbenes, nitrones, fulgides, spiropyrans, naphthopyrans, spiro-oxazines, quinones and others.
Spiropyrans and spirooxazines
One of the oldest, and perhaps the most studied, families of photochromes are the spiropyrans. Very closely related to these are the spirooxazines. For example, the spiro form of an oxazine is a colorless leuco dye; the conjugated system of the oxazine and another aromatic part of the molecule is separated by a sp³-hybridized "spiro" carbon. After irradiation with UV light, the bond between the spiro-carbon and the oxazine breaks, the ring opens, the spiro carbon achieves sp² hybridization and becomes planar, the aromatic group rotates, aligns its π-orbitals with the rest of the molecule, and a conjugated system forms with ability to absorb photons of visible light, and therefore appear colorful. When the UV source is removed, the molecules gradually relax to their ground state, the carbon-oxygen bond reforms, the spiro-carbon becomes sp³ hybridized again, and the molecule returns to its colorless state.
This class of photochromes in particular is thermodynamically unstable in one form and reverts to the stable form in the dark unless cooled to low temperatures. Their lifetime can also be affected by exposure to UV light. Like most organic dyes they are susceptible to degradation by oxygen and free radicals. Incorporation of the dyes into a polymer matrix, adding a stabilizer, or providing a barrier to oxygen and chemicals by other means prolongs their lifetime.
Diarylethenes
The "diarylethenes" were first introduced by Irie and have since gained widespread interest, largely on account of their high thermodynamic stability. They operate by means of a 6-pi electrocyclic reaction, the thermal analog of which is impossible due to steric hindrance. Pure photochromic dyes usually have the appearance of a crystalline powder, and in order to achieve the color change, they usually have to be dissolved in a solvent or dispersed in a suitable matrix. However, some diarylethenes have so little shape change upon isomerization that they can be converted while remaining in crystalline form.
Azobenzenes
The photochromic trans-cis isomerization of azobenzenes has been used extensively in molecular switches, often taking advantage of its shape change upon isomerization to produce a supramolecular result. In particular, azobenzenes incorporated into crown ethers give switchable receptors and azobenzenes in monolayers can provide light-controlled changes in surface properties.
Photochromic quinones
Some quinones, and phenoxynaphthacene quinone in particular, have photochromicity resulting from the ability of the phenyl group to migrate from one oxygen atom to another. Quinones with good thermal stability have been prepared, and they also have the additional feature of redox activity, leading to the construction of many-state molecular switches that operate by a mixture of photonic and electronic stimuli.
Inorganic photochromics
Many inorganic substances also exhibit photochromic properties, often with much better resistance to fatigue than organic photochromics. In particular, silver chloride is extensively used in the manufacture of photochromic lenses. Other silver and zinc halides are also photochromic. Yttrium oxyhydride is another inorganic material with photochromic properties.
Photochromic coordination compounds
Photochromic coordination complexes are relatively rare in comparison to the organic compounds listed above. There are two major classes of photochromic coordination compounds: those based on sodium nitroprusside and the ruthenium sulfoxide compounds. The ruthenium sulfoxide complexes were created and developed by Rack and coworkers. The mode of action is an excited state isomerization of a sulfoxide ligand on a ruthenium polypyridine fragment from S to O or O to S. The difference in bonding between Ru–S and Ru–O leads to the dramatic color change and change in Ru(III/II) reduction potential. The ground state is always S-bonded and the metastable state is always O-bonded. Typically, absorption maxima changes of nearly 100 nm are observed. The metastable states (O-bonded isomers) of this class often revert thermally to their respective ground states (S-bonded isomers), although a number of examples exhibit two-color reversible photochromism. Ultrafast spectroscopy of these compounds has revealed exceptionally fast isomerization lifetimes ranging from 1.5 nanoseconds to 48 picoseconds.
See also
Photosensitive glass
Hexaarylbiimidazole
References
Photochemistry
Chromism
Minerals | Photochromism | Physics,Chemistry,Materials_science,Engineering | 2,978 |
7,321,060 | https://en.wikipedia.org/wiki/Interferometric%20synthetic-aperture%20radar | Interferometric synthetic aperture radar, abbreviated InSAR (or deprecated IfSAR), is a radar technique used in geodesy and remote sensing. This geodetic method uses two or more synthetic aperture radar (SAR) images to generate maps of surface deformation or digital elevation, using differences in the phase of the waves returning to the satellite or aircraft. The technique can potentially measure millimetre-scale changes in deformation over spans of days to years. It has applications for geophysical monitoring of natural hazards, for example earthquakes, volcanoes and landslides, and in structural engineering, in particular monitoring of subsidence and structural stability.
Technique
Synthetic aperture radar
Synthetic aperture radar (SAR) is a form of radar in which sophisticated processing of radar data is used to produce a very narrow effective beam. It can be used to form images of relatively immobile targets; moving targets can be blurred or displaced in the formed images. SAR is a form of active remote sensing – the antenna transmits radiation that is reflected from the image area, as opposed to passive sensing, where the reflection is detected from ambient illumination. SAR image acquisition is therefore independent of natural illumination and images can be taken at night. Radar uses electromagnetic radiation at microwave frequencies; the atmospheric absorption at typical radar wavelengths is very low, meaning observations are not prevented by cloud cover.
Phase
SAR makes use of the amplitude and the absolute phase of the return signal data. In contrast, interferometry uses differential phase of the reflected radiation, either from multiple passes along the same trajectory and/or from multiple displaced phase centers (antennas) on a single pass. Since the outgoing wave is produced by the satellite, the phase is known, and can be compared to the phase of the return signal. The phase of the return wave depends on the distance to the ground, since the path length to the ground and back will consist of a number of whole wavelengths plus some fraction of a wavelength. This is observable as a phase difference or phase shift in the returning wave. The total distance to the satellite (i.e., the number of whole wavelengths) is known based on the time that it takes for the energy to make the round trip back to the satellite—but it is the extra fraction of a wavelength that is of particular interest and is measured to great accuracy.
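This relationship between range and phase can be written compactly. The sketch below is a standard textbook form rather than the convention of any particular mission or processor; λ is the radar wavelength, R the slant range from antenna to ground, and the factor 4π arises from the two-way travel path:

    \varphi = \frac{4\pi}{\lambda} R + \varphi_{\mathrm{scatter}},
    \qquad
    \Delta\varphi = \varphi_1 - \varphi_2 = \frac{4\pi}{\lambda}\,(R_1 - R_2)

Sign conventions differ between processors, and the per-pixel scattering term φ_scatter is discussed in the next section; it cancels in the interferogram only if the ground is unchanged between the two acquisitions.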
In practice, the phase of the return signal is affected by several factors, which together can make the absolute phase return in any SAR data collection essentially arbitrary, with no correlation from pixel to pixel. To get any useful information from the phase, some of these effects must be isolated and removed. Interferometry uses two images of the same area taken from the same position (or, for topographic applications, slightly different positions) and finds the difference in phase between them, producing an image known as an interferogram. This is measured in radians of phase difference and, because of the cyclic nature of phase, is recorded as repeating fringes that each represent a full cycle.
Factors affecting phase
The most important factor affecting the phase is the interaction with the ground surface. The phase of the wave may change on reflection, depending on the properties of the material. The reflected signal back from any one pixel is the summed contribution to the phase from many smaller 'targets' in that ground area, each with different dielectric properties and distances from the satellite, meaning the returned signal is arbitrary and completely uncorrelated with that from adjacent pixels. Importantly though, it is consistent – provided nothing on the ground changes the contributions from each target should sum identically each time, and hence be removed from the interferogram.
Once the ground effects have been removed, the major signal present in the interferogram is a contribution from orbital effects. For interferometry to work, the satellites must be as close as possible to the same spatial position when the images are acquired. This means that images from two satellite platforms with different orbits cannot be compared, and for a given satellite data from the same orbital track must be used. In practice the perpendicular distance between them, known as the baseline, is often known to within a few centimetres but can only be controlled on a scale of tens to hundreds of metres. This slight difference causes a regular difference in phase that changes smoothly across the interferogram and can be modelled and removed.
The slight difference in satellite position also alters the distortion caused by topography, meaning an extra phase difference is introduced by a stereoscopic effect. The longer the baseline, the smaller the topographic height needed to produce a fringe of phase change – known as the altitude of ambiguity. This effect can be exploited to calculate the topographic height, and used to produce a digital elevation model (DEM).
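A commonly quoted approximation for the altitude of ambiguity in repeat-pass interferometry is shown below; the exact prefactor depends on the geometric conventions used, and the factor of 2 is dropped for single-pass, two-antenna systems:

    h_a \approx \frac{\lambda \, R \, \sin\theta}{2 \, B_\perp}

where R is the slant range, θ the incidence angle and B_⊥ the perpendicular baseline. The formula makes explicit why longer baselines give a smaller topographic height per fringe.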
If the height of the topography is already known, the topographic phase contribution can be calculated and removed. This has traditionally been done in two ways. In the two-pass method, elevation data from an externally derived DEM is used in conjunction with the orbital information to calculate the phase contribution. In the three-pass method two images acquired a short time apart are used to create an interferogram, which is assumed to have no deformation signal and therefore represent the topographic contribution. This interferogram is then subtracted from a third image with a longer time separation to give the residual phase due to deformation.
Once the ground, orbital and topographic contributions have been removed the interferogram contains the deformation signal, along with any remaining noise (see Difficulties below). The signal measured in the interferogram represents the change in phase caused by an increase or decrease in distance from the ground pixel to the satellite, therefore only the component of the ground motion parallel to the satellite line of sight vector will cause a phase difference to be observed. For sensors like ERS with a small incidence angle this measures vertical motion well, but is insensitive to horizontal motion perpendicular to the line of sight (approximately north–south). It also means that vertical motion and components of horizontal motion parallel to the plane of the line of sight (approximately east–west) cannot be separately resolved.
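The projection of ground motion onto the line of sight can be illustrated with a short sketch. The code below is a minimal, illustrative example; the incidence angle, look azimuth and displacement vector are assumptions chosen for illustration, not values from any particular mission:

    import numpy as np

    # Assumed viewing geometry (illustrative values only).
    incidence = np.deg2rad(23.0)     # angle of the look direction from vertical
    look_azimuth = np.deg2rad(78.0)  # azimuth of the look direction, clockwise from north

    # Unit vector from the ground toward the satellite in a local
    # east-north-up (ENU) frame, using one common convention.
    los = np.array([
        np.sin(incidence) * np.sin(look_azimuth),  # east component
        np.sin(incidence) * np.cos(look_azimuth),  # north component
        np.cos(incidence),                         # up component
    ])

    # Hypothetical ground motion: 10 mm east, 5 mm north, 20 mm of subsidence.
    d_enu = np.array([0.010, 0.005, -0.020])  # metres

    # InSAR observes only the component of motion along the line of sight.
    d_los = d_enu @ los
    print(f"Line-of-sight displacement: {1000 * d_los:.1f} mm")

With a small incidence angle the vertical component dominates the projection, which is why near-vertical motion is measured well and motion perpendicular to the line of sight is not, as described above.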
One fringe of phase difference is generated by a ground motion of half the radar wavelength, since this corresponds to a whole wavelength increase in the two-way travel distance. Phase shifts are only resolvable relative to other points in the interferogram. Absolute deformation can be inferred by assuming one area in the interferogram (for example a point away from expected deformation sources) experienced no deformation, or by using a ground control (GPS or similar) to establish the absolute movement of a point.
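As a worked example, the sketch below converts an unwrapped interferometric phase into a line-of-sight displacement; the wavelength is an assumed, roughly C-band value rather than the exact wavelength of any specific sensor:

    import numpy as np

    wavelength = 0.056  # metres; an assumed value, roughly C-band

    def phase_to_los_displacement(unwrapped_phase):
        """Convert unwrapped interferometric phase (radians) to line-of-sight
        displacement in metres. One fringe (2*pi) corresponds to half a
        wavelength of motion because the path change is two-way."""
        return unwrapped_phase * wavelength / (4 * np.pi)

    # Three fringes of phase change correspond to about 8.4 cm of LOS motion.
    print(phase_to_los_displacement(3 * 2 * np.pi))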
Difficulties
A variety of factors govern the choice of images which can be used for interferometry. The simplest is data availability – radar instruments used for interferometry commonly don't operate continuously, acquiring data only when programmed to do so. For future requirements it may be possible to request acquisition of data, but for many areas of the world archived data may be sparse. Data availability is further constrained by baseline criteria. Availability of a suitable DEM may also be a factor for two-pass InSAR; commonly 90 m SRTM data may be available for many areas, but at high latitudes or in areas of poor coverage alternative datasets must be found.
A fundamental requirement of the removal of the ground signal is that the sum of phase contributions from the individual targets within the pixel remains constant between the two images and is completely removed. However, there are several factors that can cause this criterion to fail. Firstly the two images must be accurately co-registered to a sub-pixel level to ensure that the same ground targets are contributing to that pixel. There is also a geometric constraint on the maximum length of the baseline – the difference in viewing angles must not cause phase to change over the width of one pixel by more than a wavelength. The effects of topography also influence the condition, and baselines need to be shorter if terrain gradients are high. Where co-registration is poor or the maximum baseline is exceeded the pixel phase will become incoherent – the phase becomes essentially random from pixel to pixel rather than varying smoothly, and the area appears noisy. This is also true for anything else that changes the contributions to the phase within each pixel, for example changes to the ground targets in each pixel caused by vegetation growth, landslides, agriculture or snow cover.
Another source of error present in most interferograms is caused by the propagation of the waves through the atmosphere. If the wave travelled through a vacuum, it should theoretically be possible (subject to sufficient accuracy of timing) to use the two-way travel-time of the wave in combination with the phase to calculate the exact distance to the ground. However, the velocity of the wave through the atmosphere is lower than the speed of light in vacuum, and depends on air temperature, pressure and the partial pressure of water vapour. It is this unknown phase delay that prevents the integer number of wavelengths being calculated. If the atmosphere were horizontally homogeneous over the length scale of an interferogram and vertically over that of the topography then the effect would simply be a constant phase difference between the two images which, since phase difference is measured relative to other points in the interferogram, would not contribute to the signal. However, the atmosphere is laterally heterogeneous on length scales both larger and smaller than typical deformation signals. This spurious signal can appear completely unrelated to the surface features of the image; however, in other cases the atmospheric phase delay is caused by vertical inhomogeneity at low altitudes and this may result in fringes appearing to correspond with the topography.
Persistent scatterer InSAR
Persistent or permanent scatterer techniques are a relatively recent development from conventional InSAR, and rely on studying pixels which remain coherent over a sequence of interferograms. In 1999, researchers at Politecnico di Milano, Italy, developed a new multi-image approach in which one searches the stack of images for objects on the ground providing consistent and stable radar reflections back to the satellite. These objects could be the size of a pixel or, more commonly, sub-pixel sized, and are present in every image in the stack. That specific implementation is patented.
Some research centres and companies were inspired to develop variations of their own algorithms that would also overcome InSAR's limitations. In scientific literature, these techniques are collectively referred to as persistent scatterer interferometry or PSI techniques. The term persistent scatterer interferometry (PSI) was proposed by the European Space Agency (ESA) to define the second generation of radar interferometry techniques. This term is nowadays commonly accepted by the scientific and end-user communities.
Commonly such techniques are most useful in urban areas with many permanent structures, for example the PSI studies of European geohazard sites undertaken by the Terrafirma project. The Terrafirma project provides a ground motion hazard information service, distributed throughout Europe via national geological surveys and institutions. The objective of this service is to help save lives, improve safety, and reduce economic loss through the use of state-of-the-art PSI information. Over the last 9 years this service has supplied information relating to urban subsidence and uplift, slope stability and landslides, seismic and volcanic deformation, coastlines and flood plains.
Producing interferograms
The processing chain used to produce interferograms varies according to the software used and the precise application but will usually include some combination of the following steps.
Two SAR images are required to produce an interferogram; these may be obtained pre-processed, or produced from raw data by the user prior to InSAR processing. The two images must first be co-registered, using a correlation procedure to find the offset and difference in geometry between the two amplitude images. One SAR image is then re-sampled to match the geometry of the other, meaning each pixel represents the same ground area in both images. The interferogram is then formed by cross-multiplication of each pixel in the two images, and the interferometric phase due to the curvature of the Earth is removed, a process referred to as flattening. For deformation applications a DEM can be used in conjunction with the baseline data to simulate the contribution of the topography to the interferometric phase, this can then be removed from the interferogram.
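A minimal sketch of the cross-multiplication step is shown below, assuming two already co-registered single-look complex (SLC) images; here they are simulated arrays, and the flattening and topographic corrections described above are omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    shape = (512, 512)

    # Stand-ins for two co-registered single-look complex (SLC) images.
    slc1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    slc2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

    # Pixel-wise cross-multiplication of one image with the conjugate of the other.
    interferogram = slc1 * np.conj(slc2)

    phase = np.angle(interferogram)   # wrapped interferometric phase in (-pi, pi]
    amplitude = np.abs(interferogram)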
Once the basic interferogram has been produced, it is commonly filtered using an adaptive power-spectrum filter to amplify the phase signal. For most quantitative applications the consecutive fringes present in the interferogram will then have to be unwrapped, which involves interpolating over the 0 to 2π phase jumps to produce a continuous deformation field. At some point, before or after unwrapping, incoherent areas of the image may be masked out. The final processing stage involves geocoding the image, which resamples the interferogram from the acquisition geometry (related to direction of satellite path) into the desired geographic projection.
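What "unwrapping" means can be illustrated in one dimension with NumPy's built-in unwrap function; real interferograms require dedicated two-dimensional unwrapping algorithms, so this is only a conceptual sketch:

    import numpy as np

    true_phase = np.linspace(0, 6 * np.pi, 200)   # a smooth, continuous phase ramp
    wrapped = np.angle(np.exp(1j * true_phase))   # wrapped into (-pi, pi]
    unwrapped = np.unwrap(wrapped)                # 2*pi jumps removed

    print(np.allclose(unwrapped, true_phase))     # True: the continuous field is recovered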
Hardware
Spaceborne
Early exploitation of satellite-based InSAR included use of Seasat data in the 1980s, but the potential of the technique was expanded in the 1990s, with the launch of ERS-1 (1991), JERS-1 (1992), RADARSAT-1 and ERS-2 (1995). These platforms provided the stable, well-defined orbits and short baselines necessary for InSAR. More recently, the 11-day NASA STS-99 mission in February 2000 used a SAR antenna mounted on the Space Shuttle to gather data for the Shuttle Radar Topography Mission (SRTM). In 2002 ESA launched the ASAR instrument, designed as a successor to ERS, aboard Envisat. While the majority of InSAR to date has utilized the C-band sensors, recent missions such as the ALOS PALSAR, TerraSAR-X and COSMO-SkyMed are expanding the available data in the L- and X-band.
Sentinel-1A and Sentinel-1B, both C-band sensors, were launched by the ESA in 2014 and 2016, respectively. Together, they provide InSAR coverage on a global scale and on a six-day repeat cycle.
Airborne
Airborne InSAR data acquisition systems are built by companies such as the American Intermap, the German AeroSensing, and the Brazilian OrbiSat.
Terrestrial or ground-based
Terrestrial or ground-based SAR interferometry (TInSAR or GBInSAR) is a remote sensing technique for the displacement monitoring of slopes, rock scarps, volcanoes, landslides, buildings, infrastructures etc. This technique is based on the same operational principles as satellite SAR interferometry, but the synthetic aperture of the radar (SAR) is obtained by an antenna moving on a rail instead of a satellite moving around an orbit. The SAR technique allows a 2D radar image of the investigated scenario to be achieved, with a high range resolution (along the instrumental line of sight) and cross-range resolution (along the scan direction). Two antennas respectively emit and receive microwave signals and, by calculating the phase difference between two measurements taken at two different times, it is possible to compute the displacement of all the pixels of the SAR image. The accuracy in the displacement measurement is of the same order of magnitude as the EM wavelength and depends also on the specific local and atmospheric conditions.
Applications
Tectonic
InSAR can be used to measure tectonic deformation, for example ground movements due to earthquakes. It was first used for the 1992 Landers earthquake, but has since been utilised extensively for a wide variety of earthquakes all over the world. In particular the 1999 Izmit and 2003 Bam earthquakes were extensively studied. InSAR can also be used to monitor creep and strain accumulation on faults.
Volcanic
InSAR can be used in a variety of volcanic settings, including deformation associated with eruptions, inter-eruption strain caused by changes in magma distribution at depth, gravitational spreading of volcanic edifices, and volcano-tectonic deformation signals. Early work on volcanic InSAR included studies on Mount Etna, and Kilauea, with many more volcanoes being studied as the field developed. The technique is now widely used for academic research into volcanic deformation, although its use as an operational monitoring technique for volcano observatories has been limited by issues such as orbital repeat times, lack of archived data, coherence and atmospheric errors. Recently InSAR has been used to study rifting processes in Ethiopia.
Subsidence
Ground subsidence from a variety of causes has been successfully measured using InSAR, in particular subsidence caused by oil or water extraction from underground reservoirs, subsurface mining and collapse of old mines. Thus, InSAR has become an indispensable tool to satisfactorily address many subsidence studies. Tomás et al. performed a cost analysis that identified the strongest points of InSAR techniques compared with other conventional techniques: (1) higher data acquisition frequency and spatial coverage; and (2) lower annual cost per measurement point and per square kilometre.
Landslides
Although the InSAR technique can present some limitations when applied to landslides, it can nonetheless be used to monitor such landscape features.
Tomás et al. conducted a bibliometric study on the trends in publications related to landslides and InSAR. They found that the publication trends follow a power model, indicating that despite its inception in the last century, InSAR is a growing topical issue and has become established as a valuable tool for studying landslides.
Ice flow
Glacial motion and deformation have been successfully measured using satellite interferometry. The technique allows remote, high-resolution measurement of changes in glacial structure, ice flow, and shifts in ice dynamics, all of which agree closely with ground observations.
Infrastructure and building monitoring
InSAR can also be used to monitor the stability of built structures. Very high resolution SAR data (such as derived from the TerraSAR-X StripMap mode or COSMO-Skymed HIMAGE mode) are especially suitable for this task. InSAR is used for monitoring highway and railway settlements, dike stability, forensic engineering and many other uses.
DEM generation
Interferograms can be used to produce digital elevation models (DEMs) using the stereoscopic effect caused by slight differences in observation position between the two images. When using two images produced by the same sensor with a separation in time, it must be assumed other phase contributions (for example from deformation or atmospheric effects) are minimal. In 1995 the two ERS satellites flew in tandem with a one-day separation for this purpose. A second approach is to use two antennas mounted some distance apart on the same platform, and acquire the images at the same time, which ensures no atmospheric or deformation signals are present. This approach was followed by NASA's SRTM mission aboard the Space Shuttle in 2000. InSAR-derived DEMs can be used for later two-pass deformation studies, or for use in other geophysical applications.
Mapping and classification of active deformation areas
Various procedures have been developed to semi-automatically identify clusters of active persistent scatterers, usually referred to as active deformation areas, and preliminarily associate them with different potential types of deformational processes (e.g., landslides, sinkholes, building settlements, land subsidence) across wide areas.
See also
Coherence (physics)
Optical heterodyne detection
Remote sensing
ROI PAC
References
Further reading
B. Kampes, Radar Interferometry – Persistent Scatterer Technique, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2006.
External links
InSAR, a tool for measuring Earth's surface deformation Matthew E. Pritchard
USGS InSAR factsheet
InSAR Principles, ESA publication, TM19, February 2007.
Geophysical survey
Geodesy
Synthetic aperture radar
Interferometry | Interferometric synthetic-aperture radar | Mathematics | 4,039 |
207,336 | https://en.wikipedia.org/wiki/Salivary%20gland | The salivary glands in many vertebrates including mammals are exocrine glands that produce saliva through a system of ducts. Humans have three paired major salivary glands (parotid, submandibular, and sublingual), as well as hundreds of minor salivary glands. Salivary glands can be classified as serous, mucous, or seromucous (mixed).
In serous secretions, the main type of protein secreted is alpha-amylase, an enzyme that breaks down starch into maltose and glucose, whereas in mucous secretions, the main protein secreted is mucin, which acts as a lubricant.
In humans, 1200 to 1500 ml of saliva are produced every day. The secretion of saliva (salivation) is mediated by parasympathetic stimulation; acetylcholine is the active neurotransmitter and binds to muscarinic receptors in the glands, leading to increased salivation.
A proposed fourth pair of salivary glands, the tubarial glands, were first identified in 2020. They are named for their location, being positioned in front of and over the torus tubarius. However, this finding from one study is yet to be confirmed.
Structure
Parotid glands
The two parotid glands are major salivary glands wrapped around the mandibular ramus in humans. These are the largest of the salivary glands, secreting saliva to facilitate mastication and swallowing, and amylase to begin the digestion of starches. It is the serous type of gland which secretes alpha-amylase (also known as ptyalin). It enters the oral cavity via the parotid duct. The glands are located posterior to the mandibular ramus and anterior to the mastoid process of the temporal bone. They are clinically relevant in dissections of facial nerve branches while exposing the different lobes, since any iatrogenic lesion will result in either loss of action or strength of muscles involved in facial expression. They produce 20% of the total salivary content in the oral cavity. Mumps is a viral infection that commonly affects the parotid glands.
Submandibular glands
The submandibular glands (previously known as submaxillary glands) are a pair of major salivary glands located beneath the lower jaws, superior to the digastric muscles. The secretion produced is a mixture of both serous fluid and mucus, and enters the oral cavity via the submandibular duct or Wharton duct. Around 70% of saliva in the oral cavity is produced by the submandibular glands, though they are much smaller than the parotid glands. This gland can usually be felt via palpation of the neck, as it is in the superficial cervical region and feels like a rounded ball. It is located about two fingers above the Adam's apple (laryngeal prominence) and about two inches apart under the chin.
Sublingual glands
The sublingual glands are a pair of major salivary glands located inferior to the tongue, anterior to the submandibular glands. The secretion produced is mainly mucous in nature, but it is categorized as a mixed gland. Unlike the other two major glands, the ductal system of the sublingual glands does not have intercalated ducts and usually does not have striated ducts, either, so saliva exits directly from 8-20 excretory ducts known as the Rivinus ducts. About 5% of saliva entering the oral cavity comes from these glands.
Tubarial salivary glands
The tubarial glands are suggested as a fourth pair of salivary glands situated posteriorly in the nasopharynx and nasal cavity, consisting predominantly of mucous glands, with ducts opening into the dorsolateral pharyngeal wall. The glands were unknown until September 2020, when they were discovered by a group of Dutch scientists using prostate-specific membrane antigen PET-CT. This discovery may explain mouth dryness after radiotherapy despite the avoidance of the three major glands. However, these findings from just one study need to be confirmed. On the other hand, an interdisciplinary group of scientists disagrees with this new discovery, believing that what has been described is an accumulation of minor salivary glands.
Minor salivary glands
Around 800 to 1,000 minor salivary glands are located throughout the oral cavity within the submucosa of the oral mucosa in the tissue of the buccal, labial, and lingual mucosa, the soft palate, the lateral parts of the hard palate, and the floor of the mouth or between muscle fibers of the tongue. They are 1 to 2 mm in diameter and unlike the major glands, they are not encapsulated by connective tissue, only surrounded by it. The gland has usually a number of acini connected in a tiny lobule. A minor salivary gland may have a common excretory duct with another gland, or may have its own excretory duct. Their secretion is mainly mucous in nature and have many functions such as coating the oral cavity with saliva. Problems with dentures are sometimes associated with minor salivary glands if dry mouth is present. The minor salivary glands are innervated by the facial nerve (cranial nerve CN VII).
Von Ebner's glands
Von Ebner's glands are found in a trough circling the circumvallate papillae on the dorsal surface of the tongue near the terminal sulcus. They secrete a purely serous fluid that begins lipid hydrolysis. They also facilitate the perception of taste through secretion of digestive enzymes and proteins.
The arrangement of these glands around the circumvallate papillae provides a continuous flow of fluid over the great number of taste buds lining the sides of the papillae, and is important for dissolving the food particles to be tasted.
Nerve supply
Salivary glands are innervated, either directly or indirectly, by the parasympathetic and sympathetic arms of the autonomic nervous system. Parasympathetic stimulation evokes a copious flow of saliva.
Parasympathetic innervation to the salivary glands is carried via cranial nerves. The parotid gland receives its parasympathetic input from the glossopharyngeal nerve (CN IX) via the otic ganglion, while the submandibular and sublingual glands receive their parasympathetic input from the facial nerve (CN VII) via the submandibular ganglion. These nerves release acetylcholine and substance P, which activate the IP3 and DAG pathways respectively.
Direct sympathetic innervation of the salivary glands takes place via preganglionic nerves in the thoracic segments T1-T3 which synapse in the superior cervical ganglion with postganglionic neurons that release norepinephrine, which is then received by β1-adrenergic receptors on the acinar and ductal cells of the salivary glands, leading to an increase in cyclic adenosine monophosphate (cAMP) levels and the corresponding increase of saliva secretion. Note that in this regard both parasympathetic and sympathetic stimuli result in an increase in salivary gland secretions; the difference lies in the composition of this saliva, since sympathetic stimulation particularly increases the secretion of amylase, which is produced by serous glands. The sympathetic nervous system also affects salivary gland secretions indirectly by innervating the blood vessels that supply the glands, resulting in vasoconstriction through the activation of α1 adrenergic receptors, lessening the saliva's water content.
Microanatomy
The gland is internally divided into lobules. Blood vessels and nerves enter the glands at the hilum and gradually branch out into the lobules.
Acini
Secretory cells are found in a group, or acinus. Each acinus is located at the terminal part of the gland connected to the ductal system, with many acini within each lobule of the gland. Each acinus consists of a single layer of cuboidal epithelial cells surrounding a lumen, a central opening where the saliva is deposited after being produced by the secretory cells. The three forms of acini are classified in terms of the type of epithelial cell present and the secretory product being produced - serous, mucoserous, and mucous.
Ducts
In the duct system, the lumina are formed by intercalated ducts, which in turn join to form striated ducts. These drain into ducts situated between the lobes of the gland (called interlobular ducts or secretory ducts). These are found on most major and minor glands (exception may be the sublingual gland).
All of the human salivary glands terminate in the mouth, where the saliva proceeds to aid in digestion. The released saliva is quickly inactivated in the stomach by the acid that is present, but saliva also contains enzymes that are actually activated by stomach acid.
Gene and protein expression
About 20,000 protein-coding genes are expressed in human cells and 60% of these genes are expressed in normal, adult salivary glands. Fewer than 100 genes are more specifically expressed in the salivary gland. The salivary gland-specific genes are mainly genes that encode secreted proteins, and compared to other organs in the human body, the salivary gland has the highest fraction of secreted genes. The heterogeneous family of proline-rich, human salivary glycoproteins, such as PRB1 and PRH1, are salivary gland-specific proteins with the highest level of expression. Examples of other specifically expressed proteins include the digestive amylase enzyme AMY1A, the mucin MUC7 and statherin, all of major importance for specific characteristics of saliva.
Aging
Aging of salivary glands shows some structural changes, such as:
Decrease in volume of acinar tissue
Increase in fibrous tissue
Increase in adipose tissue
Ductal hyperplasia and dilation
In addition, changes occur in salivary contents:
Decrease in concentration of secretory IgA
Decrease in the amount of mucin
However, no overall change in the amount of saliva secreted is seen.
Function
Salivary glands secrete saliva, which has many benefits for the oral cavity and health in general. The knowledge of normal salivary flow rate (SFR) is extremely important when treating dental patients. These benefits include:
Protection: Saliva consists of proteins (for example; mucins) that lubricate and protect both the soft and hard tissues of the oral cavity. Mucins are the principal organic constituents of mucus, the slimy viscoelastic material that coats all mucosal surfaces.
Buffering: In general, the higher the saliva flow rate, the faster the clearance and the higher the buffer capacity, hence better protection from dental caries. Therefore, people with a slower rate of saliva secretion, combined with a low buffer capacity, have lessened salivary protection against microbes.
Pellicle formation: Saliva forms a pellicle on the surface of the tooth to prevent wearing. The film contains mucins and proline-rich glycoprotein from the saliva.
The proteins (statherin and proline-rich proteins) within the salivary pellicle inhibit demineralization and promote remineralization by attracting calcium ions.
Maintenance of tooth integrity: Demineralization occurs when enamel disintegrates due to the presence of acid. When this occurs, the buffering capacity effect of saliva (increases saliva flow rate) inhibits demineralization. Saliva can then begin to promote the remineralization of the tooth by strengthening the enamel with calcium and phosphate minerals.
Antimicrobial action: Saliva can prevent microbial growth based on the elements it contains. For example, lactoferrin in saliva binds naturally with iron. Since iron is a major component of bacterial cell walls, removal of iron breaks down the cell wall, which in turn breaks down the bacterium. Antimicrobial peptides such as histatins inhibit the growth of Candida albicans and Streptococcus mutans. Salivary immunoglobulin A serves to aggregate oral bacteria such as S. mutans and prevent the formation of dental plaque.
Tissue repair: Saliva can encourage soft-tissue repair by decreasing clotting time and increasing wound contraction.
Digestion: Saliva contains amylase, which hydrolyses starch into glucose, maltose, and dextrin. As a result, saliva allows some digestion to occur before the food reaches the stomach.
Taste: Saliva acts as a solvent in which solid particles can dissolve and enter the taste buds through oral mucosa located on the tongue. These taste buds are found within foliate and circumvallate papillae, where minor salivary glands secrete saliva.
Clinical significance
A sialolithiasis (a salivary calculus or stone) may cause blockage of the ducts, most commonly the submandibular ducts, causing pain and swelling of the gland.
Salivary gland dysfunction refers to either xerostomia (the symptom of dry mouth) or salivary gland hypofunction (reduced production of saliva); it is associated with significant impairment of quality of life. Following radiotherapy of the head and neck region, salivary gland dysfunction is a predictable side-effect. Saliva production may be pharmacologically stimulated by sialagogues such as pilocarpine and cevimeline. It can also be suppressed by so-called antisialagogues such as tricyclic antidepressants, SSRIs and antihypertensives, as well as by polypharmacy. A Cochrane review found there was no strong evidence that topical therapies are effective in relieving the symptoms of dry mouth.
Cancer treatments including chemotherapy and radiation therapy may impair salivary flow. Radiotherapy can cause permanent hyposalivation due to injury to the oral mucosa containing the salivary glands, resulting in xerostomia, whereas chemotherapy may cause only temporary salivary impairment. Furthermore, surgical removal because of benign or malignant lesions may also impair function.
Graft versus host disease after allogeneic bone marrow transplantation may manifest as dry mouth and many small mucoceles. Salivary gland tumours may occur, including mucoepidermoid carcinoma, a malignant growth.
Clinical tests/investigations
A sialogram is a radiocontrast study of a salivary duct that may be used to investigate its function and for diagnosing Sjögren syndrome.
Other animals
The salivary glands of some species are modified to produce proteins; salivary amylase is found in many bird and mammal species (including humans, as noted above). Furthermore, the venom glands of venomous snakes, Gila monsters, and some shrews, are actually modified salivary glands. In other organisms such as insects, salivary glands are often used to produce biologically important proteins such as silk or glues, whilst fly salivary glands contain polytene chromosomes that have been useful in genetic research.
See also
Serous demilune
Sialome
References
External links
Salivary gland at the Human Protein Atlas
Illustration at merck.com
Illustration at .washington.edu
Medical Encyclopedia Medline Plus: Salivary gland
Parotid tumor patient videos
Photo of Female with massive right parotid mass
Glands of mouth
Exocrine system
Saliva
Arthropod glands | Salivary gland | Biology | 3,235 |
11,781,050 | https://en.wikipedia.org/wiki/ITerating | ITerating was a Wiki-based software guide, where users could find, compare and give reviews to software products. As of January 2021, the domain was listed as being for sale and the website was no longer online. Founded in October 2005, and based in New York, ITerating was created by CEO Nicolas Vandenberghe, who saw that there was an industry need for a comprehensive resource to help evaluate software solutions.
The site aimed to be a reference guide for the IT industry and included reviews, ratings, articles, and detailed product feature comparisons. ITerating used Semantic Web tools (including RDF - Resource Description Framework) to combine user edits with Web service feeds from other sites.
Designed for use by developers and industry consultants, ITerating allowed users to contribute to categories such as Software Engineering Tools; Website Design & Tools; Website Software Tools; Website & Communication Applications & Social Networking; or to create their own category if it did not exist yet.
Wiki Matrix
ITerating announced the addition of a Feature Matrix in June 2007, which allowed users to dynamically create customized, side-by-side feature comparisons of software solutions.
References
Online databases
Computing websites
Software companies based in New York (state)
Defunct software companies of the United States | ITerating | Technology | 256 |
50,954,100 | https://en.wikipedia.org/wiki/The%20No%20Nonsense%20Guide%20To%20Science | The No Nonsense Guide to Science is a 2006 book on Post-normal science (PNS). It was written by the American-born British historian and philosopher of science Jerome Ravetz.
Main
What should a young person do who aspires to make the world a better place and to make their way in science?
This is how this work's ambition was summarized. Written in 2006 by one of the founding fathers of Post-normal Science - the other being Silvio Funtowicz - its 142 pages cover several themes, in part synthesizing previous works such as Scientific Knowledge and Its Social Problems, The Merger of Knowledge with Power, and Uncertainty and Quality in Science for Policy (with Funtowicz), and introduces the ideas of Post-normal Science. Topics include:
The problem of science being at once 'little' and big or 'mega', embedded in institutions and corporations
The fallibility of science, against a possibly 'dogmatic' teaching of the power of science
The democratization of science as a necessary and realistic antidote to its hubris
The opportunity of forming extended peer communities - inclusive of whistleblowers and investigative journalists as well as academics and interested stakeholders - when science is called to answer conflicted policy questions.
The relationship between science and society
The book makes themes that are well known to philosophers and sociologists of science accessible to a larger, less specialized audience, including young scientists. The foreword was written by biochemist Tom Blundell, who approves of Ravetz' "direct and provocative" approach to describing science, inclusive of its self-destructive tendencies as well as of its hopes and promises.
Reception
No Nonsense Guide to Science was translated and published in Japan in 2012. Ravetz's work has found use for teaching philosophy and ethics of science, e.g. at the University of Copenhagen. The volume may help to develop the competencies that scientists need to perform ethically in postnormal research, by developing the ability to identify issues that fit postnormal settings where "facts are uncertain, values in dispute, stakes high and decisions urgent".
References
Scientific method
Philosophy of science
Science and technology studies | The No Nonsense Guide To Science | Technology | 429 |
53,751,928 | https://en.wikipedia.org/wiki/IEEE%20Journal%20of%20Oceanic%20Engineering | The IEEE Journal of Oceanic Engineering is a journal published by the Institute of Electrical and Electronics Engineers. The journal's editor in chief is Associate Professor Mandar Chitre, of the National University of Singapore.
According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.2.
References
External links
Engineering journals
Oceanic Engineering, IEEE Journal of
Marine engineering | IEEE Journal of Oceanic Engineering | Engineering | 76 |
23,094,690 | https://en.wikipedia.org/wiki/Static%20core | In integrated circuit design, static core generally refers to a microprocessor (MPU) entirely implemented in static logic. A static core MPU may be halted by stopping the system clock oscillator that is driving it, maintaining its state; it resumes processing at the point where it was stopped when the clock signal is restarted, as long as power continues to be applied. Static core MPUs are fabricated in the CMOS process and hence consume very little power when the clock is stopped, making them useful in designs in which the MPU remains in standby mode until needed and minimal loading of the power source (often a battery) is desirable during standby.
In comparison, dynamic core microprocessor designs, those without a static core, only refresh and present valid outputs on their pins during specific periods of the clock cycle. If the clock is slowed or stopped, the charge that stores each signal leaks out of the internal capacitances over time, so the state quickly decays to a default value and is no longer valid. Dynamic designs have to run within a set range of clock frequencies to avoid this problem.
Static core microprocessors include the RCA 1802, Intel 80386EX, WDC W65C02S, WDC W65C816S and Freescale 683XX family.
Many low-power electronics systems are designed as fully static systems—such as, for example, the Psion Organiser, the TRS-80 Model 100, and the Galileo spacecraft. In such a fully static system, the processor has a static core and data is stored in static RAM, rather than dynamic RAM. Such design features allow the entire system to be "paused" indefinitely in a low power state, and then instantly resumed when needed.
References
See also
Asynchronous circuit
Dynamic logic (digital logic)#Static versus dynamic logic
Central processing unit
Clock signal | Static core | Technology | 383 |
9,079,529 | https://en.wikipedia.org/wiki/Airshaft | In manufacturing, an airshaft is a device used for handling winding reels in the processing of web-fed materials, such as continuous-process printing presses.
Airshafts—also called Air Expanding shafts—are used in manufacturing processes for fitting into a core onto which materials such as paper, card and plastic film are wound. An airshaft is designed so that, on fitting into a core, it can be readily expanded, thereby achieving a quick and firm attachment; it may also be easily deflated to facilitate easy withdrawal of the shaft after winding of the product is complete. Their efficient design makes them ideal for mounting onto bearing housings to enable the winding or unwinding of rolls of stock material with the minimum of equipment down time. The advantage of using an airshaft is its ability to grip the core, without damage, whilst providing a positive interface to control the web via motors and brakes. Airshafts are available as either lug type (with the bladder down the centre) or strip type (bladders on the periphery of the shaft).
Air shafts are used on many converting machines. An example of one of these machines is a slitting machine or slitter rewinder which is used to cut or slit large rolls of material into smaller rollers.
An air shaft is a machine part, or shaft, that grips and tightens the core or roll when it is inflated with air.
Air shafts are of two types:
Lug-type shafts, which contain an inflatable rubber tube (bladder) inside the shaft body.
Multi-tube shafts, which carry multiple bladders on the outside of the shaft body.
In a lug-type air shaft, the shaft contains an air bladder inside its body. It is manufactured from an aluminium or iron outer pipe with U-shaped slots into which lugs are fitted manually. An inflatable bladder is then placed inside the pipe beneath the lugs and connected to a brass air valve. When air is supplied through the valve, the bladder inflates, the lugs protrude from the shaft body (pipe), and they grip the core in which the shaft is placed.
In multi-tube air shafts, small flat tubes are placed on the outside of the shaft body; when air is supplied through a brass air valve, these tubes take on a round shape, protrude from the body, and grip the core.
Air shafts and multi-tube shafts are now widely used in industries that handle cores or rolls of any kind; their main applications are in the printing and packaging industries and the textile industry.
References
Mechanical engineering
Printing terminology | Airshaft | Physics,Engineering | 498 |
52,311,684 | https://en.wikipedia.org/wiki/Weighted%20planar%20stochastic%20lattice | Physicists often apply their favorite models to various lattices; perhaps the most popular is the square lattice. There are 14 Bravais lattices in which every cell has exactly the same number of nearest, next-nearest, next-next-nearest, etc. neighbors, and hence they are called regular lattices. Physicists and mathematicians, however, often study phenomena that require a disordered lattice, in which cells do not all have exactly the same number of neighbors; rather, the number of neighbors can vary wildly. For instance, if one wants to study the spread of disease, viruses, rumors, etc., then the last thing one would look for is the square lattice. In such cases a disordered lattice is necessary. One way of constructing a disordered lattice is the following.
Starting with a square, say of unit area, and at each step dividing only one block, picked preferentially with respect to its area, randomly into four smaller blocks creates the weighted planar stochastic lattice (WPSL). Essentially it is a disordered planar lattice, as its block sizes and their coordination numbers are random.
Description
In applied mathematics, a weighted planar stochastic lattice (WPSL) is a structure that has properties in common with those of lattices and those of graphs. In general, space-filling planar cellular structures can be useful in a wide variety of seemingly disparate physical and biological systems.
Examples include grain in polycrystalline structures, cell texture and tissues in biology, acicular texture in martensite growth, tessellated pavement on ocean shores, soap froths
and agricultural land division according to ownership etc. The question of how these structures appear and the understanding of their topological and geometrical properties have always been an interesting proposition among scientists in general and physicists in particular.
Several models prescribe how to generate cellular structures. Often these
structures can mimic directly the structures found in nature and they are able to capture the essential properties that we find in natural structures.
In general, cellular structures appear through random tessellation, tiling, or subdivision of a plane into contiguous and non-overlapping cells. For instance, Voronoi diagram and Apollonian packing are formed by partitioning or tiling of a plane into contiguous and non-overlapping convex polygons and disks respectively.
Regular planar lattices like square lattices, triangular lattices, honeycomb lattices, etc., are the simplest example of the cellular structure in which every cell has exactly the same size and the same coordination number. The planar Voronoi diagram, on the other hand, has neither a fixed cell size nor a fixed coordination number. Its coordination number distribution is rather Poissonian in nature. That is, the distribution is peaked about the mean, and it is almost impossible to find cells whose coordination number is significantly higher or lower than the mean. Recently, Hassan et al. proposed a lattice, namely the weighted planar stochastic lattice. For instance, unlike a network or a graph, it has properties of lattices, as its sites are spatially embedded. On the other hand, unlike lattices, its dual (obtained by considering the center of each block of the lattice as a node and the common border between blocks as links) displays the property of networks, as its degree distribution follows a power law. Besides, unlike regular lattices, the sizes of its cells are not equal; rather, the distribution of the area size of its blocks obeys dynamic scaling, and its coordination number distribution follows a power law.
Construction of WPSLs
The construction process of the WPSL can be described as follows. It starts with a square of unit area which we regard as an initiator. The generator then divides the initiator, in the first step, randomly with uniform probability into four smaller blocks. In the second step and thereafter, the generator is applied to only one of the blocks. The question is: How do we pick that block when there is more than one block? The most generic choice would be to pick preferentially according to their areas so that the higher the area, the higher the probability to be picked. For instance, in step one, the generator divides the initiator randomly into four smaller blocks. Let us label their areas starting from the top left corner and moving clockwise as $a_1$, $a_2$, $a_3$ and $a_4$. But of course the way we label is totally arbitrary and will bear no consequence to the final results of any observable quantities. Note that $a_i$ is the area of the $i$th block, which can be well regarded as the probability of picking the $i$th block. These probabilities are naturally normalized since we choose the area of the initiator equal to one. In step two, we pick one of the four blocks preferentially with respect to their areas. Consider that we pick the block $a_3$ and apply the generator onto it to divide it randomly into four smaller blocks. Thus the label $a_3$ is now redundant and hence we recycle it to label the top left corner, while the rest of the three new blocks are labelled $a_5$, $a_6$ and $a_7$ in a clockwise fashion. In general, in the $j$th step, we pick one out of $3(j-1)+1$ blocks preferentially with respect to area and divide it randomly into four blocks. The detailed algorithm can be found in Dayeen and Hassan and in Hassan, Hassan, and Pavel.
This process of lattice generation can also be described as follows. Consider that the substrate is a square of unit area and at each time step a seed is nucleated, from which two orthogonal partitioning lines parallel to the sides of the substrate are grown until intercepted by existing lines. It results in partitioning the square into ever smaller mutually exclusive rectangular blocks. Note that the higher the area of a block, the higher is the probability that the seed will be nucleated in it to divide that block into four smaller blocks, since seeds are sown at random on the substrate. The process can also describe the kinetics of fragmentation of two-dimensional objects.
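The construction is straightforward to simulate. Below is a minimal, illustrative sketch in Python; the function name, data layout, and parameters are choices made here and are not taken from the cited papers. It stores blocks as axis-aligned rectangles, picks a block with probability proportional to its area, and splits it at a uniformly chosen interior point:

```python
# Illustrative sketch only: names and data layout are assumptions, not from the cited papers.
import random

def generate_wpsl(n_steps, seed=None):
    """Grow a WPSL-like configuration by n_steps area-weighted subdivisions."""
    rng = random.Random(seed)
    blocks = [(0.0, 0.0, 1.0, 1.0)]          # (x, y, width, height); the unit-square initiator
    for _ in range(n_steps):
        areas = [w * h for (_, _, w, h) in blocks]
        idx = rng.choices(range(len(blocks)), weights=areas, k=1)[0]  # pick preferentially by area
        x, y, w, h = blocks.pop(idx)
        px, py = rng.uniform(0.0, w), rng.uniform(0.0, h)             # random seed point inside the block
        blocks += [
            (x,      y,      px,     py),      # lower-left
            (x + px, y,      w - px, py),      # lower-right
            (x,      y + py, px,     h - py),  # upper-left
            (x + px, y + py, w - px, h - py),  # upper-right
        ]
    return blocks                              # after j steps there are 3*j + 1 blocks

blocks = generate_wpsl(1000, seed=42)
assert len(blocks) == 3 * 1000 + 1
```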
Properties of WPSLs
The dynamics of the growth of this lattice is governed by infinitely many conservation laws, one of which is the trivial conservation of total area.
Each of the non-trivial conservation laws can be used as a multifractal measure and hence it is also a multi-multifractal (multifractal system).
The area size distribution function of its blocks obeys dynamic scaling.
It can be mapped as a network if we consider the center of each block as a node and the common border between blocks as the link between the corresponding nodes (a rough code sketch of this mapping is given at the end of this section). The degree distribution of the resulting network exhibits a power law (a scale-free network). In 1999 Barabasi and Albert argued that growth and the preferential attachment (PA) rule are the two basic ingredients behind power-law degree distributions. In the case of the WPSL, the presence of one of the ingredients is obvious; what about the PA rule? A closer look at the growth process of the WPSL suggests that a block gains a new neighbor only if one of its neighbors is picked and divided. Thus the higher the number of neighbors a block has, the higher the chance that it will gain more neighbors. In fact, the mediation-driven attachment model embodies exactly this idea; in this model, too, the PA rule is present, but in disguise.
It exhibits multiscaling.
Before 2000, epidemic models, for instance, were studied by applying them to regular lattices such as the square lattice, assuming that everyone can infect everyone else in the same way. The emergence of a network-based framework has brought a fundamental change, offering a far better pragmatic skeleton than before. Today epidemic modelling is one of the most active applications of network science, being used to foresee the spread of influenza or to contain Ebola. The WPSL can be a good candidate for applying epidemic-like models since it has the properties of a graph or network as well as the properties of a traditional lattice.
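As a rough illustration of the dual-network mapping referred to above, the following sketch (again illustrative only; the helper names are assumptions) treats each block as a node and links two blocks whenever they share part of a border. Feeding it the block list produced by the generate_wpsl sketch above gives a degree count per block, whose histogram can be compared against the reported power-law degree distribution:

```python
from collections import Counter

def touches(b1, b2, tol=1e-9):
    """True if two axis-aligned blocks (x, y, w, h) share part of an edge (tol absorbs float error)."""
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    shares_vertical_edge = (abs((x1 + w1) - x2) < tol or abs((x2 + w2) - x1) < tol) \
        and min(y1 + h1, y2 + h2) - max(y1, y2) > tol
    shares_horizontal_edge = (abs((y1 + h1) - y2) < tol or abs((y2 + h2) - y1) < tol) \
        and min(x1 + w1, x2 + w2) - max(x1, x2) > tol
    return shares_vertical_edge or shares_horizontal_edge

def degree_distribution(blocks):
    """Count each block's neighbours and histogram the degrees (O(n^2), kept simple for clarity)."""
    degrees = [sum(touches(b, other) for other in blocks if other is not b) for b in blocks]
    return Counter(degrees)   # degree k -> number of blocks with degree k

# e.g. dist = degree_distribution(generate_wpsl(1000, seed=42))   # using the sketch above
```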
References
Applied mathematics
Stochastic models
Network theory
Graph theory | Weighted planar stochastic lattice | Mathematics | 1,607 |
250,302 | https://en.wikipedia.org/wiki/Waiting%20room | A waiting room or waiting hall is a building, or more commonly a part of a building or a room, where people sit or stand until the event or appointment for which they are waiting begins.
There are two types of physical waiting room. One has individuals leave for appointments one at a time or in small groups, for instance at a doctor's office, a hospital triage area, or outside a school headmaster's office. The other has people leave en masse such as those at railway stations, bus stations, and airports. Both examples also highlight the difference between waiting rooms in which one is asked to wait (private waiting rooms) and waiting rooms in which one can enter at will (public waiting rooms).
There are also digital waiting rooms that operate within on-line video conferencing applications such as Zoom developed by Zoom Video Communications. This is a virtual waiting room where participants can be held until such time as the host allows them to enter the meeting.
Order in private rooms
People in private waiting rooms are queued up based on various methods in different types of waiting rooms. In hospital emergency department waiting areas, patients are triaged by a nurse, and they are seen by the doctor depending on the severity of their medical condition. In a doctor's or dentist's waiting room, patients are generally seen in the order in which their appointments are for, with the exception of emergency cases, which get seen immediately upon their arrival. In Canada, where there is publicly-provided health care, controversy has arisen when some important people or celebrities have jumped the line (which is supposed to be based on the appointment order or by severity of condition). In some government offices, such as motor vehicle registration offices or social assistance services, there is a "first-come, first-served" approach in which clients take a number when they arrive. The clients are then seen in the order of their number. In the 2010s, some government offices have a triage-based variant of the first-come, first-served approach, in which some clients are seen by the civil servants faster than others, depending on the nature of their service request and/or the availability of civil servants. This approach can lead to frustration for clients who are waiting, because one client who has been waiting for 30 minutes may see another client come in, take a number, and then be seen within five minutes.
In car repair businesses, clients typically wait until their vehicle is repaired; the service manager can only give an estimate of the approximate waiting time. Clients waiting in the entrance or waiting area of a restaurant for a table normally are seated based on whether they have reservations, or for those without reservations, on a first-come, first-served approach; however, important customers or celebrities may be put to the front of the line. In restaurants, customers may also be able to jump the line by giving a large gratuity or bribe to the maitre d'hotel or head waiter. Some restaurants which are co-located with or combined with a retail store or gift shop ask customers who are waiting for a table to browse in the merchandise section until their table's availability is announced on a PA system or via a pager; this strategy can lead to increased purchases in the retail part of the establishment. One combination restaurant/store is the US Cracker Barrel chain. Some restaurants ask customers who are waiting for a table to sit in the restaurant's bar or its licensed lounge area; this approach may lead to increased sales of alcoholic beverages.
Waiting rooms may be staffed or unstaffed. In waiting rooms that are staffed, a receptionist or administrative staffer sits behind a desk or counter to greet customers/clients, give them information about the expected waiting period, and answer any questions about their appointment time or the appointment process. In doctors' or dentists' waiting rooms, the patients may be able to make additional appointments, pay for appointments, or deal with other administrative tasks with the receptionist or administrator. In police stations, check cashing stores, and some government waiting rooms, the receptionist or administrator is behind a plexiglass barrier, with either small holes to permit communication, or, in higher-security settings, a microphone and speaker. In reception areas with a plexiglass barrier, there may be a heavy-duty drawer to enable the client to provide money or papers to the receptionist and for the receptionist to provide documents to the client. The plexiglass barrier and the drawer system help to protect the receptionists from aggressive or potentially violent clients.
Amenities
Most waiting rooms have seating. Some have adjacent toilets. It is not uncommon to find vending machines in public waiting rooms or newspapers and magazines in private waiting rooms. Also common in waiting rooms in the United States or in airports are public drinking fountains. Some waiting rooms have television access or music. The increasing prevalence of mobile devices has led to many waiting rooms providing electric outlets and free Wi-Fi Internet connections, though cybersecurity is a concern as unsecured connections may be vulnerable to attack, tampering, or even simply by piggybacking users who are within range but not waiting. Sometimes found in airports and railway stations are special waiting rooms, often called "lounges", for those who have paid more. These will generally be less crowded and will have superior seating and better facilities. Waiting rooms for high-end services may provide complimentary drinks and snacks.
In other media
In fiction
The films Brief Encounter and The Terminal use waiting rooms as sets for a large part of their duration. They are used elsewhere in the arts to symbolize waiting in the general sense, to symbolize transitions in life and for scenes depicting emptiness, insignificance or sadness. In the play No Exit, by French existentialist philosopher Jean-Paul Sartre, several strangers find themselves waiting in a mysterious room, where they each wonder why; finally, they each realize that they are in Hell, and that their punishment is being forced to be with each other ("L'enfer, c'est les autres", which translates as "Hell is other people").
In the 2010 Bollywood film The Waiting Room, directed by Maneej Premnath and produced by Sunil Doshi, four passengers waiting in a remote South Indian railway station are stranded there on a rainy night. A serial killer is on the prowl, targeting the passengers of the waiting room, creating intense fear among them.
In video games
The term "waiting room" also extends to the realm of video games as a similar virtual waiting area where players for an online multiplayer game are placed while waiting for all remaining players for a game session to be present. A virtual waiting room may be a mere, static loading screen (such as the waiting screens in the mobile game Star Wars: Force Arena), or a playable environment in and of itself where readied players can practice their skills to pass the time needed for all players to come onboard to begin the session, such as a dedicated "waiting room" arena in Super Smash Bros. Brawl and its sequels, where players can practice their fighting moves with their chosen character while waiting for other players to arrive.
See also
Airport lounge
Waiting in healthcare
References
Rooms
Time management | Waiting room | Physics,Engineering | 1,466 |
10,218,909 | https://en.wikipedia.org/wiki/Bootstrapping%20%28finance%29 | In finance, bootstrapping is a method for constructing a (zero-coupon) fixed-income yield curve from the prices of a set of coupon-bearing products, e.g. bonds and swaps.
A bootstrapped curve, correspondingly, is one where the prices of the instruments used as an input to the curve, will be an exact output, when these same instruments are valued using this curve.
Here, the term structure of spot returns is recovered from the bond yields by solving for them recursively, by forward substitution: this iterative process is called the bootstrap method.
The usefulness of bootstrapping is that using only a few carefully selected zero-coupon products, it becomes possible to derive par swap rates (forward and spot) for all maturities given the solved curve.
Methodology
As stated above, the selection of the input securities is important, given that there is a general lack of data points in a yield curve (there are only a fixed number of products in the market). More importantly, because the input securities have varying coupon frequencies, the selection of the input securities is critical. It makes sense to construct a curve of zero-coupon instruments from which one can price any yield, whether forward or spot, without the need of more external information.
Note that certain assumptions (e.g. the interpolation method) will always be required.
General methodology
The general methodology is as follows: (1) Define the set of yielding products - these will generally be coupon-bearing bonds; (2) Derive discount factors for the corresponding terms - these are the internal rates of return of the bonds; (3) 'Bootstrap' the zero-coupon curve, successively calibrating this curve such that it returns the prices of the inputs. A generically stated algorithm for the third step is as follows; for more detail see .
For each input instrument, proceeding through these in terms of increasing maturity:
solve analytically for the zero-rate where this is possible (see side-bar example)
if not, iteratively solve (initially using an approximation) such that the price of the instrument in question is output exactly when calculated using the curve (note that the rate corresponding to this instrument's maturity is solved; rates between this date and the previously solved instrument's maturity are interpolated)
once solved, save these rates, and proceed to the next instrument; a code sketch of this loop follows the list.
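As an illustration of this loop, here is a minimal sketch that bootstraps continuously compounded zero rates from annual-coupon par bonds, using linear interpolation between already-solved pillars and a one-dimensional root finder for each new rate. The class and function names are illustrative and not from any particular library, and the simplifications (annual coupons, instruments priced exactly at par) are assumptions of the sketch:

```python
# Hedged sketch of the generic calibration loop; names and conventions are assumptions.
from dataclasses import dataclass
import numpy as np
from scipy.optimize import brentq

@dataclass
class ParBond:
    maturity: float      # in whole years, e.g. 1.0, 2.0, ...
    coupon: float        # annual coupon rate, e.g. 0.05 for 5%
    price: float = 1.0   # par instruments are priced at 100% of notional

def df(t, pillars, zeros):
    """Discount factor from linearly interpolated, continuously compounded zero rates."""
    r = np.interp(t, pillars, zeros)
    return np.exp(-r * t)

def pv(bond, pillars, zeros):
    """Present value of an annual-coupon bond under the trial curve."""
    times = np.arange(1.0, bond.maturity + 0.5)                      # coupon dates 1..maturity
    coupons = sum(bond.coupon * df(t, pillars, zeros) for t in times)
    return coupons + df(bond.maturity, pillars, zeros)               # plus principal at maturity

def bootstrap(bonds):
    pillars, zeros = [], []
    for bond in sorted(bonds, key=lambda b: b.maturity):             # increasing maturity
        def mispricing(r):
            return pv(bond, pillars + [bond.maturity], zeros + [r]) - bond.price
        zeros.append(brentq(mispricing, -0.1, 1.0))                  # solve so the bond reprices exactly
        pillars.append(bond.maturity)
    return pillars, zeros

# e.g. par yields of 2.0%, 2.5% and 3.0% at 1, 2 and 3 years:
pillars, zeros = bootstrap([ParBond(1.0, 0.020), ParBond(2.0, 0.025), ParBond(3.0, 0.030)])
```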
When solved as described here, the curve will be arbitrage free in the sense that it is exactly consistent with the selected prices; see and . Note that some analysts will instead construct the curve such that it results in a best-fit "through" the input prices, as opposed to an exact match, using a method such as Nelson-Siegel.
Regardless of approach, however, there is a requirement that the curve be arbitrage-free in a second sense: that all forward rates are positive. More sophisticated methods for the curve construction — whether targeting an exact- or a best-fit — will additionally target curve "smoothness" as an output,
and the choice of interpolation method here, for rates not directly specified, will then be important.
Forward substitution
A more detailed description of the forward substitution is as follows. For each stage of the iterative process, we are interested in deriving the n-year zero-coupon bond yield, also known as the internal rate of return of the zero-coupon bond. As there are no intermediate payments on this bond, (all the interest and principal is realized at the end of n years) it is sometimes called the n-year spot rate. To derive this rate we observe that the theoretical price of a bond can be calculated as the present value of the cash flows to be received in the future. In the case of swap rates, we want the par bond rate (Swaps are priced at par when created) and therefore we require that the present value of the future cash flows and principal be equal to 100%.
$1 = C_n \cdot \Delta_1 \cdot df_1 + C_n \cdot \Delta_2 \cdot df_2 + \cdots + (1 + C_n \cdot \Delta_n) \cdot df_n$

therefore

$df_n = \frac{1 - C_n \cdot \sum_{i=1}^{n-1} \Delta_i \cdot df_i}{1 + C_n \cdot \Delta_n}$

(this formula is precisely forward substitution)

where
$C_n$ is the coupon rate of the n-year bond
$\Delta_i$ is the length, or day count fraction, of the period $[i-1;\, i]$, in years
$df_i$ is the discount factor for that time period
$df_n$ is the discount factor for the entire period, from which we derive the zero-rate.
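Under the simplifying assumption that all periods are annual and all day count fractions equal one, the forward substitution above reduces to a few lines; the following sketch is illustrative only:

```python
def bootstrap_discount_factors(par_rates):
    """par_rates[n-1] is the n-year par coupon rate C_n; assumes annual periods (Delta_i = 1)."""
    dfs = []
    for c_n in par_rates:
        # 1 = C_n * (df_1 + ... + df_{n-1}) + (1 + C_n) * df_n   =>   solve for df_n
        df_n = (1.0 - c_n * sum(dfs)) / (1.0 + c_n)
        dfs.append(df_n)
    return dfs

# e.g. 1-, 2- and 3-year par rates of 2.0%, 2.5% and 3.0%:
dfs = bootstrap_discount_factors([0.020, 0.025, 0.030])
zero_rates = [df ** (-1.0 / n) - 1.0 for n, df in enumerate(dfs, start=1)]  # annually compounded zero rates
```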
Recent practice
After the financial crisis of 2007–2008, swap valuation is typically performed under a "multi-curve and collateral" framework; the above, by contrast, describes the "self-discounting" approach.
Under the new framework, when valuing a Libor-based swap:
(i) the forecasted cashflows are derived from the Libor-curve,
(ii) however, these cashflows are discounted at the OIS-based curve's overnight rate, as opposed to at Libor.
The result is that, in practice, curves are built as a "set" and not individually, where, correspondingly:
(i) "forecast curves" are constructed for each floating-leg Libor tenor;
and (ii) discounting is on a single, common OIS curve which must simultaneously be constructed.
The reason for the change is that, post-crisis, the overnight rate is the rate paid on the collateral (variation margin) posted by counterparties on most CSAs. The forward values of the overnight rate can be read from the overnight index swap curve. "OIS-discounting" is now standard, and is sometimes referred to as "CSA-discounting".
See: for context; for the math.
See also
Multi-curve framework
- discussing short rate "trees" constructed using an analogous approach.
Corporate finance usage:
Leveraged buyout
References
References
Standard texts
External links
Excel Bootstrapper, janroman.dhis.org
Bootstrapping Step-By-Step, bus.umich.edu
Financial economics
Mathematical finance
Fixed income analysis
Interest rates
Bonds (finance)
Swaps (finance)
Financial models | Bootstrapping (finance) | Mathematics | 1,215 |
77,387,405 | https://en.wikipedia.org/wiki/CTB%201 | CTB 1, also known as G116.9+00.1 and AJG 110, nicknamed the Medulla Nebula, is a supernova remnant located in the constellation Cassiopeia. It was discovered as a radio source in 1960 in a study of galactic radiation carried out at a frequency of 960 MHz.
Morphology
CTB 1 is an oxygen-rich supernova remnant of mixed morphology, that is, in the radio band it is similar to a hollow shell while in X-rays its structure is compact and centralized. Thus, it shows a complete envelope in both the visible spectrum and the radio band. The radio emission is brightest along the western edge, with a prominent gap existing along the northern and northeastern sectors. The uniform envelope—in both wavelength ranges—indicates that the shock wave extends in a relatively homogeneous interstellar medium.
Infrared emission has also been detected at 60 μm and 100 μm from CTB 1; an arc of emission at these wavelengths is coincident with the shell observed at radio frequencies.
The X-ray emission from this supernova remnant – which has a thermal origin – comes from inside the shell, observed in the visible and radio spectrum. Notably, the X-ray emission also extends across the remnant's northern gap. The abundance of neon has been determined to be very uniform, while iron is more abundant towards the southwest of the remnant, suggesting that the distribution of ejecta is asymmetric. CTB 1 is a supernova remnant rich in oxygen and neon, which is surprising for an evolved remnant; the determined abundances are consistent with the explosion of a stellar progenitor with a mass of 13 - 15 solar masses or even greater.
Stellar remnant
The pulsar PSR J0002+6216 has been proposed to be the stellar remnant of the supernova that caused the formation of CTB 1. Its proper motion is of the correct magnitude and direction to support the relationship between the two objects. Likewise, the direction and morphology of the plerion tail suggests a physical connection between PSR J0002+6216 and CTB 1. The pulsar is moving at high speed (more than 1000 km/s), which may be the result of the primary explosion.
Age and distance
The estimated age of CTB 1 is 10,000 years, although the uncertainty of this value can be as high as 20%. Other studies give it a greater age, around 16,700 years. On the other hand, there is also no consensus regarding the distance at which this supernova remnant is located. Various publications place it at a distance between 2,000 and 3,100 parsecs, while for others it is at 4,300 ± 200 parsecs. If this last value is correct, CTB 1 would be located in the Perseus arm and not in the Local arm. CTB 1 has a radius of approximately 15 parsecs.
See also
List of supernova remnants
Cassiopeia A
References
Supernova remnants
Cassiopeia (constellation) | CTB 1 | Astronomy | 619 |
71,214,116 | https://en.wikipedia.org/wiki/Tramadol/paracetamol | Tramadol/paracetamol, also known as tramadol/acetaminophen and sold under the brand name Ultracet among others, is a fixed-dose combination medication used for the treatment of moderate to severe pain. It contains tramadol (as the hydrochloride), an opioid analgesic, and paracetamol, an analgesic. It is taken by mouth.
References
Combination analgesics | Tramadol/paracetamol | Chemistry | 91 |
11,879,346 | https://en.wikipedia.org/wiki/Spirit%20world%20%28Spiritualism%29 | The spirit world, according to spiritualism, is the world or realm inhabited by spirits, both good and evil, of various spiritual manifestations. This spirit world is regarded as an external environment for spirits. The Spiritualism religious movement in the nineteenth century espoused a belief in an afterlife in which an individual's awareness persists beyond death. Although independent from one another, both the spirit world and the physical world are in constant interaction. Through séances, trances, and other forms of mediumship, these worlds can consciously communicate with each other. The spirit world is sometimes described by mediums in the natural world while in trance.
History
By the mid-19th century most Spiritualist writers concurred that the spirit world was of "tangible substance" and a place consisting of "spheres" or "zones". Although specific details differed, the construct suggested organization and centralization. An 18th-century writer, Emanuel Swedenborg, influenced Spiritualist views of the spirit world. He described a series of concentric spheres each including a hierarchical organization of spirits in a setting more earth-like than theocentric. The spheres become gradually more illuminated and celestial. Spiritualists added a concept of limitlessness, or infinity to these spheres. Furthermore, it was defined that Laws initiated by God apply to earth as well as the spirit world.
Another common Spiritualist conception was that the spirit world is inherently good and is related to truth-seeking, as opposed to things that are bad residing in a "spiritual darkness". This conception implied, as in the biblical parable of Lazarus and Dives, that there is considered to be a greater distance between good and bad spirits than between the dead and the living. Also, the spirit world is "The Home of the Soul" as described by C. W. Leadbeater (Theosophist), suggesting that for a living human to experience the spirit world is a blissful, meaningful and life-changing experience.
Yet, John Worth Edmonds stated in his 1853 work Spiritualism, "Man's relation spiritually with the spirit-world is no more wonderful than his connection with the natural world. The two parts of his nature respond to the same affinities in the natural and spiritual worlds." He asserted, quoting Swedenborg through mediumship, that the relationship between man and the spirit world is reciprocal and thus could contain sorrow. Though ultimately, "wandering through the spheres" a path of goodness "is received at last by that Spirit whose thought is universal love forever."
See also
Afterlife
Astral plane
Celtic Otherworld
Exorcism
Ghost
Hell
Heaven
List of death deities
Paradise
Shamanism
Soul flight
Spirit possession
Spirit photography
Spiritual warfare
Spiritual mapping
Territorial spirit
The Dreaming
Underworld/Netherworld
References
Afterlife places
Spiritism
Spiritualism
Spirituality | Spirit world (Spiritualism) | Biology | 554 |
56,248,020 | https://en.wikipedia.org/wiki/2014%20Dan%20River%20coal%20ash%20spill | The 2014 Dan River coal ash spill occurred in February 2014, when an Eden, North Carolina facility owned by Duke Energy spilled 39,000 tons of coal ash into the Dan River. The company later pled guilty to criminal negligence in their handling of coal ash at Eden and elsewhere and paid fines of over $5 million. The United States Environmental Protection Agency (EPA) has since been responsible for overseeing cleanup of the waste. EPA and Duke Energy signed an administrative order for the site cleanup.
Incident
On February 2, 2014 a drainage pipe burst at a coal ash containment pond owned by Duke Energy in Eden, North Carolina, sending 39,000 tons of coal ash into the Dan River. In addition to the coal ash, 27 million gallons of wastewater from the plant was released into the river. The broken pipe was left unsealed for almost a week before the draining coal ash was stopped. The ash was deposited up to from the site of the spill and contained harmful metals and chemicals. This catastrophe occurred at the site of the Dan River Steam Station, a retired coal power plant which had ceased operation in 2012. Duke Energy apologized for the incident and announced detailed plans for removal of coal ash at the Dan River site. Workers were only able to remove about ten percent of the coal ash that was spilled into the river, but cleanup is ongoing and Duke Energy plans to spend around 3 million dollars to continue the cleanup efforts.
CNN reported that the river was turned into an oily sludge. The river is a drinking water source for communities in North Carolina and Virginia. Immediate tests showed increased amounts of arsenic and selenium, but the river was deemed by state officials to be a safe source for drinking water. However, further tests showed the ash to contain pollutants including but not limited to arsenic, copper, selenium, iron, zinc and lead. The coal ash immediately endangered animals and fish species that lived in or around the river. Six days after the spill Duke Energy announced that the leakage had been stopped and they pledged to clean up the coal ash.
Reasons for spill
The cause of the ash spill was described by EPA as a limited structural flaw. A storm pipe near the deposits of a coal ash slurry containment area broke and allowed the leakage. Coal ash slurry is produced during the process of burning coal; it consists of the leftover impurities that remain after coal is burned for electricity. Coal companies have found that the cheapest way to store this waste is to mix it with water and store it in a pond. These ponds have been found to develop leaks that can release hazardous material into surface water, among other things. EPA has identified at least 25 coal ash ponds in the southeast that are "high hazard". This material was released into the Dan River because of the collapse of a 48 inch drain pipe. The pipe was made of concrete and corrugated metal and the reason for the fracture cannot be identified. The result was that 39 thousand tons of coal ash and 27 million gallons of ash pond water were deposited into the Dan River.
Environmental impact
EPA has been collecting dissolved contaminant concentration data in the Dan River (from the VA/NC state line to midway between Danville and South Boston) since the coal ash spill. The organization has been periodically comparing the retrieved water/sediment chemistry data to ecological risk screening levels (ERSLs) to assess risk to aquatic and plant life. Coal ash is made up of various materials after the burning of coal takes place. These include silica, arsenic, boron, cadmium, chromium, copper, lead, mercury, selenium, and zinc. Certain contaminants that were measured exceed the screening levels, necessitating that the water/sediment chemistry must continue to be monitored. Coal ash can coat and degrade the habitats of aquatic animals as well as cause direct harm to certain organisms.
The latest surface water sampling results were released by EPA in July 2014. All surface water chemical concentrations were found to be below the ERSLs except for lead. The latest sediment sampling results were also released in July 2014. All sediment chemical concentrations were found to be below the ERSLs except for aluminum, arsenic, barium, and iron. The latest soil sampling results were released in June 2014. All soil chemical concentrations were found to be below the ERSLs except for aluminum, barium, iron, and manganese.
The coal ash will never be fully removed from the river. This is due to samples passing human health screening, the potential for historical contamination to become re-suspended, and removal being more detrimental to certain endangered species than the coal ash itself. In addition, the coal ash is already mixed in with existing sediment, complicating its removal further. EPA estimated that about 72 percent of all the toxic water in the country comes directly from coal-fired power plants.
Enforcement
The New York Times reported that the North Carolina Department of Environmental Quality (NCDEQ; formerly the Department of Environment and Natural Resources) was directed to minimize its regulatory role prior to the accident by Governor Pat McCrory. Prior to being Governor, McCrory had worked for Duke Energy for nearly three decades. At the time, it was the third largest coal ash spill to have occurred in the United States. Prior to the incident, environmental groups had attempted to sue Duke Energy three times in 2013 under the Clean Water Act to force the company to fix leaks in its coal ash dumps. Each time, the groups were blocked by NCDEQ, which eventually fined the company $99,111. Federal prosecutors found this fine to be suspiciously low, and investigated both Duke Energy and the state regulators. Many newspaper editorials alleged that Duke Energy's environmental safety controls were lax and that the company "bullied" regulators.
After the incident, Duke Energy was prosecuted by a number of agencies, and substantial evidence was presented indicating that company officials knew about numerous coal ash leaks in various plants including the Eden facility and declined to resolve it or provide local plant administrators the funds they were requesting to monitor and mitigate the problems. At the federal level, Duke was prosecuted by the United States Department of Justice Environment and Natural Resources Division and pled guilty to nine charges of criminal negligence under the Clean Water Act. Duke agreed to pay $102 million in fines and restitution, the largest federal criminal fine in North Carolina history. Duke also agreed to pay fines to North Carolina and Virginia ($2.5 million).
Outcomes
Largely as a result of the attention brought to Duke Energy's handling of coal ash ponds by the 2014 disaster, the North Carolina state legislature ordered Duke Energy to close its 32 ash ponds in the state by 2029. On May 2, 2014, Duke Energy and EPA agreed to a $3 million cleanup agreement. Part of the agreement is having Duke Energy identify areas of necessary cleanup on the Dan River, which is estimated to cost around $1 million. The other $2 million is allocated to EPA to address future response methods needed in order to clean up the Dan River. A spokesperson for Duke Energy announced that the company plans to exit the coal ash business. Associates have said that well before the Dan River incident the company had allocated $130 million to transitioning plants to handle fly ash in dry form and manage it in lined landfills. Duke Energy said that it created an advisory group of researchers to help with cleaner coal combustion at its facilities.
In February 2016, the EPA proposed a $6.8 million settlement, which Duke Energy immediately appealed. In September the corporation accepted a settlement just shy of the original amount at $5,983,750, to be paid for fines, restitution, cleanup assessment, removal, and community action initiatives. Regarding the initial settlement, EPA sends periodic bills to Duke Energy accounting for direct and indirect costs incurred by EPA, its contractors, and the Department of Justice.
The states affected launched a lawsuit on July 18, 2019, asking that the court declare Duke Energy responsible for the damage done to the environment by the spill.
Cleanup efforts
To keep the energy provider accountable, under the Administrative Settlement Agreement & Order on Consent for Removal Action (AOC) as of May 2014, the Respondent, Duke Energy, was required to submit a number of plans to EPA, including a scope of work, public health, post-removal site control, and engineering plans.
The work plan includes descriptions and a schedule of actions required by the settlement.
The public health plan ensures protection of public health during on-site removal projects.
The post-removal site control plan provides EPA with documentation of all post-removal arrangements.
The engineering report describes steps executed by Duke to improve the structural durability of post-release impoundments and storm sewer lines running under their coal ash impoundments.
Within these plans, Duke Energy is responsible for creating and implementing a Site Assessment that includes but is not limited to ecological analysis, surface water and sediment assessment as well as post-removal monitoring protocols to calculate the extent of pollution in the Dan River in North Carolina and the Kerr Reservoir and Schoolfield Dam in Danville, Virginia. These assessments were approved by the EPA in consultation with the affected state agencies including NCDEQ and the Virginia Department of Environmental Quality (VDEQ). Following the spill and written into the AOC are monitoring protocols in which EPA will sporadically authorize the NCDEQ and VDEQ to take split or duplicate water samples to ensure consistent quality after removal of the coal ash.
As of April 1, 2019, North Carolina has ordered Duke Energy to dig up millions of tons of coal ash at six of its power plants. The dangerous coal ash has been mixed with water and stored in uncovered, unlined ponds for decades, but following the 2014 Dan River coal ash spill, many lawsuits have been filed. If the plaintiffs in these cases are successful, Duke Energy would be required to drain all of its 31 ponds. The draining process would cost $5 billion, on top of the $5.6 billion cleanup from 2014. With the added costs, Duke Energy customers could expect to pay higher fees in the coming years.
See also
2018 Cape Fear River coal ash spill
References
Dan River coal ash spill
Dan River coal ash spill
Coal-fired power stations
Energy accidents and incidents
Environmental impact of the coal industry
Environmental impact of the energy industry
Hazardous waste
Rockingham County, North Carolina
Waste disposal incidents in the United States
Water pollution in the United States
Environmental disasters in the United States
2014 disasters in the United States | 2014 Dan River coal ash spill | Technology | 2,114 |
40,243,698 | https://en.wikipedia.org/wiki/ShuntCheck | ShuntCheck is a non-invasive diagnostic medical device which detects flow in the cerebral shunts of hydrocephalus patients. Neurosurgeons can use ShuntCheck flow results along with other diagnostic tests to assess shunt function and malfunction.
Hydrocephalus is a condition in which cerebrospinal fluid (CSF) accumulates in the brain, potentially leading to brain damage and death. It is corrected by a shunt which drains excess CSF from the brain to the abdomen. Shunts fail, typically by obstruction – a life-threatening medical condition requiring the surgical replacement of the shunt. The symptoms of shunt failure are non-specific – headache, nausea, lethargy – so diagnostic tests must be conducted to rule in or rule out surgery. Current methods of diagnosing shunt malfunction, including CT Scan, MRI, radionuclide studies and shunt tap, have limitations and risks. These limitations and risks led to the development of ShuntCheck.
ShuntCheck uses thermal dilution to detect flow. The ShuntCheck sensor, a high-tech skin thermometer, is
placed over the shunt where it crosses the clavicle. The shunt, which lies just below the skin, is cooled with an ice pack placed “upstream” of the sensor. If CSF is flowing through the shunt, the cooled fluid will move “downstream” and the ShuntCheck sensor will detect a drop in temperature. Faster shunt flow results in greater temperature decreases. If the shunt is not flowing, the cooled fluid remains upstream and no temperature drop is recorded.
The sensor is connected to a laptop or tablet computer running ShuntCheck software. The computer analyzes the thermal data, determines “Flow Confirmed” or “Flow NOT Confirmed” and presents a time-temperature graph.
Early clinical testing of ShuntCheck found that functioning shunts flow intermittently, which meant that a ShuntCheck reading of “Flow NOT Confirmed” did not necessarily indicate a shunt problem. This discovery led to the development of the ShuntCheck Micro-Pumper, a handheld device which vibrates the shunt valve, generating a temporary increase in flow through patent, but not through obstructed, shunts. Micro-Pumper allows ShuntCheck to differentiate between temporarily non-flowing patent shunts and obstructed shunts.
ShuntCheck III
The current version of ShuntCheck was developed in 2011-2012 funded by grants from the National Institute of Health and was cleared by the US FDA in 2013. The ShuntCheck system includes the ShuntCheck Sensor, a skin marker, an Instant Cold Pack, a Data Acquisition Unit (an analog-to-digital converter called the “DAQ”), a Windows 7 or Windows 8 laptop or tablet computer running ShuntCheck software and the Micro-Pumper.
ShuntCheck clinical studies
Boston Children's Hospital 2008-2009 Joseph R. Madsen MD tested 100 symptomatic and asymptomatic pediatric hydrocephalus patients using an earlier version of ShuntCheck during 2008–2009. His key findings, reported in Neurosurgery, were:
The ShuntCheck test is sensitive and specific for detecting shunt flow
But failure to detect flow does not predict the need for surgery
University of South Florida 2008 Arthur E. Marlin MD and Sarah J Gaskill MD conducted ShuntCheck testing on 35 asymptomatic pediatric patients with similar results.
These findings led to the development of the Micro-Pumper.
Boston Children's Hospital 2010-2013 Dr. Madsen is testing 130 symptomatic and asymptomatic pediatric hydrocephalus patients to assess the diagnostic accuracy and clinical utility of the newer version of ShuntCheck including Micro-Pumper. This study was funded by the NIH.
Multi-Center Pediatric Outcomes Study 2013-2014 Boston Children's Hospital, Children's Hospital of Philadelphia, Johns Hopkins Hospital, Cleveland Clinic, University of Chicago Comer Children's Hospital, LeBonheur Children's Hospital and University of Texas Houston Children's Memorial Hermann Hospital will conduct an outcomes study of 400 symptomatic pediatric hydrocephalus patients during 2013–2014. In this NIH funded study, ShuntCheck results and the results of standard of care diagnostic tests will be compared to clinical outcome (shunt obstruction confirmed by surgery vs no-obstruction). This study seeks to demonstrate that
ShuntCheck is synergistic with imaging. Specifically that ShuntCheck plus imaging yield higher positive and negative predictive values than imaging alone.
ShuntCheck is comparable to imaging in ruling out shunt obstruction in cases which the Attending Physician judges to be “unlikely to require shunt surgery”.
Sinai Baltimore NPH Study 2012-2014 Michael A. Williams MD is conducting ShuntCheck testing on adult hydrocephalus patients undergoing radionuclide shunt patency testing. This study, funded by the NIH, seeks to demonstrate that ShuntCheck results match radionuclide results.
Potential clinical uses of ShuntCheck
A test for assessing shunt function in symptomatic hydrocephalus patients. ShuntCheck flow data, used in conjunction with other diagnostic test results and with physician judgment, can aid in ruling in or ruling out shunt obstruction.
A tool for establishing “normal” CSF flow patterns in asymptomatic patients. Establishing baseline flow may increase the value of flow information in symptomatic patients.
A tool to streamline the process of adjusting shunt valve settings to accommodate individual needs for CSF drainage. While the settings for these valves in each patient must currently be determined empirically over a number of weeks, Shuntcheck will be helpful in measuring changes in CSF flow due to changes in the valve setting.
A tool for assessing suspected over-drainage. CSF flow data will allow neurosurgeons to identify periods and causes of high CSF flow when assessing suspected CSF over drainage. This data can also be used to evaluate flow and siphon control devices to determine if they are meeting the patient's needs.
A post operative test to confirm shunt function. Hospitals in sparsely populated areas often conduct post-surgical CT scans to confirm shunt function before releasing patients for the long drive home. CSF flow data can confirm shunt function more quickly than CT (which requires time for the ventricles to stabilize).
NeuroDx Development
NeuroDx Development (NeuroDx) is an early commercial-stage medical device company founded in 2008 by Fred Fritz (CEO), a serial life sciences entrepreneur, and Dr. Marek Swoboda (Chief Scientific Officer), a biosensor engineer, to address important unmet needs in the hydrocephalus market. The company has developed two thermal dilution technologies for assessing shunt function in hydrocephalus patients, ShuntCheck-Micro-Pumper, an eight-minute test of CSF shunt flow, and Continuous Real Time (CRT) ShuntCheck, which uses thermal dilution to monitor changes in shunt flow over longer time periods. The company's follow-on products, an implantable intracranial pressure monitoring device and a non-invasive monitor of intracranial pressure, are in proof-of-concept development.
References
Medical equipment | ShuntCheck | Biology | 1,554 |
1,606,866 | https://en.wikipedia.org/wiki/Pitch%20Lake | The Pitch Lake is the largest natural deposit of bitumen in the world, estimated to contain 10 million tons. It is located in La Brea in southwest Trinidad, within the Siparia Regional Corporation. The lake covers about 0.405 square kilometres (100 acres) and is reported to be 76.2 metres (250 feet) deep.
Pitch Lake is a popular tourist attraction, including a small museum, from where official tour guides can escort people across the lake. The lake is mined for asphalt by Lake Asphalt of Trinidad and Tobago.
History
The Pitch Lake has fascinated explorers and scientists, attracting tourists since its re-discovery by Sir Walter Raleigh in his expedition there in 1595. Raleigh himself found immediate use for the asphalt to caulk his ship. He referred to the pitch as "most excellent... It melteth not with the sun as the pitch of Norway". Raleigh was informed of the lake’s location by the native Amerindians, who had their own story about the origin of the lake. The story goes that the indigenous people were celebrating a victory over a rival tribe when they got carried away in their celebration. They proceeded to cook and eat the sacred hummingbird which they believed possessed the souls of their ancestors. According to legend, their winged god punished them by opening the earth and conjuring the pitch lake to swallow the entire village, and the lake became a permanent stain and a reminder of their sins. The local villagers believe this legend due to the many Amerindian artifacts, and a cranium, that have been discovered preserved in the pitch.
In the 1840s, Abraham Pineo Gesner first obtained kerosene from a sample of Pitch Lake bitumen.
In 1887, Amzi Barber, an American businessman known as "The Asphalt King", secured a 42-year monopoly concession from the British Government for the Pitch Lake for his company, Barber Asphalt Paving Company. It was from this source that many of the first asphalt roads of New York City, Washington D.C., and other Eastern U.S. cities were paved.
Geology
The origin of The Pitch Lake is related to deep faults in connection with subduction under the Caribbean Plate at the Barbados Arc. The lake has not been studied extensively, but it is believed that the lake is at the intersection of two faults, which allows oil from a deep deposit to be forced up. The lighter elements in the oil evaporate under the hot tropical sun, leaving behind the heavier asphalt. Bacterial action on the asphalt at low pressures creates petroleum in asphalt. The researchers indicated that extremophiles inhabited the asphalt lake in populations ranging between 10^6 and 10^7 cells/gram. The Pitch Lake is one of several natural asphalt lakes in the world, including La Brea Tar Pits (Los Angeles), the McKittrick Tar Pits (McKittrick) and the Carpinteria Tar Pits (Carpinteria) in the U.S. state of California, and Lake Guanoco in the Republic of Venezuela.
The regional geology of southern Trinidad consists of a trend of ridges, anticlines with shale diapiric cores, and sedimentary volcanoes. According to Woodside, "host muds and/or shales become over pressured and under compacted in relation to the surrounding sediments...mud or shale diapirs or mud volcanoes result because of the unstable semi-fluid nature of the methane-charged, undercompacted shales/muds." The mud volcanoes are aligned along east-northeast parallel trends. Woodside goes on to say, "The Asphalt Lake at Brighton represents a different kind of sedimentary volcanism in which gas and oil are acting on asphalt mixed with clay. This asphalt lake cuts across Miocene/Pliocene formations overlying a complicated thrust structure."
The first wells were drilled into Pitch Lake oil seeps in 1866. Kerosene was distilled from the pitch in the lake from 1860 to 1865. The Guayaguayare No. 3 well was drilled in 1903, but the first commercial well was drilled at the west end of the lake in 1903. Oil was then discovered in Point Fortin-Perrylands area, and in 1911, the Tabaquite Field was discovered. The Forest Reserve Field was discovered in 1914 and the Penal Field in 1941. The first offshore well was drilled in 1954 at Soldado.
Microbiology
Evidence of an active microbiological ecosystem in Pitch Lake has been reported. The microbial diversity was found to be unique when compared to microbial communities analyzed at other hydrocarbon-rich environments, including La Brea tar pits in California, and an oil well and a mud volcano in Trinidad and Tobago. Archaeal and bacterial communities co-exist, with novel species having been discovered from Pitch Lake samples. Researchers have also observed novel fungal life forms which can grow on the available asphaltenes as a sole carbon and energy source. The microbiological activity is accompanied by a stronger evolution of gas consisting principally of methane with a considerable proportion of carbon dioxide, and which also contains hydrogen sulphide.
See also
Notable tar pits
List of tar pits
Asphalt volcano
References
External links
The Wonderland of Trinidad, by Barber Asphalt Company—a Project Gutenberg eBook
Asphalt lakes
Landforms of Trinidad and Tobago | Pitch Lake | Chemistry | 1,063 |
2,805,105 | https://en.wikipedia.org/wiki/Lyot%20stop | A Lyot stop (also called a glare stop) is an optical stop, invented by French astronomer Bernard Lyot, that reduces the amount of flare caused by diffraction of other stops and baffles in optical systems.
Lyot stops are located at images of the system's entrance pupil and have a diameter slightly smaller than the pupil's image. Examples of applications can be found in the references.
See also
Lyot filter
Lyot depolarizer
Coronagraph
References
External links
NASA Coronagraphs
Handbook of Optics by Bass et al.
Optical devices
Photography equipment | Lyot stop | Materials_science,Engineering | 120 |
1,782,555 | https://en.wikipedia.org/wiki/Stirling%20transform | In combinatorial mathematics, the Stirling transform of a sequence { an : n = 1, 2, 3, ... } of numbers is the sequence { bn : n = 1, 2, 3, ... } given by
b_n = \sum_{k=1}^{n} S(n,k)\, a_k ,
where S(n,k) is the Stirling number of the second kind, which is the number of partitions of a set of size n into k parts. This is a linear sequence transformation.
The inverse transform is
a_n = \sum_{k=1}^{n} s(n,k)\, b_k ,
where s(n,k) is a signed Stirling number of the first kind, and the unsigned |s(n,k)| can be defined as the number of permutations on n elements with k cycles.
Bernstein and Sloane (cited below) state "If an is the number of objects in some class with points labeled 1, 2, ..., n (with all labels distinct, i.e. ordinary labeled structures), then bn is the number of objects with points labeled 1, 2, ..., n (with repetitions allowed)."
If
f(x) = \sum_{n=1}^{\infty} \frac{a_n}{n!} x^n
is a formal power series, and
g(x) = \sum_{n=1}^{\infty} \frac{b_n}{n!} x^n
with a_n and b_n as above, then
g(x) = f(e^x - 1) .
Likewise, the inverse transform leads to the generating function identity
f(x) = g(\ln(1 + x)) .
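As a concrete check of the definitions above, the following Python sketch computes Stirling numbers of both kinds from their standard recurrences, applies the transform to an arbitrary sample sequence, and verifies that the inverse transform recovers it.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S2(n, k):
    """Stirling number of the second kind: partitions of an n-set into k parts."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

@lru_cache(maxsize=None)
def s1(n, k):
    """Signed Stirling number of the first kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

def stirling_transform(a):
    # a[0] holds a_1, matching the 1-indexed sequences used in the article
    return [sum(S2(n, k) * a[k - 1] for k in range(1, n + 1))
            for n in range(1, len(a) + 1)]

def inverse_stirling_transform(b):
    return [sum(s1(n, k) * b[k - 1] for k in range(1, n + 1))
            for n in range(1, len(b) + 1)]

a = [1, 2, 3, 4, 5]                        # arbitrary example sequence a_1..a_5
b = stirling_transform(a)
assert inverse_stirling_transform(b) == a  # the inverse transform recovers the input
print(b)                                   # [1, 3, 10, 37, 151]
```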
See also
Binomial transform
Generating function transformation
List of factorial and binomial topics
References
Bernstein, M.; Sloane, N. J. A. (1995), "Some canonical sequences of integers", Linear Algebra and its Applications, 226–228: 57–72.
Khristo N. Boyadzhiev, Notes on the Binomial Transform, Theory and Table, with Appendix on the Stirling Transform (2018), World Scientific.
Factorial and binomial topics
Transforms | Stirling transform | Mathematics | 285 |
15,139,091 | https://en.wikipedia.org/wiki/Architecture%20and%20Vision | Architecture and Vision (AV) is an international multidisciplinary design agency that was formed in 2003 by Arturo Vittori in partnership with Andreas Volger. AV works in architecture, design, and art.
The practice is mainly based around technology transfer between disciplines such as aerospace, art, and architecture.
History
Architecture and Vision was established in 2003 and is directed by architect Arturo Vittori, based in Bomarzo, Viterbo, Italy, and Andreas Vogler, based in Munich, Germany.
Projects
2014: WarkaWater 2.0, USEK, Beirut, Lebanon
2013: OR of the Future, UIC, Chicago, United States
2012: WarkaWater, Venice Biennale, Venice, Italy
2011: LaFenice, Messina, Sicily, Italy
2011: AtlasCoelestisZeroG, International Space Station
2011: Corsair International, Paris, France
2009: AtlasCoelestis, Sullivan Galleries, Chicago, Illinois
2009: MercuryHouseOne, Venice Biennale, Venice, Italy
2009: FioredelCielo, Macchina di Santa Rosa, Viterbo, Italy
2007: BirdHouse, Bird House Foundation, Osaka, Japan
2006: DesertSeal, permanent collection, Museum of Modern Art (MOMA), New York City
Awards and recognition
In 2006, the "DesertSeal" (2004), an extreme environment tent prototype, gained recognition when it became a part of the permanent collection at the Museum of Modern Art in New York. It was featured in the exhibition "SAFE: Design Takes on Risk" (2005), curated by Paola Antonelli. The same year, Vittori and Vogler were honored as "Modern-day Leonardos" by the Museum of Science and Industry during its Leonardo da Vinci: Man, Inventor, Genius exhibition.
In 2007, the Museum of Science and Industry acquired a model of the inflatable habitat "MoonBaseTwo" (2007), designed to facilitate long-term exploration on the Moon. Additionally, the "MarsCruiserOne" (2007), a pressurized laboratory rover for Mars exploration, was showcased at the Centre Georges Pompidou in Paris as part of the Airs de Paris exhibition.
References
Further reading
Paola Antonelli (ed.), Safe: Design Takes on Risk, The Museum of Modern Art, New York 2005, p. 64.
Valérie Guillaume, "architecture + vision. Mars Cruiser One 2002-2006", in Airs de Paris, Diffusion Union-Distribution, Paris 2007, pp. 338–339.
Namita Goel, The Beauty of the Extreme, Indian Architect & Builder, March 2006, pp. 82–83.
Arturo Vittori, Architecture and Vision, in L'Arca, October 2004, 196, pp. 26–38.
Un veicolo per Marte. Mars Cruiser One, in L'Arca, April 2007, 224, p. 91.
Ruth Slavid, Micro: Very Small Buildings, Laurence King Publishing, London, pp. 102–106.
Wüstenzelt Desert Seal / Desert Seal Tent, in Detail, 2008, 6, pp. 612–614
External links
Official website
Design companies of Italy
Design companies of Germany
Aerospace
Architecture groups
Design companies established in 2003
Italian companies established in 2003
German companies established in 2003 | Architecture and Vision | Physics | 670 |
30,498,206 | https://en.wikipedia.org/wiki/UTRome | UTRome is a database of three-prime untranslated regions in C. elegans developed by Marco Mangone
See also
untranslated region (UTR)
UTRdb
UTRome.org
References
External links
http://www.UTRome.org
Model organism databases
RNA
Gene expression | UTRome | Chemistry,Biology | 67 |
53,830,069 | https://en.wikipedia.org/wiki/Climate-friendly%20school | A climate-friendly school, or eco-school, encourages the education of sustainable developments, especially by reducing the amount of carbon dioxide produced in order to decrease the effects of climate change. The term "climate-friendly school" arose and was promoted by the United Nations' education for sustainable development program (ESD).
The scientific consensus on the warming of the climate system and growing public concern about its effects, as well as the increased international commitments by countries to reduce global emissions, has accelerated investment into climate-friendly technologies in recent years.
Climate-friendly initiatives
International initiatives such as the United Nation's education for sustainable development program (ESD), supported by the Foundation for Environmental Education eco-schools program and the UNEP's global universities partnership on environment and sustainability have led the development of climate-friendly schools. In order to minimise the production of carbon dioxide, these initiatives have encouraged the calculation and reduction of carbon footprints, the reduction of waste (through composting, purchasing policies, litter-less lunches), alternative transport options and increased education of climate change issues.
Education for sustainable development (ESD)
The education for sustainable development (ESD) framework was developed through broad consultations with stakeholders from 2016 to 2018, with the aim of contributing to the achievement of the 17 Sustainable Development Goals. The United Nations introduced a "whole-school" approach, in which what students learn about climate change is reinforced by formal and informal messages promoted by the school's values and actions. The "whole-school" approach to climate change means that an educational institution encourages action for reducing climate change in every aspect of school life. This includes school governance, teaching content and methodology, campus and facilities management, as well as cooperation with partners and the broader communities. Actively involving all internal and external school stakeholders, namely students, teachers, principals, school staff at all levels, and the wider school community such as families and community members, in reflecting and acting on climate change is key to a whole-school approach.
Eco-schools
The eco-schools program was developed in 1994 with the support of the European Commission and was identified as a model initiative for the ESD program by the United Nations. The aim of the program is to promote sustainable development issues in schools through the introduction of a seven-step methodology and the encouragement of eleven subject themes.
Global Universities Partnership on Environment and Sustainability
The global universities partnership on environment and sustainability was launched in 2012 at the UNEP in Shanghai, China. In accordance with the ESD program, it aims to increase the mainstreaming of sustainability practices and education into universities worldwide. The program pays special attention to enabling individual transformation, societal transformation and technological advancement.
Climate-friendly schools
According to a UNESCO report, the following schools situated around the world have implemented the system of a "climate-friendly school", with respect to climate agreements.
Greece
As an experimental school, the Athens-Gennadeio in Greece was encouraged to introduce innovative programmes. In 2013, the school introduced a systems unit into biology and chemistry courses for 157 senior secondary students. In this unit, students worked in groups to investigate climate change, virus transmission, and ecosystem dynamics with the help of computer simulations. Through their investigations, students discovered the properties of complex systems, such as positive and negative feedback loops. A group of students measured the energy sustainability of the school building, to find its weaknesses and construct an action plan to improve it.
Lebanon
The Al-Kawthar Secondary School in Beirut, Lebanon works to raise awareness of climate change within their school. So far, 2,421 students, 310 teachers, and 110 families have been involved in projects including tree-planting, making handicrafts from recycled materials, visiting national forests, recycling, and conserving water. The school also hosted film nights and workshops where students, families and teachers suggested ways to save the planet. Following the ISO-26000 guidelines for socially responsible institutions, the school has committed to a continuous process of improvement. At the beginning of the school year, the environmental committee develops an action plan based on what was learned and achieved the previous year. The committee keeps a record of their activities, so the school can identify high-impact activities and activities that could be scaled up. Teachers and students deepen their learning by sharing their experiences with other schools in Lebanon and around the world.
Côte d’Ivoire
In Côte d'Ivoire, ASPnet schools implemented initiatives, in consultation with university researchers and medical practitioners, that aimed to preserve the biodiversity of forests. The biodiversity of forests was acknowledged to be under threat due to the widespread use of forest resources as a culturally important practice in traditional medicine. The schools promote visits to botanical gardens where parents and traditional medicine practitioners teach students about traditional medicinal plant cultivation and methods of sustainable conservation. In collaboration with the experts and researchers, the ASPnet schools are now considering creating a genebank as well as replanting endangered species.
Brazil
In Rio de Janeiro in Brazil, the Colégio Israelita Brasileiro A. Liessen's environment team has developed initiatives to teach janitors, teachers, students and engineers about sustainable practices in experiential, non-formal learning activities. The team built a green roof, solar ovens and bamboo bicycle racks, planted spice, flower, and meditation gardens, and collected cooking oils that could be converted into biodiesel. The team has also offered trainings for school community members in order to secure buy-in for the projects. For example, training on waste sorting and cooking oil collection was offered to employees and a gardening workshop was organized for student volunteers, so they could assist maintenance staff in caring for the expanding school gardens.
Japan
The Nagoya International School in Japan is committed to developing a school culture of sustainability, as expressed in their school mission statement. The institution aims to “nurture in its students the capacity to objectively define what is truly needed in the global society, to take action on their own, and to become active agents for sustainable development.”
See also
UNESCO ASPnet
Education for sustainable development
Global Citizenship Education
Climate Change Education (CCE)
Sources
References
Free content from UNESCO
Sustainable development
Environmental education
Meteorology and climate education | Climate-friendly school | Environmental_science | 1,241 |
22,522,863 | https://en.wikipedia.org/wiki/Neodecanoic%20acid | Neodecanoic acid is a mixture of carboxylic acids with the common structural formula C10H20O2, a molecular weight of 172.26 g/mol, with a boling point of 243-253 °C (482 to 494 °F), a melting point of -39 °C (<104 °F) and the CAS number 26896-20-8. Components of the mixture are acids with the common property of a "trialkyl acetic acid" having three alkyl groups at carbon two, including:
2,2,3,5-Tetramethylhexanoic acid
2,4-Dimethyl-2-isopropylpentanoic acid
2,5-Dimethyl-2-ethylhexanoic acid
2,2-Dimethyloctanoic acid
2,2-Diethylhexanoic acid
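As referenced above the component list, the quoted molecular weight can be checked by summing standard atomic masses for C10H20O2. The Python sketch below does exactly that; the atomic masses are rounded literature values.

```python
# Rounded standard atomic masses (g/mol)
atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}

formula = {"C": 10, "H": 20, "O": 2}   # C10H20O2

mw = sum(atomic_mass[el] * n for el, n in formula.items())
print(f"{mw:.2f} g/mol")   # ~172.27, consistent with the quoted molecular weight of 172.26 g/mol
```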
References
Alkanoic acids | Neodecanoic acid | Chemistry | 194 |
31,672,921 | https://en.wikipedia.org/wiki/Stella%20d%27Italia | The Stella d'Italia ("Star of Italy"), popularly known as Stellone d'Italia ("Great Star of Italy"), is a five-pointed white star, which has symbolized Italy for many centuries. It is the oldest national symbol of Italy, since it dates back to Graeco-Roman mythology when Venus, associated with the West as an evening star, was adopted to identify the Italian peninsula. From an allegorical point of view, the Stella d'Italia metaphorically represents the shining destiny of Italy.
In the early 16th century it began to be frequently associated with Italia turrita, the national personification of the Italian peninsula. The Stella d'Italia was adopted as part of the emblem of Italy in 1947, where it is superimposed on a steel cogwheel, all surrounded by an oak branch and an olive branch.
Symbolic value
From an allegorical point of view, the Star of Italy metaphorically represents the shining destiny of Italy. Its unifying value is equal to that of the flag of Italy. In 1947, the Stella d'Italia was inserted at the center of the emblem of Italy, which was designed by Paolo Paschetto and which is the iconic symbol identifying the Italian State.
The Italian Star is also recalled by some honors. The Italian Star is recalled by the Colonial Order of the Star of Italy, decoration of the Kingdom of Italy which was intended to celebrate the Italian Empire, as well as by the Order of the Star of Italian Solidarity, the first decoration established by Republican Italy, which was replaced in 2011 by the Order of the Star of Italy, second civil honorary title in importance of the Italian State.
The Star of Italy is also recalled by the stars worn on the collars of Italian military uniforms and appears on the figurehead of the Italian Navy. In the civil sphere, the Italian Star is the central symbol of the emblem of the Club Alpino Italiano.
History
From ancient Greece to the Roman era
The symbolism of a star associated with Italy first appeared in the writings of the ancient Greek poet Stesichorus, from whom it passed on to poets such as Virgil. The oldest national symbol of Italy, it originated from the combination of Venus, as an evening star, with the West and therefore with the Italian peninsula, one of whose ancient names was Esperia, or "land of Hesperus, the star of the Evening consecrated to Venus". This symbolism was already attested in archaic Greek literature, in the 6th century BC, by the poet Stesichorus in the poem Iliupersis (Fall of Troy), which created the legend of Aeneas and described his return to the land of his ancestors (Italy) after the defeat of Troy, under the guidance of Venus.
The story of Aeneas' journey to the Italian coast from the maternal star of Venus is then resumed in Roman times by Pliny the Elder, by Marcus Terentius Varro and by Virgil, giving rise to a double tradition: the political tradition of Caesaris Astrum, the star of Julius Caesar that had originated from the appearance of a comet star shortly after his death and which was also recalled by Augustus as an auspicious sign and as a prefiguration of the Pax Romana, and the toponymy and literary tradition of Greek origin of Italy called Esperia, the "land on which the evening star sets" that is Venus. The merger of the two traditions associated the star with Italy, the center of the Roman Empire and never considered a province, having a special administrative status, being divided into the Augustan regions.
The first association between the star (the Stella Veneris) and the mural crown (the Corona muralis) of Italia turrita, from which the so-called Italia turrita e stellata, is also from the Roman era and dates back to the time of Augustus.
From the Middle Ages to the unification of Italy
After a period of disuse in the Middle Ages, the Star of Italy was rediscovered during the Renaissance. Whether the precious tricolor star-shaped jewel, studded with green emeralds, white pearls and red rubies, preserved at the Castelvecchio Museum in Verona and dating back to the 14th century, carries the symbolic meaning of Caesar's star is still uncertain. One interpretation is that it was made for the condottiere Cangrande I della Scala, the lord of Verona in whom Dante Alighieri saw the new Caesar capable of unifying Italy. However, the star may also refer to Sirius, with the green, white and red colors associated with the three theological virtues.
In 1603, in the second edition of his treatise Iconologia, Cesare Ripa associated the symbol with the Italia turrita, and created a modern version of Italy's allegorical personification: a woman with a star on top of a towered crown, therefore supplied with the Corona muralis and the Stella Veneris. Ripa's treatise inspired many artists like Canova, Bisson, Maccari, Balla, Sironi, until the 1920s.
The allegorical image of the towered and star-topped Italy became popular during the unification of Italy, spreading through a large iconography of statues, friezes and decorative objects, tourist-guide covers, postcards, prints and magazine illustrations. During the unification of Italy, evoking Aeneas' journey toward the Italian coasts, the patriot Giuseppe Mazzini revived the myth of the national star, which was afterwards taken up by Cavour and the new Savoyard kings of Italy. The reigning house even tried to get possession of it, suggesting that it was the Stella Sabauda ("Savoys' star"), a family heraldic pattern that is not mentioned in any historical document preceding the unification of Italy.
From the unification of Italy to republican Italy
After the unification of Italy, the presence of enormous symbolic stars on the honor stage of the official ceremonies in which King Victor Emmanuel II of Italy participated led the Italians more and more to regard it, affectionately, as the "star" that protects Italy. On Italian metallic coinage the Stella d'Italia is present on all copper issues from 1861 until 1907, as well as on all the coins of King Umberto I of Italy. The Stella d'Italia is also recalled by the coat of arms of the Kingdom of Italy used from 1870 to 1890. In 1871, by royal decree no. 571 of December 13, 1871, signed by the minister Cesare Ricotti-Magnani, the Stella d'Italia became one of the distinctive signs of the Italian Armed Forces, the so-called "stars".
The Stella d'Italia is also mentioned in the patriotic music piece Tripoli bel suol d'amore, which was written in 1911 just before the start of the Italo-Turkish War, a military campaign forming part of the Italian colonial wars to propagate the imminent war of the Kingdom of Italy against the Ottoman Empire aimed at the conquest of Libya.
The Stella d'Italia was one of the symbols of the journey by train on the Aquileia-Rome line towards the capital of Italy of the body of the Italian Unknown Soldier. The coffin was placed on a gun carriage and placed on a goods wagon designed for the occasion by Guido Cirilli. The ceremony had its epilogue in Rome with the solemn burial at the Altare della Patria on 4 November 1921 on the occasion of National Unity and Armed Forces Day. A bronze Stella d'Italia was placed on one of the two locomotives that pulled the railway hearse, while a second one was represented on the main building of the Roma Tiburtina railway station, which received the convoy in the final destination and which was known at the time as "Portonaccio station".
The protective or providential meaning of the star was then adopted by Italian Fascism and the Italian resistance movement, which placed it on the flag of the National Liberation Committee, as well as by the republicans and the monarchists on the occasion of the institutional referendum of 2 June 1946, which took place following the end of World War II. In 1947, the Stella d'Italia was included in the center of the official Emblem of Italy, drawn by the designer Paolo Paschetto. From an allegorical point of view, the Stella d'Italia metaphorically represents the shining destiny of Italy.
Gallery
See also
Italia turrita
Emblem of Italy
National symbols of Italy
Order of the Star of Italy
Citations
References
National symbols of Italy
Heraldic charges
Star symbols | Stella d'Italia | Mathematics | 1,734 |
41,710,955 | https://en.wikipedia.org/wiki/Perennation | In botany, perennation is the ability of organisms, particularly plants, to survive from one germinating season to another, especially under unfavourable conditions such as drought or winter cold. It typically involves development of a perennating organ, which stores enough nutrients to sustain the organism during the unfavourable season, and develops into one or more new plants the following year. Common forms of perennating organs are storage organs (e.g. tubers, rhizomes and corm), and buds. Perennation is closely related with vegetative reproduction, as the organisms commonly use the same organs for both survival and reproduction.
See also
Overwintering
Plant pathology
Sclerotium
Turion (botany)
References
Agronomy
Botany
Biology terminology | Perennation | Biology | 160 |
1,393,819 | https://en.wikipedia.org/wiki/Diamond%20turning | Diamond turning is turning using a cutting tool with a diamond tip. It is a process of mechanical machining of precision elements using lathes or derivative machine tools (e.g., turn-mills, rotary transfers) equipped with natural or synthetic diamond-tipped tool bits. The term single-point diamond turning (SPDT) is sometimes applied, although as with other lathe work, the "single-point" label is sometimes only nominal (radiused tool noses and contoured form tools being options). The process of diamond turning is widely used to manufacture high-quality aspheric optical elements from crystals, metals, acrylic, and other materials. Plastic optics are frequently molded using diamond turned mold inserts. Optical elements produced by the means of diamond turning are used in optical assemblies in telescopes, video projectors, missile guidance systems, lasers, scientific research instruments, and numerous other systems and devices. Most SPDT today is done with computer numerical control (CNC) machine tools. Diamonds also serve in other machining processes, such as milling, grinding, and honing. Diamond turned surfaces have a high specular brightness and require no additional polishing or buffing, unlike other conventionally machined surfaces.
Process
Diamond turning is a multi-stage process. Initial stages of machining are carried out using a series of CNC lathes of increasing accuracy. A diamond-tipped lathe tool is used in the final stages of the manufacturing process to achieve sub-nanometer level surface finishes and sub-micrometer form accuracies. The surface finish quality is measured as the peak-to-valley distance of the grooves left by the lathe. The form accuracy is measured as a mean deviation from the ideal target form. Quality of surface finish and form accuracy is monitored throughout the manufacturing process using such equipment as contact and laser profilometers, laser interferometers, optical and electron microscopes. Diamond turning is most often used for making infrared optics, because optical performance at longer wavelengths is less sensitive to mid-spatial-frequency errors and surface finish quality, and because many of the materials used are difficult to polish with traditional methods.
Temperature control is crucial, because the surface must be accurate on distance scales shorter than the wavelength of light. Temperature changes of a few degrees during machining can alter the form of the surface enough to have an effect. The main spindle may be cooled with a liquid coolant to prevent temperature deviations.
The diamonds that are used in the process are strong in the downhill regime but tool wear is also highly dependent on crystal anisotropy and work material.
The machine tool
For best possible quality natural diamonds are used as single-point cutting elements during the final stages of the machining process. A CNC SPDT lathe rests atop a high-quality granite base with micrometer surface finish quality. The granite base is placed on air suspension on a solid foundation, keeping its working surface strictly horizontal. The machine tool components are placed on top of the granite base and can be moved with high degree of accuracy using a high-pressure air cushion or hydraulic suspension. The machined element is attached to an air chuck using negative air pressure and is usually centered manually using a micrometer. The chuck itself is separated from the electric motor that spins it by another air suspension.
The cutting tool is moved with sub-micron precision by a combination of electric motors and piezoelectric actuators. As with other CNC machines, the motion of the tool is controlled by a list of coordinates generated by a computer. Typically, the part to be created is first described using a computer aided design (CAD) model, then converted to G-code using a computer aided manufacturing (CAM) program, and the G-code is then executed by the machine control computer to move the cutting tool. The final surface is achieved with a series of cutting passes to maintain a ductile cutting regime.
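To make the CAD-to-toolpath step concrete, the Python sketch below samples a standard even-asphere sag equation and emits simple linear G-code moves for a two-axis (X/Z) facing pass. It is only an illustration: the surface coefficients, step size, feed rate, and axis conventions are assumed here and do not correspond to any particular machine or CAM package.

```python
import math

def asphere_sag(r, c, k, a4=0.0, a6=0.0):
    """Sag z(r) of an even asphere: base conic plus polynomial terms."""
    return (c * r**2) / (1 + math.sqrt(1 - (1 + k) * c**2 * r**2)) + a4 * r**4 + a6 * r**6

# Illustrative surface: 25 mm radius of curvature, mild conic, 10 mm semi-diameter.
curvature = 1 / 25.0
conic = -0.5
semi_diameter = 10.0
step = 0.05          # radial step per segment, mm (assumed)

lines = ["G21 ; millimetres", "G90 ; absolute coordinates", "F5.0 ; assumed feed rate"]
r = semi_diameter
while r >= 0:
    z = asphere_sag(r, curvature, conic, a4=1e-6)
    # Cut from the outside edge toward the centre, following the sag profile.
    lines.append(f"G1 X{r:.4f} Z{-z:.5f}")
    r -= step

print("\n".join(lines[:6]) + "\n...")   # preview the first few moves
```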
Alternative methods of diamond machining in practice also include diamond fly cutting and diamond milling. Diamond fly cutting can be used to generate diffraction gratings and other linear patterns with appropriately contoured diamond shapes. Diamond milling can be used to generate aspheric lens arrays by annulus cutting methods with a spherical diamond tool.
Materials
Diamond turning is specifically useful when cutting materials that are viable as infrared optical components and certain non-linear optical components such as potassium dihydrogen phosphate (KDP). KDP is a perfect material in application for diamond turning, because the material is very desirable for its optical modulating properties, yet it is impossible to make optics from this material using conventional methods. KDP is water-soluble, so conventional grinding and polishing techniques are not effective in producing optics. Diamond turning works well to produce optics from KDP.
Generally, diamond turning is restricted to certain materials. Materials that are readily machinable include:
Plastics
Acetal
Acrylic
Nylon
Polycarbonate
Polypropylene
Polystyrene
Zeonex
Metals
Aluminum and aluminium alloys
Brass
Copper
Gold
Nickel-phosphorus alloy, deposited via electrolytic or electroless nickel plating on other materials
Silver
Tin
Zinc
Infrared crystals
Cadmium sulfide
Cadmium telluride
Calcium fluoride
Cesium iodide
Gallium arsenide
Germanium
Lithium niobate
Potassium bromide
Potassium dihydrogen phosphate (KDP)
Silicon
Sodium chloride
Tellurium dioxide
Zinc selenide
Zinc sulfide
The most often requested materials that are not readily machinable are:
Silicon-based glasses and ceramics
Ferrous materials (steel, iron)
Beryllium
Titanium
Molybdenum
Nickel (except for electroless nickel plating)
Ferrous materials are not readily machinable because the carbon in the diamond tool chemically reacts with the substrate, leading to tool damage and dulling after short cut lengths. Several techniques have been investigated to prevent this reaction, but few have been successful for long diamond machining processes at mass production scales.
Tool life improvement has been under consideration in diamond turning as the tool is expensive. Hybrid processes such as laser-assisted machining have emerged in this industry recently. The laser softens hard and difficult-to-machine materials such as ceramics and semiconductors, making them easier to cut.
Quality control
Despite all the automation involved in the diamond turning process, the human operator still plays the main role in achieving the final result. Quality control is a major part of the diamond turning process and is required after each stage of machining, sometimes after each pass of the cutting tool. If it is not detected immediately, even a minute error during any of the cutting stages results in a defective part. The extremely high requirements for quality of diamond-turned optics leave virtually no room for error.
The SPDT manufacturing process produces a relatively high percentage of defective parts, which must be discarded. As a result, the manufacturing costs are high compared to conventional polishing methods. Even with the relatively high volume of optical components manufactured using the SPDT process, this process cannot be classified as mass production, especially when compared with production of polished optics. Each diamond-turned optical element is manufactured on an individual basis with extensive manual labor.
History
Research into single-point diamond turning began in the late 1940s with Philips in the Netherlands, while Lawrence Livermore National Laboratory (LLNL) pioneered SPDT in the mid-1960s. By 1979, LLNL received funding to transfer this technology to private industry.
LLNL initially focused on two-axis machining for axisymmetric surfaces and developed the Large Optics Diamond Turning Machine (LDTM), a highly accurate lathe. They also experimented with freeform surfaces using fast tool servos and XZC (slow tool servo) turning, leading to applications like wavefront correctors for lasers.
Three-axis turning became more common in the early 1990s as diamond quality improved. Companies like Zeiss began producing refractive lenses for infrared optics, advancing freeform optical manufacturing. By 2002, interest in freeform shapes had expanded, especially in focusing lenses. Early applications included Polaroid’s SX-70 camera, and fast tool servos enabled rapid production of non-axisymmetric surfaces for contact lenses.
See also
Fabrication and testing (optical components)
References
Optics
Glass production
Turning | Diamond turning | Physics,Chemistry,Materials_science,Engineering | 1,671 |
6,217,045 | https://en.wikipedia.org/wiki/Kelvin%20equation | The Kelvin equation describes the change in vapour pressure due to a curved liquid–vapor interface, such as the surface of a droplet. The vapor pressure at a convex curved surface is higher than that at a flat surface. The Kelvin equation is dependent upon thermodynamic principles and does not allude to special properties of materials. It is also used for determination of pore size distribution of a porous medium using adsorption porosimetry. The equation is named in honor of William Thomson, also known as Lord Kelvin.
Formulation
The original form of the Kelvin equation, published in 1871, is:
p(r_1, r_2) = P - \frac{\gamma \, \rho_{\mathrm{vapor}}}{\rho_{\mathrm{liquid}} - \rho_{\mathrm{vapor}}} \left( \frac{1}{r_1} + \frac{1}{r_2} \right)
where:
p(r_1, r_2) = vapor pressure at a curved interface of radius r
P = vapor pressure at flat interface (r = \infty) = p_{\mathrm{sat}}
\gamma = surface tension
\rho_{\mathrm{vapor}} = density of vapor
\rho_{\mathrm{liquid}} = density of liquid
r_1, r_2 = radii of curvature along the principal sections of the curved interface.
This may be written in the following form, known as the Ostwald–Freundlich equation:
\ln \frac{p}{p_{\mathrm{sat}}} = \frac{2 \gamma V_m}{r R T}
where p is the actual vapour pressure,
p_{\mathrm{sat}} is the saturated vapour pressure when the surface is flat,
\gamma is the liquid/vapor surface tension, V_m is the molar volume of the liquid, R is the universal gas constant, r is the radius of the droplet, and T is temperature.
Equilibrium vapor pressure depends on droplet size.
If the curvature is convex, r is positive, then p > p_{\mathrm{sat}}.
If the curvature is concave, r is negative, then p < p_{\mathrm{sat}}.
As r increases, p decreases towards p_{\mathrm{sat}}, and the droplets grow into bulk liquid.
If the vapour is cooled, then T decreases, but so does p_{\mathrm{sat}}. This means p/p_{\mathrm{sat}} increases as the liquid is cooled. \gamma and V_m may be treated as approximately fixed, which means that the critical radius must also decrease.
The further a vapour is supercooled, the smaller the critical radius becomes. Ultimately it can become as small as a few molecules, and the liquid undergoes homogeneous nucleation and growth.
The change in vapor pressure can be attributed to changes in the Laplace pressure. When the Laplace pressure rises in a droplet, the droplet tends to evaporate more easily.
When applying the Kelvin equation, two cases must be distinguished: A drop of liquid in its own vapor will result in a convex liquid surface, and a bubble of vapor in a liquid will result in a concave liquid surface.
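As a rough numerical illustration of the Ostwald–Freundlich form above, the Python sketch below evaluates p/p_sat for water droplets of several radii at 298 K. The surface tension and molar volume are typical textbook values for water, assumed here purely for illustration.

```python
import math

R = 8.314          # J/(mol*K), universal gas constant
T = 298.0          # K, assumed temperature
gamma = 0.072      # N/m, approximate surface tension of water near 25 C (assumed)
V_m = 1.8e-5       # m^3/mol, approximate molar volume of liquid water (assumed)

for r_nm in (1, 10, 100, 1000):
    r = r_nm * 1e-9                                   # radius in metres
    ratio = math.exp(2 * gamma * V_m / (r * R * T))   # p / p_sat from the Kelvin equation
    print(f"r = {r_nm:>4} nm  ->  p/p_sat = {ratio:.3f}")
```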
History
The form of the Kelvin equation here is not the form in which it appeared in Lord Kelvin's article of 1871. The derivation of the form that appears in this article from Kelvin's original equation was presented by Robert von Helmholtz (son of German physicist Hermann von Helmholtz) in his dissertation of 1885. In 2020, researchers found that the equation was accurate down to the 1 nm scale.
Derivation using the Gibbs free energy
The formal definition of the Gibbs free energy for a parcel of volume V, pressure P and temperature T is given by:
G = U + PV - TS,
where U is the internal energy and S is the entropy. The differential form of the Gibbs free energy can be given as
dG = -S\,dT + V\,dP + \sum_i \mu_i\, dN_i,
where \mu is the chemical potential and N is the number of moles. Suppose we have a substance x which contains no impurities. Let's consider the formation of a single drop of x with radius r containing n molecules from its pure vapor. The change in the Gibbs free energy due to this process is
\Delta G = G_d - G_v,
where G_d and G_v are the Gibbs energies of the drop and vapor respectively. Suppose we have N_i molecules in the vapor phase initially. After the formation of the drop, this number decreases to N_f, where
N_f = N_i - n.
Let g_v and g_l represent the Gibbs free energy of a molecule in the vapor and liquid phase respectively. The change in the Gibbs free energy is then:
\Delta G = N_f g_v + n g_l + 4\pi r^2 \sigma - N_i g_v,
where 4\pi r^2 \sigma is the Gibbs free energy associated with an interface with radius of curvature r and surface tension \sigma. The equation can be rearranged to give
\Delta G = n (g_l - g_v) + 4\pi r^2 \sigma.
Let v_l and v_v be the volume occupied by one molecule in the liquid phase and vapor phase respectively. If the drop is considered to be spherical, then
n v_l = \frac{4}{3} \pi r^3.
The number of molecules in the drop is then given by
n = \frac{4 \pi r^3}{3 v_l}.
The change in Gibbs energy is then
\Delta G = \frac{4 \pi r^3}{3 v_l} (g_l - g_v) + 4\pi r^2 \sigma.
The differential form of the Gibbs free energy of one molecule at constant temperature and constant number of molecules can be given by:
dg = v\, dP.
If we assume that v_v \gg v_l then
d(g_v - g_l) \approx v_v\, dP.
The vapor phase is also assumed to behave like an ideal gas, so
v_v = \frac{k_B T}{P},
where k_B is the Boltzmann constant. Thus, the change in the Gibbs free energy for one molecule is
g_v - g_l = \int_{p_{\mathrm{sat}}}^{p} \frac{k_B T}{P}\, dP,
where p_{\mathrm{sat}} is the saturated vapor pressure of x over a flat surface and p is the actual vapor pressure over the liquid. Solving the integral, we have
g_v - g_l = k_B T \ln \frac{p}{p_{\mathrm{sat}}}.
The change in the Gibbs free energy following the formation of the drop is then
\Delta G = -\frac{4 \pi r^3}{3 v_l} k_B T \ln \frac{p}{p_{\mathrm{sat}}} + 4 \pi r^2 \sigma.
The derivative of this equation with respect to r is
\frac{d(\Delta G)}{dr} = -\frac{4 \pi r^2}{v_l} k_B T \ln \frac{p}{p_{\mathrm{sat}}} + 8 \pi r \sigma.
The maximum value occurs when the derivative equals zero. The radius corresponding to this value is:
r = \frac{2 \sigma v_l}{k_B T \ln \frac{p}{p_{\mathrm{sat}}}}.
Rearranging this equation gives the Ostwald–Freundlich form of the Kelvin equation:
\ln \frac{p}{p_{\mathrm{sat}}} = \frac{2 \sigma v_l}{k_B T\, r}.
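Solving the same relation for the radius gives the critical radius at a given supersaturation. The Python sketch below, again with assumed illustrative values for water, shows how quickly the critical radius shrinks as p/p_sat grows.

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 298.0            # K, assumed temperature
sigma = 0.072        # N/m, assumed surface tension of water
v_l = 3.0e-29        # m^3, approximate volume of one water molecule (assumed)

for s in (1.1, 1.5, 2.0, 4.0):                       # supersaturation p / p_sat
    r_crit = 2 * sigma * v_l / (k_B * T * math.log(s))
    print(f"p/p_sat = {s:<4} -> critical radius = {r_crit * 1e9:.2f} nm")
```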
Apparent paradox
An equation similar to that of Kelvin can be derived for the solubility of small particles or droplets in a liquid, by means of the connection between vapour pressure and solubility; thus the Kelvin equation also applies to solids, to slightly soluble liquids, and their solutions if the partial pressure p/p_{\mathrm{sat}} is replaced by the ratio c/c_0, where c is the solubility of the solid (or a second liquid) at the given radius r, and c_0 is the solubility at a plane surface. Hence small particles (like small droplets) are more soluble than larger ones. The equation would then be given by:
\ln \frac{c}{c_0} = \frac{2 \gamma V_m}{r R T} .
These results led to the problem of how new phases can ever arise from old ones. For example, if a container filled with water vapour at slightly below the saturation pressure is suddenly cooled, perhaps by adiabatic expansion, as in a cloud chamber, the vapour may become supersaturated with respect to liquid water. It is then in a metastable state, and we may expect condensation to take place. A reasonable molecular model of condensation would seem to be that two or three molecules of water vapour come together to form a tiny droplet, and that this nucleus of condensation then grows by accretion, as additional vapour molecules happen to hit it. The Kelvin equation, however, indicates that a tiny droplet like this nucleus, being only a few ångströms in diameter, would have a vapour pressure many times that of the bulk liquid. As far as tiny nuclei are concerned, the vapour would not be supersaturated at all. Such nuclei should immediately re-evaporate, and the emergence of a new phase at the equilibrium pressure, or even moderately above it should be impossible. Hence, the over-saturation must be several times higher than the normal saturation value for spontaneous nucleation to occur.
There are two ways of resolving this paradox. In the first place, we know the statistical basis of the second law of thermodynamics. In any system at equilibrium, there are always fluctuations around the equilibrium condition, and if the system contains few molecules, these fluctuations may be relatively large. There is always a chance that an appropriate fluctuation may lead to the formation of a nucleus of a new phase, even though the tiny nucleus could be called thermodynamically unstable. The chance of a fluctuation is e^{−ΔS/k}, where ΔS is the deviation of the entropy from the equilibrium value.
It is unlikely, however, that new phases often arise by this fluctuation mechanism and the resultant spontaneous nucleation. Calculations show that the chance, e^{−ΔS/k}, is usually too small. It is more likely that tiny dust particles act as nuclei in supersaturated vapours or solutions. In the cloud chamber, it is the clusters of ions caused by a passing high-energy particle that acts as nucleation centers. Actually, vapours seem to be much less finicky than solutions about the sort of nuclei required. This is because a liquid will condense on almost any surface, but crystallization requires the presence of crystal faces of the proper kind.
For a sessile drop residing on a solid surface, the Kelvin equation is modified near the contact line, due to intermolecular interactions between the liquid drop and the solid surface. This extended Kelvin equation is given by
\frac{R T}{V_m} \ln \frac{c}{c_0} = P_L + \Pi ,
where \Pi is the disjoining pressure that accounts for the intermolecular interactions between the sessile drop and the solid and P_L is the Laplace pressure, accounting for the curvature-induced pressure inside the liquid drop. When the interactions are attractive in nature, the disjoining pressure, \Pi, is negative. Near the contact line, the disjoining pressure dominates over the Laplace pressure, implying that the solubility, c, is less than c_0. This implies that a new phase can spontaneously grow on a solid surface, even under saturation conditions.
See also
Condensation
Gibbs–Thomson equation
Ostwald–Freundlich equation
References
Further reading
W. J. Moore, Physical Chemistry, 4th ed., Prentice Hall, Englewood Cliffs, N. J., (1962) p. 734–736.
S. J. Gregg and K. S. W. Sing, Adsorption, Surface Area and Porosity, 2nd edition, Academic Press, New York, (1982) p. 121.
Arthur W. Adamson and Alice P. Gast, Physical Chemistry of Surfaces, 6th edition, Wiley-Blackwell (1997) p. 54.
Butt, Hans-Jürgen, Karlheinz Graf, and Michael Kappl. "The Kelvin Equation". Physics and Chemistry of Interfaces. Weinheim: Wiley-VCH, 2006. 16–19. Print.
Anton A. Valeev,"Simple Kelvin Equation Applicable in the Critical Point Vicinity",European Journal of Natural History, (2014), Issue 5, p. 13-14.
Surface science
Physical chemistry
Equation
Thought experiments in physics | Kelvin equation | Physics,Chemistry,Materials_science | 1,965 |
11,559,449 | https://en.wikipedia.org/wiki/Colletotrichum%20dematium | Colletotrichum dematium is a plant pathogen causing anthracnose.
References
dematium
Fungal plant pathogens and diseases
Fungi described in 1884
Fungus species | Colletotrichum dematium | Biology | 37 |
41,052,657 | https://en.wikipedia.org/wiki/Precapillary%20resistance | Precapillary resistance is the modulation of blood flow by capillaries through vasomotion, either opening (dilating) and letting blood pass through, or by constricting their lumens, reducing bloodflow through the capillary bed (occluding the passage of blood). It is not entirely clear how precapillary resistance is created in many parts of the body. Precapillary sphincters are smooth muscle structures that mediate the precapillary resistance in the mesenteric microcirculation.
See also
Capillary
Metarteriole
Precapillary sphincter
References
Angiology
Circulatory system | Precapillary resistance | Biology | 141 |
45,111,150 | https://en.wikipedia.org/wiki/Allen%20Street%20Bridge%20disaster | The Allen Street Bridge was a bridge over the Cowlitz River between Kelso, Washington and Longview, Washington that collapsed on January 3, 1923, killing as many as 35 people. It resulted in the deadliest bridge collapse in Washington history.
Construction
The bridge was a bascule bridge made entirely of wood with a central span, built in 1907 to replace an earlier wooden bridge. It was renovated in 1915, but many residents refused to use the bridge due to its poor condition.
Collapse
The collapse occurred the day after a log jam of over 3 million board-feet of runaway log boom piled up against the bridge was cleared. This was concluded by structural engineers to have weakened the bridge. According to another source, the original old, rotten bridge deck had been overlaid by another layer of timbers which, combined with the soaking of the entire deck thickness, overloaded the span.
The collapse occurred during evening rush hour with workers coming home from the Longview mills. A stalled car caused traffic to bunch on the bridge; according to witnesses, the bridge was carrying about 20 vehicles and 100 to 150 pedestrians when a support cable failed for unknown reasons. The two supporting towers fell and the 300-foot center span of the bridge collapsed.
Initial contemporary newspaper reports stated that up to 80 people were killed in the collapse, with some witnesses saying 150. By January 9, reports were that 19 people had been killed. The figure compiled by authorities stood at 17, but probably did not account for many transient workers. Many of the missing bodies were probably carried down the Cowlitz to the Columbia River and then out to sea. An estimate today is that 35 lives were lost. Using even the lowest estimate of 17, the disaster stands as Washington's greatest loss of life caused by bridge failure.
Aftermath
The bridge loss is the first in a list of seventy accidental losses compiled by the Washington State Department of Transportation between 1923 and 1998. This disaster brought about bridge inspection programs conducted by the state agency and counties.
A new four-lane vertical-lift drawbridge, of steel and cement construction, was under construction when the old bridge collapsed. It was to connect Kelso with the new planned city of Longview on the west side of the Cowlitz, at a cost of $228,000. It was built by the Washington Department of Highways and opened to traffic on March 19, 1923. The vertical-lift bridge remained in use until it was closed in 2000 and replaced by a new span.
References
Notes
Sources
Bridge disasters in the United States
Bridge disasters caused by collision
Cowlitz County, Washington
Transportation disasters in Washington (state)
1923 in Washington (state)
1923 disasters in the United States
Transport disasters in 1923
Road bridges in Washington (state) | Allen Street Bridge disaster | Technology | 549 |
20,933,351 | https://en.wikipedia.org/wiki/Hybrid%20inviability | Hybrid inviability is a post-zygotic barrier, which reduces a hybrid's capacity to mature into a healthy, fit adult. The relatively low health of these hybrids relative to pure-breed individuals prevents gene flow between species. Thus, hybrid inviability acts as an isolating mechanism, limiting hybridization and allowing for the differentiation of species.
The barrier of hybrid inviability occurs after mating species overcome pre-zygotic barriers (behavioral, mechanical, etc.) to produce a zygote. The barrier emerges from the cumulative effect of parental genes; these conflicting genes interfere with the embryo's development and prevent its maturation. Most often, the hybrid embryo dies before birth. However, sometimes, the offspring develops fully with mixed traits, forming a frail, often infertile adult. This hybrid displays reduced fitness, marked by decreased rates of survival and reproduction relative to the parent species. The offspring fails to compete with purebred individuals, limiting gene flow between species.
Evolution of Hybrid Inviability in Tetrapods
In the 1970s, Allan C. Wilson and his colleagues first investigated the evolution of hybrid inviability in tetrapods, specifically mammals, birds, and frogs.
Recognizing that hybrid viability decreases with time, the researchers used molecular clocks to quantify divergence time. They identified how long ago the common ancestor of hybridizing species diverged into two lines, and found that bird and frog species can produce viable hybrids up to twenty million years after speciation. In addition, the researchers showed that mammal species can only produce viable hybrids up to two or three million years after speciation.
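The divergence-time estimates mentioned above rest on simple molecular-clock arithmetic: time since divergence is the genetic distance between two lineages divided by twice the per-lineage substitution rate. The Python sketch below illustrates only the arithmetic; the numbers are hypothetical and are not data from Wilson and colleagues.

```python
def divergence_time(genetic_distance, substitution_rate):
    """Time since two lineages split, assuming a constant molecular clock.

    genetic_distance: substitutions per site separating the two sequences
    substitution_rate: substitutions per site per million years, per lineage
    """
    # Both lineages accumulate changes independently, hence the factor of two.
    return genetic_distance / (2 * substitution_rate)

# Hypothetical example: sequences differ at 2% of sites, clock of 0.5% per site per Myr.
print(divergence_time(0.02, 0.005), "million years")   # -> 2.0 million years
```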
Wilson et al. (1974) proposes two hypotheses to explain the relatively faster evolution of hybrid inviability in mammals: the Regulatory and the Immunological Hypotheses. Subsequent research finds support for these hypotheses.
The Regulatory Hypothesis accounts for two characteristics of mammals, and explains the general formation of hybrid inviability in mammals, birds, and frogs.
First, mammals display relatively lower protein diversity than frogs. As Wilson et al. (1974) suggests, “mammals that can hybridize with each other differ only slightly at the protein level, whereas frogs that differ substantially in protein sequence hybridize readily.” This analysis suggests that gene divergence is not the only determinate of hybridization in mammals, birds, or frogs.
Second, the evolution of anatomical diversity occurred far faster in mammals than in either birds or frogs. As Fitzpatrick (2004) indicates, “the morphological disparities among bats, mole-rats, and whales are more dramatic than any disparities in birds and frogs.” This anatomical diversity is evidence for the diversification of regulatory systems. This mammalian characteristic suggests that, although mammals are genetically similar, dramatic changes in regulatory genes caused distinct developmental differences.
The Regulatory Hypotheses specifically attributes hybrid inviability in mammals, birds, and frogs to differences in gene regulation. It proposes that hybrid inviability evolved faster in mammalian taxa because mammals have accumulated significantly more changes in regulatory systems than birds or frogs, and it suggests that organisms with distinctly different systems of gene regulation may not produce viable hybrids.
Wilson et al. (1974) recognizes that the development of embryos in the mammalian placenta requires regulatory compatibility. Both the regulatory genes of the sperm and egg contribute to the expression of other protein-coding genes in the zygote; if certain regulatory genes are not expressed or are expressed at the wrong time, the inter-specific zygote will abort or develop unhealthy traits. Moreover, because the development of the zygote depends on maternal characteristics, such as cytoplasmic determinants, the regulatory traits of the mother may not support the hybrid's developmental needs.
The Immunological Hypothesis proposes that the divergence of certain protein structures associated with mother and child causes hybrid inviability. The hypothesis applies only to mammals, where fertilization and development is internal. In birds and in frogs, fertilization is primarily external, and the mother’s immune system does not interfere with fetal development.
This hypothesis stems from the immunological characteristics of the placenta, where the growing fetus is in constant contact with the fluids and tissues of the mother. Variation within species and variation between species may contribute to fetal-maternal incompatibility, and according to the hypothesis, if the proteins of the fetus vary significantly from the proteins of the placenta, the mother may produce antibodies that will attack and abort the fetus. Therefore, if the fetal proteins of the father species are incompatible with the mother's placental proteins, the mother's immune system may abort the embryo.
Evidence for the Immunological Hypothesis varies considerably. Wilson et al. (1974) recognizes studies that provide no support for the Immunological Hypothesis. In these experiments, the use of immunological suppressants provided no additional viability to inter-specific hybrids. In contrast, Elliot and Crespi (2006) documents the effects of placental immunology on hybrid inviability, showing that mammals with hemochorial placentas more readily hybridize than mammals with epitheliochorial or endotheliochorial placentas. These different placenta types possess divergent immunological systems, and consequently, they cause varying degrees of hybrid inviability.
Notes
Developmental biology | Hybrid inviability | Biology | 1,107 |
4,438,288 | https://en.wikipedia.org/wiki/Medical%20logic%20module | A medical logic module (MLM) is an independent unit in a healthcare knowledge base that represents the knowledge published on a requirement for treating a patient according to a single medical decision.
Possible uses include an event monitor program in an intensive care ward, or a hospital information system that reacts on the occurrence of defined conditions. See the Arden syntax reference for examples. Early introductions are given in monographs.
Implementation
The Arden syntax has been defined as a grammar which could make MLMs swappable between various platforms. The XML representation of Arden (ArdenML) can be transformed by Extensible Stylesheet Language Transformations (XSLT) to other forms.
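As an illustration of the ArdenML transformation mentioned above, the following Python sketch applies an XSLT stylesheet to an ArdenML document using the lxml library. The file names are hypothetical and no particular stylesheet is implied.

```python
from lxml import etree

# Hypothetical files: an ArdenML-encoded MLM and a stylesheet targeting some other format.
mlm_doc = etree.parse("my_mlm.ardenml.xml")
stylesheet = etree.parse("ardenml_to_html.xslt")

transform = etree.XSLT(stylesheet)   # compile the stylesheet
result = transform(mlm_doc)          # apply it to the MLM document

with open("my_mlm.html", "wb") as out:
    out.write(etree.tostring(result, pretty_print=True))
```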
No reference is given for a general implementation of MLMs as a transfer method between different information systems.
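As an illustration only, the following Python sketch (not Arden syntax; the event name, threshold and alert text are invented) shows the kind of structure an MLM encodes: an evoking event, decision logic, and a resulting action bundled into one independent unit that an event monitor can trigger.

```python
# Hypothetical sketch of an MLM-style rule: a single medical decision packaged as an
# independent unit with an evoking event, decision logic and a resulting action.
# The event name, threshold and alert text are invented for illustration; real MLMs
# are written in Arden syntax, not Python.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabResult:
    test_name: str
    value: float  # result value, assumed here to be in mmol/L

def evoke(event: LabResult) -> bool:
    # Evoke slot: the rule is triggered whenever a new potassium result is stored.
    return event.test_name == "serum_potassium"

def logic(event: LabResult) -> bool:
    # Logic slot: conclude true only if the result is critically high (illustrative threshold).
    return event.value > 6.0

def action(event: LabResult) -> str:
    # Action slot: produce an alert for the event monitor to deliver to the clinician.
    return f"ALERT: serum potassium {event.value} mmol/L exceeds the critical limit"

def event_monitor(event: LabResult) -> Optional[str]:
    # A hospital information system would call this when a defined condition occurs.
    if evoke(event) and logic(event):
        return action(event)
    return None

print(event_monitor(LabResult("serum_potassium", 6.4)))
```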
See also
Health Level 7
References
Health informatics | Medical logic module | Biology | 154 |
17,557,798 | https://en.wikipedia.org/wiki/Heat%20illness | Heat illness is a spectrum of disorders due to increased body temperature. It can be caused by either environmental conditions or by exertion. It includes minor conditions such as heat cramps, heat syncope, and heat exhaustion as well as the more severe condition known as heat stroke. It can affect any or all anatomical systems. Heat illnesses include: heat stroke, heat exhaustion, heat syncope, heat edema, heat cramps, heat rash, heat tetany.
Prevention includes avoiding medications that can increase the risk of heat illness, gradual adjustment to heat, and sufficient fluids and electrolytes.
Classification
A number of heat illnesses exist including:
Heat stroke - Defined by a body temperature of greater than 40 °C (104 °F) due to environmental heat exposure with lack of thermoregulation. Symptoms include dry skin, rapid, strong pulse and dizziness.
Heat exhaustion - Can be a precursor of heatstroke; the symptoms include heavy sweating, rapid breathing and a fast, weak pulse.
Heat syncope - Fainting or dizziness as a result of overheating.
Heat edema - Swelling of extremities due to water retention following dilation of blood vessels in response to heat.
Heat cramps - Muscle pains that happen during heavy exercise in hot weather.
Heat rash - Skin irritation from excessive sweating.
Heat tetany - Usually results from short periods of stress in intense heat. Symptoms may include hyperventilation, respiratory problems, numbness or tingling, or muscle spasms.
Overview of diseases
Hyperthermia, also known as heat stroke, becomes commonplace during periods of sustained high temperature and humidity. Older adults, very young children, and those who are sick or overweight are at a higher risk for heat-related illness. The chronically ill and elderly are often taking prescription medications (e.g., diuretics, anticholinergics, antipsychotics, and antihypertensives) that interfere with the body's ability to dissipate heat.
Heat edema presents as a transient swelling of the hands, feet, and ankles and is generally secondary to increased aldosterone secretion, which enhances water retention. When combined with peripheral vasodilation and venous stasis, the excess fluid accumulates in the dependent areas of the extremities. The heat edema usually resolves within several days after the patient becomes acclimated to the warmer environment. No treatment is required, although wearing support stockings and elevating the affected legs will help minimize the edema.
Heat rash, also known as prickly heat, is a maculopapular rash accompanied by acute inflammation and blocked sweat ducts. The sweat ducts may become dilated and may eventually rupture, producing small pruritic vesicles on an erythematous base. Heat rash affects areas of the body covered by tight clothing. If it persists, it can lead to the development of chronic dermatitis or a secondary bacterial infection. Prevention is the best therapy; it is also advised to wear loose-fitting clothing in the heat. Once heat rash has developed, the initial treatment involves the application of chlorhexidine lotion to remove any desquamated skin. The associated itching may be treated with topical or systemic antihistamines. If infection occurs, a regimen of antibiotics is required.
Heat cramps are painful, often severe, involuntary spasms of the large muscle groups used in strenuous exercise. Heat cramps tend to occur after intense exertion. They usually develop in people performing heavy exercise while sweating profusely and replenishing fluid loss with non-electrolyte-containing water. This is believed to lead to hyponatremia that induces cramping in stressed muscles. Rehydration with salt-containing fluids provides rapid relief. Patients with mild cramps can be given oral 0.2% salt solutions, while those with severe cramps require IV isotonic fluids. The many sport drinks on the market are a good source of electrolytes and are readily accessible.
Heat syncope is related to heat exposure that produces orthostatic hypotension. This hypotension can precipitate a near-syncopal episode. Heat syncope is believed to result from intense sweating, which leads to dehydration, followed by peripheral vasodilation and reduced venous blood return in the face of decreased vasomotor control. Management of heat syncope consists of cooling and rehydration of the patient using oral rehydration therapy (sport drinks) or isotonic IV fluids. People who experience heat syncope should avoid standing in the heat for long periods of time. They should move to a cooler environment and lie down if they recognize the initial symptoms. Wearing support stockings and engaging in deep knee-bending movements can help promote venous blood return.
Heat exhaustion is considered by experts to be the forerunner of heat stroke (hyperthermia). It may even resemble heat stroke, with the difference being that the neurologic function remains intact. Heat exhaustion is marked by excessive dehydration and electrolyte depletion. Symptoms may include diarrhea, headache, nausea and vomiting, dizziness, tachycardia, malaise, and myalgia. Definitive therapy includes removing patients from the heat and replenishing their fluids. Most patients will require fluid replacement with IV isotonic fluids at first. The salt content is adjusted as necessary once the electrolyte levels are known. After discharge from the hospital, patients are instructed to rest, drink plenty of fluids for 2–3 hours, and avoid the heat for several days. If this advice is not followed it may then lead to heat stroke.
Symptoms
Increased temperatures have been reported to cause heat stroke, heat exhaustion, heat syncope, and heat cramps. Some studies have also looked at how severe heat stroke can lead to permanent damage to organ systems. This damage can increase the risk of early mortality because the damage can cause severe impairment in organ function. Other complications of heat stroke include respiratory distress syndrome in adults and disseminated intravascular coagulation. Some researchers have noted that any compromise to the human body's ability to thermoregulate would in theory increase risk of mortality. This includes illnesses that may affect a person's mobility, awareness, or behavior.
Prevention
Prevention includes avoiding medications that can increase the risk of heat illness (e.g. antihypertensives, diuretics, and anticholinergics), gradual adjustment to heat, and sufficient fluids and electrolytes.
Some common medications that have an effect on thermoregulation can also increase the risk of mortality. Specific examples include anticholinergics, diuretics, phenothiazines and barbiturates.
Epidemiology
Heat stroke is relatively common in sports. About 2 percent of sports-related deaths that occurred in the United States between 1980 and 2006 were caused by exertional heat stroke. Football in the United States has the highest rates. The month of August, which is associated with pre-season football camps across the country, accounts for 66.3% of exertional heat-related illness time-loss events. Heat illness is also not limited geographically and is widely distributed throughout the United States. An average of 5,946 persons were treated annually in US hospital emergency departments (2 visits per 100,000 population), with a hospitalization rate of 7.1%. Males account for 72.5% of cases, and persons 15–19 years of age for 35.6%. Among all high school athletes, heat illness occurs at a rate of 1.2 per 100,000. When comparing risk by sport, football was 11.4 times more likely than all other sports combined to produce an exertional heat illness.
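As a rough illustrative check, the quoted crude rate follows from the annual case count if a US population of roughly 300 million is assumed for the period covered by the estimate:

```python
# Illustrative check of the quoted crude rate. The US population figure is an
# assumption (roughly 300 million during the period the estimate covers).
annual_cases = 5946
us_population = 300_000_000
rate_per_100k = annual_cases / us_population * 100_000
print(round(rate_per_100k, 1))  # prints 2.0, i.e. about 2 visits per 100,000 population
```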
Between 1999 and 2003, the US had a total of 3,442 deaths from heat illness. Those who work outdoors are at particular risk for heat illness, though those who work in poorly-cooled indoor spaces are also at risk. Between 1992 and 2006, 423 workers died from heat illness in the US. In 2015, exposure to environmental heat led to 37 work-related deaths and 2,830 nonfatal occupational injuries and illnesses involving days away from work. Kansas had the highest rate of on-the-job heat-related injury, at 1.3 per 10,000 workers, while Texas had the most cases overall; because of Texas's much larger population, its rate was only 0.4 per 10,000 (4 per 100,000). Of the 37 reported deaths from heat illness, 33 occurred between the summer months of June and September. The most dangerous occupation documented was transportation and material moving, which accounted for 720 of the 2,830 reported nonfatal occupational injuries, or 25.4 percent. Production ranked second, followed by protective services; installation, maintenance, and repair; and construction.
Effects of climate change
A 2016 U.S. government report said that climate change could result in "tens of thousands of additional premature deaths per year across the United States by the end of this century." Indeed, between 2014 and 2017, heat exposure deaths tripled in Arizona (76 deaths in 2014; 235 deaths in 2017) and increased fivefold in Nevada (29 deaths in 2014; 139 deaths in 2017).
History
Heat illness used to be blamed on a tropical fever named calenture.
See also
Occupational heat stress
References
External links
"Heat Exhaustion" on Medicine.net
Emergency medicine
Effects of external causes
Thermoregulation | Heat illness | Biology | 1,977 |
21,291,593 | https://en.wikipedia.org/wiki/Krylov%E2%80%93Bogoliubov%20averaging%20method | The Krylov–Bogolyubov averaging method (Krylov–Bogolyubov method of averaging) is a mathematical method for approximate analysis of oscillating processes in non-linear mechanics. The method is based on the averaging principle when the exact differential equation of the motion is replaced by its averaged version. The method is named after Nikolay Krylov and Nikolay Bogoliubov.
Various averaging schemes for studying problems of celestial mechanics had been used since the works of Carl Friedrich Gauss, Pierre Fatou, Boris Delone and George William Hill. The importance of the contribution of Krylov and Bogoliubov is that they developed a general averaging approach and proved that the solution of the averaged system approximates the exact dynamics.
Background
Krylov–Bogoliubov averaging can be used to approximate oscillatory problems when a classical perturbation expansion fails; that is, singular perturbation problems of oscillatory type, for example Einstein's correction to the perihelion precession of Mercury.
Derivation
The method deals with differential equations in the form

\[ \frac{d^2 u}{dt^2} + \omega_0^2\, u = a + \varepsilon f\!\left(u, \frac{du}{dt}\right) \]

for a smooth function f along with appropriate initial conditions. The parameter ε is assumed to satisfy

\[ 0 < \varepsilon \ll 1. \]

If ε = 0 then the equation becomes that of the simple harmonic oscillator with constant forcing, and the general solution is

\[ u(t) = \frac{a}{\omega_0^2} + A \sin(\omega_0 t + B), \]

where A and B are chosen to match the initial conditions. The solution to the perturbed equation (when ε ≠ 0) is assumed to take the same form, but now A and B are allowed to vary with t (and ε). If it is also assumed that

\[ \frac{du}{dt} = \omega_0\, A(t) \cos\bigl(\omega_0 t + B(t)\bigr), \]

then it can be shown that A and B satisfy the differential equation:

\[ \frac{d}{dt}\begin{bmatrix} A \\ B \end{bmatrix} = \frac{\varepsilon}{\omega_0}\, f\!\left(\frac{a}{\omega_0^2} + A\sin\phi,\; \omega_0 A\cos\phi\right) \begin{bmatrix} \cos\phi \\[4pt] -\dfrac{\sin\phi}{A} \end{bmatrix}, \]

where \(\phi = \omega_0 t + B\). Note that this equation is still exact — no approximation has been made as yet. The method of Krylov and Bogolyubov is to note that the functions A and B vary slowly with time (in proportion to ε), so their dependence on the fast phase \(\phi\) can be (approximately) removed by averaging over \(\phi\) on the right hand side of the previous equation:

\[ \frac{d}{dt}\begin{bmatrix} A_0 \\ B_0 \end{bmatrix} = \frac{\varepsilon}{2\pi\omega_0} \int_0^{2\pi} f\!\left(\frac{a}{\omega_0^2} + A_0\sin\phi,\; \omega_0 A_0\cos\phi\right) \begin{bmatrix} \cos\phi \\[4pt] -\dfrac{\sin\phi}{A_0} \end{bmatrix} d\phi, \]

where \(A_0\) and \(B_0\) are held fixed during the integration. After solving this (possibly) simpler set of differential equations, the Krylov–Bogolyubov averaged approximation for the original function is then given by

\[ u_0(t) = \frac{a}{\omega_0^2} + A_0(t)\sin\bigl(\omega_0 t + B_0(t)\bigr). \]

This approximation has been shown to satisfy

\[ \bigl| u(t) - u_0(t) \bigr| \le C_1 \varepsilon, \]

where t satisfies

\[ t \le \frac{C_2}{\varepsilon} \]

for some constants \(C_1\) and \(C_2\), independent of ε.
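For example, for the van der Pol oscillator, a standard test case for averaging methods, one has ω0 = 1, a = 0 and f(u, du/dt) = (1 − u²) du/dt. Evaluating the averaging integrals above gives

\[ \frac{dA_0}{dt} = \frac{\varepsilon A_0}{2}\left(1 - \frac{A_0^2}{4}\right), \qquad \frac{dB_0}{dt} = 0, \]

so the averaged amplitude grows monotonically toward the limit-cycle value A0 = 2 while the phase B0 remains constant, and the approximate solution is u0(t) ≈ A0(t) sin(t + B0).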
References
Dynamical systems | Krylov–Bogoliubov averaging method | Physics,Mathematics | 480 |
34,564,500 | https://en.wikipedia.org/wiki/O-774 | O-774 is a classical cannabinoid derivative which acts as a potent agonist for the cannabinoid receptors, with a Ki of 0.6 nM at CB1, and very potent cannabinoid effects in animal studies.
See also
AM-2232
O-1057
O-1812
References
Benzochromenes
Cannabinoids
Nitriles | O-774 | Chemistry | 78 |
14,247,668 | https://en.wikipedia.org/wiki/PEDOT-TMA | Poly(3,4-ethylenedioxythiophene)-tetramethacrylate or PEDOT-TMA is a p-type conducting polymer based on 3,4-ethylenedioxythiophene or the EDOT monomer. It is a modification of the PEDOT structure. Advantages of this polymer relative to PEDOT (or PEDOT:PSS) are that it is dispersible in organic solvents, and it is non-corrosive. PEDOT-TMA was developed under a contract with the National Science Foundation, and it was first announced publicly on April 12, 2004. The trade name for PEDOT-TMA is Oligotron. PEDOT-TMA was featured in an article entitled "Next Stretch for Plastic Electronics" that appeared in Scientific American in 2004.
The U.S. Patent office issued a patent protecting PEDOT-TMA on April 22, 2008.
PEDOT-TMA differs from the parent polymer PEDOT in that it is capped on both ends of the polymer. This limits the chain-length of the polymer, making it more soluble in organic solvents than PEDOT. The methacrylate groups on the two end-caps allow further chemistry to occur such as cross-linking to other polymers or materials.
Physical properties
The bulk conductivity of PEDOT-TMA is 0.1–0.5 S/cm, the sheet resistance 1–10 MΩ/sq, and the methacrylate equivalent weight 1360–1600 g/mol. The chemical composition of a film of PEDOT-TMA was measured by energy-dispersive X-ray spectroscopy (EDS). The relative C, O, and S weight percentages were 51.28%, 35.37%, and 10.43%. There was also 2.92% Fe present in the film.
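As an illustrative calculation only (not part of the cited measurement), these weight percentages can be converted to approximate atomic ratios using standard atomic masses:

```python
# Illustrative conversion of the reported EDS weight percentages to approximate
# atomic ratios, using standard atomic masses. Hydrogen is not quantified by EDS.
atomic_mass = {"C": 12.011, "O": 15.999, "S": 32.06, "Fe": 55.845}
weight_pct = {"C": 51.28, "O": 35.37, "S": 10.43, "Fe": 2.92}

# Moles of each element per 100 g of film.
moles = {el: wt / atomic_mass[el] for el, wt in weight_pct.items()}

# Normalise to sulfur, the least abundant backbone element.
ratios = {el: n / moles["S"] for el, n in moles.items()}
for el, r in ratios.items():
    print(f"{el}: {r:.1f}")
# Approximate output: C: 13.1, O: 6.8, S: 1.0, Fe: 0.2
```

For comparison, the EDOT repeat unit (C6H6O2S) has a C:O:S atomic ratio of 6:2:1; the higher carbon and oxygen ratios measured for the film are consistent with the additional methacrylate end groups.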
Applications
Several devices and materials have been described in both journals and the patent literature that use PEDOT-TMA as a critical component. In this section, a brief overview of these inventions is given.
Patternable OLEDs: In a study by researchers at General Electric, PEDOT-TMA was used in the hole injection layer in a series of OLED devices. They have also filed a patent application to protect this invention.
Quantum dot modified OLEDs: In an international patent application, PEDOT-TMA surfaces were modified with quantum dots such as CdSe, CdS, and ZnS.
Ion selective membranes: PEDOT-TMA was used as a key ingredient in ion selective membranes and in particular in calcium-selective electrodes. The performance of PEDOT-TMA films in solid contact ion selective electrodes compared to other commercially available conducting polymers has also been reported.
Dye-sensitized solar cell: PEDOT-TMA was used in the construction of effective dye-sensitized solar cells. The PEDOT-TMA was spin-coated to give a 15 nm thick layer, which was used as the counter-electrode in a series of dye-sensitized solar cells. Efficiencies as high as 7.85% were obtained.
Flexible touch screens: PEDOT-TMA was used in the construction of electrodes for flexible touch screens as described in a patent application by the Honeywell Corporation.
Energy storage and conversion devices: Synkera Technologies, Inc. filed a patent application detailing a variety of energy storage and conversion devices that use PEDOT-TMA in their construction.
Glucose sensor: A glucose sensor was prepared by Gymama Slaughter of Virginia State University.
Carbon nanotube composites: Researchers from Los Alamos National Laboratory used PEDOT-TMA to prepare composites with carbon nanotubes. These composites form highly aligned arrays of the nanotubes, and exhibit high conductivity at room temperature (25.0 S/cm).
Metal wire-based photovoltaic device: Researchers from The Institute of Advanced Energy at Kyoto University used PEDOT-TMA to fabricate organic photovoltaic devices.
Embedded capacitors: Researchers from the Polymer Composite Laboratory at VIT University prepared composites of graphene oxide with PEDOT-TMA and PMMA. They extensively studied the properties of these materials as a function of graphene oxide composition. The materials were characterized by UV-Vis spectroscopy, FT-IR and FT-Raman spectroscopy, X-ray diffraction, thermogravimetric analysis, atomic force microscopy and scanning electron microscopy. Finally, the dielectric properties of the materials were evaluated, and the potential application of the composites in constructing embedded capacitors was discussed. This research group has also developed thermistors made from graphene oxide/PEDOT-TMA composites.
Titanium dioxide nanocomposites: A research group led by A.A.M. Farag has prepared and characterized nanocomposites of and with PEDOT-TMA. This group has also prepared and characterized heterojunction diodes using this nanocomposite.
Ultrathin Fiber-Mesh Polymer Thermistors: Ultrathin fibers were prepared that show a 10^3 increase in resistance over a narrow temperature range suitable for on-skin and implantable sensors. These thermistors prevent overheating in devices that use thermal protection circuits.
References
Organic polymers
Organic semiconductors
Conductive polymers
Transparent electrodes
Sensors
Materials science
Carboxylate esters
Sulfur heterocycles
Sulfonium compounds | PEDOT-TMA | Physics,Chemistry,Materials_science,Technology,Engineering | 1,127 |
31,595,214 | https://en.wikipedia.org/wiki/Yale%20attitude%20change%20approach | In social psychology, the Yale attitude change approach (also known as the Yale attitude change model) is the study of the conditions under which people are most likely to change their attitudes in response to persuasive messages. This approach to persuasive communications was first studied by Carl Hovland and his colleagues at Yale University during World War II. The basic model of this approach can be described as "who said what to whom": the source of the communication, the nature of the communication and the nature of the audience. According to this approach, many factors affect each component of a persuasive communication. The credibility and attractiveness of the communicator (source), the quality and sincerity of the message (nature of the communication), and the attention, intelligence and age of the audience (nature of the audience) can influence an audience's attitude change with a persuasive communication. Independent variables include the source, message, medium and audience, with the dependent variable the effect (or impact) of the persuasion.
The Yale attitude change approach has generated research and insight into the nature of persuasion. This approach has helped social psychologists understand the process of persuasion and companies make their marketing and advertising strategies more effective. Like most other theories about persuasion and attitude change, this approach is not perfect. Not a systematic theory about persuasive communications, this approach is a general framework within which research was conducted. The Yale researchers did not specify levels of importance among the factors of a persuasive message; they emphasized analyzing the aspects of attitude change over comparing them.
Persuasive communication depends on who says what to whom
Defining the who: the source of communication
The effects of credibility depend on whether the speaker is perceived as having "high trustworthiness" or "low trustworthiness". Prominent, credible speakers can persuade far more people than speakers who are not credible. Credible speakers also carry a reputation, so what they say matters to the people they are addressing. In addition, attractive speakers have a stronger influence than unattractive ones, depending on the conditions. One study tested attractive and unattractive female speakers delivering strong and weak messages promoting a sunscreen, and found that people were more willing to be persuaded by a strong message from an attractive female than by a weak message from an equally attractive female.
Defining the what: the nature of the communication
The characteristics of the communication affect the degree of attitude change. One such characteristic is the design of the message; people tend to be more persuaded by messages that do not appear to be targeted at them. There is also a primacy effect among speakers: people are more influenced by what they hear first, and the first speaker is rated more strongly than the following speakers even if the later arguments are stronger. If there is a delay after every speech, then it is better to go last, because of the recency effect, whereby people best remember the most recent event.
Defining the whom: the nature of the audience
Attitude may change depending on the characteristics of the audience. Audiences that are distracted during the persuasive communication will often be persuaded less than audiences that are not distracted. From the ages of 18–25, people are very susceptible to attitude change. After those ages, people tend to be more stable and resistant to attitude change. Additionally, an audience member that is less intelligent tends to be more persuadable than those with higher intelligence. People who do not enjoy thinking can rely on experts and trustworthy sources to conserve their cognitive resources. If the expert source is untrustworthy, then the person might have to evaluate the material on their own. In most cases, people are not knowledgeable enough to interpret the information themselves or have very low confidence in the issue, thus they must rely on knowledgeable others (expert and trustworthy sources).
History
The Yale attitude change approach (also referred to as the Yale model of persuasion) is considered to be one of the first models of attitude change. It was a reflection of the Yale Communication Research Program's findings, a program which was set up under a grant from the Rockefeller Foundation.
During World War II, political persuasion and propaganda analysis became important fields of study in light of the success of Nazi propaganda campaigns. The Research Branch of the Army's Information and Education Division was assigned this research. Carl Hovland was appointed the Chief Psychologist and director of Experimental Studies for the U.S. He and others undertook the responsibility of conducting, analyzing, and planning experiments that explored the effectiveness of war propaganda. After the war, Hovland, Lumsdaine, and Sheffield published a report of their research findings. These experiments are considered an antecedent to the Yale groups' research. Interest in persuasion remained strong after the war due to advancements in telecommunications. Hovland and others within the "Yale school" returned to Yale in order to continue researching the topic. They established the Yale Communication Research Program which aimed to understand and examine factors that influenced attitude change. It was considered the first modern attempt of such a task.
The Yale Communication Research Program was a "cooperative research and study group" that encouraged members to pursue research in their line of interest regarding the subject of persuasive communication and their effects on behavior and opinion. The Yale group examined attitude change from a learning theory perspective and information processing approach. The Yale school's approach is considered convergent: it started with identifying a phenomenon (attitude change) and then searched for an explanation by looking at variable factors and their effect on the phenomena. This is in contrast to a divergent approach which starts with a theory that is then applied to a phenomenon. The Yale school also suggested that message processing take place in stages: attention, comprehension and acceptance. In essence a person must "first notice the message, and pay attention to it, then comprehend its meaning, and finally accept it". They also introduced the concept of incentive as a crucial variable in influencing attitude change. It was not enough for a response to be learned but that motivation was provided in order to preference one opinion over another.
Hovland, Janis, and Kelley published the group's first empirical findings in 1953. They paralleled their research to Lasswell's (1948) statement, "who says what to whom with what effect". In the publication they categorized their findings on the analysis of four factors: "1) the communicator who transmits the communication; 2) the stimuli transmitted by the communicator; 3) the audience responding to the communication; 4) the responses made by the audience to the communication". The Yale school had a breakdown of sub-factors that they observed for each topic (the communicator, the communication, the audience). The fourth topic, which they deemed "responses", was composed of two subtopics which explored the "expression of the new opinion" and "retention of the opinion change". The Yale group had a total of five publications reporting the findings of their experiments (including Communication and Persuasion) that further explored each factor under the same model.
Initial/notable studies
Characteristics of the communicator
The Yale group observed the effects of credibility on persuasion. Credibility was composed of; 1) expertness: the degree to which the communicator was knowledgeable in the field, and 2) trustworthiness: in reference to the intentions of the communicator.
Hovland and Weiss in 1951 exposed participants to identical newspaper and magazine articles. Some were attributed to high-credibility sources (like Robert Oppenheimer); others to low-credibility sources (like the Soviet newspaper Pravda). Participants regarded sources with higher credibility more favorably. They attributed this effect to the expertise of the source and the confidence in the sources' sincerity in delivering the message. They also observed that effects from both positive and negative sources tended to dissipate after several weeks.
An exception to the gradual dissipation of the effects of a persuasive message was reported in studies conducted by Hovland, Lumsdaine, and Sheffield. Results showed that opinion change increased gradually over time, despite forgetting the source of the information. They coined this phenomenon the sleeper effect.
Content of the communication
The Yale school focused on factors such as motivating appeals and organization of arguments in regards to the content of the communication. In particular they focused on emotional appeals which were considered a class of stimuli whose contents could arouse emotion, in contrast to logical/rational appeals. In particular the group looked at fear-arousing appeals.
Janis and Feshbach in 1953 explored degrees of intensity of fear appeals and their effects on conformity in the context of the consequences of poor dental hygiene. The study showed that messages were more effective when they contained low-level threat references such as "cavities" or "tooth decay" rather than "strong appeal" references, such as warnings of serious infections that could cause paralysis and kidney damage. The results suggested that appeals of high intensity would be less effective than milder ones.
Hovland, Lumsdaine, and Sheffield explored the effectiveness of one-sided and two-sided messages (containing pros and cons). The 1948 study looked at whether a message given to American soldiers would be more effective if it only advocated one position or if it advocated both sides of the position. It was found that two-sided messages were more effective at changing the opinions of educated men. Additionally, two-sided arguments were also better at generating change of opinion in those soldiers who opposed the argument initially. For less educated men who also supported the government's position, the one-sided argument was more persuasive. Their findings suggested that not only the content of the communication but also the attributes of the receiver have an impact on attitude change.
A 1953 follow-up study conducted by Lumsdaine and Janis explored the resistance of opinion change motivated by argument structure. The findings indicated that two-sided messages produced greater resistance to counter-propaganda regardless of the position initially held (even if the initial belief ran counter to the newly developed belief). Two-sided messages were more effective in maintaining sustained opinion change.
Another 1952 study spearheaded by Hovland and Mandell highlighted that messages are more persuasive when they are implicitly argued. The audience is then able to come to their own conclusion. Hovland and Mandell mentioned that this effect may only be evident with less complex issues that can easily be surmised by the audience.
The audience
The Yale group investigated the audience predisposition, which they defined as the audience's motives, abilities, personalities, and the context of the situation.
Kelley and Volkart confirmed the notion that individuals with greater interest in retaining group membership are less likely to adopt beliefs that contradict group standards. Their findings are consistent with the hypothesis that supports the relationship between internalization of norms and stronger group attachments.
Hovland et al. studied the resistance to attitude change when a person is a member of a group and identified five factors that induce conformity of opinion within a group:
the individual's knowledge of group norms
the extent to which the individual values group membership
the individual's social status or rank within the group
particular situation cues
salience of the group (the extent to which a specific group is dominant in the individual's awareness at the time the 'counter persuasion' is delivered)
Because these five factors play an influential role in inducing conformity, members may resist attitude change when the group is exposed to a message that counters group norms/culture. The more the group membership is valued, the more resistance, to the extent that a boomerang effect may occur.
Reception
The Yale group conducted extensive and influential research in the field of attitude change and persuasion. They emphasized the importance of learning theories behind attitude change and laid a strong foundation of findings that stimulated further research related to persuasion. Examples of work that stemmed from their findings are the inoculation theory and the social judgement theory. The research was considered a landmark in the development of research on attitude change and persuasion.
The model was a major contributor to the development and understanding of attitude change and persuasion, however it is now only one part of many perspectives on persuasion. Research in persuasion is considering the effects of the unconscious, with scholars beginning to explore the possibility of "priming in inducing non-conscious effects". This idea, new to social psychology, is beginning to shed light on the relationship between the individual unconscious and the social environment. The study of persuasion has always been an integral part of social psychology with the focus slowly moving from attitude change and behavior modification to communications, literature, art and the other humanities.
Theoretical approaches
The Yale group's original research "stemmed from a variety of theoretical approaches, including, among others, Hull's learning theory, some motivational hypotheses of Freud and other psychoanalysts, and some of the formulations of Lewin, Sherif, Newcomb, and others". The Yale group developed a theoretical structure linking individual attributes and persuasion based on three major factors: the source of the communication, the nature of the communication and the nature of the audience.
The approach has a similar structure to Aristotle's concept of persuasion in his Rhetoric. According to Aristotle, there are three means of persuasion: the character of the speaker, the emotional state of the listener and logos (the argument itself). Contemporary psychologists use the Yale model's psychological approach and Aristotle's philosophical approach to examine components of persuasion.
Legacy
Influences on McGuire
In 1968, William McGuire further broke down Hovland's message processing stages (attention, comprehension, acceptance) into six stages: presentation, attention, comprehension, yielding, retention, and behavior. McGuire proposed that a message must first be presented, drawn attention to, and then understood and comprehended by the audience. This would cause an attitude change, which must be remembered at a later time to actually influence behavioral changes. McGuire emphasized the importance of reception (the attention and comprehension stages of the Yale group) and yielding (anticipation and critical-evaluation steps) in his study of individual differences in influenceability. According to McGuire, reception was positively related to ability and motivational attributes. One weakness of the approach is the nature of the yielding step, which assumes that the audience's attitude will change by learning a new message, yet learning does not always result in persuasion. McGuire is best known for his inoculation theory, which explores resistance to persuasion and was influenced by the Yale school's research on the resilience of two-sided messages on opinion acceptance.
Influences on Dolores Albarracín's cognition-in-persuasion model
The Albarracín model, which is a stage model developed in 2002, builds off both McGuire's work and the Yale attitude change approach in regards to the sequence of message processing stages. The study found that message processing may occasionally bypass early stages and takes a step towards addressing the role of processing stages on attitude change. The evidence that people can use processing stages in a different order or even skip a stage altogether was the important acknowledgment of this study.
Influences on elaboration likelihood model
Another model that stems from the Yale attitude change approach is the elaboration likelihood model, a contemporary approach to persuasion. Developed by Petty and Cacioppo during the 1980s, the model describes two ways in which persuasive communications can cause attitude change: centrally and peripherally. The central route to persuasion occurs when people have the ability and motivation to listen to a message, think about its arguments and internalize the information. The recipient relies on cognitive responses instead of heuristics when using the central route. The peripheral route to persuasion is used when the recipient has little to no motivation or makes little effort, and people are swayed not by the argument itself but by elements secondary to the message (such as the length of the communication or the attractiveness of the communicator). Under the peripheral route, the recipient relies on the context of the situation rather than the information at hand (for example, whether the speaker is attractive or famous).
Influences on other theories
Martin Bauer views the Yale approach from a slightly-different angle. In 2008, he argued that persuasion cannot focus only on the social influence of intersubjectivity (the sharing of subjective states by two or more individuals) but must include inter-objectivity (the understandings shared by individuals about social reality). Using the concept of the fait accompli (a completed, irreversible "done deal"), Bauer described artifacts such as nuclear power, information technology and genetic engineering as types of social influence.
More recent empirical studies
Research on external factors which influence individual's attitude has a strong focus on marketing strategy applications. Advances in technology have made mass media a pervasive, $400 billion-plus industry. The average American watches 38,000 commercials a year. There is significant financial interest in examining the impact of source credibility, communicator attractiveness, message context, and mood on persuasion and attitude change.
Applications in marketing
Source credibility and experience claims in consumer advertisement
A study by Jain and Posavac examined the role message origin plays in the likelihood that a recipient will believe the message in an advertisement. Advertisements for mountain bikes and cameras were studied; consumers were asked their overall reaction to search claims (claims which can be statistically proven) and experience claims (testimonials). The credibility of the claims was also compared. Participants were shown advertisements for each of the products and asked questions about the advertisements, which contained search or experience claims. The search claim for the mountain bike was its weight, the experience claim was its ease of control. The search claim for the camera was its compactness and the experience claim was its photo quality. The results indicated that consumers were more likely to believe (and be satisfied with) claims if they thought the source was trustworthy or had experience with the product. They were more likely to believe advertisements with concrete evidence behind the claims, such as the weight of the bike or the compactness of the camera. The study demonstrates that the credibility of a source correlates with its ability to persuade.
Persuasion
Attractiveness as an influence on opinion change and persuasion
A study by Eagly and Chaiken examined the effects of attractiveness and message content on persuasion. Eagly and Chaiken surveyed undergraduate students on communicators' attractiveness and whether they were persuaded to adopt the speaker's position (desirable or undesirable) on a topic. Students were asked to predict the speaker's position before hearing the message. The study showed that participants were more likely to be persuaded by an attractive speaker to take an undesirable position on a topic than by an unattractive speaker. However, they were equally likely to be persuaded to take a desirable position on a topic by attractive and unattractive speakers. Participants were more likely to agree with attractive speakers in general and more likely to agree with any speaker discussing a desirable position on a topic. More attractive individuals are more persuasive than individuals perceived as less attractive. Message content affects believability; desirable messages are more believable than undesirable ones.
Acceptance of message by recipient
Hovland states another set of factors that affect whether the communication is accepted by the recipient. One such factor is the prestige of the medium through which the message is communicated; one medium may be more prestigeful than others. Most importantly, "prestige for whom?" must be specified, since certain mediums may be more prestigeful for certain segments of the population. Accordingly, differences in the credibility of the medium affect its prestige; a medium judged by an individual to be the most trustworthy may be the most effective.
A second factor which may affect the comparison of media is the extent of social interaction. In a study by Knower (1935), hearing a speech as a member of an audience was less effective than hearing it individually. Conversely, a study by Cantril and Allport (1935) suggests that radio may be more effective than print because the individual identifies as part of a larger group of people listening to the same program at the same time.
A third factor that affects the likelihood of acceptance of the message by the recipient is the extent to which the medium provides flexibility. Flexibility in this case means the extent to which the medium can cater to special interests and differences in comprehension. Print, for example, is particularly effective at serving specialized interests and tastes to a greater extent than other mediums. Additionally, a two-way communication network may also provide flexibility. For example, in a political campaign where two-way communication between the studio and the listener was employed, flexibility was heightened; "questions raised by the man-in-the-street could thus be answered immediately by the political candidate in the television studio".
Controversy
A major issue with the Yale attitude change approach is the fact that it is strictly functional, focusing on a change in attitude and the information processing accompanying it. Other scholars see persuasion as a function of "communication, social influence, and group processes", taking into account other factors such as social influence and the media.
A theory proposed by Margarita Sanchez-Mazas focuses on people's desire for social recognition and dignity. In this model, persuasion is seen as a way to overcome social injustice and achieve recognition and dignity. Sanchez-Mazas examines the roles of majorities and minorities in creating social change, and believes that "persuasion is a simultaneous, reciprocal process between groups, and specifically between majorities and minorities".
A theory of persuasion developed by Clelia Nascimento-Schulze emphasizes communication in a complex society. According to Nascimento-Schulze, technology and the media are used to promote science in developing countries. It was determined that the Internet was most successful at transferring scientific knowledge to the public because it contains an optimal amount of visual information and combines art and science in a creative, informative way. Key to this theory is an "interactive society", with technology such as the Internet allowing communities to share common values and beliefs.
Another form of public persuasion, studied by Helene Joffe, explores how the media produces visual stimuli which elicit feelings of fear, empathy or disgust. This theory highlights the substantial role of technology in evoking emotion in individuals, focusing on advertising campaigns for health, safety and charities. According to Joffe, visual stimuli lure an audience into a "state of emotion".
The elaboration likelihood model is based on the Yale attitude change model in how it treats the different routes by which attitude change can occur. However, there are also claims that the two are independent entities with no connection.
References
External links
"Persuasion" by William L. Benoit, Ph.D.
Attitude change
Human communication
Motivation
Human behavior
Motivational theories | Yale attitude change approach | Biology | 4,678 |
47,161,380 | https://en.wikipedia.org/wiki/Worm%20shoe | A worm shoe is a strip of wood such as oak or pine which is fixed to the keel of a wooden boat to protect it from shipworms. The wood is sacrificed to the worms while the main structure is kept separate and safe using a layer of tar paper or creosoted felt, which the worms will not penetrate.
References
External links
Putting the Worm Shoe on the Keel Bottom — demonstration by a boat-builder
Shipbuilding | Worm shoe | Engineering | 88 |
5,617,513 | https://en.wikipedia.org/wiki/Regeneron%20Pharmaceuticals | Regeneron Pharmaceuticals, Inc. is an American biotechnology company headquartered in Westchester County, New York. The company was founded in 1988. Originally focused on neurotrophic factors and their regenerative capabilities, giving rise to its name, the company branched out into the study of both cytokine and tyrosine kinase receptors, which gave rise to their first product, which is a VEGF-trap.
Company history
The company was founded by CEO Leonard Schleifer and scientist George Yancopoulos in 1988.
Regeneron has developed aflibercept, a VEGF inhibitor, and rilonacept, an interleukin-1 blocker. VEGF is a protein that normally stimulates the growth of blood vessels, and interleukin-1 is a protein that is normally involved in inflammation.
On March 26, 2012, Bloomberg reported that Sanofi and Regeneron were developing a new drug that could reduce cholesterol by up to 72% more than its competitors. The new drug would target the PCSK9 gene.
In July 2015, the company announced a new global collaboration with Sanofi to discover, develop, and commercialize new immuno-oncology drugs, which could generate more than $2 billion for Regeneron, with $640 million upfront, $750 million for proof-of-concept data, and $650 million from the development of REGN2810. REGN2810 was later named cemiplimab. In 2019, Regeneron Pharmaceuticals was announced the 7th best publicly listed company of the 2010s, with a total return of 1,457%. Regeneron Pharmaceuticals was home to the two highest-paid pharmaceutical executives as of 2020.
In October 2017, Regeneron made a deal with the Biomedical Advanced Research and Development Authority (BARDA) that the U.S. government would fund 80% of the costs for Regeneron to develop and manufacture antibody-based medications, which subsequently, in 2020, included their COVID-19 treatments, and Regeneron would retain the right to set prices and control production. This deal was criticized in The New York Times. Such deals are not unusual for routine drug development in the American pharmaceutical market.
In 2019, the company was added to the Dow Jones Sustainability World Index.
In May 2020, Regeneron announced it would repurchase approx. 19.2 million of its shares for around $5 billion, held directly by Sanofi. Prior to the transaction, Sanofi held 23.2 million Regeneron shares.
In April 2022, the business announced it would acquire Checkmate Pharmaceuticals for around $250 million, enhancing its number of immuno-oncology drugs.
In August 2023, Regeneron announced it would acquire Decibel Therapeutics.
In December 2023, Regeneron acquired an Avon Products property in Suffern, New York, to be used for cold storage and research and development laboratories.
In April 2024, the company acquired 2seventy Bio.
Experimental treatment for COVID-19
On February 4, 2020, the U.S. Department of Health and Human Services, which already worked with Regeneron, announced that Regeneron would pursue monoclonal antibodies to fight COVID-19.
In July 2020, under Operation Warp Speed, Regeneron was awarded a $450 million government contract to manufacture and supply its experimental treatment REGN-COV2, an artificial "antibody cocktail" which was then undergoing clinical trials for its potential both to treat people with COVID-19 and to prevent SARS-CoV-2 coronavirus infection. The $450 million came from the Biomedical Advanced Research and Development Authority (BARDA), the DoD Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense, and Army Contracting Command. Regeneron expected to produce 70,000–300,000 treatment doses or 420,000–1,300,000 prevention doses. "By funding this manufacturing effort, the federal government will own the doses expected to result from the demonstration project," the government said in its July 7 news release. Regeneron similarly said in its own news release that same day that "the government has committed to making doses from these lots available to the American people at no cost and would be responsible for their distribution," noting that this depended on the government granting emergency use authorization or product approval. California based laboratory, FOMAT, is part of the clinical investigation through their doctors Augusto and Nicholas Focil.
In October 2020 when U.S. President Donald Trump was infected with COVID-19 and taken to Walter Reed National Military Medical Center in Bethesda, Maryland, he was administered REGN-COV2. His doctors obtained it from Regeneron via a compassionate use request (as clinical trials had not yet been completed and the drug had not yet been approved by the US Food and Drug Administration (FDA)). On October 7, Trump posted a five-minute video to Twitter reasserting that this drug should be "free." That same day, Regeneron filed with the FDA for emergency use authorization. In the filing, it specified that it currently had 50,000 doses and that it expected to reach a total of 300,000 doses "within the next few months." The FDA granted approval for emergency use authorization in November 2020.
Marketed products
Arcalyst (rilonacept) is used for specific, rare autoinflammatory conditions. Approved by the FDA in February 2008.
Eylea (aflibercept injection) was approved by the U.S. Food and Drug Administration (FDA) in November 2011 to treat a common cause of blindness in the elderly. Eylea is reported to cost $11,000 per year for each eye treated.
Zaltrap (aflibercept injection) is used for metastatic colorectal cancer approved by the FDA in August 2012.
Praluent (alirocumab) is indicated as an adjunct to diet and maximally tolerated statin therapy for the treatment of adults with heterozygous familial hypercholesterolemia or clinical atherosclerotic cardiovascular disease (ASCVD) who require additional lowering of low-density lipoprotein (LDL) cholesterol. Approved by the FDA in July 2015, it is reported to cost $4,500 to $8,000 per year.
Dupixent (dupilumab injection) is for the treatment of adolescent and adult patients' atopic dermatitis. It was approved by the FDA in March 2017 and is reported to cost $37,000 per year.
Kevzara (sarilumab injection) is an interleukin-6 (IL-6) receptor antagonist for treatment of adults with rheumatoid arthritis approved by the FDA in May 2017. Trials commenced in March 2020 to evaluate the effectiveness of Kevzara in the treatment of COVID-19.
Libtayo (cemiplimab injection) is a monoclonal antibody targeting the PD-1 pathway as a checkpoint inhibitor, for the treatment of people with metastatic cutaneous squamous cell carcinoma (cSCC) or locally advanced cSCC who are not candidates for curative surgery or curative radiation. Libtayo was approved by the FDA in September 2018.
Inmazeb (atoltivimab/maftivimab/odesivimab) is a drug made of three antibodies, developed to treat deadly Ebola virus. In October 2020, the U.S. Food and Drug Administration (FDA) approved it with an indication for the treatment of infection caused by Zaire ebolavirus.
Veopoz (pozelimab-bbfg) is a fully human monoclonal antibody targeting complement factor C5, a protein involved in complement system activation. In August 2023, it was approved by the FDA for children and adults with CHAPLE disease or CD55-deficient protein-losing enteropathy.
Technology platforms
Trap Fusion Proteins: Regeneron's novel and patented Trap technology creates high-affinity product candidates for many types of signaling molecules, including growth factors and cytokines. The Trap technology involves fusing two distinct fully human receptor components and a fully human immunoglobulin-G constant region.
Fully Human Monoclonal Antibodies: Regeneron has developed a suite (VelociSuite) of patented technologies, including VelocImmune and VelociMab, that allow Regeneron scientists to determine the best targets for therapeutic intervention and rapidly generate high-quality, fully human antibodies drug candidates addressing these targets.
Financial performance
Key people
The founders Leonard Schleifer and George Yancopoulos are reported to hold $1.3 billion and $900 million in company stock, respectively. Both are from Queens, New York. Schleifer was formerly a professor of medicine at Weill Cornell Medical School. Yancopoulos was a post-doctoral fellow, and MD/PhD student at Columbia University. Yancopoulos was involved in each drug's development.
See also
Biotech and pharmaceutical companies in the New York metropolitan area
Regeneron Science Talent Search
References
External links
1988 establishments in New York (state)
American companies established in 1988
Biotechnology companies established in 1988
Biotechnology companies of the United States
Companies based in Westchester County, New York
Health care companies based in New York (state)
Life sciences industry
Companies associated with the COVID-19 pandemic
Pharmaceutical companies established in 1988
Pharmaceutical companies of the United States | Regeneron Pharmaceuticals | Biology | 1,970 |
36,728,961 | https://en.wikipedia.org/wiki/Aircraft%20engine%20starting | Many variations of aircraft engine starting have been used since the Wright brothers made their first powered flight in 1903. The methods used have been designed for weight saving, simplicity of operation and reliability. Early piston engines were started by hand. Geared hand starting, electrical and cartridge-operated systems for larger engines were developed between the First and Second World Wars.
Gas turbine aircraft engines such as turbojets, turboshafts and turbofans often use air/pneumatic starting, with the use of bleed air from built-in auxiliary power units (APUs) or external air compressors now seen as a common starting method. Often only one engine needs be started using the APU (or remote compressor). After the first engine is started using APU bleed air, cross-bleed air from the running engine can be used to start the remaining engine(s).
Piston engines
Hand starting/propeller swinging
Hand starting of aircraft piston engines by swinging the propeller is the oldest and simplest method, the absence of any onboard starting system giving an appreciable weight saving. Positioning of the propeller relative to the crankshaft is arranged such that the engine pistons pass through top dead centre during the swinging stroke.
As the ignition system is normally arranged to produce sparks before top dead centre there is a risk of the engine kicking back during hand starting. To avoid this problem one of the two magnetos used in a typical aero engine ignition system is fitted with an 'impulse coupling', this spring-loaded device delays the spark until top dead centre and also increases the rotational speed of the magneto to produce a stronger spark. When the engine fires, the impulse coupling no longer operates and the second magneto is switched on.
As aero engines grew bigger in capacity during the interwar period, single-person propeller swinging became physically difficult; ground crew personnel would join hands and pull together as a team, or use a canvas sock fitted over one propeller blade with a length of rope attached to the propeller tip end. Note that this is different from the manual "turning over" of a radial piston engine, which is done before starting to release oil that has become trapped in the lower cylinders and so avoid engine damage. The two appear similar, but while hand starting involves a sharp, strong "yank" on the prop to start the engine, turning over is simply done by turning the prop through a certain set amount.
Accidents have occurred during lone-pilot hand starting when high throttle settings were used, brakes were not applied or wheel chocks were not in place, all resulting in the aircraft moving off without the pilot at the controls. "Turning the engine" with the ignition switches accidentally left "on" can also cause injury, as the engine can start unexpectedly when a spark plug fires. If the switch is not in the start position, the spark will occur before the piston reaches top dead centre, which can cause the propeller to kick back violently.
Hucks starter
The Hucks starter (invented by Bentfield Hucks during WWI) is a mechanical replacement for the ground crew. Based on a vehicle chassis the device uses a clutch driven shaft to turn the propeller, disengaging as the engine starts. A Hucks starter is used regularly at the Shuttleworth Collection for starting period aircraft.
Pull cord
Self-sustaining motor gliders (often known as 'turbos') are fitted with small two-stroke engines with no starting system; for ground testing, a cord is wrapped around the propeller boss and pulled rapidly in conjunction with operating the decompressor valves. These engines are started in flight by operating the decompressor and increasing airspeed to windmill the propeller. Early variants of the Slingsby Falke motor glider use a cockpit-mounted pull start system.
Electric starter
Aircraft began to be equipped with electrical systems around 1930, powered by a battery and small wind-driven generator. The systems were initially not powerful enough to drive starter motors. Introduction of engine-driven generators solved the problem.
Introduction of electric starter motors for aero engines increased convenience at the expense of extra weight and complexity. They were a necessity for flying boats with high mounted, inaccessible engines. Powered by an onboard battery, ground electrical supply or both, the starter is operated by a key or switch in the cockpit. The key system usually facilitates switching of the magnetos.
In cold ambient conditions the friction caused by viscous engine oil causes a high load on the starting system. Another problem is the reluctance of the fuel to vaporise and combust at low temperatures. Oil dilution systems were developed (mixing fuel with the engine oil), and engine pre-heaters were used (including lighting fires under the engine). The Ki-Gass priming pump system was used to assist starting of British engines.
Aircraft fitted with variable-pitch propellers or constant speed propellers are started in fine pitch to reduce air loads and current in the starter motor circuit.
Many light aircraft are fitted with a 'starter engaged' warning light in the cockpit, a mandatory airworthiness requirement to guard against the risk of the starter motor failing to disengage from the engine.
Coffman starter
The Coffman starter was an explosive cartridge operated device, the burning gases either operating directly in the cylinders to rotate the engine or operating through a geared drive. First introduced on the Junkers Jumo 205 diesel engine in 1936, the Coffman starter was not widely used by civil operators due to the expense of the cartridges.
Pneumatic starter
In 1920 Roy Fedden designed a piston engine gas starting system, used on the Bristol Jupiter engine in 1922. A system used in early Rolls-Royce Kestrel engines ducted high-pressure air from a ground unit through a camshaft-driven distributor to the cylinders via non-return valves; the system had disadvantages that were only overcome by conversion to electric starting.
In-flight starting
When a piston engine needs to be started in flight the electric starter motor can be used. This is a normal procedure for motor gliders that have been soaring with the engine turned off. During aerobatics with earlier aircraft types it was not uncommon for the engine to cut during manoeuvres due to carburettor design. With no electric starter installed, engines can be restarted by diving the aircraft to increase airspeed and the rotation speed of the 'windmilling' propeller.
Inertia starter
An aero engine inertia starter uses a pre-rotated flywheel to transfer kinetic energy to the crankshaft, normally through reduction gears and a clutch to prevent over-torque conditions. Three variations have been used, hand driven, electrically driven and a combination of both. When the flywheel is fully energised either a manual cable is pulled or a solenoid is used to engage the starter.
Gas turbine engines
Starting of a gas turbine engine requires rotation of the compressor to a speed that provides sufficient pressurised air to the combustion chambers. The starting system has to overcome the inertia of the compressor and friction loads; the system remains in operation after combustion starts and is disengaged once the engine has reached self-idling speed.
Electric starter
Two types of electrical starter motor can be used: direct-cranking starters, which disengage once the engine is running (as with internal combustion engines), and starter-generator systems, which remain permanently engaged.
Hydraulic starter
Small gas turbine engines, particularly turboshaft engines used in helicopters and cruise missile turbojets, can be started by a geared hydraulic motor using oil pressure from a ground supply.
Air-start
With air-start systems, gas turbine engine compressor spools are rotated by the action of a large volume of compressed air acting directly on the compressor blades or driving the engine through a small, geared turbine motor. These motors can weigh up to 75% less than an equivalent electrical system.
The compressed air can be supplied from an on-board auxiliary power unit (APU), a portable gas generator used by ground crew or by cross feeding bleed air from a running engine in the case of multi-engined aircraft.
The Turbomeca Palouste gas generator was used to start the Spey engines of the Blackburn Buccaneer. The de Havilland Sea Vixen was equipped with its own Palouste in a removable underwing container to facilitate starting when away from base. Other military aircraft types using ground supplied compressed air for starting include the Lockheed F-104 Starfighter and variants of the F-4 Phantom using the General Electric J79 turbojet engine.
Combustion starters
AVPIN starter
Versions of the Rolls-Royce Avon turbojet engine used a geared turbine starter motor that burned isopropyl nitrate as the fuel. In military service this monofuel had the NATO designation of S-746 AVPIN. For starting, a measured amount of fuel was introduced to the starter combustion chamber and then ignited electrically, the hot gases spinning the turbine at high revolutions with the exhaust exiting overboard.
Cartridge starter
Similar in operating principle to the piston engine Coffman starter, an explosive cartridge drives a small turbine engine which is connected by gears to the compressor shaft.
Fuel/air turbine starter (APU)
Developed for short-haul airliners and other civil and military aircraft requiring self-contained starting systems, these units are known by various names including Auxiliary Power Unit (APU), Jet Fuel Starter (JFS), Air Start Unit (ASU) and Gas Turbine Compressor (GTC).
Comprising a small gas turbine which is electrically started, these devices provide compressed bleed air for engine starting and often also provide electrical and hydraulic power for ground operations without the need to run the main engines.
ASUs are used today in civil and military ground support to provide main engine start (MES) and pneumatic bleed-air support for Environmental Control System (ECS) cooling and heating.
Internal combustion engine starter
A notable feature of all three German jet engine designs that saw production of any kind before May 1945 (the BMW 003, Junkers Jumo 004 and Heinkel HeS 011 axial-flow turbojets) was the starter system, which consisted of a Riedel 10 hp (7.5 kW) flat-twin two-stroke air-cooled engine hidden in the intake and essentially functioned as a pioneering example of an auxiliary power unit (APU) for starting a jet engine. For the Jumo 004, a hole in the extreme nose of the intake diverter contained a D-shaped manual pull-cord handle which started the piston engine, which in turn rotated the compressor. Two small petrol/oil mix tanks were fitted in the annular intake.
The Lockheed SR-71 Blackbird used two Buick Nailhead V8 engines as starter motors, mounted on an AG-330 Start Kart trolley; these were later replaced by big-block Chevrolet 454 V8 engines.
In-flight restart
Gas turbine engines can be shut down in flight, intentionally by the crew to save fuel or during a flight test or unintentionally due to fuel starvation or flameout after a compressor stall.
Sufficient airspeed is used to 'windmill' the compressor, then fuel and ignition are switched on; an on-board auxiliary power unit may be used at high altitudes where the air density is lower.
During zoom climb operations of the Lockheed NF-104A the jet engine was shut down on climbing through and was started using the windmill method on descent through denser air.
Pulse jet starting
Pulse jet engines are uncommon aircraft powerplants. However, the Argus As 014 used to power the V-1 flying bomb and Fieseler Fi 103R Reichenberg was a notable exception.
In this pulse jet, three air nozzles in the front section were connected to an external high-pressure air source, and butane from an external supply was used for starting; ignition was accomplished by a spark plug located behind the shutter system, with electricity to the plug supplied from a portable starting unit.
Once the engine started and the temperature rose to the minimum operating level, the external air hose and connectors were removed, and the resonant design of the tailpipe kept the pulse jet firing. Each cycle or pulse of the engine began with the shutters open; fuel was injected behind them and ignited, and the resulting expansion of gases forced the shutters closed. As the pressure in the engine dropped following combustion, the shutters reopened and the cycle was repeated, roughly 40 to 45 times per second. The electrical ignition system was used only to start the engine; heating of the tailpipe skin maintained combustion.
See also
Index of aviation articles
References
Notes
Bibliography
Bowman, Martin W. Lockheed F-104 Starfighter. Ramsbury, Marlborough, Wiltshire, UK: Crowood Press Ltd., 2000. .
Federal Aviation Administration, Airframe & Powerplant Mechanics Powerplant Handbook U.S Department of Transportation, Jeppesen Sanderson, 1976.
Gunston, Bill. Development of Piston Aero Engines. Cambridge, England. Patrick Stephens Limited, 2006.
Gunston, Bill. The Development of Jet and Turbine Aero Engines. Cambridge, England. Patrick Stephens Limited, 1997.
Hardy, Michael. Gliders & Sailplanes of the World. London: Ian Allan, 1982. .
Jane's Fighting Aircraft of World War II. London. Studio Editions Ltd, 1998.
Lumsden, Alec. British Piston Engines and their Aircraft. Marlborough, Wiltshire: Airlife Publishing, 2003. .
Rubbra, A.A. Rolls-Royce Piston Aero Engines - a designer remembers. Historical Series no 16. Rolls-Royce Heritage Trust, 1990.
Stewart, Stanley. Flying the Big Jets. Shrewsbury, England. Airlife Publishing Ltd, 1986.
Thom, Trevor. The Air Pilot's Manual 4-The Aeroplane-Technical. Shrewsbury, Shropshire, England. Airlife Publishing Ltd, 1988.
Williams, Neil. Aerobatics, Shrewsbury, England: Airlife Publishing Ltd., 1975
Starting systems
Engine starting | Aircraft engine starting | Engineering | 2,787 |
28,279,313 | https://en.wikipedia.org/wiki/Harlequin%20print | Harlequin print is a repeating pattern of contrasting diamonds or elongated squares standing on end.
Origins
The harlequin is a character from Commedia dell'arte, a 16th-century Italian theater movement. Harlequins were witty, mischievous clowns. Their early costumes were sewn together from fabric scraps. Over time, the diamond pattern became associated with harlequins.
1940s–1950s
Harlequin fabric was popularized in 1944 when Adele Simpson presented the harlequin print in a bold diamond design on the town suits she created. It was also featured in green and white with a green jacket and a black skirt.
Also in 1949, Louella Ballerino employed a harlequin print motif in the jester blouse "sun and fun" fashions she made popular. The design appeared along with pointed collars, tipped with buttons reminiscent of bells, and jagged points which sometimes adorned an apron overskirt.
In August 1950, Fashion Frocks of Cincinnati, Ohio marketed a white piqué dress, with an exaggerated side drape, in a red, white, and black harlequin print piqué. It was sold directly to homes by housewife representatives.
Tammis Keefe, a cloth designer whose patterns appeared at Lord and Taylor in September 1952, used a harlequin print diamond pattern on a large cloth she crafted for a table setting show.
In a July 1954 article in the Washington Post, columnist Olga Curtis mentioned harlequin print fabrics and cellophane as very novel ideas in accessories.
A sports costume in a harlequin print topped by bright orange received the most applause at a Simplicity Patterns Fashion Show at Sulzberger Junior High School in March 1955. Presented by Norma Riseman, the orange wraparound blouse was one of many colorful fashions modeled for teenagers.
In June 1955 a silk dress at Peck & Peck had white, pink, and blue tints of harlequin print. The colors and diamond motif were embellished when worn together with a white cashmere sweater highlighted by spangles.
Freddie Mercury was very well known for performing in various concerts during the late seventies in harlequin print trousers. He also wore them in his music video "Living on My Own".
1960s
Harlequin print became particularly popular in the 1960s, appearing on underwear, umbrellas, pajamas, ski wear, maternity clothes, and hosiery. A 1962 Picasso retrospective in New York fueled the trend.
1990s and 2000s
Beene and Rowley employed harlequin prints in their fashion designs presented at spring fashion week in November 1996.
At the 2001 Ebony Fashion Fair designer Oscar de la Renta presented a multicolored harlequin print silk charmeuse gown with a skirt fashioned with accordion pleats and a halter top with a gold neck ring.
Gallery
References
1940s fashion
Textile patterns
Fashion design
1950s fashion | Harlequin print | Engineering | 575 |
72,712,320 | https://en.wikipedia.org/wiki/OnePlus%2010T | The OnePlus 10T is a high-end Android-based smartphone manufactured by OnePlus, unveiled on August 3, 2022. Designed as a successor to the OnePlus 8T, the 10T features a Qualcomm Snapdragon 8+ Gen 1 chipset, an octa-core CPU, and an Adreno 730 GPU. Available in Moonstone Black and Jade Green, the phone has a sleek slate form factor with dimensions of 163 mm in height, 75.4 mm in width, and 8.8 mm in thickness, and weighs 204 grams.
The OnePlus 10T has a 6.7-inch Fluid AMOLED display with a resolution of 1080 x 2412 pixels, a 20:9 aspect ratio, and a 120Hz refresh rate.
The camera setup includes a triple rear camera system with a 50 MP main sensor, an 8 MP ultrawide lens, and a 2 MP macro lens, capable of recording 4K video at 30/60fps and 1080p video at up to 240fps. The front-facing camera features a 16 MP sensor.
Additional features include a 4800 mAh battery with 150W fast charging, stereo speakers, and a range of connectivity options such as Wi-Fi 6, Bluetooth 5.2, and USB Type-C. The OnePlus 10T also includes an under-display optical fingerprint scanner, accelerometer, gyroscope, proximity sensor, compass, and a color spectrum sensor.
The OnePlus 10T is related to the OnePlus 10 Pro and OnePlus 10R, sharing similar design and performance traits.
References
External links
OnePlus mobile phones
Phablets
Mobile phones introduced in 2022
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording | OnePlus 10T | Technology | 373 |
55,100,201 | https://en.wikipedia.org/wiki/Olech%20theorem | In dynamical systems theory, the Olech theorem establishes sufficient conditions for global asymptotic stability of a two-equation system of non-linear differential equations. The result was established by Czesław Olech in 1963, based on joint work with Philip Hartman.
Theorem
The system of differential equations x′ = f(x, y), y′ = g(x, y), where f and g are continuously differentiable on the plane, for which (0, 0) is an equilibrium point, is uniformly globally asymptotically stable if:
(a) the trace of the Jacobian matrix is negative, ∂f/∂x + ∂g/∂y < 0, for all (x, y),
(b) the Jacobian determinant is positive, (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x) > 0, for all (x, y), and
(c) the system is coupled everywhere, with either (∂f/∂x)(∂g/∂y) ≠ 0 for all (x, y), or (∂f/∂y)(∂g/∂x) ≠ 0 for all (x, y).
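As a sketch of how these conditions can be checked in practice, the following uses SymPy and an assumed illustrative system x′ = −x³ − x + y, y′ = −x − y³ − y (chosen only for this example, not taken from Olech's paper):

```python
# Illustrative check of the Olech conditions with SymPy for an assumed system.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = -x**3 - x + y
g = -x - y**3 - y

J = sp.Matrix([[f.diff(x), f.diff(y)],
               [g.diff(x), g.diff(y)]])

trace = sp.simplify(J.trace())                  # -3*x**2 - 3*y**2 - 2 < 0 everywhere: (a)
det = sp.simplify(J.det())                      # (3*x**2 + 1)*(3*y**2 + 1) + 1 > 0 everywhere: (b)
coupling = sp.simplify(f.diff(x) * g.diff(y))   # (3*x**2 + 1)*(3*y**2 + 1), nonzero everywhere: (c)

print(trace, det, coupling, sep='\n')
```

Since all three expressions have the required sign for every real (x, y), the theorem asserts that the origin of this sample system is globally asymptotically stable.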
References
Theorems in dynamical systems
Stability theory | Olech theorem | Mathematics | 133 |
9,742,707 | https://en.wikipedia.org/wiki/Waste%20pond | A waste pond or chemical pond is a small impounded water body used for the disposal of water pollutants, and sometimes utilized as a method of recycling or decomposing toxic substances. Such waste ponds may be used for regular disposal of pollutant materials or may be used as upset receivers for special pollution events. Often, chemical ponds themselves are addressed for cleanup action after their useful life is over or when a risk of groundwater contamination arises. Contamination of waterways and groundwater can be damaging to human, animal and environmental health. These health effects bring into question the best engineering solutions to mitigate waste ponds' environmental impact.
Environmental and health risks
The bacteria, pathogens, and excess nutrients stored in waste ponds can damage the environment and harm human health. In storms and heavy rainfall, waste ponds can overflow spilling sewage water and contaminating waterways. The contamination of surrounding watersheds causes negative impacts to both the ecosystems and surrounding populations. A survey carried out in Eastern North Carolina found that there was a twenty-one percent increase in cases of acute gastrointestinal illness in rural areas surrounding hog farms which stored waste in waste ponds compared to areas without. The results also showed a stronger association following periods of heavy rain. This suggests that the waste ponds, particularly during heavy rainfall, may play a significant role in the contamination of surrounding environments, warranting further investigation into their impact on public health in rural areas. Overall, these findings highlight the potential risks associated with waste ponds, creating a potential for innovation to improve management practices and continue research to mitigate their environmental and public health effects.
History
Peak usage of waste ponds in the United States occurred in the period 1955 to 1985, after which the environmental risks of pond technology were sufficiently understood, such that alternative technologies for waste disposal gradually began to displace many of the waste ponds. Waste ponds often have pond liners, such as concrete or robust synthetic polymeric materials, to prevent infiltration of chemicals to soil or groundwater.
Engineering
Designing and managing waste ponds in an environmentally responsible way requires a comprehensive approach that integrates site selection, chemical balancing, and the establishment of long-term sustainability practices. By employing effective chemical treatments and monitoring systems, it is possible to significantly reduce the environmental impact of waste ponds. Additionally, strategies such as waste minimization, pond closure, and the use of containment systems ensure that these ponds can serve as a safe and effective solution for waste management without putting the health of surrounding ecosystems at risk.
Waste ponds in practice
United States
Piscataway chemical pond
Union Carbide used the pond at its Piscataway, New Jersey plant while in operation. The pond's primary use was chemical drainage. Hazardous chemicals would flow through drains inside the plant and into the pond. They were later pumped back to the factory via two large pumps and distilled to remove acetone and other hazards. Overall, this process was harmful to the environment and polluted the groundwater.
Oak Ridge waste pond
The United States Oak Ridge National Laboratory in Oak Ridge, Tennessee operated for more than 50 years, and was decommissioned in the mid 1960s. Plant waste, collected in a pond, was found to contain radioactive waste, including strontium-90, caesium-137; tritium, and transuranics.
In the mid 1990s, Department of Energy officials installed a cryogenic stabilization system at the waste pond, freezing the soil and groundwater, forming a barrier to groundwater leaching. In February 2004, the cryogenic system was dismantled, and the pond was excavated. The soil surrounding the frozen pond contained lower levels of contamination than the pond itself, but enough contamination that it had to be removed. This demonstrates the lasting environmental impact of waste disposal in waste ponds.
Kenya
While there are many wastewater treatment options available, some are more accessible or effective in different parts of the world. In Kenya, waste stabilization ponds are one of the most effective wastewater treatment methods, and one of the few that work in Kenya, specifically.
Europe
Across Europe, waste ponds are a common method of wastewater treatment. In France there are an estimated 2,500 waste ponds. There are approximately 1,500 in Bavaria and approximately 3,000 in Germany overall. The United Kingdom has only recorded the existence of 40 waste ponds, but this may be due to the limited research that has been done on the UK's waste ponds.
See also
Evaporation pond
References
Environmental technology
Ponds
Water pollution | Waste pond | Chemistry,Environmental_science | 893 |
67,580,479 | https://en.wikipedia.org/wiki/Time%20in%20San%20Marino | In San Marino, the standard time is Central European Time (CET; UTC+01:00). Daylight saving time is observed from the last Sunday in March (02:00 CET) to the last Sunday in October (03:00 CEST). This is shared with several other EU member states.
History
San Marino observed daylight saving time between 1916 and 1920, 1940, 1942 to 1948, and again since 1966.
IANA time zone database
In the IANA time zone database, San Marino is given one zone in the file zone.tab – Europe/San_Marino.
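A minimal sketch of looking up this zone with Python's standard-library zoneinfo module (Python 3.9+); the code is an illustration, not part of the tz database itself:

```python
# Query the IANA zone listed for San Marino.
from datetime import datetime
from zoneinfo import ZoneInfo

sm = ZoneInfo("Europe/San_Marino")
now = datetime.now(tz=sm)
print(now.isoformat(), now.tzname(), now.utcoffset())
# tzname() returns "CET" (UTC+01:00) in winter and "CEST" (UTC+02:00)
# while daylight saving time is in force.
```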
See also
Time in Europe
List of time zones by country
List of time zones by UTC offset
References
External links
Current time in San Marino at Time.is
Geography of San Marino | Time in San Marino | Physics | 185 |
76,736,170 | https://en.wikipedia.org/wiki/Sensitivity%20theorem | In computational complexity, the sensitivity theorem, proved by Hao Huang in 2019, states that the sensitivity of a Boolean function is at least the square root of its degree, thus settling a conjecture posed by Nisan and Szegedy in 1992. The proof is notably succinct, given that prior progress had been limited.
Background
Several papers in the late 1980s and early 1990s showed that various decision tree complexity measures of Boolean functions are polynomially related, meaning that if A(f) and B(f) are two such measures then A(f) ≤ B(f)^c for some constant c. Nisan and Szegedy showed that degree and approximate degree are also polynomially related to all these measures. Their proof went via yet another complexity measure, block sensitivity, which had been introduced by Nisan. Block sensitivity generalizes a more natural measure, (critical) sensitivity, which had appeared before.
Nisan and Szegedy asked whether block sensitivity is polynomially bounded by sensitivity (the other direction is immediate since sensitivity is at most block sensitivity). This is equivalent to asking whether sensitivity is polynomially related to the various decision tree complexity measures, as well as to degree, approximate degree, and other complexity measures which have been shown to be polynomially related to these along the years. This became known as the sensitivity conjecture.
Along the years, several special cases of the sensitivity conjecture were proven.
The sensitivity theorem was finally proven in its entirety by Huang, using a reduction of Gotsman and Linial.
Statement
Every Boolean function f: {0,1}^n → {0,1} can be expressed in a unique way as a multilinear polynomial. The degree of f is the degree of this unique polynomial, denoted deg(f).
The sensitivity of the Boolean function f at the point x is the number of indices i such that f(x) ≠ f(x^(i)), where x^(i) is obtained from x by flipping the i'th coordinate. The sensitivity of f is the maximum sensitivity of f at any point x, denoted s(f).
The sensitivity theorem states that s(f) ≥ √(deg(f)).
In the other direction, Tal, improving on an earlier bound of Nisan and Szegedy, showed that s(f) ≤ bs(f) ≤ deg(f)^2.
The sensitivity theorem is tight for the AND-of-ORs function: f(x) = AND over i = 1..n of (OR over j = 1..n of x_{i,j}), a function on n^2 variables.
This function has degree n^2 and sensitivity n.
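A brute-force check of these values for small n (assumed illustrative code, not from the cited papers): it evaluates the AND-of-ORs function on every input to find the sensitivity, and recovers the multilinear coefficients by Möbius inversion over subsets to find the degree.

```python
from itertools import product

n = 2
N = n * n  # total number of variables, arranged as an n-by-n grid

def f(x):  # x is a tuple of N bits
    return int(all(any(x[i * n + j] for j in range(n)) for i in range(n)))

def sensitivity(f, N):
    # Maximum over inputs of the number of single-bit flips that change f.
    best = 0
    for x in product((0, 1), repeat=N):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(N))
        best = max(best, flips)
    return best

def degree(f, N):
    # Largest monomial with a nonzero coefficient in the unique multilinear
    # expansion; coefficient of S is sum over subsets T of S of (-1)^(|S|-|T|) f(T).
    deg = 0
    for S in product((0, 1), repeat=N):
        coeff = 0
        for T in product((0, 1), repeat=N):
            if all(t <= s for s, t in zip(S, T)):
                coeff += (-1) ** (sum(S) - sum(T)) * f(T)
        if coeff != 0:
            deg = max(deg, sum(S))
    return deg

print(sensitivity(f, N), degree(f, N))  # expected: 2 and 4 when n = 2
```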
Proof
Let f be a Boolean function of degree d. Consider any maxonomial of f, that is, a monomial of degree d in the unique multilinear polynomial representing f. If we substitute an arbitrary value in the coordinates not mentioned in the monomial then we get a function g on d coordinates which has degree d, and moreover, s(g) ≤ s(f). If we prove the sensitivity theorem for g then it follows for f. So from now on, we assume without loss of generality that f is a function on n coordinates which has full degree n.
Define a new function g by g(x) = f(x) ⊕ x_1 ⊕ ⋯ ⊕ x_n.
It can be shown that since f has degree n then g is unbalanced (meaning that |g⁻¹(0)| ≠ |g⁻¹(1)|), say |g⁻¹(1)| > 2^(n−1). Consider the subgraph G of the hypercube (the graph on {0,1}^n in which two vertices are connected if they differ by a single coordinate) induced by g⁻¹(1). In order to prove the sensitivity theorem, it suffices to show that G has a vertex whose degree is at least √n. This reduction is due to Gotsman and Linial.
Huang constructs a signing of the hypercube in which the product of the signs along any square is −1. This means that there is a way to assign a sign to every edge of the hypercube so that this property is satisfied. The same signing had been found earlier by Ahmadi et al., who were interested in signings of graphs with few distinct eigenvalues.
Let A be the signed adjacency matrix corresponding to the signing. The property that the product of the signs in every square is −1 implies that A² = nI, and so half of the eigenvalues of A are √n and half are −√n. In particular, the eigenspace of √n (which has dimension 2^(n−1)) intersects the space of vectors supported on g⁻¹(1) (which has dimension larger than 2^(n−1)), implying that there is an eigenvector v of A with eigenvalue √n which is supported on g⁻¹(1). (This is a simplification of Huang's original argument due to Shalev Ben-David.)
Consider a point x ∈ g⁻¹(1) maximizing |v_x|. On the one hand, (Av)_x = √n · v_x.
On the other hand, |(Av)_x| is at most the sum of the absolute values of v over all neighbors of x in G, which is at most deg_G(x) · |v_x|. Hence deg_G(x) ≥ √n.
Constructing the signing
Huang constructed the signing recursively. When n = 1, we can take an arbitrary signing. Given a signing σ of the (n − 1)-dimensional hypercube, we construct a signing of the n-dimensional hypercube as follows. Partition the n-dimensional hypercube into two copies of the (n − 1)-dimensional hypercube. Use σ for one of them and −σ for the other, and assign all edges between the two copies the sign +1.
The same signing can also be expressed directly. Let {x, y} be an edge of the hypercube. If i is the coordinate on which x and y differ, we use the sign (−1)^(x_1 + ⋯ + x_{i−1}).
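The recursive construction can be written down as a signed adjacency matrix; the following sketch (an assumed illustration, not code from Huang's paper) builds it block by block and checks that A² = nI, so the eigenvalues are ±√n:

```python
import numpy as np

def signed_adjacency(n):
    # Base case: an arbitrary signing of the 1-dimensional hypercube (one edge).
    if n == 1:
        return np.array([[0.0, 1.0], [1.0, 0.0]])
    # Recursive step: two copies with signs sigma and -sigma, cross edges +1.
    A = signed_adjacency(n - 1)
    I = np.eye(A.shape[0])
    return np.block([[A, I], [I, -A]])

n = 4
A = signed_adjacency(n)
print(np.allclose(A @ A, n * np.eye(2 ** n)))                      # True: A^2 = n*I
print(sorted({float(v) for v in np.round(np.linalg.eigvalsh(A), 6)}))  # [-2.0, 2.0] for n = 4
```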
Extensions
The sensitivity theorem can be equivalently restated as deg(f) ≤ s(f)^2.
Laplante et al. refined this to deg(f) ≤ s_0(f) · s_1(f),
where s_b(f) is the maximum sensitivity of f at a point in f⁻¹(b).
They showed furthermore that this bound is attained at two neighboring points of the hypercube.
Aaronson, Ben-David, Kothari and Tal defined a new measure, the spectral sensitivity of f, denoted λ(f). This is the largest eigenvalue of the adjacency matrix of the sensitivity graph of f, which is the subgraph of the hypercube consisting of all sensitive edges (edges connecting two points x, y such that f(x) ≠ f(y)). They showed that Huang's proof can be decomposed into two steps:
√(deg(f)) ≤ λ(f).
λ(f) ≤ s(f).
Using this measure, they proved several tight relations between complexity measures of Boolean functions: deg(f) = O(Q(f)^2) and D(f) = O(Q(f)^4). Here D(f) is the deterministic query complexity and Q(f) is the quantum query complexity.
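As a small illustration of the definition of spectral sensitivity (assumed helper code, not from the papers cited), the quantity can be computed for a function on few variables by building the sensitivity graph explicitly:

```python
import numpy as np
from itertools import product

def spectral_sensitivity(f, n):
    # Largest eigenvalue of the adjacency matrix of the sensitivity graph:
    # the subgraph of the hypercube consisting of edges {x, y} with f(x) != f(y).
    points = list(product((0, 1), repeat=n))
    index = {x: k for k, x in enumerate(points)}
    A = np.zeros((2 ** n, 2 ** n))
    for x in points:
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if f(x) != f(y):
                A[index[x], index[y]] = 1.0
    return max(np.linalg.eigvalsh(A))

# Example: the 2-bit AND function. Its sensitive edges form a star centred at
# (1, 1) with two edges, so the largest eigenvalue is sqrt(2) ~ 1.414.
print(spectral_sensitivity(lambda x: x[0] & x[1], 2))
```

For this example λ(f) = √2, which sits between √(deg(f)) = √2 and s(f) = 2, matching the two-step decomposition above.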
Dafni et al. extended the notions of degree and sensitivity to Boolean functions on the symmetric group and on the perfect matching association scheme, and proved analogs of the sensitivity theorem for such functions. Their proofs use a reduction to Huang's sensitivity theorem.
See also
Decision tree model
Notes
References
Theorems in computational complexity theory | Sensitivity theorem | Mathematics | 1,167 |
25,472,715 | https://en.wikipedia.org/wiki/Gliese%20649%20b | Gliese 649 b , or Gl 649 b is an extrasolar planet, orbiting the 10th magnitude M-type star Gliese 649, 10 parsecs from earth. This planet is a sub-Jupiter, massing 0.328 Jupiter mass and orbits at 1.135 AU.
References
Exoplanets discovered in 2009
Exoplanets detected by radial velocity
Giant planets
Hercules (constellation)
6 | Gliese 649 b | Astronomy | 88 |
2,902,648 | https://en.wikipedia.org/wiki/94%20Aquarii | 94 Aquarii (abbreviated 94 Aqr) is a triple star system in the equatorial constellation of Aquarius. 94 Aquarii is the Flamsteed designation. The brightest member has an apparent visual magnitude of 5.19, making it visible to the naked eye. The parallax measured by the Gaia spacecraft yields a distance estimate of around from Earth.
The inner pair of this triple star system form a spectroscopic binary with an orbital period of 6.321 years, a moderate orbital eccentricity of 0.173, and a combined visual magnitude of 5.19. The primary component of this pair has a stellar classification of G8.5 IV, with the luminosity class of IV indicating this is a subgiant star. At an angular separation of 13.0 arcseconds from this pair is a magnitude 7.52 K-type main sequence star with a classification of K2 V.
References
External links
Image 94 Aquarii
Aquarius (constellation)
Spectroscopic binaries
Aquarii, 094
G-type subgiants
K-type main-sequence stars
8866
115126
Durchmusterung objects
219834
Triple star systems
0894.2 | 94 Aquarii | Astronomy | 250 |
15,029,253 | https://en.wikipedia.org/wiki/BATF%20%28gene%29 | Basic leucine zipper transcription factor, ATF-like, also known as BATF, is a protein which in humans is encoded by the gene.
Function
The protein encoded by this gene is a nuclear basic leucine zipper (bZIP) protein that belongs to the AP-1/ATF superfamily of transcription factors. The leucine zipper of this protein mediates dimerization with members of the Jun family of proteins. This protein is thought to be a negative regulator of AP-1/ATF transcriptional events.
Mice without the BATF gene (BATF knockout mice) lacked a type of inflammatory immune cell (Th17) and were resistant to conditions that normally induce an autoimmune condition similar to multiple sclerosis.
Interactions
BATF has been shown to interact with IFI35.
References
Further reading
External links
Transcription factors | BATF (gene) | Chemistry,Biology | 178 |
18,928,423 | https://en.wikipedia.org/wiki/NGC%206250 | NGC 6250 is a open cluster of stars in the southern constellation of Ara, near the border with Scorpius. It was discovered by English astronomer John Herschel on July 1, 1834. This cluster has an apparent visual magnitude of 5.9 and spans an angular diameter of , with the brightest member being of magnitude 7.6. About 15 members are visible with binoculars or a small telescope. NGC 6250 is located at a distance of from the Sun, and is approaching with a mean radial velocity of .
The Trumpler classification of NGC 6250 is II3r, indicating a rich cluster of stars (r) with a slightly disparate grouping (II) and a large brightness range (3). This is a young cluster with an estimated age of 14 million years. Seven cluster members are B-type stars, and three are illuminating reflection nebulae. Two magnetic chemically peculiar stars (CP2) and two candidate Lambda Boötis stars have been identified as members. The metallicity of the cluster members is consistent with that of the Sun.
References
External links
Image NGC 6250
http://seds.org/
NGC 6250
6250
Ara (constellation) | NGC 6250 | Astronomy | 236 |
2,103,537 | https://en.wikipedia.org/wiki/Hospitality | Hospitality is the relationship of a host towards a guest, wherein the host receives the guest with some amount of goodwill and welcome. This includes the reception and entertainment of guests, visitors, or strangers. Louis, chevalier de Jaucourt describes hospitality in the as the virtue of a great soul that cares for the whole universe through the ties of humanity. Hospitality is also the way people treat others, for example in the service of welcoming and receiving guests in hotels. Hospitality plays a role in augmenting or decreasing the volume of sales of an organization.
Hospitality ethics is a discipline that studies this usage of hospitality.
Etymology
"Hospitality" derives from the Latin , meaning "host", "guest", or "stranger". is formed from , which means "stranger" or "enemy" (the latter being where terms like "hostile" derive). By metonymy, the Latin word means a guest-chamber, guest's lodging, an inn. is thus the root for the English words host, hospitality, hospice, hostel, and hotel.
Historical practice
In ancient cultures, hospitality involved welcoming the stranger and offering them food, shelter, and safety.
Global concepts
Albanians
Among Albanians, hospitality (mikpritja) is an indissoluble element of their traditional society, also regulated by the Albanian traditional customary law (Kanun). Hospitality, honor, and besa are the pillars of the northern Albanian tribal society. Numerous foreign visitors have historically documented the hospitality of both northern and southern Albanians. Foreign travelers and diplomats, and a number of renowned historians and anthropologists have, in particular, "solemnized, romanticized, and glorified" the hospitality of the northern Albanian highlanders.
Some reasons that have been provided to explain the admiration of the Albanian hospitality by foreign visitors are: the rituals and forms in which it is expressed; its universal application with uncompromising protection of the guest, even in the case of blood feud (gjakmarrje) between the host and the guest; its central role as a moral principle in Albanian society and individual life, also regulated and sanctified in the Kanun as a basic societal institution; its exceptional altruistic appeal as well as application, conferred with the best available resources, regardless of the fact that the remote, harsh, and geographically inhospitable territory of the northern Albanian mountains is typically scarce in material resources.
The Albanian law of hospitality is simply clarified by the Kanun: "The house of the Albanian belongs to God and the guest." This means that the guest – who represents the supreme ethical category – has a greater role than the master of the house himself. The guest's role is even more important than blood, because according to custom there is the possibility to pardon a man who has spilled the blood of one's father or one's son, but a man who has spilled the blood of a guest can never be pardoned. In Albanian tradition a guest is effectively regarded as a semi-god, admired above all other human relations.
A reflection of the Albanian solemn adherence to their traditional customs of hospitality and besa is notably considered to be their treatment of Jews at the time of the Italian and German occupation during World War II. Indeed, Jews in hiding in Albania were not betrayed or handed over to the Germans by Albanians, and as a result, there were eleven times more Jews in Albania at the end of World War II than at the beginning of it.
Ancient Greece
In Ancient Greece, hospitality was a right, with the host being expected to make sure the needs of his guests were met. Conversely, the guest was expected to abide by a set code of behaviour. The ancient Greek term xenia, or theoxenia when a god was involved, expressed this ritualized guest-friendship relation. This relationship was codified in the Homeric epics, and especially in the Odyssey. In Greek society, a person's ability to abide by the laws of hospitality determined nobility and social standing. The ancient Greeks, since the time of Homer, believed that the goddess of hospitality and hearth was Hestia, one of the original six Olympians.
India and Nepal
In India and Nepal, hospitality is based on the principle Atithi Devo Bhava, meaning "the guest is God". This principle is shown in a number of stories where a guest is revealed to be a god who rewards the provider of hospitality. From this stems the Indian and Nepalese practice of graciousness towards guests at home and in all social situations. The Tirukkuṛaḷ, an ancient Indian work on ethics and morality, explains the ethics of hospitality in verses 81 through 90, dedicating a separate chapter to it (chapter 9).
Judaism
Judaism praises hospitality to strangers and guests, based largely on the examples of Abraham and Lot in the Book of Genesis (chapters 18 and 19). In Hebrew, the practice is called hachnasat orchim, meaning "welcoming guests". Besides other expectations, hosts are expected to provide nourishment, comfort, and entertainment for their guests, and at the end of the visit, hosts customarily escort their guests out of their home, wishing them a safe journey.
Abraham set the standard as providing three things:
("feeding")
("drinking")
("lodging")
The initial letters of these Hebrew words spell Aishel ().
Christianity
In Christianity, hospitality is a virtue. It is a reminder of sympathy for strangers and a rule to welcome visitors. This is a virtue found in the Old Testament, with, for example, the custom of the foot washing of visitors or the kiss of peace. Jesus taught in the New Testament that those who had welcomed a stranger had welcomed him. He expanded the meaning of brother and neighbor to include the stranger, that he or she be treated with hospitality.
Pope John Paul II wrote: "Welcoming our brothers and sisters with care and willingness must not be limited to extraordinary occasions but must become for all believers a habit of service in their daily lives." He also said, "Only those who have opened their hearts to Christ can offer a hospitality that is never formal or superficial but identified by 'gentleness' and 'reverence'." Some Western countries have developed a host culture for immigrants based on the Bible. In some Christian belief, a guest should never be made to feel that they are causing undue extra labor by their presence.
Pashtun
One of the main principles of Pashtunwali is melmastia. This is the display of hospitality and profound respect to all visitors (regardless of race, religion, national affiliation, or economic status) without any hope of remuneration or favour. Pashtuns will go to great lengths to show their hospitality.
Islam
In Islam, there is a strong emphasis on expressing goodwill through the greeting Assalamu Alaikum, meaning "peace be upon you". This practice is rooted in the teachings of Muhammad. These teachings extend to the treatment of guests and even prisoners of war. Authentic sources and Quranic verses underscore the importance of showing kindness and peace towards these people.
Abu Aziz ibn Umair reported: "I was among the prisoners of war on the day of the battle of Badr. Muhammad had said, 'I enjoin you to treat the captives well.' After I accepted Islam, I was among the Ansar (Inhabitants of Madinah) and when the time of lunch or dinner arrived, I would feed dates to the prisoners for I had been fed bread due to the command of Muhammad."
Invite (all) to the Way of thy Lord with wisdom and beautiful preaching, and argue with them in ways that are best and most gracious.
Good hospitality is crucial in Islam even in business. According to another report, Muhammad passed by a pile of food in the market. He put his hand inside it and felt dampness, although the surface was dry. He said:
"O owner of the food, what is this?"
The man said, "It was damaged by rain, O Messenger of God."
He said, "Why did you not put the rain-damaged food on top so that people could see it! Whoever cheats us is not one of us."
Celtic cultures
Celtic societies also valued hospitality, especially in terms of protection. A host who granted a person's request for refuge was expected not only to provide food and shelter for guests but also to make sure that they did not come to harm under their care.
Northern European cultures
In Sweden, Norway, Finland, Denmark, and the Netherlands, it is often considered inappropriate to feed children from another family. Visiting children may be asked to leave at dinnertime or to wait in another room, or the host family may call the visitor's parents and ask for permission.
Examples
Bread and salt
Hospitium
Southern hospitality
Current usage
In the West today hospitality is rarely a matter of protection and survival and is more associated with etiquette and entertainment. However, it still involves showing respect for one's guests, providing for their needs, and treating them as equals. Cultures and subcultures vary in the extent to which one is expected to show hospitality to strangers, as opposed to personal friends or members of one's ingroup.
Anthropology of hospitality
In anthropology, hospitality has been analyzed as an unequal relation between hosts and guests, mediated through various forms of exchange.
Jacques Derrida offers a model to understand hospitality that divides unconditional hospitality from conditional hospitality. Over the centuries, philosophers have considered the problem of hospitality. To Derrida, there is an implicit hostility in hospitality, as it requires treating a person as a stranger, distancing them from oneself; Derrida labels this intrinsic conflict with the portmanteau "hostipitality". However, hospitality offers a paradoxical situation (like language), since the inclusion of those who are welcomed in the sacred law of hospitality implies that others will be rejected.
Julia Kristeva alerts readers to the dangers of "perverse hospitality", which takes advantage of the vulnerability of aliens to dispossess them. Hospitality reduces the tension in the process of host-guest encounters, producing a liminal zone that combines curiosity about others and fear of strangers. Hospitality centres on the belief that strangers should be assisted and protected while traveling. However, some disagree. Anthony Pagden describes how the concept of hospitality was historically manipulated to legitimate the conquest of the Americas by imposing the right of free transit, which was conducive to the formation of the modern nation state. This suggests that hospitality is a political institution, which can be ideologically deformed to oppress others.
See also
References
Further reading
Cultural anthropology
Etiquette | Hospitality | Biology | 2,147 |
569,705 | https://en.wikipedia.org/wiki/Cell-mediated%20immunity | Cellular immunity, also known as cell-mediated immunity, is an immune response that does not rely on the production of antibodies. Rather, cell-mediated immunity is the activation of phagocytes, antigen-specific cytotoxic T-lymphocytes, and the release of various cytokines in response to an antigen.
History
In the late 19th century, within the Hippocratic tradition of medicine, the immune system was imagined as divided into two branches: humoral immunity, for which the protective function of immunization could be found in the humor (cell-free bodily fluid or serum), and cellular immunity, for which the protective function of immunization was associated with cells. CD4 cells or helper T cells provide protection against different pathogens. Naive T cells, which are immature T cells that have yet to encounter an antigen, are converted into activated effector T cells after encountering antigen-presenting cells (APCs). These APCs, such as macrophages, dendritic cells, and B cells in some circumstances, load antigenic peptides onto the major histocompatibility complex (MHC) of the cell, in turn presenting the peptide to receptors on T cells. The most important of these APCs are highly specialized dendritic cells, which conceivably operate solely to ingest and present antigens. Activated effector T cells can be placed into three functioning classes, detecting peptide antigens originating from various types of pathogen: 1) cytotoxic T cells, which kill infected target cells by apoptosis without using cytokines; 2) Th1 cells, which primarily function to activate macrophages; and 3) Th2 cells, which primarily function to stimulate B cells into producing antibodies.
In another ideology, the innate immune system and the adaptive immune system each comprise both humoral and cell-mediated components. Some cell-mediated components of the innate immune system include myeloid phagocytes, innate lymphoid cells (NK cells) and intraepithelial lymphocytes.
Synopsis
Cellular immunity protects the body through:
T-cell mediated immunity or T-cell immunity: activating antigen-specific cytotoxic T cells that are able to induce apoptosis in body cells displaying epitopes of foreign antigen on their surface, such as virus-infected cells, cells with intracellular bacteria, and cancer cells displaying tumor antigens;
Macrophage and natural killer cell action: enabling the destruction of pathogens via recognition and secretion of cytotoxic granules (for natural killer cells) and phagocytosis (for macrophages); and
Stimulating cells to secrete a variety of cytokines that influence the function of other cells involved in adaptive immune responses and innate immune responses.
Cell-mediated immunity is directed primarily at microbes that survive in phagocytes and microbes that infect non-phagocytic cells. It is most effective in removing virus-infected cells, but also participates in defending against fungi, protozoans, cancers, and intracellular bacteria. It also plays a major role in transplant rejection.
Type 1 immunity is directed primarily at viruses, bacteria, and protozoa and is responsible for activating macrophages, turning them into potent effector cells. This is achieved by the secretion of interferon gamma and TNF.
Overview
CD4+ T-helper cells may be differentiated into two main categories:
TH1 cells which produce interferon gamma and lymphotoxin alpha,
TH2 cells which produce IL-4, IL-5, and IL-13.
A third category called T helper 17 cells (TH17) were also discovered which are named after their secretion of Interleukin 17.
CD8+ cytotoxic T-cells may also be categorized as:
Tc1 cells,
Tc2 cells.
Similarly to CD4+ TH cells, a third category called TC17 were discovered that also secrete IL-17.
As for the ILCs, they may be classified into three main categories:
ILC1 which secrete type 1 cytokines,
ILC2 which secrete type 2 cytokines,
ILC3 which secrete type 17 cytokines.
Development of cells
All type 1 cells begin their development from the common lymphoid progenitor (CLp) which then differentiates to become the common innate lymphoid progenitor (CILp) and the t-cell progenitor (Tp) through the process of lymphopoiesis.
Common innate lymphoid progenitors may then be differentiated into a natural killer progenitor (NKp) or a common helper like innate lymphoid progenitor (CHILp). NKp cells may then be induced to differentiate into natural killer cells by IL-15. CHILp cells may be induced to differentiate into ILC1 cells by IL-15, into ILC2 cells by IL-7 or ILC3 cells by IL-7 as well.
T-cell progenitors may differentiate into naïve CD8+ cells or naïve CD4+ cells. Naïve CD8+ cells may then further differentiate into TC1 cells upon IL-12 exposure, while IL-4 can induce differentiation into TC2 cells and IL-1 or IL-23 can induce differentiation into TC17 cells. Naïve CD4+ cells may differentiate into TH1 cells upon IL-12 exposure, TH2 upon IL-4 exposure or TH17 upon IL-1 or IL-23 exposure.
Type 1 immunity
Type 1 immunity makes use of the type 1 subset for each of these cell types. By secreting interferon gamma and TNF, TH1, TC1, and group 1 ILCS activate macrophages, converting them to potent effector cells. It provides defense against intracellular bacteria, protozoa, and viruses. It is also responsible for inflammation and autoimmunity with diseases such as rheumatoid arthritis, multiple sclerosis, and inflammatory bowel disease all being implicated in type 1 immunity. Type 1 immunity consists of these cells:
CD4+ TH1 cells
CD8+ cytotoxic T cells (Tc1)
T-bet+ interferon gamma-producing group 1 ILCs (ILC1s and natural killer cells)
CD4+ TH1 Cells
It has been found in both mice and humans that the signature cytokines for these cells are interferon gamma and lymphotoxin alpha. The main cytokine for differentiation into TH1 cells is IL-12 which is produced by dendritic cells in response to the activation of pattern recognition receptors. T-bet is a distinctive transcription factor of TH1 cells. TH1 cells are also characterized by the expression of chemokine receptors which allow their movement to sites of inflammation. The main chemokine receptors on these cells are CXCR3A and CCR5. Epithelial cells and keratinocytes are able to recruit TH1 cells to sites of infection by releasing the chemokines CXCL9, CXCL10 and CXCL11 in response to interferon gamma. Additionally, interferon gamma secreted by these cells seems to be important in downregulating tight junctions in the epithelial barrier.
CD8+ TC1 Cells
These cells generally produce interferon gamma. Interferon gamma and IL-12 promote differentiation toward TC1 cells. T-bet activation is required for both interferon gamma and cytolytic potential. CCR5 and CXCR3 are the main chemokine receptors for this cell.
Group 1 ILCs
Group 1 ILCs are defined to include ILCs expressing the transcription factor T-bet and were originally thought to only include natural killer cells. Recently, a large number of NKp46+ cells have been identified that express certain master transcription factors which allow them to be designated as a distinct lineage of natural killer cells termed ILC1s. ILC1s are characterized by the ability to produce interferon gamma, TNF, GM-CSF and IL-2 in response to cytokine stimulation but have low or no cytotoxic ability.
See also
Immune system
Humoral immunity (vs. cell-mediated immunity)
Immunity
References
Bibliography
Cell-mediated immunity (Encyclopædia Britannica)
Chapter 8:T Cell-Mediated Immunity Immunobiology: The Immune System in Health and Disease. 5th edition.
The 3 major types of innate and adaptive cell-mediated effector immunity
Innate lymphocytes – lineage, localization and timing of differentiation
Further reading
Cell-mediated immunity: How T cells recognize and respond to foreign antigens
Immunology
Helper
Human cells
Phagocytes
Cell biology
Immune system
Lymphatic system
Infectious diseases
Cell signaling | Cell-mediated immunity | Chemistry,Biology | 1,868 |
14,726,779 | https://en.wikipedia.org/wiki/HD%20213240 | HD 213240 is a possible binary star system in the constellation Grus. It has an apparent visual magnitude of 6.81, which lies below the limit of visibility for normal human sight. The system is located at a distance of 133.5 light years from the Sun based on parallax. The primary has an absolute magnitude of 3.77.
This is an ordinary G-type main-sequence star with a stellar classification of G0/G1V. It is a metal-rich star with an age that has been calculated as being anywhere from 2.7 to 4.6 billion years. The star has 1.6 times the mass of the Sun and 1.56 times the Sun's radius. It is spinning with a projected rotational velocity of 3.5 km/s. The star is radiating 2.69 times the luminosity of the Sun from its photosphere at an effective temperature of 5,921 K.
A red dwarf companion star was detected in 2005 with a projected separation of 3,898 AU.
Planetary system
The Geneva extrasolar planet search team discovered a planet orbiting this star in 2001. Since this planet was discovered by radial velocity, only its minimum mass was initially known, and there was a 5% chance of it being massive enough to be a brown dwarf. In 2023, the inclination and true mass of HD 213240 b were determined via astrometry, confirming its planetary nature.
See also
HD 212301
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Binary stars
Grus (constellation)
CD-50 13701
213240
111143 | HD 213240 | Astronomy | 337 |
6,153,368 | https://en.wikipedia.org/wiki/Flow%20chemistry | In flow chemistry, also called reactor engineering, a chemical reaction is run in a continuously flowing stream rather than in batch production. In other words, pumps move fluid into a reactor, and where tubes join one another, the fluids contact one another. If these fluids are reactive, a reaction takes place. Flow chemistry is a well-established technique for use at a large scale when manufacturing large quantities of a given material. However, the term has only been coined recently for its application on a laboratory scale by chemists and describes small pilot plants, and lab-scale continuous plants. Often, microreactors are used.
Batch vs. flow
Comparing parameter definitions in Batch vs Flow
Reaction stoichiometry: In batch production this is defined by the concentration of chemical reagents and their volumetric ratio. In flow this is defined by the concentration of reagents and the ratio of their flow rate.
Residence time: In batch production this is determined by how long a vessel is held at a given temperature. In flow the volumetric residence time is given by the ratio of the volume of the reactor to the overall flow rate, since plug flow reactors are most often used.
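A small numerical sketch of the flow-side definition of stoichiometry above (the concentrations and flow rates are assumed values chosen only for illustration):

```python
# Reaction stoichiometry in flow is set by the reagent concentrations and the
# ratio of the pump flow rates (all numbers below are assumed).
conc_a_mol_l = 0.50          # reagent A concentration, mol/L
conc_b_mol_l = 1.00          # reagent B concentration, mol/L
flow_a_ml_min = 2.0          # pump A flow rate, mL/min
flow_b_ml_min = 1.0          # pump B flow rate, mL/min

molar_flow_a = conc_a_mol_l * flow_a_ml_min / 1000   # mol/min
molar_flow_b = conc_b_mol_l * flow_b_ml_min / 1000   # mol/min
print(f"A:B stoichiometry = {molar_flow_a / molar_flow_b:.2f}")   # 1.00
```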
Running flow reactions
Choosing to run a chemical reaction using flow chemistry, either in a microreactor or other mixing device offers a variety of pros and cons.
Advantages
Reaction temperature can be raised above the solvent's boiling point as the volume of the laboratory devices is typically small. Typically, non-compressible fluids are used with no gas volume so that the expansion factor as a function of pressure is small.
Mixing can be achieved within seconds at the smaller scales used in flow chemistry.
Heat transfer is intensified. Mostly, because the area to volume ratio is large. As a result, endothermic and exothermic reactions can be thermostated easily and consistently. The temperature gradient can be steep, allowing efficient control over reaction time.
Safety is increased:
Thermal mass of the system is dominated by the apparatus, making thermal runaways unlikely.
Smaller reaction volume is also considered a safety benefit.
The reactor operates under steady-state conditions.
Flow reactions can be automated with far less effort than batch reactions. This allows for unattended operation and experimental planning. By coupling the output of the reactor to a detector system, it is possible to go further and create an automated system which can sequentially investigate a range of possible reaction parameters (varying stoichiometry, residence time and temperature) and therefore explore reaction parameters with little or no intervention.
Typical drivers are higher yields/selectivities, less needed manpower, or a higher safety level.
Multi step reactions can be arranged in a continuous sequence. This can be especially beneficial if intermediate compounds are unstable, toxic, or sensitive to air since they will exist only momentarily and in very small quantities.
The position along the flowing stream and reaction time point are directly related to one another. This means that it is possible to arrange the system such that further reagents can be introduced into the flowing reaction stream at a precise time point that is desired.
It is possible to arrange a flowing system such that purification is coupled with the reaction. There are three primary techniques that are used:
Solid phase scavenging
Chromatographic separation
Liquid/Liquid Extraction
Reactions that involve reagents containing dissolved gases are easily handled, whereas in batch a pressurized "bomb" reactor would be necessary.
Multi-phase liquid reactions (e.g. phase transfer catalysis) can be performed in a straightforward way, with high reproducibility over a range of scales and conditions.
Scale up of a proven reaction can be achieved rapidly with little or no process development work, by either changing the reactor volume or by running several reactors in parallel, provided that flows are recalculated to achieve the same residence times.
Disadvantages
Dedicated equipment is needed for precise continuous dosing (e.g. pumps), connections, etc.
Start-up and shut-down procedures have to be established.
Scale-up of micro effects such as the high area to volume ratio is not possible and economy of scale may not apply. Typically, a scale-up leads to a dedicated plant.
Safety issues for the storage of reactive material still apply.
The drawbacks have been discussed in view of establishing small scale continuous production processes by Pashkova and Greiner.
Continuous flow reactors
Continuous reactors are typically tube-like and manufactured from non-reactive materials such as stainless steel, glass, and polymers. Mixing methods include diffusion alone (if the diameter of the reactor is small e.g. <1 mm, such as in microreactors) and static mixers. Continuous flow reactors allow good control over reaction conditions including heat transfer, time, and mixing.
The residence time of the reagents in the reactor (i.e. the amount of time that the reaction is heated or cooled) is calculated from the volume of the reactor and the flow rate through it: residence time = reactor volume / volumetric flow rate.
Therefore, to achieve a longer residence time, reagents can be pumped more slowly and/or a larger volume reactor used. Production rates can vary from nanoliters to liters per minute.
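A minimal sketch of the residence time calculation, using an assumed reactor volume and assumed flow rates (not values from the text):

```python
# Residence time = reactor volume / total volumetric flow rate.
reactor_volume_ml = 10.0      # e.g. a 10 mL coil reactor (assumed)
flow_a_ml_min = 0.5           # pump A (assumed)
flow_b_ml_min = 0.5           # pump B (assumed)

total_flow_ml_min = flow_a_ml_min + flow_b_ml_min
residence_time_min = reactor_volume_ml / total_flow_ml_min
print(f"Residence time: {residence_time_min:.1f} min")   # 10.0 min

# Halving both flow rates doubles the residence time in the same reactor.
```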
Some examples of flow reactors are spinning disk reactors; spinning tube reactors; multi-cell flow reactors; oscillatory flow reactors; microreactors; hex reactors; and 'aspirator reactors'.
In an aspirator reactor a pump propels one reagent, which causes a reactant to be sucked in. This type of reactor was patented around 1941 by the Nobel company for the production of nitroglycerin.
Flow reactor scale
The smaller scale of microflow reactors or microreactors can make them ideal for process development experiments. Although it is possible to operate flow processes at a ton scale, synthetic efficiency benefits from improved thermal and mass transfer as well as mass transport.
Key application areas
Use of gases in flow
Laboratory scale flow reactors are ideal systems for using gases, particularly those that are toxic or associated with other hazards. The gas reactions that have been most successfully adapted to flow are hydrogenation and carbonylation, although work has also been performed using other gases, e.g. ethylene and ozone.
Reasons for the suitability of flow systems for hazardous gas handling are:
Systems allow the use of a fixed bed catalyst. Combined with low solution concentrations, this allows all compounds to be adsorbed to the catalyst in the presence of gas
Comparatively small amounts of gas are continually exhausted by the system, eliminating the need for many of the special precautions normally required for handling toxic and/or flammable gases
The addition of pressure means that a far greater proportion of the gas will be in solution during the reaction than is the case conventionally
The greatly enhanced mixing of the solid, liquid, and gaseous phases allows the researcher to exploit the kinetic benefits of elevated temperatures without being concerned about the gas being displaced from the solution
Photochemistry in combination with flow chemistry
Continuous flow photochemistry offers multiple advantages over batch photochemistry. Photochemical reactions are driven by the number of photons that are able to activate molecules causing the desired reaction. The large surface area to volume ratio of a microreactor maximizes the illumination, and at the same time allows for efficient cooling, which decreases the thermal side products.
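A rough illustration of the surface-area-to-volume argument (an added sketch, not from the source; the channel and vessel diameters are hypothetical): for a cylindrical channel of diameter d, the illuminated wall area per unit volume is 4/d, so it grows rapidly as the channel narrows.

    # Surface-area-to-volume ratio of a cylindrical flow channel:
    # lateral area / volume = (pi * d * L) / (pi * d**2 * L / 4) = 4 / d
    def surface_to_volume(diameter_m):
        return 4.0 / diameter_m  # units: 1/m

    print(surface_to_volume(0.5e-3))  # 0.5 mm microchannel -> 8000 per metre
    print(surface_to_volume(0.05))    # 5 cm wide vessel    -> 80 per metre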
Electrochemistry in combination with flow chemistry
Continuous flow electrochemistry, like continuous photochemistry, offers many advantages over analogous batch conditions. Electrochemical reactions, like photochemical reactions, can be considered 'reagent-less': the reaction is facilitated by the number of electrons that are able to activate molecules, causing the desired reaction. Continuous electrochemistry apparatus reduces the distance between the electrodes used, allowing better control of the number of electrons transferred to the reaction media and thereby enabling better control and selectivity. Recent developments in electrochemical flow systems have enabled the combination of reaction-oriented electrochemical flow systems with species-focused spectroscopy, which allows a complete analysis of reactions involving multiple electron transfer steps, as well as unstable intermediates. These systems, which are referred to as spectroelectrochemistry systems, can enable the use of UV-vis as well as more complex methods such as electrochemiluminescence. Furthermore, using electrochemistry allows another degree of flexibility, since the user has control not only over the flow parameters and the nature of the electrochemical measurement itself but also over the geometry or nature of the electrode (or electrodes, in the case of an electrode array).
Process development
Process development changes from a serial approach to a parallel approach. In batch, the chemist works first, followed by the chemical engineer. In flow chemistry this becomes a parallel approach, in which the chemist and chemical engineer work interactively. Typically there is a plant setup in the lab, which is a tool for both. This setup can be either commercial or noncommercial. The development scale can be small (ml/hour) for idea verification using a chip system, and in the range of a couple of liters per hour for scalable systems such as the flow miniplant technology. Chip systems are mainly used for liquid-liquid applications, while flow miniplant systems can deal with solids or viscous material.
Scale up of microwave reactions
Microwave reactors are frequently used for small-scale batch chemistry. However, due to the extremes of temperature and pressure reached in a microwave it is often difficult to transfer these reactions to conventional non-microwave apparatus for subsequent development, leading to difficulties with scaling studies. A flow reactor with suitable high-temperature ability and pressure control can directly and accurately mimic the conditions created in a microwave reactor. This eases the synthesis of larger quantities by extending reaction time.
Manufacturing scale solutions
Flow systems can be scaled to the tons per hour scale. Plant redesign (batch to conti for an existing plant), Unit Operation (exchanging only one reaction step) and Modular Multi-purpose (Cutting a continuous plant into modular units) are typical realization solutions for flow processes.
Other uses of flow
It is possible to run experiments in flow using more sophisticated techniques, such as solid phase chemistries. Solid phase reagents, catalysts or scavengers can be used in solution and pumped through glass columns, for example, the synthesis of alkaloid natural product oxomaritidine using solid phase chemistries.
There is an increasing interest in polymerization as a continuous flow process. For example, Reversible Addition-Fragmentation chain Transfer or RAFT polymerization.
Continuous flow techniques have also been used for the controlled generation of nanoparticles. The very rapid mixing and excellent temperature control of microreactors are able to give consistent and narrow particle size distribution of nanoparticles.
Segmented flow chemistry
As discussed above, running experiments in continuous flow systems is difficult, especially when one is developing new chemical reactions, which requires screening of multiple components and varying stoichiometry, temperature, and residence time. In continuous flow, experiments are performed serially, which means only one experimental condition can be tested at a time. Experimental throughput is highly variable, as typically five times the residence time is needed to reach steady state. For temperature variation, the thermal mass of the reactor as well as peripherals such as fluid baths needs to be considered. More often than not, the analysis time also needs to be considered.
Segmented flow is an approach that improves upon the speed in which screening, optimization, and libraries can be conducted in flow chemistry. Segmented flow uses a "Plug Flow" approach where specific volumetric experimental mixtures are created and then injected into a high-pressure flow reactor. Diffusion of the segment (reaction mixture) is minimized by using immiscible solvent on the leading and rear ends of the segment.
One of the primary benefits of segmented flow chemistry is the ability to run experiments in a serial/parallel manner, where experiments that share the same residence time and temperature can be repeatedly created and injected. In addition, the volume of each experiment is independent of the volume of the flow tube, thereby saving a significant amount of reactant per experiment. When performing reaction screening and building libraries, segment composition is typically varied by the composition of matter. When performing reaction optimization, segments vary by stoichiometry.
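The distinction between screening (varying the composition of matter) and optimization (varying stoichiometry) can be sketched as follows; the reagent names, segment volume, and equivalents are hypothetical placeholders, not values from the source.

    # Sketch of planning segmented-flow experiments (hypothetical values).
    # Screening: vary the composition of matter (different catalysts/bases).
    # Optimization: keep the components fixed and vary stoichiometry.
    from itertools import product

    segment_volume_ul = 200          # each experiment is one small segment
    catalysts = ["cat_A", "cat_B", "cat_C"]
    bases = ["base_X", "base_Y"]
    equivalents = [1.0, 1.5, 2.0]    # stoichiometry of one reaction partner

    screening_segments = [
        {"catalyst": c, "base": b, "equiv": 1.2, "volume_ul": segment_volume_ul}
        for c, b in product(catalysts, bases)
    ]
    optimization_segments = [
        {"catalyst": "cat_A", "base": "base_X", "equiv": e, "volume_ul": segment_volume_ul}
        for e in equivalents
    ]

    # Each dictionary describes one injected segment; all share the same
    # residence time and temperature when queued serially into the reactor.
    print(len(screening_segments), len(optimization_segments))  # 6 3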
Segmented flow is also used with online LCMS, both analytical and preparative where the segments are detected when exiting the reactor using UV and subsequently diluted for analytical LCMS or injected directly for preparative LCMS.
See also
Chemical reaction
Microfluidics
Microreactor
Organic chemistry
Plug flow reactor
References
External links
ReelReactor Continuous Chemical and Biological Reactor
Continuous flow multi-step organic synthesis - a Chemical Science Mini Review by Damien Webb and Timothy F. Jamison discussing the current state of the art and highlighting recent progress and current challenges facing the emerging area of continuous flow techniques for multi-step synthesis. Published by the Royal Society of Chemistry
Continuous flow reactors: a perspective Review by Paul Watts and Charlotte Wiles. Published by the Royal Society of Chemistry
Flow Chemistry: Continuous Synthesis and Purification of Pharmaceuticals and Fine Chemicals Short Course offered at MIT by Professors Timothy Jamison and Klavs Jensen
Industrial processes
Chemical engineering
Microfluidics | Flow chemistry | Chemistry,Materials_science,Engineering | 2,651 |
43,515,953 | https://en.wikipedia.org/wiki/Phi%20Hydrae | The Bayer designation Phi Hydrae (φ Hya / φ Hydrae) is shared by three star systems, in the constellation Hydra:
φ1
φ2
φ3
The three stars form a triangle between the brighter μ Hydrae and ν Hydrae.
Hydrae, Phi
Hydra (constellation) | Phi Hydrae | Astronomy | 60 |
65,971,393 | https://en.wikipedia.org/wiki/Maryland%27s%20%22Rain%20Tax%22 | Maryland's "rain tax" was implemented in 2012 through the Watershed Protection and Restoration Act to fund stormwater management aiming to reduce the level of pollution in the Chesapeake Bay. This bill, HB 987, utilized a stormwater fee in the ten most urban jurisdictions in Maryland.
Background
The first stormwater fee nationwide was enacted in Washington in 1974. There are now nearly 1,500 jurisdictions with similar policies to address stormwater management. Numerous counties in Maryland have implemented fees and programs to address polluted runoff since the 1980s. In 2010, the U.S. EPA ordered the states in the Chesapeake Bay watershed to reduce stormwater runoff through independent funding methods. Maryland voted to use stormwater fees to cover the $14.8 billion cost.
The "Rain Tax"
The "rain tax" raised revenue to improve the stormwater management system while creating a financial incentive to minimize the construction of and replace current impervious surfaces. Collection of the stormwater fee on impervious surfaces varied from annually on the property tax bill to quarterly on the water bill. The rates and number of square feet used to calculate the Equivalent Residential Unit were set by local officials across the ten jurisdictions to adequately finance the work needed to meet the targets of the Chesapeake Clean Water Blueprint. The revenue collected was used to maintain and repair the stormwater infrastructure to reduce pollution, improve water quality, and enhance the livability of these jurisdictions.
Outcomes
Frederick County adopted an annual tax of one cent in protest, while Carroll County refused to impose any tax. Government agencies sited on properties with impervious surfaces, including the Department of the Navy, declined to pay the stormwater fee. State and local governments and volunteer fire departments were exempt, whereas churches and non-profit organizations were not. Residents and businesses had the opportunity to participate in stormwater remediation projects to lower their fees. In 2015, revisions eliminated the tax mandate and allowed each jurisdiction to determine whether to impose the tax. These jurisdictions are still required to clean up stormwater pollution and must demonstrate that they have adequate funding and plans to address the issue. Funding must be assigned to a dedicated fund, now financed from the stormwater fee or general revenues. Jurisdictions, including Baltimore City, have chosen to continue to fund stormwater pollution cleanup through the fee.
See also
Stormwater fee
References
Stormwater management
Environmental tax
Chesapeake Bay watershed
Water resource management in the United States
Environment of Maryland | Maryland's "Rain Tax" | Chemistry,Environmental_science | 483 |
7,884,988 | https://en.wikipedia.org/wiki/Salt%20tectonics | Salt tectonics, or halokinesis, or halotectonics, is concerned with the geometries and processes associated with the presence of significant thicknesses of evaporites containing rock salt within a stratigraphic sequence of rocks. Salt is prone to such deformation due both to its low density, which does not increase with burial, and its low strength.
Salt structures (excluding undeformed layers of salt) have been found in more than 120 sedimentary basins around the world.
Passive salt structures
Structures may form during continued sedimentary loading, without any external tectonic influence, due to gravitational instability. Pure halite has a density of 2160 kg/m3. When initially deposited, sediments generally have a lower density of 2000 kg/m3, but with loading and compaction their density increases to 2500 kg/m3, which is greater than that of salt. Once the overlying layers have become denser, the weak salt layer will tend to deform into a characteristic series of ridges and depressions, due to a form of Rayleigh–Taylor instability. Further sedimentation will be concentrated in the depressions and the salt will continue to move away from them into the ridges. At a late stage, diapirs tend to initiate at the junctions between ridges, their growth fed by movement of salt along the ridge system, continuing until the salt supply is exhausted. During the later stages of this process the top of the diapir remains at or near the surface, with further burial being matched by diapir rise, and is sometimes referred to as downbuilding. The Schacht Asse II and Gorleben salt domes in Germany are an example of a purely passive salt structure.
Such structures do not always form when a salt layer is buried beneath a sedimentary overburden. This can be due to a relatively high strength overburden or to the presence of sedimentary layers interbedded within the salt unit that increase both its density and strength.
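A back-of-the-envelope sketch using only the densities quoted above (the calculation itself is an added illustration, not from the source):

    # Densities quoted above, in kg/m^3.
    salt = 2160.0          # pure halite, essentially independent of burial depth
    fresh_sediment = 2000.0
    compacted_sediment = 2500.0

    # Freshly deposited sediment is lighter than the salt, so no instability:
    print(fresh_sediment - salt)      # -160 kg/m^3 (stable)
    # After compaction the overburden is denser, giving the buoyancy contrast
    # that drives the Rayleigh-Taylor-style deformation described above:
    print(compacted_sediment - salt)  # 340 kg/m^3 (unstable)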
Active salt structures
Active tectonics will increase the likelihood of salt structures developing. In the case of extensional tectonics, faulting will both reduce the strength of the overburden and thin it. In an area affected by thrust tectonics, buckling of the overburden layer will allow the salt to rise into the cores of anticlines, as seen in salt domes in the Zagros Mountains and in El Gordo diapir (Coahuila fold-and-thrust belt, NE Mexico).
If the pressure within the salt body becomes sufficiently high it may be able to push through its overburden, this is known as forceful diapirism. Many salt diapirs may contain elements of both active and passive salt movement. An active salt structure may pierce its overburden and from then on continue to develop as a purely passive salt diapir.
Reactive salt structures
In those cases where salt layers do not have the conditions necessary to develop passive salt structures, the salt may still move into relatively low pressure areas around developing folds and faults. Such structures are described as reactive.
Salt detached fault systems
When one or more salt layers are present during extensional tectonics, a characteristic set of structures is formed. Extensional faults propagate up from the middle part of the crust until they encounter the salt layer. The weakness of the salt prevents the fault from propagating through. However, continuing displacement on the fault offsets the base of the salt and causes bending of the overburden layer. Eventually the stresses caused by this bending will be sufficient to fault the overburden. The types of structures developed depend on the initial salt thickness. In the case of a very thick salt layer there is no direct spatial relationship between the faulting beneath the salt and that in the overburden, such a system is said to be unlinked. For intermediate salt thicknesses, the overburden faults are spatially related to the deeper faults, but offset from them, normally into the footwall; these are known as soft-linked systems. When the salt layer becomes thin enough, the fault that develops in the overburden is closely aligned with that beneath the salt, and forms a continuous fault surface after only a relatively small displacement, forming a hard-linked fault.
In areas of thrust tectonics salt layers act as preferred detachment planes. In the Zagros fold and thrust belt, variations in the thickness and therefore effectiveness of the late Neoproterozoic to Early Cambrian Hormuz salt are thought to have had a fundamental control on the overall topography.
Salt weld
When a salt layer becomes too thin to be an effective detachment layer, due to salt movement, dissolution or removal by faulting, the overburden and the underlying sub-salt basement become effectively welded together. This may cause the development of new faults in the cover sequence and is an important consideration when modeling the migration of hydrocarbons.
Salt welds may also develop in the vertical direction by putting the sides of a former diapir in contact.
Allochthonous salt structures
Salt that pierces to the surface, either on land or beneath the sea, tends to spread laterally away and such salt is said to be "allochthonous". Salt glaciers are formed on land where this happens in an arid environment, such as in the Zagros Mountains. Offshore tongues of salt are generated that may join together with others from neighbouring piercements to form canopies.
Effects on sedimentary systems
On passive margins where salt is present, such as the Gulf of Mexico, salt tectonics largely control the evolution of deep-water sedimentary systems; for example submarine channels, as modern and ancient case studies show.
Economic importance
A significant proportion of the world's hydrocarbon reserves are found in structures related to salt tectonics, including many in the Middle East, the South Atlantic passive margins (Brazil, Gabon and Angola), the Gulf of Mexico, and the Pricaspian Basin.
See also
References
External links
Gorleben salt dome
NOAA site on brine pools
Salt Tectonics Publications
Salts
Geological processes
Structural geology
Tectonics
Evaporite | Salt tectonics | Chemistry | 1,246 |
38,614,208 | https://en.wikipedia.org/wiki/Force%20Troops%20Command | Force Troops Command was a combat support and combat service support command of the British Army. Its headquarters was at Upavon, Wiltshire. It was formed in 2013 as a re-designation of the previous Headquarters Theatre Troops. Force Troops Command was renamed as 6th (United Kingdom) Division in August 2019.
History
Previously, General Officer Commanding, Theatre Troops was a senior British Army officer responsible for the provision of Combat Support and Combat Service Support operations worldwide in support of the UK's Defence Strategy. On formation in 2003 it included 1st Artillery Brigade; 7th Air Defence Brigade; Commander Royal Engineers (CRE) HQ RE Theatre Troops with 12th and 29th Engineer Brigades; 1st, 2nd, and 11th Signal Brigades; and two logistic brigades 102 Logistic Brigade in Germany and 101 Logistic Brigade in the United Kingdom which contained logistic units to support the two deployable divisions (1st Armoured Division in Germany and 3rd Mechanised Division in the United Kingdom). 104th Logistic Support Brigade with the specialist units needed to deploy a force overseas such as pioneers, movements and port units was also part of Theatre Troops. The final two components were 2 Medical Brigade and Commander, Equipment Support.
Theatre Troops became Force Troops Command under Army 2020 in 2013 and reached Full Operating Capability (FOC) on 1 April 2014. 101 or 102 Logistic Brigades subsequently left Force Troops Command.
The Joint Ground-Based Air Defence Command, which was jointly controlled by RAF Air Command, was replaced by 7 Air Defence Group on 1 April 2019.
Force Troops Command was renamed as 6th (United Kingdom) Division on 1 August 2019, with sub-units consisting of 1st Signal Brigade, 11th Signal Brigade, 1st Intelligence Surveillance and Reconnaissance Brigade, 77th Brigade and the Specialised Infantry Group. It will sit alongside restructured 1st UK Division and 3rd UK Division.
Structure
Force Troops Command comprised nine ‘functional’ brigades. The various units included: The Intelligence and Surveillance Brigade which provided integrated intelligence surveillance and reconnaissance capabilities, drawing specifically on lessons from Afghanistan. 1st Artillery Brigade delivered both close support artillery and precision fires, as well as leading Air-Land Integration. 8 Engineer Brigade commanded the close support engineer units, as well as Explosive Ordnance Disposal and Search, Force Support and Infrastructure Groups. The 77th Brigade was involved in conflict prevention and stabilisation through the projection of soft power.
Commanders
Commanders have included:
General Officer Commanding, Theatre Troops
2001–2004 Major General James Shaw
2004–2006 Major General Tim Cross
2006–2008 Major General Hamish Rollo
2008–2011 Major General Bruce Brealey
2011–2013 Major General Shaun Burley
General Officer Commanding, Force Troops Command
2013–2015 Major General Tim Radford
2015–2017 Major General Tyrone Urch
2017–2019 Major General Tom Copinger-Symes
July 2019–August 2019 Major General James Bowder
Footnotes
References
External links
6th (United Kingdom) Division
Commands of the British Army
Military units and formations established in 2013
Organisations based in Wiltshire
Army 2020
Military units and formations disestablished in 2019 | Force Troops Command | Engineering | 603 |
219,202 | https://en.wikipedia.org/wiki/Binary%20code | A binary code represents text, computer processor instructions, or any other data using a two-symbol system. The two-symbol system used is often "0" and "1" from the binary number system. The binary code assigns a pattern of binary digits, also known as bits, to each character, instruction, etc. For example, a binary string of eight bits (which is also called a byte) can represent any of 256 possible values and can, therefore, represent a wide variety of different items.
In computing and telecommunications, binary codes are used for various methods of encoding data, such as character strings, into bit strings. Those methods may use fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal or hexadecimal notation. There are many character sets and many character encodings for them.
A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lower case a, if represented by the bit string 01100001 (as it is in the standard ASCII code), can also be represented as the decimal number 97.
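As a small illustration (added here; Python chosen arbitrarily), the ASCII example above can be checked directly:

    # Round-trip between a character, its 8-bit string, and its decimal value.
    ch = "a"
    bits = format(ord(ch), "08b")     # ord("a") == 97
    print(bits)                       # 01100001
    print(int(bits, 2))               # 97
    print(chr(int("01100001", 2)))    # a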
History of binary codes
Invention
The modern binary number system, the basis for binary code, was invented by Gottfried Leibniz in 1689 and appears in his article Explication de l'Arithmétique Binaire (English: Explanation of the Binary Arithmetic), which uses only the characters 1 and 0 and includes some remarks on its usefulness. Leibniz's system uses 0 and 1, like the modern binary numeral system. Binary numerals were central to Leibniz's intellectual and theological ideas. He believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo or creation out of nothing. In Leibniz's view, binary numbers represented a fundamental form of creation, reflecting the simplicity and unity of the divine. Leibniz was also attempting to find a way to translate logical reasoning into pure mathematics. He viewed the binary system as a means of simplifying complex logical and mathematical processes, believing that it could be used to express all concepts of arithmetic and logic.
Previous Ideas
Leibniz explained in his work that he encountered the I Ching, attributed to Fu Xi and dating from the 9th century BC in China, through the French Jesuit Joachim Bouvet, and noted with fascination how its hexagrams correspond to the binary numbers from 0 to 111111, concluding that this mapping was evidence of major Chinese accomplishments in the sort of philosophical visual binary mathematics he admired. Leibniz saw the hexagrams as an affirmation of the universality of his own religious belief. After Leibniz's ideas had been ignored, the book confirmed his theory that life could be simplified or reduced to a series of straightforward propositions. He created a system consisting of rows of zeros and ones. During this time period, Leibniz had not yet found a use for this system. The binary system of the I Ching is based on the duality of yin and yang. Slit drums with binary tones are used to encode messages across Africa and Asia. The Indian scholar Pingala (around 5th–2nd centuries BC) developed a binary system for describing prosody in his Chandashutram.
Mangareva people in French Polynesia were using a hybrid binary-decimal system before 1450. In the 11th century, scholar and philosopher Shao Yong developed a method for arranging the hexagrams which corresponds, albeit unintentionally, to the sequence 0 to 63, as represented in binary, with yin as 0, yang as 1 and the least significant bit on top. The ordering is also the lexicographical order on sextuples of elements chosen from a two-element set.
In 1605 Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text. Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".
Boolean Logical System
George Boole published a paper in 1847 called 'The Mathematical Analysis of Logic' that describes an algebraic system of logic, now known as Boolean algebra. Boole's system was based on binary, a yes-no, on-off approach that consisted of the three most basic operations: AND, OR, and NOT. This system was not put into use until a graduate student from Massachusetts Institute of Technology, Claude Shannon, noticed that the Boolean algebra he learned was similar to an electric circuit. In 1937, Shannon wrote his master's thesis, A Symbolic Analysis of Relay and Switching Circuits, which implemented his findings. Shannon's thesis became a starting point for the use of the binary code in practical applications such as computers, electric circuits, and more.
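A brief sketch of the three basic operations Boole described, expressed on the binary values 0 and 1 (the snippet is an added illustration, not from the source):

    # Truth tables for AND, OR, and NOT over the values 0 and 1.
    from itertools import product

    for a, b in product((0, 1), repeat=2):
        print(a, b, "AND:", a & b, "OR:", a | b)
    for a in (0, 1):
        print("NOT", a, "=", 1 - a)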
Other forms of binary code
The bit string is not the only type of binary code: in fact, a binary system in general is any system that allows only two choices, such as a switch in an electronic system or a simple true-or-false test.
Braille
Braille is a type of binary code that is widely used by the blind to read and write by touch, named for its creator, Louis Braille. This system consists of grids of six dots each, three per column, in which each dot has two states: raised or not raised. The different combinations of raised and flattened dots are capable of representing all letters, numbers, and punctuation signs.
Bagua
The bagua are diagrams used in feng shui, Taoist cosmology and I Ching studies. The ba gua consists of 8 trigrams; bā meaning 8 and guà meaning divination figure. The same word is used for the 64 guà (hexagrams). Each figure combines three lines (yáo) that are either broken (yin) or unbroken (yang). The relationships between the trigrams are represented in two arrangements, the primordial, "Earlier Heaven" or "Fuxi" bagua, and the manifested, "Later Heaven", or "King Wen" bagua. (See also, the King Wen sequence of the 64 hexagrams).
Ifá, Ilm Al-Raml and Geomancy
The Ifá/Ifé system of divination in African religions, such as of Yoruba, Igbo, and Ewe, consists of an elaborate traditional ceremony producing 256 oracles made up by 16 symbols with 256 = 16 x 16. An initiated priest, or Babalawo, who had memorized oracles, would request sacrifice from consulting clients and make prayers. Then, divination nuts or a pair of chains are used to produce random binary numbers, which are drawn with sandy material on an "Opun" figured wooden tray representing the totality of fate.
Through the spread of Islamic culture, Ifé/Ifá was assimilated as the "Science of Sand" (ilm al-raml), which then spread further and became "Science of Reading the Signs on the Ground" (Geomancy) in Europe.
This was thought to be another possible route from which computer science was inspired, as Geomancy arrived at Europe at an earlier stage (about 12th Century, described by Hugh of Santalla) than I Ching (17th Century, described by Gottfried Wilhelm Leibniz).
Coding systems
ASCII code
The American Standard Code for Information Interchange (ASCII), uses a 7-bit binary code to represent text and other characters within computers, communications equipment, and other devices. Each letter or symbol is assigned a number from 0 to 127. For example, lowercase "a" is represented by 1100001 as a bit string (which is decimal 97).
Binary-coded decimal
Binary-coded decimal (BCD) is a binary encoded representation of integer values that uses a 4-bit nibble to encode decimal digits. Four binary bits can encode up to 16 distinct values; but, in BCD-encoded numbers, only ten values in each nibble are legal, and encode the decimal digits zero, through nine. The remaining six values are illegal and may cause either a machine exception or unspecified behavior, depending on the computer implementation of BCD arithmetic.
BCD arithmetic is sometimes preferred to floating-point numeric formats in commercial and financial applications where the complex rounding behavior of floating-point numbers is inappropriate.
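A minimal sketch of packing decimal digits into BCD nibbles (the function names are hypothetical; only the ten legal nibble patterns per digit are produced):

    # Encode each decimal digit as its own 4-bit nibble, and decode it back.
    def to_bcd(n):
        return " ".join(format(int(digit), "04b") for digit in str(n))

    def from_bcd(bcd):
        nibbles = bcd.split()
        if any(int(nib, 2) > 9 for nib in nibbles):
            raise ValueError("illegal BCD nibble")  # one of the six unused patterns
        return int("".join(str(int(nib, 2)) for nib in nibbles))

    print(to_bcd(127))                 # 0001 0010 0111
    print(from_bcd("0001 0010 0111"))  # 127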
Early uses of binary codes
1875: Émile Baudot "Addition of binary strings in his ciphering system," which, eventually, led to the ASCII of today.
1884: The Linotype machine where the matrices are sorted to their corresponding channels after use by a binary-coded slide rail.
1932: C. E. Wynn-Williams "Scale of Two" counter
1937: Alan Turing electro-mechanical binary multiplier
1937: George Stibitz "excess three" code in the Complex Computer
1937: Atanasoff–Berry Computer
1938: Konrad Zuse Z1
Current uses of binary
Most modern computers use binary encoding for instructions and data. CDs, DVDs, and Blu-ray Discs represent sound and video digitally in binary form. Telephone calls are carried digitally on long-distance and mobile phone networks using pulse-code modulation, and on voice over IP networks.
Weight of binary codes
The weight of a binary code, as defined in the table of constant-weight codes, is the Hamming weight of the binary words coding for the represented words or sequences.
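As a one-line illustration (added here), the weight is simply the number of 1 bits in the word:

    # Hamming weight: count of 1 bits in a binary word.
    def hamming_weight(word):
        return word.count("1")

    print(hamming_weight("01100001"))  # 3 (the ASCII bit string for "a")
    print(bin(97).count("1"))          # 3, counting directly from the integer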
See also
Binary number
List of binary codes
Binary file
Unicode
Gray code
References
External links
Sir Francis Bacon's BiLiteral Cypher system, which predates the binary number system.
Table of general binary codes. An updated version of the tables of bounds for small general binary codes.
Table of Nonlinear Binary Codes. Maintained by Simon Litsyn, E. M. Rains, and N. J. A. Sloane. Updated until 1999.
cites some pre-ENIAC milestones.
First book in the world fully written in binary code: (IT) Luigi Usai, 01010011 01100101 01100111 01110010 01100101 01110100 01101001, Independently published, 2023, ISBN 979-8-8604-3980-1. URL consulted September 8, 2023.
Computer data
English inventions
Encodings
Gottfried Wilhelm Leibniz
2 (number) | Binary code | Technology | 2,211 |
29,738,945 | https://en.wikipedia.org/wiki/Fertilaid | Fertilaid as Fertilizer (production ended in 1992)
Fertilaid was one of the first organically certified fertilizers, acknowledged by the California Certification of Organics in 1979. A nationwide process for certification did not exist in the United States until 1990, per the National Organic Food Act. Prior to that, Fertilaid was awarded its first Patent Office Citation in January 1954.
This product was developed by John C. Porter, Sr., (b. 1909 – d. 2008), with his wife Fedil C. Porter (b._) at his side. Mr. Porter, who was a visionary agronomist and soil chemist, made products based on the philosophy that one works with nature, not against it; therefore, since soil is naturally made, it should be healed by understanding nature's processes and then assisted and supported with naturally made products. As far back as the 1950s, the Porters stood openly and publicly against the excessive use of chemical fertilizers and pesticides derived from petrochemicals.
Long years of research yielded a comprehensive program that demonstrated the ability to enhance the soil's fertility while improving crop yields without cost to the environment. Fertilaid research reports from Texas A&M University, other leading universities and colleges, as well as documented research projects conducted by the Rockefeller Foundation, Tracor, the National Coffee Commission, Eli Lilly and Company, and others time and time again announced Fertilaid's superior crop growth and fruit quality.
Fertilaid came in various formulations specifically tailored for production of fruits and vegetables, root crops such as carrots and potatoes, floriculture, lawn/garden/house plants, and turf grass applications. It was a proven bio-organic catalyst that improved plant and fruit quality in virtually any soil condition.
FertiGro, created by Mr. Porter's son, John V. Porter (b.), who worked for the family business since he was a boy, was a blended array of Fertilaid and other nutrients. This formulation helped the plant more efficiently utilize chemical nutrients, and was used for cereal grains, broad-leaf vegetables and commercial turf grass. FertiGro also came as a foliar spray that was less expensive for large-scale applications, reduced volatiles released into the atmosphere, and was proven not to leach into the water table.
Fertilaid products were manufactured in facilities in South Texas, Central Texas, the Gulf Coast of Texas, and in the state of Louisiana, but utilized all around the world.
On March 16, 1973, The Austin Citizen newspaper wrote that Fertilaid "… will result in sweeping changes to the growing systems of the smallest gardener to the largest farmer. Lush greens on harsh desert, hard clay, or alkaline soils and the sudden halt to some serious agricultural problems – namely compaction, excess salinity and depletion of necessary humus – have caused a great deal of comment about the remarkable Fertilaid Organic Activator."
Production of Fertilaid ended in 1992, due to an attempted hostile takeover.
References
Fertilizers | Fertilaid | Chemistry | 647 |
8,110,246 | https://en.wikipedia.org/wiki/Maltotriose | Maltotriose is a trisaccharide (three-part sugar) consisting of three glucose molecules linked with α-1,4 glycosidic bonds.
It is most commonly produced by the digestive enzyme alpha-amylase (a common enzyme in human saliva) on amylose in starch. The creation of both maltotriose and maltose during this process is due to the random manner in which alpha amylase hydrolyses α-1,4 glycosidic bonds.
It is the shortest chain oligosaccharide that can be classified as maltodextrin.
References
Trisaccharides
Types of sugar | Maltotriose | Chemistry | 139 |
46,489,321 | https://en.wikipedia.org/wiki/Antibiotic%20synergy | Antibiotic synergy is one of three responses possible when two or more antibiotics are used simultaneously to treat an infection. In the synergistic response, the applied antibiotics work together to produce an effect more potent than if each antibiotic were applied singly. Compare to the additive effect, where the potency of an antibiotic combination is roughly equal to the combined potencies of each antibiotic singly, and antagonistic effect, where the potency of the combination is less than the combined potencies of each antibiotic.
Clinical interest
Clinical interest in synergism dates back to the early 1950s when practitioners noted that patients with enterococcal endocarditis experienced a high relapse rate when penicillin G alone was used for treatment and a demonstrably lower relapse rate when streptomycin was combined with penicillin G to combat the infection. Since that time the research community has conducted numerous studies regarding the effects and possibilities of antibiotic combinations. Today, combination therapy is recognized as providing a broad spectrum of antibiotic coverage, effectively fighting polymicrobial infections, minimizing selection for antibiotic resistant strains, lowering dose toxicity where applicable, and in some cases providing synergistic activity.
Desirability
Antibiotic synergy is desirable in a clinic sense for several reasons. At the patient level, the boosted antimicrobial potency provided by synergy allows the body to more rapidly clear infections, resulting in shorter courses of antibiotic therapy. Shorter courses of therapy in turn reduce the effects of dose-related toxicity, if applicable. Additionally, synergy aids in total bacterial eradication, more completely removing an infection than would be possible without synergy. At a higher level, synergistic effects are useful for combating resistant bacterial strains through increased potency and for stalling the spread of bacterial resistance through the total eradication of infections, preventing the evolutionary selection of resistant cells and strains.
Current research directions
Current research on antibiotic synergy and potential therapies is moving in three primary directions. Some research is devoted to finding combinations of extant antibiotics which, when combined, exhibit synergy. A classic example of this effect is the interaction between β-lactams, which damage the bacterial cell wall, and aminoglycosides, which inhibit protein synthesis. The damage dealt to the cell wall by β-lactams allows more aminoglycoside molecules to be taken up into the cell than would otherwise be possible, enhancing cell damage. In some cases, antibacterial combinations restore potency to ineffective drugs. Other research has been devoted to finding antibiotic resistance breakers (ARBs) which enhance an antibiotic's potency. This effect is mediated through direct antibacterial activity of the ARB, targeting and destroying mechanisms of bacterial resistance thereby allowing the antibiotic to function properly, interacting with the host to trigger defensive mechanisms, or some combination thereof. The third direction of research involves combining traditional antibiotics with unconventional bactericides such as silver nanoparticles. Silver nanoparticles have strong non-specific interactions with bacterial cells that result in cell wall deformation and the generation of damaging reactive oxygen species (ROS) in the presence of cellular components. These effects are thought to weaken bacterial cells, making them more susceptible to assault from conventional antibiotics.
References
Antibiotics | Antibiotic synergy | Biology | 673 |
972,070 | https://en.wikipedia.org/wiki/Hodge%20301 | Hodge 301 is a star cluster in the Tarantula Nebula, visible from Earth's Southern Hemisphere. The cluster and nebula lie about 168,000 light years away, in one of the Milky Way's orbiting satellite galaxies, the Large Magellanic Cloud.
Hodge 301, along with the cluster R136, is one of two major star clusters situated in the Tarantula Nebula, a region which has seen intense bursts of star formation over the last few tens of millions of years. R136 is situated in the central regions of the nebula, while Hodge 301 is located about 150 light years away, to the north west as seen from Earth. Hodge 301 was formed early on in the current wave of star formation, with an age estimated at 20-25 million years old, some ten times older than R136.
Since Hodge 301 formed, it is estimated that at least 40 stars within it have exploded as supernovae, giving rise to violent gas motions within the surrounding nebula and emission of x-rays. This contrasts with the situation around R136, which is young enough that none of its stars have yet exploded as supernovae; instead, the stars of R136 are emitting fast stellar winds, which are colliding with the surrounding gases. The two clusters thus provide astronomers with a direct comparison between the impact of supernova explosions and stellar winds on surrounding gases.
References
External links
Hodge 301 at ESA/Hubble
Large Magellanic Cloud
Open clusters
Tarantula Nebula
Dorado | Hodge 301 | Astronomy | 307 |
48,429,253 | https://en.wikipedia.org/wiki/Tricholoma%20tenacifolium | Tricholoma tenacifolium is an agaric fungus of the genus Tricholoma. Found in Peninsular Malaysia, it was described as new to science in 1994 by English mycologist E.J.H. Corner.
See also
List of Tricholoma species
References
tenacifolium
Fungi described in 1994
Fungi of Asia
Taxa named by E. J. H. Corner
Fungus species | Tricholoma tenacifolium | Biology | 86 |
60,894,693 | https://en.wikipedia.org/wiki/NGC%202004 | NGC 2004 (also known as ESO 86-SC4) is an open cluster of stars in the southern constellation of Dorado. It was discovered by Scottish astronomer James Dunlop on September 24, 1826. This is a young, massive cluster with an age of about 20 million years and 23,000 times the mass of the Sun. It has a core radius of . NGC 2004 is a member of the Large Magellanic Cloud, which is a satellite galaxy of the Milky Way.
References
External links
Open clusters
2004
ESO objects
Dorado
Large Magellanic Cloud
Astronomical objects discovered in 1826
Discoveries by James Dunlop | NGC 2004 | Astronomy | 124 |
85,029 | https://en.wikipedia.org/wiki/Chemical%20synthesis | Chemical synthesis (chemical combination) is the artificial execution of chemical reactions to obtain one or several products. This occurs by physical and chemical manipulations usually involving one or more reactions. In modern laboratory uses, the process is reproducible and reliable.
A chemical synthesis involves one or more compounds (known as reagents or reactants) that will experience a transformation under certain conditions. Various reaction types can be applied to formulate a desired product. This requires mixing the compounds in a reaction vessel, such as a chemical reactor or a simple round-bottom flask. Many reactions require some form of processing ("work-up") or purification procedure to isolate the final product.
The amount produced by chemical synthesis is known as the reaction yield. Typically, yields are expressed as a mass in grams (in a laboratory setting) or as a percentage of the total theoretical quantity that could be produced based on the limiting reagent. A side reaction is an unwanted chemical reaction that can reduce the desired yield. The word synthesis was used first in a chemical context by the chemist Hermann Kolbe.
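A minimal sketch of the percentage-yield calculation (the masses, molar mass, and mole amounts below are hypothetical, chosen only to illustrate the arithmetic):

    # Percent yield relative to the theoretical maximum set by the limiting reagent.
    def percent_yield(actual_g, theoretical_g):
        return 100.0 * actual_g / theoretical_g

    # Hypothetical example: 0.10 mol of limiting reagent can give at most
    # 0.10 mol of a product with molar mass 180 g/mol, i.e. 18.0 g.
    theoretical_g = 0.10 * 180.0
    print(percent_yield(12.6, theoretical_g))  # 70.0 %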
Strategies
Chemical synthesis employs various strategies to achieve efficient and precise molecular transformations that are more complex than simply converting a reactant A to a reaction product B directly. These strategies can be grouped into approaches for managing reaction sequences, catalysis, reactivity control, and advanced planning.
Reaction Sequences:
Multistep synthesis involves sequential chemical reactions, each requiring its own work-up to isolate intermediates before proceeding to the next stage. For example, the synthesis of paracetamol typically requires three separate reactions. Divergent synthesis starts with a common intermediate, which branches into multiple final products through distinct reaction pathways. Convergent synthesis involves the combination of multiple intermediates synthesized independently to create a complex final product. One-pot synthesis involves multiple reactions in the same vessel, allowing sequential transformations without intermediate isolation, reducing material loss, time, and the need for additional purification. Cascade reactions, a specific type of one-pot synthesis, streamline the process further by enabling consecutive transformations within a single reactant, minimizing resource consumption.
Catalytic Strategies:
Catalysts play a vital role in chemical synthesis by accelerating reactions and enabling specific transformations. Photoredox catalysis provides enhanced control over reaction conditions by regulating the activation of small molecules and the oxidation state of metal catalysts. Biocatalysis uses enzymes as catalysts to speed up chemical reactions with high specificity under mild conditions.
Reactivity Control:
Chemoselectivity ensures that a specific functional group in a molecule reacts while others remain unaffected. Protecting groups temporarily mask reactive sites to enable selective reactions. Kinetic control prioritizes reaction pathways that form products quickly, often yielding less stable compounds. In contrast, thermodynamic control favors the formation of the most stable products.
Advanced Planning and Techniques:
Retrosynthetic analysis is a strategy used to plan complex syntheses by breaking down the target molecule into simpler precursors. Flow chemistry is a continuous reaction method where reactants are pumped through a reactor, allowing precise control over reaction conditions and scalability. This approach has been employed in the large-scale production of pharmaceuticals such as Tamoxifen.
Organic synthesis
Organic synthesis is a special type of chemical synthesis dealing with the synthesis of organic compounds. For the total synthesis of a complex product, multiple procedures in sequence may be required to synthesize the product of interest, needing a lot of time. A purely synthetic chemical synthesis begins with basic lab compounds. A semisynthetic process starts with natural products from plants or animals and then modifies them into new compounds.
Inorganic synthesis
Inorganic synthesis and organometallic synthesis are used to prepare compounds with significant non-organic content. An illustrative example is the preparation of the anti-cancer drug cisplatin from potassium tetrachloroplatinate.
Green Chemistry
Chemical synthesis using green chemistry promotes the design of new synthetic methods and apparatus that simplify operations and seeks environmentally benign solvents. Key principles include atom economy, which aims to incorporate all reactant atoms into the final product, and the reduction of waste and inefficiencies in chemical processes. Innovations in green chemistry contribute to more sustainable and efficient chemical synthesis, reducing the environmental and health impacts of traditional methods.
Applications
Chemical synthesis plays a crucial role across various industries, enabling the development of materials, medicines, and technologies with significant real-world impacts.
Catalysis: The development of catalysts is vital for numerous industrial processes, including petroleum refining, petrochemical production, and pollution control. Catalysts synthesized through chemical processes enhance the efficiency and sustainability of these operations.
Medicine: Organic synthesis plays a vital role in drug discovery, allowing chemists to develop and optimize new drugs by modifying organic molecules. Additionally, the synthesis of metal complexes for medical imaging and cancer treatments is a key application of chemical synthesis, enabling advanced diagnostic and therapeutic techniques.
Biopharmaceuticals: Chemical synthesis is critical in the production of biopharmaceuticals, including monoclonal antibodies and other biologics. Chemical synthesis enables the creation and modification of organic and biologically sourced compounds used in these treatments. Advanced techniques, such as DNA recombinant technology and cell fusion, rely on chemical synthesis to produce biologics tailored for specific diseases, ensuring they work effectively and target diseases precisely.
See also
Beilstein database
Biosynthesis
Chemical engineering
Click chemistry
Electrosynthesis
Methods in Organic Synthesis
Organic synthesis
Peptide synthesis
Total synthesis
Automated synthesis
References
External links
The Organic Synthesis Archive
Natural product syntheses
Chemistry | Chemical synthesis | Chemistry | 1,113 |
61,985,302 | https://en.wikipedia.org/wiki/Sony%20CLI%C3%89%20PEG-TG50 | The Clié PEG-TG50 is a Personal Digital Assistant (PDA) which was manufactured by Sony, released in March 2003. Running the Palm operating system (version 5.0), the TG50 was notable as it featured a built-in backlit mini qwerty keyboard, in lieu of a dedicated handwriting recognition area as was the trend on most other PDAs.
This handheld featured a 320x320 colour LCD, bluetooth, and additional multimedia features, including MP3 and ATRAC3 audio playback, a voice-recorder, and a slot for MemoryStick PRO memory cards. The TG50 was powered by a 200 MHz Intel XScale PXA250 processor, with 16MB of RAM, 11MB of which was available for user data storage. The TG50 also featured the "Jog Dial" scroll wheel on the side of the device, as was common on Sony Clie models, and came with a flip cover to protect the front face of the device when not in use.
Specifications
Palm OS: 5.0
CPU: Intel XScale PXA250 200 MHz
Audio codec: AK4534VN
PMIC: Panasonic AN32502A
Touch controller: Analog Devices AD7873
Gate array / IO expander: NEC 65943-L63
Memory: 16MB RAM (11MB avail.), 16MB ROM
Display: 320 x 320 transflective back-lit TFT-LCD, 16-bit colour (65k colours)
Sound: Internal audio amplifier, Rear speaker, Mono Mic, Stereo Headphone out.
External Connectors: USB
Expansion: Memory Stick Pro, MSIO
Wireless: Infrared IrDA, Bluetooth
Battery: Rechargeable Li-Ion Polymer (900mAh)
Size & Weight: 5.0" x 2.8" x 0.63"; 6.2 oz.
Color: Silver
See also
Sony CLIÉ TH Series - The successor to the TG series.
External links
TG50 Review at Palm Info Center
TG50 Review at CNet
TG50 Review at BrightHand
TG50 review at PCMag
References
Sony CLIÉ | Sony CLIÉ PEG-TG50 | Technology | 443 |
72,016,563 | https://en.wikipedia.org/wiki/Sarcomyxa%20edulis | Sarcomyxa edulis is a species of fungus in the family Sarcomyxaceae. Fruit bodies grow as ochraceous to ochraceous-brown, overlapping fan- or oyster-shaped caps on the wood of deciduous trees. The gills on the underside are closely spaced, ochraceous, and have an adnate attachment to the stipe. Spores are smooth, amyloid, and measure 4.5–6 by 1–2 μm.
The species was previously confused with the greenish-capped Sarcomyxa serotina, which is bitter-tasting. Sarcomyxa edulis is mild-tasting and edible. In Japan, where it is called mukitake, it is considered "one of the most delicious edible mushrooms" and a system has recently been developed to cultivate the mushroom in plastic greenhouses. In China, it is called “元蘑/yuanmo,” “黄蘑/huangmo,” or “冻蘑/dongmo”. It is considered a delicacy in China, rich in nutrition. "Generally, it grows on the fallen woods of broad-leaved trees in remote mountains and old forests, but not all broad-leaved trees are suitable for its growth, and the rotten basswood is very easy to grow S. edulis". "S. edulis is distributed in provinces of Hebei, Heilongjiang, Jilin, Shanxi, Guangxi, northern Shaanxi, Sichuan" in China, and at present, China already has high-yield cultivation techniques.
Sarcomyxa edulis is known to occur in China, Japan, and the Russian Far East.
References
Fungi described in 2003
Fungi of Asia
Edible fungi
fungi in cultivation
Fungus species | Sarcomyxa edulis | Biology | 358 |
2,510,384 | https://en.wikipedia.org/wiki/Trimyristin | Trimyristin is a saturated fat and the triglyceride of myristic acid with the chemical formula C45H86O6. Trimyristin is a white to yellowish-gray solid that is insoluble in water, but soluble in ethanol, acetone, benzene, chloroform, dichloromethane, ether, and TBME.
Occurrence
Trimyristin is found naturally in many vegetable fats and oils.
Isolation from nutmeg
The isolation of trimyristin from powdered nutmeg is a common introductory-level college organic chemistry experiment. It is an uncommonly simple natural product extraction because nutmeg oil generally consists of over eighty percent trimyristin. Trimyristin makes up between 20-25% of the overall mass of dried, ground nutmeg. Separation is generally carried out by steam distillation and purification uses extraction from ether followed by distillation or rotary evaporation to remove the volatile solvent. The extraction of trimyristin can also be done with diethyl ether at room temperature, due to its high solubility in the ether. The experiment is frequently included in curricula, both for its relative ease and to provide instruction in these techniques. Trimyristin can then be used to prepare myristic acid or one of its salts as an example of saponification.
See also
Linolein
References
Triglycerides | Trimyristin | Chemistry,Biology | 298 |
14,770,410 | https://en.wikipedia.org/wiki/EN2%20%28gene%29 | Homeobox protein engrailed-2 is a protein that in humans is encoded by the EN2 gene. It is a member of the engrailed gene family.
Function
Homeobox-containing genes are thought to have a role in controlling development. In Drosophila, the 'engrailed' (en) gene plays an important role during development in segmentation, where it is required for the formation of posterior compartments. Different mutations in the mouse homologs, En1 and En2, produced different developmental defects that frequently are lethal. The human engrailed homologs 1 and 2 encode homeodomain-containing proteins and have been implicated in the control of pattern formation during development of the central nervous system.
Description
The Engrailed-2 gene encodes for the Engrailed-2 homeobox transcription factor. The signaling molecule, fibroblast growth factor 8 (FGF8), controls the expression of the En2 gene. The isthmus organizer expresses varying concentrations of FGF8 that influence the En2 transcription factor. En2 transcription factor is involved in patterning the midbrain of the central nervous system during embryonic development. Specifically, it is required for proper positioning of folia in the developing hemispheres. It continues to regulate foliation throughout nervous system development. En2 patterns cerebellum foliation in the mediolateral axis. Several birth defects can arise from inadequate or abnormal En2 expression. Scientists use a mice model to study the effects of En2 knockout alleles on development. When the En2 gene is knocked out, vermis foliation patterning becomes extremely altered. Along with decreased cerebellum foliation complexity, mutations in the En2 gene result in a depleted vermis or an overly simplified foliation pattern. The Engrailed genes are essential to proper neural circuit development.
In cancer diagnosis
A method for diagnosing prostate cancer by detection of EN2 in urine has been developed. The results of a clinical trial of 288 men suggest that EN2 could be a marker for prostate cancer which might prove more reliable than current methods that use prostate-specific antigen (PSA). If effective, a urine test is considered easier and less embarrassing for the patient than blood tests or rectal examinations and, therefore, less likely to discourage early diagnosis. At the time of the report, it was not clear whether or not the EN2 test could distinguish between aggressive tumours that would require intervention and relatively benign ones that would not.
Licensing and marketing
The EN2 test for prostate cancer has been licensed to Zeus Scientific, as they reported in March 2013. In that announcement they said they expected the test to be submitted to the US-FDA in a year, and available worldwide in 2 years.
Negative results
However, an independent study published in 2020 questioned the value of EN2 as a urinary marker for prostate cancer. In a comparison between 90 PC patients and 30 healthy subjects, their results show that EN2 as a PC biomarker brings no additional value to the current use of PSA in clinical practice. Despite their announcement of new clinical trial in 2018, the developers of the urinary EN2 test at the University of Surrey never registered such a trial at ClinicalTrials.gov or published any results of it. Also, Randox Ltd, the diagnostic company which was to commercialize the urinary EN2, does not offer it any more in their product portfolio.
References
Further reading
External links
Transcription factors
Prostate cancer | EN2 (gene) | Chemistry,Biology | 723 |
62,078,710 | https://en.wikipedia.org/wiki/Realme%20X2%20Pro | The Realme X2 Pro is a smartphone from the Chinese smartphone manufacturer Realme, released in October 2019.
Specifications
The phone measures 161 mm × 75.7 mm × 8.7 mm (6.34 in × 2.98 in × 0.34 in) and weighs 192 grams (7.02 oz). It has an aluminum frame with Gorilla Glass 5 on the front and back. The display is a 6.5-inch Super AMOLED with 1080 by 2400 pixel resolution, 90 Hz refresh rate, and a maximum brightness of 1000 nits. The phone shipped with ColorOS 6.1, based on Android 9.0 ("Pie") but was upgraded to Realme UI 1.0 between March and April 2020. It contains a Qualcomm Snapdragon 855+ system on a chip, and an Adreno 640 GPU.
The phone has a rear-facing quad-camera array, with one 64 MP f/1.8 wide-angle lens (26 mm full-frame focal length equivalent), one 13 MP f/2.5 telephoto lens (52 mm equivalent), one 8 MP f/2.2 ultra-wide-angle lens (16 mm equivalent), and one 2 MP f/2.4 depth sensor. The three higher-resolution cameras are equipped with phase-detection autofocus. It also has a front-facing 16 MP f/2.0 wide-angle lens (25 mm equivalent).
The phone was sold in 3 variations: 6 GB RAM and 64 GB storage, 8 GB RAM and 128 GB storage and 12 GB RAM and 256 GB storage. The 6 GB/64 GB configuration was only available in the Indian market.
Reception
The phone received mostly positive reviews from critics. TechRadar gave it a score of 4.5/5, praising the phone's battery, speakers, and display with a 90 Hz refresh rate, while criticizing the software, image quality, and in-built gestures. The Verge described the phone as Realme's first phone with high-end specs. Android Authority gave it a score of 9.3/10, praising its display, internals, charging speed, and camera setup, while criticizing ColorOS and low-light camera performance. It also described the X2 Pro as having flagship specs.
Controversy
The smartphone's original operating system, ColorOS 6.1, allowed users to unlock the phone's bootloader, enabling support for custom Android ROMs such as LineageOS. After the upgrade to Realme UI 1.0, users with a locked bootloader were initially unable to unlock it, but Realme soon released an official method to unlock the bootloader.
References
"Realme X2 Pro: Price in Pakistan, Full Specifications & Features"
Realme mobile phones
Phablets
Mobile phones introduced in 2019
Chinese brands
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
Discontinued flagship smartphones | Realme X2 Pro | Technology | 617 |
40,352,051 | https://en.wikipedia.org/wiki/Picrinine | Picrinine is a bio-active alkaloid from Alstonia boonei, a medicinal tree of West Africa.
References
Alkaloids
Methyl esters
Oxygen heterocycles
Nitrogen heterocycles
Heterocyclic compounds with 6 rings | Picrinine | Chemistry | 53 |
68,749,580 | https://en.wikipedia.org/wiki/Agatha%20Jassem | Agatha Jassem is a Canadian clinical microbiologist and the program head of the Virology Lab at the British Columbia Centre for Disease Control Public Health Laboratory, and a clinical associate professor in the Department of Pathology & Laboratory Medicine at the University of British Columbia in Vancouver, British Columbia, Canada.
Jassem obtained her PhD at the University of British Columbia, followed by a fellowship in Clinical Microbiology at the National Institutes of Health. Her research focuses on the detection of healthcare- and community-associated infections, emerging pathogens, and drug resistance determinants.
During the COVID-19 pandemic response, Jassem led research on COVID-19 breakthrough infections in vaccinated individuals, SARS-CoV-2 population-level seroprevalence, and antibody responses, and collaborated on research into securing reagents for COVID-19 during worldwide shortages and into the role of ACE2.
References
Canadian microbiologists
Academic staff of the University of British Columbia
COVID-19 drug development
Living people
Year of birth missing (living people)
Place of birth missing (living people)
Canadian women biologists
Women microbiologists
University of British Columbia alumni
Canadian medical researchers
Women medical researchers
21st-century Canadian biologists
21st-century Canadian women scientists | Agatha Jassem | Chemistry | 259 |
4,557,927 | https://en.wikipedia.org/wiki/Radio%20Ice%20Cherenkov%20Experiment | Radio Ice Cherenkov Experiment (RICE) was an experiment designed to detect Cherenkov emission in the radio regime of the electromagnetic spectrum from the interaction of high-energy neutrinos (greater than 1 PeV, so-called ultra-high-energy (UHE) neutrinos) with the ice molecules of the Antarctic ice cap. The goals of this experiment were to determine the potential of the radio-detection technique for measuring the high-energy cosmic neutrino flux, determining the sources of this flux, and measuring neutrino-nucleon cross sections at energies above those accessible with existing accelerators. Such an experiment also has sensitivity to neutrinos from gamma-ray bursts, as well as to highly ionizing charged particles (e.g. monopoles) traversing the Antarctic ice cap.
The experiment operated 1999-2012 (prototypes before 1999, data-taking 1999-2010). The experiment's radio receivers were located 100–350 meters deep under the ice-sheet directly below the Martin A. Pomerantz Observatory (MAPO) at the South Pole Station. The MAPO-building housed the experiment's hardware. The drill holes housing the radio receivers were primarily drilled for the AMANDA and later (AMANDA was shut down 2009) IceCube experiments; RICE used the holes as a secondary experiment.
Experimental operation and results
Two antennas were installed successfully during the 1995–96 austral summer. During the 1996–97 season, a prototype array of three antennas was deployed down the (AMANDA) bore holes at depths from 140–210 meters. This prototype demonstrated the ability to successfully deploy receivers and transmitters and enabled an estimate of the noise temperature in the deep ice. Several more receivers and transmitters were deployed in three new AMANDA holes during the 1997–1998 season, in dedicated (specifically drilled for RICE) shallow "dry" holes during the 1998–99 season, and finally in several AMANDA holes drilled during the 1999–2000 season. Five years of data-taking (two years of livetime) resulted in the most stringent upper limits on the neutrino flux in the interval 50 PeV – 1 EeV, as well as results on departures from Standard Model cross-sections and searches for gamma-ray burst coincidences. Currently, RICE hardware is being modified for use in the IceCube boreholes being drilled from 2006 to 2010.
In 2008-2009, the RICE experiment was extended into the Neutrino Array Radio Calibration (NARC) experiment. The continued experiment is known as RICE/NARC or just RICE.
In 2012, the results of the full dataset (collected 2000-2010) of RICE (RICE/NARC) were published and the RICE (RICE/NARC) experiment was described as "presently at the end of useful data-taking." No ultra-high-energy (UHE) neutrinos were detected, which is in accordance with theoretical expectation.
The radio Cherenkov technique for detecting neutrinos is continued by RICE's successor experiment, the Askaryan Radio Array (ARA), to which RICE hardware (and some of the researchers) was transferred. ARA is also deployed under the ice at the South Pole Station. The first ARA prototype was tested at the South Pole in the Antarctic summer of 2010–2011.
See also
Neutrino telescope
References
External links
Neutrino Flux Upper Limit Results
Calibration of Experiment
Gamma-Ray Burst Coincidence Search
Bounds on Low-Scale Gravity
NSF article
Astronomical experiments in the Antarctic
Neutrino astronomy | Radio Ice Cherenkov Experiment | Astronomy | 717 |
649,382 | https://en.wikipedia.org/wiki/Pareidolia | Pareidolia is the tendency for perception to impose a meaningful interpretation on a nebulous stimulus, usually visual, so that one detects an object, pattern, or meaning where there is none. Pareidolia is a type of apophenia.
Common examples include perceived images of animals, faces, or objects in cloud formations; seeing faces in inanimate objects; or lunar pareidolia like the Man in the Moon or the Moon rabbit. The concept of pareidolia may extend to include hidden messages in recorded music played in reverse or at higher- or lower-than-normal speeds, and hearing voices (mainly indistinct) or music in random noise, such as that produced by air conditioners or by fans. Face pareidolia has also been demonstrated in rhesus macaques.
Etymology
The word derives from the Greek pará ("beside, alongside, instead [of]") and the noun eídōlon ("image, form, shape").
The German term was used in articles by Karl Ludwig Kahlbaum, for example in an 1866 paper translated as "On Delusion of the Senses". When Kahlbaum's paper was reviewed the following year (1867) in The Journal of Mental Science, Volume 13, the term was translated into English as "pareidolia" and noted to be synonymous with the terms "...changing hallucination, partial hallucination, [and] perception of secondary images."
Link to other conditions
Pareidolia correlates with age and is frequent among patients with Parkinson's disease and dementia with Lewy bodies.
Explanations
Pareidolia can cause people to interpret random images, or patterns of light and shadow, as faces. A 2009 magnetoencephalography study found that objects perceived as faces evoke an early (165 ms) activation of the fusiform face area at a time and location similar to that evoked by faces, whereas other common objects do not evoke such activation. This activation is similar to a slightly faster time (130 ms) that is seen for images of real faces. The authors suggest that face perception evoked by face-like objects is a relatively early process, and not a late cognitive reinterpretation phenomenon.
A functional magnetic resonance imaging (fMRI) study in 2011 similarly showed that repeated presentation of novel visual shapes that were interpreted as meaningful led to decreased fMRI responses for real objects. These results indicate that the interpretation of ambiguous stimuli depends upon processes similar to those elicited by known objects.
Pareidolia has been found to affect brain function and brain waves. In a 2022 study, EEG recordings showed that responses in the frontal and occipitotemporal cortices begin before a face is consciously recognized, and later when no face is recognized. These anticipatory brain responses give scientists an objective basis for measurement, rather than relying on self-reported sightings.
These studies help to explain why people generally identify a few lines and a circle as a "face" so quickly and without hesitation. Cognitive processes are activated by the "face-like" object which alerts the observer to both the emotional state and identity of the subject, even before the conscious mind begins to process or even receive the information. A "stick figure face", despite its simplicity, can convey mood information, and be drawn to indicate emotions such as happiness or anger. This robust and subtle capability is hypothesized to be the result of natural selection favoring people most able to quickly identify the mental state, for example, of threatening people, thus providing the individual an opportunity to flee or attack preemptively. This ability, though highly specialized for the processing and recognition of human emotions, also functions to determine the demeanor of wildlife.
Examples
Mimetoliths
A mimetolithic pattern is a pattern created by rocks that may come to mimic recognizable forms through the random processes of formation, weathering and erosion. A well-known example is the Face on Mars, a rock formation on Mars that resembled a human face in certain satellite photos. Most mimetoliths are much larger than the subjects they resemble, such as a cliff profile that looks like a human face.
Picture jaspers exhibit combinations of patterns, such as banding from flow or depositional patterns (from water or wind), or dendritic or color variations, resulting in what appear to be miniature scenes on a cut section, which is then used for jewelry.
Chert nodules, concretions, or pebbles may in certain cases be mistakenly identified as skeletal remains, egg fossils, or other antiquities of organic origin by amateur enthusiasts.
In the late 1970s and early 1980s, Japanese researcher Chonosuke Okamura self-published a series of reports titled Original Report of the Okamura Fossil Laboratory, in which he described tiny inclusions in polished limestone from the Silurian period (425 mya) as being preserved fossil remains of tiny humans, gorillas, dogs, dragons, dinosaurs and other organisms, all of them only millimeters long, leading him to claim, "There have been no changes in the bodies of mankind since the Silurian period... except for a growth in stature from 3.5 mm to 1,700 mm." Okamura's research earned him an Ig Nobel Prize (a parody of the Nobel Prize) in biodiversity in 1996.
Some sources describe various mimetolithic features on Pluto, including a heart-shaped region.
Clouds
Seeing shapes in cloud patterns is another example of this phenomenon. Rogowitz and Voss (1990) showed a relationship between seeing shapes in cloud patterns and fractal dimension. They varied the fractal dimension of the boundary contour from 1.2 to 1.8, and found that the lower the fractal dimension, the more likely people were to report seeing nameable shapes of animals, faces, and fantasy creatures. From above, pareidolia may be perceived in satellite imagery of tropical cyclones. Notably hurricanes Matthew and Milton gained much attention for resembling a human face or skull when viewed from the side.
Mars canals
A notable example of pareidolia occurred in 1877, when observers using telescopes to view the surface of Mars thought that they saw faint straight lines, which were then interpreted by some as canals. It was theorized that the canals were possibly created by sentient beings. This created a sensation. In the next few years better photographic techniques and stronger telescopes were developed and applied, which resulted in new images in which the faint lines disappeared, and the canal theory was debunked as an example of pareidolia.
Lunar surface
Many cultures recognize pareidolic images in the disc of the full moon, including the human face known as the Man in the Moon in many Northern Hemisphere cultures and the Moon rabbit in East Asian and indigenous American cultures. Other cultures see a walking figure carrying a wide burden on their back, including in Germanic tradition, Haida mythology, and Latvian mythology.
Projective tests
The Rorschach inkblot test uses pareidolia in an attempt to gain insight into a person's mental state. The Rorschach is a projective test that elicits thoughts or feelings of respondents that are "projected" onto the ambiguous inkblot images. Rorschach inkblots have low-fractal-dimension boundary contours, which may elicit general shape-naming behaviors, serving as vehicles for projected meanings.
Banknotes
Owing to the way designs are engraved and printed, occurrences of pareidolia have occasionally been reported in banknotes.
One example is the 1954 "Canadian Landscape" series of Canadian dollar banknotes, whose initial print runs are known among collectors as the "Devil's Head" variety. The obverse of the notes features what appears to be an exaggerated grinning face, formed from patterns in the hair of Queen Elizabeth II. The phenomenon generated enough attention that revised designs removing the effect were issued in 1956.
Literature
Renaissance authors have shown a particular interest in pareidolia. In William Shakespeare's play Hamlet, for example, Prince Hamlet points at the sky and "demonstrates" his supposed madness in this exchange with Polonius:
HAMLET
Do you see yonder cloud that's almost in the shape of a camel?
POLONIUS
By th'Mass and 'tis, like a camel indeed.
HAMLET
Methinks it is a weasel.
POLONIUS
It is backed like a weasel.
HAMLET
Or a whale.
POLONIUS
Very like a whale.
Nathaniel Hawthorne wrote a short story called "The Great Stone Face" in which a face seen in the side of a mountain (based on the real-life The Old Man of the Mountain) is revered by a village.
Art
Renaissance artists often used pareidolia in paintings and drawings: Andrea Mantegna, Leonardo da Vinci, Giotto, Hans Holbein, Giuseppe Arcimboldo, and many more have shown images—often human faces—that due to pareidolia appear in objects or clouds.
In his notebooks, Leonardo da Vinci described pareidolia as a device for painters.
Salem, a 1908 painting by Sydney Curnow Vosper, gained notoriety due to a rumour that it contained a hidden face, that of the devil. This led many commentators to visualize a demonic face depicted in the shawl of the main figure, despite the artist's denial that any faces had deliberately been painted into the shawl.
Surrealist artists such as Salvador Dalí would intentionally use pareidolia in their works, often in the form of a hidden face.
Architecture
Two 13th-century edifices in Turkey display architectural use of shadows cast by stone carvings at the entrance. Outright pictures are avoided in Islam, but tessellations and calligraphic designs were allowed, so deliberately designed "accidental" silhouettes of carved stone tessellations became a creative outlet.
Niğde Alaaddin Mosque in Niğde, Turkey (1223), features "mukarnas" art in which the shadows of the three-dimensional stone ornamentation around the entrance form a chiaroscuro drawing of a woman's face with a crown and long hair, appearing at a specific time on certain days of the year.
Divriği Great Mosque and Hospital in Sivas, Turkey (1229), uses the three-dimensional ornament at both entrances of the mosque to cast a giant shadow of a praying man that changes pose as the sun moves, as if to illustrate the purpose of the building. The clothing of the two shadow figures differs in style, possibly to indicate who was to enter through which door.
Religion
There have been many instances of perceptions of religious imagery and themes, especially the faces of religious figures, in ordinary phenomena. Many involve images of Jesus, the Virgin Mary, the word Allah, or other religious phenomena: in September 2007 in Singapore, for example, a callus on a tree resembled a monkey, leading believers to pay homage to the "Monkey god" (either Sun Wukong or Hanuman) in the monkey tree phenomenon.
Publicity surrounding sightings of religious figures and other surprising images in ordinary objects has spawned a market for such items on online auctions like eBay. One famous instance was a grilled cheese sandwich with the face of the Virgin Mary.
During the September 11 attacks, television viewers supposedly saw the face of Satan in clouds of smoke billowing out of the World Trade Center after it was struck by the airplane. Another example of face recognition pareidolia originated in the fire at Notre Dame Cathedral, when a few observers claimed to see Jesus in the flames.
While attempting to validate the imprint of a crucified man on the Shroud of Turin as Jesus, a variety of objects have been described as being visible on the linen. These objects include a number of plant species, a coin with Roman numerals, and multiple insect species. In an experimental setting using a picture of plain linen cloth, participants who had been told that there could possibly be visible words in the cloth, collectively saw 2 religious words. Those told that the cloth was of some religious importance saw 12 religious words, and those who were also told that it was of religious importance, but also given suggestions of possible religious words, saw 37 religious words. The researchers posit that the reason the Shroud has been said to have so many different symbols and objects is because it was already deemed to have the imprint of Jesus prior to the search for symbols and other imprints in the cloth, and therefore it was simply pareidolia at work.
Computer vision
Pareidolia can occur in computer vision, specifically in image recognition programs, in which vague cues can lead to the spurious detection of images or features. In the case of an artificial neural network, higher-level features correspond to more recognizable features, and enhancing these features brings out what the computer sees. These examples of pareidolia reflect the training set of images that the network has "seen" previously.
Striking visuals can be produced in this way, notably in the DeepDream software, which falsely detects and then exaggerates features such as eyes and faces in any image. The features can be further exaggerated by creating a feedback loop in which the output is used as the input for the network. (The adjacent image was created by iterating the loop 50 times.) Additionally, the output can be modified between iterations, for example by zooming in slightly, to create an animation that appears to fly through the surreal imagery.
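A minimal sketch of this gradient-ascent feedback loop is given below. It is not the original DeepDream code: it assumes PyTorch with a recent torchvision (0.13 or later), and the choice of model, layer slice, file names ("input.jpg", "dream.jpg"), step sizes, and iteration counts are illustrative assumptions only.

import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Any convolutional feature extractor illustrates the idea; DeepDream itself
# used a GoogLeNet ("Inception") model rather than the VGG network chosen here.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].eval()
for p in model.parameters():
    p.requires_grad_(False)

def dream_step(img, steps=20, lr=0.02):
    """Gradient ascent on the input image, exaggerating whatever the chosen layer responds to."""
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        loss = model(img).norm()          # amplify strongly responding features
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0, 1)              # keep the image in a displayable range
        img.grad = None
    return img.detach()

image = TF.to_tensor(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
for _ in range(5):                        # the feedback loop: each output becomes the next input
    image = dream_step(image)
TF.to_pil_image(image.squeeze(0)).save("dream.jpg")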
Auditory
In 1971 Konstantīns Raudive wrote Breakthrough, detailing what he believed was the discovery of electronic voice phenomena (EVP). EVP has been described as auditory pareidolia. Allegations of backmasking in popular music, in which a listener claims a message has been recorded backward onto a track meant to be played forward, have also been described as auditory pareidolia. In 1995, the psychologist Diana Deutsch invented an algorithm for producing phantom words and phrases with the sounds coming from two stereo loudspeakers, one to the listener's left and the other to his right, producing a phase offset in time between the speakers. After listening for a while, phantom words and phrases suddenly emerge, and these often appear to reflect what is on the listener's mind.
Deliberate practical use
Medical education, radiology images
Medical educators sometimes teach medical students and resident physicians (doctors in training) to use pareidolia and patternicity to learn to recognize human anatomy on radiology imaging studies.
Examples include assessing radiographs (X-ray images) of the human vertebral spine. Patrick Foye, M.D., professor of physical medicine and rehabilitation at Rutgers University, New Jersey Medical School, has written that pareidolia is used to teach medical trainees to assess for spinal fractures and spinal malignancies (cancers). When viewing spinal radiographs, normal bony anatomic structures resemble the face of an owl. (The spinal pedicles resemble an owl's eyes and the spinous process resembles an owl's beak.) But when cancer erodes a bony spinal pedicle, the radiographic appearance changes so that the eye of the owl appears missing or closed, which is called the "winking owl sign". Another common pattern is the "Scottie dog sign" on a spinal X-ray.
In 2021, Foye again published in the medical literature on this topic, in a medical journal article called "Baby Yoda: Pareidolia and Patternicity in Sacral MRI and CT Scans". Here, he introduced a novel way of visualizing the sacrum when viewing MRI magnetic resonance imaging and CT scans (computed tomography scans). He noted that in certain image slices the human sacral anatomy resembles the face of "Baby Yoda" (also called Grogu), a fictional character from the television show The Mandalorian. Sacral openings for exiting nerves (sacral foramina) resemble Baby Yoda's eyes, while the sacral canal resembles Baby Yoda's mouth.
In popular culture
In January 2017, an anonymous user placed an eBay auction of a Cheeto that looked like the gorilla Harambe. Bidding began at , but the Cheeto was eventually sold for .
Starting from 2021, an Internet meme emerged around an online game called Among Us, where users presented everyday items such as dogs, statues, garbage cans, big toes, and pictures of the Boomerang Nebula that looked like the game's "crewmate" protagonists. In May 2021, an eBay user named Tav listed a Chicken McNugget shaped like a crewmate from Among Us for online auction. The Chicken McNugget was sold for to an anonymous buyer.
Related phenomena
A shadow person (also known as a shadow figure, shadow being or black mass) is often attributed to pareidolia. It is the perception of a patch of shadow as a living, humanoid figure, particularly as interpreted by believers in the paranormal or supernatural as the presence of a spirit or other entity.
Pareidolia is also what some skeptics believe causes people to believe that they have seen ghosts.
See also
Clustering illusion
Eigenface
Hitler teapot
Madonna of the Toast
Mondegreen
Musical ear syndrome – similar to auditory pareidolia, but with hearing loss
Optical illusion
Perceptions of religious imagery in natural phenomena
Signal-to-noise ratio
References
External links
Skepdic.com Skeptic's Dictionary definition of pareidolia
A Japanese museum of rocks which look like faces
Article in The New York Times, 13 February 2007, about cognitive science of face recognition
1860s neologisms
Auditory illusions
Cognitive biases
Forteana
Optical illusions
Visual perception | Pareidolia | Physics | 3,643 |
25,753,134 | https://en.wikipedia.org/wiki/Asbestos%20Mountains | The Asbestos Mountains is a range of hills in the Northern Cape province of South Africa, stretching south-southwest from Kuruman, where the range is known as the Kuruman Hills, to Prieska. It passes Boetsap, Danielskuil, Lime Acres, Douglas and Griekwastad. The range lies about west of Kimberley and rises from the Ghaap Plateau.
Overview
The mountains were named after the asbestos which was mined in the 20th century and is found as a variety of amphibole called crocidolite. Veins occur in slaty rocks, and are associated with jasper and quartzite rich in magnetite and brown iron-ore. Geologically it belongs to the Griquatown series.
The Griquas, for whom Griquatown was named, were a Khoikhoi people who in 1800 were led by a freed slave, Adam Kok, from Piketberg in the western Cape to the foothills of the Asbestos Mountains, where they settled at a place called Klaarwater. John Campbell (1766–1840), a Scottish missionary in South Africa, renamed it Griquatown in 1813. The mission station became a staging post for expeditions to the interior; here David Livingstone met his future wife, Mary Moffat, daughter of the missionary Robert Moffat, and William Burchell visited in 1811.
John Campbell described the mountains in his book "Travels in South Africa: Undertaken at the request of the Missionary Society".
The coloured variants, which Campbell found, owe their names to their chatoyance: lapidaries call them tiger's eye, hawk's eye, and cat's eye, and they are silicified crocidolite.
Wonderwerk Cave is located in the range near Kuruman and was occupied by man during the Later Stone Age, while much earlier manuports, introduced by hominins in the terminal Acheulean, have been found at the back of the cave.
Mining history
Serious mining of crocidolite in these mountains started in 1893 when open-cast quarrying produced 100 tons of material. By 1918, underground mining had started and scattered mines were to be found from Prieska to Kuruman along the length of the range, and mills were constructed at both of these towns. Between 1950 and 1960 production had risen to 100,000 tons, each mine was doing its own milling and the tailings dumps had grown in size.
Health hazards
At the time of Campbell's writing the health hazards of asbestos were unknown. Tens of thousands of mine workers were exposed to the fibres at their workplaces and, when winds blew across the waste dumps, at their homes. By the mid-1950s the medical profession was still diagnosing asbestosis as "metastatic carcinomas from an unknown primary site". The resulting increase in cases of asbestosis and mesothelioma became alarming. Records later revealed that pleural endothelioma had first been reported in 1917, but had also been considered as metastatic.
Substitutes for asbestos now include ceramic, carbon, metallic and Aramid fibers, such as Twaron or Kevlar.
David Goldblatt from the University of the Witwatersrand also wrote on the subject.
The Green Mountains, for which Vermont was named, were produced by the same geologic processes that produced the Asbestos Mountains; these processes produced an abundance of serpentine, which is the source of chrysotile asbestos.
See also
List of mountain ranges of South Africa
Mining industry of South Africa
Asbestos and the law
References and notes
Asbestos
Kimberley, Northern Cape
Mountain ranges of the Northern Cape | Asbestos Mountains | Environmental_science | 739 |
515,812 | https://en.wikipedia.org/wiki/Dumbek%20rhythms | Dumbek rhythms are a collection of rhythms that are usually played with hand drums such as the dumbek. These rhythms are various combinations of these three basic sounds:
Doom (D), produced with the dominant hand striking the sweet spot of the skin.
Tak (T), produced with the recessive hand striking the rim.
Ka (K), produced with the dominant hand striking the rim.
Notation
In a simple notation, these three sounds are represented by three letters: D, T, and K. When capitalized, the beat is emphasized, and when lower-case, it is played less emphatically. These basic sounds can be combined with other sounds:
Sak or slap (S) (sometimes called 'pa'), produced with the dominant hand. Similar to the doom except the fingers are cupped to capture the air, making a loud terminating sound. The hand remains on the drum head to prevent sustain.
Trill (l), produced by lightly tapping three fingers of one hand in rapid succession on the rim
Roll or rash (r), produced by a rapid alternating pattern of taks and kas
This is the simple dumbek rhythm notation for the 2/4 rhythm known as ayyoub:
1-&-2-&-
D--kD-T-
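As an illustration of how this letter notation can be handled programmatically, the short Python sketch below expands the ayyoub pattern above into timed strokes. The function name, sound labels, tempo, and subdivision count are illustrative assumptions, not part of any standard notation library.

AYYOUB = "D--kD-T-"  # the 2/4 ayyoub pattern shown above: 8 subdivisions per bar

SOUNDS = {"D": "doom", "T": "tak", "K": "ka",
          "d": "doom (soft)", "t": "tak (soft)", "k": "ka (soft)",
          "S": "slap", "-": None}  # "-" marks a rest

def expand(pattern, bpm=100, subdivisions_per_beat=4):
    """Yield (time_in_seconds, sound) pairs for every struck subdivision."""
    step = 60.0 / bpm / subdivisions_per_beat
    for i, symbol in enumerate(pattern):
        sound = SOUNDS.get(symbol)
        if sound is not None:
            yield round(i * step, 3), sound

if __name__ == "__main__":
    for t, sound in expand(AYYOUB):
        print(f"{t:5.3f}s  {sound}")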
Rhythms
There are many traditional rhythms. Some are much more popular than others. The "big six" Middle Eastern rhythms are Ayyoub, Beledi (Masmoudi Saghir), Chiftitelli, Maqsoum, Masmoudi and Saidi.
References
See also
Iqa'
Wazn
Egyptian music
Belly dance
Arabic music
Usul (music)
Rhythm and meter
Percussion performance techniques | Dumbek rhythms | Physics | 350 |
6,652,820 | https://en.wikipedia.org/wiki/Message%20Understanding%20Conference | The Message Understanding Conferences (MUC) for computing and computer science were initiated and financed by DARPA (Defense Advanced Research Projects Agency) to encourage the development of new and better methods of information extraction. The character of this competition, with many concurrent research teams competing against one another, required the development of standards for evaluation, e.g. the adoption of metrics such as precision and recall.
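For reference, the sketch below gives the usual textbook definitions of these two metrics as applied to template filling; the actual MUC scoring also handled partial credit, so this is a simplification rather than the official scoring formula.

\[
\text{precision} = \frac{\text{number of correct slot fills produced}}{\text{total number of slot fills produced}}, \qquad
\text{recall} = \frac{\text{number of correct slot fills produced}}{\text{number of slot fills in the answer key}}
\]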
Topics and exercises
Only for the first conference (MUC-1) could the participants choose the output format for the extracted information. From the second conference onward, the output format by which the participants' systems would be evaluated was prescribed. For each topic a set of fields was given, which had to be filled with information from the text. Typical fields were, for example, the cause, the agent, the time and place of an event, the consequences, and so on. The number of fields increased from conference to conference.
At the sixth conference (MUC-6) the tasks of named entity recognition and coreference resolution were added. For the named entity task, all relevant phrases in the text were to be marked as person, location, organization, time, or quantity.
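As a purely illustrative sketch (the sentence is invented, and the snippet is not an official MUC scorer), the named entity annotations used SGML-style ENAMEX/TIMEX/NUMEX tags along roughly the following lines; the small Python example also strips the tags back out using the standard-library re module.

import re

# Invented example sentence annotated in the MUC SGML style.
annotated = (
    '<ENAMEX TYPE="PERSON">Alice Smith</ENAMEX> joined '
    '<ENAMEX TYPE="ORGANIZATION">Acme Corp.</ENAMEX> in '
    '<ENAMEX TYPE="LOCATION">Boston</ENAMEX> on '
    '<TIMEX TYPE="DATE">3 May 1995</TIMEX> for '
    '<NUMEX TYPE="MONEY">$90,000</NUMEX>.'
)

# Remove the annotation tags to recover the plain text.
plain = re.sub(r'</?(?:ENAMEX|TIMEX|NUMEX)[^>]*>', '', annotated)
print(plain)  # Alice Smith joined Acme Corp. in Boston on 3 May 1995 for $90,000.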
The topics and text sources that were processed show a continuous move from military to civilian themes, mirroring the change in business interest in information extraction taking place at the time.
Literature
Ralph Grishman, Beth Sundheim: Message Understanding Conference - 6: A Brief History. In: Proceedings of the 16th International Conference on Computational Linguistics (COLING), I, Copenhagen, 1996, 466–471.
See also
DARPA TIPSTER Program
External links
MUC-7
MUC-6
SAIC Information Extraction
MUC | Message Understanding Conference | Technology | 335 |
8,748,291 | https://en.wikipedia.org/wiki/Expansion%20ratio | The expansion ratio of a liquefied and cryogenic substance is the volume of a given amount of that substance in liquid form compared to the volume of the same amount of substance in gaseous form, at room temperature and normal atmospheric pressure.
If a sufficient amount of liquid is vaporized within a closed container, it produces pressures that can rupture the pressure vessel. Hence the use of pressure relief valves and vent valves is important.
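The ratios listed below can be estimated by dividing the density of the liquid at its boiling point by the density of the gas at ambient conditions. As a rough check for nitrogen (using assumed round-number densities of about 807 kg/m³ for the liquid at its boiling point and about 1.16 kg/m³ for the gas at roughly 20 °C and 1 atm):

\[
\text{expansion ratio} \approx \frac{\rho_\text{liquid}}{\rho_\text{gas}} \approx \frac{807\ \text{kg/m}^3}{1.16\ \text{kg/m}^3} \approx 696
\]

which is consistent with the figure for nitrogen in the list below.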
The expansion ratios of common liquefied and cryogenic gases, from the boiling point to ambient conditions, are:
nitrogen – 1 to 696
liquid helium – 1 to 745
argon – 1 to 842
liquid hydrogen – 1 to 850
liquid oxygen – 1 to 860
neon – 1 to 1445, the highest expansion ratio of the common cryogenic liquids.
See also
Liquid-to-gas ratio
Boiling liquid expanding vapor explosion
Thermal expansion
References
External links
cryogenic-gas-hazards
Cryogenics | Expansion ratio | Physics,Chemistry | 181 |
27,347,213 | https://en.wikipedia.org/wiki/Radical%20cyclization | Radical cyclization reactions are organic chemical transformations that yield cyclic products through radical intermediates. They usually proceed in three basic steps: selective radical generation, radical cyclization, and conversion of the cyclized radical to product.
Introduction
Radical cyclization reactions produce mono- or polycyclic products through the action of radical intermediates. Because they are intramolecular transformations, they are often very rapid and selective. Selective radical generation can be achieved at carbons bound to a variety of functional groups, and reagents used to effect radical generation are numerous. The radical cyclization step usually involves the attack of a radical on a multiple bond. After this step occurs, the resulting cyclized radicals are quenched through the action of a radical scavenger, a fragmentation process, or an electron-transfer reaction. Five- and six-membered rings are the most common products; formation of smaller and larger rings is rarely observed.
Three conditions must be met for an efficient radical cyclization to take place:
A method must be available to generate a radical selectively on the substrate.
Radical cyclization must be faster than trapping of the initially formed radical.
All steps must be faster than undesired side reactions such as radical recombination or reaction with solvent.
Advantages: because radical intermediates are not charged species, reaction conditions are often mild and functional group tolerance is high and orthogonal to that of many polar processes. Reactions can be carried out in a variety of solvents (including arenes, alcohols, and water), as long as the solvent does not have a weak bond that can undergo abstraction, and products are often synthetically useful compounds that can be carried on using existing functionality or groups introduced during radical trapping.
Disadvantages: the relative rates of the various stages of radical cyclization reactions (and any side reactions) must be carefully controlled so that cyclization and trapping of the cyclized radical is favored. Side reactions are sometimes a problem, and cyclization is especially slow for small and large rings (although macrocyclizations, which resemble intermolecular radical reactions, are often high yielding).
Mechanism and stereochemistry
Prevailing mechanism
Because many reagents exist for radical generation and trapping, establishing a single prevailing mechanism is not possible. However, once a radical is generated, it can react with multiple bonds in an intramolecular fashion to yield cyclized radical intermediates. The two ends of the multiple bond constitute two possible sites of reaction. If the radical in the resulting intermediate ends up outside of the ring, the attack is termed "exo"; if it ends up inside the newly formed ring, the attack is called "endo." In many cases, exo cyclization is favored over endo cyclization (macrocyclizations constitute the major exception to this rule). 5-hexenyl radicals are the most synthetically useful intermediates for radical cyclizations, because cyclization is extremely rapid and exo selective. Although the exo radical is less thermodynamically stable than the endo radical, the more rapid exo cyclization is rationalized by better orbital overlap in the chair-like exo transition state (see below).
Substituents that affect the stability of these transition states can have a profound effect on the site selectivity of the reaction. Carbonyl substituents at the 2-position, for instance, encourage 6-endo ring closure. Alkyl substituents at positions 2, 3, 4, or 6 enhance selectivity for 5-exo closure.
Cyclization of the homologous 6-heptenyl radical is still selective, but is much slower—as a result, competitive side reactions are an important problem when these intermediates are involved. Additionally, 1,5-shifts can yield stabilized allylic radicals at comparable rates in these systems. In 6-hexenyl radical substrates, polarization of the reactive double bond with electron-withdrawing functional groups is often necessary to achieve high yields. Stabilizing the initially formed radical with electron-withdrawing groups provides access to more stable 6-endo cyclization products preferentially.
Cyclization reactions of vinyl, aryl, and acyl radicals are also known. Under conditions of kinetic control, 5-exo cyclization takes place preferentially. However, low concentrations of a radical scavenger establish thermodynamic control and provide access to 6-endo products—not via 6-endo cyclization, but by 5-exo cyclization followed by 3-exo closure and subsequent fragmentation (the Dowd-Beckwith rearrangement). At high concentrations of the scavenger, the exo product is rapidly trapped, preventing subsequent rearrangement to the endo product. Aryl radicals exhibit similar reactivity.
Cyclization can involve heteroatom-containing multiple bonds such as nitriles, oximes, and carbonyls. Attack at the carbon atom of the multiple bond is almost always observed. In the latter case attack is reversible; however alkoxy radicals can be trapped using a stannane trapping agent.
Stereoselectivity
The diastereoselectivity of radical cyclizations is often high. In most all-carbon cases, selectivity can be rationalized according to Beckwith's guidelines, which invoke the reactant-like, exo transition state shown above. Placing substituents in pseudoequatorial positions in the transition state leads to cis products from simple secondary radicals. Introducing polar substituents can favor trans products due to steric or electronic repulsion between the polar groups. In more complex systems, the development of transition state models requires consideration of factors such as allylic strain and boat-like transition states.
Chiral auxiliaries have been used in enantioselective radical cyclizations with limited success. Small energy differences between early transition states constitute a profound barrier to success in this arena. In the example shown, diastereoselectivity (for both configurations of the left-hand stereocenter) is low and enantioselectivity is only moderate.
Substrates with stereocenters between the radical and multiple bond are often highly stereoselective. Radical cyclizations to form polycyclic products often take advantage of this property.
Scope and limitations
Radical generation methods
The use of metal hydrides (tin, silicon and mercury hydrides) is common in radical cyclization reactions; the primary limitation of this method is the possibility of reduction of the initially formed radical by H-M. Fragmentation methods avoid this problem by incorporating the chain-transfer reagent into the substrate itself—the active chain-carrying radical is not released until after cyclization has taken place. The products of fragmentation methods retain a double bond as a result, and extra synthetic steps are usually required to incorporate the chain-carrying group.
Atom-transfer methods rely on the movement of an atom from the acyclic starting material to the cyclic radical to generate the product. These methods use catalytic amounts of weak reagents, preventing problems associated with the presence of strong reducing agents (such as tin hydride). Hydrogen- and halogen-transfer processes are known; the latter tend to be more synthetically useful.
Oxidative and reductive cyclization methods also exist. These procedures require fairly electrophilic and nucleophilic radicals, respectively, to proceed effectively. Cyclic radicals are either oxidized or reduced and quenched with either external or internal nucleophiles or electrophiles, respectively.
Ring sizes
In general, radical cyclization to produce small rings is difficult. However, it is possible to trap the cyclized radical before re-opening. This process can be facilitated by fragmentation (see the three-membered case below) or by stabilization of the cyclized radical (see the four-membered case). Five- and six-membered rings are the most common sizes produced by radical cyclization.
Polycycles and macrocycles can also be formed using radical cyclization reactions. In the former case, rings can be pre-formed and a single ring closed with radical cyclization, or multiple rings can be formed in a tandem process (as below). Macrocyclizations, which lack the FMO requirement of cyclizations of smaller substrates, have the unique property of exhibiting endo selectivity.
Comparison with other methods
In comparison to cationic cyclizations, radical cyclizations avoid issues associated with Wagner-Meerwein rearrangements, do not require strongly acidic conditions, and can be kinetically controlled. Cationic cyclizations are usually thermodynamically controlled. Radical cyclizations are much faster than analogous anionic cyclizations, and avoid β-elimination side reactions. Anionic Michael-type cyclization is an alternative to radical cyclization of activated olefins. Metal-catalyzed cyclization reactions usually require mildly basic conditions, and substrates must be chosen to avoid β-hydride elimination. The primary limitation of radical cyclizations with respect to these other methods is the potential for radical side reactions.
Experimental conditions and procedure
Typical conditions
Radical reactions must be carried out under inert atmosphere as dioxygen is a triplet radical which will intercept radical intermediates. Because the relative rates of a number of processes are important to the reaction, concentrations must be carefully adjusted to optimize reaction conditions. Reactions are generally carried out in solvents whose bonds have high bond dissociation energies (BDEs), including benzene, methanol or benzotrifluoride. Even aqueous conditions are tolerated, since water has a strong O-H bond with a BDE of 494 kJ/mol. This is in contrast to many polar processes, where hydroxylic solvents (or polar X-H bonds in the substrate itself) may not be tolerated due to the nucleophilicity or acidity of the functional group.
Example procedure
A mixture of bromo acetal 1 (549 mg, 1.78 mmol), AIBN (30.3 mg, 0.185 mmol), and Bu3SnH (0.65 mL, 2.42 mmol) in dry benzene (12 mL) was heated under reflux for 1 hour and then evaporated under reduced pressure. Silica gel column chromatography of the crude product with hexane–EtOAc (92:8) as eluant gave tetrahydropyran 2 (395 mg, 97%) as an oily mixture of two diastereomers. (c 0.43, CHCl3); IR (CHCl3): 1732 cm–1; 1H NMR (CDCl3) δ 4.77–4.89 (m, 0.6H), 4.66–4.69 (m, 0.4H), 3.40–4.44 (m, 4H), 3.68 (s, 3H), 2.61 (dd, J = 15.2, 4.2 Hz, 1H), 2.51 (dd, J = 15.2, 3.8 Hz, 1H), 0.73–1.06 (m, 3H); mass spectrum: m/z 215 (M+–Me); Anal. Calcd for C12H22O4: C, 62.6; H, 9.65. Found: C, 62.6; H, 9.7.
References
Organic reactions | Radical cyclization | Chemistry | 2,446 |
30,495,023 | https://en.wikipedia.org/wiki/Tandem%20Repeats%20Database | The Tandem Repeats Database (TRDB) is a database of tandem repeats in genomic DNA.
See also
Tandem repeats
References
External links
https://tandem.bu.edu/cgi-bin/trdb/trdb.exe
Genetics databases
Repetitive DNA sequences | Tandem Repeats Database | Biology | 57 |
28,411,451 | https://en.wikipedia.org/wiki/Kundt%20spacetime | In mathematical physics, Kundt spacetimes are Lorentzian manifolds admitting a geodesic null congruence with vanishing optical scalars (expansion, twist and shear). A well known member of Kundt class is pp-wave. Ricci-flat Kundt spacetimes in arbitrary dimension are algebraically special. In four dimensions Ricci-flat Kundt metrics of Petrov type III and N are completely known. All VSI spacetimes belong to a subset of the Kundt spacetimes.
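As a sketch (sign and coordinate conventions vary between references), the metric of a four-dimensional Kundt spacetime can be written locally in a form along the lines of

\[
ds^2 = 2\,du\left(dv + H(u,v,x^k)\,du + W_i(u,v,x^k)\,dx^i\right) + g_{ij}(u,x^k)\,dx^i\,dx^j, \qquad i,j = 1,2,
\]

where \(\ell = \partial_v\) is the geodesic null vector field whose expansion, twist and shear vanish; the key structural feature is that the transverse metric \(g_{ij}\) does not depend on \(v\). The pp-waves mentioned above correspond to the further specialization in which \(\ell\) is covariantly constant.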
References
Lorentzian manifolds | Kundt spacetime | Physics | 115 |