Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 chars); text (string, 9 to 245k chars); source (string, 1 to 109 chars); categories (string, 160 classes); token_count (int64, 3 to 51.8k).
47,767,818
https://en.wikipedia.org/wiki/Phellodon%20putidus
Phellodon putidus is a species of tooth fungus in the family Bankeraceae. Found in North America, it was first described scientifically by George F. Atkinson as Hydnum putidum in 1900. Howard James Banker transferred it to the genus Phellodon in 1906. References External links Fungi described in 1900 Fungi of North America putidus Fungus species
Phellodon putidus
Biology
76
35,178,060
https://en.wikipedia.org/wiki/CA-Modern
CA-Modern was an American magazine devoted to mid-century modern architecture and design in California. The Fall 2023 issue was the final issue of the print magazine. The Eichler Network will continue producing its website, emailed blogs, and printed annual Home Maintenance Directory. About CA-Modern was published by the Eichler Network, a company based in San Francisco that continues to operate a website and sends weekly e-mail news features to subscribers. It also publishes a print service directory of firms that specialize in repair and improvement of mid-century modern homes, including those built from the 1950s to 1970s by Bay Area developer Joe Eichler of Eichler Homes, Inc. Founded in 1993 by publisher Marty Arbunich, first as a four-page letter-size, black-and-white mailer and then as a 16-page tabloid newsletter also called Eichler Network, it became a 36-page oversized color magazine in January 2006. Over the years it expanded its coverage, from its early focus exclusively on Eichler homes, with an emphasis on preservation and home improvement and maintenance, to historical articles on mid-century architecture and design, light features, nostalgia, music, etc. The magazine was mailed free to the property addresses of Eichler homes in Northern California, as well as to other mid-century modern homes, such as select homes built by the Streng Brothers in the Sacramento and Davis areas in the Sacramento Valley. The magazine was available by subscription (with back issues from 2006 to the present) and was not sold on newsstands. Its hundreds of articles continue to appear on the website of the Eichler Network. Eichler Network, the newsletter, originally sent only to Eichler homeowners, started a Sacramento Valley edition in 2003, directed to owners of Streng homes. The switch in format and name from Eichler Network to CA-Modern included an increased geographic scope, adding Los Angeles, San Fernando Valley, Long Beach-Orange, and Palm Springs editions, and coincided with a broadening of the subject matter. The magazine profiled several Southern California architects, including William Krisel, Don Wexler, and Ray Kappe; ran a news briefs column; reviewed books and other media; invited contemporary architects to devise a '21st Century Eichler,' and had readers compete in a best kitchen remodel contest. In recent years the magazine's distribution returned to a Northern California focus. The magazine featured extensive color photography and mid-century type design, and the website and email newsletters continue that tradition. The designer is Doreen Jorgensen. In January 2012, the magazine entered the national conversation about Apple innovator Steve Jobs, who had reportedly drawn design inspiration from his childhood Eichler home. The Eichler Network's investigation showed that Jobs' home was designed by Anshen and Allen, architects who worked for Eichler, but was built by Mackay Homes. References External links Visual arts magazines published in the United States Quarterly magazines published in the United States Architecture magazines Design magazines Magazines established in 1993 Magazines published in San Francisco
CA-Modern
Engineering
638
18,437,184
https://en.wikipedia.org/wiki/Progerin
Progerin (UniProt# P02545-6) is a truncated version of the lamin A protein involved in the pathology of Hutchinson–Gilford progeria syndrome (HGPS). Progerin is most often generated by a sporadic single-nucleotide polymorphism, c.1824 C>T (GGC -> GGT, p.Gly608Gly), in the gene that codes for mature lamin A. This mutation activates a cryptic splice site in the processed prelamin A messenger RNA, causing the deletion of 50 amino acids near the C-terminus of the prelamin A protein. The endopeptidase ZMPSTE24 cannot cleave between the RSY - LLG amino acid sequence during the maturation of lamin A, because the deleted 50 amino acids included that sequence. This leaves prelamin A with its methylated carboxyl-terminal farnesyl group still attached, creating the defective protein progerin rather than mature lamin A. Approximately 90% of all Hutchinson–Gilford progeria syndrome cases are heterozygous for this deleterious single-nucleotide polymorphism within exon 11 of the LMNA gene, which leads to the aberrant post-translational processing that produces progerin. Lamin A constitutes a major structural component of the lamina, a scaffold of proteins found inside the nuclear membrane of a cell; progerin does not properly integrate into the lamina, which disrupts the scaffold structure and leads to significant disfigurement of the nucleus, characterized by a globular shape. Progerin activates genes that regulate stem cell differentiation via the Notch signaling pathway. Progerin increases the frequency of unrepaired double-strand breaks in DNA following exposure to ionizing radiation. Also, overexpression of progerin is correlated with an increase in non-homologous end joining relative to homologous recombination among those DNA double-strand breaks that are repaired. Furthermore, the fraction of homologous recombination events occurring by gene conversion is increased. These findings suggest that the normal untruncated nuclear lamina has an important role in the proper repair of DNA double-strand breaks. The point mutation c.1824 C>T (GGC -> GGT, p.Gly608Gly) is the single-nucleotide polymorphism that occurs in most patients with progeria. The mutation occurs at codon G608 in exon 11 and changes the glycine codon GGC to the synonymous glycine codon GGT, hence the designation Gly608Gly. Although silent at the protein level, this C -> T change activates the cryptic splice site in exon 11, deleting the 50 amino acids that are essential for the maturation of lamin A. This deletion is what converts prelamin A into the defective protein progerin. Premature aging Progerin contributes to the accelerated aging seen in HGPS through the conformational stress it places on the nuclear envelope. Mature lamin A is a protein that maintains the cell's structural stability along with other functions. The incorporation of progerin rather than normally functioning mature lamin A results in DNA damage and stress at the nuclear envelope, which activates the protein p53, leading to premature cellular senescence and the rapid aging seen in HGPS. Lonafarnib Researchers are exploring lonafarnib (a farnesyltransferase inhibitor) as a pharmacological therapy against the negative effects of progerin on nuclear morphology in HGPS. Lonafarnib is currently the only FDA-approved treatment for HGPS.
Other information Recently, rapamycin has been shown to prevent Progerin aggregates in cells and hence delay premature aging. Progerin, which has been linked to normal aging, is produced in healthy individuals via "sporadic use of the cryptic splice site". References Aging-related proteins Human proteins
Progerin
Biology
876
3,203,629
https://en.wikipedia.org/wiki/Radiodrum
The Radiodrum or radio-baton is a musical instrument played in three-dimensional space using two mallets (snare drum sticks with wires). It was developed at Bell Labs in the 1980s (and patented), originally to be a three-dimensional computer mouse. Currently it is used as a musical instrument similar to a MIDI controller in the sense that it has no inherent sound or effect, but rather produces control signals that can be used to control sound production (or other effects). As such, it can be thought of as a general telepresence input device. The radiodrum works in a similar way to the theremin, using capacitive sensing to locate the position of the drumsticks. The two mallets act as antennas transmitting on slightly different frequencies and the drum surface acts as a set of antennas. The combination of the antenna signals is used to derive X, Y and Z. The radiodrum was designed by Bob Boie. Max Mathews recognized its musical potential, mainly focusing on a conducting paradigm, and developed several other versions of it. Andrew Schloss pioneered its use as a percussion device and further developed its software and hardware. The radiodrum has been used to control visual effects, and even robotic acoustic instruments like the Yamaha Disklavier and Trimpin instruments. The latest version (as of 2013) of the radiodrum was developed by Bob Boie and Andrew Schloss. In addition to X, Y and Z, there is an output for the derivative of Z, which is used to detect changes of direction of the mallets, enabling fine control over snare-drum rolls and other nuanced percussive techniques. In addition to works by Andrew Schloss, the instrument has been used extensively by composer David A. Jaffe, with Schloss as soloist, in works including: "The Seven Wonders of the Ancient World," a 70-minute seven-movement concerto for radiodrum-controlled Yamaha Disklavier piano and an orchestra of plucked strings and percussion instruments "Racing Against Time," for radiodrum-controlled computer physical models (electronic sound), with two violins, two saxophones and piano "The Space Between Us," for radiodrum-controlled Trimpin percussion instruments and eight strings distributed around the concert hall "Underground Economy," an Afro-Cuban improvisational work for radiodrum-controlled electronics, violin and piano Other works include Richard Boulanger's "Solemn Song for Evening", using the Bohlen-Pierce scale. See also List of music software References External links Max Mathews demonstrates Radio Baton in 2010 (Computer History Museum) Pictures of radiodrum Information on David A. Jaffe's music Electronic musical instruments Human–computer interaction Musical instrument parts and accessories Music software Articles containing video clips
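As a rough illustration of how the derivative-of-Z output described above can be used, the sketch below finds the moments where a mallet's downward motion reverses near the pad surface, a plausible proxy for a strike. It is a minimal, hypothetical example: the sampling rate, surface height, thresholds, and function names are assumptions for illustration and are not taken from the actual radiodrum hardware or any published API.

```python
# Minimal sketch: using the derivative of Z (mallet height) to find the moments where a
# mallet reverses direction close to the pad surface, i.e. a plausible proxy for a strike.
# Sampling rate, surface height and all names are invented for illustration.

def detect_strikes(z_samples, sample_rate_hz=1000.0, surface_z=0.05):
    """Return indices where dZ/dt changes from negative to non-negative near the surface."""
    dt = 1.0 / sample_rate_hz
    strikes = []
    prev_velocity = 0.0
    for i in range(1, len(z_samples)):
        velocity = (z_samples[i] - z_samples[i - 1]) / dt  # discrete dZ/dt
        if prev_velocity < 0.0 <= velocity and z_samples[i] < surface_z:
            strikes.append(i)  # downward motion just turned upward near the pad
        prev_velocity = velocity
    return strikes

if __name__ == "__main__":
    import math
    # Toy trajectory: the mallet dips toward the pad three times.
    zs = [0.2 + 0.18 * math.cos(2 * math.pi * 3 * t / 1000.0) for t in range(1000)]
    print(detect_strikes(zs))  # expect indices near the three minima
```

In the real instrument the derivative is available directly as an output signal, so a controller mapping would read that channel rather than differencing samples as this toy does.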
Radiodrum
Technology,Engineering
565
18,782,004
https://en.wikipedia.org/wiki/Magnesium%20hydride
Magnesium hydride is the chemical compound with the molecular formula MgH2. It contains 7.66% by weight of hydrogen and has been studied as a potential hydrogen storage medium. Preparation In 1951 preparation from the elements was first reported involving direct hydrogenation of Mg metal at high pressure and temperature (200 atmospheres, 500 °C) with MgI2 catalyst: Mg + H2 → MgH2 Lower temperature production from Mg and H2 using nanocrystalline Mg produced in ball mills has been investigated. Other preparations include: the hydrogenation of magnesium anthracene under mild conditions: Mg(anthracene) + H2 → MgH2 the reaction of diethylmagnesium with lithium aluminium hydride product of complexed MgH2 e.g. MgH2.THF by the reaction of phenylsilane and dibutyl magnesium in ether or hydrocarbon solvents in the presence of THF or TMEDA as ligand. Structure and bonding The room temperature form α-MgH2 has a rutile structure. There are at least four high pressure forms: γ-MgH2 with α-PbO2 structure, cubic β-MgH2 with Pa-3 space group, orthorhombic HP1 with Pbc21 space group and orthorhombic HP2 with Pnma space group. Additionally a non stoichiometric MgH(2-δ) has been characterised, but this appears to exist only for very small particles (bulk MgH2 is essentially stoichiometric, as it can only accommodate very low concentrations of H vacancies). The bonding in the rutile form is sometimes described as being partially covalent in nature rather than purely ionic; charge density determination by synchrotron x-ray diffraction indicates that the magnesium atom is fully ionised and spherical in shape and the hydride ion is elongated. Molecular forms of magnesium hydride, MgH, MgH2, Mg2H, Mg2H2, Mg2H3, and Mg2H4 molecules identified by their vibrational spectra have been found in matrix isolated samples at below 10 K, formed following laser ablation of magnesium in the presence of hydrogen. The Mg2H4 molecule has a bridged structure analogous to dimeric aluminium hydride, Al2H6. Reactions MgH2 readily reacts with water to form hydrogen gas: MgH2 + 2 H2O → 2 H2 + Mg(OH)2 At 287 °C it decomposes to produce H2 at 1 bar pressure. The high temperature required is seen as a limitation in the use of MgH2 as a reversible hydrogen storage medium: MgH2 → Mg + H2 References Magnesium compounds Metal hydrides
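As a quick arithmetic check of the 7.66% hydrogen-by-weight figure quoted above (a back-of-the-envelope calculation using standard atomic masses, not taken from the article):

```latex
\mathrm{wt\%\,H} \;=\; \frac{2\,M_\mathrm{H}}{M_\mathrm{Mg} + 2\,M_\mathrm{H}}
\;=\; \frac{2 \times 1.008}{24.305 + 2 \times 1.008}
\;=\; \frac{2.016}{26.321} \;\approx\; 7.66\%
```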
Magnesium hydride
Chemistry
573
470,261
https://en.wikipedia.org/wiki/Night-blooming%20cereus
Night-blooming cereus is the common name referring to many flowering ceroid cacti that bloom at night. The flowers are short lived, and some of these species, such as Selenicereus grandiflorus, bloom only once a year, for a single night, though most put out multiple flowers over several weeks, each of which opens for only a single night. Other names for one or more cacti with this habit are princess of the night, Honolulu queen (for Hylocereus undatus), Christ in the manger, dama de noche, and queen of the night (which is also used for an unrelated plant species). Genera and species While many cacti referred to as night-blooming cereus belong to the tribe Cereeae, other night-blooming cacti in the subfamily Cactoideae may also be called night-blooming cereus. Cacti which may be called by this name include: Cereus Echinopsis (usually Echinopsis pachanoi, San Pedro cactus) Epiphyllum (usually Epiphyllum oxypetalum, gooseneck cactus; grown as an indoor houseplant throughout the world, and the most popular cultivated night-blooming cereus) Harrisia Hylocereus (of which Hylocereus undatus is the most frequently cultivated outdoors and is the main source of the commercial fruit crop, dragonfruit) Monvillea Nyctocereus (usually Nyctocereus serpentinus) Peniocereus (Peniocereus greggii, the best known, is strictly a desert plant which grows from an underground tuber and is infrequently cultivated) Selenicereus (usually Selenicereus grandiflorus) Trichocereus Description Regardless of genus or species, night-blooming cereus flowers are almost always white or very pale shades of other colors, often large and frequently fragrant. Most of the flowers open after nightfall, and by dawn, most are wilting. Plants in the same geographical area tend to bloom on the same night. Also, for healthy plants, there can sometimes be as many as three separate blooming events spread out over the warmest months. The plants that bear such flowers can be tall, columnar, and sometimes extremely large and tree-like, but more frequently are thin-stemmed climbers. While some night-blooming cereus are grown indoors in homes or greenhouses in colder climates, most plants are too large or ungainly for this treatment and are only found outdoors in tropical areas. Cultivation and uses The dried flowers of the night-blooming cereus (霸王花) are a common ingredient used in Cantonese slow-simmered soup (traditional Chinese: 老火湯; pinyin: lǎohuǒ tāng; Jyutping: lou5 fo2 tong1). Some night-blooming cereus plants produce fruits which are large enough for people to consume. These include some of the members of the genus Cereus, but most commonly the fruit of the Hylocereus. Hylocereus fruit has the advantage of lacking exterior spines, in contrast to the fruit of cacti such as the Selenicereus fruit, being brightly colored and having a pleasant taste. Since the late 1990s, Hylocereus fruit has been commercially grown and sold in tropical locations like Australia, the Philippines, Vietnam, Taiwan, and Hawaii. See also Ceroid cactus Pitaya Queen of the Night References Notes Sources Night Blooming Cereus Bud to Bloom documentation over 33-day period Night-blooming plants Plant common names Cactoideae
Night-blooming cereus
Biology
756
8,892,818
https://en.wikipedia.org/wiki/Home%20range
A home range is the area in which an animal lives and moves on a periodic basis. It is related to the concept of an animal's territory which is the area that is actively defended. The concept of a home range was introduced by W. H. Burt in 1943. He drew maps showing where the animal had been observed at different times. An associated concept is the utilization distribution which examines where the animal is likely to be at any given time. Data for mapping a home range used to be gathered by careful observation, but in more recent years, the animal is fitted with a transmission collar or similar GPS device. The simplest way of measuring the home range is to construct the smallest possible convex polygon around the data but this tends to overestimate the range. The best known methods for constructing utilization distributions are the so-called bivariate Gaussian or normal distribution kernel density methods. More recently, nonparametric methods such as the Burgman and Fox's alpha-hull and Getz and Wilmers local convex hull have been used. Software is available for using both parametric and nonparametric kernel methods. History The concept of the home range can be traced back to a publication in 1943 by W. H. Burt, who constructed maps delineating the spatial extent or outside boundary of an animal's movement during the course of its everyday activities. Associated with the concept of a home range is the concept of a utilization distribution, which takes the form of a two dimensional probability density function that represents the probability of finding an animal in a defined area within its home range. The home range of an individual animal is typically constructed from a set of location points that have been collected over a period of time, identifying the position in space of an individual at many points in time. Such data are now collected automatically using collars placed on individuals that transmit through satellites or using mobile cellphone technology and global positioning systems (GPS) technology, at regular intervals. Methods of calculation The simplest way to draw the boundaries of a home range from a set of location data is to construct the smallest possible convex polygon around the data. This approach is referred to as the minimum convex polygon (MCP) method which is still widely employed, but has many drawbacks including often overestimating the size of home ranges. The best known methods for constructing utilization distributions are the so-called bivariate Gaussian or normal distribution kernel density methods. This group of methods is part of a more general group of parametric kernel methods that employ distributions other than the normal distribution as the kernel elements associated with each point in the set of location data. Recently, the kernel approach to constructing utilization distributions was extended to include a number of nonparametric methods such as the Burgman and Fox's alpha-hull method and Getz and Wilmers local convex hull (LoCoH) method. This latter method has now been extended from a purely fixed-point LoCoH method to fixed radius and adaptive point/radius LoCoH methods. Although, currently, more software is available to implement parametric than nonparametric methods (because the latter approach is newer), the cited papers by Getz et al. demonstrate that LoCoH methods generally provide more accurate estimates of home range sizes and have better convergence properties as sample size increases than parametric kernel methods. 
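To make the minimum convex polygon (MCP) method described above concrete, here is a minimal sketch that computes a 100% MCP home-range estimate from a set of (x, y) relocation fixes. It assumes projected coordinates in metres and uses SciPy's ConvexHull; the sample points and the function name are invented for illustration, and real analyses typically use dedicated home-range packages (parametric kernel, LoCoH, etc.) rather than this bare-bones version.

```python
# Minimal MCP home-range sketch: smallest convex polygon around relocation fixes.
# Assumes planar (projected) coordinates in metres; the sample data are made up.
import numpy as np
from scipy.spatial import ConvexHull

def mcp_home_range(fixes):
    """Return (area_m2, boundary_points) of the 100% minimum convex polygon."""
    pts = np.asarray(fixes, dtype=float)
    hull = ConvexHull(pts)
    # For 2-D input, ConvexHull.volume is the enclosed area (and .area the perimeter).
    return hull.volume, pts[hull.vertices]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fixes = rng.normal(loc=[500.0, 300.0], scale=120.0, size=(200, 2))  # fake GPS fixes
    area, boundary = mcp_home_range(fixes)
    print(f"MCP area: {area / 1e4:.1f} ha, boundary vertices: {len(boundary)}")
```

Because the polygon must enclose every fix, outliers inflate the estimate, which is the overestimation problem the text mentions.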
Home range estimation methods that have been developed since 2005 include LoCoH, Brownian bridge, line-based kernel, GeoEllipse, and line-buffer methods. Computer packages for using parametric and nonparametric kernel methods are available online. In the appendix of a 2017 JMIR article, the home ranges for over 150 different bird species in Manitoba are reported. See also Territoriality Dear enemy recognition References Ethology Ecology terminology
Home range
Biology
755
5,088,878
https://en.wikipedia.org/wiki/Countercurrent%20distribution
Countercurrent distribution (CCD, also spelled "counter current" distribution) is an analytical chemistry technique which was developed by Lyman C. Craig in the 1940s. Countercurrent distribution is a separation process that is founded on the principles of liquid–liquid extraction where a chemical compound is distributed (partitioned) between two immiscible liquid phases (oil and water for example) according to its relative solubility in the two phases. The simplest form of liquid-liquid extraction is the partitioning of a mixture of compounds between two immiscible liquid phases in a separatory funnel. This occurs in five steps: 1) preparation of the separatory funnel with the two phase solvent system, 2) introduction of the compound mixture into the separatory funnel, 3) vigorous shaking of the separatory funnel to mix the two layers and allow for mass transfer of compounds in and out of the phases, 4) The contents of the separatory funnel are allowed to settle back into two distinct phases and 5) the two phases are separated from each other by draining out the bottom phase. If a compound is insoluble in the lower phase it will distribute into the upper phase and stay in the separatory funnel. If a compound is insoluble in the upper phase it will distribute into the lower phase and be removed from the separatory funnel. If the mixture contains one or more compounds that are soluble in the upper phase and one or more compounds that are soluble in the lower phase, then an extraction has occurred. Often, an individual compound is soluble to a certain extent in both phases and the extraction is, therefore, incomplete. The relative solubility of a compound in two phases is known as the partition coefficient. While one separatory funnel is useful in separating certain compound mixtures with a carefully formulated biphasic solvent system, a series of separatory funnels may be employed to separate compounds that have different partition coefficients. Countercurrent distribution, therefore, is a method of using a series of vessels (separatory funnels) to separate compounds by a sequence of liquid-liquid extraction operations. Contrary to liquid-liquid extraction, in the CCD instruments the upper phase is decanted from the lower phase once the phases have settled. First, a mixture is introduced to vessel 1 (V1) charged with both phases and the liquid-liquid extraction process is performed. The upper phase is added to a second vessel (V2) which already holds fresh lower phase. Fresh upper phase is added to V1. Both vessels are shaken and allowed to settle. upper phase from V1 is transferred to V2 at the same time the upper phase from V2 is transferred to V3 which already holds fresh lower phase. Fresh upper phase is added to V1, all three vessels are shaken and settled and the process continues. Compounds that are more soluble in the upper phase than lower phase travel faster and farther down the series of vessels (the "train") while those compounds which are more soluble in the lower phase than the upper phase tend to lag behind. A compound insoluble in the upper phase will remain in V1 while a compound insoluble in the lower phase will stay in the lead vessel. Historical development Early work in the development of liquid-liquid separation techniques was undertaken by Cornish et al. with a process called "systematic fractional distribution" as well as Randall and Longtin, however, the central figure is certainly Lyman C. Craig. 
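The transfer sequence described above lends itself to a very small simulation. The sketch below is a toy model, not Craig's apparatus: it assumes equal upper- and lower-phase volumes in every vessel, so a compound with partition coefficient K has a fraction p = K/(1+K) in the upper phase at each equilibration, and after n transfers its distribution across the vessels is binomial. That is why compounds with different K values separate along the train.

```python
# Toy Craig countercurrent distribution: equal phase volumes, ideal equilibration.
# After each equilibration every upper phase is decanted one vessel to the right.

def ccd_profile(partition_coefficient, n_transfers):
    """Fraction of a compound in each vessel (index 0..n_transfers) after n transfers."""
    p = partition_coefficient / (1.0 + partition_coefficient)  # fraction in upper phase
    lower = [1.0] + [0.0] * n_transfers   # amount in the lower phase of each vessel
    upper = [0.0] * (n_transfers + 1)     # amount in the upper phase of each vessel
    for _ in range(n_transfers):
        # Equilibrate every vessel, then move each upper phase into the next vessel.
        totals = [l + u for l, u in zip(lower, upper)]
        lower = [t * (1.0 - p) for t in totals]
        upper = [0.0] + [t * p for t in totals[:-1]]
    return [l + u for l, u in zip(lower, upper)]

if __name__ == "__main__":
    # Two compounds with different partition coefficients spread apart along the train.
    for k in (0.5, 3.0):
        profile = ccd_profile(k, n_transfers=10)
        peak = max(range(len(profile)), key=profile.__getitem__)
        print(f"K={k}: peak in vessel {peak}")
```

A compound favouring the upper phase (large K) peaks near the lead vessel, while one favouring the lower phase lags behind, matching the behaviour described in the paragraph above.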
Lyman Craig's development of countercurrent distribution began with studying the distribution of a pharmaceutical, mepacrine (atabrine), between the two layers of an ethylene dichloride, methanol, and aqueous buffer biphasic solvent system. The distribution coefficient (Kc, which coincides with the partition coefficient) of atabrine varied by the composition of the solvent system and the pH of the buffer. In a subsequent article, Craig was inspired by the work of Martin and Synge with partition chromatography to develop an apparatus that would separate compounds based on their distribution constant (K, which coincides with the partition coefficient). It was shown that a solvent system composed of benzene, n-hexane, methanol and water would separate mixtures of organic acids. It is remarkable that the mathematical theory developed hand-in-hand with the progression of applications. Craig continued to pursue this method of separation by testing different compounds, formulating biphasic solvent systems, and most importantly developing a commercially viable instrument. The CCD technique was employed in many notable separations such as penicillin, polycyclic aromatic hydrocarbons, insulin, bile acids, ribonucleic acids, taxol, Streptomyces antibiotics, and many other antibiotics. References Analytical chemistry
Countercurrent distribution
Chemistry
988
48,968
https://en.wikipedia.org/wiki/Guanxi
Guanxi () is a term used in Chinese culture to describe an individual's social network of mutually beneficial personal and business relationships. The character guan, 关, means "closed" and "caring" while the character xi 系 means "system" and together the term refers to a closed caring system of relationships that is somewhat analogous to the term old boy's network in the West. In Western media, the pinyin romanization guanxi is more widely used than common translations such as "connections" or "relationships" because those terms do not capture the significance of a person's guanxi to most personal and business dealings in China. Unlike in the West, guanxi relationships are almost never established purely through formal meetings but must also include spending time to get to know each other during tea sessions, dinner banquets, or other personal meetings. Essentially, guanxi requires a personal bond before any business relationship can develop. As a result, guanxi relationships are often more tightly bound than relationships in Western personal social networks. Guanxi has a major influence on the management of businesses based in mainland China, Hong Kong, and those owned by Overseas Chinese people in Southeast Asia (the bamboo network). Guanxi networks are grounded in Confucian doctrine about the proper structure of family, hierarchical, and friendly relationships in a community, including the need for implicit mutual commitments, reciprocity, and trust. Guanxi has 3 sub-dimensions sometimes abbreviated as GRX which stands for ganqing, a measure of the emotional attachment in a relationship, renqing ( rénqíng/jen-ch'ing), the moral obligation to maintain a relationship with reciprocal exchange of favors, and xinren, or the amount of interpersonal trust. Guanxi is also related to the idea of "face" (, miànzi/mien-tzu), which refers to social status, propriety, prestige, or a combination of all three. Other related concepts include wulun (), the five cardinal types of relationships, which supports the idea of a long-term, developing relationship between a business and its client, and yi-ren and ren, which respectively support reciprocity and empathy. History The guanxi system developed in imperial, dynastic China. Historically, China lacked a strong rule of law and the government did not hold every citizen subject to the law. As a result, the law did not provide the same legal protection as it did in the West. Chinese people developed guanxi along with the concept of face and personal reputation to help ensure trust between each other in business and personal matters. Today, the power of guanxi resides primarily within the Chinese Communist Party (CCP). Description and usage In a personal context At its most basic, guanxi describes a personal connection between two people in which one is able to prevail upon another to perform a favor or service, or be prevailed upon, that is, one's standing with another. The two people need not be of equal social status. Guanxi can also be used to describe a network of contacts, which an individual can call upon when something needs to be done, and through which he or she can exert influence on behalf of another. Guanxi also refers to the benefits gained from social connections and usually extends from extended family, school friends, workmates and members of standard clubs or organizations. 
It is customary for Chinese people to cultivate an intricate web of guanxi relationships, which may expand in a huge number of directions, and includes lifelong relationships. Staying in contact with members of your network is not necessary to bind reciprocal obligations. Reciprocal favors are the key factor to maintaining one's guanxi web. At the same time failure to reciprocate is considered an unforgivable offense (that is, the more one asks of someone, the more one owes them). Guanxi can perpetuate a never-ending cycle of favors. The term is not generally used to describe interpersonal relationships within a family, although guanxi obligations can sometimes be described in terms of an extended family. Essentially, familial relations are the core of one's interpersonal relations, while the various non-familial interpersonal relations are modifications or extensions of familial relations. Chinese culture's emphasis on familial relations informs guanxi as well, making it such that both familial relations and non-familial interpersonal relations are grounded by similar behavioral norms. An individual may view and interact with other individuals in a way that is similar to their viewing of and interactions with family members; through guanxi, a relationship between two friends can be likened by each friend to being a pseudo elder sibling–younger sibling relationship, with each friend acting accordingly based on that relationship (the friend who sees himself as the "younger sibling" will show more deference to the friend who is the "older sibling"). Guanxi is also based on concepts like loyalty, dedication, reciprocity, and trust, which help to develop non-familial interpersonal relations, while mirroring the concept of filial piety, which is used to ground familial relations. Ultimately, the relationships formed by guanxi are personal and not transferable. In a business context In China, a country where business relations are highly socially embedded, guanxi plays a central role in the shaping and development of day-to-day business transactions by allowing inter-business relationships and relationships between businesses and the government to grow as individuals representing these organizations work with one another. Specifically, in a business context, guanxi occurs through individual interactions first before being applied on a corporate level (e.g., one member of a business may perform a favor for a member of another business because they have interpersonal ties, which helps to facilitate the relationship between the two businesses involved in this interaction). Guanxi also acts as an essential informal governance mechanism, helping leverage Chinese organizations on social and economic platforms. In places in China where institutions, like the structuring of local governments and government policies, may make business interactions less efficient to facilitate, guanxi can serve as a way for businesses to circumvent such institutions by having their members cultivate their interpersonal ties. Thus, guanxi is important in two domains: social ties with managers of suppliers, buyers, competitors, and other business intermediaries; and social ties with government officials at various national government-regulated agencies. Given its extensive influential power in the shaping of business operations, many see guanxi as a crucial source of social capital and a strategic tool for business success. 
Thanks to a good knowledge of guanxi, companies obtain secret information, increase their knowledge about precise government regulations, and receive privileged access to stocks and resources. Knowing this, some economists have warned that Western countries and others that trade regularly with China should improve their "cultural competency" in regards to practices such as guanxi. In doing so, such countries can avoid financial fallout caused by a lack of awareness regarding the way practices like guanxi operate. The nature of guanxi, however, can also form the basis of patron–client relations. As a result, it creates challenges for businesses whose members are obligated to repay favors to members of other businesses when they cannot sufficiently do so. In following these obligations, businesses may also be forced to act in ways detrimental to their future, and start to over-rely on each other. Members within a business may also start to more frequently discuss information that all members knew prior, rather than try and discuss information only known by select members. If the ties fail between two businesses within an overall network built through guanxi, the other ties comprising the overall network have a chance of failing as well. A guanxi network may also violate bureaucratic norms, leading to corporate corruption. Note that the aforementioned organizational flaws guanxi creates can be diminished by having more efficient institutions (like open market systems that are regulated by formal organizational procedures while promoting competition and innovation) in place to help facilitate business interactions more effectually. In East Asian societies, the boundary between business and social lives can sometimes be ambiguous as people tend to rely heavily on their closer relations and friends. This can result in nepotism in the workforce being created through guanxi, as it is common for authoritative figures to draw from family and close ties to fill employment opportunities, instead of assessing talent and suitability. This practice often prevents the most suitably qualified person from being employed for the position. However, guanxi only becomes nepotism when individuals start to value their interpersonal relationships as ways to accomplish their goals over the relationships themselves. When interpersonal relationships are seen in this light, then, it is usually the case that individuals are not viewing their cultivation of prospective business relationships without bias. In addition, guanxi and nepotism are distinct in that the former is inherently a social transaction (considering the emphasis on the actual act of building relationships) and not purely based in financial transactions, while the latter is explicitly based in financial transactions and has a higher chance of resulting in legal consequences. However, cronyism is less obvious and can lead to low-risk sycophancy and empire-building bureaucracy within the internal politics of an organisation. In a political context For relationship-based networks such as guanxi, reputation plays an important role in shaping interpersonal and political relations. As a result, the government is still the most important stakeholder, despite China's recent efforts to minimise government involvement. Key government officials wield the authority to choose political associates and allies, approve projects, allocate resources, and distribute finances. 
Thus, it is especially crucial for international companies to develop harmonious personal relationships with government officials. In addition to holding major legislative power, the Chinese government owns vital resources including land, banks, and major media networks and wields major influence over other stakeholders. Thus, it is important to maintain good relations with the central government in order for a business to maintain guanxi. However, the issue of guanxi as a form of government corruption has been raised into question over recent years. This is often the case when businesspeople interpret guanxi's reciprocal obligations as unethical gift-giving in exchange for government approval. The line drawn between ethical and unethical reciprocal obligation is unclear, but China is currently looking into understanding the structural problems inherent in the guanxi system. In a diasporic context Guanxi can be used as a school of thought that influences how ethnic Chinese think of and view society. The Chinese in the diaspora are more likely to adhere and connect to the group of people with shared background. Moreover, diasporic communities might possess ties with individuals in their home country. Guanxi allows the diaspora to maintain their networks and foster close relations with people in their home country and form a subethnic enclave within society. Guanxi could also influence how the diaspora assimilates into the host country, and how the diaspora deals with racism in society. Groups that could be studied are Chinese-Americans, Chinese-Indonesians who have faced prejudice in their host countries. Marred by the LA massacre in 1871, Saigu in 1992, the Japanese American internment during World War II, and the idea of the "Hindu Invasion", the Asian Americans already in the United States faced discrimination from the wider American society. They had to find solutions based on trial and error, looking for legal, political, and social ways to find their place in society. Ethical concerns In recent years, the ethical consequences of guanxi have been brought into question. While guanxi can bring benefits to people directly within the guanxi network, it also has the potential to bring harm to individuals, societies and nations when misused or abused. For example, mutual reciprocal obligation is a major component of guanxi. However, the specific date, time and method are often unspecified. Thus, guanxi can be ethically questionable when one party takes advantage of others' personal favors, without seeking to reciprocate. A common example of unethical reciprocal obligation involves the abuse of business-government relations. In 2013, an official of the CCP criticized government officials for using public funds of over 10,000 yuan for banquets. This totals approximately 48 billion dollars worth of banquets per year. Guanxi may also allow for interpersonal obligations to take precedence over civic duties. Guanxi is a neutral word, but the use or practice of guanxi can range from 'benign, neutral, to questionable and corrupt'. In mainland China, terms like guanxi practice or la guanxi are used to refer to bribery and corruption. Guanxi practice is commonly employed by favour seekers to seek corrupt benefits from power-holders. Guanxi offers an efficient information transmission channel to help guanxi members to identify potential and trustworthy partners; it also offers a safe and secret platform for illegal transactions. 
Guanxi norms help buyers and sellers of corrupt benefits justify and rationalize their acts. Li's Performing Bribery in China (2011) as well as Wang's The buying and selling of military positions (2016) analyze how guanxi practice works in corrupt exchanges. This question is especially critical in cross-cultural business partnerships, when Western firms and auditors are operating within Confucian cultures. Western-based managers must exercise caution in determining whether or not their Chinese colleagues and business partners are in fact practicing guanxi. Caution and extra guidance should be taken to ensure that conflict does not occur as a result of misunderstood cultural agreements. Other studies argue that guanxi is not in fact unethical, but is wrongly judged unethical by those unacquainted with it and with Chinese culture. Just as the Western juridical system reflects Western ethical attitudes, the Eastern legal system can be said to reflect Eastern ones. Also, while Westerners might misunderstand guanxi as a form of corruption, the Chinese recognize guanxi as a subset of renqing, which likens the maintenance of interpersonal relationships to a moral obligation. As such, any relevant actions taken to maintain such relationships are recognized as working within ethical constraints. The term guanxixue (the 'art' or 'knowledge' of guanxi) is also used to specifically refer to the manipulation and corruption brought about by a selfish and sometimes illegal utilization of guanxi. In turn, guanxixue distinguishes unethical usage of guanxi from the term guanxi itself. Although many Chinese lament the strong importance of guanxi in their culture because of the unethical use that arises through it, they still consider guanxi as a Chinese element that should not be denied. Similar concepts in other cultures Sociologists have linked guanxi with the concept of social capital (it has been described as a Gemeinschaft value structure), and it has been exhaustively described in Western studies of Chinese economic and political behavior. Blat in Russian culture Shurobadzhanashtina in Bulgarian society Wasta in Middle Eastern culture Sociolismo in Cuban culture Old boy network in Anglo-Saxon and Finnish culture Dignitas in ancient Roman culture Ksharim (literally 'connections') in Israeli culture. Protektsia (from the word 'Protection') is the use of ksharim for personal gain or helping another, also known in slang as 'Vitamin P'. Enchufe (literally 'plug in' – compare English 'hook up') in Spain, meaning to 'plug' friends or acquaintances 'into' a job or position. Compadrazgo in Latin American culture Padrino System in the Philippines (basically "godfather" or patron), also known locally as "kapit" (Filipino word for "to hang on," "to hook on.") Western vs. Eastern social business relations The four dimensions of successful business networking are trust, bonding, mutual relationship, and empathy. However, the ways in which these dimensions are understood and built into business dealings differ considerably between East and West. From the Western point of view, trust means shared reliability, consistency, and reciprocity. From the Eastern point of view, trust is additionally synonymous with obligation: guanxi must be maintained through persistent, long-term association and contact.
The Chinese system of wulun (the basic norms of guanxi) supports the Eastern attitude, emphasizing that one's fulfillment of one's responsibilities in a given role ensures the smooth functioning of Chinese society. Reciprocity is likewise a dimension that is stressed considerably more in the East than in the West. According to Confucianism, every individual is urged to become a yi-ren (an exemplary person) and to repay a favor with considerably more than was received. Finally, empathy is a dimension that is deeply embedded in Eastern business bonds: the ability of sellers and customers to understand each other's needs is considered extremely important. The Confucian understanding of ren, which also equates to "Do not do to others as one does not want others to do to him", stresses the importance for sellers and customers to understand each other's needs. Cross-cultural differences in its usage also distinguish Western relationship marketing from Chinese guanxi. Unlike Western relationship marketing, where networking plays a more surface-level impersonal role in shaping larger business relations, guanxi plays a much more central and personal role in shaping social business relations. Chinese culture borrows much of its practices from Confucianism, which emphasizes collectivism and long-term personal relations. Likewise, guanxi functions within Chinese culture and directly reflects the values and behaviors expressed in Chinese business relations. For example, reciprocal obligation plays an intricate role in maintaining harmonious business relations. It is expected that both sides not only stay friendly with each other, but also reciprocate a favor given by the other party. Western relationship marketing, on the other hand, is much more formally constructed, in which no social obligation and further exchanges of favors are expected. Thus, long-term personal relations are more emphasized in Chinese guanxi practice than in Western relationship marketing. See also Blat (similar phenomenon in Russia) Sociolismo (similar phenomenon in Cuba) Compadrazgo (similar phenomenon in Latin America) Ubuntu philosophy (similar phenomenon in Africa) System D (similar concept of informality from European French) Bamboo network Chinese social relations Ganqing Mianzi Social capital Social network Xenos (guest-friend) an ancient Greek concept References External links China's modern power house, BBC article discussing the role of Guanxi in the modern governance of China. What is guanxi? Wiki discussion about definitions of guanxi, developed by the publishers of Guanxi: The China Letter. Guanxi, The art of relationships, by Robert Buderi and Gregory T. Huang. China Characteristics – Regarding Guanxi GCiS China Strategic Research Bamboo network Business culture Chinese culture Society of China Confucianism in China Interpersonal relationships
Guanxi
Biology
3,956
24,306,333
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Greece
The NUTS codes of Greece are part of the Nomenclature of Territorial Units for Statistics, an official nomenclature of the European Commission used by Eurostat for statistical purposes. Changes In 2011, the NUTS1 code of Greece was changed from GR to EL. GR1 was changed to EL5, GR2 to EL6, GR3 to EL3 and GR4 to EL4. The change became official per European Commission regulation No. 31/2011. With regard to the transmission of data to Eurostat, the new codes entered into force by 1 January 2012. Following the Kallikratis territorial reform, the NUTS regions of Greece were redefined. With the region of Epirus being reclassified as part of Voreia Ellada ("Northern Greece", former EL1), and the region of Thessaly in exchange going to Kentriki Ellada ("Central Greece", former EL2), new NUTS1 codes have been assigned to both regions. Apart from that, a number of third-level divisions have been changed. The changes became official in December 2013 by European Commission regulation No. 1319/2013. With regard to the transmission of data to Eurostat, the new codes entered into force on 1 January 2015. NUTS levels The three NUTS levels are as follows: NUTS codes Per 1 January 2015, the NUTS codes for Greece are as follows: Local administrative units Below the NUTS levels, the two LAU (Local Administrative Units) levels are: The LAU codes of Greece can be downloaded here: See also Subdivisions of Greece ISO 3166-2 codes of Greece FIPS region codes of Greece References External links Hierarchical list of the Nomenclature of territorial units for statistics – NUTS and the Statistical regions of Europe Overview map of EU Countries – NUTS level 1 ELLADA – NUTS level 2 ELLADA – NUTS level 3 Correspondence between the NUTS levels and the national administrative units List of current NUTS codes Download current NUTS codes (ODS format) Departments of Greece, Statoids.com Greece Nuts
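To illustrate the 2011 recoding described above, here is a tiny, hypothetical helper that rewrites a pre-2011 Greek NUTS 1 code to the post-2011 form. It covers only the renamings the article states (GR changed to EL, with GR1 to EL5, GR2 to EL6, GR3 to EL3, GR4 to EL4); it deliberately ignores the later Kallikratis-related changes to lower-level codes, and the function name is invented.

```python
# Hypothetical helper covering only the renamings stated in the article:
# country prefix GR -> EL, and NUTS 1 codes GR1->EL5, GR2->EL6, GR3->EL3, GR4->EL4.
# Second- and third-level codes (changed again in 2013) are not handled here.
RECODE_2011 = {"GR": "EL", "GR1": "EL5", "GR2": "EL6", "GR3": "EL3", "GR4": "EL4"}

def recode_2011(old_code: str) -> str:
    try:
        return RECODE_2011[old_code]
    except KeyError:
        raise ValueError(f"no country- or NUTS 1-level mapping listed for {old_code!r}") from None

if __name__ == "__main__":
    for code in ("GR", "GR1", "GR2", "GR3", "GR4"):
        print(code, "->", recode_2011(code))
```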
NUTS statistical regions of Greece
Mathematics
401
65,299,368
https://en.wikipedia.org/wiki/Southwest%20National%20Primate%20Research%20Center
The Southwest National Primate Research Center (SNPRC) is a federally funded biomedical research facility affiliated with the Texas Biomedical Research Institute. The SNPRC became the seventh National Primate Research Center in 1999. Research The SNPRC has two scientific units: "Infectious Diseases Immunology & Control" and "Comparative Medicine & Health Outcomes". The SNPRC also has a Laboratory Core Services Division, which consists of three laboratories: immunology, research imaging, and pathology. Primates in captivity The center houses over 2,500 non-human primates. Among the primates held in captivity at the SNPRC are baboons, chimpanzees, common marmosets, and rhesus macaques. The center houses over 1,000 baboons, which makes it the world's largest colony of baboons used for biomedical research. Furthermore, the center sells primates from their colonies to other researchers. Incidents and controversies 2014 In 2014, a male baboon was injured in its cage and died after its injuries were uncared for. The injuries went unreported and the baboon went uncared for several days after. As a result, the baboon was emaciated, developed scabs and a large abscess on its leg, and also contracted blood poisoning from which he died. In 2014, a macaque was placed in a new group of other macaques, and sustained several severe injuries during the following year including a tail degloving injury and multiple lacerations to the face and body. A veterinarian recommended that the group be assessed by the facility behavior team, but no assessment was ever conducted. In 2014, the USDA cited the SNPRC for inaccuracies on their 2013 annual report. More specifically, the SNPRC did not accurately report the number of animals which had pain or distress that did not have anesthetic, analgesics or tranquilizing drugs administered. In 2014, a juvenile baboon was killed when a guillotine door fell on the animal. 2015 In 2015, a USDA inspection found that one research protocol contained incomplete descriptions of methods for hand rearing and euthanizing neonatal animals. In 2015, the USDA cited the SNPRC for three instances of negligence. The first instance involved the center having a supply of numerous outdated drugs and medical supplies. The second instance involved several bags of food enrichment items being left open, which may have allowed contamination and/or deterioration of the food. The third instance involved a large amount of cockroaches living in a primate housing area. In 2015, there were two incidents in which baboons were injured or killed, which were due to errors made by employees at the SNPRC. In one incident, a female baboon was injured after three male baboons gained access to here chute system. In the second incident, a male baboon gained access to a chute containing a female and her infant, attacked the two, and killed the infant. 2016 In 2016, a USDA inspection revealed several instances of negligence and breaches of protocol at the SNPRC. In one instance, researchers had failed to use the approved scoring sheet and euthanasia criteria for a particular study. In another instance, the center had used 45 more animals in two studies than they had been approved for. In yet another instance, the USDA found that the SNPRC's 2015 annual report was missing information regarding the standards and regulations for the sanitation of primate enclosures and the feeding of primates. 
In another instance, it was revealed that animals in some studies may have experienced unrelieved pain or distress prior to euthanasia. 2017 In 2017, a baboon received second-degree burns from an exposed heating pipe in its cage. In 2017, a USDA inspection report revealed deteriorating and unmaintained conditions in some of the primate cages. More specifically, some surfaces were deteriorating and paint was eroding from one of the walls. In 2017, two macaques sustained injuries after they opened a divider between their enclosures and commingled. This incident was the fault of a caretaker who failed to secure the latch on the divider. 2018 In April 2018, four baboons escaped from the SNPRC but were later recaptured. 2019 In 2019, a macaque sustained an injury to her finger after sticking it in a hole in her enclosure. As a result, the macaque's finger had to be amputated. Staff at the SNPRC were aware of the risk posed by the hole in the enclosure, but did not take precautions against it on that day. In 2019, a marmoset was severely injured and then euthanized after another marmoset gained access to its cage. In 2019, a USDA inspection report revealed several instances of unclean and deteriorating conditions at the center. For example, the report described dirty light fixtures, peeling paint, damaged drywall, and a damaged counter edge. 2021 In 2021, USDA inspectors reported that the walls of numerous animal enclosures had peeling paint, which made the walls difficult to sanitize properly. See also Texas Biomedical Research Institute References External links SNPRC home page Primate research centers Animal testing on non-human primates Medical research institutes in Texas Biomedical research foundations
Southwest National Primate Research Center
Engineering,Biology
1,097
11,793,523
https://en.wikipedia.org/wiki/Armillaria%20fuscipes
Armillaria fuscipes is a plant pathogen that causes Armillaria root rot on Pinus, coffee plants, tea and various hardwood trees. It is common in South Africa. The mycelium of the fungus is bioluminescent. Host and symptoms Armillaria root rot is a disease that affects a wide variety of trees and is caused by multiple species in the Armillaria species complex. Armillaria spp. is a basidiomycete fungi. The symptoms for Armillaria spp. can vary greatly because of the wide host range and different species of pathogen. The hosts of Armillaria fuscipes specifically are tropical members of the genus Pinus, Camellia sinensis (tea), and members of the genus Coffea. General symptoms of A. fuscipes, include stunting of the plant, sparse foliage and chlorosis of the leaves. For hosts in the Pinus genus, such as Pinus elliottii, P. kesiya, P. patula, P. taeda, chlorosis of the needles of the infected plant is also a common symptom. Signs of this pathogen are white fans of hyphae that grow between the bark and wood of infected trees as well as the black mycelial cord or rhizomorph of the fungi growing in a net around the root system. The mycelium of A. fuscipes are bioluminescent and the rhizomorph is used to transfer nutrients over large distances to create fruiting bodies as well as infect other trees. The fruiting bodies are brown and white mushrooms that emerge from the base of the tree. The cracking of bark and resin leaking from the base of the tree are other symptoms seen mostly in the Pinus hosts. Importance Armillaria root rot caused by this A. fuscipes can result in the death of many Pinus species native to South Africa. The disease can spread from one tree to many and result in patches of dead trees of a considerable area. A. fuscipes is the major cause of armillaria root rot on tea in Kenya and has been found in other African countries. This has major economic implications for the tea industry in countries where the pathogen is prevalent, especially because of its wide distribution in Africa ranging from South Africa to as far north as Ethiopia. Kenya is the largest producer of tea in Africa, which accounts for 17–20% of the revenue made from exports. The way the disease spreads and symptoms, which greatly affect yield, make it an important disease to control, primarily in places where the plants it affects are of economic importance. A. fuscipes can infect coffee plants as well, but it mostly affects stands of tea. Management Managing A. fuscipes can be difficult because removing the pathogen via the application of fungicides isn't very straight forward. While fumigation of the plants is an option for control, it isn't often used because many fumigants, such as methyl bromide, are banned due to their extreme toxicity and the adverse effects they have on the environment. Another option for controlling inoculum is mechanical removal of infected stumps and plant material. It is difficult to completely eradicate the pathogen in this manner and it is invasive, expensive and labor-intensive. Some newer and more promising methods of management include solarization of the soil and the application of Trichoderma harzianum to the soil as a biological control. In a German study, it was found that solarization for 10 weeks increased the soil temperature enough that the viability of the pathogen was almost eliminated. The application of T. harzianum was effective in controlling A. fuscipes in woody species, and when combined with 5 weeks of solarization, caused a total loss of pathogen viability. 
Breeding for resistance and increasing host vigor are also options for long term management of this pathogen. See also List of Armillaria species List of bioluminescent fungi References Bioluminescent fungi fuscipes Coffee diseases Fungal tree pathogens and diseases Fungi described in 1909 Fungus species
Armillaria fuscipes
Biology
839
38,858,316
https://en.wikipedia.org/wiki/Trenbolone%20enanthate
Trenbolone enanthate, known by the nickname Trenabol, is a synthetic and injected anabolic–androgenic steroid (AAS) and a derivative of nandrolone which was never marketed. It is the C17β enanthate ester and a long-acting prodrug of trenbolone. Trenbolone enanthate was never approved for medical or veterinary use but is used in scientific research and has been sold on the internet black market as a designer steroid for bodybuilders and athletes. Side Effects Trenbolone enanthate, being a potent anabolic steroid, has several potential side effects stemming from its particularly strong androgenic properties and its regulation of human hormones. Psychological Anabolic–androgenic steroid (AAS) users report significant psychological changes. Fluctuation in testosterone, a key AAS, affects the brain's structure and function, potentially leading to mood disorders and aggressive behavior. Trenbolone enanthate users have reported significant psychological changes, including increased aggression, mood instability, and impaired social interactions. Prolonged use of AAS may also lead to psychological dependence, with approximately 1/3 of users experiencing withdrawal symptoms upon cessation. Reproductive health Anabolic–androgenic steroids (AAS) impact male reproductive health. Their misuse can lead to a condition known as anabolic steroid-induced hypogonadism (ASIH), where natural testosterone production is suppressed. The decreased testosterone production leads to decreased sperm production and can cause prolonged infertility even after stopping use. Chronic AAS use increases the risk of infertility due to its impact on hormonal balance, and some effects such as gynecomastia may not be reversible. Androgenic Trenbolone enanthate can cause several androgenic side effects such as increased body hair growth, acne, and potential baldness in predisposed individuals. Gynecomastia Gynecomastia is a condition that enlarges the breast tissue in males which is often produced as a side effect from the use of AAS. AAS can disrupt the normal balance of estrogen and testosterone in the body due to the increase in testosterone, which can be aromatized into estrogen. Elevated levels of estrogen in males can be linked to both weight gain and gynecomastia. While trenbolone enanthate is known for not aromatizing into estrogen, trenbolone exhibits high progestogenic activity, one of the three pathways by which breast tissue develops in males, which can also lead to gynecomastia. Primary Uses Bodybuilding Trenbolone enanthate is renowned for its capacity to promote significant muscle growth and strength gain. Its anabolic effects facilitate increased protein synthesis and nitrogen retention in muscle tissue. Trenbolone enanthate is also notable in the field of strength gain, with many users reporting marked improvements in their lifting capabilities, in part due to the AAS' ability to increase red blood cell count and improve oxygenation of muscle tissue along with the increase in testosterone. The compound is also often employed for its fat-burning properties. It enhances the metabolic rate and promotes the conversion of fat into energy, contributing to leaner muscle development during cutting cycles in bodybuilding. Enhanced Recovery Trenbolone enanthate is a potential treatment for muscle and bone loss without adverse effects commonly associated with testosterone, such as prostate growth or polycythemia.
Trenbolone enanthate was hypothesized to offer benefits similar to selective androgen receptor modulators (SARMs) due to its inability to convert into more potent androgens in specific tissues. Veterinary uses In the veterinary field, trenbolone enanthate has a history of use for increasing muscle mass in livestock. This application is aimed at improving the lean muscle yield in animals prior to slaughter, enhancing the quality of meat production. History Trenbolone was first synthesized in 1963 by L. Velluz and his co-workers. It was originally developed for veterinary use to improve muscle mass and feed efficiency in cattle; however, trenbolone's potent anabolic and androgenic properties soon caught the attention of bodybuilders and athletes. The drug has never been approved for human use, which has legal implications. Legality Trenbolone enanthate has never had regulatory approval for human use from any health agency. Its legal status varies by country; however, it is commonly a controlled substance and non-prescribed use is illegal. For example, in the United States the Drug Enforcement Administration (DEA) classifies trenbolone and its esters (including the acetate and enanthate forms) as Schedule III controlled substances, similar to Australia, where non-prescribed use or possession is a criminal offence. In the United Kingdom, by contrast, trenbolone is considered a class C drug with no penalty for personal use or possession. The legal framework surrounding trenbolone and its derivatives is complex and not universally consistent, reflecting the substance's potent effects and the concerns over its potential misuse. Sporting organizations, including the World Anti-Doping Agency (WADA), closely monitor the use of such substances, especially in competitive events like the Olympics, due to these concerns. See also List of androgen esters § Trenbolone esters References Further reading Abandoned drugs Androgen esters Anabolic–androgenic steroids Enanthate esters Estranes Ketones Sex hormone esters and conjugates Progestogens
Trenbolone enanthate
Chemistry
1,161
18,446,893
https://en.wikipedia.org/wiki/Rouse%20number
The Rouse number (P or Z) is a non-dimensional number in fluid dynamics which is used to define a concentration profile of suspended sediment and which also determines how sediment will be transported in a flowing fluid. It is the ratio between the sediment fall velocity ws and the upward velocity acting on the grain, given by the product of the von Kármán constant κ and the shear velocity u*: P = ws / (κ u*). Occasionally the factor β is included before the von Kármán constant, giving P = ws / (β κ u*), where β is a constant which correlates eddy viscosity to eddy diffusivity. It is generally taken to be equal to 1, and is therefore ignored in practical calculation; however, it should not be ignored when considering the full equation. The number is named after the American fluid dynamicist Hunter Rouse. It is a characteristic scale parameter in the Rouse profile of suspended sediment concentration with depth in a flowing fluid, in which the concentration of suspended sediment varies with height above the bed as a power law whose exponent is the negative Rouse number. It is also used to determine how the particles will move in the fluid. The approximate Rouse numbers corresponding to transport as bed load, suspended load, and wash load are: bed load, P > 2.5; suspended load (50% suspended), 1.2 < P < 2.5; suspended load (fully suspended), 0.8 < P < 1.2; wash load, P < 0.8. See also Sediment transport Sediment Dimensionless quantity References Whipple, K. X (2004), 12.163 Course Notes, MIT Open Courseware. Dimensionless numbers of physics Fluid dynamics Sedimentology
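The relationship above can be applied numerically. The following minimal Python sketch (not part of the original article; the settling and shear velocities are hypothetical values chosen only for illustration) computes the Rouse number and classifies the transport mode using the threshold values quoted above:

```python
# Illustrative sketch: Rouse number P = w_s / (beta * kappa * u_star) and a
# simple classification of the sediment transport mode.

def rouse_number(w_s, u_star, kappa=0.40, beta=1.0):
    """Rouse number for settling velocity w_s and shear velocity u_star (m/s)."""
    return w_s / (beta * kappa * u_star)

def transport_mode(P):
    if P > 2.5:
        return "bed load"
    elif P > 1.2:
        return "suspended load (50% suspended)"
    elif P > 0.8:
        return "suspended load (fully suspended)"
    return "wash load"

if __name__ == "__main__":
    P = rouse_number(w_s=0.03, u_star=0.05)  # hypothetical fine-sand values
    print(round(P, 2), transport_mode(P))     # 1.5 -> suspended load (50% suspended)
```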
Rouse number
Chemistry,Engineering
265
2,931,717
https://en.wikipedia.org/wiki/Conditioner%20%28chemistry%29
In chemistry and materials science, a conditioner is a substance or process that improves the quality of a given material. Conditioning agents used in skincare products are also known as moisturizers, and usually are composed of various oils and lubricants. One method of their use is as a coating of the substrate to alter the feel and appearance. For cosmetic products, this effect is a temporary one but can help to protect skin and hair from further damage. In cosmetic products the types of conditioning agents used are as follows: Emollients, usually oils, fats, waxes or silicones, which are hydrophobic molecules of natural or synthetic origin that coat the skin or hair and provide an occlusive surface that helps prevent further loss of moisture as well as providing slip and lubricity Humectants, typically polyols or glycols, that can hydrogen bond with water in the skin and hair and reduce water loss Cationic surfactants or polymers that are substantive to the slightly negatively-charged skin and hair and provide a film on the hair that limits further damage Fatty alcohols which are amphiphilic and provide a hydrophobic coating to skin and hair as well as building a lamellar structure in the cosmetic product that builds viscosity as well as improving product stability See also Chemical conditioning Leather conditioner References Materials science Cosmetics chemicals
Conditioner (chemistry)
Physics,Materials_science,Engineering
277
50,055,439
https://en.wikipedia.org/wiki/List%20of%20self-intersecting%20polygons
Self-intersecting polygons, crossed polygons, or self-crossing polygons are polygons some of whose edges cross each other. They contrast with simple polygons, whose edges never cross. Some types of self-intersecting polygons are: the crossed quadrilateral, with four edges the antiparallelogram, a crossed quadrilateral with alternate edges of equal length the crossed rectangle, an antiparallelogram whose edges are two opposite sides and the two diagonals of a rectangle, hence having two edges parallel Star polygons pentagram, with five edges hexagram, with six edges heptagram, with seven edges octagram, with eight edges enneagram or nonagram, with nine edges decagram, with ten edges hendecagram, with eleven edges dodecagram, with twelve edges icositetragram, with twenty four edges 257-gram, with two hundred and fifty seven edges See also Complex polygon Geometric shapes Mathematics-related lists
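As an illustration of the defining property (edges that cross each other), the following Python sketch gives a brute-force, O(n²) test of whether a polygon is self-intersecting. It is an assumed, simplified implementation, not drawn from the article itself, and it ignores degenerate collinear overlaps for brevity:

```python
# Illustrative sketch: test whether a polygon (list of 2-D vertices) is
# self-intersecting by checking every pair of non-adjacent edges.

def _ccw(a, b, c):
    # Signed area of the triangle a-b-c (positive if counter-clockwise).
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly cross."""
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def is_self_intersecting(vertices):
    n = len(vertices)
    edges = [(vertices[i], vertices[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # Skip adjacent edges, which always share an endpoint.
            if j == i + 1 or (i == 0 and j == n - 1):
                continue
            if _segments_cross(*edges[i], *edges[j]):
                return True
    return False

# A crossed quadrilateral ("bow-tie") is self-intersecting; a square is not.
print(is_self_intersecting([(0, 0), (1, 1), (1, 0), (0, 1)]))  # True
print(is_self_intersecting([(0, 0), (1, 0), (1, 1), (0, 1)]))  # False
```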
List of self-intersecting polygons
Mathematics
212
7,607
https://en.wikipedia.org/wiki/Collagen%20helix
In molecular biology, the collagen triple helix or type-2 helix is the main secondary structure of various types of fibrous collagen, including type I collagen. In 1954, Ramachandran & Kartha (13, 14) advanced a structure for the collagen triple helix on the basis of fiber diffraction data. It consists of a triple helix made of the repetitious amino acid sequence glycine-X-Y, where X and Y are frequently proline or hydroxyproline. Collagen folded into a triple helix is known as tropocollagen. Collagen triple helices are often bundled into fibrils which themselves form larger fibres, as in tendons. Structure Glycine, proline, and hydroxyproline must be in their designated positions with the correct configuration. For example, hydroxyproline in the Y position increases the thermal stability of the triple helix, but not when it is located in the X position. The thermal stabilization is also hindered when the hydroxyl group has the wrong configuration. Due to the high abundance of glycine and proline contents, collagen fails to form a regular α-helix and β-sheet structure. Three left-handed helical strands twist to form a right-handed triple helix. A collagen triple helix has 3.3 residues per turn. Each of the three chains is stabilized by the steric repulsion due to the pyrrolidine rings of proline and hydroxyproline residues. The pyrrolidine rings keep out of each other's way when the polypeptide chain assumes this extended helical form, which is much more open than the tightly coiled form of the alpha helix. The three chains are hydrogen bonded to each other. The hydrogen bond donors are the peptide NH groups of glycine residues. The hydrogen bond acceptors are the CO groups of residues on the other chains. The OH group of hydroxyproline does not participate in hydrogen bonding but stabilises the trans isomer of proline by stereoelectronic effects, therefore stabilizing the entire triple helix. The rise of the collagen helix (superhelix) is 2.9 Å (0.29 nm) per residue. The center of the collagen triple helix is very small and hydrophobic, and every third residue of the helix must have contact with the center. Due to the very tiny and tight space at the center, only the small hydrogen of the glycine side chain is capable of interacting with the center. This contact is impossible even when a slightly bigger amino acid residue is present other than glycine. References Protein structural motifs Helices Protein folds
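A small worked check of the helix geometry quoted above (assuming only the stated values of 3.3 residues per turn and a 2.9 Å rise per residue): one full turn of the superhelix rises about 3.3 × 2.9 ≈ 9.6 Å, roughly 1 nm.

```python
# Worked check of the collagen helix parameters quoted above.
residues_per_turn = 3.3
rise_per_residue_A = 2.9           # angstroms per residue
pitch_A = residues_per_turn * rise_per_residue_A
print(f"rise per turn ≈ {pitch_A:.1f} Å")  # rise per turn ≈ 9.6 Å (~0.96 nm)
```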
Collagen helix
Biology
566
313,373
https://en.wikipedia.org/wiki/Rip%20tide
A rip tide, or riptide, is a strong offshore current that is caused by the tide pulling water through an inlet along a barrier beach, at a lagoon or inland marina where tide water flows steadily out to sea during ebb tide. It is a strong tidal flow of water within estuaries and other enclosed tidal areas. The riptides become the strongest where the flow is constricted. When there is a falling or ebbing tide, the outflow water is strongly flowing through an inlet toward the sea, especially once stabilised by jetties. Dynamics During these falling and ebbing tides, a riptide can carry a person far offshore. For example, the ebbing tide at Shinnecock Inlet in Southampton, New York, extends more than offshore. Because of this, riptides are typically more powerful than rip currents. During slack tide, the water is motionless for a short period of time until the flooding or rising tide starts pushing the sea water landward through the inlet. Riptides also occur at constricted areas in bays and lagoons where there are no waves near an inlet. These strong, reversing currents can also be termed ebb jets, flood jet, or tidal jets by coastal engineers because they carry large quantities of sand outward that form sandbars far out in the ocean or into the bay outside the inlet channel. The term "ebb jet" would be used for a tidal current leaving an enclosed tidal area, and "flood jet" for the equivalent tidal current entering it. Rip tide and rip currents The term rip tide is often incorrectly used to refer to rip currents, which are not tidal flows. A rip current is a strong, narrow jet of water that moves away from the beach and into the ocean as a result of local wave motion. Rip currents can flow quickly, are unpredictable, and come about from what happens to waves as they interact with the shape of the sea bed. In contrast, a rip tide is caused by tidal movements, as opposed to wave action, and is a predictable rise and fall of the water level. The United States National Oceanic and Atmospheric Administration comments: Rip currents are not rip tides. A specific type of current associated with tides may include both the ebb and flood tidal currents that are caused by egress and ingress of the tide through inlets and the mouths of estuaries, embayments, and harbors. These currents may cause drowning deaths, but these tidal currents or tidal jets are separate and distinct phenomena from rip currents. Recommended terms for these phenomena include ebb jet, flood jet, or tidal jet. Surviving rip currents People often drown by swimming directly against a rip current, which tires them out. People are advised to not fight the current, which is too strong for any swimmer. People should not try to swim directly inwards, towards the beach. They should relax, and swim parallel to the beach. Eventually, they will be out of the rip current. See also Rip current Baïne Tidal bore References Oceanography Physical oceanography Bodies of water Tides
Rip tide
Physics,Environmental_science
620
28,057,162
https://en.wikipedia.org/wiki/Dwarf%20gallery
A dwarf gallery is an architectural ornament in Romanesque architecture. It is a natural development of the blind arcade and consists of an arcaded gallery, usually just below the roof, recessed into the thickness of the walls. Usually dwarf galleries can be found at church towers or apses but they frequently appear at other parts of buildings as well, or even go around the entire building. Although principally meant as a decorative element, some dwarf galleries can be used. During the septennial Pilgrimage of the Relics in Maastricht, relics were shown daily from the dwarf gallery of St Servatius' to pilgrims gathered in front of the church in Vrijthof. Dwarf galleries mainly appear at Romanesque churches in Germany and Italy. A few examples can be found in Belgium and the Netherlands (see Mosan art). Remarkably, in France no dwarf galleries were built. The oldest church in Germany with a dwarf gallery is Trier Cathedral. The apsis with dwarf gallery at Speyer Cathedral, described as “one of the most memorable pieces of Romanesque design”, was copied in many other places in the German Rhineland. Several of Cologne's twelve Romanesque churches feature dwarf galleries, as well as important Rhineland churches like Mainz Cathedral, Worms Cathedral and Bonn Minster. In Italy, dwarf galleries appear at churches in the central and northern regions of the country. Examples are Santa Maria della Pieve in Arezzo, Modena Cathedral, Pistoia Cathedral, San Donato in Genoa and Pisa Cathedral. The famous Leaning Tower of Pisa could be described as having six rings of dwarf galleries. In France, simple dwarf galleries are rare. But there was a luxurious development. In some façades, sculptures were placed between the columns. Most famous are the Galleries of Kings (Galeries des Rois) on Notre-Dame de Paris and the Cathedral of Amiens. Dwarf galleries incidentally feature in Romanesque Revival architecture, notably in Germany, but also in other parts of the world. References Romanesque architecture Architectural elements
Dwarf gallery
Technology,Engineering
400
20,500,703
https://en.wikipedia.org/wiki/List%20of%20omics%20topics%20in%20biology
Inspired by the terms genome and genomics, other words to describe complete biological datasets, mostly sets of biomolecules originating from one organism, have been coined with the suffix -ome and -omics. Some of these terms are related to each other in a hierarchical fashion. For example, the genome contains the ORFeome, which gives rise to the transcriptome, which is translated to the proteome. Other terms are overlapping and refer to the structure and/or function of a subset of proteins (e.g. glycome, kinome). An omicist is a scientist who studies omeomics, cataloging all the “omics” subfields. Omics.org is a Wiki that collects and alphabetically lists all the known "omes" and "omics." List of topics Hierarchy of topics For the sake of clarity, some topics are listed more than once. Bibliome Connectome Cytome Editome Embryome Epigenome Methylome Exposome Envirome Toxome Foodome Microbiome Sociome Genome Variome Exome ORFeome Transcriptome Proteome Kinome Secretome Chaperome Allergenome Pharmacogenome Regulome Hologenome Interactome Interferome Ionome Fluxome Membranome Metagenome Metallome Microbiome Moleculome Glycome Ionome Lipidome Metabolome Volatilome Metallome Proteome Acetylome Obesidome Organome Phenome Physiome Connectome Synaptome Dynome Mechanome Regulome Researchsome Toponome Trialome Antibodyome References Systems biology
List of omics topics in biology
Biology
354
265,731
https://en.wikipedia.org/wiki/PSR%20B1620%E2%88%9226%20b
PSR B1620-26 b is an exoplanet located approximately 12,400 light-years from Earth in the constellation of Scorpius. It bears the unofficial nicknames "Methuselah" and "the Genesis planet" (named after the Biblical character Methuselah, who, according to the Bible, lived to be the oldest person) due to its extreme age. The planet is in a circumbinary orbit around the two stars of PSR B1620-26 (which are a pulsar (PSR B1620-26 A) and a white dwarf (WD B1620-26)) and is the first circumbinary planet ever confirmed. It is also the first planet found in a globular cluster. The planet is one of the oldest known extrasolar planets, believed to be about 12.7 billion years old. Characteristics Mass, orbit, and age PSR B1620-26 b has a mass of 2.627 times that of Jupiter, and orbits at a distance of 23 AU (3.4 billion km), a little larger than the distance between Uranus and the Sun. Each orbit of the planet takes about 100 years. The triple system is just outside the core of the globular cluster Messier 4. The age of the cluster has been estimated to be about 12.7 billion years, and because all stars in a cluster form at about the same time, and planets form together with their host stars, it is likely that PSR B1620-26 b is also about 12.7 billion years old. This is much older than any other known planet discovered to date, and nearly three times as old as Earth. Host stars PSR B1620-26 b orbits a pair of stars. The primary star, PSR B1620-26, is a pulsar, a neutron star spinning at 100 revolutions per second, with a mass of 1.34 , a likely radius of around 20 kilometers (0.00003 ) and a likely temperature less than or equal to 300,000 K. The second is a white dwarf with a mass of 0.34 , a likely radius of around 0.01 , and a likely temperature less than or equal to 25,200 K. These stars orbit each other at a distance of 1 AU about once every six months. The age of the system is 12.7 to 13 billion years old, making this one of the oldest binary stars known. In comparison, the Sun has an age of 4.6 billion years. The binary system's apparent magnitude, or how bright it appears from Earth's perspective, is 24. It is far too dim to be seen with the naked eye. Evolutionary history The origin of this pulsar planet is still uncertain, but it probably did not form where it is found today. Because of the decreased gravitational force when the core of star collapses to a neutron star and ejects most of its mass in a supernova explosion, it is unlikely that a planet could remain in orbit after such an event. It is more likely that the planet formed in orbit around the star that has now evolved into the white dwarf, and that the star and planet were only later captured into orbit around the neutron star. Stellar encounters are not very common in the disk of the Milky Way, where the Sun is, but in the dense core of globular clusters, they occur frequently. At some point during the 10 billion years, the neutron star is thought to have encountered and captured the host star of the planet into a tight orbit, probably losing a previous companion star in the process. About half a billion years ago, the newly captured star began to expand into a red giant (see stellar evolution). Typical pulsar periods for young pulsars are of the order of one second, and they increase with time; the very short periods exhibited by so-called millisecond pulsars are due to the transfer of material from a binary companion. The pulse period of PSR B1620-26 is a few milliseconds, providing strong evidence for matter transfer. 
It is believed that as the pulsar's red giant companion expanded, it filled and then exceeded its Roche lobe, so that its surface layers started being transferred onto the neutron star. The infalling matter produced complex and spectacular effects. The infalling matter 'spun up' the neutron star, due to the transfer of angular momentum, and for a few hundred million years, the stars formed a low-mass X-ray binary, as the infalling matter was heated to temperatures high enough to glow in X-rays. Mass transfer came to an end when the surface layers of the mass-losing star were depleted, and the core slowly shrunk to a white dwarf. Now the stars peacefully orbit around each other. The long-term prospects for PSR B1620-26 b are poor, though. The triple system, which is much more massive than a typical isolated star in M4, is slowly drifting down into the core of the cluster, where the density of stars is very high. In a billion years or so, the triple will probably have another close encounter with a nearby star. The most common outcome of such encounters is that the lightest companion is ejected from the multiple star system. If this happens, PSR B1620-26 b will most likely be ejected completely from M4, and will spend the rest of its existence wandering alone in interstellar space as an interstellar planet. Detection and discovery Like nearly all extrasolar planets discovered prior to 2008, PSR B1620-26 b was originally detected through the Doppler shifts its orbit induces on radiation from the star it orbits (in this case, changes in the apparent pulsation period of the pulsar). In the early 1990s, a group of astronomers led by Donald Backer, who were studying what they thought was a binary pulsar, determined that a third object was needed to explain the observed Doppler shifts. Within a few years, the gravitational effects of the planet on the orbit of the pulsar and white dwarf had been measured, giving an estimate of the mass of the third object that was too small for it to be a star. The conclusion that the third object was a planet was announced by Stephen Thorsett and his collaborators in 1993. The study of the planetary orbit allowed the mass of the white dwarf star to be estimated as well, and theories of the formation of the planet suggested that the white dwarf should be young and hot. On July 10, 2003, the detection of the white dwarf and confirmation of its predicted properties were announced by a team led by Steinn Sigurdsson, using observations from the Hubble Space Telescope. It was at a NASA press briefing that the name Methuselah was introduced, capturing press attention around the world. Name While the designation PSR B1620-26 b is not used in any scientific papers, the planet is listed in the SIMBAD database as PSR B1620-26 b. Some popular sources use the designation PSR B1620-26 c to refer to the planet, as it was described as the third member of a triple system (composed of the planet and two stars). This designation doesn't appear in the SIMBAD database, and more modern naming conventions use a separated lettering system where lower-case letters to refer to planets and upper-case letters to designate stars (e.g. Gliese 667 Cc is the 'c' planet orbiting Gliese 667C, which is the 'C' star of a triple system), making PSR B1620-26 b the designation for a planet orbiting both stars of the PSR B1620-26 system. Neither usage is employed in the scientific literature with respect to the PSR B1620-26 planet. 
Though not officially recognized, the name "Methuselah" is commonly used for the planet in popular articles. The name is usually used informally, by analogy with the named planets of the Solar System, while the "letter" designation is used astronomically. Methuselah is the only planet to have received a biblical name or nickname, although three other extrasolar planets have been given unofficial mythological nicknames (just like the planets of the Solar System): Dimidium, originally dubbed "Bellerophon"; Gliese 581 g, sometimes called "Zarmina," or more rarely "Zarmina's World" or "Zarmina's Planet"; and HD 209458 b, occasionally referred to as "Osiris." See also 51 Pegasi b PSR 1257+12 Pulsar planet List of exoplanets discovered before 2000 List of exoplanet extremes List of exoplanet firsts References External links Circumbinary planets Exoplanets detected by timing Exoplanets discovered in 1993 Exoplanets with proper names Giant planets Pulsar planets Scorpius
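As a rough, order-of-magnitude consistency check (not part of the article), Kepler's third law in solar units can be applied to the orbital radius (about 23 AU) and the combined stellar mass (1.34 + 0.34 solar masses) quoted above, neglecting the planet's own mass:

```python
# Kepler's third law in solar units: P [yr] = sqrt(a^3 [AU] / M [solar masses]).
a_AU = 23.0                  # planet's orbital radius quoted above
M_total = 1.34 + 0.34        # pulsar + white dwarf masses, planet neglected
P_years = (a_AU**3 / M_total) ** 0.5
print(f"orbital period ≈ {P_years:.0f} years")  # ≈ 85 yr, of the order of the ~100 yr quoted
```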
PSR B1620−26 b
Astronomy
1,867
25,911,979
https://en.wikipedia.org/wiki/Pluteus%20glaucus
Pluteus glaucus is a mushroom in the family Pluteaceae. Chemistry 0.28% psilocybin, 0.12% psilocin (Stijve and de Meijer 1993). See also List of Pluteus species List of Psilocybin mushrooms References Fungi described in 1962 Fungi of Sweden glaucus Psychoactive fungi Psychedelic tryptamine carriers Taxa named by Rolf Singer Fungus species
Pluteus glaucus
Biology
87
23,125,145
https://en.wikipedia.org/wiki/Ammonium%20azide
Ammonium azide is the chemical compound with the formula , being the salt of ammonia and hydrazoic acid. Like other inorganic azides, this colourless crystalline salt is a powerful explosive, although it has a remarkably low sensitivity. is physiologically active and inhalation of small amounts causes headaches and palpitations. It was first obtained by Theodor Curtius in 1890, along with other azides. Structure Ammonium azide is ionic, meaning it is a salt consisting of ammonium cations and azide anions , therefore its formula is . It is a structural isomer of tetrazene. Ammonium azide contains about 93% nitrogen by mass. References Further reading Azides Explosive chemicals Ammonium compounds Nitrogen hydrides
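The nitrogen content quoted above (about 93% by mass) can be verified with a short calculation; the snippet below is only an illustrative check using standard atomic masses:

```python
# Worked check: ammonium azide, NH4N3, contains 4 N and 4 H per formula unit.
M_N, M_H = 14.007, 1.008            # atomic masses, g/mol
molar_mass = 4 * M_N + 4 * M_H       # ≈ 60.06 g/mol
nitrogen_fraction = 4 * M_N / molar_mass
print(f"{nitrogen_fraction:.1%}")    # 93.3%
```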
Ammonium azide
Chemistry
154
55,186,941
https://en.wikipedia.org/wiki/Hydridotetrakis%28triphenylphosphine%29rhodium%28I%29
Hydridotetrakis(triphenylphosphine)rhodium(I) is the coordination complex with the formula HRh[P(C6H5)3]4. It consists of a Rh(I) center complexed to four triphenylphosphine (PPh3) ligands and one hydride. The molecule has idealized C3v symmetry. The compound is a homogeneous catalyst for hydrogenation and related reactions. It is a yellow solid that dissolves in aromatic solvents. Preparation In the presence of base, H2, and additional triphenylphosphine, Wilkinson's catalyst (chloridotris(triphenylphosphane)rhodium(I)) converts to HRh(PPh3)4: RhCl(PPh3)3 + H2 + KOH + PPh3 → RhH(PPh3)4 + H2O + KCl References Rhodium(I) compounds Catalysts Homogeneous catalysis Triphenylphosphine complexes Hydrogenation catalysts Hydrido complexes
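The preparation equation above can be checked for atom balance. The following Python sketch (an illustrative check, not from the source) hard-codes the elemental composition of each species, treating PPh3 as P(C6H5)3, i.e. P1C18H15:

```python
# Verify that RhCl(PPh3)3 + H2 + KOH + PPh3 -> RhH(PPh3)4 + H2O + KCl is balanced.
from collections import Counter

PPh3 = Counter({"P": 1, "C": 18, "H": 15})

def times(c, n):
    return Counter({k: v * n for k, v in c.items()})

left = (Counter({"Rh": 1, "Cl": 1}) + times(PPh3, 3)   # RhCl(PPh3)3
        + Counter({"H": 2})                            # H2
        + Counter({"K": 1, "O": 1, "H": 1})            # KOH
        + PPh3)                                        # PPh3
right = (Counter({"Rh": 1, "H": 1}) + times(PPh3, 4)   # RhH(PPh3)4
         + Counter({"H": 2, "O": 1})                   # H2O
         + Counter({"K": 1, "Cl": 1}))                 # KCl

print(left == right)  # True: both sides carry the same atom counts
```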
Hydridotetrakis(triphenylphosphine)rhodium(I)
Chemistry
233
14,932,478
https://en.wikipedia.org/wiki/Testicular%20immunology
Testicular Immunology is the study of the immune system within the testis. It includes an investigation of the effects of infection, inflammation and immune factors on testicular function. Two unique characteristics of testicular immunology are evident: (1) the testis is described as an immunologically privileged site, where suppression of immune responses occurs; and, (2) some factors which normally lead to inflammation are present at high levels in the testis, where they regulate the development of sperm instead of promoting inflammation. History of testicular immunology 460-377 BC Hippocrates described testicular inflammation associated with mumps 1785 Hunter and Michaelis performed transplant experiments in domestic chickens 1849 Berthold transplanted testes between roosters and showed maintenance of male sex characteristics only in birds with successfully grafted testes 1899-1900 Sperm recognized as immunogenic (will cause an autoimmune reaction if transplanted from the testis into a different area of the body) by Landsteiner (1899) and Metchinikoff, (1900) 1913-1914 Human testis transplants performed by Lespinasse (1913), and Lydson (1914) who performed a graft on himself! 1954 Discovery that sperm autoantibodies contribute to infertility, 1977 Billingham recognized that the testis is site of immune privilege Immune cells found in the testis Immune cells of the human testis are not as well characterized as those from rodents, due to the rarity of normal human testes available for experiment. The majority of experiments have studied the rat testis due to its convenience: it is of relatively large size and is easily extracted from experimental animals. Macrophages Macrophages are directly involved in the fight against invading micro-organisms as well as being antigen-presenting cells which activate lymphocytes. Early studies demonstrated the presence of macrophages in the rat testis Testicular macrophages are the largest population of immune cells in the rodent testis. Macrophages have also been found in the testes of humans, guinea pigs, hamsters, boars, horses and bulls. They originate from blood monocytes which move into the testis then mature into macrophages. In the rat, testicular macrophages have been described as either “resident” or “newly arrived” from the blood supply. It is likely that most of the adult population of testicular macrophages in adult rats are a result of very rapid proliferation of early precursors that entered the testis during postnatal maturation Testicular macrophages can respond to infectious stimuli and become activated (undergo changes enabling the killing of the invading micro-organism), but do so to a lesser extent than other types of macrophages. An example is production of the inflammatory cytokines TNFα and IL-1β by activated rat testicular macrophages: these macrophages produce significantly less TNFα and IL-1β than activated rat peritoneal macrophages. Aside from responding to infectious stimuli, testicular macrophages are also involved in maintaining normal testis function. They have been shown to secrete 25-hydroxycholesterol, a sterol that can be converted to testosterone by Leydig cells. Their presence is necessary for the normal development and function of the Leydig cells, which are the testosterone-producing cells of the testis. B-Lymphocytes B-lymphocytes take part in the adaptive immune response and produce antibodies. These cells are not normally found in the testis, even during inflammatory conditions. 
The lack of B-lymphocytes in the testis is significant, since these are the antibody-producing cells of the immune system. Since anti-sperm antibodies can cause infertility, it is important that antibody-producing B-lymphocytes are kept separated from the testis. T-lymphocytes T-lymphocytes (T-cells) are white blood cells which take part in cell-mediated immunity. They are often found within tissues where they can be activated by antigen-presenting cells upon infection. They are present in rat and human testes, where they constitute approximately 10 to 20% of the immune cells present, as well as mouse and ram testes. Both cytotoxic T-cells and Helper T cells are found in the testes of rats. Also present in the testes of rats and humans are natural killer cells and Natural killer T cells have been found in rats and mice. Mast cells Mast cells are regulators of immune responses, particularly those against parasites. They are also involved in the development of autoimmune diseases and allergies. Mast cells have been found in relatively low numbers in the testes of humans, rats, mice, dogs, cats, bulls, boars and deer. In the mammalian testis mast cells regulate testosterone production. There are two lines of evidence that restriction of mast cell activation in the testis could be beneficial during treatment of inflammatory conditions; (1) In experimental models of testicular inflammation, mast cells were present in 10-fold greater numbers and showed signs of activation, and (2) Treatment with drugs which stabilize mast cell activation has proved beneficial in treating some types of male infertility. Eosinophils Eosinophils directly fight parasitic infections and are involved in allergic reactions. They have been found in relatively low numbers in the rat, mouse, dog, cat, bull and deer testes. Almost nothing is known about their significance or function in the testis. Dendritic cells Dendritic cells initiate adaptive immune responses. Relatively small amounts of dendritic cells have been found in the testes of humans, rats and mice. The functional role of dendritic cells in the testis is not well understood, although they have been shown to be involved in autoimmune orchitis during animal experiments. When autoimmune orchitis is induced in rats, the dendritic cell population of the testis greatly increases. This is likely to contribute to testicular inflammation, considering the well-established role of dendritic cells in other types of autoimmune inflammation. Neutrophils Neutrophils are white blood cells which are present in the blood but not normally in tissues. They move out from the blood into tissues and organs upon infection or damage. They directly fight invading pathogens such as bacteria. Neutrophils are not found in the rodent testis under normal conditions but can enter from the blood supply upon infection or inflammatory stimulus. This has been demonstrated in the rat after injection with bacterial cell wall components to produce an immune reaction. Neutrophils also enter the rat testis after treatment with hormones that increase the permeability of blood vessels. In humans, neutrophils have been found in the testis when associated with some tumors. In rat experiments, testicular torsion leads to neutrophil entry into the testis. Neutrophil activity in the testis is an inflammatory response which needs to be tightly regulated by the body, since inflammation-induced damage to the testis can lead to infertility. 
It is assumed that the role of the immunosuppressive environment of the testis is to protect developing sperm from inflammation. Immune privilege in the testis Sperm are immunogenic - that is they will cause an autoimmune reaction if transplanted from the testis into a different part of the body. This has been demonstrated in experiments using rats by Landsteiner (1899) and Metchinikoff (1900), mice and guinea pigs. The likely reason for this is that sperm first mature at puberty, after immune tolerance is established, therefore the body recognizes them as foreign and mounts an immune reaction against them. Therefore, mechanisms for their protection must exist in this organ to prevent any autoimmune reaction. The blood-testis barrier is likely to contribute to the survival of sperm. However, it is believed in the field of testicular immunology that the blood-testis barrier cannot account for all immune suppression in the testis, due to (1) its incompleteness at a region called the rete testis and (2) the presence of immunogenic molecules outside the blood-testis barrier, on the surface of spermatogonia. Another mechanism which is likely to protect sperm is the suppression of immune responses in the testis. Both the suppression of immune responses and the increased survival of grafts in the testis have led to its recognition as an immunologically privileged site. Other immunologically privileged sites include the eye, brain and uterus. The two main features of immune privilege in the rat testis are; a diminishment in the activation of testicular macrophages by infections such as bacteria, and a defect in the activation of T-cells when antigen is presented to them, leading to the absence of an adaptive immune response to sperm in the testis. It is also predicted that the high level of inflammatory cytokines in the testis contributes to immune privilege. Immune privilege in rodents and other experimental animals The existence of immune privilege in the testes of rodents is well accepted, due to many experiments demonstrating prolonged, and sometimes indefinite, survival of tissue transplanted into the testis, or testicular tissue transplanted elsewhere. Evidence includes the tolerance of testicular grafts in mice and rats, as well as the increased survival of transplants of pancreatic insulin-producing cells in rats, when cells from the testes (Sertoli cells) are added to the transplanted material. Complete spermatogenesis, forming functional pig or goat sperm, can be established by the grafting of pig or goat testicular tissue onto the backs of mice - however, immunodeficient mice needed to be used. Immune privilege in humans The presence of immune-privilege in the human testis is controversial and insufficient evidence exists to either confirm or rule out this phenomenon. Evidence for human/primate testicular immune privilege: Sperm are protected from autoimmune attack, which when it occurs in humans leads to infertility. Local injury of seminiferous tubules caused by fine-needle biopsies in humans does not cause testicular inflammation (orchitis). Furthermore, human testis cells tolerate early HIV infection with little response. Evidence against human/primate testicular immune-privilege: In transplant experiments, primate testes fail to support grafts of monkey thyroid tissue. Human testis tissue transplanted into the mouse elicited an immune response and was rejected, however, this immune response was not as extensive as that against other types of grafted tissue. 
How does the testis suppress immune responses? How the testicular environment suppresses the immune response is only partially understood. Recent experiments have uncovered a number of biological processes that most likely contribute to immune privilege in the testes of rodents: 1. Experiments in the rat have shown that Sertoli cells can help protect grafts from rejection. These cells were isolated from the testis, then added to transplants of the insulin-producing cells of the pancreas (islets of Langerhans), resulting in increased graft survival. Molecules released by the Sertoli cells are predicted to protect the graft. 2. It is likely that the testicular environment itself inhibits the activation of T-cells, in order to protect the developing sperm, which are immunogenic. The fluid present in the testis is a potent inhibitor of T-cell activation under laboratory conditions. 3. The diminished inflammatory response of the testis is likely to result from the relatively low levels of inflammatory cytokines released by activated testicular macrophages. Since protection of developing sperm is so important to the survival of a species, it would not be surprising if more than one mechanism were in use. Immune factors regulate normal testis function Curiously, the testis contains factors such as cytokines, which are usually only produced upon infection and tissue damage. The cytokines interleukin-1α (IL-1α), IL-6 and Activin A are found in the testis, often at high levels. In other tissues, these cytokines would promote inflammation, but here they control testis function. They regulate the development of sperm by controlling their cell division and survival. Other immune factors found in the testis include the enzyme inducible nitric oxide synthase (iNOS) and its product nitric oxide (NO), transforming growth factor beta (TGFβ), the enzyme cyclooxygenase-2 (COX-2) and its product prostaglandin E2, and many others. Further research is required to define the functional roles of these immune factors in the testis. The effects of infections and immune responses on the testis Mumps Mumps is a viral disease which causes swelling of the salivary glands and testes. The mumps virus lives in the upper respiratory tract and spreads through direct contact with saliva. Prior to widespread vaccination programs, it was a common childhood disease. Mumps is generally not serious in children, but in adults, in whom sperm have matured in the testis, it can cause more severe complications, such as infertility. Sexually transmitted diseases Gonorrhea is a sexually transmitted disease caused by the bacterium Neisseria gonorrhoeae which can lead to testicular pain and swelling. Gonorrhea also infects the female reproductive system around the cervix and uterus, and can grow in the mouth, throat, eyes and anus. It can be effectively treated with antibiotics; however, if untreated, gonorrhea can cause infertility in men. Chlamydia is caused by the sexually transmitted bacterium Chlamydia trachomatis, which infects the genitals. It more commonly affects women and, if untreated, can lead to pelvic inflammatory disease and infertility. Serious symptoms in men are rare, but include swollen testicles and an unusual discharge from the penis. It is effectively treated with antibiotics. Antisperm antibodies Antisperm antibodies (ASA) have been considered a cause of infertility in around 10–30% of infertile couples.
ASAs are directed against surface antigens on sperm and can interfere with sperm motility and transport through the female reproductive tract, inhibit capacitation and the acrosome reaction, impair fertilization, influence the implantation process, and impair growth and development of the embryo. Risk factors for the formation of antisperm antibodies in men include the breakdown of the blood-testis barrier, trauma and surgery, orchitis, varicocele, infections, prostatitis, testicular cancer, failure of immunosuppression and unprotected receptive anal or oral sex with men. Testicular torsion Testicular torsion is a condition in which physical twisting of the testis cuts off its blood supply. It leads to damage that, if not treated within a few hours, causes the death of testicular tissue and requires removal of the testis to prevent gangrene, and therefore can cause infertility. Autoimmune orchitis Orchitis is a condition of testicular pain involving swelling, inflammation and possibly infection. Orchitis can be caused by an autoimmune reaction (autoimmune orchitis) leading to a reduction in fertility. Autoimmune orchitis is rare in humans, compared to anti-sperm antibodies. To study orchitis in the testis, autoimmune orchitis has been induced in the rodent testis. The disease starts with the appearance of testicular antibodies, followed by movement of macrophages and lymphocytes from the blood stream into the testis, breaking of the physical interactions between the developing sperm and Sertoli cells, entry of neutrophils or eosinophils, and finally death of the developing sperm, leading to infertility. Inflammation models in the rodent Experiments in rats have examined, in fine detail, the course of testicular events during a bacterial infection. In the short term (3 hours) multiple inflammatory factors are produced and released by testicular macrophages. Examples are prostaglandin E2, inducible nitric oxide synthase (iNOS), TNFα and IL-1β, although at lower levels than in other tissues. Non-immune cells of the testis such as Sertoli cells and Leydig cells are also able to respond to bacteria. During a bacterial infection, testosterone levels and the amount of testicular interstitial fluid are reduced. Neutrophils enter the testis about 12 hours after infection. Importantly, there is damage to the developing sperm, which start to die under severe infections. Despite all the data on the effects of bacteria on normal testis parameters, there is little experimental data regarding their effect on rodent fertility. Other diseases where testicular inflammation can be a symptom Testicular inflammation can be a symptom of the following diseases: Coxsackie A virus, varicella (chicken pox), human immunodeficiency virus (HIV), dengue fever, Epstein–Barr virus-associated infectious mononucleosis, syphilis, leprosy, tuberculosis. References Branches of immunology Andrology
Testicular immunology
Biology
3,593
14,134,201
https://en.wikipedia.org/wiki/Advanced%20oxidation%20process
Advanced oxidation processes (AOPs), in a broad sense, are a set of chemical treatment procedures designed to remove organic (and sometimes inorganic) materials in water and wastewater by oxidation through reactions with hydroxyl radicals (·OH). In real-world applications of wastewater treatment, however, this term usually refers more specifically to a subset of such chemical processes that employ ozone (O3), hydrogen peroxide (H2O2) and UV light or a combination of the few processes. Description AOPs rely on in-situ production of highly reactive hydroxyl radicals (·OH) or other oxidative species for oxidation of contaminant. These reactive species can be applied in water and can oxidize virtually any compound present in the water matrix, often at a diffusion-controlled reaction speed. Consequently, ·OH reacts unselectively once formed and contaminants will be quickly and efficiently fragmented and converted into small inorganic molecules. Hydroxyl radicals are produced with the help of one or more primary oxidants (e.g. ozone, hydrogen peroxide, oxygen) and/or energy sources (e.g. ultraviolet light) or catalysts (e.g. titanium dioxide). Precise, pre-programmed dosages, sequences and combinations of these reagents are applied in order to obtain a maximum •OH yield. In general, when applied in properly tuned conditions, AOPs can reduce the concentration of contaminants from several-hundreds ppm to less than 5 ppb and therefore significantly bring COD and TOC down, which earned it the credit of "water treatment processes of the 21st century". The AOP procedure is particularly useful for cleaning biologically toxic or non-degradable materials such as aromatics, pesticides, petroleum constituents, and volatile organic compounds in wastewater. Additionally, AOPs can be used to treat effluent of secondary treated wastewater which is then called tertiary treatment. The contaminant materials are largely converted into stable inorganic compounds such as water, carbon dioxide and salts, i.e. they undergo mineralization. A goal of the wastewater purification by means of AOP procedures is the reduction of the chemical contaminants and the toxicity to such an extent that the cleaned wastewater may be reintroduced into receiving streams or, at least, into a conventional sewage treatment. Although oxidation processes involving ·OH have been in use since late 19th century (such as Fenton's reagent, which was used as an analytical reagent at that time), the utilization of such oxidative species in water treatment did not receive adequate attention until Glaze et al. suggested the possible generation of ·OH "in sufficient quantity to affect water purification" and defined the term "Advanced Oxidation Processes" for the first time in 1987. AOPs still have not been put into commercial use on a large scale (especially in developing countries) even up to today mostly because of relatively high associated costs. Nevertheless, its high oxidative capability and efficiency make AOPs a popular technique in tertiary treatment in which the most recalcitrant organic and inorganic contaminants are to be eliminated. The increasing interest in water reuse and more stringent regulations regarding water pollution are currently accelerating the implementation of AOPs at full-scale. There are roughly 500 commercialized AOP installations around the world at present, mostly in Europe and the United States. Other countries like China are showing increasing interests in AOPs. 
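As an illustration of the concentration reductions mentioned above (from hundreds of ppm down to a few ppb), contaminant removal in an AOP is commonly approximated as pseudo-first-order in the contaminant. The sketch below uses a purely hypothetical observed rate constant k_obs, so the numbers indicate scale only, not the performance of any real system:

```python
# Pseudo-first-order decay sketch: C(t) = C0 * exp(-k_obs * t).
import math

C0 = 300.0          # initial concentration, ppm
C_target = 0.005    # target concentration, ppm (5 ppb)
k_obs = 0.15        # hypothetical observed rate constant, 1/min

t = math.log(C0 / C_target) / k_obs
print(f"required treatment time ≈ {t:.0f} min")  # ≈ 73 min for these assumed values
```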
The reaction, using H2O2 for the formation of ·OH, is carried out in an acidic medium (2.5-4.5 pH) and a low temperature (30 °C - 50 °C), in a safe and efficient way, using optimized catalyst and hydrogen peroxide formulations. Chemical principles Generally speaking, chemistry in AOPs could be essentially divided into three parts: Formation of ·OH; Initial attacks on target molecules by ·OH and their breakdown to fragments; Subsequent attacks by ·OH until ultimate mineralization. The mechanism of ·OH production (Part 1) highly depends on the sort of AOP technique that is used. For example, ozonation, UV/H2O2, photocatalytic oxidation and Fenton's oxidation rely on different mechanisms of ·OH generation: UV/H2O2: H2O2 + UV → 2·OH (homolytic bond cleavage of the O-O bond of H2O2 leads to formation of 2·OH radicals) UV/HOCl: HOCl + UV → ·OH + Cl· Ozone based AOP: O3 + HO− → HO2− + O2 (reaction between O3 and a hydroxyl ion leads to the formation of H2O2 (in charged form)) O3 + HO2− → HO2· + O3−· (a second O3 molecule reacts with the HO2− to produce the ozonide radical) O3−· + H+ → HO3· (this radical gives to ·OH upon protonation) HO3· → ·OH + O2 the reaction steps presented here are just a part of the reaction sequence, see reference for more details Fenton based AOP: Fe2+ + H2O2 → Fe3++ HO· + OH− (initiation of Fenton's reagent) Fe3+ + H2O2 → Fe2++ HOO· + H+ (regeneration of Fe2+ catalyst) H2O2 → HO· + HOO· + H2O (Self scavenging and decomposition of H2O2) the reaction steps presented here are just a part of the reaction sequence, see reference for more details Photocatalytic oxidation with TiO2: TiO2 + UV → e− + h+ (irradiation of the photocatalytic surface leads to an excited electron (e−) and electron gap (h+)) Ti(IV) + H2O Ti(IV)-H2O (water adsorbs onto the catalyst surface) Ti(IV)-H2O + h+ Ti(IV)-·OH + H+ the highly reactive electron gap will react with water the reaction steps presented here are just a part of the reaction sequence, see reference for more details Currently there is no consensus on the detailed mechanisms in Part 3, but researchers have cast light on the processes of initial attacks in Part 2. In essence, ·OH is a radical species and should behave like a highly reactive electrophile. Thus two type of initial attacks are supposed to be Hydrogen Abstraction and Addition. The following scheme, adopted from a technical handbook and later refined, describes a possible mechanism of the oxidation of benzene by ·OH. Scheme 1. Proposed mechanism of the oxidation of benzene by hydroxyl radicals The first and second steps are electrophilic addition that breaks the aromatic ring in benzene (A) and forms two hydroxyl groups (–OH) in intermediate C. Later an ·OH grabs a hydrogen atom in one of the hydroxyl groups, producing a radical species (D) that is prone to undergo rearrangement to form a more stable radical (E). E, on the other hand, is readily attacked by ·OH and eventually forms 2,4-hexadiene-1,6-dione (F). As long as there are sufficient ·OH radicals, subsequent attacks on compound F will continue until the fragments are all converted into small and stable molecules like H2O and CO2 in the end, but such processes may still be subject to a myriad of possible and partially unknown mechanisms. Advantages AOPs hold several advantages in the field of water treatment: They can effectively eliminate organic compounds in aqueous phase, rather than collecting or transferring pollutants into another phase. 
Due to the reactivity of ·OH, it reacts with many aqueous pollutants without discriminating. AOPs are therefore applicable in many, if not all, scenarios where many organic contaminants must be removed at the same time. Some heavy metals can also be removed in the form of precipitated M(OH)x. In some AOP designs, disinfection can also be achieved, which makes these AOPs an integrated solution to some water quality problems. Since the complete reduction product of ·OH is H2O, AOPs theoretically do not introduce any new hazardous substances into the water. Current shortcomings AOPs are not perfect and have several drawbacks. Most prominently, the cost of AOPs is fairly high, since a continuous input of expensive chemical reagents is required to maintain the operation of most AOP systems. As a result of their very nature, AOPs require hydroxyl radicals and other reagents proportional to the quantity of contaminants to be removed. Some techniques require pre-treatment of wastewater to ensure reliable performance, which could be potentially costly and technically demanding. For instance, the presence of bicarbonate ions (HCO3−) can appreciably reduce the concentration of ·OH due to scavenging processes that yield H2O and a much less reactive species, ·CO3−. As a result, bicarbonate must be removed from the system or AOPs are compromised. It is not cost effective to use AOPs alone to handle a large amount of wastewater; instead, AOPs should be deployed in the final stage after primary and secondary treatment have successfully removed a large proportion of contaminants. Ongoing research has also been done to combine AOPs with biological treatment to bring the cost down. Future Since AOPs were first defined in 1987, the field has witnessed a rapid development both in theory and in application. So far, TiO2/UV systems, H2O2/UV systems, and Fenton, photo-Fenton and electro-Fenton systems have received extensive scrutiny. However, there are still many research needs on these existing AOPs. Recent trends include the development of new, modified AOPs that are efficient and economical. In fact, there have been some studies that offer constructive solutions. For instance, doping TiO2 with non-metallic elements could possibly enhance the photocatalytic activity, and implementation of ultrasonic treatment could promote the production of hydroxyl radicals. Modified AOPs such as fluidized-bed Fenton have also shown great potential in terms of degradation performance and economics. See also List of waste-water treatment technologies Fenton reaction Electro-oxidation In situ chemical oxidation Process engineering Water purification References Further reading W. Wesley Eckenfelder and John A. Roth (eds.): Chemical Oxidation: Technology for the Nineties, Vol. 6, Technomic Publishing Co., Lancaster, 1997 (in English). Water treatment Environmental engineering Environmental chemistry Green chemistry
Advanced oxidation process
Chemistry,Engineering,Environmental_science
2,217
581,417
https://en.wikipedia.org/wiki/Table%20of%20standard%20reduction%20potentials%20for%20half-reactions%20important%20in%20biochemistry
The values below are standard apparent reduction potentials (E°') for electro-biochemical half-reactions measured at 25 °C, 1 atmosphere and a pH of 7 in aqueous solution. The actual physiological potential depends on the ratio of the reduced (Red) and oxidized (Ox) forms according to the Nernst equation and the thermal voltage. When an oxidizer (Ox) accepts a number z of electrons (e−) to be converted into its reduced form (Red), the half-reaction is expressed as: Ox + z e− → Red. The reaction quotient (Qr) is the ratio of the chemical activity (ai) of the reduced form (the reductant, aRed) to the activity of the oxidized form (the oxidant, aOx). It is equal to the ratio of their concentrations (Ci) only if the system is sufficiently diluted and the activity coefficients (γi) are close to unity (ai = γi Ci): Qr = aRed / aOx. The Nernst equation is a function of Qr and can be written as follows: E = E° − (RT/zF) ln Qr = E° − (RT/zF) ln (aRed / aOx). At chemical equilibrium, the reaction quotient of the product activity (aRed) by the reagent activity (aOx) is equal to the equilibrium constant (K) of the half-reaction, and in the absence of driving force (ΔG = 0) the potential (E) also becomes nul. The numerically simplified form of the Nernst equation at 25 °C is expressed as: E = E° − (0.059 V / z) log10 (aRed / aOx), where E° is the standard reduction potential of the half-reaction expressed versus the standard reduction potential of hydrogen. For standard conditions in electrochemistry (T = 25 °C, P = 1 atm and all concentrations being fixed at 1 mol/L, or 1 M) the standard reduction potential of hydrogen is fixed at zero by convention, as it serves as the reference. The standard hydrogen electrode (SHE), with [H+] = 1 M, thus works at a pH = 0. At pH = 7, when [H+] = 10−7 M, the reduction potential of H+ differs from zero because it depends on pH. Solving the Nernst equation for the half-reaction of reduction of two protons into hydrogen gas (2 H+ + 2 e− → H2) gives: E = 0 V − 0.059 V × pH = −0.41 V at pH = 7. In biochemistry and in biological fluids, at pH = 7, it is thus important to note that the reduction potential of the protons (H+) into hydrogen gas is no longer zero as with the standard hydrogen electrode (SHE) at 1 M H+ (pH = 0) in classical electrochemistry, but is −0.41 V versus the standard hydrogen electrode (SHE). The same also applies for the reduction potential of oxygen: for O2 + 4 H+ + 4 e− → 2 H2O, E° = 1.229 V, so applying the Nernst equation for pH = 7 gives: E = 1.229 V − 0.059 V × pH = +0.82 V. For obtaining the values of the reduction potential at pH = 7 for the redox reactions relevant for biological systems, the same kind of conversion exercise is done using the corresponding Nernst equation expressed as a function of pH. The conversion is simple, but care must be taken not to inadvertently mix reduction potentials converted at pH = 7 with other data directly taken from tables referring to the SHE (pH = 0). Expression of the Nernst equation as a function of pH The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram. For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side): a A + b B + h H+ + z e− ⇌ c C + d D, the half-cell standard reduction potential E°red is given by E°red = −ΔG°/(zF), where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is Faraday's constant. The Nernst equation relates pH and Eh as follows: Eh = E°red − (0.05916/z) log ({C}^c {D}^d / ({A}^a {B}^b)) − (0.05916 h/z) pH, where curly braces { } indicate activities and the exponents are shown in the conventional manner. This equation is the equation of a straight line for Eh as a function of pH with a slope of −0.05916 (h/z) volt (pH has no units). This equation predicts lower Eh at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2.
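The pH dependence described above can be evaluated numerically. The following short Python sketch (illustrative only; it assumes 25 °C and unit activities for every species other than H+) reproduces the values for the proton and oxygen couples at pH 7:

```python
# Nernst slope at 298.15 K is RT*ln(10)/F ≈ 0.05916 V per pH unit.  For a
# half-reaction consuming h protons and z electrons (other activities = 1):
# E = E0 - 0.05916 * (h/z) * pH.

NERNST_SLOPE = 0.05916  # volts per pH unit at 25 °C

def E_at_pH(E0, h, z, pH):
    return E0 - NERNST_SLOPE * (h / z) * pH

# 2 H+ + 2 e- -> H2:          E0 = 0.000 V (SHE)
# O2 + 4 H+ + 4 e- -> 2 H2O:  E0 = 1.229 V
print(round(E_at_pH(0.000, 2, 2, 7), 3))  # -0.414 V
print(round(E_at_pH(1.229, 4, 4, 7), 3))  #  0.815 V
```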
Formal standard reduction potential combined with the pH dependency To obtain the reduction potential as a function of the measured concentrations of the redox-active species in solution, it is necessary to express the activities as a function of the concentrations. Given that the chemical activity denoted here by { } is the product of the activity coefficient γ by the concentration denoted by [ ]: ai = γi·Ci, here expressed as {X} = γx [X] and {X}x = (γx)x [X]x and replacing the logarithm of a product by the sum of the logarithms (i.e., log (a·b) = log a + log b), the log of the reaction quotient () (without {H+} already isolated apart in the last term as h pH) expressed here above with activities { } becomes: It allows to reorganize the Nernst equation as: Where is the formal standard potential independent of pH including the activity coefficients. Combining directly with the last term depending on pH gives: For a pH = 7: So, It is therefore important to know to what exact definition does refer the value of a reduction potential for a given biochemical redox process reported at pH = 7, and to correctly understand the relationship used. Is it simply: calculated at pH 7 (with or without corrections for the activity coefficients), , a formal standard reduction potential including the activity coefficients but no pH calculations, or, is it, , an apparent formal standard reduction potential at pH 7 in given conditions and also depending on the ratio . This requires thus to dispose of a clear definition of the considered reduction potential, and of a sufficiently detailed description of the conditions in which it is valid, along with a complete expression of the corresponding Nernst equation. Were also the reported values only derived from thermodynamic calculations, or determined from experimental measurements and under what specific conditions? Without being able to correctly answering these questions, mixing data from different sources without appropriate conversion can lead to errors and confusion. Determination of the formal standard reduction potential when 1 The formal standard reduction potential can be defined as the measured reduction potential of the half-reaction at unity concentration ratio of the oxidized and reduced species (i.e., when 1) under given conditions. Indeed: as, , when , , when , because , and that the term is included in . The formal reduction potential makes possible to more simply work with molar or molal concentrations in place of activities. Because molar and molal concentrations were once referred as formal concentrations, it could explain the origin of the adjective formal in the expression formal potential. The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, i.e. from reduction to oxidation or vice versa, the system is close to equilibrium, reversible and is at its formal potential. When the formal potential is measured under standard conditions (i.e. the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, = 1 bar) it becomes de facto a standard potential. According to Brown and Swift (1949), "A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal". 
The activity coefficients and are included in the formal potential , and because they depend on experimental conditions such as temperature, ionic strength, and pH, the formal potential cannot be regarded as an immutable standard potential but must be determined systematically for each specific set of experimental conditions. Formal reduction potentials are used to simplify the interpretation of results and calculations for the system under consideration. Their relationship with the standard reduction potentials must be clearly expressed to avoid any confusion. Main factors affecting the formal (or apparent) standard reduction potentials The main factor affecting the formal (or apparent) reduction potentials in biochemical or biological processes is the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach the changes in activity coefficients due to ionic strength, the Nernst equation has to be applied, taking care to first express the relationship as a function of pH. The second factor to be considered is the set of concentration values taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentration values and the hypotheses made on the activity coefficients must always be clearly indicated. When using, or comparing, several formal (or apparent) reduction potentials, they must also be internally consistent. Problems may occur when mixing different sources of data using different conventions or approximations (i.e., with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently mix standard reduction potentials ( versus SHE, pH = 0) with formal (or apparent) reduction potentials ( at pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and directly mixing data from classical electrochemistry textbooks ( versus SHE, pH = 0) and microbiology textbooks ( at pH = 7) without paying attention to the conventions on which they are based). Example in biochemistry For example, in a two-electron couple like : the reduction potential becomes ~ 30 mV (or more exactly, 59.16 mV/2 = 29.6 mV) more positive for every power of ten increase in the ratio of the oxidised to the reduced form. Some important apparent potentials used in biochemistry See also Nernst equation Electron bifurcation Pourbaix diagram Reduction potential Dependency of reduction potential on pH Standard electrode potential Standard reduction potential Standard reduction potential (data page) Standard state References Bibliography Electrochemistry Bio-electrochemistry Microbiology Biochemistry Standard reduction potentials for half-reactions important in biochemistry Electrochemical potentials Thermodynamics databases Biochemistry databases
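As a worked check of the two-electron example above (illustrative only; the specific redox couple is not named in the source):

```latex
% Worked check of the ~30 mV per decade figure quoted above for a
% two-electron couple (z = 2) at 25 degrees C.
\[
E = E^{0\prime} + \frac{0.05916\ \mathrm{V}}{2}\,
    \log_{10}\frac{[\mathrm{ox}]}{[\mathrm{red}]}
\]
\[
\Delta E = \frac{0.05916\ \mathrm{V}}{2} = 0.0296\ \mathrm{V} \approx 30\ \mathrm{mV}
\quad \text{per tenfold increase in } [\mathrm{ox}]/[\mathrm{red}]
\]
```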
Table of standard reduction potentials for half-reactions important in biochemistry
Physics,Chemistry,Biology
2,091
45,446,636
https://en.wikipedia.org/wiki/Gioiello%20%28galaxy%20cluster%29
The XDCPJ0044.0-2033 (Gioiello) galaxy cluster at redshift z=1.579 was discovered in the archive of the XMM-Newton mission, as part of the XMM-Newton Distant Cluster Project (XDCP) and first published by Santos et al. 2011. Gioiello is the most distant massive galaxy cluster that has been found and studied to date. This massive galaxy cluster contains 400 trillion times the mass of the Sun and is located 9.6 billion light years away from Earth. The name Gioiello, meaning "jewel" in Italian, was given to this massive galaxy cluster because an image of the cluster contains many beautiful pink, purple, and red sparkling colors from the hot X-ray–emitting gas and other star-forming galaxies within the cluster. History Gioiello (lit. "Jewel" in Italian; officially XDCP J0044.0-2033) was first detected by astronomers in 2011 (Santos et al. 2011). The follow-up research on this cluster using dedicated X-ray imaging from the Chandra observatory was discussed at the Italian villa sharing its name, Villa il Gioiello. At this research meeting the unique properties of Gioiello were discussed at length. Some of these unique properties include its purple appearance in composite X-ray images and its enormous mass, about 400 trillion times the mass of the Sun. Since the time of its discovery there has not been a wealth of new information released about Gioiello. The results of research on Gioiello and other massive galaxy clusters were published in The Astrophysical Journal, with its main author being Paolo Tozzi of the National Institute for Astrophysics (INAF) in Florence, Italy. His co-author is Joana Santos, who is also from the National Institute for Astrophysics. Characteristics Gioiello was observed six times between 8 September and 24 November 2013. The released cluster image is a composite of X-rays (purple); optical (red, green, blue); and far-infrared (red). This galaxy cluster is at the coordinates RA 00h 44m 05.20s | Dec −20° 33' 59.70", which places it in the constellation Cetus. Gioiello is unique compared to other clusters in that it still has many stars forming within its galaxies. This gives astronomers a new perspective on the state of younger galaxy clusters and the way that they behave. Although several clusters of comparable mass have been confirmed, Gioiello is the only one at such a high redshift with detected diffuse X-ray emission, which places limits on its temperature and mass. Further research is needed to better understand these clusters, but at present the outlook for further studies is limited given currently planned missions.
This distance estimate scale shows the solar system as being closest to the Sun, followed by the Milky Way, then nearby galaxies, distant galaxies, and finally the early universe. Looking at the scale, the location of the Gioiello Cluster with respect to the Sun stretches all the way to the early universe category. It is the most distant galaxy cluster that has been discovered thus far, although astronomers have discovered several smaller galaxy clusters that are quite close to the distance of Gioiello. These clusters have been identified as being more than 9.5 billion light-years away. However, some of these distant objects appeared to be proto-clusters, which are better defined as precursors to fully developed galaxy clusters. These astronomers and researchers detected hints of uneven structure in the hot gas, which would appear as large clumps. These uneven structures could have been caused by collisions with smaller galaxy clusters, and provide clues to how the cluster became so massive very early on. These researchers expect that Gioiello is still young enough to be undergoing many interactions and changes in its composition. See also List of galaxies List of nearest galaxies References Galaxy clusters
Gioiello (galaxy cluster)
Astronomy
967
205,483
https://en.wikipedia.org/wiki/Jean-Pierre%20Serre
Jean-Pierre Serre (; born 15 September 1926) is a French mathematician who has made contributions to algebraic topology, algebraic geometry and algebraic number theory. He was awarded the Fields Medal in 1954, the Wolf Prize in 2000 and the inaugural Abel Prize in 2003. Biography Personal life Born in Bages, Pyrénées-Orientales, to pharmacist parents, Serre was educated at the Lycée de Nîmes. Then he studied at the École Normale Supérieure in Paris from 1945 to 1948. He was awarded his doctorate from the Sorbonne in 1951. From 1948 to 1954 he held positions at the Centre National de la Recherche Scientifique in Paris. In 1956 he was elected professor at the Collège de France, a position he held until his retirement in 1994. His wife, Professor Josiane Heulot-Serre, was a chemist; she also was the director of the Ecole Normale Supérieure de Jeunes Filles. Their daughter is the former French diplomat, historian and writer Claudine Monteil. The French mathematician Denis Serre is his nephew. He practices skiing, table tennis, and rock climbing (in Fontainebleau). Career From a very young age he was an outstanding figure in the school of Henri Cartan, working on algebraic topology, several complex variables and then commutative algebra and algebraic geometry, where he introduced sheaf theory and homological algebra techniques. Serre's thesis concerned the Leray–Serre spectral sequence associated to a fibration. Together with Cartan, Serre established the technique of using Eilenberg–MacLane spaces for computing homotopy groups of spheres, which at that time was one of the major problems in topology. In his speech at the Fields Medal award ceremony in 1954, Hermann Weyl gave high praise to Serre, and also made the point that the award was for the first time awarded to a non-analyst. Serre subsequently changed his research focus. Algebraic geometry In the 1950s and 1960s, a fruitful collaboration between Serre and the two-years-younger Alexander Grothendieck led to important foundational work, much of it motivated by the Weil conjectures. Two major foundational papers by Serre were Faisceaux Algébriques Cohérents (FAC, 1955), on coherent cohomology, and Géométrie Algébrique et Géométrie Analytique (GAGA, 1956). Even at an early stage in his work Serre had perceived a need to construct more general and refined cohomology theories to tackle the Weil conjectures. The problem was that the cohomology of a coherent sheaf over a finite field could not capture as much topology as singular cohomology with integer coefficients. Amongst Serre's early candidate theories of 1954–55 was one based on Witt vector coefficients. Around 1958 Serre suggested that isotrivial principal bundles on algebraic varieties – those that become trivial after pullback by a finite étale map – are important. This acted as one important source of inspiration for Grothendieck to develop the étale topology and the corresponding theory of étale cohomology. These tools, developed in full by Grothendieck and collaborators in Séminaire de géométrie algébrique (SGA) 4 and SGA 5, provided the tools for the eventual proof of the Weil conjectures by Pierre Deligne. Other work From 1959 onward Serre's interests turned towards group theory, number theory, in particular Galois representations and modular forms. 
Amongst his most original contributions were: his "Conjecture II" (still open) on Galois cohomology; his use of group actions on trees (with Hyman Bass); the Borel–Serre compactification; results on the number of points of curves over finite fields; Galois representations in ℓ-adic cohomology and the proof that these representations have often a "large" image; the concept of p-adic modular form; and the Serre conjecture (now a theorem) on mod-p representations that made Fermat's Last Theorem a connected part of mainstream arithmetic geometry. In his paper FAC, Serre asked whether a finitely generated projective module over a polynomial ring is free. This question led to a great deal of activity in commutative algebra, and was finally answered in the affirmative by Daniel Quillen and Andrei Suslin independently in 1976. This result is now known as the Quillen–Suslin theorem. Honors and awards Serre, at twenty-seven in 1954, was and still is the youngest person ever to have been awarded the Fields Medal. He went on to win the Balzan Prize in 1985, the Steele Prize in 1995, the Wolf Prize in Mathematics in 2000, and was the first recipient of the Abel Prize in 2003. He has been awarded other prizes, such as the Gold Medal of the French National Scientific Research Centre (Centre National de la Recherche Scientifique, CNRS). He is a foreign member of several scientific Academies (US, Norway, Sweden, Russia, the Royal Society, Royal Netherlands Academy of Arts and Sciences (1978), American Academy of Arts and Sciences, National Academy of Sciences, the American Philosophical Society) and has received many honorary degrees (from Cambridge, Oxford, Harvard, Oslo and others). In 2012 he became a fellow of the American Mathematical Society. Serre has been awarded the highest honors in France as Grand Cross of the Legion of Honour (Grand Croix de la Légion d'Honneur) and Grand Cross of the Legion of Merit (Grand Croix de l'Ordre National du Mérite). See also Multiplicity (mathematics) Bourbaki group — Serre joined it in the late 1940s Bibliography Groupes Algébriques et Corps de Classes (1959), Hermann , translated into English as Corps Locaux (1962), Hermann , as Cohomologie Galoisienne (1964) Collège de France course 1962–63, as Algèbre Locale, Multiplicités (1965) Collège de France course 1957–58, as Algèbres de Lie Semi-simples Complexes (1966), as Abelian ℓ-Adic Representations and Elliptic Curves (1968), reissue, Cours d'arithmétique (1970), PUF, as Représentations linéaires des groupes finis (1971), Hermann, as Arbres, amalgames, SL2 (1977), SMF, as Oeuvres/Collected Papers in four volumes (1986) Vol. IV in 2000, Springer-Verlag Exposés de séminaires 1950–1999 (2001), SMF, , Correspondance Serre-Tate (2015), edited with Pierre Colmez, SMF, Finite Groups: an Introduction (2016), Higher Education Press & International Press, Rational Points on Curves over Finite Fields (2020), with contributions by E. Howe, J. Oesterlé, C. Ritzenthaler, SMF, A list of corrections, and updating, of these books can be found on his home page at Collège de France. Notes External links Jean-Pierre Serre, Collège de France, biography and publications. Jean-Pierre Serre at the French Academy of Sciences, in French. Interview with Jean-Pierre Serre in Notices of the American Mathematical Society. An Interview with Jean-Pierre Serre by C.T. Chong and Y.K. Leong, National University of Singapore. How to write mathematics badly a public lecture by Jean-Pierre Serre on writing mathematics. 
Biographical page (in French) 1926 births Living people People from Pyrénées-Orientales Foreign associates of the National Academy of Sciences 20th-century French mathematicians Abel Prize laureates Algebraic geometers Algebraists École Normale Supérieure alumni Academic staff of the École Normale Supérieure Nicolas Bourbaki Fields Medalists Academic staff of the Collège de France Foreign members of the Royal Society French number theorists Topologists University of Paris alumni Grand Cross of the Legion of Honour Wolf Prize in Mathematics laureates Members of the French Academy of Sciences Members of the Norwegian Academy of Science and Letters Members of the Royal Netherlands Academy of Arts and Sciences Foreign members of the Russian Academy of Sciences Fellows of the American Mathematical Society Institute for Advanced Study visiting scholars Members of the American Philosophical Society Members of the Royal Swedish Academy of Sciences Research directors of the French National Centre for Scientific Research
Jean-Pierre Serre
Mathematics
1,732
65,245,542
https://en.wikipedia.org/wiki/Doxy%20%28vibrator%29
The Doxy is a brand of British-made wand vibrators. The Doxy was created in 2013 in response to the unavailability of the Hitachi Magic Wand in the United Kingdom. It has been described as the most powerful wand vibrator on the market. , it is manufactured by CMG Leisure in Callington, in the county of Cornwall in the south of England. References External links Vibrators British brands 2013 establishments in the United Kingdom Callington Companies based in Cornwall
Doxy (vibrator)
Biology
102
23,484,562
https://en.wikipedia.org/wiki/Algestone
Algestone (), also known as alphasone or alfasone, as well as dihydroxyprogesterone, is a progestin which was never marketed. Another progestin, algestone acetophenide, in contrast, has been marketed as a hormonal contraceptive. Chemistry Algestone, also known as 16α,17α-dihydroxyprogesterone or as 16α,17α-dihydroxypregn-4-ene-3,20-dione, is a synthetic pregnane steroid and a derivative of progesterone and 17α-hydroxyprogesterone. Closely related analogues of algestone include 16α-hydroxyprogesterone, algestone acetonide, and algestone acetophenide. References Abandoned drugs Diketones Diols Pregnanes Progestogens
Algestone
Chemistry
194
3,492,951
https://en.wikipedia.org/wiki/Sorafenib
Sorafenib, sold under the brand name Nexavar, is a kinase inhibitor drug approved for the treatment of primary kidney cancer (advanced renal cell carcinoma), advanced primary liver cancer (hepatocellular carcinoma), FLT3-ITD positive AML and radioactive iodine resistant advanced thyroid carcinoma. Mechanism of action Sorafenib is a protein kinase inhibitor with activity against many protein kinases, including VEGFR, PDGFR and RAF kinases. Of the RAF kinases, sorafenib is more selective for c-Raf than B-RAF. (See BRAF (gene)#Sorafenib for details the drug's interaction with B-Raf.) Sorafenib treatment induces autophagy, which may suppress tumor growth. Based on its 1,3-disubstituted urea structure, sorafenib is also a potent soluble epoxide hydrolase inhibitor and this activity likely reduces the severity of its adverse effects. Medical uses Sorafenib is indicated as a treatment for advanced renal cell carcinoma (RCC), unresectable hepatocellular carcinomas (HCC) and thyroid cancer. Kidney cancer Clinical trial results, published January 2007, showed that, compared with placebo, treatment with sorafenib prolongs progression-free survival in patients with advanced clear cell renal cell carcinoma in whom previous therapy has failed. The median progression-free survival was 5.5 months in the sorafenib group and 2.8 months in the placebo group (hazard ratio for disease progression in the sorafenib group, 0.44; 95% confidence interval [CI], 0.35 to 0.55; P<0.01). In Australia this is one of two TGA-labelled indications for sorafenib, although it is not listed on the Pharmaceutical Benefits Scheme for this indication. Liver cancer At ASCO 2007, results from the SHARP trial were presented, which showed efficacy of sorafenib in hepatocellular carcinoma. The primary endpoint was median overall survival, which showed a 44% improvement in patients who received sorafenib compared to placebo (hazard ratio 0.69; 95% CI, 0.55 to 0.87; p=0.0001). Both median survival and time to progression showed 3-month improvements; however, there was no significant difference in median time to symptomatic progression (p=0.77). There was no difference in quality of life measures, possibly attributable to toxicity of sorafenib or symptoms related to underlying progression of liver disease. Of note, this trial only included patients with Child-Pugh Class A (i.e. mildest) cirrhosis. Because of this trial sorafenib obtained FDA approval for the treatment of advanced hepatocellular carcinoma in November 2007. In a randomized, double-blind, phase II trial combining sorafenib with doxorubicin, the median time to progression was not significantly delayed compared with doxorubicin alone in patients with advanced hepatocellular carcinoma. Median durations of overall survival and progression-free survival were significantly longer in patients receiving sorafenib plus doxorubicin than in those receiving doxorubicin alone. A prospective single-centre phase II study which included the patients with unresectable hepatocellular carcinoma (HCC) concluding that the combination of sorafenib and DEB-TACE in patients with unresectable HCC is well tolerated and safe, with most toxicities related to sorafenib. In Australia this is the only indication for which sorafenib is listed on the PBS and hence the only government-subsidised indication for sorafenib. Along with renal cell carcinoma, hepatocellular carcinoma is one of the TGA-labelled indications for sorafenib. 
Thyroid cancer On 22 November 2013, sorafenib was approved by the FDA for the treatment of locally recurrent or metastatic, progressive differentiated thyroid carcinoma (DTC) refractory to radioactive iodine treatment. The phase III DECISION trial showed significant improvement in progression-free survival but not in overall survival. However, as is known, the side effects were very frequent, specially hand and foot skin reaction. Adverse effects Adverse effects by frequency Note: Potentially serious side effects are in bold. Very common (>10% frequency) Lymphopenia Hypophosphataemia Haemorrhage Hypertension Diarrhea Rash Alopecia (hair loss; occurs in roughly 30% of patients receiving sorafenib) Hand-foot syndrome Pruritus (itchiness) Erythema Increased amylase Increased lipase Fatigue Pain Nausea Vomiting Common (1-10% frequency) Leucopenia Neutropoenia Anaemia Thrombocytopenia Anorexia (weight loss) Hypocalcaemia Hypokalaemia Depression Peripheral sensory neuropathy Tinnitus Congestive heart failure Myocardial infarction Myocardial ischaemia Hoarseness Constipation Stomatitis Dyspepsia Dysphagia Dry skin Exfoliative dermatitis Acne Skin desquamation Arthralgia Myalgia Kidney failure Proteinuria Erectile dysfunction Asthenia (weakness) Fever Influenza-like illness Transient increase in transaminase Uncommon (0.1-1% frequency) Folliculitis Infection Hypersensitivity reactions Hypothyroidism Hyperthyroidism Hyponatraemia Dehydration Reversible posterior leukoencephalopathy Hypertensive crisis Rhinorrhoea Interstitial lung disease-like events Gastro-oesophageal reflux disease (GORD) Pancreatitis Gastritis Gastrointestinal perforations Increase in bilirubin leading, potentially, to jaundice Cholecystitis Cholangitis Eczema Erythema multiforme Keratoacanthoma Squamous cell carcinoma Gynaecomastia (swelling of the breast tissue in men) Transient increase in blood alkaline phosphatase INR abnormal Prothrombin level abnormal bulbous skin reaction Rare (0.01-0.1% frequency) QT interval prolongation Angiooedema Anaphylactic reaction Hepatitis Radiation recall dermatitis Stevens–Johnson syndrome Leucocytoclastic vasculitis Toxic epidermal necrolysis Nephrotic syndrome Rhabdomyolysis History Renal cancer Sorafenib was approved by the U.S. Food and Drug Administration (FDA) in December 2005, and received European Commission marketing authorization in July 2006, both for use in the treatment of advanced renal cancer. Liver cancer The European Commission granted marketing authorization to the drug for the treatment of patients with hepatocellular carcinoma(HCC), the most common form of liver cancer, in October 2007, and FDA approval for this indication followed in November 2007. In November 2009, the UK's National Institute for Clinical Excellence declined to approve the drug for use within the NHS in England, Wales and Northern Ireland, stating that its effectiveness (increasing survival in primary liver cancer by 6 months) did not justify its high price, at up to £3000 per patient per month. In Scotland the drug had already been refused authorization by the Scottish Medicines Consortium for use within NHS Scotland, for the same reason. In March 2012, the Indian Patent Office granted a domestic company, Natco Pharma, a license to manufacture generic sorafenib, bringing its price down by 97%. Bayer sells a month's supply, 120 tablets, of Nexavar for. Natco Pharma will sell 120 tablets for , while still paying a 6% royalty to Bayer. The royalty was later raised to 7% on appeal by Bayer. 
Under the Patents Act, 1970 and the World Trade Organisation TRIPS Agreement, the government can issue a compulsory license when a drug is not available at an affordable price. Society and culture Nexavar controversy In January 2014, Bayer's CEO Marijn Dekkers allegedly stated that Nexavar was developed for "Western patients who can afford it, not for Indians". A kidney cancer patient would pay $96,000 (£58,000) for a year's course of the Bayer-made drug, whereas the cost of the Indian version of the generic drug would be around $2,800 (£1,700). Research Lung In some kinds of lung cancer (with squamous-cell histology) sorafenib administered in addition to paclitaxel and carboplatin may be detrimental to patients. Ovarian cancer Sorafenib has been studied as maintenance therapy after ovarian cancer treatment and in combination with chemotherapy for recurrent ovarian cancer but did not show results that led to approval of the drug for these indications. Brain (recurrent glioblastoma) There is a phase I/II study at the Mayo Clinic of sorafenib and CCI-779 (temsirolimus) for recurrent glioblastoma. Desmoid tumor (aggressive fibromatosis) A study performed in 2008 showed that sorafenib is active against aggressive fibromatosis. This study is being used as justification for using sorafenib as an initial course of treatment in some patients with the condition. A phase III clinical trial is testing the effectiveness of sorafenib to treat desmoid tumors (also known as aggressive fibromatosis), after positive results in the first two trial stages. Dosage is typically half of that applied for malignant cancers (400 mg vs 800 mg). NCI are sponsoring this trial. See also Donafenib, a deuterated derivative of sorafenib with improved pharmacokinetic properties Notes References External links Drugs developed by Bayer Orphan drugs Receptor tyrosine kinase inhibitors Chloroarenes Trifluoromethyl compounds Anilines Ureas Phenol ethers Pyridines Carboxamides Diaryl ethers
Sorafenib
Chemistry
2,135
15,071,945
https://en.wikipedia.org/wiki/Mucin%203A
Mucin 3A is a protein that in humans is encoded by the MUC3A gene. References
Mucin 3A
Chemistry
23
37,103,566
https://en.wikipedia.org/wiki/98%20Herculis
98 Herculis is a single star located approximately 590 light years from the Sun in the northern constellation Hercules. It is visible to the naked eye as a dim, red-hued point of light with an apparent visual magnitude of 4.96. The brightness of the star is diminished by an extinction of 0.19 due to interstellar dust. The star is moving closer to the Earth with a heliocentric radial velocity of −19 km/s. This is an aging red giant star on the asymptotic giant branch with a stellar classification of M3-SIII, where the suffix notation indicates that this is an S-type star. It is a mild barium star with an intensity class of 0.2, and is a suspected variable star, although Percy and Shepherd (1992) were unable to confirm this. With the hydrogen at its core exhausted, the star has expanded to around 85 times the Sun's radius. It is radiating 1,330 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 3,772 K. References M-type giants S-type stars Suspected variables Barium stars Hercules (constellation) Durchmusterung objects Herculis, 098 165625 088657 6765
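The quoted radius, effective temperature and luminosity are mutually consistent, as a quick Stefan–Boltzmann check shows (this check is illustrative, assumes a solar effective temperature of about 5772 K, and is not part of the source):

```latex
% Stefan-Boltzmann consistency check of the quoted radius, temperature and
% luminosity (illustrative; assumes T_sun ~ 5772 K).
\[
\frac{L}{L_\odot}
= \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T}{T_\odot}\right)^{4}
= 85^{2}\left(\frac{3772}{5772}\right)^{4} \approx 1.3\times10^{3}
\]
```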
98 Herculis
Astronomy
258
52,677,227
https://en.wikipedia.org/wiki/Corynesporina%20elegans
Corynesporina elegans is a species of fungus of unknown placement within Ascomycota. References Fungi described in 1994 Enigmatic Ascomycota taxa Fungus species
Corynesporina elegans
Biology
39
566,866
https://en.wikipedia.org/wiki/880%20%28number%29
880 (eight hundred [and] eighty) is the natural number following 879 and preceding 881. Characteristics In mathematics and sound It is the number of 4-by-4 magic squares, not counting rotations and reflections. It is also the triple factorial of 11: 11!!! = 880. 880 is the frequency in hertz of the musical note A5. Other 880 is also: The code for international direct dialing phone calls to Bangladesh The year 880 BC or AD 880. Interstate 880, several Interstate highways in the United States. Dodge Custom 880, an automobile manufactured from 1962 to 1965. References Integers
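The triple-factorial claim above can be verified directly (worked arithmetic, added for illustration):

```latex
% Worked arithmetic for the triple factorial mentioned above.
\[
11!!! = 11 \times 8 \times 5 \times 2 = 880
\]
```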
880 (number)
Mathematics
117
78,464,236
https://en.wikipedia.org/wiki/2024%20Palmyra%20airstrike
On 20 November 2024, the Israeli Air Force conducted an airstrike on residential buildings and an industrial area in Palmyra in central Syria. According to the Syrian Observatory for Human Rights, the strikes killed at least 108 people, including 73 Iranian-backed Syrian militiamen and 29 foreign Iranian-backed militiamen, mostly members of the Harakat Hezbollah al-Nujaba of Iraq, as well as 15 Hezbollah militants. The strikes also injured more than 50 people. Background Since the beginning of the Syrian civil war in 2011, Israel has been conducting hundreds of airstrikes in Syria, targeting the Syrian Army and Iran-backed groups in the country. Since the Israel–Hamas war started on 7 October 2023, Israel increased its airstrikes on Syria as its hostilities with Hezbollah intensified. Airstrikes The Syrian Ministry of Defense reported that the airstrikes, which took place at 1:30 p.m. on 20 November. 2024, were launched from the direction of the United States military base in Al-Tanf and caused "significant material damage". The Syrian Observatory for Human Rights also reported that the strikes targeted "a weapons depot near the industrial area" in Palmyra. The United Nations deputy special envoy to Syria told the UN Security Council that the attack was "likely the deadliest Israeli strike in Syria to date". Syrian state media initially reported that the attacks killed 36 people and wounded at least 50 more, as a result of the strikes on residential buildings and an industrial area in Palmyra. The United Kingdom-based Syrian Observatory for Human Rights however reported that the airstrikes killed more than 108 people. The dead include 73 Iranian-backed Syrian militiamen, including 11 officers working for Hezbollah in Lebanon, and 29 Iranian-backed non-Syrian militiamen, primarily Iraqi fighters of Harakat Hezbollah al-Nujaba. The Israel Defense Forces declined to comment on the airstrike. References 2024 airstrikes November 2024 events in Syria Middle Eastern crisis (2023–present) Iran–Israel conflict during the Syrian civil war Israeli airstrikes during the Syrian civil war Palmyra in the Syrian civil war 2024 building bombings Residential building bombings in Syria Military operations of the Syrian civil war in 2024 Military operations of the Syrian civil war involving Hezbollah Military operations of the Syrian civil war involving the Syrian government Industrial fires and explosions 2024 industrial disasters 2024 Iran–Israel conflict
2024 Palmyra airstrike
Chemistry
487
379,868
https://en.wikipedia.org/wiki/Linear%20differential%20equation
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form where and are arbitrary differentiable functions that do not need to be linear, and are the successive derivatives of an unknown function of the variable . Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives. Types of solution A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any. The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, integration, and contains many usual functions and special functions such as exponential function, logarithm, sine, cosine, inverse trigonometric functions, error function, Bessel functions and hypergeometric functions. Their representation by the defining differential equation and initial conditions allows making algorithmic (on these functions) most operations of calculus, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound. Basic terminology The highest order of derivation that appears in a (linear) differential equation is the order of the equation. The term , which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the . A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation. A of a differential equation is a function that satisfies the equation. The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation. Linear differential operator A basic differential operator of order is a mapping that maps any differentiable function to its th derivative, or, in the case of several variables, to one of its partial derivatives of order . It is commonly denoted in the case of univariate functions, and in the case of functions of variables. The basic differential operators include the derivative of order 0, which is the identity mapping. 
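The defining equation referred to at the start of this article can be written, in a commonly used notation that is assumed here rather than quoted from the source, as:

```latex
% General form of a linear ordinary differential equation of order n
% (a_i = differentiable coefficient functions, b = right-hand side).
\[
a_0(x)\,y + a_1(x)\,y' + a_2(x)\,y'' + \cdots + a_n(x)\,y^{(n)} = b(x)
\]
```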
A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator has thus the form where are differentiable functions, and the nonnegative integer is the order of the operator (if is not the zero function). Let be a linear differential operator. The application of to a function is usually denoted or , if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar. As the sum of two linear operators is a linear operator, as well as the product (on the left) of a linear operator by a differentiable function, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They form also a free module over the ring of differentiable functions. The language of operators allows a compact writing for differentiable equations: if is a linear differential operator, then the equation may be rewritten There may be several variants to this notation; in particular the variable of differentiation may appear explicitly or not in and the right-hand and of the equation, such as or . The kernel of a linear differential operator is its kernel as a linear mapping, that is the vector space of the solutions of the (homogeneous) differential equation . In the case of an ordinary differential operator of order , Carathéodory's existence theorem implies that, under very mild conditions, the kernel of is a vector space of dimension , and that the solutions of the equation have the form where are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval , if the functions are continuous in , and there is a positive real number such that for every in . Homogeneous equation with constant coefficients A homogeneous linear differential equation has constant coefficients if it has the form where are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients. The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function , which is the unique solution of the equation such that . It follows that the th derivative of is , and this allows solving homogeneous linear differential equations rather easily. Let be a homogeneous linear differential equation with constant coefficients (that is are real or complex numbers). Searching solutions of this equation that have the form is equivalent to searching the constants such that Factoring out (which is never zero), shows that must be a root of the characteristic polynomial of the differential equation, which is the left-hand side of the characteristic equation When these roots are all distinct, one has distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions at . Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator). 
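A short worked example of the characteristic-polynomial method just described (the equation is chosen for illustration and is not taken from the source):

```latex
% Illustrative constant-coefficient example: solve the characteristic
% equation, then build the general solution from exponentials.
\[
y'' - 3y' + 2y = 0
\;\Longrightarrow\;
r^2 - 3r + 2 = (r-1)(r-2) = 0
\;\Longrightarrow\;
r = 1,\ 2
\]
\[
y = c_1 e^{x} + c_2 e^{2x},
\qquad c_1, c_2 \ \text{arbitrary constants.}
\]
```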
In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solutions vector space. In the case of multiple roots, more linearly independent solutions are needed for having a basis. These have the form where is a nonnegative integer, is a root of the characteristic polynomial of multiplicity , and . For proving that these functions are solutions, one may remark that if is a root of the characteristic polynomial of multiplicity , the characteristic polynomial may be factored as . Thus, applying the differential operator of the equation is equivalent with applying first times the operator and then the operator that has as characteristic polynomial. By the exponential shift theorem, and thus one gets zero after application of As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions. In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if is a root of the characteristic polynomial, then is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing and by and . Second-order case A homogeneous linear differential equation of the second order may be written and its characteristic polynomial is If and are real, there are three cases for the solutions, depending on the discriminant . In all three cases, the general solution depends on two arbitrary constants and . If , the characteristic polynomial has two distinct real roots , and . In this case, the general solution is If , the characteristic polynomial has a double root , and the general solution is If , the characteristic polynomial has two complex conjugate roots , and the general solution is which may be rewritten in real terms, using Euler's formula as Finding the solution satisfying and , one equates the values of the above general solution at and its derivative there to and , respectively. This results in a linear system of two linear equations in the two unknowns and . Solving this system gives the solution for a so-called Cauchy problem, in which the values at for the solution of the DEQ and its derivative are specified. Non-homogeneous equation with constant coefficients A non-homogeneous equation of order with constant coefficients may be written where are real or complex numbers, is a given function of , and is the unknown function (for sake of simplicity, "" will be omitted in the following). There are several methods for solving such an equation. The best method depends on the nature of the function that makes the equation non-homogeneous. If is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, is a linear combination of functions of the form , , and , where is a nonnegative integer, and a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more general, the annihilator method applies when satisfies a homogeneous linear differential equation, typically, a holonomic function. The most general method is the variation of constants, which is presented here. 
The general solution of the associated homogeneous equation is where is a basis of the vector space of the solutions and are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering as constants, they can be considered as unknown functions that have to be determined for making a solution of the non-homogeneous equation. For this purpose, one adds the constraints which imply (by product rule and induction) for , and Replacing in the original equation and its derivatives by these expressions, and using the fact that are solutions of the original homogeneous equation, one gets This equation and the above ones with as left-hand side form a system of linear equations in whose coefficients are known functions (, the , and their derivatives). This system can be solved by any method of linear algebra. The computation of antiderivatives gives , and then . As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation. First-order equation with variable coefficients The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of , is: If the equation is homogeneous, i.e. , one may rewrite and integrate: where is an arbitrary constant of integration and is any antiderivative of . Thus, the general solution of the homogeneous equation is where is an arbitrary constant. For the general non-homogeneous equation, it is useful to multiply both sides of the equation by the reciprocal of a solution of the homogeneous equation. This gives As the product rule allows rewriting the equation as Thus, the general solution is where is a constant of integration, and is any antiderivative of (changing of antiderivative amounts to change the constant of integration). Example Solving the equation The associated homogeneous equation gives that is Dividing the original equation by one of these solutions gives That is and For the initial condition one gets the particular solution System of linear differential equations A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations. An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if {{tmath| y', y, \ldots, y^{(k)} }} appear in an equation, one may replace them by new unknown functions that must satisfy the equations and for . A linear system of the first order, which has unknown functions and differential equations may normally be solved for the derivatives of the unknown functions. If it is not the case this is a differential-algebraic system, and this is a different theory. Therefore, the systems that are considered here have the form where and the are functions of . In matrix notation, this system may be written (omitting "") The solving method is similar to that of a single first order linear differential equations, but with complications stemming from noncommutativity of matrix multiplication. Let be the homogeneous equation associated to the above matrix equation. 
Its solutions form a vector space of dimension , and are therefore the columns of a square matrix of functions , whose determinant is not the zero function. If , or is a matrix of constants, or, more generally, if commutes with its antiderivative , then one may choose equal the exponential of . In fact, in these cases, one has In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as Magnus expansion. Knowing the matrix , the general solution of the non-homogeneous equation is where the column matrix is an arbitrary constant of integration. If initial conditions are given as the solution that satisfies these initial conditions is Higher order with variable coefficients A linear ordinary equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory which was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory. The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory. Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers. Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm. Cauchy–Euler equation Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form where are constant coefficients. Holonomic functions A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients. Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, logarithm, exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions. Holonomic functions have several closure properties; in particular, sums, products, derivative and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the input. Usefulness of the concept of holonomic functions results of Zeilberger's theorem, which follows. A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). 
There are efficient algorithms for both conversions, that is for computing the recurrence relation from the differential equation, and vice versa. It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as derivative, indefinite and definite integral, fast computation of Taylor series (thanks to the recurrence relation on its coefficients), evaluation to a high precision with a certified bound of the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proof of identities, etc. See also Continuous-repayment mortgage Fourier transform Laplace transform Linear difference equation Variation of parameters References External links http://eqworld.ipmnet.ru/en/solutions/ode.htm Dynamic Dictionary of Mathematical Function. Automatic and interactive study of many holonomic functions. Differential equations
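For concreteness, a first-order linear equation of the kind treated in the section "First-order equation with variable coefficients" above can be solved as follows (an illustrative example, not necessarily the one originally worked in that section):

```latex
% Illustrative first-order linear equation with a variable coefficient,
% solved by multiplying with an integrating factor (equivalently, the
% reciprocal of a homogeneous solution), as described above.
\[
y' + \frac{y}{x} = x \qquad (x > 0)
\]
\[
e^{\int \frac{dx}{x}} = x
\;\Longrightarrow\;
(x\,y)' = x^2
\;\Longrightarrow\;
x\,y = \frac{x^3}{3} + c
\;\Longrightarrow\;
y = \frac{x^2}{3} + \frac{c}{x}.
\]
```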
Linear differential equation
Mathematics
3,569
269,193
https://en.wikipedia.org/wiki/Noise-cancelling%20headphones
Noise-cancelling headphones are headphones that suppress unwanted ambient sounds using active noise control (ANC). Active noise cancellation makes it possible to listen to audio content without raising the volume excessively. In an aviation environment, noise-cancelling headphones increase the signal-to-noise ratio significantly more than passive noise attenuating headphones or no headphones, making hearing important information such as safety announcements easier. Noise-cancelling headphones can improve listening enough to completely offset the effect of a distracting concurrent activity. Theory To cancel the lower-frequency portions of the noise, noise-cancelling headphones use active noise control. A microphone captures the targeted ambient sounds, and a small amplifier generates sound waves that are exactly out of phase with the undesired sounds. When the sound pressure of the noise wave is high, the cancelling wave is low (and vice versa). The opposite sound waves collide and are eliminated or "cancelled" (destructive interference). Most noise-cancelling headsets in the consumer market generate the noise-cancelling waveform in real time with analog technology. In contrast, other active noise and vibration control products use soft real-time digital processing. According to an experiment conducted to test how lightweight earphones reduced noise as compared to commercial headphones and earphones, lightweight headphones achieved better noise reduction than normal headphones. The experiment also supported that in-ear headphones worked better at reducing noise than outer-ear headphones. Cancellation focuses on constant droning sounds like road noise and is less effective on short/sharp sounds like voices or breaking glass. It also is ineffective in eliminating higher frequency noises like the sound of spraying. Noise-cancelling headphones often combine sound isolation with ANC to maximize the sound reduction across the frequency spectrum. Noise cancellation can also be used without sound isolation to make wanted sounds (such as voices) easier to hear. Noise cancellation to eliminate ambient noise is never passive because of the circuitry required, so references to passive noise cancellation actually are referring to products featuring sound isolation. To prevent higher-frequency noise from reaching the ear, most noise-cancelling headphones depend on sound isolation or soundproofing. Higher-frequency sound has a shorter wavelength, and cancelling this sound would require locating devices to detect and counteract it closer to the listener's eardrum than is currently technically feasible or would require digital algorithms that would complicate the headphone's electronics. Noise-cancelling headphones specify the amount of noise they can cancel in terms of decibels. This number may be useful for comparing products but does not tell the whole story, as it does not specify noise reduction at various frequencies. In aviation By the 1950s, Lawrence J. Fogel created systems and submitted patents regarding active noise cancellation in the field of aviation. This system was designed to reduce noise for the pilots in the cockpit area and help make their communication easier and protect hearing. Fogel is considered to be the inventor of active noise cancellation, and he designed one of the first noise-cancelling headphones systems. Later on, Willard Meeker designed an active noise control model that was applied to circumaural earmuffs for advanced hearing protection. 
Noise-cancelling aviation headsets are now commonly available. In 1989, Bose Corporation introduced its Aviation Headset Series I, which became the first commercially available ANR headset. Several airlines provide noise-cancelling headphones in their business and first-class cabins. Bose started supplying American Airlines with noise-cancelling headphones in 1999 and started offering the "Quiet Comfort" line for the general consumer in 2000. As a sleeping aid Noise-cancellation headphones have been used as sleeping aids as well. Both passive isolating and active noise-cancellation headphones or earplugs help to achieve a reduction of ambient sounds, which is particularly helpful for people suffering from insomnia or other sleeping disorders, for whom sounds such as cars honking and snoring impact their ability to sleep. For that reason, noise-cancelling sleep headphones and ear plugs are designed to cater to this segment of patients. In hospitals The use of noise-cancelling headphones for patients in intensive care units has been implemented to reduce the amount of noise exposure that they face while in a hospital environment. Active noise control technology is shown to have a relationship with sleep disturbance, delirium, and morbidity, therefore bringing up concerns about lowering the levels of noise for patients receiving care. Health and safety There is a general danger that listening to loud music in headphones can distract the listener and lead to injury and accidents. Noise-cancelling headphones add extra risk. Several countries and states have made it illegal to wear headphones while driving or cycling. It is not uncommon to get a pressure-like feeling when using noise-cancelling headphones initially. This is caused by the lack of low-frequency sounds as being perceived as a pressure differential between the inner and outer ear. Autism A December 2016 study from the Hong Kong Journal of Occupational Therapy found that noise-cancellation headphones helped children with autism spectrum disorder cope with behaviors related to hyper-reactivity and auditory stimuli. Drawbacks The active noise control requires power, usually supplied by a USB port or a battery that must occasionally be replaced or recharged. Without power, some models do not function as regular headphones. Any battery and additional electronics may increase the size and weight of the headphones compared to regular headphones. The noise-cancelling circuitry may reduce audio quality and add high-frequency hiss, although reducing the noise may result in higher perceived audio quality. See also Active vibration control Noise-canceling microphone Passive noise-cancelling headphones Throat microphone References Audio engineering Noise reduction Headphones
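The destructive-interference principle described in the Theory section above can be illustrated numerically. The following sketch is a conceptual illustration only: it does not model real ANC circuitry, and the sample rate, frequencies and delay are assumed values. It sums a noise waveform with its phase-inverted copy and then shows how a small timing error degrades the cancellation, which is one reason active cancellation works best on low-frequency, slowly varying noise.

```python
# Conceptual sketch of the destructive-interference principle described above:
# an anti-phase copy of a noise waveform cancels it when the two are summed.
import numpy as np

fs = 48_000                      # sample rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal

# Low-frequency "droning" noise: a 120 Hz tone plus a weaker 240 Hz harmonic.
noise = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)

# Ideal anti-noise: the same waveform shifted 180 degrees in phase,
# i.e. simply inverted in sign.
anti_noise = -noise

residual = noise + anti_noise    # what the ear would receive

print("peak noise level:     ", np.max(np.abs(noise)))
print("peak residual level:  ", np.max(np.abs(residual)))   # ~0, perfect cancellation

# With a small timing error the cancellation degrades.
delay_samples = 5                # ~0.1 ms mis-alignment (assumed)
imperfect = noise + np.roll(anti_noise, delay_samples)
print("residual with delay:  ", np.max(np.abs(imperfect)))
```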
Noise-cancelling headphones
Engineering
1,169
14,576,408
https://en.wikipedia.org/wiki/Radiative%20transfer%20equation%20and%20diffusion%20theory%20for%20photon%20transport%20in%20biological%20tissue
Photon transport in biological tissue can be equivalently modeled numerically with Monte Carlo simulations or analytically by the radiative transfer equation (RTE). However, the RTE is difficult to solve without introducing approximations. A common approximation summarized here is the diffusion approximation. Overall, solutions to the diffusion equation for photon transport are more computationally efficient, but less accurate than Monte Carlo simulations. Definitions The RTE can mathematically model the transfer of energy as photons move inside a tissue. The flow of radiation energy through a small area element in the radiation field can be characterized by radiance with units . Radiance is defined as energy flow per unit normal area per unit solid angle per unit time. Here, denotes position, denotes unit direction vector and denotes time (Figure 1). Several other important physical quantities are based on the definition of radiance: Fluence rate or intensity Fluence Current density (energy flux) . This is the vector counterpart of fluence rate pointing in the prevalent direction of energy flow. Radiative transfer equation The RTE is a differential equation describing radiance . It can be derived via conservation of energy. Briefly, the RTE states that a beam of light loses energy through divergence and extinction (including both absorption and scattering away from the beam) and gains energy from light sources in the medium and scattering directed towards the beam. Coherence, polarization and non-linearity are neglected. Optical properties such as refractive index , absorption coefficient μa, scattering coefficient μs, and scattering anisotropy are taken as time-invariant but may vary spatially. Scattering is assumed to be elastic. The RTE (Boltzmann equation) is thus written as: where is the speed of light in the tissue, as determined by the relative refractive index μtμa+μs is the extinction coefficient is the phase function, representing the probability of light with propagation direction being scattered into solid angle around . In most cases, the phase function depends only on the angle between the scattered and incident directions, i.e. . The scattering anisotropy can be expressed as describes the light source. Diffusion theory Assumptions In the RTE, six different independent variables define the radiance at any spatial and temporal point (, , and from , polar angle and azimuthal angle from , and ). By making appropriate assumptions about the behavior of photons in a scattering medium, the number of independent variables can be reduced. These assumptions lead to the diffusion theory (and diffusion equation) for photon transport. Two assumptions permit the application of diffusion theory to the RTE: Relative to scattering events, there are very few absorption events. Likewise, after numerous scattering events, few absorption events will occur, and the radiance will become nearly isotropic. This assumption is sometimes called directional broadening. In a primarily scattering medium, the time for substantial current density change is much longer than the time to traverse one transport mean free path. Thus, over one transport means free path, the fractional change in current density is much less than unity. This property is sometimes called temporal broadening. Both of these assumptions require a high-albedo (predominantly scattering) medium. The RTE in the diffusion approximation Radiance can be expanded on a basis set of spherical harmonics n, m. 
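Before the diffusion approximation is applied, it may help to see the RTE written out explicitly. The following is a standard form consistent with the definitions above; the symbols (L for radiance, μt = μa + μs, P for the phase function, S for the source term) are assumed notation, not quoted from the source.

```latex
% A standard explicit form of the RTE, consistent with the definitions above:
% L = radiance, c = speed of light in the tissue, mu_t = mu_a + mu_s,
% P = phase function, S = source term.
\[
\frac{1}{c}\frac{\partial L(\vec{r},\hat{s},t)}{\partial t}
+ \hat{s}\cdot\nabla L(\vec{r},\hat{s},t)
= -\mu_t\, L(\vec{r},\hat{s},t)
+ \mu_s \int_{4\pi} L(\vec{r},\hat{s}',t)\, P(\hat{s}'\cdot\hat{s})\, d\Omega'
+ S(\vec{r},\hat{s},t)
\]
```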
In diffusion theory, radiance is taken to be largely isotropic, so only the isotropic and first-order anisotropic terms are used: where n, m are the expansion coefficients. Radiance is expressed with 4 terms: one for n = 0 (the isotropic term) and 3 terms for n = 1 (the anisotropic terms). Using properties of spherical harmonics and the definitions of fluence rate and current density , the isotropic and anisotropic terms can respectively be expressed as follows: Hence, we can approximate radiance as Substituting the above expression for radiance, the RTE can be respectively rewritten in scalar and vector forms as follows (The scattering term of the RTE is integrated over the complete solid angle. For the vector form, the RTE is multiplied by direction before evaluation.): The diffusion approximation is limited to systems where reduced scattering coefficients are much larger than their absorption coefficients and having a minimum layer thickness of the order of a few transport mean free path. The diffusion equation Using the second assumption of diffusion theory, we note that the fractional change in current density over one transport mean free path is negligible. The vector representation of the diffusion theory RTE reduces to Fick's law , which defines current density in terms of the gradient of fluence rate. Substituting Fick's law into the scalar representation of the RTE gives the diffusion equation: is the diffusion coefficient and μ'sμs is the reduced scattering coefficient. Notably, there is no explicit dependence on the scattering coefficient in the diffusion equation. Instead, only the reduced scattering coefficient appears in the expression for . This leads to an important relationship; diffusion is unaffected if the anisotropy of the scattering medium is changed while the reduced scattering coefficient stays constant. Solutions to the diffusion equation For various configurations of boundaries (e.g. layers of tissue) and light sources, the diffusion equation may be solved by applying appropriate boundary conditions and defining the source term as the situation demands. Point sources in infinite homogeneous media A solution to the diffusion equation for the simple case of a short-pulsed point source in an infinite homogeneous medium is presented in this section. The source term in the diffusion equation becomes , where is the position at which fluence rate is measured and is the position of the source. The pulse peaks at time . The diffusion equation is solved for fluence rate to yield the Green function for the diffusion equation: The term represents the exponential decay in fluence rate due to absorption in accordance with Beer's law. The other terms represent broadening due to scattering. Given the above solution, an arbitrary source can be characterized as a superposition of short-pulsed point sources. Taking time variation out of the diffusion equation gives the following for a time-independent point source : is the effective attenuation coefficient and indicates the rate of spatial decay in fluence. Boundary conditions Fluence rate at a boundary Consideration of boundary conditions permits use of the diffusion equation to characterize light propagation in media of limited size (where interfaces between the medium and the ambient environment must be considered). To begin to address a boundary, one can consider what happens when photons in the medium reach a boundary (i.e. a surface). 
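Before turning to boundary conditions, the infinite-medium results above lend themselves to a short numerical sketch. This is a minimal illustration, assuming the standard relations μ's = μs(1 − g), D = 1/[3(μa + μ's)] and μeff = √(μa/D); the function name and the tissue-like optical properties are hypothetical values chosen for the example, not taken from the article.

```python
import numpy as np

def point_source_fluence(r, mu_a, mu_s, g, power=1.0):
    """Steady-state fluence rate at distance r (cm) from an isotropic point
    source of the given power in an infinite medium, in the diffusion
    approximation: Phi(r) = P * exp(-mu_eff * r) / (4 * pi * D * r)."""
    mu_s_prime = mu_s * (1.0 - g)            # reduced scattering coefficient (1/cm)
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))    # diffusion coefficient (cm)
    mu_eff = np.sqrt(mu_a / D)               # effective attenuation coefficient (1/cm)
    return power * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

# Hypothetical tissue-like properties: mu_a = 0.1 /cm, mu_s = 100 /cm, g = 0.9,
# so mu_s' = 10 /cm >> mu_a, satisfying the high-albedo assumption stated earlier.
r_cm = np.array([0.1, 0.5, 1.0, 2.0])
print(point_source_fluence(r_cm, mu_a=0.1, mu_s=100.0, g=0.9))
```

With the infinite-medium picture in hand, the boundary behaviour raised above can now be treated.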
The direction-integrated radiance at the boundary and directed into the medium is equal to the direction-integrated radiance at the boundary and directed out of the medium multiplied by reflectance : where is normal to and pointing away from the boundary. The diffusion approximation gives an expression for radiance in terms of fluence rate and current density . Evaluating the above integrals after substitution gives: Substituting Fick's law () gives, at a distance from the boundary z=0, The extrapolated boundary It is desirable to identify a zero-fluence boundary. However, the fluence rate at a physical boundary is, in general, not zero. An extrapolated boundary, at b for which fluence rate is zero, can be determined to establish image sources. Using a first order Taylor series approximation, which evaluates to zero since . Thus, by definition, b must be z as defined above. Notably, when the index of refraction is the same on both sides of the boundary, F is zero and the extrapolated boundary is at b. Pencil beam normally incident on a semi-infinite medium Using boundary conditions, one may approximately characterize diffuse reflectance for a pencil beam normally incident on a semi-infinite medium. The beam will be represented as two point sources in an infinite medium as follows (Figure 2): Set scattering anisotropy 2 for the scattering medium and set the new scattering coefficient μs2 to the original μs1 multiplied by 1, where 1 is the original scattering anisotropy. Convert the pencil beam into an isotropic point source at a depth of one transport mean free path ' below the surface and power = '. Implement the extrapolated boundary condition by adding an image source of opposite sign above the surface at 'b. The two point sources can be characterized as point sources in an infinite medium via is the distance from observation point to source location in cylindrical coordinates. The linear combination of the fluence rate contributions from the two image sources is This can be used to get diffuse reflectance d via Fick's law: is the distance from the observation point to the source at and is the distance from the observation point to the image source at b. Properties of diffusion equation Scaling Let be the Green function solution to the diffusion equation for a homogeneous medium of optical properties , , then the Green function solution for a homogeneous medium which differs from the former only by optical properties , , such that , can be obtained with the following rescaling: where and . Such property can also be extended to the radiance in the more general general framework of the RTE, by substituting the transport coefficients , with the extinction coefficients , . The usefulness of the property resides in taking the results obtained for a given geometry and set of optical properties, typical of a lab scale setting, rescaling them and extending them to contexts in which it would be complicated to perform measurements due to the sheer extension or inaccessibility. Dependence on absorption Let be the Green function solution to the diffusion equation for a non-absorbing homogeneous medium. Then, the Green function solution for the medium when its absorption coefficient is can be obtained as: Again, the same property also holds for radiance within the RTE. Diffusion theory solutions vs. Monte Carlo simulations Monte Carlo simulations of photon transport, though time consuming, will accurately predict photon behavior in a scattering medium. 
The assumptions involved in characterizing photon behavior with the diffusion equation generate inaccuracies. Generally, the diffusion approximation is less accurate as the absorption coefficient μa increases and the scattering coefficient μs decreases. For a photon beam incident on a medium of limited depth, error due to the diffusion approximation is most prominent within one transport mean free path of the location of photon incidence (where radiance is not yet isotropic) (Figure 3). Among the steps in describing a pencil beam incident on a semi-infinite medium with the diffusion equation, converting the medium from anisotropic to isotropic (step 1) (Figure 4) and converting the beam to a source (step 2) (Figure 5) generate more error than converting from a single source to a pair of image sources (step 3) (Figure 6). Step 2 generates the most significant error. See also Monte Carlo method for photon transport Radiative transfer References Further reading (2011) Scattering, absorption and radiative transfer (optics)
Radiative transfer equation and diffusion theory for photon transport in biological tissue
Chemistry
2,229
49,503,240
https://en.wikipedia.org/wiki/Fast%20travel
Fast travel or teleportation is a video game mechanic used in open world games that allows a player character to instantaneously travel between previously discovered locations (teleport waypoints or fast-travel points) without having to traverse that distance in real time. It is a type of warp that is specifically used to traverse the game's world rather than the inside of a level. Sometimes in-game time passes while fast-traveling, while in other cases the travel is simply implied or the player is teleported by magical or technological means. While typically used as a means of providing convenience to the player, fast travel has been criticized as detracting from games' design, as some worlds or quests are designed to incorporate it at the expense of depth, memorability or realism. Characteristics Fast travel is usually performed from an in-game menu upon accessing either a map of the overworld or an object such as a vehicle or save point. The player is immediately transported from one location to another, sometimes with an appropriate amount of in-game time passing in between, as though they had traveled straight to their destination. Some games have restrictions on the amount of fast traveling that can be performed, generally by requiring the use of a purchasable item each time, like a tent or magical talisman. Others allow infinite fast travel with no penalty. For example, Genshin Impact allows unlimited fast travel to any unlocked Teleport Waypoint, Statue of the Seven, or Domain on the map, but requires a consumable "portable waypoint" to be deployed for seven days to fast travel to anywhere more specific. Horses and cars are often used as partial substitutes for fast travel that allow faster, but not instantaneous movement through the world. Reception GameCrate called fast travel a "tremendous convenience" that makes game "appealing to the masses" and helps players who are on a "tight schedule", but suggested that players not use it for a better experience. Brendan Caldwell of Rock, Paper, Shotgun went further in expressing his dislike of fast travel, stating that he enjoyed Skyrim much more after downloading a mod that allowed fast travel to be turned off. He stated that "fast travel removes all sense of real distance", citing Dark Souls, a game that was designed around walking, as evoking the concept and emotions of a journey much more, and stating that the removal of any boredom also eliminates the feeling of a "real quest". While making the counter-argument that players would become too bored if they were forced to manually travel everywhere, he stated that it would force game designers to make the world interesting to walk through. Patricia Hernandez of Kotaku stated that playing Fallout 4 without using fast travel "completely transformed" her experience with the game. Similarly, Kirk Hamilton suggested fast traveling less in The Legend of Zelda: Breath of the Wild. See also Warp (gaming) References Video game terminology
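As a purely illustrative supplement to the mechanic described above, the sketch below shows one common way such a system is structured in code: locations unlock when discovered, travel is refused to unknown waypoints, and the in-game clock can advance in proportion to the distance skipped. This is a generic sketch, not the implementation of any particular game; all names and numbers are hypothetical.

```python
class FastTravel:
    """Minimal fast-travel bookkeeping: waypoints unlock on discovery, and
    travelling returns the new position plus the implied in-game time."""

    def __init__(self, hours_per_km=0.0):
        self.waypoints = {}          # name -> (x, y) world coordinates
        self.hours_per_km = hours_per_km

    def discover(self, name, position):
        self.waypoints[name] = position

    def travel(self, current_position, destination):
        if destination not in self.waypoints:
            raise ValueError("waypoint not yet discovered")
        target = self.waypoints[destination]
        distance_km = ((target[0] - current_position[0]) ** 2 +
                       (target[1] - current_position[1]) ** 2) ** 0.5
        # Travel is instantaneous for the player, but in-game time may pass.
        return target, distance_km * self.hours_per_km

ft = FastTravel(hours_per_km=0.1)
ft.discover("village", (0.0, 0.0))
ft.discover("tower", (12.0, 5.0))
print(ft.travel((0.0, 0.0), "tower"))   # ((12.0, 5.0), 1.3)
```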
Fast travel
Technology
595
2,869,798
https://en.wikipedia.org/wiki/Eta%20Piscium
Eta Piscium (η Piscium, abbreviated Eta Psc, η Psc) is a binary star and the brightest star in the constellation of Pisces, with an apparent visual magnitude of +3.6. Based upon a measured annual parallax shift of 9.33 mas as seen from Earth, it is located roughly 350 light-years distant from the Sun in the thin disk population of the Milky Way. The two components are designated Eta Piscium A (formally named Alpherg, the traditional name of the system) and B. Nomenclature η Piscium (Latinised to Eta Piscium) is the system's Bayer designation. The designations of the two constituents as Eta Piscium A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). The system bore the traditional names Al Pherg (in this context meaning the emptying) and Kullat Nunu. At the time when the Sun at the March equinox entered Pisces, having previously lain in Aries, the system was in the first ecliptic constellation of the Neo-Babylonians, Kullat Nūnu − Nūnu being Babylonian for fish and Kullat referring to either the bucket or the cord that binds the fish. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Alpherg for the component Eta Piscium A on 1 June 2018 (for its official List). In Chinese, the asterism known as the Official in Charge of the Pasturing consists of Eta Piscium, Rho Piscium, Pi Piscium, Omicron Piscium and 104 Piscium. Consequently, the Chinese name for Eta Piscium itself derives from this asterism. Properties At its present distance, the visual magnitude of the system is diminished by interstellar dust extinction. This system's binary nature was discovered in 1878 by an amateur astronomer, S. W. Burnham. It has an orbital period of roughly 850 years, a semimajor axis of 1.2 arc seconds, and an eccentricity of 0.47. The primary, component A, is an evolved, magnitude 3.83 G-type giant star with a stellar classification of G7 IIIa. It has a weak magnetic field and is a Gamma Cassiopeiae variable. The companion, component B, is a magnitude 7.51 star. References External links G-type giants Gamma Cassiopeiae variable stars Binary stars Piscium, Eta Pisces (constellation) Alpherg Durchmusterung objects Piscium, 099 009270 007097 0437
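The distance quoted in the article above follows directly from the parallax, since the distance in parsecs is the reciprocal of the parallax in arcseconds. A quick check (the conversion factor of 3.2616 light-years per parsec is assumed, not stated in the text):

```python
parallax_mas = 9.33                          # measured annual parallax, milliarcseconds
distance_pc = 1000.0 / parallax_mas          # parsecs = 1 / parallax in arcseconds
distance_ly = distance_pc * 3.2616           # light-years per parsec (assumed constant)
print(round(distance_pc, 1), round(distance_ly))   # ~107.2 pc, ~350 ly, matching "roughly 350 light-years"
```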
Eta Piscium
Astronomy
612
1,566,768
https://en.wikipedia.org/wiki/Material%20properties%20of%20diamond
Diamond is the allotrope of carbon in which the carbon atoms are arranged in the specific type of cubic lattice called diamond cubic. It is a crystal that is transparent to opaque and which is generally isotropic (no or very weak birefringence). Diamond is the hardest naturally occurring material known. Yet, due to important structural brittleness, bulk diamond's toughness is only fair to good. The precise tensile strength of bulk diamond is little known; however, compressive strength up to has been observed, and it could be as high as in the form of micro/nanometer-sized wires or needles (~ in diameter, micrometers long), with a corresponding maximum tensile elastic strain in excess of 9%. The anisotropy of diamond hardness is carefully considered during diamond cutting. Diamond has a high refractive index (2.417) and moderate dispersion (0.044) properties that give cut diamonds their brilliance. Scientists classify diamonds into four main types according to the nature of crystallographic defects present. Trace impurities substitutionally replacing carbon atoms in a diamond's crystal structure, and in some cases structural defects, are responsible for the wide range of colors seen in diamond. Most diamonds are electrical insulators and extremely efficient thermal conductors. Unlike many other minerals, the specific gravity of diamond crystals (3.52) has rather small variation from diamond to diamond. Hardness and crystal structure Known to the ancient Greeks as (, 'proper, unalterable, unbreakable') and sometimes called adamant, diamond is the hardest known naturally occurring material, and serves as the definition of 10 on the Mohs scale of mineral hardness. Diamond is extremely strong owing to its crystal structure, known as diamond cubic, in which each carbon atom has four neighbors covalently bonded to it. Bulk cubic boron nitride (c-BN) is nearly as hard as diamond. Diamond reacts with some materials, such as steel, and c-BN wears less when cutting or abrading such material. (Its zincblende structure is like the diamond cubic structure, but with alternating types of atoms.) A currently hypothetical material, beta carbon nitride (β-), may also be as hard or harder in one form. It has been shown that some diamond aggregates having nanometer grain size are harder and tougher than conventional large diamond crystals, thus they perform better as abrasive material. Owing to the use of those new ultra-hard materials for diamond testing, more accurate values are now known for diamond hardness. A surface perpendicular to the [111] crystallographic direction (that is the longest diagonal of a cube) of a pure (i.e., type IIa) diamond has a hardness value of when scratched with a nanodiamond tip, while the nanodiamond sample itself has a value of when tested with another nanodiamond tip. Because the test only works properly with a tip made of harder material than the sample being tested, the true value for nanodiamond is likely somewhat lower than . The precise tensile strength of diamond is unknown, though strength up to has been observed, and theoretically it could be as high as depending on the sample volume/size, the perfection of diamond lattice and on its orientation: Tensile strength is the highest for the [100] crystal direction (normal to the cubic face), smaller for the [110] and the smallest for the [111] axis (along the longest cube diagonal). Diamond also has one of the smallest compressibilities of any material. 
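The specific gravity of 3.52 quoted above follows directly from the diamond cubic structure. A short check, assuming the conventional lattice constant of about 3.567 Å and the 8 carbon atoms of the conventional unit cell (the lattice constant is not given in the text and is used here only for illustration):

```python
a_cm = 3.567e-8                      # assumed lattice constant, cm (~3.567 angstroms)
atoms_per_cell = 8                   # diamond cubic conventional cell
atom_mass_g = 12.011 / 6.022e23      # grams per carbon atom (molar mass / Avogadro)
density = atoms_per_cell * atom_mass_g / a_cm**3
print(round(density, 2))             # ~3.52 g/cm^3, matching the quoted specific gravity
```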
Cubic diamonds have a perfect and easy octahedral cleavage, which means that they only have four planes—weak directions following the faces of the octahedron where there are fewer bonds—along which diamond can easily split upon blunt impact to leave a smooth surface. Similarly, diamond's hardness is markedly directional: the hardest direction is the diagonal on the cube face, 100 times harder than the softest direction, which is the dodecahedral plane. The octahedral plane is intermediate between the two extremes. The diamond cutting process relies heavily on this directional hardness, as without it a diamond would be nearly impossible to fashion. Cleavage also plays a helpful role, especially in large stones where the cutter wishes to remove flawed material or to produce more than one stone from the same piece of rough (e.g. Cullinan Diamond). Diamonds crystallize in the diamond cubic crystal system (space group Fdm) and consist of tetrahedrally, covalently bonded carbon atoms. A second form called lonsdaleite, with hexagonal symmetry, has also been found, but it is extremely rare and forms only in meteorites or in laboratory synthesis. The local environment of each atom is identical in the two structures. From theoretical considerations, lonsdaleite is expected to be harder than diamond, but the size and quality of the available stones are insufficient to test this hypothesis. In terms of crystal habit, diamonds occur most often as euhedral (well-formed) or rounded octahedra and twinned, flattened octahedra with a triangular outline. Other forms include dodecahedra and (rarely) cubes. There is evidence that nitrogen impurities play an important role in the formation of well-shaped euhedral crystals. The largest diamonds found, such as the Cullinan Diamond, were shapeless. These diamonds are pure (i.e. type II) and therefore contain little if any nitrogen. The faces of diamond octahedrons are highly lustrous owing to their hardness; triangular shaped growth defects (trigons) or etch pits are often present on the faces. A diamond's fracture is irregular. Diamonds which are nearly round, due to the formation of multiple steps on octahedral faces, are commonly coated in a gum-like skin (nyf). The combination of stepped faces, growth defects, and nyf produces a "scaly" or corrugated appearance. Many diamonds are so distorted that few crystal faces are discernible. Some diamonds found in Brazil and the Democratic Republic of the Congo are polycrystalline and occur as opaque, darkly colored, spherical, radial masses of tiny crystals; these are known as ballas and are important to industry as they lack the cleavage planes of single-crystal diamond. Carbonado is a similar opaque microcrystalline form which occurs in shapeless masses. Like ballas diamond, carbonado lacks cleavage planes and its specific gravity varies widely from 2.9 to 3.5. Bort diamonds, found in Brazil, Venezuela, and Guyana, are the most common type of industrial-grade diamond. They are also polycrystalline and often poorly crystallized; they are translucent and cleave easily. Hydrophobia and lipophilia Due to great hardness and strong molecular bonding, a cut diamond's facets and facet edges appear the flattest and sharpest. A curious side effect of a natural diamond's surface perfection is hydrophobia combined with lipophilia. The former property means a drop of water placed on a diamond forms a coherent droplet, whereas in most other minerals the water would spread out to cover the surface. 
Additionally, diamond is unusually lipophilic, meaning grease and oil readily collect and spread on a diamond's surface, whereas in other minerals oil would form coherent drops. This property is exploited in the use of grease pencils, which apply a line of grease to the surface of a suspect diamond simulant. Diamond surfaces are hydrophobic when the surface carbon atoms terminate with a hydrogen atom and hydrophilic when the surface atoms terminate with an oxygen atom or hydroxyl radical. Treatment with gases or plasmas containing the appropriate gas, at temperatures of or higher, can change the surface property completely. Naturally occurring diamonds have a surface with less than a half monolayer coverage of oxygen, the balance being hydrogen and the behavior is moderately hydrophobic. This allows for separation from other minerals at the mine using the so-called "grease-belt". Toughness Unlike hardness, which denotes only resistance to scratching, diamond's toughness or tenacity is only fair to good. Toughness relates to the ability to resist breakage from falls or impacts. Because of diamond's perfect and easy cleavage, it is vulnerable to breakage. A diamond will shatter if hit with an ordinary hammer. The toughness of natural diamond has been measured as , which is good compared to other gemstones like aquamarine (blue colored), but poor compared to most engineering materials. As with any material, the macroscopic geometry of a diamond contributes to its resistance to breakage. Diamond has a cleavage plane and is therefore more fragile in some orientations than others. Diamond cutters use this attribute to cleave some stones, prior to faceting. Ballas and carbonado diamond are exceptional, as they are polycrystalline and therefore much tougher than single-crystal diamond; they are used for deep-drilling bits and other demanding industrial applications. Particular faceting shapes of diamonds are more prone to breakage and thus may be uninsurable by reputable insurance companies. The brilliant cut of gemstones is designed specifically to reduce the likelihood of breakage or splintering. Solid foreign crystals are commonly present in diamond. They are mostly minerals, such as olivine, garnets, ruby, and many others. These and other inclusions, such as internal fractures or "feathers", can compromise the structural integrity of a diamond. Cut diamonds that have been enhanced to improve their clarity via glass infilling of fractures or cavities are especially fragile, as the glass will not stand up to ultrasonic cleaning or the rigors of the jeweler's torch. Fracture-filled diamonds may shatter if treated improperly. Pressure resistance Used in so-called diamond anvil experiments to create high-pressure environments, diamonds withstand crushing pressures in excess of 600 gigapascals (6 million atmospheres). Optical properties Color and its causes Diamonds occur in various colors: black, brown, yellow, gray, white, blue, orange, purple to pink, and red. Colored diamonds contain crystallographic defects, including substitutional impurities and structural defects, that cause the coloration. Theoretically, pure diamonds would be transparent and colorless. Diamonds are scientifically classed into two main types and several subtypes, according to the nature of defects present and how they affect light absorption: Type I diamond has nitrogen (N) atoms as the main impurity, at a concentration of up to 1%. 
If the N atoms are in pairs or larger aggregates, they do not affect the diamond's color; these are Type Ia. About 98% of gem diamonds are type Ia: these diamonds belong to the Cape series, named after the diamond-rich region formerly known as Cape Province in South Africa, whose deposits are largely Type Ia. If the nitrogen atoms are dispersed throughout the crystal in isolated sites (not paired or grouped), they give the stone an intense yellow or occasionally brown tint (type Ib); the rare canary diamonds belong to this type, which represents only ~0.1% of known natural diamonds. Synthetic diamond containing nitrogen is usually of type Ib. Type Ia and Ib diamonds absorb in both the infrared and ultraviolet region of the electromagnetic spectrum, from . They also have a characteristic fluorescence and visible absorption spectrum. Type II diamonds have very few if any nitrogen impurities. Pure (type IIa) diamond can be colored pink, red, or, brown owing to structural anomalies arising through plastic deformation during crystal growth; these diamonds are rare (1.8% of gem diamonds), but constitute a large percentage of Australian diamonds. Type IIb diamonds, which account for ~0.1% of gem diamonds, are usually a steely blue or gray due to boron atoms scattered within the crystal matrix. These diamonds are also semiconductors, unlike other diamond types (see Electrical properties). Most blue-gray diamonds coming from the Argyle mine of Australia are not of type IIb, but of Ia type. Those diamonds contain large concentrations of defects and impurities (especially hydrogen and nitrogen) and the origin of their color is yet uncertain. Type II diamonds weakly absorb in a different region of the infrared (the absorption is due to the diamond lattice rather than impurities), and transmit in the ultraviolet below 225 nm, unlike type I diamonds. They also have differing fluorescence characteristics, but no discernible visible absorption spectrum. Certain diamond enhancement techniques are commonly used to artificially produce an array of colors, including blue, green, yellow, red, and black. Color enhancement techniques usually involve irradiation, including proton bombardment via cyclotrons; neutron bombardment in the piles of nuclear reactors; and electron bombardment by Van de Graaff generators. These high-energy particles physically alter the diamond's crystal lattice, knocking carbon atoms out of place and producing color centers. The depth of color penetration depends on the technique and its duration, and in some cases the diamond may be left radioactive to some degree. Some irradiated diamonds are completely natural; one famous example is the Dresden Green Diamond. In these natural stones the color is imparted by "radiation burns" (natural irradiation by alpha particles originating from uranium ore) in the form of small patches, usually only micrometers deep. Additionally, Type IIa diamonds can have their structural deformations "repaired" via a high-pressure high-temperature (HPHT) process, removing much or all of the diamond's color. Luster The luster of a diamond is described as "adamantine", which simply means diamond-like. Reflections on a properly cut diamond's facets are undistorted, due to their flatness. The refractive index of diamond (as measured via sodium light, ) is 2.417. Because it is cubic in structure, diamond is also isotropic. Its high dispersion of 0.044 (variation of refractive index across the visible spectrum) manifests in the perceptible fire of cut diamonds. 
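The refractive index quoted earlier is what makes total internal reflection so effective in a cut stone, and hence much of its brilliance and fire. A quick numerical illustration, computed from n = 2.417 alone (the Fresnel normal-incidence formula and an air index of 1 are the only assumptions):

```python
import math

n = 2.417                                          # refractive index of diamond
critical_angle_deg = math.degrees(math.asin(1.0 / n))
normal_reflectance = ((n - 1.0) / (n + 1.0)) ** 2  # Fresnel reflectance at normal incidence, from air
print(round(critical_angle_deg, 1))                # ~24.4 deg: rays beyond this are totally internally reflected
print(round(normal_reflectance, 3))                # ~0.172: about 17% of normally incident light reflects at a facet
```

The small critical angle means that, in a well-proportioned cut, most light entering the crown is returned to the viewer rather than leaking out of the pavilion.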
This fire—flashes of prismatic colors seen in transparent stones—is perhaps diamond's most important optical property from a jewelry perspective. The prominence or amount of fire seen in a stone is heavily influenced by the choice of diamond cut and its associated proportions (particularly crown height), although the body color of fancy (i.e., unusual) diamonds may hide their fire to some degree. More than 20 other minerals have higher dispersion (that is difference in refractive index for blue and red light) than diamond, such as titanite 0.051, andradite 0.057, cassiterite 0.071, strontium titanate 0.109, sphalerite 0.156, synthetic rutile 0.330, cinnabar 0.4, etc. (see Dispersion (optics)). However, the combination of dispersion with extreme hardness, wear and chemical resistivity, as well as clever marketing, determines the exceptional value of diamond as a gemstone. Fluorescence Diamonds exhibit fluorescence, that is, they emit light of various colors and intensities under long-wave ultraviolet light (365 nm): Cape series stones (type Ia) usually fluoresce blue, and these stones may also phosphoresce yellow, a unique property among gemstones. Other possible long-wave fluorescence colors are green (usually in brown stones), yellow, mauve, or red (in type IIb diamonds). In natural diamonds, there is typically little if any response to short-wave ultraviolet, but the reverse is true of synthetic diamonds. Some natural type IIb diamonds phosphoresce blue after exposure to short-wave ultraviolet. In natural diamonds, fluorescence under X-rays is generally bluish-white, yellowish or greenish. Some diamonds, particularly Canadian diamonds, show no fluorescence. The origin of the luminescence colors is often unclear and not unique. Blue emission from type IIa and IIb diamonds is reliably identified with dislocations by directly correlating the emission with dislocations in an electron microscope. However, blue emission in type Ia diamond could be either due to dislocations or the N3 defects (three nitrogen atoms bordering a vacancy). Green emission in natural diamond is usually due to the H3 center (two substitutional nitrogen atoms separated by a vacancy), whereas in synthetic diamond it usually originates from nickel used as a catalyst (see figure). Orange or red emission could be due to various reasons, one being the nitrogen-vacancy center which is present in sufficient quantities in all types of diamond, even type IIb. Optical absorption Cape series (Ia) diamonds have a visible absorption spectrum (as seen through a direct-vision spectroscope) consisting of a fine line in the violet at ; however, this line is often invisible until the diamond has been cooled to very low temperatures. Associated with this are weaker lines at , , , , and . All those lines are labeled as N3 and N2 optical centers and associated with a defect consisting of three nitrogen atoms bordering a vacancy. Other stones show additional bands: brown, green, or yellow diamonds show a band in the green at (H3 center, see above), sometimes accompanied by two additional weak bands at and (H4 center, a large complex presumably involving 4 substitutional nitrogen atoms and 2 lattice vacancies). Type IIb diamonds may absorb in the far red due to the substitutional boron, but otherwise show no observable visible absorption spectrum. Gemological laboratories make use of spectrophotometer machines that can distinguish natural, artificial, and color-enhanced diamonds. 
The spectrophotometers analyze the infrared, visible, and ultraviolet absorption and luminescence spectra of diamonds cooled with liquid nitrogen to detect tell-tale absorption lines that are not normally discernible. Electrical properties Diamond is a good electrical insulator, having a resistivity of to ( – ), and is famous for its wide bandgap of 5.47 eV. High carrier mobilities and high electric breakdown field at room temperature are also important characteristics of diamond. Those characteristics allow single crystalline diamond to be one of the promising materials for semiconductors. A wide bandgap is advantageous in semiconductors because it allows them to maintain high resistivity even at high temperature, important for high power applications. Semiconductors whose carrier mobilities are high such as diamond are easier to utilize in industry because they do not need high input voltage. High breakdown voltage avoids a huge current suddenly occurring at typical input voltages. Most natural blue diamonds are an exception and are semiconductors due to substitutional boron impurities replacing carbon atoms. Natural blue or blue-gray diamonds, common for the Argyle diamond mine in Australia, are rich in hydrogen; these diamonds are not semiconductors and it is unclear whether hydrogen is actually responsible for their blue-gray color. Natural blue diamonds containing boron and synthetic diamonds doped with boron are p-type semiconductors. N-type diamond films are reproducibly synthesized by phosphorus doping during chemical vapor deposition. Diode p-n junctions and UV light emitting diodes (LEDs, at ) have been produced by sequential deposition of p-type (boron-doped) and n-type (phosphorus-doped) layers. Diamond's electronic properties can be also modulated by strain engineering. Diamond transistors have been produced (for research purposes). In January 2024, a Japanese research team fabricated a MOSFET using phosphorus-doped n-type diamond, which would have superior characteristics to silicon-based technology in high-temperature, high-frequency or high-electron mobility applications. FETs with SiN dielectric layers, and SC-FETs have been made. In April 2004, research published in the journal Nature reported that below , synthetic boron-doped diamond is a bulk superconductor. Superconductivity was later observed in heavily boron-doped films grown by various chemical vapor deposition techniques, and the highest reported transition temperature (by 2009) is . (See also Covalent superconductor#Diamond) Uncommon magnetic properties (spin glass state) were observed in diamond nanocrystals intercalated with potassium. Unlike paramagnetic host material, magnetic susceptibility measurements of intercalated nanodiamond revealed distinct ferromagnetic behavior at . This is essentially different from results of potassium intercalation in graphite or C60 fullerene, and shows that sp3 bonding promotes magnetic ordering in carbon. The measurements presented first experimental evidence of intercalation-induced spin-glass state in a nanocrystalline diamond system. Thermal conductivity Unlike most electrical insulators, diamond is a good conductor of heat because of the strong covalent bonding and low phonon scattering. Thermal conductivity of natural diamond was measured to be about 2,200 W/(m·K), which is five times more than silver, the most thermally conductive metal. 
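Two figures quoted in this section can be cross-checked with one-line calculations. The sketch assumes only the photon-energy relation E = hc/λ (hc ≈ 1239.84 eV·nm) and a handbook value of about 429 W/(m·K) for silver, neither of which is stated in the text:

```python
# UV absorption edge implied by the 5.47 eV bandgap
bandgap_eV = 5.47
edge_nm = 1239.84 / bandgap_eV          # hc in eV*nm (assumed constant)
print(round(edge_nm, 1))                # ~226.7 nm, consistent with type II transmission below ~225 nm noted earlier

# Thermal conductivity of natural diamond relative to silver
k_diamond = 2200.0                      # W/(m*K), quoted above
k_silver = 429.0                        # W/(m*K), assumed handbook value
print(round(k_diamond / k_silver, 1))   # ~5.1, i.e. roughly "five times more than silver"
```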
Monocrystalline synthetic diamond enriched to 99.9% the isotope 12C had the highest thermal conductivity of any known solid at room temperature: 3,320 W/(m·K), though reports exist of superior thermal conductivity in both carbon nanotubes and graphene. Because diamond has such high thermal conductance it is already used in semiconductor manufacture to prevent silicon and other semiconducting materials from overheating. At lower temperatures conductivity becomes even better, and reaches 41,000 W/(m·K) at (12C-enriched diamond). Diamond's high thermal conductivity is used by jewelers and gemologists who may employ an electronic thermal probe to distinguish diamonds from their imitations. These probes consist of a pair of battery-powered thermistors mounted in a fine copper tip. One thermistor functions as a heating device while the other measures the temperature of the copper tip: if the stone being tested is a diamond, it will conduct the tip's thermal energy rapidly enough to produce a measurable temperature drop. This test takes about 2–3 seconds. However, older probes will be fooled by moissanite, a crystalline mineral form of silicon carbide introduced in 1998 as an alternative to diamonds, which has a similar thermal conductivity. Technologically, the high thermal conductivity of diamond is used for the efficient heat removal in high-end power electronics. Diamond is especially appealing in situations where electrical conductivity of the heat sinking material cannot be tolerated e.g. for the thermal management of high-power radio-frequency () microcoils that are used to produce strong and local RF fields. Thermal stability If heated over in air, diamond, being a form of carbon, oxidizes and its surface blackens, but the surface can be restored by re-polishing. In absence of oxygen, e.g. in a flow of high-purity argon gas, diamond can be heated up to about . At high pressure (~) diamond can be heated up to , and a report published in 2009 suggests that diamond can withstand temperatures of and above. Diamonds are carbon crystals that form under high temperatures and extreme pressures such as deep within the Earth. At surface air pressure (one atmosphere), diamonds are not as stable as graphite, and so the decay of diamond is thermodynamically favorable (δH = ). However, owing to a very large kinetic energy barrier, diamonds are metastable; they will not decay into graphite under normal conditions. See also Chemical vapor deposition of diamond Crystallographic defects in diamond Nitrogen-vacancy center Synthetic diamond References Further reading Pagel-Theisen, Verena. (2001). Diamond grading ABC: The manual (9th ed.), pp. 84–85. Rubin & Son n.v.; Antwerp, Belgium. Webster, Robert, and Jobbins, E. Allan (Ed.). (1998). Gemmologist's compendium, p. 21, 25, 31. St Edmundsbury Press Ltd, Bury St Edwards. External links Properties of diamond Properties of diamond (S. Sque, PhD thesis, 2005, University of Exeter, UK) Material properties of diamond Allotropes of carbon Native element minerals Superhard materials
Material properties of diamond
Physics,Chemistry
5,014
59,433,945
https://en.wikipedia.org/wiki/Mediated%20intercultural%20communication
Mediated intercultural communication is digital communication between people of different cultural backgrounds. Media include social networks, blogs and conferencing services. Digital communication is distinct from traditional media, creating new avenues for intercultural communication. Users take online classes; post, consume, and comment on others' content; and play multi-player video games. This creates spaces to form virtual communities that can ease communication across boundaries of space, time and culture. New media technologies can change culture in positive ways or become a tool of repression. History Intercultural communication is as ancient as human movement in search of food sources. The systematic study of intercultural communication began with Edward Hall's work at the Foreign Service Institute and the publication of his The Silent Language (1959). Later research primarily focused on face-to-face communication in areas such as interpersonal, group, and organizational communication and cultural identity. International and development media have been studied under the umbrella of international communication. Media imperialism, cultural imperialism and dependency theories inform this research. Mediated intercultural communication examines the bidirectional relationships between media and intercultural communication. References Further reading Lister, M., Dovey, J., Giddings, S., Grant, I., Kelly, K. (2009). New media: A critical introduction. New York, N.Y.: Routledge DeGoede, M.E., Van Vianen, A. M., & Klehe, U. (2011). "Attracting applicants on the web: PO fit, industry culture stereotypes, and website design". International Journal of Selection & Assessment, 19 (1), 51-61. Deuze, M. (2007). Media work. Cambridge, UK: Polity Press Hall, Edward T. (1959). The Silent Language. Garden City, N.Y.: Doubleday Digital media Interculturalism
Mediated intercultural communication
Technology
431
2,770,302
https://en.wikipedia.org/wiki/Paris%20meridian
The Paris meridian is a meridian line running through the Paris Observatory in Paris, France – now longitude 2°20′14.02500″ East. It was a long-standing rival to the Greenwich meridian as the prime meridian of the world. The "Paris meridian arc" or "French meridian arc" (French: la Méridienne de France) is the name of the meridian arc measured along the Paris meridian. The French meridian arc was important for French cartography, since the triangulations of France began with the measurement of the French meridian arc. Moreover, the French meridian arc was important for geodesy as it was one of the meridian arcs which were measured to determine the figure of the Earth via the arc measurement method. The determination of the figure of the Earth was a problem of the highest importance in astronomy, as the diameter of the Earth was the unit to which all celestial distances had to be referred. History French cartography and the figure of the Earth In the year 1634, France ruled by Louis XIII and Cardinal Richelieu, decided that the Ferro Meridian through the westernmost of the Canary Islands should be used as the reference on maps, since El Hierro (Ferro) was the most western position of the Ptolemy's world map. It was also thought to be exactly 20 degrees west of Paris. The astronomers of the French Academy of Sciences, founded in 1666, managed to clarify the position of El Hierro relative to the meridian of Paris, which gradually supplanted the Ferro meridian. In 1666, Louis XIV of France had authorized the building of the Paris Observatory. On Midsummer's Day 1667, members of the Academy of Sciences traced the future building's outline on a plot outside town near the Port Royal abbey, with Paris meridian exactly bisecting the site north–south. French cartographers would use it as their prime meridian for more than 200 years. Old maps from continental Europe often have a common grid with Paris degrees at the top and Ferro degrees offset by 20 at the bottom. A French astronomer, Abbé Jean Picard, measured the length of a degree of latitude along the Paris meridian (arc measurement) and computed from it the size of the Earth during 1668–1670. The application of the telescope to angular instruments was an important step. He was the first who in 1669, with the telescope, using such precautions as the nature of the operation requires, measured a precise arc of meridian (Picard's arc measurement). He measured with wooden rods a baseline of 5,663 toises, and a second or base of verification of 3,902 toises; his triangulation network extended from Malvoisine, near Paris, to Sourdon, near Amiens. The angles of the triangles were measured with a quadrant furnished with a telescope having cross-wires. The difference of latitude of the terminal stations was determined by observations made with a sector on a star in Cassiopeia, giving 1° 22′ 55″ for the amplitude. The terrestrial degree measurement gave the length of 57,060 toises, whence he inferred 6,538,594 toises for the Earth's diameter. Four generations of the Cassini family headed the Paris Observatory. They directed the surveys of France for over 100 years. Hitherto geodetic observations had been confined to the determination of the magnitude of the Earth considered as a sphere, but a discovery made by Jean Richer turned the attention of mathematicians to its deviation from a spherical form. 
This astronomer, having been sent by the Academy of Sciences of Paris to the island of Cayenne (now in French Guiana) in South America, for the purpose of investigating the amount of astronomical refraction and other astronomical objects, observed that his clock, which had been regulated at Paris to beat seconds, lost about two minutes and a half daily at Cayenne, and that to bring it to measure mean solar time it was necessary to shorten the pendulum by more than a line (about 1⁄12th of an in.). This fact, which was scarcely credited till it had been confirmed by the subsequent observations of Varin and Deshayes on the coasts of Africa and America, was first explained in the third book of Newton’s Principia, who showed that it could only be referred to a diminution of gravity arising either from a protuberance of the equatorial parts of the Earth and consequent increase of the distance from the centre, or from the counteracting effect of the centrifugal force. About the same time (1673) appeared Christiaan Huygens’ De Horologio Oscillatorio, in which for the first time were found correct notions on the subject of centrifugal force. It does not, however, appear that they were applied to the theoretical investigation of the figure of the Earth before the publication of Newton's Principia. In 1690 Huygens published his De Causa Gravitatis, which contains an investigation of the figure of the Earth on the supposition that the attraction of every particle is towards the centre. Between 1684 and 1718 Giovanni Domenico Cassini and Jacques Cassini, along with Philippe de La Hire, carried a triangulation, starting from Picard's base in Paris and extending it northwards to Dunkirk and southwards to Collioure. They measured a base of 7,246 toises near Perpignan, and a somewhat shorter base near Dunkirk; and from the northern portion of the arc, which had an amplitude of 2° 12′ 9″, obtained 56,960 toises for the length of a degree; while from the southern portion, of which the amplitude was 6° 18′ 57″, they obtained 57,097 toises. The immediate inference from this was that, with the degree diminishing with increasing latitude, the Earth must be a prolate spheroid. This conclusion was totally opposed to the theoretical investigations of Newton and Huygens, and accordingly the Academy of Sciences of Paris determined to apply a decisive test by the measurement of arcs at a great distance from each other – one in the neighbourhood of the equator, the other in a high latitude. Thus arose the celebrated , to the Equator and to Lapland, the latter directed by Pierre Louis Maupertuis. In 1740 an account was published in the Paris Mémoires, by Cassini de Thury, of a remeasurement by himself and Nicolas Louis de Lacaille of the meridian of Paris. With a view to determine more accurately the variation of the degree along the meridian, they divided the distance from Dunkirk to Collioure into four partial arcs of about two degrees each, by observing the latitude at five stations. The results previously obtained by Giovanni Domenico and Jacques Cassini were not confirmed, but, on the contrary, the length of the degree derived from these partial arcs showed on the whole an increase with increasing latitude. The West Europe-Africa Meridian-arc Cesar-François Cassini de Thury completed the Cassini map, which was published by his son Cassini IV in 1790. Moreover, the Paris meridian was linked with international collaboration in geodesy and metrology. 
Cesar-François Cassini de Thury (1714–1784) expressed the project to extend the French geodetic network all around the world and to connect the Paris and Greenwich observatories. In 1783 the French Academy of Science presented his proposal to King George III. This connection and a proposal from General William Roy led to the first triangulation of Great Britain. France and Great Britain surveys' connection was repeated by French astronomers and geodesists in 1787 by Cassini IV, in 1823–1825 by François Arago and in 1861–1862 by François Perrier. Between 1792 and 1798 Pierre Méchain and Jean-Baptiste Delambre surveyed the Paris meridian arc between Dunkirk and Barcelona (see meridian arc of Delambre and Méchain). They extrapolated from this measurement the distance from the North Pole to the Equator which was 5,130,740 toises. As the metre had to be equal to one ten-millionth of this distance, it was defined as 0,513074 toises or 443,296 lignes of the Toise of Peru (see below) and of the double-toise N° 1 of the apparatus which had been devised by Lavoisier and Borda for this survey at specified temperatures. In the early 19th century, the Paris meridian's arc was recalculated with greater precision between Shetland and the Balearic Islands by the astronomer François Arago, whose name now appears on the plaques or medallions tracing the route of the meridian through Paris (see below). Biot and Arago published their work as a fourth volume following the three volumes of "Bases du système métrique décimal ou mesure de l'arc méridien compris entre les parallèles de Dunkerque et Barcelone" (Basis for the decimal metric system or measurement of the meridian arc comprised between Dunkirk and Barcelona) by Delambre and Méchain. In the second half of the 19th century, Carlos Ibáñez e Ibáñez de Ibero directed the survey of Spain. From 1870 to 1894 the Paris meridan's arc was remeasured by Perrier and Bassot in France and Algeria. In 1879, Ibáñez de Ibero for Spain and François Perrier for France directed the junction of the Spanish geodetic network with Algeria. This connection was a remarkable enterprise where triangles with a maximum length of 270 km were observed from mountain stations over the Mediterranean Sea. The triangulation of France was then connected to those of Great Britain, Spain and Algeria and thus the Paris meridian's arc measurement extended from Shetland to the Sahara. The fundamental co-ordinates of the Panthéon were also obtained anew, by connecting the Panthéon and the Paris Observatory with the five stations of Bry-sur-Marne, Morlu, Mont Valérien, Chatillon and Montsouris, where the observations of latitude and azimuth were effected. Geodesy and metrology In 1860, the Russian Government at the instance of Otto Wilhelm von Struve invited the Governments of Belgium, France, Prussia and Britain to connect their triangulations to measure the length of an arc of parallel in latitude 52° and to test the accuracy of the figure and dimensions of the Earth, as derived from the measurements of arc of meridian. To combine the measurements it was necessary to compare the geodetic standards of length used in the different countries. The British Government invited those of France, Belgium, Prussia, Russia, India, Australia, Austria, Spain, United States and Cape of Good Hope to send their standards to the Ordnance Survey office in Southampton. 
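Since both the metre and the toise recur in the standards comparisons that follow, it may help to check the arithmetic behind the metre definition quoted earlier. The sketch assumes the conventional subdivision of the toise (1 toise = 6 pieds = 72 pouces = 864 lignes), which the text does not state:

```python
quarter_meridian_toises = 5_130_740            # Delambre and Mechain's pole-to-equator distance
metre_in_toises = quarter_meridian_toises / 10_000_000
metre_in_lignes = metre_in_toises * 864        # assumed: 864 lignes per toise
print(metre_in_toises)                          # 0.513074 toises
print(round(metre_in_lignes, 3))                # 443.296 lignes, as quoted (French decimal commas in the text)
```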
Notably the geodetic standards of France, Spain and United States were based on the metric system, whereas those of Prussia, Belgium and Russia where calibrated against the toise, of which the oldest physical representative was the Toise of Peru. The Toise of Peru had been constructed in 1735 for Bouguer and De La Condamine as their standard of reference in the French Geodesic Mission to the Equator, conducted in what is now Ecuador from 1735 to 1744 in collaboration with the Spanish officers Jorge Juan and Antonio de Ulloa. Alexander Ross Clarke and Henry James published the first results of the standards' comparisons in 1867. The same year Russia, Spain and Portugal joined the Europäische Gradmessung and the General Conference of the association proposed the metre as a uniform length standard for the Arc measurement and recommended the establishment of an International Metre Commission. The Europäische Gradmessung decided the creation of an international geodetic standard at the General Conference held in Paris in 1875. The Metre Convention was signed in 1875 in Paris and the International Bureau of Weights and Measures was created under the supervision of the International Committee for Weights and Measures. The first president of the International Committee for Weights and Measures was the Spanish geodesist Carlos Ibáñez e Ibáñez de Ibero. He also was the president of the Permanent Commission of the Europäische Gradmessung from 1874 to 1886. In 1886 the association changed name for the International Geodetic Association (German: Internationale Erdmessung) and Carlos Ibáñez e Ibáñez de Ibero was reelected as president. He remained in this position until his death in 1891. During this period the International Geodetic Association gained worldwide importance with the joining of United States, Mexico, Chile, Argentina and Japan. In 1883 the General Conference of the Europäische Gradmessung proposed to select the Greenwich meridian as the prime meridian in the hope that Great Britain would accede to the Metre Convention. From the Paris meridian to the Greenwich meridian The United States passed an Act of Congress on 3 August 1882, authorizing President Chester A. Arthur to call an international conference to fix on a common prime meridian for time and longitude throughout the world. Before the invitations were sent out on 1 December, the joint efforts of Abbe, Fleming and William Frederick Allen, Secretary of the US railways' General Time Convention and Managing Editor of the Travellers' Official Guide to the Railways, had brought the US railway companies to an agreement which led to standard railway time being introduced at noon on 18 November 1883 across the nation. Although this was not legally established until 1918, there was thus a strong sense of fait accompli that preceded the International Meridian Conference, although setting local times was not part of the remit of the conference. In 1884, at the International Meridian Conference in Washington DC, the Greenwich meridian was adopted as the prime meridian of the world. San Domingo, now the Dominican Republic, voted against. France and Brazil abstained. The United Kingdom acceded to the Metre Convention in 1884 and to the International Geodetic Association in 1898. 
In 1911, Alexander Ross Clarke and Friedrich Robert Helmert stated in the Encyclopædia Britannica : In 1891, timetabling for its growing railways led to standardised time in France changing from mean solar time of the local centre to that of the Paris meridian: 9 minutes 20.921 seconds ahead of Greenwich Mean Time (GMT). In 1911 the country switched to GMT for timekeeping; in 1914 it switched to the Greenwich meridian for navigation. To this day, French cartographers continue to indicate the Paris meridian on some maps. From wireless telegraphy to Coordinated Universal Time With the arrival of wireless telegraphy, France established a transmitter on the Eiffel Tower to broadcast a time signal. The creation of the International Time Bureau, seated at the Paris Observatory, was decided upon during the 1912 Conférence internationale de l'heure radiotélégraphique. The following year an attempt was made to regulate the international status of the bureau through the creation of an international convention. However, the convention wasn't ratified by its member countries due to the outbreak of World War I. In 1919, after the war, it was decided upon to make the bureau the executive body of the International Commission of Time, one of the commissions of the then newly founded International Astronomical Union (IAU). From 1956 until 1987 the International Time Bureau was part of the Federation of Astronomical and Geophysical Data Analysis Services (FAGS). In 1987 the bureau's tasks of combining different measurements of Atomic Time were taken over by the International Bureau of Weights and Measures (BIPM). Its tasks related to the correction of time with respect to the celestial reference frame and the Earth's rotation to realize the Coordinated Universal Time (UTC) were taken over by the International Earth Rotation and Reference Systems Service (IERS) which was established in its present form in 1987 by the International Astronomical Union and the International Union of Geodesy and Geophysics (IUGG). The Arago medallions In 1994 the Arago Association and the city of Paris commissioned a Dutch conceptual artist, Jan Dibbets, to create a memorial to Arago. Dibbets came up with the idea of setting 135 bronze medallions (although only 121 are documented in the official guide to the medallions) into the ground along the Paris meridian between the northern and southern limits of Paris: a total distance of 9.2 kilometres/5.7 miles. Each medallion is 12 cm in diameter and marked with the name ARAGO plus N and S pointers. Another project, the Green Meridian (An 2000 – La Méridienne Verte), aimed to establish a plantation of trees along the entire length of the meridian arc in France. Several missing Arago medallions appear to have been replaced with the newer 'An 2000 – La Méridienne Verte' markers. Unfounded speculation In certain circles, some kind of occult or esoteric significance is ascribed to the Paris meridian; sometimes it is even perceived as a sinister axis. Dominique Stezepfandts, a French conspiracy theorist, attacks the Arago medallions that supposedly trace the route of "an occult geographical line". To him the Paris meridian is a "Masonic axis" or even "the heart of the Devil." Henry Lincoln, in his book The Holy Place, argued that various ancient structures are aligned according to the Paris meridian. 
They even include medieval churches, built long before the meridian was established according to conventional history, and Lincoln found it obvious that the meridian "was based upon the 'cromlech intersect division line'." David Wood, in his book Genesis, likewise ascribes a deeper significance to the Paris meridian and takes it into account when trying to decipher the geometry of the myth-encrusted village of Rennes-le-Château: The meridian passes about 350 metres (1,150 ft) west of the site of the so-called "Poussin tomb," an important location in the legends and esoteric theories relating to that place. A sceptical discussion of these theories, including the supposed alignments, can be found in Bill Putnam and Edwin Wood's book The Treasure of Rennes-le-Château – A mystery solved. In fiction The confusion between the Greenwich and the Paris meridians is one of the plot elements of the Tintin book Red Rackham's Treasure. The meridian line, dubbed the "Rose Line" by author Dan Brown, appeared in the novel The Da Vinci Code. See also Cartography of France Anglo-French Survey (1784–1790) Principal Triangulation of Great Britain Meridian arc History of geodesy History of the metre Seconds pendulum Struve Geodetic Arc References External links Better formatted mathematics at Wikisource. The Arago medallions on Google Earth Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth: history of science book by Prof. Paul Murdin The Greenwich Meridian, by Graham Dolan Named meridians Geography of Paris Geography of France History of Paris Prime meridians Geodesy Metrology Geography of England Geography of Spain Geography of Algeria Paris Observatory
Paris meridian
Mathematics
3,894
11,495,730
https://en.wikipedia.org/wiki/NBR2
NBR2 is a gene best known for its location near the breast cancer-associated gene BRCA1. Like BRCA1, NBR2 has been a subject of research, but links to breast cancer are currently inconclusive. NBR2 was recently identified as a glucose starvation-induced long non-coding RNA. NBR2 interacts with AMP-activated protein kinase (AMPK), a critical energy sensor in most eukaryotic cells, and promotes AMPK function to mediate the energy stress response. Knockdown of NBR2 attenuates energy stress-induced AMPK activation, resulting in unchecked cell cycling, altered apoptosis/autophagy response, and increased tumour development in vivo. It is now appreciated that NBR2, once dismissed as a "junk" gene, plays critical roles in tumour suppression. References External links
NBR2
Chemistry,Biology
175
1,968,544
https://en.wikipedia.org/wiki/Cartan%E2%80%93K%C3%A4hler%20theorem
In mathematics, the Cartan–Kähler theorem is a major result on the integrability conditions for differential systems, in the case of analytic functions, for differential ideals I. It is named for Élie Cartan and Erich Kähler. Meaning It is not true that merely having dI contained in I is sufficient for integrability. There is a problem caused by singular solutions. The theorem computes certain constants that must satisfy an inequality in order that there be a solution. Statement Let (M, I) be a real analytic EDS. Assume that P ⊆ M is a connected, k-dimensional, real analytic, regular integral manifold of I with r(P) ≥ 0 (i.e., the tangent spaces TpP are "extendable" to higher dimensional integral elements). Moreover, assume there is a real analytic submanifold R ⊆ M of codimension r(P) containing P and such that TpR ∩ H(TpP) has dimension k + 1 for all p ∈ P. Then there exists a (locally) unique connected, (k + 1)-dimensional, real analytic integral manifold X ⊆ M of I that satisfies P ⊆ X ⊆ R. Proof and assumptions The Cauchy-Kovalevskaya theorem is used in the proof, so the analyticity is necessary. References Jean Dieudonné, Eléments d'analyse, vol. 4, (1977) Chapt. XVIII.13 R. Bryant, S. S. Chern, R. Gardner, H. Goldschmidt, P. Griffiths, Exterior Differential Systems, Springer Verlag, New York, 1991. External links R. Bryant, "Nine Lectures on Exterior Differential Systems", 1999 E. Cartan, "On the integration of systems of total differential equations," transl. by D. H. Delphenich E. Kähler, "Introduction to the theory of systems of differential equations," transl. by D. H. Delphenich Partial differential equations Theorems in analysis
Cartan–Kähler theorem
Mathematics
363
62,627,139
https://en.wikipedia.org/wiki/Yves%20Pomeau
Yves Pomeau, born in 1942, is a French mathematician and physicist, emeritus research director at the CNRS and corresponding member of the French Academy of sciences. He was one of the founders of the Laboratoire de Physique Statistique, École Normale Supérieure, Paris. He is the son of literature professor René Pomeau. Career Yves Pomeau did his state thesis in plasma physics, almost without any adviser, at the University of Orsay, France, in 1970. After his thesis, he spent a year as a postdoc with Ilya Prigogine in Brussels. He was a researcher at the CNRS from 1965 to 2006, ending his career as DR0 in the Physics Department of the Ecole Normale Supérieure (ENS) (Statistical Physics Laboratory) in 2006. He was a lecturer in physics at the École Polytechnique for two years (1982–1984), then a scientific expert with the Direction générale de l'armement until January 2007. He was Professor, with tenure, part-time at the Department of Mathematics, University of Arizona, from 1990 to 2008. He was a visiting scientist at Schlumberger–Doll Laboratories (Connecticut, USA) from 1983 to 1984. He was a visiting professor at MIT in Applied Mathematics in 1986 and in Physics at UC San Diego in 1993. He was an Ulam Scholar at CNLS, Los Alamos National Lab, in 2007–2008. He has written three books and published around 400 scientific articles. "Yves Pomeau occupies a central and unique place in modern statistical physics. His work has had a profound influence in several areas of physics, and in particular on the mechanics of continuous media. His work, nourished by the history of scientific laws, is imaginative and profound. Yves Pomeau combines a deep understanding of physical phenomena with varied and elegant mathematical descriptions. Yves Pomeau is one of the most recognized French theorists at the interface of physics and mechanics, and his pioneering work has opened up many avenues of research and has been a continuous source of inspiration for several generations of young experimental physicists and theorists worldwide." Education École normale supérieure, 1961–1965. Licence (1962). DEA in Plasma Physics, 1964. Agrégation in physics, 1965. State thesis in plasma physics, University of Orsay, 1970. Research In his thesis he showed that in a dense fluid the interactions are different from what they are at equilibrium and propagate through hydrodynamic modes, which leads to the divergence of transport coefficients in 2 spatial dimensions. This aroused his interest in fluid mechanics, and in the transition to turbulence. Together with Paul Manneville they discovered a new mode of transition to turbulence, the transition by temporal intermittency, which was confirmed by numerous experimental observations and CFD simulations. This is the so-called Pomeau–Manneville scenario, associated with the Pomeau-Manneville maps. In papers published in 1973 and 1976, Jean Hardy, Pomeau and Olivier de Pazzis introduced the first lattice Boltzmann model, which is called the HPP model after the authors. Generalizing ideas from his thesis, together with Uriel Frisch and Brosl Hasslacher, they found a very simplified microscopic fluid model (FHP model) which allows simulating very efficiently the complex movements of a real fluid. He was a pioneer of lattice Boltzmann methods and played a historical role in the timeline of computational physics. Reflecting on the situation of the transition to turbulence in parallel flows, he showed that turbulence is caused by a contagion mechanism, and not by local instability. 
Fronts can be static or mobile depending on the conditions of the system, and the causes of the motion can be the variation of a free energy, where the most energetically favorable state invades the less favorable one. The consequence is that this transition belongs to the class of directed percolation phenomena in statistical physics, which has also been amply confirmed by experimental and numerical studies. In dynamical systems theory, the structure and length of the attractors of a network correspond to the dynamic phase of the network. The stability of Boolean networks depends on the connections of their nodes. A Boolean network can exhibit stable, critical or chaotic behavior. This phenomenon is governed by a critical value of the average number of connections of nodes (Kc), and can be characterized by the Hamming distance as distance measure. If Ki = K for every node, the transition between the stable and chaotic range depends on the bias p of the random Boolean functions. Bernard Derrida and Yves Pomeau proved that the critical value of the average number of connections is Kc = 1/[2p(1 − p)]. A droplet of nonwetting viscous liquid moves on an inclined plane by rolling along it. Together with Lakshminarayanan Mahadevan, he gave a scaling law for the uniform speed of such a droplet. With Christiane Normand and Manuel García Velarde, he studied convective instability. Apart from simple situations, capillarity is an area where fundamental questions remain. He showed that the discrepancies appearing in the hydrodynamics of the moving contact line on a solid surface could only be eliminated by taking into account the evaporation/condensation near this line. Capillary forces are almost always insignificant in solid mechanics. Nevertheless, with Serge Mora and collaborators they have shown theoretically and experimentally that soft gel filaments are subject to Rayleigh-Plateau instability, an instability never observed before for a solid. In collaboration with his former PhD student Basile Audoly and Henri Berestycki, he studied the speed of the propagation of a reaction front in a fast steady flow with a given structure in space. With Basile Audoly and Martine Ben Amar, Pomeau developed a theory of large deformations of elastic plates which led them to introduce the concept of "d-cone", that is, a geometrical cone preserving the overall developability of the surface, an idea now taken up by the solid mechanics community. The theory of superconductivity is based on the idea of the formation of pairs of electrons that become more or less bosons undergoing Bose-Einstein condensation. This pair formation would explain the halving of the flux quantum in a superconducting loop. Together with Len Pismen and Sergio Rica they have shown that, going back to Onsager's idea explaining the quantization of the circulation in fundamental quantum states, it is not necessary to use the notion of electron pairs to understand this halving of the circulation quantum. He also analyzed the onset of BEC from the point of view of kinetic theory. Whereas the kinetic equation for a dilute Bose gas had been known for many years, it was not clear how it could describe what happens when the gas is cooled down to a temperature below the transition temperature. At this temperature the gas gets a macroscopic component in the quantum ground state, as had been predicted by Einstein long ago. Pomeau and collaborators showed that the solution of the kinetic equation becomes singular at zero energies, and they also found how the density of the condensate grows with time after the transition. 
They also derived the kinetic equation for the Bogoliubov excitations of Bose-Einstein condensates, where they found three collisional processes. Before the surge of interest in super-solids started by Moses Chan's experiments, they had shown in an early simulation that a slightly modified NLS equation yields a fair representation of super-solids. With Alan C. Newell, he studied turbulent crystals in macroscopic systems. Notable among his more recent work are studies of a phenomenon typically out of equilibrium: the emission of photons by an atom maintained in an excited state by an intense field that creates Rabi oscillations. The theory of this phenomenon requires a precise consideration of the statistical concepts of quantum mechanics, within a framework satisfying the fundamental constraints of such a theory. With Martine Le Berre and Jean Ginibre they showed that the appropriate theory is that of a Kolmogorov equation based on the existence of a small parameter, the ratio of the photon emission rate to the atomic frequency itself. Known for Timeline of computational physics Lattice Boltzmann Models Intermittency Pomeau–Manneville scenario Pomeau-Manneville maps The stability of Boolean networks Front Lattice gas automaton Logistic map Stellar pulsation Hardy-Pomeau-Pazzis (HPP) model The physics of ice skating Lorenz system Saffman-Taylor Fingers Hénon-Pomeau attractor Prizes and awards FPS Paul Langevin Award in 1981. FPS Jean Ricard Award in 1985. Perronnet–Bettancourt Prize (1993) awarded by the Spanish government for collaborative research between France and Spain. Chevalier of the Légion d'Honneur since 1991. Elected corresponding member of the French Academy of sciences in 1987 (Mechanical and Computer Sciences). Boltzmann Medal (2016) Three Physicists Prize (2024) References 1942 births French mathematicians French physicists Plasma physicists Members of the French Academy of Sciences Living people Academic staff of the École Normale Supérieure Research directors of the French National Centre for Scientific Research Recipients of the Boltzmann Medal
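A sketch of where the Derrida–Pomeau threshold quoted above comes from, using the standard annealed approximation (an illustration consistent with their result, not a reproduction of the original 1986 derivation): flipping one input changes the output of a random Boolean function of bias p with probability 2p(1 − p), so a single damaged node damages on average K·2p(1 − p) nodes at the next time step. The damage, measured by the Hamming distance, dies out when this factor is below one and spreads when it is above one, giving

\[ K_c \cdot 2p(1-p) = 1 \quad\Longrightarrow\quad K_c = \frac{1}{2p(1-p)}, \]

which reduces to Kc = 2 for unbiased functions (p = 1/2).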
Yves Pomeau
Physics
1,882
46,623,233
https://en.wikipedia.org/wiki/Molybdenum%20ditelluride
Molybdenum(IV) telluride, molybdenum ditelluride or just molybdenum telluride is a compound of molybdenum and tellurium with formula MoTe2, corresponding to a mass percentage of 27.32% molybdenum and 72.68% tellurium. It can crystallise in two dimensional sheets which can be thinned down to monolayers that are flexible and almost transparent. It is a semiconductor, and can fluoresce. It is part of a class of materials called transition metal dichalcogenides. As a semiconductor the band gap lies in the infrared region. This raises the potential use as a semiconductor in electronics or an infrared detector. Preparation MoTe2 can be prepared by heating the correct ratio of the elements together at 1100 °C in a vacuum. Another method is via vapour deposition, where molybdenum and tellurium are volatilised in bromine gas and then deposited. Using bromine results in forming an n-type semiconductor, whereas using tellurium only results in a p-type semiconductor. The amount of tellurium in molybdenum ditelluride can vary, with tellurium being slightly deficient unless it is added in excess during production. Tellurium molecular proportion range from 1.97 to 2. Excess tellurium deposited during this process can be dissolved off with sulfuric acid. By annealing molybdenum film in a tellurium vapour at 850 to 870 K for several hours, a thin layer of MoTe2 is formed. An amorphous form can be produced by sonochemically reacting molybdenum hexacarbonyl with tellurium dissolved in decalin. Molybdenum ditelluride can be formed by electrodeposition from a solution of molybdic acid (H2MoO4) and tellurium dioxide (TeO2). The product can be electroplated on stainless steel or indium tin oxide. Tellurization of thin Mo film at 650 °C by chemical vapor deposition (CVD) leads to the hexagonal, semiconducting α-form (2H-MoTe2) while using MoO3 film produces the monoclinic, semimetallic β-form (1T'-MoTe2) at the same temperature of 650 °C. Physical properties Colour In powdered form MoTe2 is black. Very thin crystals of MoTe2 can be made using sticky tape. When they are thin around 500 nm thick red light can be transmitted. Even thinner layers can be orange or transparent. An absorption edge occurs in the spectrum with wavelengths longer than 6720 Å transmitted and shorter wavelengths heavily attenuated. At 77 K this edge changes to 6465 Å. This corresponds to deep red. Infrared MoTe2 reflects about 43% in the infrared band but has a peak at 234.5 cm−1 and a minimum at 245.8 cm−1. As the temperature is lowered the absorption bands become narrower. At 77 K there are absorption peaks at 1.141, 1.230, 1.489, 1.758, 1.783, 2.049, 2.523, 2.578, and 2.805 eV. Exciton energy levels are at 1.10 eV, called A, and 1.48 eV, called B, with a difference of 0.38 eV. Raman spectrum The Raman spectrum has four lines with wavenumbers of 25.4, 116.8, 171.4, and a double one at 232.4 and 234.5 cm−1. The peak at 234.5 cm−1 is due to E12g mode, especially in nanolayers, but the thicker forms and the bulk has the second peak at 232.4 cm−1 also perhaps due to the E21u phonon mode. The peak near 171.4 cm−1 comes from the A1g. 138 and 185 cm−1 peaks may be due to harmonics. B12g is assigned to a peak around 291 cm−1 in nanolayers with few layers. The E12g frequency increases as the number of layers decreases to 236.6 cm−1 for single layer. The A1g mode lowers its frequency as the number of layers decreases, becoming 172.4 cm−1 for the monolayer. 
Crystal form MoTe2 commonly exists in three crystalline forms with rather similar layered structures: hexagonal α (2H-MoTe2), monoclinic β (1T-MoTe2) and orthorhombic β' (1T'-MoTe2). At room temperature it crystallises in the hexagonal system similar to molybdenum disulfide. Crystals are platy or flat. MoTe2 has unit cell sizes of a=3.519 Å c=13.964 Å and a specific gravity of 7.78 g·cm−3. Each molybdenum atom is surrounded by six tellurium atoms in a trigonal prism with the separation of these Mo and Te atoms being 2.73 Å. This results in sublayers of molybdenum sandwiched between two sublayers of tellurium atoms, and then this three layer structure is stacked. Each layer is 6.97 Å thick. Within this layer two tellurium atoms in the same sublayer subtend an angle of 80.7°. The tellurium atoms on one sublayer are directly above those in the lower sublayer, and they subtend an angle of 83.1° at the molybdenum atom. The other Te-Mo-Te angle across sublayers is 136.0°. The distance between molybdenum atoms within a sublayer is 3.518 Å. This is the same as the distance between tellurium atoms in a sublayer. The distance between a tellurium atom in one sublayer and the atom in the other sublayer is 3.60 Å. The layers are only bonded together with van der Waals force. The distance between tellurium atoms across the layers is 3.95 Å. The tellurium atom at the bottom of one layer is aligned with the centre of a triangle of tellurium atoms on the top of the layer below. The layers are thus in two different positions. The crystal is very easily cleaved on the plane between the three layer sheets. The sizes change with temperature, at 100 K a=3.492 Å and at 400 K is 3.53 Å. In the same range c changes from 13.67 Å to 14.32 Å due to thermal expansion. The hexagonal form is also called 2H-MoTe2, where "H" stands for hexagonal, and "2" means that the layers are in two different positions. Every second layer is positioned the same. At temperatures above 900 °C MoTe2 crystallises in the monoclinic 1T form (β–MoTe2), with space group P21/m with unit cell sizes of a=6.33 Å b=3.469 Å and c=13.86 Å with the angle β=93°55′. The high-temperature form has rod shaped crystals. The measured density of this polymorph is 7.5 g·cm−3, but in theory it should be 7.67 g·cm−3. Tellurium atoms form a distorted octahedron around the molybdenum atoms. This high-temperature form, termed β–MoTe2 can be quenched to room temperature by rapid cooling. In this metastable state β-MoTe2 can survive below 500 °C. When metastable β–MoTe2 is cooled below −20 °C, its crystal form changes to orthorhombic. This is because the monoclinic angle c changes to 90°. This form is called β' or, misleadingly, Td. The transition from α- to β-MoTe2 happens at 820 °C, but if Te is reduced by 5% the required transition temperature increases to 880 °C. K. Ueno and K. Fukushima claim that when the α form is heated in a low or high vacuum that it oxidises to form MoO2 and that reversible phase transitions do not take place. In bulk, MoTe2 can be produced as a single crystal with difficulty, but can also be made as a powder, as a polycrystalline form, as a thin film, as a nanolayer consisting of a few TeMoTe sheets, a bilayer consisting of two sheets or as a monolayer with one sheet. Thin nanolayer forms of α-MoTe2 have different symmetry depending on how many layers there are. With an odd number of layers the symmetry group is D13h without inversion, but for an even number of layers, the lattice is the same if inverted and the symmetry group is D33d. 
Nanotubes with a 20–60 nm diameter can be made by heat treating amorphous MoTe2. Electrical N-type bulk α-MoTe2 has an electrical conductivity of 8.3 Ω−1cm−1 with 5×1017 mobile electrons per cubic centimeter. P-type bulk MoTe2 has an electrical conductivity of 0.2 Ω−1cm−1 and a hole concentration of 3.2×1016 cm−3. The peak electrical conductivity is around 235 K, dropping off slowly with decreasing temperatures, but also reducing to a minimum around 705 K. Above 705 K conductivity increases again with temperature. Powdered MoTe2 has a much higher resistance. β–MoTe2 has a much lower resistivity than α–MoTe2 by more than a thousand times with values around 0.002 Ω·cm. It is much more metallic in nature. In the β form the molybdenum atoms are closer together so that the conduction band overlaps. At room temperature resistivity is 0.000328 Ω·cm. Orthorhombic MoTe2 has a resistance about 10% lower than the β form, and the resistance shows hysteresis of several degrees across the transition point around 250 K. The resistance drops roughly linearly with decreasing temperature. At 180 K resistivity is 2.52×10−4 Ω·cm, and at 120 mK the material becomes a superconductor. Since orthorhombic MoTe2 breaks spatial inversion symmetry, it exhibits ferroelectricity which can be coupled to its innate superconductivity. This coupling was leveraged to create a superconducting switch with MoTe2. At low electric current levels the voltage is proportional to the current in the α form. With high electric currents MoTe2 shows negative resistance, where as the current increases the voltage across the material decreases. This means there is a maximum voltage that can be applied. In the negative resistance region the current must be limited, otherwise thermal runaway will destroy the item made from the material. The Hall constant at room temperature is around 120 cm3/Coulomb for stochiometric α-MoTe2. But as Te is depleted the constant drops to close to 0 for compositions in the range MoTe1.94 to MoTe1.95. The Seebeck coefficient is about 450 μV/K at room temperature for pure MoTe2, but this drops to 0 for MoTe1.95. The Seebeck coefficient increases as temperature drops. Band gap In the bulk α form of MoTe2 the material is a semiconductor with a room temperature indirect band gap of 0.88 eV and a direct band gap of 1.02 eV. If instead of bulk forms, nanolayers are measured, the indirect band gap increases as the number of layers is reduced. α-MoTe2 changes from an indirect to a direct band gap material in very thin slices. It is a direct bandgap material when it is one or two layers (monolayer or bilayer). The band gap is reduced for tellurium-deficient MoTe2 from 0.97 to 0.5. The work function is 4.1 eV. Magnetism α–MoTe2 is diamagnetic whereas β–MoTe2 is paramagnetic. X-ray X-ray photoelectron spectroscopy on clean MoTe2 crystal surfaces show peaks at 231 and 227.8 eV due to molybdenum 3d3/2 and 3d5/2; with 582.9 and 572.5 due to tellurium 3d3/2 and 3d5/2 electrons. The X-ray K absorption edge occurs at 618.41±0.04 X units compared to molybdenum metal at 618.46 xu. Microscopy Atomic force microscopy (AFM) of the van der Waals surface of α-MoTe2 shows alternating rows of smooth balls, which are the tellurium atoms. AFM images are often done on a silica (SiO2) surface on silicon. A monolayer of α-MoTe2 has its surface 0.9 nm above the silica, and each extra layer of α-MoTe2 adds 0.7 nm. 
Scanning tunneling microscopy (STM) of α-MoTe2 reveals a hexagonal grid like chicken wire, where the molybdenum atoms are contributing to the current. Higher bias voltages are required to get an image, either over 0.5 V or below −0.3 V. β-MoTe2 surfaces examined with scanning tunneling microscopy can show either a pattern of tellurium atoms or a pattern of molybdenum atoms on different parts. When the scanning tip is further from the surface only tellurium atoms are visible. This is explained by the dz2 orbitals from molybdenum penetrating up through the surface layer of tellurium. The molybdenum can supply a much bigger current than tellurium. But at greater distance only the p orbital from tellurium can be detected. Lower voltages than used for α form still produce atomic images. Friction force microscopy (FFM) has been used to get a slip-stick image at a resolution below that of the unit cell. Thermal Heat in α-MoTe2 is due to vibrations of the atoms. These vibrations can be resolved into phonons in which the atoms move backwards and forwards in different ways. For a monolayer twisting of the tellurium atoms within the plane is termed E″, a scissoring action where tellurium moves in the plane of the layer is termed E′. Where tellurium vibrates in opposite directions perpendicular to the layer out of the plane the phonon mode is A′1 and where the tellurium moves in the same direction opposite to the molybdenum the mode is called A″1. Of these modes the first three are active in the Raman spectrum. In a bilayer there is an extra interaction between the atoms on the bottom of one layer and the atom on the top of the under layer. The mode symbols are modified with a suffix, "g" or "u" . In the bulk form with many layers, the modes are called A1g (corresponding to A′1 in the monolayer), A2u, B1u B2g, E1g, E1u, E2g and E2u. Modes E1g, E12g, E22g, and A1g are Raman active. Modes E11u, E21u, A12u, and A22u are infrared active. Molar heat of formation of α-MoTe2 is −6 kJ/mol from β-MoTe2. Heat of formation of β-MoTe2 is −84 kJ/mol. For Mo3Te4 it is −185 kJ/mol. Thermal conductivity is 2 Wm−1K−1. Pressure Under pressure α-MoTe2 is predicted to become a semimetal between 13 and 19 GPa. The crystal form should stay the same at pressures up to 100 GPa. β-MoTe2 is not predicted to become more metallic under pressure. Angle-resolved photoemission spectroscopy MoTe2 exhibits topological Fermi arcs. This is evidence for a new type (type-II) of Weyl fermion that arises due to the breaking of Lorentz invariance, which does not have a counterpart in high-energy physics, which can emerge as topologically protected touching between electron and hole pockets. The topological surface states are confirmed by directly observing the surface states using bulk- and surface-sensitive angle-resolved photoemission spectroscopy. Other Poisson ratio V∞=0.37. Monolayer relaxed ion elastic coefficients C11=80 and C12=21. Monolayer relaxed ion piezoelectric coefficient d11=9.13. Reactions MoTe2 gradually oxidises in air forming molybdenum dioxide (MoO2). At elevated temperatures MoTe2 oxidation produces Te2MoO7 and TeMo5O16. Other oxidation products include molybdenum trioxide, tellurium, and tellurium dioxide. Flakes of molybdenum ditelluride that contain many defects have lower luminescence, and absorb oxygen from the air, losing their luminescence. When heated to high temperatures, tellurium evaporates from molybdenum ditelluride, producing the tellurium deficient forms and then Mo2Te3. 
This change is disruptive to experiments as the properties change significantly with Te content as well as with temperature. The vapour pressure of Te2 over hot MoTe2 is given by 10^(8.398 − 11790/T). On further heating Mo2Te3 gives off Te2 vapour. The partial pressure of Te2 is given by 10^(5.56 − 9879/T), where T is in K and the pressure is in bars. Molybdenum metal is left behind. The surface on the flat part of the hexagonal crystal (0001) is covered in tellurium, and is relatively inert. It can have other similar layers added onto it. Tungsten disulfide and tungsten diselenide layers have been added to molybdenum ditelluride by van der Waals epitaxy (vdWE). Gold can be deposited on the cleavage surfaces of MoTe2. On the α form gold tends to be isotropically deposited, but on the β form it makes elongated strips along the [010] crystal direction. Other substances that have been deposited on the crystal surface include indium selenide (InSe), cadmium sulfide (CdS), cadmium telluride (CdTe), tin disulfide (SnS2), tin diselenide (SnSe2), and tantalum diselenide (TaSe2). Some other monolayers are also predicted to be able to form on MoTe2 surfaces, including silicene. Silicene is claimed to become a zero-gap semiconductor on a bulk crystal, but to have a metallic form on or between monolayers of MoTe2. Organic molecules can be incorporated as a layer on the van der Waals surface, including perylene tetracarboxylic acid anhydride. The sheets in α-MoTe2 can be separated and dispersed in water with a sodium cholate surfactant and sonication. It forms an olive green suspension. MoTe2 is hydrophobic, but the surfactant coats the surface with its lipophilic tail. The sheets in α-MoTe2 are able to be penetrated by alkali metals such as lithium to form intercalation compounds. This property means that it could be used as an electrode in a lithium battery. Up to Li1.6MoTe2 can be formed. This material has a similar X-ray diffraction pattern to α-MoTe2. André Morette, the first to make tellurides of molybdenum, discovered that it would burn in a flame, colouring it blue, and making a white smoke of tellurium dioxide. Dilute nitric acid can dissolve it by oxidation. Hot or cold hydrochloric or sulfuric acid, however, does not attack MoTe2, although concentrated sulfuric acid at 261 °C does completely dissolve it. Sodium hydroxide solution partially dissolves MoTe2. Related substances Another molybdenum telluride has formula Mo2Te3. Yet another molybdenum telluride, called hexamolybdenum octatelluride, Mo6Te8, forms black crystals shaped like cubes. It is formed when the elements in the correct ratio are heated together at 1000 °C for a week. It is related to the Chevrel phases, but without an extra metallic cation; however, it is not superconducting. Metal atoms and organic molecules can be intercalated between the layers of MoTe2. Potential applications Potential uses for MoTe2 include lubricants, electronics, optoelectronics and photoelectric cell materials. Diodes have been fabricated from MoTe2 by baking a p-type material in bromine. The diode's current versus voltage plot shows very little current with reverse bias, an exponential region with dV/dln(j) of 1.6, and at higher voltages (>0.3V) a linear response due to resistance. When operated as a capacitor, the capacitance varies as the inverse square of the bias, and also drops for higher frequencies. Transistors have also been built from MoTe2. MoTe2 has potential for low-power electronics. 
Field effect transistors (FET) have been built from a bilayer, trilayer and thicker nanolayers. An ambipolar FET has been built, and also a FET that can operate in n- or p-modes which had two top electrodes. Because MoTe2 has two phases, devices can be constructed that mix the 2H semiconductor, and the 1T' metallic form. A laser can rapidly heat a thin layer to transform 2H-MoTe2 to the metallic form 1T'-MoTe2 (β–MoTe2). Recent research, however, has shown that a decomposition of MoTe2 to Te metal happens instead. The dominant Raman bands of Te and 1T'-MoTe2 (β–MoTe2) come at similar wavenumbers; therefore, it is quite easy to confuse the Raman spectra of the elemental Te and metallic 1T'-MoTe2. A FET can be constructed with a thin layer of molybdenum ditelluride covered with a liquid gate composed of an ionic liquid or an electrolyte such as potassium perchlorate dissolved in polyethylene glycol. With low gate voltages below 2 volts, the device operates in an electrostatic mode, where the current from drain to source is proportional to the gate voltage. Above 2 volts the device enters an intermediate region where current does not increase. Above 3.5 volts current leaks through the gate, and electrolysis occurs intercalating potassium atoms in the MoTe2 layer. The potassium intercalated molybdenum ditelluride becomes superconducting below 2.8 K. As a lubricant molybdenum ditelluride can function well in a vacuum and at temperatures up to 500 °C with a coefficient of friction below 0.1. However molybdenum disulfide has a lower friction, and molybdenum diselenide can function at higher temperatures. Related dichalcogenides can be fabricated into fairly efficient photoelectric cells. Potentially, stacked monolayers of indium nitride and molybdenum ditelluride can result in improved properties for photovoltaics, including lower refractive index, and greater absorbance. Cadmium telluride solar cells are often deposited on a backplate of molybdenum. Molybdenum ditelluride can form at the contact, and if this is n-type it will degrade the performance of the solar cell. Small pieces of nanolayers of molybdenum ditelluride can be mixed in and dispersed in molten pewter without reacting, and it causes a doubling of the stiffness of the resultant composite. Molybdenum ditelluride has been used as a substrate for examining proteins with an atomic force microscope. It is superior because the protein sticks harder than with more traditional materials such as mica. β–MoTe2 is a comparatively good hydrogen evolution electrocatalyst showing even in unsupported form and without any additional nanostructuring a Tafel slope of 78 mV/dec. The semiconducting polymorph of α–MoTe2 was found inactive for HER. The superior activity was attributed to higher conductivity of β–MoTe2 phase. Recent work has shown that electrodes covered with β–MoTe2 demonstrated an increase in the amount of hydrogen gas produced during the electrolysis when a specific pattern of high-current pulses was applied. By optimising the pulses of current through the acidic electrolyte, the authors could reduce the overpotential needed for hydrogen evolution by nearly 50% when compared with the original non-activated material. Few-layered metallic form 1T'-MoTe2 (β–MoTe2) enhance SERS signal and therefore, some lipophilic markers (β–sitosterol) of coronary artery and cardiovascular diseases can be selectively detected at the surface of the few-layered films. 
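As a rough worked example of the Te2 vapour-pressure relation quoted in the Reactions section (the temperature here is chosen purely for illustration), at T = 1000 K the relation gives

\[ \log_{10} p = 8.398 - \frac{11790}{1000} \approx -3.39, \qquad p \approx 4 \times 10^{-4}\ \text{bar}, \]

consistent with the gradual loss of tellurium from hot MoTe2 described above.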
References Molybdenum(IV) compounds Tellurides Transition metal dichalcogenides Monolayers
Molybdenum ditelluride
Physics
5,425
26,836,721
https://en.wikipedia.org/wiki/Antoine%20Nicolas%20Duchesne
Antoine Nicolas Duchesne (born 7 October 1747 Versailles; died 18 February 1827 Paris) was a French botanist known for his keen observation of variation within species, and for demonstrating that species are not immutable, because mutations can occur. "As Duchesne's observations were unaided by knowledge of modern concepts of genetics and molecular biology, his insight was truly remarkable." His particular interests were in strawberries and gourds. Duchesne worked in the gardens of Versailles, where he was a student of Bernard de Jussieu and corresponded with Carl Linnaeus. He established a notable collection of strawberries in the botanical garden of the Petit Trianon and was the first to document the separation of sexes in wild strawberry and the hybrid origin of the garden strawberry. The genus Duchesnea Sm. (Rosaceae) was named after him. Works (selected) Manuel de botanique, contenant les propriétés des plantes utiles, 1764 Essai sur l’histoire naturelle des courges, 46 pp. Panckoucke, Paris 1786. Histoire naturelle des fraisiers contenant les vues d'économie réunies à la botanique et suivie de remarques particulières sur plusieurs points qui ont rapport à l'histoire naturelle générale, Didot jeune, Paris 1766. on line at Bayerische Staatsbibliothek and GoogleBooks Le Jardinier prévoyant, contenant par forme de tableau, le rapport des opérations journalières avec le temps des récoltes successives qu'elles préparent. 11 vols. P. F. Didot jeune, Paris 1770-1781 Sur la formation des jardins, Dorez, Paris 1775. Le Porte-feuille des enfans, mélange intéressant d'animaux, fruits, fleurs, habillemens, plans, cartes et autres objets.... Mérigot jeune, Paris, [n.d., probably 1784]. Le Livret du ″Porte-feuille des enfans″, à l'usage des écoles... d'après la loi du 11 germinal an IV. Imprimerie de Gueffier, Paris, an VI – 1797. Le Cicerone de Versailles, ou l'Indicateur des curiosités et des établissemens de cette ville.... J.-P. Jacob, Versailles, an XII — 1804; revised and augmented in 1815. Sources Adrien Davy de Virville (ed.) (1955) Histoire de la botanique en France. Paris: SEDES 394 p. Further reading Günter Staudt (2003), Les dessins d'Antoine Nicolas Duchesne pour son histoire naturelles des fraisiers / A.N. Duchesne's drawings for his Histoire naturelle des fraisiers. Publications scientifiques du Muséum national d'histoire naturelle (Paris) : 370 p. (coll. Des Planches et des Mots 1) . Foreword of Michel Chauvet on Pl@ntUse. Harry S. Paris (2007), The drawings of Antoine Nicolas Duchesne for his Natural History of the Gourds / Les dessins d'Antoine Nicolas Duchesne pour son histoire naturelle des courges. Publications scientifiques du Muséum national d'histoire naturelle (Paris) : 454 p. (coll. Des Planches et des Mots 4, ed. Christian Erard.) . Foreword of Michel Chauvet on Pl@ntUse. References 1747 births 1827 deaths 19th-century French botanists Proto-evolutionary biologists 18th-century French botanists
Antoine Nicolas Duchesne
Biology
801
7,204,913
https://en.wikipedia.org/wiki/Carbon-to-nitrogen%20ratio
A carbon-to-nitrogen ratio (C/N ratio or C:N ratio) is a ratio of the mass of carbon to the mass of nitrogen in organic residues. It can, amongst other things, be used in analysing sediments and soil including soil organic matter and soil amendments such as compost. Sediments In the analysis of sediments, C/N ratios are a proxy for paleoclimate research, having different uses whether the sediment cores are terrestrial-based or marine-based. Carbon-to-nitrogen ratios indicate the degree of nitrogen limitation of plants and other organisms. They can identify whether molecules found in the sediment under study come from land-based or algal plants. Further, they can distinguish between different land-based plants, depending on the type of photosynthesis they undergo. Therefore, the C/N ratio serves as a tool for understanding the sources of sedimentary organic matter, which can lead to information about the ecology, climate, and ocean circulation at different times in Earth's history. Ranges C/N ratios in the range of 4-10:1 usually come from marine sources, whereas higher ratios are likely to come from a terrestrial source. Vascular plants from terrestrial sources tend to have C/N ratios greater than 20. The lack of cellulose, which has a chemical formula of (C6H10O5)n, and greater amount of proteins in algae versus vascular plants causes this significant difference in the C/N ratio. Instruments Examples of devices that can be used to measure this ratio are the CHN analyzer and the continuous-flow isotope ratio mass spectrometer (CF-IRMS). However, for more practical applications, desired C/N ratios can be achieved by blending commonly used substrates of known C/N content, which are readily available and easy to use. By sediment type Marine Organic matter that is deposited in marine sediments contains a key indicator as to its source and the processes it underwent before reaching the floor as well as after deposition, its carbon to nitrogen ratio. In the global oceans, freshly produced algae in the surface ocean typically have a carbon-to-nitrogen ratio of about 4 to 10. However, it has been observed that only 10% of this organic matter (algae) produced in the surface ocean sinks to the deep ocean without being degraded by bacteria in transit, and only about 1% is permanently buried in the sediment. An important process called sediment diagenesis accounts for the other 9% of organic carbon that sank to the deep ocean floor, but was not permanently buried, that is 9% of the total organic carbon produced is degraded in the deep ocean. The microbial communities utilizing the sinking organic carbon as an energy source, are partial to nitrogen-rich compounds because much of these bacteria are nitrogen-limited and much prefer it over carbon. As a result, the carbon-to-nitrogen ratio of sinking organic carbon in the deep ocean is elevated compared to fresh surface ocean organic matter that has not been degraded. An exponential increase in C/N ratios is observed with increasing water depth—with C/N ratios reaching ten at intermediate water depths of about 1000 meters and up to 15 in the deep ocean (deeper than about 2500 meters) . This elevated C/N signature is preserved in the sediment until another form of diagenesis, post-depositional diagenesis, alters its C/N signature once again. Post-depositional diagenesis occurs in organic-carbon-poor marine sediments where bacteria can oxidize organic matter in aerobic conditions as an energy source. 
The oxidation reaction proceeds as follows: CH2O + H2O → CO2 + 4H+ + 4e−, with standard free energy of –27.4 kJ mol−1 (half-reaction). Once all of the oxygen is used up, bacteria can carry out an anoxic sequence of chemical reactions as an energy source, all with negative ∆G°r values, with the reaction becoming less favorable as the chain of reactions proceeds. The same principle described above explains the preferential degradation of nitrogen-rich organic matter within the sediments, as they are more labile and in higher demand. This principle has been utilized in paleoceanographic studies to identify core sites that have not experienced much microbial activity or contamination by terrestrial sources with much higher C/N ratios. Lastly, ammonia, the product of the second reduction reaction, which reduces nitrate and produces nitrogen gas and ammonia, is readily adsorbed on clay mineral surfaces and protected from bacteria. This has been proposed to explain lower-than-expected C/N signatures of organic carbon in sediments undergoing post-depositional diagenesis. Ammonium produced from the remineralisation of organic material, exists in elevated concentrations (1 - >14μM) within cohesive shelf sea sediments found in the Celtic Sea (depth: 1–30 cm). The sediment depth exceeds 1m and would be a suitable study site for conducting paleolimnology experiments with C:N. Lacustrine Unlike in marine sediments, diagenesis does not pose a large threat to the integrity of the C/N ratio in lacustrine sediments. Though wood from living trees around lakes have consistently higher C/N ratios than wood buried in sediment, the change in elemental composition is not large enough to remove the vascular versus non-vascular plant signals due to the refractory nature of terrestrial organic matter. Abrupt shifts in the C/N ratio down-core can be interpreted as shifts in the organic source material. For example, two studies on Mangrove Lake, Bermuda, and Lake Yunoko, Japan, show irregular, abrupt fluctuations between C/N around 11 to 18. These fluctuations are attributed to shifts from mainly algal dominance to land-based vascular dominance. Results of studies that show abrupt shifts in algal dominance and vascular dominance often lead to conclusions about the state of the lake during these distinct periods of isotopic signatures. Times in which algal signals dominate lakes suggest a deep-water lake, while times in which vascular plant signals dominate lakes suggest the lake is shallow, dry, or marshy. Using the C/N ratio in conjunction with other sediment observations, such as physical variations, D/H isotopic analyses of fatty acids and alkanes, and δ13C analyses on similar biomarkers can lead to further regional climate interpretations that describe the more significant phenomena at play. Soil In microbial communities like soil, the C:N ratio is a key indicator as it describes a balance between energetic foods (represented by carbon) and material to build protein with (represented by nitrogen). An optimal C:N ratio of around 24:1 provides for higher microbial activity. The C:N ratio of soil can be modified by the addition of materials such as compost, manure, and mulch. A feedstock with a near-optimal C:N ratio will be consumed quickly. Any excess C will cause the N originally in the soil to be consumed, competing with the plant for nutrients (immobilization) – at least temporarily until the microbes die. 
Any excess N, on the other hand, will usually just be left behind (mineralization), but too much excess may result in leaching losses. The recommended C:N ratio for soil materials is, therefore, 30:1. A soil test may be done to find the C:N ratio of the soil itself. The C:N ratio of microbes themselves is generally around 10:1. A lower ratio is correlated with higher soil productivity. Compost The role of the C:N ratio in compost feedstock is similar to that of soil feedstock. The recommendation is around 20-30:1. The microbes prefer a ratio of 30-35:1, but the carbon is usually not completely digested (especially in the case of lignin feedstock), hence the lowered ratio. An imbalance in the C:N ratio causes a slowdown in the composting process and a drop in temperature. When the C:N ratio is less than 15:1, outgassing of ammonia may occur, creating odor and losing nitrogen. A finished compost has a C:N ratio of around 10:1. Estimating C and N contents of feedstocks The C and N contents of feedstocks are generally known from lookup tables listing common types of feedstock. It is important to deduct the moisture content if the listed value is for dry material. For foodstuffs with a nutrition analysis, the N content may be estimated from the protein content as N% = protein% / 6.25, reversing the crude protein calculation. The C content may be estimated from the crude ash content (often reported in animal feed) or from the reported macronutrient levels. Given the C:N ratio and one of the C and N contents, the other content may be calculated using the very definition of the ratio. When only the ratio is known, one must estimate the total C+N% or one of the contents to get both values. Managing mixed feedstocks The C:N ratio of mixed feedstocks is calculated by summing their C and N amounts together and dividing the two results. For compost, moisture is also an important factor. References External links C/N calculator Composting Soil chemistry Geochemistry
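A minimal sketch of the blending arithmetic described under Managing mixed feedstocks, together with the protein-based nitrogen estimate; the feedstock masses, compositions and function names below are illustrative placeholders, not values taken from the sources cited above. In Python:

def cn_ratio(feedstocks):
    # Each feedstock is (mass_kg, carbon_fraction, nitrogen_fraction).
    # The blended C:N ratio is total carbon divided by total nitrogen.
    total_c = sum(m * c for m, c, n in feedstocks)
    total_n = sum(m * n for m, c, n in feedstocks)
    return total_c / total_n

def nitrogen_from_protein(protein_percent):
    # Reverses the crude-protein convention (protein = N x 6.25).
    return protein_percent / 6.25

# Example blend: a carbon-rich "brown" and a nitrogen-rich "green" material.
blend = [
    (10.0, 0.45, 0.009),  # e.g. dry leaves: 45% C, 0.9% N (illustrative figures)
    (5.0, 0.40, 0.025),   # e.g. grass clippings: 40% C, 2.5% N (illustrative figures)
]
print(round(cn_ratio(blend), 1))  # about 30, near the 30:1 region discussed above

Because only the mass-weighted C and N totals matter, the same helper works for any table of feedstock compositions.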
Carbon-to-nitrogen ratio
Chemistry
1,891
14,445,573
https://en.wikipedia.org/wiki/Extrachromosomal%20array
An extrachromosomal array is a tool for mosaic analysis in genetics. It is a cosmid containing two functioning (wild-type), closely linked genes: a gene of interest and a mosaic marker. Such an array is injected into germ line cells, which already contain mutant (specifically, loss-of-function) alleles of these genes in their chromosomal DNA. The cosmid, which is not packed correctly during mitosis, is occasionally present in only one daughter cell following cell division. The daughter cell containing the array expresses the gene of interest; the cell lacking the array does not. The mosaic marker is a gene which exhibits a visible phenotype change between the functioning and non-functioning alleles. For example, the mutant ncl-1 allele, located in chromosomal DNA, produces a larger nucleolus than the wild-type allele, which is in the array. Thus, cells which exhibit larger nucleoli have usually not retained the extrachromosomal array. The gene of interest is the target of the mosaic analysis. Cells lacking the extrachromosomal array also lack the functional gene of interest. Cells which develop normally without the array do not require the gene of interest for normal function. Cells which do not develop normally are said to require the gene. In this way, those cell lineages which require a specific gene can be identified. Extrachromosomal arrays replace an earlier technique involving a duplicated piece of chromosome called a free duplication. The latter technique required that the gene of interest and the mosaic marker be closely linked on the duplication; the former allows free choice of mosaic marker and target gene. References Miller LM, Waring DA, Kim SK (1996). Mosaic analysis using a ncl-1 (+) extrachromosomal array reveals that lin-31 acts in the Pn.p cells during Caenorhabditis elegans vulval development. Genetics 143 (3): 1181-1191. Genetics
Extrachromosomal array
Biology
410
230,004
https://en.wikipedia.org/wiki/Scratching
Scratching, sometimes referred to as scrubbing, is a DJ and turntablist technique of moving a vinyl record back and forth on a turntable to produce percussive or rhythmic sounds. A crossfader on a DJ mixer may be used to fade between two records simultaneously. While scratching is most associated with hip hop music, where it emerged in the mid-1970s, from the 1990s it has been used in some styles of EDM like techno, trip hop, and house music and rock music such as rap rock, rap metal, rapcore, and nu metal. In hip hop culture, scratching is one of the measures of a DJ's skills. DJs compete in scratching competitions at the DMC World DJ Championships and IDA (International DJ Association), formerly known as ITF (International Turntablist Federation). At scratching competitions, DJs can use only scratch-oriented gear (turntables, DJ mixer, digital vinyl systems or vinyl records only). In recorded hip hop songs, scratched "hooks" often use portions of other songs. Other music genres such as jazz, pop, and rock have also incorporated scratching. History Precursors A rudimentary form of turntable manipulation that is related to scratching was developed in the late 1940s by radio music program hosts, disc jockeys (DJs), or the radio program producers who did their own technical operation as audio console operators. It was known as back-cueing, and was used to find the very beginning of the start of a song (i.e., the cue point) on a vinyl record groove. This was done to permit the operator to back the disc up (rotate the record or the turntable platter itself counter-clockwise) in order to permit the turntable to be switched on, and come up to full speed without ruining the first few bars of music with the "wow" of incorrect, unnaturally slow-speed playing. This permitted the announcer to time their remarks, and start the turntable in time for when they wanted the music on the record to begin. Back cueing was a basic skill that all radio production staff needed to learn, and the dynamics of it were unique to the brand of professional turntable in use at a given radio station. The older, larger and heavier turntables needed a 180-degree backward rotation to allow for run up to full speed; some of the newer 1950s models used aluminum platters and cloth-backed rubber mats which required a third of a rotational turn or less to achieve full speed when the song began. All this was done in order to present a music show on air with the least amount of silence ("alive air") between music, the announcer's patter and recorded advertising commercials. The rationale was that any "dead air" on a radio station was likely to prompt a listener to switch stations, so announcers and program directors instructed DJs and announcers to provide a continuous, seamless stream of sound–from music to an announcer to a pre-recorded commercial, to a "jingle" (radio station theme song), and then immediately back to more music. Back-cueing was a key function in delivering this seamless stream of music. Radio personnel demanded robust equipment and manufacturers developed special tonearms, styli, cartridges and lightweight turntables to meet these demands. Turntablism Modern scratching techniques were made possible by the invention of direct-drive turntables, which led to the emergence of turntablism. Early belt-drive turntables were unsuitable for scratching since they had a slow start-up time, and they were prone to wear and tear and breakage, as the belt would break from backspinning or scratching. 
The first direct-drive turntable was invented by Shuichi Obata, an engineer at Matsushita (now Panasonic), based in Osaka, Japan. It eliminated belts, and instead employed a motor to directly drive a platter on which a vinyl record rests. In 1969, Matsushita released it as the SP-10, the first direct-drive turntable on the market, and the first in their influential Technics series of turntables. In the 1970s, hip hop musicians and club DJs began to use this specialized turntable equipment to move the record back and forth, creating percussive sounds and effects–"scratching"–to entertain their dance floor audiences. Whereas the 1940s–1960s radio DJs had used back-cueing while listening to the sounds through their headphones, without the audience hearing, with scratching, the DJ intentionally lets the audience hear the sounds that are being created by manipulating the record on the turntable, by directing the output from the turntable to a sound reinforcement system so that the audience can hear the sounds. Scratching was developed by early hip hop DJs from New York City such as Grand Wizzard Theodore, who described scratching as, "nothing but the back-cueing that you hear in your ear before you push it [the recorded sound] out to the crowd." He developed the technique when experimenting with the Technics SL-1200, a direct-drive turntable released by Matsushita in 1972 when he found that the motor would continue to spin at the correct RPM even if the DJ wiggled the record back and forth on the platter. Afrika Bambaataa made a similar discovery with the SL-1200 in the 1970s. The Technics SL-1200 went on to become the most widely used turntable for the next several decades. Jamaican-born DJ Kool Herc, who immigrated to New York City, influenced the early development of scratching. Kool Herc developed break-beat DJing, where the breaks of funk songs—being the most danceable part, often featuring percussion—were isolated and repeated for the purpose of all-night dance parties. He was influenced by Jamaican dub music, and developed his turntable techniques using the Technics SL-1100, released in 1971, due to its strong motor, durability, and fidelity. Although previous artists such as writer and poet William S. Burroughs had experimented with the idea of manipulating a reel-to-reel tape manually to make sounds, as with his 1950s recording, "Sound Piece"), vinyl scratching as an element of hip hop pioneered the idea of making the sound an integral and rhythmic part of music instead of an uncontrolled noise. Scratching is related to "scrubbing" (in terms of audio editing and production) when the reels of an open reel-to-reel tape deck (typically 1/4 inch magnetic audiotape) are gently rotated back and forth while the playback head is live and amplified, to isolate a specific spot on the tape where an editing "cut" is to be made. Today, both scratching and scrubbing can be done on digital audio workstations (DAWs) which are equipped for these techniques. Christian Marclay was one of the earliest musicians to scratch outside hip hop. In the mid-1970s, Marclay used gramophone records and turntables as musical instruments to create sound collages. He developed his turntable sounds independently of hip hop DJs. Although he is little-known to mainstream audiences, Marclay has been described as "the most influential turntable figure outside hip hop" and the "unwitting inventor of turntablism." 
In 1981 Grandmaster Flash released the song "The Adventures of Grandmaster Flash on the Wheels of Steel" which is notable for its use of many DJ scratching techniques. It was the first commercial recording produced entirely using turntables. In 1982, Malcolm McLaren & the World's Famous Supreme Team released a single "Buffalo Gals", juxtaposing extensive scratching with calls from square dancing, and, in 1983, the EP, D'ya Like Scratchin'?, which is entirely focused on scratching. Another 1983 release to prominently feature scratching is Herbie Hancock's Grammy Award-winning single "Rockit". This song was also performed live at the 1984 Grammy Awards, and in the documentary film Scratch, the performance is cited by many 1980s-era DJs as their first exposure to scratching. The Street Sounds Electro compilation series which started in 1983 is also notable for early examples of scratching. Also, a notable piece was "For A Few Dollars More" by Bill Laswell-Michael Beinhorn band Material, released on 12" single in Japan and containing scratch performed by Grand Mixer DXT, another pioneer of scratching. Basic techniques Vinyl recordings Most scratches are produced by rotating a vinyl record on a direct drive turntable rapidly back and forth with the hand with the stylus ("needle") in the record's groove. This produces the distinctive sound that has come to be one of the most recognizable features of hip hop music. Over time with excessive scratching, the stylus will cause what is referred to as "cue burn", or "record burn". The basic equipment setup for scratching includes two turntables and a DJ mixer, which is a small mixer that has a crossfader and cue buttons to allow the DJ to cue up new music in their headphones without the audience hearing. When scratching, this crossfader is utilized in conjunction with the scratching hand that is manipulating the record platter. The hand manipulating the crossfader is used to cut in and out of the record's sound. Digital vinyl systems Using a digital vinyl system (DVS) consists of playing vinyl discs on turntables whose contents are a timecode signal instead of a real music record. The turntables' audio outputs are connected to the audio inputs of a computer audio interface. The audio interface digitizes the timecode signal from the turntables and transfers it to the computer's DJ software. The DJ software uses this data (e.g., about how fast the platter is spinning) to determine the playback status, speed, scratch sound of the hardware turntables, etc., and it duplicates these effects on the digital audio files or computer tracks the DJ is using. By manipulating the turntables' platters, speed controls, and other elements, the DJ thus controls how the computer plays back digitized audio and can therefore produce "scratching" and other turntablism effects on songs which exist as digital audio files or computer tracks. There is not a single standard of DVS, so each form of DJ software has its own settings. Some DJ software such as Traktor Scratch Pro or Serato Scratch Live supports only the audio interface sold with their software, requiring multiple interfaces for one computer to run multiple programs. 
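A minimal sketch of the playback side of the mechanism just described: the DJ software keeps a fractional playhead in the audio file and advances it at whatever speed and direction the decoded timecode reports for the platter. Everything here (function names, block size, the synthetic velocity readings) is an illustrative assumption, not the behaviour of any particular DVS product. In Python:

import numpy as np

def render_scratch(audio, velocities, block_size=64):
    # Advance a fractional playhead through `audio` at the platter speed
    # reported for each output block; linear interpolation stands in for
    # the resampling a real DVS engine would perform.
    out = []
    playhead = 0.0
    last = float(len(audio) - 2)
    for v in velocities:
        for _ in range(block_size):
            playhead = min(max(playhead + v, 0.0), last)  # v < 0 means backwards motion
            i = int(playhead)
            frac = playhead - i
            out.append((1.0 - frac) * audio[i] + frac * audio[i + 1])
    return np.array(out)

# A 440 Hz test tone "scratched": forward, pulled back, then pushed forward again.
sr = 44100
tone = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)
moves = [1.0] * 200 + [-1.5] * 100 + [2.0] * 100  # 1.0 = normal forward speed
scratched = render_scratch(tone, moves)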
Some digital vinyl systems software include: Traktor Scratch Pro Cross DVS VirtualDJ Pro Serato Scratch Live M-Audio Torq Deckadance Xwax Non-vinyl scratching While some turntablists consider the only true scratching media to be the vinyl disc, there are other ways to scratch, such as: Specialized DJ-CD players (CDJ) with jog wheels, allowing the DJ to manipulate a CD as if it were a vinyl record, have become widely available in the 2000s. A vinyl emulation is an emulation software, which may be combined with hardware elements, which allows a DJ to manipulate the playback of digital music files on a computer via a DJ control surface (generally MIDI or a HID controller). DJs can scratch, beatmatch, and perform other turntablist operations that cannot be done with a conventional keyboard and mouse. DJ software performing computer scratch operations include Traktor Pro, Mixxx, Serato Scratch Live & Itch, VirtualDJ, M-Audio Torq, DJay, Deckadance, Cross. DJs have also used magnetic tape, such as cassette or reel to reel to both mix and scratch. Tape DJing is rare, but Ruthless Ramsey in the US, TJ Scratchavite in Italy and Mr Tape in Latvia use exclusively tape formats to perform. Sounds Sounds that are frequently scratched include but are not limited to drum beats, horn stabs, spoken word samples, and vocals/lyrics from other songs. Any sound recorded to vinyl can be used, and CD players providing a turntable-like interface allow DJs to scratch not only material that was never released on vinyl, but also field recordings and samples from television and movies that have been burned to CD-R. Some DJs and anonymous collectors release 12-inch singles called battle records that include trademark, novel or hard-to-find scratch "fodder" (material). The most recognizable samples used for scratching are the "Ahh" and "Fresh" samples, which originate from the song "Change the Beat" by Fab 5 Freddy. There are many scratching techniques, which differ in how the movements of the record are combined with opening and closing the crossfader (or another fader or switch, such as a kill switch, where "open" means that the signal is audible, and "closed" means that the signal is inaudible). This terminology is not unique; the following discussion, however, is consistent with the terminology used by DJ QBert on his Do It Yourself Scratching DVD. Basic techniques Faderless scratches Baby scratch - The simplest scratch form, it is performed with the scratching hand only, moving the record back and forth in continuous movements while the crossfader is in the open position. Scribble scratch - The scribble scratch is by rapidly pushing the record back and forth. The crossfader is not used. Drag scratch - Equivalent to the baby and scribble scratch, but done more slowly. The crossfader is not used. Tweak scratch - Performed while the turntable's motor is not running. The record platter is set in motion manually, then "tweaked" faster and slower to create a scratch. This scratch form is best performed with long, sustained sounds. Hydrophonic scratch - A baby scratch with a "tear scratch" sound produced by the thumb running in the opposite direction as the fingers used to scratch. This rubbing of the thumb adds a vibrating effect or reverberation to forward movements on the turntable. Tear scratch - Tear scratches are scratches where the record is moved in a staggered fashion, dividing the forward and backward movement into two or more movements. 
This allows creating sounds similar to "flare scratches" without the use of the crossfader and it allows for more complex rhythmic patterns. The term can also refer to a simpler, slower version of the chirp. Orbit scratch - Describes any scratch, most commonly flares, that is repeated during the forward and backward movement of the record. "Orbit" is also used as a shorthand for two-click flares. Transformer scratch - with the crossfader closed, the record is moved with the scratching hand while periodically "tapping" the crossfader open and immediately closing it again. Forward and backward scratch - The forward scratch, also referred to as scrubbing, is a baby scratch where the crossfader is closed during the backwards movement of the record. If the record is let go instead of being pushed forward it is also called "release scratch" or "drop". Cutting out the forward part of the record movement instead of the backward part gives a "backward scratch" Chirp scratch - The chirp scratch involves closing the crossfader just after playing the start of a sound, stopping the record at the same point, then pushing it back while opening the fader to create a "chirping" sound. When performed using a recording of drums, it can create the illusion of doubled scratching speed, due to the attack created by cutting in the crossfader on the backward movement. Flare scratch - Begins with the crossfader open, and then the record is moved while briefly closing the fader one or more times to cut the sound out. This produces a staggering sound which can make a single "flare" sound like a very fast series of "chirps" or "tears." The number of times the fader is closed ("clicks") during the record's movement is usually used as a prefix to distinguish the variations. The flare allows a DJ to scratch continuously with less hand fatigue than would result from the transformer. The flare can be combined with the crab for an extremely rapid continuous series of scratches. Euro scratch - A variation of the "flare scratch" in which two faders are used simultaneously with one hand to cut the sound much faster. It can also be performed by using only the up fader and the phono line switch to cut the sound. Crab scratch - Consists of moving the record while quickly tapping the crossfader open or closed with each finger of the crossfader hand. In this way, DJs are able to perform transforms or flares much faster than they could by manipulating the crossfader with the whole hand. Twiddle scratch - A crab scratch using only the index and middle fingers. Scratch combinations More complex combinations can be generated by grouping elementary crossfader motions (such as the open, close, and tap) into three and four-move sequences. Closing and tapping motions can be followed by opens and taps, and opens can be followed by closes only. Note that some sequences of motions ultimately change the direction of the switch, whereas others end in a position such that they can be repeated immediately without having to reset the position of the switch. Sequences that change the direction of the switch can be dovetailed with sequences that change it in the opposite directions to produce repeating patterns, or can be used to transition between open and closed crossfader techniques, such as chirps/flares and transforms, respectively. These crossfader sequences are frequently combined with orbits and tears to produce combination scratches, such as the aquaman scratch, which goes "close-tap-open". 
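Read literally, the transition rules in the preceding paragraph define a small state machine over the three elementary crossfader motions. The snippet below simply enumerates the three-move sequences allowed by that reading; it is an illustration of the combinatorics described above, not a catalogue of named scratches.

from itertools import product

# Successor rules as stated above: a close or a tap may be followed by an
# open or a tap; an open may be followed only by a close.
NEXT = {
    "close": {"open", "tap"},
    "tap":   {"open", "tap"},
    "open":  {"close"},
}

def is_valid(seq):
    # every adjacent pair of motions must respect the successor rules
    return all(b in NEXT[a] for a, b in zip(seq, seq[1:]))

three_move = [seq for seq in product(NEXT, repeat=3) if is_valid(seq)]
print(len(three_move))                         # 8 of the 27 possible sequences are legal
assert ("close", "tap", "open") in three_move  # the "aquaman" pattern mentioned above

Under this reading, only eight of the twenty-seven possible three-move sequences are legal, and the close-tap-open aquaman pattern is one of them.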
Subculture While scratching is becoming more and more popular in pop music, particularly with the crossover success of pop-hip hop tracks in the 2010s, sophisticated scratching and other expert turntablism techniques are still predominantly an underground style developed by the DJ subculture. The Invisibl Skratch Piklz from San Francisco focuses on scratching. In 1994, the group was formed by DJs Q-Bert, Disk & Shortkut and later Mix Master Mike. In July 2000, San Francisco's Yerba Buena Center for the Arts held Skratchcon2000, the first DJ Skratch forum that provided "the education and development of skratch music literacy". In 2001, Thud Rumble became an independent company that works with DJ artists to produce and distribute scratch records. In 2004, Scratch Magazine, one of the first publications about hip hop DJs and record producers, released its debut issue, following in the footsteps of the lesser-known Tablist magazine. Pedestrian is a UK arts organisation that runs Urban Music Mentors workshops led by DJs. At these workshops, DJs teach youth how to create beats, use turntables to create mixes, act as an MC at events, and perform club sets. Use outside hip hop Scratching has been incorporated into a number of other musical genres, including pop, rock, jazz, some subgenres of heavy metal (notably nu metal) and some contemporary and avant-garde classical music performances. For recording use, samplers are often used instead of physically scratching a vinyl record. DJ Product©1969, formerly of the rap rock band Hed PE, recalled that the punk rock band the Vandals was the first rock band he remembered seeing use turntable scratching. Product©1969 also recalled the early rap metal band Proper Grounds, which was signed to Madonna's Maverick Records, as being another one of the first rock bands to utilize scratching in their music. Guitarist Tom Morello, known for his work with Rage Against the Machine and Audioslave, has performed guitar solos that imitate scratching by using the kill switch on his guitar. Perhaps the best-known example is "Bulls on Parade", in which he creates scratch-like rhythmic sounds by rubbing the strings over the pick-ups while using the pickup selector switch as a crossfader. Since the 1990s, scratching has been used in a variety of popular music genres such as nu metal, exemplified by Linkin Park, Slipknot and Limp Bizkit. It has also been used by artists in pop music (e.g. Nelly Furtado) and alternative rock (e.g. Incubus). Scratching is also popular in various electronic music styles, such as techno. See also List of turntablists Tape-bow violin Vinyl emulation software Sources Allmusic's Grand Wizard Theodore biography (also at Artist Direct) DJ Grandmaster Flash quoted in Toop, David (1991). Rap Attack 2, 65. New York: Serpent's Tail. . References DJing Hip-hop production Rap metal Rap rock Audio engineering African-American culture Articles containing video clips Hip-hop terminology
Scratching
Engineering
4,265
8,270,295
https://en.wikipedia.org/wiki/Tetrathionate
The tetrathionate anion, S4O62−, is a sulfur oxyanion derived from the compound tetrathionic acid, H2S4O6. Two of the sulfur atoms present in the ion are in oxidation state 0 and two are in oxidation state +5. Alternatively, the compound can be viewed as the adduct resulting from the binding of to SO3. Tetrathionate is one of the polythionates, a family of anions with the formula [Sn(SO3)2]2−. Its IUPAC name is 2-(dithioperoxy)disulfate, and the name of its corresponding acid is 2-(dithioperoxy)disulfuric acid. The Chemical Abstracts Service identifies tetrathionate by the CAS Number 15536-54-6. Formation Tetrathionate is a product of the oxidation of thiosulfate, S2O32−, by iodine, I2: 2 S2O32− + I2 → S4O62− + 2I− The use of bromine instead of iodine is dubious, as excess bromine will oxidize the thiosulfate to sulfate. Structure Tetrathionate's structure can be visualized by following three edges of a rectangular cuboid, as in the diagram below. The structure shown is the configuration of S4O62− in BaS4O6·2H2O and Na2S4O6·2H2O. Dihedral S–S–S–S angles approaching 90° are common in polysulfides. Compounds Compounds containing the tetrathionate anion include sodium tetrathionate, Na2S4O6, potassium tetrathionate, K2S4O6, and barium tetrathionate dihydrate, BaS4O6·2H2O. Properties Like other sulfur species of intermediate oxidation state, such as thiosulfate, tetrathionate can be responsible for the pitting corrosion of carbon steel and stainless steel. Tetrathionate has also been found to serve as a terminal electron acceptor for Salmonella enterica serotype Typhimurium: thiosulfate present in the small intestine of mammals is oxidized by reactive oxygen species released by the immune system (mainly superoxide produced by NADPH oxidase) to form tetrathionate. This aids in the growth of the bacterium, helped by the inflammatory response. See also Corrosion Dithionite Polysulfides Thiosulfate References Corrosion Sulfur oxyanions
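The oxidation-state and redox claims above can be checked with elementary bookkeeping, taking oxygen as −2 throughout; written as LaTeX, the arithmetic is:

\begin{align*}
\sum \text{ox}(\mathrm{S}) + 6(-2) &= -2 &&\Rightarrow\quad \sum \text{ox}(\mathrm{S}) = +10 = 2(0) + 2(+5)\\
\text{average ox}(\mathrm{S}) &= +10/4 = +2.5 &&\text{(vs. } +4/2 = +2 \text{ in thiosulfate, } \mathrm{S_2O_3^{2-}})\\
2\,\mathrm{S_2O_3^{2-}} &\rightarrow \mathrm{S_4O_6^{2-}} + 2e^- &&\text{balanced by}\quad \mathrm{I_2} + 2e^- \rightarrow 2\,\mathrm{I^-}
\end{align*}

The two-electron transfer is why one I2 is consumed per two thiosulfate ions, consistent with the formation equation given above.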
Tetrathionate
Chemistry,Materials_science
519
5,713,212
https://en.wikipedia.org/wiki/Humane%20education
Humane education is broadly defined as education that nurtures compassion and respect for living beings. In addition to focusing on the humane treatment of non-human animals, humane education also increasingly contains content related to the environment, the compassionate treatment of other people, and the interconnectedness of issues pertaining to people and the planet. Humane education encourages cognitive, affective, and behavioral growth through personal development of critical thinking, problem solving, perspective-taking, and empathy as it relates to people, animals, the planet, and the intersections among them. Education taught through the lens of humane pedagogy supports more than knowledge acquisition; it allows learners to process personal values and choose prosocial behaviors aligned with those values. History Humane education as a discrete field of education was created in the late 1800s by individuals like George Angell as an attempt to address social injustices and prevent cruelty to animals before it started, along with the formation of SPCAs, such as the Massachusetts SPCA and the ASPCA. The formation of humane education and animal protection/welfare organizations was associated with the expansion of women’s suffrage and the temperance movement, and many of those involved in the creation and early advocacy of humane education also worked in those other areas of social change as well. These early activists successfully advocated for the passage of laws supporting or even requiring the teaching of humane education in schools, and many teachers did teach it. The animal welfare organizations also visited schools and other youth centers to teach “push-in” programs that supplemented—and possibly augmented—the children’s other education. In addition to school-based programs and activities, humane education was also initially conducted through Bands of Mercy; although these have been disbanded, humane education continues to be conducted in community-based settings. These include animal shelters, humane education centers and parks as well as, e.g., Boys and Girls Clubs, YWCAs and YMCAs, cultural and religious centers, etc. Currently, humane education is often conducted by animal welfare organizations and organizations that include humane education among their primary focuses. General Goals, Content, and Pedagogical Strategies Humane education seeks to nurture compassion and concern, and to have the concern that people—especially children and adolescents—have towards one group (e.g., humans) extended to other groups (e.g., animals). One of the beliefs that helped establish humane education as a field has been that helping children learn to treat animals with kindness will encourage them to grow up to be adults who are kind to all animals, human and non-human. This “cross-fertilization” of kindness is also used, e.g., to try to have the care that children have for their own pets be extended to animals in their community, animals in circuses and zoos, animals in agriculture and on factory farms, or to show how reducing pollution in one’s neighborhood can help ecosystems far away. Typical Current Content In addition to the humane treatment of domestic animals, humane education now often examines broader issues including human relationships and animal exploitation.
Common topics currently covered include responsible pet care (e.g., spaying/neutering and responsible adoption); animal agriculture; factory farming; captive wild animals; understanding animal emotions, sentience, and communication; blood sports; bite prevention; ecological stewardship; the interconnectedness of life; pollution; reduction/reuse/recycling of materials; bullying; non-violent conflict resolution; critical thinking; child labor; and the effects of everyday activities on other people, animals, and the environment. Pedagogical Strategies Since the beginning, humane education has focused on a constructivist approach to teaching and includes methods such as service-learning and experiential learning. Organizations that conduct humane education programs, therefore, often create community- or home-based activities in which students can learn humane education content and behaviors through experience and reflection. Humane education programs may be conducted in a variety of ways in schools. Programs may be supplemental or add-on programs, such as when a humane educator or the teacher of record devotes a class period to humane education content; in these cases, the lesson is often devoted wholly to teaching humane education content (e.g., responsible pet or environmental care, spaying/neutering, respect for others). Programs may also be infused into the curriculum or add-ins. These infused programs allow for the most effective form of Humane Pedagogy (a teaching approach inspired by critical pedagogy, which attempts to help students activate cognitive, affective, and psychomotor domains of learning and determine personal values with an ecocentric lens). The strongest humane pedagogy is part of both the written and unwritten curriculum. Humane education may also be integrated into traditional lessons. Since most children and adolescents find animals and nature to be engaging topics, humane education can be an effective vehicle to teach other content as well, such as literature, history, civics, or science. Effectiveness Although teachers who use humane education often report anecdotal evidence that it works, and although there is a welter of qualitative research that also suggests it is effective, there are few objective, well-controlled studies that compare humane education programs against good control groups. Nonetheless, those who have studied it carefully tend to find that it is effective, probably at least as effective as other, comparable non-humane-education programs. Animal-Assisted Education Animal-assisted education is education that employs direct interaction with or perception of animals to enhance learning. One such program used shelter dogs in a school-based violence prevention and character education program. According to the researcher, "[f]indings indicate that receiving the program significantly alters students’ normative beliefs about aggression, levels of empathy, and displays of violent and aggressive behaviors". School-based Programs Probably the largest study of humane education ever conducted included a "large evaluation conducted over 3 separate years in 25 public elementary schools in 5 cities across eastern China". The author randomly assigned about half of the schools to participate in Caring-for-Life, a humane education program, and randomly assigned the other half to the control group. In all, the effect of the program was tested on over 2,000 first and second grade students.
The author reports that "[s]tudents who participated in the program displayed significantly greater gains in prosociality than similar students who didn’t. Students who participated in an expanded version of the program appeared to realize even greater gains". Another large-scale, randomized controlled trial found that a 12-lesson humane education program significantly improved lower elementary students' attitudes and behaviors about the environment. The humane education program was taught by the students' teachers during one period of the normal school day over one academic year. By the end of the year, the children who participated in the program reported caring more about a range of environmental issues and engaging in more behaviors to address these issues (than did peers who did not participate in the humane education program). The humane education program that was studied was designed to address the United Nations Educational, Scientific and Cultural Organization's (UNESCO's) Four Pillars of Education through both humane education strategies and content. Another experimental-vs-control study compared the effect of the HEART humane education program on elementary students in several schools in two cities in the United States. Students self-reported their attitudes about the treatment of animals and the environment, and teachers rated each student's prosocial and disruptive behaviors. The authors found that "the development of prosocial behaviors and self-reported attitudes significantly interacted with group assignment: Students who participated in the humane education program showed stronger growth in both of these outcomes compared with students in the control group". However, they did not find changes in disruptive behaviors to differ between the groups. Overall, the authors state that "[t]he results support the effectiveness of a humane education program to teach a relatively large and diverse group of upper elementary students to learn about animal welfare issues and to improve their prosocial behaviors. Effects appeared strongest on attitudes; behavioral effects were found to be largely limited to behaviors directly addressed by the humane education program." Duration of Effects The effects of a humane education program seem to last for at least a year. Piek and colleagues found that young children randomly assigned to participate in the Animal Fun program, which "was designed to enhance motor and social development in young children", showed significant improvements in teacher-rated prosocial behaviour and total difficulties compared to children randomly assigned to the control group. The effect of the program was found to still be strong not only 6 months but also 12 and 18 months later. As the authors state, "The Animal Fun program appears to be effective in improving social and behavioural outcomes". List of Humane Education Organizations Bands of Mercy Factory Farming Awareness Coalition InterNICHE See also Animal-assisted interventions Animal cruelty Anti-vivisection movement Animal welfare Environmental education Environmental protection Ecopedagogy Human rights Social justice Sustainability Vegetarianism, Veganism References Further reading Unti, B. & DeRosa, B. (2003). Humane education: Past, present, and future. In D. J. Salem & A. N. Rowan (Eds.), The State of the Animals II: 2003 (pp. 27–50). Washington, D.C.: Humane Society Press. Alternative education Animal welfare Anti-vivisection movement
Humane education
Chemistry
1,910
24,341,453
https://en.wikipedia.org/wiki/147P/Kushida%E2%80%93Muramatsu
147P/Kushida–Muramatsu is a quasi-Hilda comet discovered in 1993 by Japanese astronomers Yoshio Kushida and Osamu Muramatsu. According to calculations made by Katsuhiko Ohtsuka of the Tokyo Meteor Network and David Asher of Armagh Observatory, Kushida–Muramatsu was temporarily captured by Jupiter as an irregular moon between May 14, 1949, and July 15, 1962 (about 13 years). It is the fifth such object known to have been captured. It is thought that quasi-Hilda comets may be escaped Hilda asteroids. Comet Shoemaker–Levy 9, which collided with Jupiter in 1994, is a more famous example of a quasi-Hilda comet. References External links 147P/Kushida-Muramatsu – Seiichi Yoshida @ aerith.net 147P/Kushida-Muramatsu Periodic comets Encke-type comets 0147 Hilda asteroids 147P 147P 19931210 Discoveries by Yoshio Kushida Discoveries by Osamu Muramatsu
147P/Kushida–Muramatsu
Astronomy
207
2,526,986
https://en.wikipedia.org/wiki/Isotopes%20of%20rhodium
Naturally occurring rhodium (45Rh) is composed of only one stable isotope, 103Rh. The most stable radioisotopes are 101Rh with a half-life of 3.3 years, 102Rh with a half-life of 207 days, and 99Rh with a half-life of 16.1 days. Thirty other radioisotopes have been characterized with atomic weights ranging from 88.949 u (89Rh) to 121.943 u (122Rh). Most of these have half-lives that are less than an hour except 100Rh (half-life: 20.8 hours) and 105Rh (half-life: 35.36 hours). There are also numerous meta states with the most stable being 102mRh (0.141 MeV) with a half-life of about 3.7 years and 101mRh (0.157 MeV) with a half-life of 4.34 days. The primary decay mode before the only stable isotope, 103Rh, is electron capture and the primary mode after is beta emission. The primary decay product before 103Rh is ruthenium and the primary product after is palladium. List of isotopes |-id=Rhodium-90 | rowspan=2|90Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 45 | rowspan=2|89.94457(22)# | rowspan=2|29(3) ms | β+ | 90Ru | rowspan=2|(0+) | rowspan=2| |- | β+, p? (<0.7%) | 89Tc |-id=Rhodium-90m | rowspan=2 style="text-indent:1em" | 90mRh | rowspan=2 colspan="3" style="text-indent:2em" | 0(500)# keV | rowspan=2|0.56(2) s | β+ (90.4%) | 90Ru | rowspan=2|(7+) | rowspan=2| |- | β+, p (9.6%) | 89Tc |-id=Rhodium-91 | rowspan=2|91Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 46 | rowspan=2|90.93712(32)# | rowspan=2|1.47(22) s | β+ (98.7%) | 91Ru | rowspan=2|(9/2+) | rowspan=2| |- | β+, p (1.3%) | 90Tc |-id=Rhodium-91m | rowspan=3 style="text-indent:1em" | 91mRh | rowspan=3 colspan="3" style="text-indent:2em" | 172.9(4) keV | rowspan=3|1.8# s | β+? | 91Ru | rowspan=3|1/2−# | rowspan=3| |- | β+, p? | 90Tc |- | IT? 
| 91Rh |-id=Rhodium-92 | rowspan=2|92Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 47 | rowspan=2|91.9323677(47) | rowspan=2|5.61(8) s | β+ (97.95%) | 92Ru | rowspan=2|(6+) | rowspan=2| |- | β+, p (2.05%) | 91Tc |-id=Rhodium-92m1 | rowspan=2 style="text-indent:1em" | 92m1Rh | rowspan=2 colspan="3" style="text-indent:2em" | 50(100)# keV | rowspan=2|3.18(22) s | β+ (98.3%) | 92Ru | rowspan=2|(2+) | rowspan=2| |- | β+, p (1.7%) | 91Tc |-id=Rhodium-92m2 | style="text-indent:1em" | 92m2Rh | colspan="3" style="text-indent:2em" | 105(100)# keV | 232(15) ns | IT | 92Rh | (4+) | |-id=Rhodium-93 | 93Rh | style="text-align:right" | 45 | style="text-align:right" | 48 | 92.9259128(28) | 13.9(16) s | β+ | 93Ru | 9/2+# | |-id=Rhodium-94 | rowspan=2|94Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 49 | rowspan=2|93.9217305(36) | rowspan=2|70.6(6) s | β+ (98.2%) | 94Ru | rowspan=2|(4+) | rowspan=2| |- | β+, p (1.8%) | 93Tc |-id=Rhodium-94m1 | style="text-indent:1em" | 94m1Rh | colspan="3" style="text-indent:2em" | 54.60(20)# keV | 480(30) ns | IT | 94Rh | (2+) | |-id=Rhodium-94m2 | style="text-indent:1em" | 94m2Rh | colspan="3" style="text-indent:2em" | 300(200)# keV | 25.8(2) s | β+ | 94Ru | (8+) | |-id=Rhodium-95 | 95Rh | style="text-align:right" | 45 | style="text-align:right" | 50 | 94.9158979(42) | 5.02(10) min | β+ | 95Ru | (9/2)+ | |-id=Rhodium-95m | rowspan=2 style="text-indent:1em" | 95mRh | rowspan=2 colspan="3" style="text-indent:2em" | 543.3(3) keV | rowspan=2|1.96(4) min | IT (88%) | 95Rh | rowspan=2|(1/2)− | rowspan=2| |- | β+ (12%) | 95Ru |-id=Rhodium-96 | 96Rh | style="text-align:right" | 45 | style="text-align:right" | 51 | 95 914452(11) | 9.90(10) min | β+ | 96Ru | 6+ | |-id=Rhodium-96m | rowspan=2 style="text-indent:1em" | 96mRh | rowspan=2 colspan="3" style="text-indent:2em" | 51.98(9) keV | rowspan=2|1.51(2) min | IT (60%) | 96Rh | rowspan=2|3+ | rowspan=2| |- | β+ (40%) | 96Ru |-id=Rhodium-97 | 97Rh | style="text-align:right" | 45 | style="text-align:right" | 52 | 96.911328(38) | 30.7(6) min | β+ | 97Ru | 9/2+ | |-id=Rhodium-97m | rowspan=2 style="text-indent:1em" | 97mRh | rowspan=2 colspan="3" style="text-indent:2em" | 258.76(18) keV | rowspan=2|46.2(16) min | β+ (94.4%) | 97Ru | rowspan=2|1/2− | rowspan=2| |- | IT (5.6%) | 97Rh |-id=Rhodium-98 | 98Rh | style="text-align:right" | 45 | style="text-align:right" | 53 | 97.910708(13) | 8.72(12) min | β+ | 98Ru | (2)+ | |-id=Rhodium-98m | rowspan=2 style="text-indent:1em" | 98mRh | rowspan=2 colspan="3" style="text-indent:2em" | 56.3(10) keV | rowspan=2|3.6(2) min | IT (89%) | 98Rh | rowspan=2|(5+) | rowspan=2| |- | β+ (11%) | 98Ru |-id=Rhodium-99 | 99Rh | style="text-align:right" | 45 | style="text-align:right" | 54 | 98.908121(21) | 16.1(2) d | β+ | 99Ru | 1/2− | |-id=Rhodium-99m | rowspan=2 style="text-indent:1em" | 99mRh | rowspan=2 colspan="3" style="text-indent:2em" | 64.4(5) keV | rowspan=2|4.7(1) h | β+ | 99Ru | rowspan=2|9/2+ | rowspan=2| |- | IT? 
| 99Rh |-id=Rhodium-100 | rowspan=2|100Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 55 | rowspan=2|99.908114(19) | rowspan=2|20.8(1) h | EC (95.1%) | 100Ru | rowspan=2|1− | rowspan=2| |- | β+ (4.9%) | 100Ru |-id=Rhodium-100m1 | style="text-indent:1em" | 100m1Rh | colspan="3" style="text-indent:2em" | 74.782(14) keV | 214.0(20) ns | IT | 100Rh | (2)+ | |-id=Rhodium-100m2 | rowspan=2 style="text-indent:1em" | 100m2Rh | rowspan=2 colspan="3" style="text-indent:2em" | 107.6(2) keV | rowspan=2|4.6(2) min | IT (98.3%) | 100Rh | rowspan=2|(5+) | rowspan=2| |- | β+ (1.7%) | 100Ru |-id=Rhodium-100m3 | style="text-indent:1em" | 100m3Rh | colspan="3" style="text-indent:2em" | 219.61(22) keV | 130(10) ns | IT | 100Rh | (7+) | |-id=Rhodium-101 | 101Rh | style="text-align:right" | 45 | style="text-align:right" | 56 | 100.9061589(63) | 4.07(5) y | EC | 101Ru | 1/2− | |-id=Rhodium-101m | rowspan=2 style="text-indent:1em" | 101mRh | rowspan=2 colspan="3" style="text-indent:2em" | 157.32(3) keV | rowspan=2|4.343(10) d | EC (92.80%) | 101Ru | rowspan=2|9/2+ | rowspan=2| |- | IT (7.20%) | 101Rh |-id=Rhodium-102 | rowspan=2|102Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 57 | rowspan=2|101.9068343(69) | rowspan=2|207.0(15) d | β+ (78%) | 102Ru | rowspan=2|2− | rowspan=2| |- | β− (22%) | 102Pd |-id=Rhodium-102m | rowspan=2 style="text-indent:1em" | 102mRh | rowspan=2 colspan="3" style="text-indent:2em" | 140.73(9) keV | rowspan=2|3.742(10) y | β+ (99.77%) | 102Ru | rowspan=2|6+ | rowspan=2| |- | IT (0.233%) | 102Rh |-id=Rhodium-103 | 103Rh | style="text-align:right" | 45 | style="text-align:right" | 58 | 102.9054941(25) | colspan=3 align=center|Stable | 1/2− | 1.0000 |-id=Rhodium-103m | style="text-indent:1em" | 103mRh | colspan="3" style="text-indent:2em" | 39.753(6) keV | 56.114(9) min | IT | 103Rh | 7/2+ | |-id=Rhodium-104 | rowspan=2|104Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 59 | rowspan=2|103.9066453(25) | rowspan=2|42.3(4) s | β− (99.55%) | 104Pd | rowspan=2|1+ | rowspan=2| |- | β+ (0.45%) | 104Ru |-id=Rhodium-104m | rowspan=2 style="text-indent:1em" | 104mRh | rowspan=2 colspan="3" style="text-indent:2em" | 128.9679(5) keV | rowspan=2|4.34(3) min | IT (99.87%) | 104Rh | rowspan=2|5+ | rowspan=2| |- | β− (0.13%) | 104Pd |-id=Rhodium-105 | 105Rh | style="text-align:right" | 45 | style="text-align:right" | 60 | 104.9056878(27) | 35.341(19) h | β− | 105Pd | 7/2+ | |-id=Rhodium-105m | style="text-indent:1em" | 105mRh | colspan="3" style="text-indent:2em" | 129.742(4) keV | 42.8(3) s | IT | 105Rh | 1/2− | |-id=Rhodium-106 | 106Rh | style="text-align:right" | 45 | style="text-align:right" | 61 | 105.9072859(58) | 30.07(35) s | β− | 106Pd | 1+ | |-id=Rhodium-106m | style="text-indent:1em" | 106mRh | colspan="3" style="text-indent:2em" | 132(11) keV | 131(2) min | β− | 106Pd | (6)+ | |-id=Rhodium-107 | 107Rh | style="text-align:right" | 45 | style="text-align:right" | 62 | 106.906748(13) | 21.7(4) min | β− | 107Pd | 7/2+ |-id=Rhodium-107m | style="text-indent:1em" | 107mRh | colspan="3" style="text-indent:2em" | 268.36(4) keV | >10 μs | IT | 107Rh | 1/2− | |-id=Rhodium-108 | 108Rh | style="text-align:right" | 45 | style="text-align:right" | 63 | 107.908715(15) | 16.8(5) s | β− | 108Pd | 1+ | |-id=Rhodium-108m | style="text-indent:1em" | 108mRh | colspan="3" style="text-indent:2em" | 115(18) keV | 6.0(3) min | β− | 108Pd | (5+) | |-id=Rhodium-109 | 109Rh | style="text-align:right" | 45 | 
style="text-align:right" | 64 | 108.9087496(43) | 80.8(7) s | β− | 109Pd | 7/2+ | |-id=Rhodium-109m | style="text-indent:1em" | 109mRh | colspan="3" style="text-indent:2em" | 225.873(19) keV | 1.66(4) μs | IT | 109Pd | 3/2+ | |-id=Rhodium-110 | 110Rh | style="text-align:right" | 45 | style="text-align:right" | 65 | 109.911080(19) | 3.35(12) s | β− | 110Pd | (1+) | |-id=Rhodium-110m | style="text-indent:1em" | 110mRh | colspan="3" style="text-indent:2em" | 220(150)# keV | 28.5(13) s | β− | 110Pd | (6+) | |-id=Rhodium-111 | 111Rh | style="text-align:right" | 45 | style="text-align:right" | 66 | 110.9116432(74) | 11(1) s | β− | 111Pd | (7/2+) | |-id=Rhodium-112 | 112Rh | style="text-align:right" | 45 | style="text-align:right" | 67 | 111.914405(47) | 3.4(4) s | β− | 112Pd | (1+) | |-id=Rhodium-112m | style="text-indent:1em" | 112mRh | colspan="3" style="text-indent:2em" | 340(70) keV | 6.73(15) s | β− | 112Pd | (6+) | |-id=Rhodium-113 | 113Rh | style="text-align:right" | 45 | style="text-align:right" | 68 | 112.9154402(77) | 2.80(12) s | β− | 113Pd | (7/2+) | |-id=Rhodium-114 | 114Rh | style="text-align:right" | 45 | style="text-align:right" | 69 | 113.918722(77) | 1.85(5) s | β− | 114Pd | 1+ | |-id=Rhodium-114m | style="text-indent:1em" | 114mRh | colspan="3" style="text-indent:2em" | 200(150)# keV | 1.85(5) s | β− | 114Pd | (7−) | |-id=Rhodium-115 | rowspan=2|115Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 70 | rowspan=2|114.9203116(79) | rowspan=2|1.03(3) s | β− | 115Pd | rowspan=2|(7/2+) | rowspan=2| |- | β−, n? | 114Pd |-id=Rhodium-116 | rowspan=2|116Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 71 | rowspan=2|115.924062(79) | rowspan=2|685(39) ms | β− (>97.9%) | 116Pd | rowspan=2|1+ | rowspan=2| |- | β−, n? (<2.1%) | 115Pd |-id=Rhodium-116m | rowspan=2 style="text-indent:1em" | 116mRh | rowspan=2 colspan="3" style="text-indent:2em" | 200(150)# keV | rowspan=2|570(50) ms | β− (>97.9%) | 116Pd | rowspan=2|(6−) | rowspan=2| |- | β−, n? (<2.1%) | 115Pd |-id=Rhodium-117 | rowspan=2|117Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 72 | rowspan=2|116.9260363(95) | rowspan=2|421(30) ms | β− | 117Pd | rowspan=2|7/2+# | rowspan=2| |- | β−, n? (<7.6%) | 115Pd |-id=Rhodium-117m | style="text-indent:1em" | 117mRh | colspan="3" style="text-indent:2em" | 321.2(10) keV | 138(17) ns | IT | 117Rh | 3/2+# | |-id=Rhodium-118 | rowspan=2|118Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 73 | rowspan=2|117.930341(26) | rowspan=2|282(9) ms | β− (96.9%) | 118Pd | rowspan=2|1+# | rowspan=2| |- | β−, n (3.1%) | 117Pd |-id=Rhodium-118m | rowspan=3 style="text-indent:1em" | 118mRh | rowspan=3 colspan="3" style="text-indent:2em" | 200(150)# keV | rowspan=3|310(30) ms | β− (96.9%) | 118Pd | rowspan=3|6−# | rowspan=3| |- | β−, n (3.1%) | 117Pd |- | IT? | 118Rh |-id=Rhodium-119 | rowspan=2|119Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 74 | rowspan=2|118.932557(10) | rowspan=2|190(6) ms | β− (93.6%) | 119Pd | rowspan=2|7/2+# | rowspan=2| |- | β−, n (6.4%) | 118Pd |-id=Rhodium-120 | rowspan=3| 120Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 75 | rowspan=3| 119.93707(22)# | rowspan=3| 129.6(42) ms | β− | 120Pd | rowspan=3| 8−# | rowspan=3| |- | β−, n (<9.3%) | 119Pd |- | β−, 2n? 
| 118Pd |-id=Rhodium-120m | style="text-indent:1em" | 120mRh | colspan="3" style="text-indent:2em" | 157.2(7) keV | 295(16) ns | IT | 120Rh | 6# | |-id=Rhodium-121 | rowspan=2| 121Rh | rowspan=2 style="text-align:right" | 45 | rowspan=2 style="text-align:right" | 76 | rowspan=2| 120.93961(67) | rowspan=2| 74(4) ms | β− | 121Pd | rowspan=2| 7/2+# | rowspan=2| |- | β−, n (>11%) | 120Pd |-id=Rhodium-122 | rowspan=3| 122Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 77 | rowspan=3| 121.94431(32)# | rowspan=3| 51(6) ms | β− | 122Pd | rowspan=3| 7−# | rowspan=3| |- | β−, n (<3.9%) | 121Pd |- | β−, 2n? | 120Pd |-id=Rhodium-122m | style="text-indent:1em" | 122mRh | colspan="3" style="text-indent:2em" | 271.0(7) keV | 830(120) ns | IT | 122Rh | 4+# | |-id=Rhodium-123 | rowspan=3| 123Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 78 | rowspan=3| 122.94719(43)# | rowspan=3| 42(4) ms | β− | 123Pd | rowspan=3| 7/2+# | rowspan=3| |- | β−, n (>24%) | 122Pd |- | β−, 2n? | 121Pd |-id=Rhodium-124 | rowspan=3| 124Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 79 | rowspan=3| 123.95200(43)# | rowspan=3| 30(2) ms | β− | 124Pd | rowspan=3| 2+# | rowspan=3| |- | β−, n (<31%) | 123Pd |- | β−, 2n? | 122Pd |-id=Rhodium-125 | rowspan=3|125Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 80 | rowspan=3|124.95509(54)# | rowspan=3|26.5(20) ms | β− | 125Pd | rowspan=3|7/2+# | rowspan=3| |- | β−, n? | 124Pd |- | β−, 2n? | 123Pd |-id=Rhodium-126 | rowspan=3|126Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 81 | rowspan=3|125.96006(54)# | rowspan=3|19(3) ms | β− | 126Pd | rowspan=3|1−# | rowspan=3| |- | β−, n? | 125Pd |- | β−, 2n? | 124Pd |-id=Rhodium-127 | rowspan=3|127Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 82 | rowspan=3|126.96379(64)# | rowspan=3|28(14) ms | β− | 127Pd | rowspan=3|7/2+# | rowspan=3| |- | β−, n? | 126Pd |- | β−, 2n? | 125Pd |-id=Rhodium-128 | rowspan=3|128Rh | rowspan=3 style="text-align:right" | 45 | rowspan=3 style="text-align:right" | 83 | rowspan=3|127.97065(32)# | rowspan=3|8# ms[>550 ns] | β−? | 128Pd | rowspan=3| | rowspan=3| |- | β−, n? | 127Pd |- | β−, 2n? | 126Pd References Isotope masses from: Half-life, spin, and isomer data selected from the following sources. Rhodium Rhodium
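As a small worked example of how the half-lives quoted in the introduction translate into remaining activity (the 3.3-year figure from the introduction is used here; the table itself lists a re-evaluated 4.07 a for 101Rh, and the arithmetic is identical either way):

\begin{align*}
\frac{N(t)}{N_0} &= 2^{-t/T_{1/2}}, & \lambda &= \frac{\ln 2}{T_{1/2}},\\
\frac{N(10\ \mathrm{a})}{N_0} &= 2^{-10/3.3} \approx 0.12, & \lambda &\approx \frac{0.693}{3.3\ \mathrm{a}} \approx 0.21\ \mathrm{a^{-1}}.
\end{align*}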
Isotopes of rhodium
Chemistry
6,816
4,611,830
https://en.wikipedia.org/wiki/Fingerprint%20powder
Fingerprint powders are fine powders used, in conjunction with fingerprint brushes, by crime scene investigators and other law enforcement personnel to search for and enhance latent/invisible fingerprints that can be used to determine identification. This method of fingerprint development, commonly referred to as dusting for fingerprints, involves the adherence of the powder particles to the moisture and sweat secretions deposited on to surfaces by the raised ridges on fingers, palms, or soles of feet designed for grip, called friction ridges. Furrows, representing the recessed areas, which lack fingerprint residue, do not retain the powder. Physical development of fingerprints using powders is one of many methods that can be employed to enhance fingerprints. It is typically used to search for fingerprints on large non-porous surfaces that cannot be submitted for chemical development within a laboratory. This particular method is best suited for the enhancement of freshly deposited fingerprints, because the adherence of the powder is diminished when the impression residue has dried. Fingerprint powders are commonly used because of the versatility associated with this technique. There is a large selection of fingerprint powder compositions that have evolved, over time, to enable the safe and effective use of fingerprint powders on a wide range of backgrounds. Composition In general, two components are present in dry non-magnetic fingerprint powders: a colour, typically inorganic in nature, and a material for adhesion within the powder such as stearic acid, cornstarch or Lycopodium powder, the spores of the Lycopodium and other related plants. A filler material such as mesh pumice is often added to keep the colour and adhesion material together, while preventing the formation of large clumps within the powder that would result in ineffective application onto the surface. Modern fingerprint powders are plentiful and include both dry and aqueous options. The dry fingerprint powders are often categorized into six basic types based on the general composition. Some powders may belong to more than a single group. These dry powders can be added to a dilute detergent solution that can be applied to surfaces to develop fingerprints. Flake powders Flake powders are composed of metal particles. The most common metal material used in this type of powder is aluminum; bronze, gold, copper, iron, and zinc are often used as well. Aluminum powder shows up on a variety of surfaces and is a particularly popular powder choice in the United Kingdom. Granular powders Granular powders were invented in the 1920s as one of the first types of fingerprint powders available. The materials used to make these powders were chalk, lamp black, graphite and a variety of lead and mercury components. Modern granular powders do not use lead or mercury because they have been known to cause health complications with prolonged use. Black granular powders, commonly used in the United States of America, are now composed of carbon-based particles. A wide range of colours are available to provide contrast with many backgrounds. Magnetic powders The foundation of magnetic powders is granular or flake powder, of any colour. The magnetic aspect comes from the addition of small iron particles. Magnetic powders enhance fingerprint details more effectively than traditional granular and flake powders, because the particles are finer and the method of application is less invasive.
Fluorescent powders Fluorescent powders are available in both granular and magnetic compositions. The fluorescent component is provided by adding a colour dye to the powder to allow for better visualization of dusted fingerprints on multi-coloured surfaces, with the use of ultra-violet (UV) light or an alternate light source (ALS). Nanopowders Nanopowders are a relatively new type of powder composed of nanoparticles. Similar to magnetic powders, these powders provide greater ridge detail in freshly deposited and aged fingerprints due to the extremely small particle size. Infrared powders Infrared powders are used in conjunction with infrared light. The features of the fingerprint are distinguished from the surrounding background because the infrared light is absorbed in the fingerprint rather than reflected. These powders are most beneficial for enhancing fingerprints found on money bills in countries that use polymer currency, because the powder removes issues related to the visualization of fingerprints on items with colourful, reflective and fluorescent properties. Factors influencing fingerprint powder quality There are several factors influencing the effectiveness of fingerprint powders. Particle size and shape The powders with finer particles show greater detail within the fingerprint than powders composed of larger coarse particles. Particles with shapes that provides a larger surface area promote greater adhesion to the fingerprints. Adhesion Effective enhancement of fingerprints relies on the adhesion of the powder to residue composing the fingerprint impression without adherence to the rest of the surface, otherwise the view of the fingerprint will be obscured. Adhesion of the powder particles to the fingerprint residue is partially attributable to the electrostatic attraction between the two. However, the adhesion is mainly guided by the increased contact between the moisture in fingerprints and the powder, and the surface tension. Colour The fingerprint powder must be a suitable colour for the surface in question by providing contrast. The details of fingerprints deposited on light surfaces are best visualized by applying dark or black coloured powders. Conversely, white or grey powders are recommended for enhancing fingerprints found on dark surfaces. There are powders available that contain elements that provide contrast on all types of surfaces. These powders are advantageous at crime scenes where both light and dark surfaces require fingerprint development because the powder type/colour does not need to be changed between uses. Multi-coloured surfaces are more complex. Fluorescent powders and additional light sources are required to provide the best contrast in these situations. Carbon dots based powders adopt different colours depending on the wavelength of the incident radiation, minimising background interference for fingerprints found on multicolour surfaces. Consistency The powder should not clump together, because the important identifying details within the fingerprint can become destroyed during the application of the powder. Powder selection and application In the past, powders were selected based on personal choice or outlined standard procedures of the associated department or agency. Despite this freedom in the powder selection process, crime scene investigators should ultimately choose powders that provide the best contrast against the surface the fingerprints are deposited on, with consideration for the characteristics of the surface itself. 
Most fingerprint powders are applied with a fingerprint brush composed of extremely fine fibers, designed to pick up powder and gently deposit it through a twirling motion onto the fingerprint, to reveal it without removing the delicate residue composing the fingerprint itself. There are many different types of brushes that can be used for applying fingerprint powder. The choice mostly depends on the stage of fingerprint development. For example, feather fingerprint brushes enhance detail within fingerprints located during an initial search, with fingerprint brushes composed of fiberglass fibers. Fingerprint powder may be applied using aerosol methods that do not involve direct contact with the surface. Magnetic powders differ slightly from traditional powders in the way that they are applied to fingerprints. A magnetic applicator is used in place of a fingerprint brush composed of fibers. The magnet within the applicator attracts the magnetic powder, forming a cluster of powder that can be gently moved across the fingerprint. Once complete, the magnet within the applicator is retracted and the magnetic powder falls off. The benefit to this application method is that no bristles touch the surface, with less potential damage to the print compared to other methods of developing fingerprints. Regardless of the method chosen to apply fingerprint powder, there must be careful consideration not to apply too much powder, that could ruin the fine details within the fingerprint needed to make an identification. In addition, fingerprints contain DNA that can be used in subsequent forensic DNA analysis and, therefore, it is important to mitigate the risks of DNA contamination when applying fingerprint powder. Further applications/uses Fingerprint powder is useful for the detection and collection of latent fingerprints, but that is not all the analysis that can be done. Kaplan-Sandquist, LeBeau, and Miller conducted a study where they tested fingerprint development methods with the MALDI/TOF MS. The fingerprint powder was found to be useful as a MALDI matrix. This instrument can identify many compounds. In the study, fingerprints tested contained known solvent residues. The fingerprint powder along with the MALDI matrix had the highest average detection rates of (88%). Since this study was controlled, it is known that the use of this further application with the MALDI/TOF MS would be effective. Associated health concerns Fingerprint powders used in the past contained materials that were considered carcinogenic and toxic. Lead and mercury components were removed from fingerprint powders due to associated cases of mercury and lead poisoning. Modern fingerprint powders pose significantly fewer health risks because they are composed mainly of organic components. However, there is concern that the small particles within the fingerprint powders may be inhaled and after prolonged exposure can result in the development of lung diseases. Powders with smaller particle sizes such as fluorescent powders, or the newer nanopowders are particularly concerning because they are small enough to reach and settle deep within the lungs. It is recommended that individuals frequently using these powders take the necessary precautions to mitigate the risk of respiratory illness, whether that be working within a fume hood or wearing a mask. See also Fingerprint Glove prints References Fingerprints Forensic equipment Powders
Fingerprint powder
Physics
1,930
36,770,904
https://en.wikipedia.org/wiki/SDSS%20J102915%2B172927
SDSS J102915+172927 or Caffau's Star is a population II star in the galactic halo, seen in the constellation Leo. It is about 13 billion years old, making it one of the oldest stars in the Galaxy. At the time of its discovery, it had the lowest metallicity of any known star. It is small (less than 0.8 solar masses), deficient in carbon, nitrogen, oxygen, and completely devoid of lithium. Because carbon and oxygen provide a fine structure cooling mechanism that is critical in the formation of low-mass stars, the origins of Caffau's Star are somewhat mysterious. It has been suggested, both for theoretical and observational reasons, that the formation of low-mass stars in the interstellar medium requires a critical metallicity somewhere between 1.5×10−8 and 1.5×10−6. The metallicity of Caffau's Star is less than 6.9×10−7. According to Schneider et al., cooling by dust rather than the fine structure lines of CII and OI may have enabled the creation of such low-mass, metal-poor stars in the early universe. The absence of lithium implies past temperatures of at least two million kelvins. Data from Gaia's DR2 released in 2018 confirmed that SDSS J102915+172927 is a dwarf star. The star was described by Elisabetta Caffau et al. in an article published by the journal Nature in September 2011. Caffau had been searching for extremely metal-poor stars for the past ten years. It was identified by automated software which analyzed data from the Sloan Digital Sky Survey. This was followed up by observations with the X-shooter and UVES instruments on the Very Large Telescope in Chile. Caffau and her team expect to find between five and fifty similar stars with the telescope in the future. See also Nucleosynthesis Spite plateau List of stars in Leo Ultra low metallicity / ultra metal poor stars Cayrel's Star HE 0107-5240 HE 1327-2326 References Population II stars Leo (constellation) 20110901
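To put the quoted figure on the usual solar scale, one can divide by an assumed solar metal mass fraction; the value Z⊙ ≈ 0.015 used below is a round number chosen for illustration, so the result is only indicative:

\begin{equation*}
\frac{Z}{Z_\odot} \lesssim \frac{6.9\times10^{-7}}{1.5\times10^{-2}} \approx 4.6\times10^{-5},
\qquad
\log_{10}\!\left(\frac{Z}{Z_\odot}\right) \lesssim -4.3 .
\end{equation*}

That is, under this assumption the star is roughly twenty thousand times more metal-poor than the Sun.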
SDSS J102915+172927
Astronomy
447
50,221,745
https://en.wikipedia.org/wiki/Shiina%20esterification
Shiina esterification is an organic chemical reaction that synthesizes carboxylic esters from nearly equal amounts of carboxylic acids and alcohols by using aromatic carboxylic acid anhydrides as dehydration condensation agents. In 1994, Prof. Isamu Shiina (Tokyo University of Science, Japan) reported an acidic coupling method using Lewis acid, and, in 2002, a basic esterification using nucleophilic catalyst. Mechanism The successive addition of carboxylic acids and alcohols into a system containing aromatic carboxylic acid anhydride and catalyst produces corresponding carboxylic esters through the process shown in the following figure. In acidic Shiina esterification, Lewis acid catalysts are used, while nucleophilic catalysts are used for Shiina esterification under basic conditions. In the acidic reaction, 4-trifluoromethylbenzoic anhydride (TFBA) is mainly used as a dehydration condensation agent. First, the Lewis acid catalyst activates the TFBA, and then a carboxyl group in carboxylic acid reacts with the activated TFBA to produce mixed anhydride (MA) once. Then, a carbonyl group derived from the carboxylic acid in MA is selectively activated and is attacked by a hydroxyl group in the alcohol through intermolecular nucleophilic substitution. Simultaneously, residual aromatic carboxylic acid salt, which is derived from the MA, acts as a deprotonation agent, causing the esterification to progress and produce the desired carboxylic ester. To balance the reaction, each TFBA accepts the atoms of one water molecule from its starting materials, i.e., the carboxylic acid and alcohol, and then changes itself into two molecules of 4-trifluoromethylbenzoic acid at the end of the reaction. Since the Lewis acid catalyst is reproduced at the end of the reaction, only a small proportion of catalyst is needed relative to the starting material to drive the reaction forward. In the basic reaction, 2-methyl-6-nitrobenzoic anhydride (MNBA) is primarily used as a dehydration condensation agent. First, the nucleophilic catalyst acts on the MNBA to produce activated acyl carboxylate. The reaction of carboxyl group in the carboxylic acid with the activated acyl carboxylate produces the corresponding MA, in the same manner as in the acidic reaction. Then, the nucleophilic catalyst acts selectively on a carbonyl group derived from the carboxylic acid in MA to again produce activated acyl carboxylate. The hydroxyl group in the alcohol attacks its host molecule through intermolecular nucleophilic substitution, and at the same time, carboxylate anion, derived from 2-methyl-6-nitrobenzoic acid, acts as a deprotonation agent, promoting the progression of the esterification and producing the desired carboxylic ester. To balance the reaction, each MNBA accepts the atoms of one water molecule from its starting materials, changing itself into two molecules of the amine salt of 2-methyl-6-nitrobenzoic acid, and thus, terminating the reaction. Because the nucleophilic catalyst is reproduced at the end of the reaction, only small stoichiometric quantities are required. Details All of the processes of Shiina esterification consist of reversible reactions, with the exception of the last nucleophilic substitution step with alcohol. Therefore, the aromatic carboxylic acid anhydride and the mixed anhydride (MA) coexist in the system. Furthermore, aliphatic carboxylic acid anhydride produced via disproportionation of the MA is simultaneously present in the system; thus, it is directly used as a mixture without being separated. 
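The overall mass balance just described, in which the aromatic anhydride takes up the atoms of one water molecule and ends up as two equivalents of the aromatic acid (or its amine salt under basic conditions), can be summarised in a single equation. This is only a summary of the text, not a new mechanism; R and R′ denote the acid and alcohol residues, and Ar the aromatic group (2-methyl-6-nitrophenyl for MNBA, 4-trifluoromethylphenyl for TFBA):

\begin{equation*}
\mathrm{RCO_2H} + \mathrm{R'OH} + (\mathrm{ArCO})_2\mathrm{O}
\;\xrightarrow{\text{cat.}}\;
\mathrm{RCO_2R'} + 2\,\mathrm{ArCO_2H}
\end{equation*}

The catalyst, a Lewis acid or a DMAP-type nucleophile, does not appear in the balance because it is regenerated, which is why only small amounts of it are required.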
Owing to activation by Lewis acid catalysts or nucleophilic catalysts, the mixture of these three components begins to react with alcohol; in addition to the targeted aliphatic carboxylic acid esters, aromatic carboxylic acid esters are likely to be formed as by-products. However, by using 4-trifluoromethylbenzoic anhydride (TFBA) as the aromatic carboxylic acid anhydride under acidic conditions and 2-methyl-6-nitrobenzoic anhydride (MNBA) as the aromatic carboxylic acid anhydride under basic conditions, practically no aromatic carboxylic acid esters are obtained as by-products. (The chemoselectivity is 200:1 or higher.) Aromatic carboxylic acid anhydrides are used as dehydration condensation agents not only for the intermolecular coupling of carboxylic acids with alcohols but also for the intramolecular cyclization of hydroxycarboxylic acids (Shiina macrolactonization). Both of these intermolecular and intramolecular reactions are used for the artificial synthesis of various natural products and pharmacologically active compounds, as the reaction of a carboxylic acid with an amine produces an amide or a peptide. In acidic reactions, Lewis acid catalysts, such as metal triflates, exhibit high activities, while in basic reactions, 4-dimethylaminopyridine (DMAP), 4-dimethylaminopyridine N-oxide (DMAPO), and 4-pyrrolidinopyridine (PPY) are employed. In the Shiina esterification performed under basic conditions, asymmetric synthesis is realized using chiral nucleophilic catalysts. First, in the presence of a chiral nucleophilic catalyst, by the action of an appropriate carboxylic acid anhydride on a racemic aliphatic carboxylic acid, the corresponding MA is produced, resulting in the kinetic resolution of the racemic aliphatic carboxylic acid after having been subjected to reaction with achiral alcohol. Using this method, optically active carboxylic acids and optically active carboxylic acid esters can be obtained. It is also possible to realize the kinetic resolution of racemic alcohols by modifying the compositions of the reactants, i.e., by forming MA through reactions between achiral carboxylic acid and the appropriate carboxylic acid anhydride; then, by activating the racemic alcohols using the MA, optically active alcohols and optically active carboxylic acid esters can be obtained. See also Shiina macrolactonization Fischer–Speier esterification Steglich esterification Yamaguchi esterification Mitsunobu reaction References External lists Chemical reactions
Shiina esterification
Chemistry
1,484
61,594,658
https://en.wikipedia.org/wiki/Murid%20betaherpesvirus%201
Murid betaherpesvirus 1 (MuHV-1) is a species of virus in the genus Muromegalovirus, subfamily Betaherpesvirinae, family Herpesviridae, and order Herpesvirales. References External links Betaherpesvirinae
Murid betaherpesvirus 1
Biology
58
57,264,039
https://en.wikipedia.org/wiki/Einstein%27s%20thought%20experiments
A hallmark of Albert Einstein's career was his use of visualized thought experiments () as a fundamental tool for understanding physical issues and for elucidating his concepts to others. Einstein's thought experiments took diverse forms. In his youth, he mentally chased beams of light. For special relativity, he employed moving trains and flashes of lightning to explain his most penetrating insights. For general relativity, he considered a person falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his debates with Niels Bohr on the nature of reality, he proposed imaginary devices that attempted to show, at least in concept, how the Heisenberg uncertainty principle might be evaded. In a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement. Introduction A thought experiment is a logical argument or mental model cast within the context of an imaginary (hypothetical or even counterfactual) scenario. A scientific thought experiment, in particular, may examine the implications of a theory, law, or set of principles with the aid of fictive and/or natural particulars (demons sorting molecules, cats whose lives hinge upon a radioactive disintegration, men in enclosed elevators) in an idealized environment (massless trapdoors, absence of friction). They describe experiments that, except for some specific and necessary idealizations, could conceivably be performed in the real world. As opposed to physical experiments, thought experiments do not report new empirical data. They can only provide conclusions based on deductive or inductive reasoning from their starting assumptions. Thought experiments invoke particulars that are irrelevant to the generality of their conclusions. It is the invocation of these particulars that give thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D. Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument." When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their intuitions to their understanding of a scenario. Thought experiments have a long history. Perhaps the best known in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This has sometimes been taken to be an actual physical demonstration, involving his climbing up the Leaning Tower of Pisa and dropping two heavy weights off it. In fact, it was a logical demonstration described by Galileo in Discorsi e dimostrazioni matematiche (1638). Einstein had a highly visual understanding of physics. His work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." These aspects of his thinking style inspired him to fill his papers with vivid practical detail making them quite different from, say, the papers of Lorentz or Maxwell. This included his use of thought experiments. 
Special relativity Pursuing a beam of light Late in life, Einstein recalled Einstein's recollections of his youthful musings are widely cited because of the hints they provide of his later great discovery. However, Norton has noted that Einstein's reminiscences were probably colored by a half-century of hindsight. Norton lists several problems with Einstein's recounting, both historical and scientific: 1. At 16 years old and a student at the Gymnasium in Aarau, Einstein would have had the thought experiment in late 1895 to early 1896. But various sources note that Einstein did not learn Maxwell's theory until 1898, in university. 2. A 19th century aether theorist would have had no difficulties with the thought experiment. Einstein's statement, "...there seems to be no such thing...on the basis of experience," would not have counted as an objection, but would have represented a mere statement of fact, since no one had ever traveled at such speeds. 3. An aether theorist would have regarded "...nor according to Maxwell's equations" as simply representing a misunderstanding on Einstein's part. Unfettered by any notion that the speed of light represents a cosmic limit, the aether theorist would simply have set velocity equal to c, noted that yes indeed, the light would appear to be frozen, and then thought no more of it. Rather than the thought experiment being at all incompatible with aether theories (which it is not), the youthful Einstein appears to have reacted to the scenario out of an intuitive sense of wrongness. He felt that the laws of optics should obey the principle of relativity. As he grew older, his early thought experiment acquired deeper levels of significance: Einstein felt that Maxwell's equations should be the same for all observers in inertial motion. From Maxwell's equations, one can deduce a single speed of light, and there is nothing in this computation that depends on an observer's speed. Einstein sensed a conflict between Newtonian mechanics and the constant speed of light determined by Maxwell's equations. Regardless of the historical and scientific issues described above, Einstein's early thought experiment was part of the repertoire of test cases that he used to check on the viability of physical theories. Norton suggests that the real importance of the thought experiment was that it provided a powerful objection to emission theories of light, which Einstein had worked on for several years prior to 1905. Magnet and conductor In the very first paragraph of Einstein's seminal 1905 work introducing special relativity, he writes: This opening paragraph recounts well-known experimental results obtained by Michael Faraday in 1831. The experiments describe what appeared to be two different phenomena: the motional EMF generated when a wire moves through a magnetic field (see Lorentz force), and the transformer EMF generated by a changing magnetic field (due to the Maxwell–Faraday equation). James Clerk Maxwell himself drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gave a separate physical explanation for each of the two phenomena. Although Einstein calls the asymmetry "well-known", there is no evidence that any of Einstein's contemporaries considered the distinction between motional EMF and transformer EMF to be in any way odd or pointing to a lack of understanding of the underlying physics. 
Maxwell, for instance, had repeatedly discussed Faraday's laws of induction, stressing that the magnitude and direction of the induced current was a function only of the relative motion of the magnet and the conductor, without being bothered by the clear distinction between conductor-in-motion and magnet-in-motion in the underlying theoretical treatment. Yet Einstein's reflection on this experiment represented the decisive moment in his long and tortuous path to special relativity. Although the equations describing the two scenarios are entirely different, there is no measurement that can distinguish whether the magnet is moving, the conductor is moving, or both. In a 1920 review on the Fundamental Ideas and Methods of the Theory of Relativity (unpublished), Einstein related how disturbing he found this asymmetry: Einstein needed to extend the relativity of motion that he perceived between magnet and conductor in the above thought experiment to a full theory. For years, however, he did not know how this might be done. The exact path that Einstein took to resolve this issue is unknown. We do know, however, that Einstein spent several years pursuing an emission theory of light, encountering difficulties that eventually led him to give up the attempt. That decision ultimately led to his development of special relativity as a theory founded on two postulates. Einstein's original expression of these postulates was: "The laws governing the changes of the state of any physical system do not depend on which one of two coordinate systems in uniform translational motion relative to each other these changes of the state are referred to. Each ray of light moves in the coordinate system "at rest" with the definite velocity V independent of whether this ray of light is emitted by a body at rest or a body in motion." In their modern form: 1. The laws of physics take the same form in all inertial frames. 2. In any given inertial frame, the velocity of light c is the same whether the light be emitted by a body at rest or by a body in uniform motion. [Emphasis added by editor] Einstein's wording of the first postulate was one with which nearly all theorists of his day could agree. His second postulate expresses a new idea about the character of light. Modern textbooks combine the two postulates. One popular textbook expresses the second postulate as, "The speed of light in free space has the same value c in all directions and in all inertial reference frames." Trains, embankments, and lightning flashes The topic of how Einstein arrived at special relativity has been a fascinating one to many scholars: A lowly, twenty-six year old patent officer (third class), largely self-taught in physics and completely divorced from mainstream research, nevertheless in the year 1905 produced four extraordinary works (Annus Mirabilis papers), only one of which (his paper on Brownian motion) appeared related to anything that he had ever published before. Einstein's paper, On the Electrodynamics of Moving Bodies, is a polished work that bears few traces of its gestation. Documentary evidence concerning the development of the ideas that went into it consist of, quite literally, only two sentences in a handful of preserved early letters, and various later historical remarks by Einstein himself, some of them known only second-hand and at times contradictory. 
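A short numerical sketch can make the content of the second postulate concrete. The Python snippet below is purely illustrative: the speeds are invented, and the velocity-addition formula w = (u + v)/(1 + uv/c²) is the standard consequence of the two postulates rather than anything quoted from Einstein's paper. It shows that composing any speed with c returns c, while the simple Galilean sum does not.

# Illustrative sketch (not from the original paper): relativistic velocity addition.
# The formula w = (u + v) / (1 + u*v/c^2) follows from Einstein's two postulates.

C = 299_792_458.0  # speed of light, m/s

def add_velocities(u, v, c=C):
    """Relativistic composition of two collinear velocities."""
    return (u + v) / (1.0 + u * v / c**2)

# A lamp on a train moving at 0.9c emits light forwards.
train_speed = 0.9 * C
print(add_velocities(C, train_speed) / C)    # -> 1.0 : the embankment observer still measures c
print((C + train_speed) / C)                 # -> 1.9 : the Galilean (pre-relativistic) expectation

# Two ordinary sub-light speeds also compose to less than c.
print(add_velocities(0.8 * C, 0.8 * C) / C)  # -> ~0.976, not 1.6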
In regards to the relativity of simultaneity, Einstein's 1905 paper develops the concept vividly by carefully considering the basics of how time may be disseminated through the exchange of signals between clocks. In his popular work, Relativity: The Special and General Theory, Einstein translates the formal presentation of his paper into a thought experiment using a train, a railway embankment, and lightning flashes. The essence of the thought experiment is as follows: Observer M stands on an embankment, while observer M rides on a rapidly traveling train. At the precise moment that M and M coincide in their positions, lightning strikes points A and B equidistant from M and M. Light from these two flashes reach M at the same time, from which M concludes that the bolts were synchronous. The combination of Einstein's first and second postulates implies that, despite the rapid motion of the train relative to the embankment, M measures exactly the same speed of light as does M. Since M was equidistant from A and B when lightning struck, the fact that M receives light from B before light from A means that to M, the bolts were not synchronous. Instead, the bolt at B struck first. A routine supposition among historians of science is that, in accordance with the analysis given in his 1905 special relativity paper and in his popular writings, Einstein discovered the relativity of simultaneity by thinking about how clocks could be synchronized by light signals. The Einstein synchronization convention was originally developed by telegraphers in the middle 19th century. The dissemination of precise time was an increasingly important topic during this period. Trains needed accurate time to schedule use of track, cartographers needed accurate time to determine longitude, while astronomers and surveyors dared to consider the worldwide dissemination of time to accuracies of thousandths of a second. Following this line of argument, Einstein's position in the patent office, where he specialized in evaluating electromagnetic and electromechanical patents, would have exposed him to the latest developments in time technology, which would have guided him in his thoughts towards understanding the relativity of simultaneity. However, all of the above is supposition. In later recollections, when Einstein was asked about what inspired him to develop special relativity, he would mention his riding a light beam and his magnet and conductor thought experiments. He would also mention the importance of the Fizeau experiment and the observation of stellar aberration. "They were enough", he said. He never mentioned thought experiments about clocks and their synchronization. The routine analyses of the Fizeau experiment and of stellar aberration, that treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, which are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed, through Einstein's examination of Fizeau's experiment and stellar aberration. We therefore do not know just how important clock synchronization and the train and embankment thought experiment were to Einstein's development of the concept of the relativity of simultaneity. We do know, however, that the train and embankment thought experiment was the preferred means whereby he chose to teach this concept to the general public. 
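The train-and-embankment scenario can also be stated quantitatively. In the sketch below, which uses invented numbers rather than anything from Einstein's book, the two strikes are simultaneous in the embankment frame, and the Lorentz transformation t′ = γ(t − vx/c²) gives the interval γvL/c² by which the strike at B precedes the strike at A in the train frame.

# Hedged illustration with invented numbers: simultaneity of the two lightning strikes
# in the embankment frame versus the train frame, via the Lorentz transformation
# t' = gamma * (t - v*x/c^2).

import math

C = 299_792_458.0          # m/s
v = 0.6 * C                # train speed toward B (illustrative value)
L = 1000.0                 # embankment-frame distance between strike points A and B, metres

gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Both strikes happen at t = 0 in the embankment frame, A at x = -L/2, B at x = +L/2.
t_prime_A = gamma * (0.0 - v * (-L / 2) / C**2)
t_prime_B = gamma * (0.0 - v * (+L / 2) / C**2)

print(t_prime_B < t_prime_A)    # True: in the train frame the bolt at B strikes first
print(t_prime_A - t_prime_B)    # the gap gamma*v*L/c^2, about 2.5 microseconds for these numbers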
Relativistic center-of-mass theorem Einstein proposed the equivalence of mass and energy in his final Annus Mirabilis paper. Over the next several decades, the understanding of energy and its relationship with momentum were further developed by Einstein and other physicists including Max Planck, Gilbert N. Lewis, Richard C. Tolman, Max von Laue (who in 1911 gave a comprehensive proof of from the stress–energy tensor), and Paul Dirac (whose investigations of negative solutions in his 1928 formulation of the energy–momentum relation led to the 1930 prediction of the existence of antimatter). Einstein's relativistic center-of-mass theorem of 1906 is a case in point. In 1900, Henri Poincaré had noted a paradox in modern physics as it was then understood: When he applied well-known results of Maxwell's equations to the equality of action and reaction, he could describe a cyclic process which would result in creation of a reactionless drive, i.e. a device which could displace its center of mass without the exhaust of a propellant, in violation of the conservation of momentum. Poincaré resolved this paradox by imagining electromagnetic energy to be a fluid having a given density, which is created and destroyed with a given momentum as energy is absorbed and emitted. The motions of this fluid would oppose displacement of the center of mass in such fashion as to preserve the conservation of momentum. Einstein demonstrated that Poincaré's artifice was superfluous. Rather, he argued that mass-energy equivalence was a necessary and sufficient condition to resolve the paradox. In his demonstration, Einstein provided a derivation of mass-energy equivalence that was distinct from his original derivation. Einstein began by recasting Poincaré's abstract mathematical argument into the form of a thought experiment: Einstein considered (a) an initially stationary, closed, hollow cylinder free-floating in space, of mass and length , (b) with some sort of arrangement for sending a quantity of radiative energy (a burst of photons) from the left to the right. The radiation has momentum Since the total momentum of the system is zero, the cylinder recoils with a speed (c) The radiation hits the other end of the cylinder in time (assuming ), bringing the cylinder to a stop after it has moved through a distance (d) The energy deposited on the right wall of the cylinder is transferred to a massless shuttle mechanism (e) which transports the energy to the left wall (f) and then returns to re-create the starting configuration of the system, except with the cylinder displaced to the left. The cycle may then be repeated. The reactionless drive described here violates the laws of mechanics, according to which the center of mass of a body at rest cannot be displaced in the absence of external forces. Einstein argued that the shuttle cannot be massless while transferring energy from the right to the left. If energy possesses the inertia the contradiction disappears. Modern analysis suggests that neither Einstein's original 1905 derivation of mass-energy equivalence nor the alternate derivation implied by his 1906 center-of-mass theorem are definitively correct. For instance, the center-of-mass thought experiment regards the cylinder as a completely rigid body. In reality, the impulse provided to the cylinder by the burst of light in step (b) cannot travel faster than light, so that when the burst of photons reaches the right wall in step (c), the wall has not yet begun to move. 
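The bookkeeping in the idealized (perfectly rigid) version of this thought experiment can be made explicit. The sketch below is a hedged reconstruction with invented numbers, using the standard relations p = E/c for the momentum of the radiation and m = E/c² for the inertia of the transported energy; it is not Einstein's own notation.

# Hedged sketch of the idealized, rigid-cylinder bookkeeping, with invented numbers.

C = 299_792_458.0   # m/s
M = 1000.0          # cylinder mass, kg (illustrative)
L = 10.0            # cylinder length, m (illustrative)
E = 1.0e6           # energy of the burst of photons, J (illustrative)

p = E / C           # momentum carried by the radiation, p = E/c
v = p / M           # recoil speed of the cylinder while the burst is in flight
t = L / C           # transit time of the burst (to first order)
dx = v * t          # leftward drift of the cylinder during transit: E*L/(M*c^2)
m = E / C**2        # inertia Einstein assigns to the transported energy

# While the burst travels, mass m moves right by about L and the cylinder moves left by dx;
# the centre of mass stays put only if M*dx equals m*L:
print(M * dx, m * L)                  # both ~1.1e-10 kg*m for these numbers
# The shuttle's return trip carries the same inertia back and pushes the cylinder right by
# m*L/M = dx, so a full cycle leaves the cylinder where it started: no reactionless drive.
print(abs(m * L / M - dx) < 1e-25)    # True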
Ohanian has credited von Laue (1911) as having provided the first truly definitive derivation of . Impossibility of faster-than-light signaling In 1907, Einstein noted that from the composition law for velocities, one could deduce that there cannot exist an effect that allows faster-than-light signaling. Einstein imagined a strip of material that allows propagation of signals at the faster-than-light speed of (as viewed from the material strip). Imagine two observers, A and B, standing on the x-axis and separated by the distance . They stand next to the material strip, which is not at rest, but rather is moving in the negative x-direction with speed . A uses the strip to send a signal to B. From the velocity composition formula, the signal propagates from A to B with speed . The time required for the signal to propagate from A to B is given by The strip can move at any speed . Given the starting assumption , one can always set the strip moving at a speed such that . In other words, given the existence of a means of transmitting signals faster-than-light, scenarios can be envisioned whereby the recipient of a signal will receive the signal before the transmitter has transmitted it. About this thought experiment, Einstein wrote: General relativity Falling painters and accelerating elevators In his unpublished 1920 review, Einstein related the genesis of his thoughts on the equivalence principle: The realization "startled" Einstein, and inspired him to begin an eight-year quest that led to what is considered to be his greatest work, the theory of general relativity. Over the years, the story of the falling man has become an iconic one, much embellished by other writers. In most retellings of Einstein's story, the falling man is identified as a painter. In some accounts, Einstein was inspired after he witnessed a painter falling from the roof of a building adjacent to the patent office where he worked. This version of the story leaves unanswered the question of why Einstein might consider his observation of such an unfortunate accident to represent the happiest thought in his life. Einstein later refined his thought experiment to consider a man inside a large enclosed chest or elevator falling freely in space. While in free fall, the man would consider himself weightless, and any loose objects that he emptied from his pockets would float alongside him. Then Einstein imagined a rope attached to the roof of the chamber. A powerful "being" of some sort begins pulling on the rope with constant force. The chamber begins to move "upwards" with a uniformly accelerated motion. Within the chamber, all of the man's perceptions are consistent with his being in a uniform gravitational field. Einstein asked, "Ought we to smile at the man and say that he errs in his conclusion?" Einstein answered no. Rather, the thought experiment provided "good grounds for extending the principle of relativity to include bodies of reference which are accelerated with respect to each other, and as a result we have gained a powerful argument for a generalised postulate of relativity." Through this thought experiment, Einstein addressed an issue that was so well known, scientists rarely worried about it or considered it puzzling: Objects have "gravitational mass," which determines the force with which they are attracted to other objects. Objects also have "inertial mass," which determines the relationship between the force applied to an object and how much it accelerates. 
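Returning briefly to the chamber thought experiment described above, a small simulation illustrates why its occupant cannot tell the two situations apart. The sketch below uses invented numbers and simple Euler integration; it compares a ball released inside a chamber at rest in a uniform field with one released inside a chamber accelerating at the same rate in gravity-free space.

# Hedged illustration with invented numbers: a ball released inside (i) a chamber at rest in a
# uniform field g and (ii) a chamber accelerating at a = g in empty space. Relative to the
# chamber floor the two trajectories coincide, so no local experiment distinguishes the cases.

g = 9.81      # field strength in case (i) / chamber acceleration in case (ii), m/s^2
h0 = 2.0      # release height of the ball above the floor, m (illustrative)
dt = 0.001
steps = 500   # simulate half a second, during which the ball stays above the floor

# Case (i): chamber at rest in a uniform field; the ball accelerates downward at g.
y1, v1 = h0, 0.0
# Case (ii): gravity-free space; the ball floats while the floor accelerates upward at g.
y2, y_floor, v_floor = h0, 0.0, 0.0

for _ in range(steps):
    v1 += -g * dt
    y1 += v1 * dt               # ball height above the static floor
    v_floor += g * dt
    y_floor += v_floor * dt     # the floor rises toward the inertially floating ball

print(y1, y2 - y_floor)         # identical: inside the chamber the two cases are indistinguishable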
Newton had pointed out that, even though they are defined differently, gravitational mass and inertial mass always seem to be equal. But until Einstein, no one had conceived a good explanation as to why this should be so. From the correspondence revealed by his thought experiment, Einstein concluded that "it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether...the observed effects are due to a gravitational field." This correspondence between gravitational mass and inertial mass is the equivalence principle. An extension to his accelerating observer thought experiment allowed Einstein to deduce that "rays of light are propagated curvilinearly in gravitational fields." Early applications of the equivalence principle Einstein's formulation of special relativity was in terms of kinematics (the study of moving bodies without reference to forces). Late in 1907, his former mathematics professor, Hermann Minkowski, presented an alternative, geometric interpretation of special relativity in a lecture to the Göttingen Mathematical society, introducing the concept of spacetime. Einstein was initially dismissive of Minkowski's geometric interpretation, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). As with special relativity, Einstein's early results in developing what was ultimately to become general relativity were accomplished using kinematic analysis rather than geometric techniques of analysis. In his 1907 Jahrbuch paper, Einstein first addressed the question of whether the propagation of light is influenced by gravitation, and whether there is any effect of a gravitational field on clocks. In 1911, Einstein returned to this subject, in part because he had realized that certain predictions of his nascent theory were amenable to experimental test. By the time of his 1911 paper, Einstein and other scientists had offered several alternative demonstrations that the inertial mass of a body increases with its energy content: If the energy increase of the body is , then the increase in its inertial mass is Einstein asked whether there is an increase of gravitational mass corresponding to the increase in inertial mass, and if there is such an increase, is the increase in gravitational mass precisely the same as its increase in inertial mass? Using the equivalence principle, Einstein concluded that this must be so. To show that the equivalence principle necessarily implies the gravitation of energy, Einstein considered a light source separated along the z-axis by a distance above a receiver in a homogeneous gravitational field having a force per unit mass of 1 A certain amount of electromagnetic energy is emitted by towards According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration in the direction of the positive z-axis, with separated by a constant distance from In the accelerated system, light emitted from takes (to a first approximation) to arrive at But in this time, the velocity of will have increased by from its velocity when the light was emitted. 
The energy arriving at will therefore not be the energy but the greater energy given by According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where we replace by the gravitational potential difference between and so that The energy arriving at is greater than the energy emitted by by the potential energy of the mass in the gravitational field. Hence corresponds to the gravitational mass as well as the inertial mass of a quantity of energy. To further clarify that the energy of gravitational mass must equal the energy of inertial mass, Einstein proposed the following cyclic process: (a) A light source is situated a distance above a receiver in a uniform gravitational field. A movable mass can shuttle between and (b) A pulse of electromagnetic energy is sent from to The energy is absorbed by (c) Mass is lowered from to releasing an amount of work equal to (d) The energy absorbed by is transferred to This increases the gravitational mass of to a new value (e) The mass is lifted back to , requiring the input of work (e) The energy carried by the mass is then transferred to completing the cycle. Conservation of energy demands that the difference in work between raising the mass and lowering the mass, , must equal or one could potentially define a perpetual motion machine. Therefore, In other words, the increase in gravitational mass predicted by the above arguments is precisely equal to the increase in inertial mass predicted by special relativity. Einstein then considered sending a continuous electromagnetic beam of frequency (as measured at ) from to in a homogeneous gravitational field. The frequency of the light as measured at will be a larger value given by Einstein noted that the above equation seemed to imply something absurd: Given that the transmission of light from to is continuous, how could the number of periods emitted per second from be different from that received at It is impossible for wave crests to appear on the way down from to . The simple answer is that this question presupposes an absolute nature of time, when in fact there is nothing that compels us to assume that clocks situated at different gravitational potentials must be conceived of as going at the same rate. The principle of equivalence implies gravitational time dilation. It is important to realize that Einstein's arguments predicting gravitational time dilation are valid for any theory of gravity that respects the principle of equivalence. This includes Newtonian gravitation. Experiments such as the Pound–Rebka experiment, which have firmly established gravitational time dilation, therefore do not serve to distinguish general relativity from Newtonian gravitation. In the remainder of Einstein's 1911 paper, he discussed the bending of light rays in a gravitational field, but given the incomplete nature of Einstein's theory as it existed at the time, the value that he predicted was half the value that would later be predicted by the full theory of general relativity. Non-Euclidean geometry and the rotating disk By 1912, Einstein had reached an impasse in his kinematic development of general relativity, realizing that he needed to go beyond the mathematics that he knew and was familiar with. Stachel has identified Einstein's analysis of the rigid relativistic rotating disk as being key to this realization. 
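The size of the first-order effect derived above is easy to evaluate. The sketch below is illustrative only: it uses the relation that the fractional energy and frequency gain of light falling through a height h in a uniform field is gh/c², and the tower height quoted for the Pound–Rebka experiment is a round figure.

# Hedged numerical illustration of the 1911 first-order result: light emitted at the top of a
# tower of height h and received at the bottom arrives with fractional energy/frequency gain
# g*h/c^2 (equivalently, the lower clock runs slow by the same fraction).

C = 299_792_458.0
g = 9.81

def fractional_shift(h):
    return g * h / C**2

# Tower height of roughly 22.5 m, as in the Pound-Rebka experiment:
print(fractional_shift(22.5))       # ~2.5e-15, the blueshift that experiment confirmed

# Clock-rate difference between sea level and a 1000 m mountain, accumulated over one day:
seconds_per_day = 86400.0
print(fractional_shift(1000.0) * seconds_per_day * 1e9)   # ~9.4 nanoseconds per day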
The rigid rotating disk had been a topic of lively discussion since Max Born and Paul Ehrenfest, in 1909, both presented analyses of rigid bodies in special relativity. An observer on the edge of a rotating disk experiences an apparent ("fictitious" or "pseudo") force called "centrifugal force". By 1912, Einstein had become convinced of a close relationship between gravitation and pseudo-forces such as centrifugal force: In the accompanying illustration, A represents a circular disk of 10 units diameter at rest in an inertial reference frame. The circumference of the disk is times the diameter, and the illustration shows 31.4 rulers laid out along the circumference. B represents a circular disk of 10 units diameter that is spinning rapidly. According to a non-rotating observer, each of the rulers along the circumference is length-contracted along its line of motion. More rulers are required to cover the circumference, while the number of rulers required to span the diameter is unchanged. Note that we have not stated that we set A spinning to get B. In special relativity, it is not possible to set spinning a disk that is "rigid" in Born's sense of the term. Since spinning up disk A would cause the material to contract in the circumferential direction but not in the radial direction, a rigid disk would become fragmented from the induced stresses. In later years, Einstein repeatedly stated that consideration of the rapidly rotating disk was of "decisive importance" to him because it showed that a gravitational field causes non-Euclidean arrangements of measuring rods. Einstein realized that he did not have the mathematical skills to describe the non-Euclidean view of space and time that he envisioned, so he turned to his mathematician friend, Marcel Grossmann, for help. After researching in the library, Grossman found a review article by Ricci and Levi-Civita on absolute differential calculus (tensor calculus). Grossman tutored Einstein on the subject, and in 1913 and 1914, they published two joint papers describing an initial version of a generalized theory of gravitation. Over the next several years, Einstein used these mathematical tools to generalize Minkowski's geometric approach to relativity so as to encompass curved spacetime. Quantum mechanics Background: Einstein and the quantum Many myths have grown up about Einstein's relationship with quantum mechanics. Freshman physics students are aware that Einstein explained the photoelectric effect and introduced the concept of the photon. But students who have grown up with the photon may not be aware of how revolutionary the concept was for his time. The best-known factoids about Einstein's relationship with quantum mechanics are his statement, "God does not play dice with the universe" and the indisputable fact that he just did not like the theory in its final form. This has led to the general impression that, despite his initial contributions, Einstein was out of touch with quantum research and played at best a secondary role in its development. Concerning Einstein's estrangement from the general direction of physics research after 1925, his well-known scientific biographer, Abraham Pais, wrote: In hindsight, we know that Pais was incorrect in his assessment. Einstein was arguably the greatest single contributor to the "old" quantum theory. In his 1905 paper on light quanta, Einstein created the quantum theory of light. 
His proposal that light exists as tiny packets (photons) was so revolutionary, that even such major pioneers of quantum theory as Planck and Bohr refused to believe that it could be true. Bohr, in particular, was a passionate disbeliever in light quanta, and repeatedly argued against them until 1925, when he yielded in the face of overwhelming evidence for their existence. In his 1906 theory of specific heats, Einstein was the first to realize that quantized energy levels explained the specific heat of solids. In this manner, he found a rational justification for the third law of thermodynamics (i.e. the entropy of any system approaches zero as the temperature approaches absolute zero): at very cold temperatures, atoms in a solid do not have enough thermal energy to reach even the first excited quantum level, and so cannot vibrate. Einstein proposed the wave–particle duality of light. In 1909, using a rigorous fluctuation argument based on a thought experiment and drawing on his previous work on Brownian motion, he predicted the emergence of a "fusion theory" that would combine the two views. Basically, he demonstrated that the Brownian motion experienced by a mirror in thermal equilibrium with black-body radiation would be the sum of two terms, one due to the wave properties of radiation, the other due to its particulate properties. Although Planck is justly hailed as the father of quantum mechanics, his derivation of the law of black-body radiation rested on fragile ground, since it required ad hoc assumptions of an unreasonable character. Furthermore, Planck's derivation represented an analysis of classical harmonic oscillators merged with quantum assumptions in an improvised fashion. In his 1916 theory of radiation, Einstein was the first to create a purely quantum explanation. This paper, well known for broaching the possibility of stimulated emission (the basis of the laser), changed the nature of the evolving quantum theory by introducing the fundamental role of random chance. In 1924, Einstein received a short manuscript by an unknown Indian professor, Satyendra Nath Bose, outlining a new method of deriving the law of blackbody radiation. Einstein was intrigued by Bose's peculiar method of counting the number of distinct ways of putting photons into the available states, a method of counting that Bose apparently did not realize was unusual. Einstein, however, understood that Bose's counting method implied that photons are, in a deep sense, indistinguishable. He translated the paper into German and had it published. Einstein then followed Bose's paper with an extension to Bose's work which predicted Bose–Einstein condensation, one of the fundamental research topics of condensed matter physics. While trying to develop a mathematical theory of light which would fully encompass its wavelike and particle-like aspects, Einstein developed the concept of "ghost fields". A guiding wave obeying Maxwell's classical laws would propagate following the normal laws of optics, but would not transmit any energy. This guiding wave, however, would govern the appearance of quanta of energy on a statistical basis, so that the appearance of these quanta would be proportional to the intensity of the interference radiation. These ideas became widely known in the physics community, and through Born's work in 1926, later became a key concept in the modern quantum theory of radiation and matter. 
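Einstein's specific-heat argument can be illustrated numerically. The sketch below evaluates the standard Einstein heat-capacity model, C = 3R·x²eˣ/(eˣ − 1)² with x = θ_E/T; the Einstein temperature used is an illustrative value, not a fitted constant, and the code is a modern reconstruction rather than Einstein's own calculation.

# Hedged sketch of the standard Einstein heat-capacity model; the Einstein temperature below
# is an illustrative value, roughly that of a very stiff solid, not a fitted constant.

import math

R = 8.314            # gas constant, J/(mol K)
theta_E = 1300.0     # Einstein temperature, K (illustrative)

def heat_capacity(T):
    """Molar heat capacity of the Einstein model: 3R x^2 e^x / (e^x - 1)^2, x = theta_E/T."""
    x = theta_E / T
    return 3.0 * R * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

for T in (10.0, 100.0, 300.0, 1000.0, 5000.0):
    print(T, heat_capacity(T) / (3.0 * R))
# The ratio approaches 1 (the classical Dulong-Petit value 3R) at high temperature and drops
# toward zero at low temperature: far below theta_E the oscillators cannot reach even their
# first excited level, which is the behaviour described in the text.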
Therefore, Einstein before 1925 originated most of the key concepts of quantum theory: light quanta, wave–particle duality, the fundamental randomness of physical processes, the concept of indistinguishability, and the probability density interpretation of the wave equation. In addition, Einstein can arguably be considered the father of solid state physics and condensed matter physics. He provided a correct derivation of the blackbody radiation law and sparked the notion of the laser. What of after 1925? In 1935, working with two younger colleagues, Einstein issued a final challenge to quantum mechanics, attempting to show that it could not represent a final solution. Despite the questions raised by this paper, it made little or no difference to how physicists employed quantum mechanics in their work. Of this paper, Pais was to write: In contrast to Pais' negative assessment, this paper, outlining the EPR paradox, has become one of the most widely cited articles in the entire physics literature. It is considered the centerpiece of the development of quantum information theory, which has been termed the "third quantum revolution." Wave–particle duality All of Einstein's major contributions to the old quantum theory were arrived at via statistical argument. This includes his 1905 paper arguing that light has particle properties, his 1906 work on specific heats, his 1909 introduction of the concept of wave–particle duality, his 1916 work presenting an improved derivation of the blackbody radiation formula, and his 1924 work that introduced the concept of indistinguishability. Einstein's 1909 arguments for the wave–particle duality of light were based on a thought experiment. Einstein imagined a mirror in a cavity containing particles of an ideal gas and filled with black-body radiation, with the entire system in thermal equilibrium. The mirror is constrained in its motions to a direction perpendicular to its surface. The mirror jiggles from Brownian motion due to collisions with the gas molecules. Since the mirror is in a radiation field, the moving mirror transfers some of its kinetic energy to the radiation field as a result of the difference in the radiation pressure between its forwards and reverse surfaces. This implies that there must be fluctuations in the black-body radiation field, and hence fluctuations in the black-body radiation pressure. Reversing the argument shows that there must be a route for the return of energy from the fluctuating black-body radiation field back to the gas molecules. Given the known shape of the radiation field given by Planck's law, Einstein could calculate the mean square energy fluctuation of the black-body radiation. He found the root mean square energy fluctuation in a small volume of a cavity filled with thermal radiation in the frequency interval between and to be a function of frequency and temperature: where would be the average energy of the volume in contact with the thermal bath. The above expression has two terms, the second corresponding to the classical Rayleigh-Jeans law (i.e. a wavelike term), and the first corresponding to the Wien distribution law (which from Einstein's 1905 analysis, would result from point-like quanta with energy ). From this, Einstein concluded that radiation had simultaneous wave and particle aspects. Bubble paradox From 1905 to 1923, Einstein was virtually the only physicist who took light-quanta seriously. 
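The two-term structure of the 1909 fluctuation result mentioned above can be written out in standard modern notation (a reconstruction of the usual textbook form, not Einstein's own symbols): the mean-square energy fluctuation in a small volume V and frequency window dν is ⟨ε²⟩ = hν⟨E⟩ + c³⟨E⟩²/(8πν²V dν), where ⟨E⟩ is the mean energy in that window. The sketch below compares the particle-like and wave-like terms in the Wien and Rayleigh–Jeans regimes.

# Hedged sketch in standard modern notation (not Einstein's own symbols): the particle-like and
# wave-like pieces of the 1909 fluctuation formula, with <E> taken from Planck's law.

import math

h  = 6.62607015e-34     # J s
kB = 1.380649e-23       # J/K
c  = 299_792_458.0      # m/s

def fluctuation_terms(nu, T, V=1e-6, dnu=1e9):
    """Return (particle term h*nu*<E>, wave term c^3*<E>^2 / (8*pi*nu^2*V*dnu))."""
    mean_E = (8 * math.pi * h * nu**3 / c**3) / (math.exp(h * nu / (kB * T)) - 1) * V * dnu
    particle = h * nu * mean_E
    wave = c**3 * mean_E**2 / (8 * math.pi * nu**2 * V * dnu)
    return particle, wave

# Wien regime (h*nu >> kB*T): the particle-like term dominates.
p, w = fluctuation_terms(nu=5e14, T=300.0)
print(p / w)          # of order 1e34 here -- the ratio equals exp(h*nu/(kB*T)) - 1

# Rayleigh-Jeans regime (h*nu << kB*T): the wave-like term dominates.
p, w = fluctuation_terms(nu=1e10, T=300.0)
print(p / w)          # ~1.6e-3 -- the classical, wave description wins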
Throughout most of this period, the physics community treated the light-quanta hypothesis with "skepticism bordering on derision" and maintained this attitude even after Einstein's photoelectric law was validated. The citation for Einstein's 1922 Nobel Prize very deliberately avoided all mention of light-quanta, instead stating that it was being awarded for "his services to theoretical physics and especially for his discovery of the law of the photoelectric effect". This dismissive stance contrasts sharply with the enthusiastic manner in which Einstein's other major contributions were accepted, including his work on Brownian motion, special relativity, general relativity, and his numerous other contributions to the "old" quantum theory. Various explanations have been given for this neglect on the part of the physics community. First and foremost was wave theory's long and indisputable success in explaining purely optical phenomena. Second was the fact that his 1905 paper, which pointed out that certain phenomena would be more readily explained under the assumption that light is particulate, presented the hypothesis only as a "heuristic viewpoint". The paper offered no compelling, comprehensive alternative to existing electromagnetic theory. Third was the fact that his 1905 paper introducing light quanta and his two 1909 papers that argued for a wave–particle fusion theory approached their subjects via statistical arguments that his contemporaries "might accept as theoretical exercise—crazy, perhaps, but harmless". Most of Einstein's contemporaries adopted the position that light is ultimately a wave, but appears particulate in certain circumstances only because atoms absorb wave energy in discrete units. Among the thought experiments that Einstein presented in his 1909 lecture on the nature and constitution of radiation was one that he used to point out the implausibility of the above argument. He used this thought experiment to argue that atoms emit light as discrete particles rather than as continuous waves: (a) An electron in a cathode ray beam strikes an atom in a target. The intensity of the beam is set so low that we can consider one electron at a time as impinging on the target. (b) The atom emits a spherically radiating electromagnetic wave. (c) This wave excites an atom in a secondary target, causing it to release an electron of energy comparable to that of the original electron. The energy of the secondary electron depends only on the energy of the original electron and not at all on the distance between the primary and secondary targets. All the energy spread around the circumference of the radiating electromagnetic wave would appear to be instantaneously focused on the target atom, an action that Einstein considered implausible. Far more plausible would be to say that the first atom emitted a particle in the direction of the second atom. Although Einstein originally presented this thought experiment as an argument for light having a particulate nature, it has been noted that this thought experiment, which has been termed the "bubble paradox", foreshadows the famous 1935 EPR paper. In his 1927 Solvay debate with Bohr, Einstein employed this thought experiment to illustrate that according to the Copenhagen interpretation of quantum mechanics that Bohr championed, the quantum wavefunction of a particle would abruptly collapse like a "popped bubble" no matter how widely dispersed the wavefunction. 
The transmission of energy from opposite sides of the bubble to a single point would occur faster than light, violating the principle of locality. In the end, it was experiment, not any theoretical argument, that finally enabled the concept of the light quantum to prevail. In 1923, Arthur Compton was studying the scattering of high energy X-rays from a graphite target. Unexpectedly, he found that the scattered X-rays were shifted in wavelength, corresponding to inelastic scattering of the X-rays by the electrons in the target. His observations were totally inconsistent with wave behavior, but instead could only be explained if the X-rays acted as particles. This observation of the Compton effect rapidly brought about a change in attitude, and by 1926, the concept of the "photon" was generally accepted by the physics community. Einstein's light box Einstein did not like the direction in which quantum mechanics had turned after 1925. Although excited by Heisenberg's matrix mechanics, Schroedinger's wave mechanics, and Born's clarification of the meaning of the Schroedinger wave equation (i.e. that the absolute square of the wave function is to be interpreted as a probability density), his instincts told him that something was missing. In a letter to Born, he wrote: The Solvay Debates between Bohr and Einstein began in dining-room discussions at the Fifth Solvay International Conference on Electrons and Photons in 1927. Einstein's issue with the new quantum mechanics was not just that, with the probability interpretation, it rendered invalid the notion of rigorous causality. After all, as noted above, Einstein himself had introduced random processes in his 1916 theory of radiation. Rather, by defining and delimiting the maximum amount of information obtainable in a given experimental arrangement, the Heisenberg uncertainty principle denied the existence of any knowable reality in terms of a complete specification of the momenta and description of individual particles, an objective reality that would exist whether or not we could ever observe it. Over dinner, during after-dinner discussions, and at breakfast, Einstein debated with Bohr and his followers on the question whether quantum mechanics in its present form could be called complete. Einstein illustrated his points with increasingly clever thought experiments intended to prove that position and momentum could in principle be simultaneously known to arbitrary precision. For example, one of his thought experiments involved sending a beam of electrons through a shuttered screen, recording the positions of the electrons as they struck a photographic screen. Bohr and his allies would always be able to counter Einstein's proposal, usually by the end of the same day. On the final day of the conference, Einstein revealed that the uncertainty principle was not the only aspect of the new quantum mechanics that bothered him. Quantum mechanics, at least in the Copenhagen interpretation, appeared to allow action at a distance, the ability for two separated objects to communicate at speeds greater than light. By 1928, the consensus was that Einstein had lost the debate, and even his closest allies during the Fifth Solvay Conference, for example Louis de Broglie, conceded that quantum mechanics appeared to be complete. At the Sixth Solvay International Conference on Magnetism (1930), Einstein came armed with a new thought experiment. This involved a box with a shutter that operated so quickly, it would allow only one photon to escape at a time. 
The box would first be weighed exactly. Then, at a precise moment, the shutter would open, allowing a photon to escape. The box would then be re-weighed. The well-known relationship between mass and energy would allow the energy of the particle to be precisely determined. With this gadget, Einstein believed that he had demonstrated a means to obtain, simultaneously, a precise determination of the energy of the photon as well as its exact time of departure from the system. Bohr was shaken by this thought experiment. Unable to think of a refutation, he went from one conference participant to another, trying to convince them that Einstein's thought experiment could not be true, that if it were true, it would literally mean the end of physics. After a sleepless night, he finally worked out a response which, ironically, depended on Einstein's general relativity. Consider the illustration of Einstein's light box: 1. After emitting a photon, the loss of weight causes the box to rise in the gravitational field. 2. The observer returns the box to its original height by adding weights until the pointer points to its initial position. It takes a certain amount of time for the observer to perform this procedure. How long it takes depends on the strength of the spring and on how well-damped the system is. If undamped, the box will bounce up and down forever. If over-damped, the box will return to its original position sluggishly (See Damped spring-mass system). 3. The longer that the observer allows the damped spring-mass system to settle, the closer the pointer will reach its equilibrium position. At some point, the observer will conclude that his setting of the pointer to its initial position is within an allowable tolerance. There will be some residual error in returning the pointer to its initial position. Correspondingly, there will be some residual error in the weight measurement. 4. Adding the weights imparts a momentum to the box which can be measured with an accuracy delimited by It is clear that where is the gravitational constant. Plugging in yields 5. General relativity informs us that while the box has been at a height different than its original height, it has been ticking at a rate different than its original rate. The red shift formula informs us that there will be an uncertainty in the determination of the emission time of the photon. 6. Hence, The accuracy with which the energy of the photon is measured restricts the precision with which its moment of emission can be measured, following the Heisenberg uncertainty principle. After finding his last attempt at finding a loophole around the uncertainty principle refuted, Einstein quit trying to search for inconsistencies in quantum mechanics. Instead, he shifted his focus to the other aspects of quantum mechanics with which he was uncomfortable, focusing on his critique of action at a distance. His next paper on quantum mechanics foreshadowed his later paper on the EPR paradox. Einstein was gracious in his defeat. The following September, Einstein nominated Heisenberg and Schroedinger for the Nobel Prize, stating, "I am convinced that this theory undoubtedly contains a part of the ultimate truth." EPR paradox Einstein's fundamental dispute with quantum mechanics was not about whether God rolled dice, whether the uncertainty principle allowed simultaneous measurement of position and momentum, or even whether quantum mechanics was complete. It was about reality. Does a physical reality exist independent of our ability to observe it? 
To Bohr and his followers, such questions were meaningless. All that we can know are the results of measurements and observations. It makes no sense to speculate about an ultimate reality that exists beyond our perceptions. Einstein's beliefs had evolved over the years from those that he had held when he was young, when, as a logical positivist heavily influenced by his reading of David Hume and Ernst Mach, he had rejected such unobservable concepts as absolute time and space. Einstein believed: 1. A reality exists independent of our ability to observe it. 2. Objects are located at distinct points in spacetime and have their own independent, real existence. In other words, he believed in separability and locality. 3. Although at a superficial level, quantum events may appear random, at some ultimate level, strict causality underlies all processes in nature. Einstein considered that realism and localism were fundamental underpinnings of physics. After leaving Nazi Germany and settling in Princeton at the Institute for Advanced Study, Einstein began writing up a thought experiment that he had been mulling over since attending a lecture by Léon Rosenfeld in 1933. Since the paper was to be in English, Einstein enlisted the help of the 46-year-old Boris Podolsky, a fellow who had moved to the institute from Caltech; he also enlisted the help of the 26-year-old Nathan Rosen, also at the institute, who did much of the math. The result of their collaboration was the four page EPR paper, which in its title asked the question Can Quantum-Mechanical Description of Physical Reality be Considered Complete? After seeing the paper in print, Einstein found himself unhappy with the result. His clear conceptual visualization had been buried under layers of mathematical formalism. Einstein's thought experiment involved two particles that have collided or which have been created in such a way that they have properties which are correlated. The total wave function for the pair links the positions of the particles as well as their linear momenta. The figure depicts the spreading of the wave function from the collision point. However, observation of the position of the first particle allows us to determine precisely the position of the second particle no matter how far the pair have separated. Likewise, measuring the momentum of the first particle allows us to determine precisely the momentum of the second particle. "In accordance with our criterion for reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality." Einstein concluded that the second particle, which we have never directly observed, must have at any moment a position that is real and a momentum that is real. Quantum mechanics does not account for these features of reality. Therefore, quantum mechanics is not complete. It is known, from the uncertainty principle, that position and momentum cannot be measured at the same time. But even though their values can only be determined in distinct contexts of measurement, can they both be definite at the same time? Einstein concluded that the answer must be yes. The only alternative, claimed Einstein, would be to assert that measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. "No reasonable definition of reality could be expected to permit this." 
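The correlations at issue can be made concrete with a small simulation. The sketch below deliberately uses David Bohm's later spin-½ reformulation of the EPR argument rather than the position–momentum variables of the 1935 paper: for a pair of spins in the singlet state, quantum mechanics predicts that measurements along the same axis always give opposite results, so a measurement on the first particle lets one predict the second particle's value with certainty without disturbing it.

# Hedged simulation using Bohm's spin-1/2 version of EPR (a simplification of the original
# position-momentum argument). Two spins in the singlet state are measured along axes separated
# by an angle theta; quantum mechanics gives correlation E(theta) = -cos(theta), so equal axes
# (theta = 0) always yield opposite results.

import math
import random

def sample_pair(theta, n=100_000):
    """Sample measurement outcomes (+1/-1) for the two particles and return their mean product."""
    total = 0
    p_same = (1.0 - math.cos(theta)) / 2.0     # quantum probability that the two outcomes agree
    for _ in range(n):
        a = random.choice((+1, -1))            # outcome for particle 1 (each sign equally likely)
        b = a if random.random() < p_same else -a
        total += a * b
    return total / n

print(sample_pair(0.0))            # -1.0: perfectly anti-correlated, particle 2 is fully predictable
print(sample_pair(math.pi / 3))    # approx -cos(60 deg) = -0.5
print(sample_pair(math.pi / 2))    # approx 0: orthogonal axes are uncorrelated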
Bohr was stunned when he read Einstein's paper and spent more than six weeks framing his response, which he gave exactly the same title as the EPR paper. The EPR paper forced Bohr to make a major revision in his understanding of complementarity in the Copenhagen interpretation of quantum mechanics. Prior to EPR, Bohr had maintained that disturbance caused by the act of observation was the physical explanation for quantum uncertainty. In the EPR thought experiment, however, Bohr had to admit that "there is no question of a mechanical disturbance of the system under investigation." On the other hand, he noted that the two particles were one system described by one quantum function. Furthermore, the EPR paper did nothing to dispel the uncertainty principle. Later commentators have questioned the strength and coherence of Bohr's response. As a practical matter, however, physicists for the most part did not pay much attention to the debate between Bohr and Einstein, since the opposing views did not affect one's ability to apply quantum mechanics to practical problems, but only affected one's interpretation of the quantum formalism. If they thought about the problem at all, most working physicists tended to follow Bohr's leadership. In 1964, John Stewart Bell made the groundbreaking discovery that Einstein's local realist world view made experimentally verifiable predictions that would be in conflict with those of quantum mechanics. Bell's discovery shifted the Einstein–Bohr debate from philosophy to the realm of experimental physics. Bell's theorem showed that, for any local realist formalism, there exist limits on the predicted correlations between pairs of particles in an experimental realization of the EPR thought experiment. In 1972, the first experimental tests were carried out that demonstrated violation of these limits. Successive experiments improved the accuracy of observation and closed loopholes. To date, it is virtually certain that local realist theories have been falsified. The EPR paper has recently been recognized as prescient, since it identified the phenomenon of quantum entanglement, which has inspired approaches to quantum mechanics different from the Copenhagen interpretation, and has been at the forefront of major technological advances in quantum computing, quantum encryption, and quantum information theory. Notes Primary sources References External links NOVA: Inside Einstein's Mind (2015) — Retrace the thought experiments that inspired his theory on the nature of reality. Special relativity General relativity History of physics Thought experiments in quantum mechanics Albert Einstein
Einstein's thought experiments
Physics
10,557
5,799,093
https://en.wikipedia.org/wiki/Operation%20Bumblebee
Operation Bumblebee was a US Navy effort to develop surface-to-air missiles (SAMs) to provide a mid-range layer of anti-aircraft defense between anti-aircraft guns in the short range and fighter aircraft operating at long range. A major reason for the Bumblebee efforts was the need to engage bombers before they could launch standoff anti-shipping weapons, as these aircraft might never enter the range of the shipboard guns. Bumblebee originally concentrated on a ramjet-powered design, and the initial Applied Physics Lab PTV-N-4 Cobra/BTV (Propulsion Test Vehicle/Burner Test Vehicle) was flown in October 1945. Cobra eventually emerged as the RIM-8 Talos, which entered service on 28 May 1958 aboard the light cruiser USS Galveston. As part of the development program, several other vehicles were also developed. One of these developed into the RIM-2 Terrier, which entered operational status on 15 June 1956, two years before Talos; Terrier was first installed aboard the heavy cruiser USS Canberra. The Terrier was later modified as a short-range missile system for smaller ships, entering service in 1963 as the RIM-24 Tartar. Together, the three missiles were known as the "3 Ts". Bumblebee was not the only early Navy SAM project; the SAM-N-2 Lark was rushed into production as a short-range counter to the Kamikaze threat. However, it never matured into an operational weapon. The RIM-50 Typhon was developed to replace the 3 Ts but was cancelled during development. The 3 Ts were ultimately replaced by the RIM-66 Standard, a development of the Tartar. Origin Navy ships were hit by air-launched Henschel Hs 293 and Ruhrstahl SD 1400 X anti-ship guided bombs in 1943. A ramjet-powered anti-aircraft missile was proposed to destroy aircraft launching such weapons while remaining beyond the range of shipboard artillery. Initial performance goals were target intercept at a horizontal range of 10 miles and altitude, with a warhead for a 30 to 60 percent kill probability. Heavy shipping losses to kamikaze attacks during the Battle of Okinawa provided additional incentive for missile development. This role was not as demanding as the attacking weapon was much larger, but there was a desire for long range and rapid deployment. This led to a second concept, the SAM-N-2 Lark, a subsonic missile intended to provide a middle layer of defense between the long-range combat air patrols and short-range anti-aircraft artillery. With the ending of the war, and the introduction of jet-powered bombers with significantly higher performance, interest in Lark ended in favor of the Bumblebee efforts, and the prototype examples were used as test vehicles. Field testing In addition to initial tests at the Island Beach, New Jersey, and Fort Miles, Delaware, temporary sites, Camp Davis, North Carolina, was used for Operation Bumblebee from 1 June 1946 to 28 July 1948. Topsail Island, North Carolina, became the permanent Bumblebee testing and launch facility in March 1947. The Topsail Historical Society hosts the Missiles and More Museum at the site. Testing was transferred to Naval Air Weapons Station China Lake and then to White Sands Missile Range in 1951, where was built as a prototype Talos launch facility. Program results The RIM-2 Terrier, devised as a test vehicle, became operational as a fleet anti-aircraft missile aboard in 1955 and evolved into the RIM-66 Standard. Talos became operational with the fleet aboard in February 1959 and saw combat use during the Vietnam War. 
Ramjet knowledge acquired during the program aided the development of the XB-70 Valkyrie and the SR-71 Blackbird. Solid fuel boosters developed to bring the ramjet to operational velocity formed the basis for larger solid fuel rocket motors for ICBMs, satellite launch vehicles, and the Space Shuttle. References External links Topsail Historical Society's Missiles and More Museum The Bumblebee Project Military projects of the United States Bumblebee Naval surface-to-air missiles of the United States Naval weapons of the United States Nuclear anti-aircraft weapons Surface-to-air missiles of the United States
Operation Bumblebee
Engineering
852
808,767
https://en.wikipedia.org/wiki/Vacuum%20distillation
Vacuum distillation, or distillation under reduced pressure, is a type of distillation performed at pressures below ambient. It allows the purification of compounds that are not readily distilled at ambient pressure, or can simply save time or energy. The technique separates compounds based on differences in their boiling points, and is used when the boiling point of the desired compound is difficult to reach or is high enough that the compound decomposes before distilling. Reduced pressure lowers a compound's boiling point. The reduction in boiling point can be estimated with a temperature–pressure nomograph based on the Clausius–Clapeyron relation. Laboratory-scale applications Compounds with a boiling point below about 150 °C are typically distilled at ambient pressure. For samples with higher boiling points, short-path distillation apparatus is commonly employed. The technique is amply illustrated in Organic Syntheses. Rotary evaporation Rotary evaporation is a common technique used in laboratories to concentrate or isolate a compound from solution. Many solvents are volatile and can easily be removed by rotary evaporation. Even less volatile solvents can be removed by rotary evaporation under high vacuum and with heating. Rotary evaporation is also used by environmental regulatory agencies to determine the amount of solvents in paints, coatings and inks. Safety considerations Safety is an important consideration when glassware is used under reduced pressure. Scratches and cracks can result in implosions when the vacuum is applied. Wrapping as much of the glassware as is practical with tape helps to prevent dangerous scattering of glass shards in the event of an implosion. Industrial-scale applications Industrial-scale vacuum distillation has several advantages. Close-boiling mixtures may require many equilibrium stages to separate the key components, and one tool to reduce the number of stages needed is vacuum distillation. Vacuum distillation columns (as depicted in Figures 2 and 3) typically used in oil refineries have diameters ranging up to about 14 meters (46 feet), heights ranging up to about 50 meters (164 feet), and feed rates ranging up to about 25,400 cubic meters per day (160,000 barrels per day). Vacuum distillation can improve a separation by: preventing product degradation or polymer formation, because the reduced pressure leads to lower tower-bottoms temperatures; reducing product degradation or polymer formation, because of the reduced mean residence time, especially in columns using packing rather than trays; and increasing yield and purity. Vacuum distillation in petroleum refining Petroleum crude oil is a complex mixture of hundreds of different hydrocarbon compounds, generally having from 3 to 60 carbon atoms per molecule, although there may be small amounts of hydrocarbons outside that range. The refining of crude oil begins with distilling the incoming crude oil in a so-called atmospheric distillation column operating at pressures slightly above atmospheric pressure. Because it keeps operating temperatures down, vacuum distillation is also sometimes referred to as "low-temperature distillation". In distilling the crude oil, it is important not to subject the crude oil to temperatures above 370 to 380 °C, because the high-molecular-weight components in the crude oil will undergo thermal cracking and form petroleum coke above that range. Formation of coke would result in plugging the tubes in the furnace that heats the feed stream to the crude oil distillation column. 
Plugging would also occur in the piping from the furnace to the distillation column as well as in the column itself. The constraint imposed by limiting the column inlet crude oil to a temperature of less than 370 to 380 °C yields a residual oil from the bottom of the atmospheric distillation column consisting entirely of hydrocarbons that boil above 370 to 380 °C. To further distill the residual oil from the atmospheric distillation column, the distillation must be performed at absolute pressures as low as 10 to 40 mmHg (torr), about 5% of atmospheric pressure, so as to limit the operating temperature to less than 370 to 380 °C. Figure 2 is a simplified process diagram of a petroleum refinery vacuum distillation column that depicts the internals of the column and Figure 3 is a photograph of a large vacuum distillation column in a petroleum refinery. The 10 to 40 mmHg absolute pressure in a vacuum distillation column increases the volume of vapor formed per volume of liquid distilled. The result is that such columns have very large diameters. Distillation columns such as those in Figures 2 and 3 may have diameters of 15 meters or more, heights ranging up to about 50 meters, and feed rates ranging up to about 25,400 cubic meters per day (160,000 barrels per day). The vacuum distillation column internals must provide good vapor–liquid contacting while, at the same time, maintaining a very low pressure increase from the top of the column to the bottom. Therefore, the vacuum column uses distillation trays only where products are withdrawn from the side of the column (referred to as side draws). Most of the column uses packing material for the vapor–liquid contacting because such packing has a lower pressure drop than distillation trays. This packing material can be either structured sheet metal or randomly dumped packing such as Raschig rings. The absolute pressure of 10 to 40 mmHg in the vacuum column is most often achieved by using multiple stages of steam jet ejectors. Many industries, other than the petroleum refining industry, use vacuum distillation on a much smaller scale. Copenhagen-based Empirical Spirits, a distillery founded by former Noma chefs, uses the process to create uniquely flavoured spirits. Their flagship spirit, Helena, is created using koji, alongside pilsner malt and Belgian saison yeast. Large-scale water purification Vacuum distillation is often used in large industrial plants as an efficient way to remove salt from ocean water, in order to produce fresh water. This is known as desalination. The ocean water is placed under a vacuum to lower its boiling point and has a heat source applied, allowing the fresh water to boil off and be condensed. The condensing of the water vapor prevents the water vapor from filling the vacuum chamber, and allows the effect to run continuously without a loss of vacuum pressure. The heat from condensation of the water vapor is removed by a heat sink, which uses the incoming ocean water as the coolant and thus preheats the feed of ocean water. Some forms of distillation do not use condensers, but instead compress the vapor mechanically with a pump. This acts as a heat pump, concentrating the heat from the vapor and allowing for the heat to be returned and reused by the incoming untreated water source. There are several forms of vacuum distillation of water, with the most common being multiple-effect distillation, vapor-compression desalination, and multi-stage flash distillation.
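Across the laboratory, refinery and desalination applications above, the underlying effect is the same: lowering the pressure lowers the boiling point. A rough quantitative feel for that relationship — the one a temperature–pressure nomograph encodes — can be sketched with the integrated Clausius–Clapeyron relation. The example compound and its enthalpy of vaporization below are assumed values chosen purely for illustration, not data from this article.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def boiling_point_at_pressure(t_boil_atm_c, p_target_mmhg,
                              delta_h_vap_j_mol, p_ref_mmhg=760.0):
    """Estimate the boiling point (°C) at a reduced pressure using the
    integrated Clausius–Clapeyron relation:
        ln(P2/P1) = -(ΔHvap/R) * (1/T2 - 1/T1)
    Assumes ΔHvap is constant over the range (a rough approximation,
    much like a nomograph)."""
    t1 = t_boil_atm_c + 273.15
    inv_t2 = 1.0 / t1 - R * math.log(p_target_mmhg / p_ref_mmhg) / delta_h_vap_j_mol
    return 1.0 / inv_t2 - 273.15

# Hypothetical compound boiling at 250 °C at 760 mmHg with ΔHvap ≈ 50 kJ/mol,
# distilled instead at 10 mmHg: the estimate drops to roughly 105-110 °C.
print(round(boiling_point_at_pressure(250.0, 10.0, 50_000.0), 1))
```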
Molecular distillation Molecular distillation is vacuum distillation below the pressure of 0.01 torr (1.3 Pa). 0.01 torr is one order of magnitude above high vacuum, where fluids are in the free molecular flow regime, i.e. the mean free path of molecules is comparable to the size of the equipment. The gaseous phase no longer exerts significant pressure on the substance to be evaporated, and consequently, the rate of evaporation no longer depends on pressure. That is, because the continuum assumptions of fluid dynamics no longer apply, mass transport is governed by molecular dynamics rather than fluid dynamics. Thus, a short path between the hot surface and the cold surface is necessary, typically by suspending a hot plate covered with a film of feed next to a cold plate with a line of sight in between. Molecular distillation is used industrially for purification of oils. Gallery See also Continuous distillation Fractionating column Fractional distillation Kugelrohr References External links D1160 Vacuum Distillation How vacuum distillation works Pressure-temperature nomograph Short path distillation , includes a table comparing methods Distillation Industrial processes Vacuum
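For the free-molecular-flow regime described in the molecular distillation section, a quick kinetic-theory estimate of the mean free path, λ = k_B·T / (√2·π·d²·p), shows why the hot and cold surfaces must be only a short distance apart. The molecular diameter used below is a generic assumed value, not a figure from the article.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path_m(temp_k, pressure_pa, molecule_diameter_m):
    """Kinetic-theory mean free path: lambda = k_B*T / (sqrt(2)*pi*d^2*p)."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * molecule_diameter_m**2 * pressure_pa)

# At 0.01 torr (~1.3 Pa) and 400 K, for an assumed molecular diameter of 0.5 nm:
lam = mean_free_path_m(400.0, 1.3, 0.5e-9)
print(f"mean free path ≈ {lam * 1000:.1f} mm")
# A few millimetres — hence the requirement for a short hot-to-cold path.
```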
Vacuum distillation
Physics,Chemistry
1,632
76,533,523
https://en.wikipedia.org/wiki/Hwinfo
HWiNFO (also known as HWiNFO64) is a system monitoring, system profiling and system diagnostics program for Windows and DOS-based systems. It is developed by Martin Malik and REALiX. It was used by NASA during several tests of different microprocessors, including an AMD Ryzen 3 1200 and an Intel Core i5-6600K. Features Displaying CPU, GPU and other hardware information Monitoring CPU, GPU and other sensors References Utilities for Windows
Hwinfo
Technology
100
50,642,077
https://en.wikipedia.org/wiki/Pipelayer%20%28vehicle%29
A pipelayer or sideboom is a type of construction vehicle used to lay pipes. It is used in the construction of oil, gas and water pipelines. Description The lifting equipment of pipelayers includes: a boom, mounted on the left side of the machine; counterweights; and hydraulic hook and boom winches. A pipelayer can also be equipped with accessories: a crane (used to lift and place pipeline elements in a trench), a dragline (used to backfill a trench), and a rammer (used to compact the soil in an excavation). Standards Construction, installation, operation, inspection, testing, and maintenance of sidebooms are covered by OSHA standard 1926.1440 - Sideboom cranes and ASME B30.14 - Side Boom Tractors. References Engineering vehicles
Pipelayer (vehicle)
Engineering
160
3,124,930
https://en.wikipedia.org/wiki/Hybrid%20name
In botanical nomenclature, a hybrid may be given a hybrid name, which is a special kind of botanical name, but there is no requirement that a hybrid name should be created for plants that are believed to be of hybrid origin. The International Code of Nomenclature for algae, fungi, and plants (ICNafp) provides the following options in dealing with a hybrid: A hybrid may get a name if the author considers it necessary (in practice, authors tend to use this option for naturally occurring hybrids), but it is recommended to use parents' names as they are more informative (art. H.10B.1). A hybrid may also be indicated by a formula listing the parents. Such a formula uses the multiplication sign "×" to link the parents. "It is usually preferable to place the names or epithets in a formula in alphabetical order. The direction of a cross may be indicated by including the sexual symbols (♀: female; ♂: male) in the formula, or by placing the female parent first. If a non-alphabetical sequence is used, its basis should be clearly indicated." (H.2A.1) Grex names can be given to orchid hybrids. A hybrid name is treated like other botanical names, for most purposes, but differs in that: A hybrid name does not necessarily refer to a morphologically distinctive group, but applies to all progeny of the parents, no matter how much they vary. E.g., Magnolia × soulangeana applies to all progeny from the cross Magnolia denudata × Magnolia liliiflora, and from the crosses of all their progeny, as well as from crosses of any of the progeny back to the parents (backcrossing). This covers quite a range in flower colour. Grex names (for orchids only) differ in that they do not cover crosses from plants within the grex (F2 hybrids) or back-crosses (crosses between a grex member and its parent). Hybrids can be named with ranks, like other organisms covered by the ICNafp. They are nothotaxa, from notho- (hybrid) + taxon. If the parents (or postulated parents) differ in rank, then the rank of the nothotaxon is the lowest. The names of nothospecies differ depending on whether they are derived from species within the same genus; if more than one parental genus is involved, then the nothospecies name includes a nothogenus name. Pyrus × bretschneideri is a hybrid between two species in the genus Pyrus. × Pyraria irregularis, in the nothogenus Pyraria, is a hybrid between Aria edulis and Pyrus communis. Publication of names Names of hybrids between genera (called nothogenera) can be published by specifying the names of the parent genera, but without a scientific description, and do not have a type. Nothotaxon names with the rank of a subdivision of a genus (notho-subgenus, notho-section, notho-series, etc.) are also published by listing the parent taxa and without descriptions or types. Forms of hybrid names A hybrid name can be indicated by: a multiplication sign "×" placed before the name of an intergeneric hybrid or before the epithet of a species hybrid. An intervening space is optional. e.g.: × Sorbaronia or ×Sorbaronia is the name of hybrids between the genera Sorbus and Aronia, Iris × germanica or Iris ×germanica is a species derived by hybrid speciation or by the prefix notho- attached to the rank (from Ancient Greek νόθος, nóthos, “bastard”) Crataegus nothosect. Crataeguineae Iris germanica nothovar. florentina. The multiplication sign and the prefix notho- are not part of the actual name and are disregarded for nomenclatural purposes such as synonymy, homonymy, etc. 
This means that a taxonomist could decide to use either form of this name: Drosera ×anglica to emphasize that it is a hybrid, or Drosera anglica to emphasize that it is a species. The names of intergeneric hybrids generally have a special form called a condensed formula, e.g., × Agropogon for hybrids between Agrostis and Polypogon. Hybrids involving four or more genera are given names formed from the name of a person, with the suffix -ara, e.g., × Belleara. Names for hybrids between three genera can be either a condensed formula or formed from a person's name with the suffix -ara. Notation The symbol used to indicate a hybrid is the multiplication sign "×". (Linnaeus originally used a different sign, but abandoned it in favour of the multiplication sign.) See also Botanical nomenclature International Code of Nomenclature for algae, fungi, and plants Graft-chimaera names look similar, but use the addition sign "+". Glossary of scientific naming How to type the × symbol Notes References External links The Language of Horticulture Botanical nomenclature Hybrid plants
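As a small illustration of the conventions described above (parents listed alphabetically in a formula, the multiplication sign placed before a nothogeneric name or before the specific epithet), here is a hypothetical formatting helper. The function names are invented for this sketch and do not come from any nomenclatural software.

```python
def hybrid_formula(parent_a: str, parent_b: str) -> str:
    """Parent formula with the multiplication sign, parents in alphabetical order
    (the recommended default when the direction of the cross is not indicated)."""
    first, second = sorted([parent_a, parent_b])
    return f"{first} \u00d7 {second}"

def nothospecies_name(genus: str, epithet: str) -> str:
    """Hybrid binary name: multiplication sign placed before the epithet."""
    return f"{genus} \u00d7{epithet}"

def nothogenus_name(condensed: str) -> str:
    """Intergeneric hybrid (condensed formula): sign placed before the generic name."""
    return f"\u00d7{condensed}"

print(hybrid_formula("Magnolia liliiflora", "Magnolia denudata"))  # Magnolia denudata × Magnolia liliiflora
print(nothospecies_name("Iris", "germanica"))                      # Iris ×germanica
print(nothogenus_name("Sorbaronia"))                               # ×Sorbaronia
```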
Hybrid name
Biology
1,054
18,310,964
https://en.wikipedia.org/wiki/Reproterol
Reproterol is a short-acting β2 adrenoreceptor agonist used in the treatment of asthma. It was patented in 1965 and came into medical use in 1977. Stereochemistry Reproterol contains a stereocenter and is chiral. There are thus two enantiomers, the (R)-form and the (S)-form. The commercial preparations contain the drug as a racemate, an equal mixture of the two enantiomers. References Antiasthmatic drugs Beta-adrenergic agonists Xanthines Phenylethanolamines Resorcinols
Reproterol
Chemistry
133
33,719,867
https://en.wikipedia.org/wiki/Laternea%20pusilla
Laternea pusilla is a species of fungus in the family Phallaceae, found in Central and South America. Like other stinkhorns, it produces a foul-smelling, spore-laden slime; the odour is repellent to humans but attractive to the insects that disperse its spores. Phallales Fungi described in 1868 Taxa named by Miles Joseph Berkeley Taxa named by Moses Ashley Curtis Fungus species
Laternea pusilla
Biology
85
969,807
https://en.wikipedia.org/wiki/Pit%20sword
The pit sword (also known as a rodmeter) is a blade of metal or plastic that extends into the water beneath the hull of a ship. It is part of the pitometer log, a device for measuring the ship's speed through the water. See also Electromagnetic log Pitot tube References External links Historical site with information on US Navy submarine pitometer log systems during World War II. This site shows the pit sword or rodmeter as it is deployed. Measuring instruments
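Since the pitometer log works on essentially the same differential-pressure principle as the pitot tube linked above, a rough speed estimate can be sketched from Bernoulli's relation v = √(2Δp/ρ). The pressure reading and water density below are assumed illustrative values, not specifications of any actual log.

```python
import math

def speed_from_dynamic_pressure(delta_p_pa: float, rho_kg_m3: float = 1025.0) -> float:
    """Bernoulli estimate of flow speed (m/s) from the difference between
    impact (dynamic) and static pressure: v = sqrt(2*dp/rho).
    Default density is typical seawater (~1025 kg/m^3)."""
    return math.sqrt(2.0 * delta_p_pa / rho_kg_m3)

# Hypothetical reading: 25 kPa differential pressure in seawater.
v = speed_from_dynamic_pressure(25_000.0)
print(f"{v:.1f} m/s  (~{v * 1.9438:.1f} knots)")
```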
Pit sword
Technology,Engineering
97
36,758,926
https://en.wikipedia.org/wiki/Target%20peptide
A target peptide is a short (3-70 amino acids long) peptide chain that directs the transport of a protein to a specific region in the cell, including the nucleus, mitochondria, endoplasmic reticulum (ER), chloroplast, apoplast, peroxisome and plasma membrane. Some target peptides are cleaved from the protein by signal peptidases after the proteins are transported. Types by protein destination Secretion Almost all proteins that are destined for the secretory pathway have a sequence consisting of 5-30 hydrophobic amino acids on the N-terminus, which is commonly referred to as the signal peptide, signal sequence or leader peptide. Signal peptides form alpha-helical structures. Proteins that contain such signals are destined for either extra-cellular secretion, the plasma membrane, or the lumen or membrane of either the ER, Golgi or endosomes. Certain membrane-bound proteins are targeted to the secretory pathway by their first transmembrane domain, which resembles a typical signal peptide. In prokaryotes, signal peptides direct the newly synthesized protein to the SecYEG protein-conducting channel, which is present in the plasma membrane. A homologous system exists in eukaryotes, where the signal peptide directs the newly synthesized protein to the Sec61 channel, which shares structural and sequence similarity with SecYEG, but is present in the endoplasmic reticulum. Both the SecYEG and Sec61 channels are commonly referred to as the translocon, and transit through this channel is known as translocation. While secreted proteins are threaded through the channel, transmembrane domains may diffuse across a lateral gate in the translocon to partition into the surrounding membrane. ER-Retention Signal In eukaryotes, most of the newly synthesized secretory proteins are transported from the ER to the Golgi apparatus. If these proteins have a particular 4-amino-acid retention sequence for the ER's lumen, KDEL, on their C-terminus, they are retained in the ER's lumen or are routed back to the ER's lumen (in instances where they escape) via interaction with the KDEL receptor in the Golgi apparatus. If the signal is KKXX, the retention mechanism to the ER will be similar, but the protein will be transmembranal. Nucleus A nuclear localization signal (NLS) is a target peptide that directs proteins to the nucleus and is often a unit consisting of five basic, positively charged amino acids. The NLS normally is located anywhere on the peptide chain. A nuclear export signal (NES) is a target peptide that directs proteins from the nucleus back to the cytosol. It often consists of several hydrophobic amino acids (often leucine) interspaced by 2-3 other amino acids. Many proteins are known to constantly shuttle between the cytosol and nucleus and these contain both NESs and NLSs. Nucleolus The nucleolus within the nucleus can be targeted with a sequence called a nucleolar localization signal (abbreviated NoLS or NOS). Mitochondria and plastid The mitochondrial targeting signal, also known as a presequence, is a 10-70 amino acid long peptide that directs a newly synthesized protein to the mitochondria. It is found at the N-terminus and consists of an alternating pattern of hydrophobic and positively charged amino acids, forming what is called an amphipathic helix. Mitochondrial targeting signals can contain additional signals that subsequently target the protein to different regions of the mitochondria, such as the mitochondrial matrix or inner membrane.
In plants, an N-terminal signal (or transit peptide) targets to the plastid in a similar manner. Like most signal peptides, mitochondrial targeting signals and plastid specific transit peptides are cleaved once targeting is complete. Some plant proteins have an N-terminal transport signal that targets both organelles often referred to as dual-targeted transit peptide. Approximately 5% of total organelle proteins are predicted to be dual-targeted however the specific number could be higher considering the variable degree of accumulation of passenger proteins in both organelles. The targeting specificity of these transit peptides depends on many factors including net charge and affinity between transit peptides and organelle transport machinery. Peroxisome There are two types of target peptides directing to peroxisome, which are called peroxisomal targeting signals (PTS). One is PTS1, which is made of three amino acids on the C-terminus. The other is PTS2, which is made of a 9-amino-acid sequence often present on the N-terminus of the protein. Examples of target peptides The following content uses protein primary structure single-letter location. A "[n]" prefix indicates the N-terminus and a "[c]" suffix indicates the C-terminus; sequences lacking either are found in the middle of the protein. See also Protein targeting Signal peptide References External links SPdb (Signal Peptide DataBase) Prediction methods: SignalP — predicts the presence and location of signal peptide cleavage sites in amino acid sequences from all domains of organisms. PHOBIUS - combined transmembrane topology and signal peptide predictor Gene expression Protein targeting
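A minimal sketch of how the simplest of the signals described above can be looked up in a protein sequence — the C-terminal KDEL ER-retention motif and a canonical PTS1 tripeptide — is shown below. The PTS1 consensus used (serine/alanine/cysteine, then lysine/arginine/histidine, then leucine/methionine) is a common textbook simplification and an assumption of this sketch, and the example sequences are invented for illustration.

```python
import re

def has_kdel(seq: str) -> bool:
    """ER-retention signal: the 4-residue KDEL motif at the C-terminus."""
    return seq.upper().endswith("KDEL")

def has_pts1(seq: str) -> bool:
    """PTS1 peroxisomal signal: C-terminal tripeptide, here the common
    (S/A/C)(K/R/H)(L/M) consensus — a simplified assumption."""
    return re.search(r"[SAC][KRH][LM]$", seq.upper()) is not None

# Toy sequences, invented for illustration:
print(has_kdel("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKDEL"))  # True
print(has_pts1("MSTHLKANLRSKL"))                                             # True
```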
Target peptide
Chemistry,Biology
1,096
8,092,698
https://en.wikipedia.org/wiki/Euclidean%20distance%20matrix
In mathematics, a Euclidean distance matrix is an n×n matrix representing the spacing of a set of n points in Euclidean space. For points x_1, x_2, ..., x_n in k-dimensional space ℝ^k, the elements of their Euclidean distance matrix A are given by squares of distances between them. That is A = (a_ij), where a_ij = d_ij² = ‖x_i − x_j‖² and ‖·‖ denotes the Euclidean norm on ℝ^k. In the context of (not necessarily Euclidean) distance matrices, the entries are usually defined directly as distances, not their squares. However, in the Euclidean case, squares of distances are used to avoid computing square roots and to simplify relevant theorems and algorithms. Euclidean distance matrices are closely related to Gram matrices (matrices of dot products, describing norms of vectors and angles between them). The latter are easily analyzed using methods of linear algebra. This allows one to characterize Euclidean distance matrices and recover the points that realize such a matrix. A realization, if it exists, is unique up to rigid transformations, i.e. distance-preserving transformations of Euclidean space (rotations, reflections, translations). In practical applications, distances are noisy measurements or come from arbitrary dissimilarity estimates (not necessarily metric). The goal may be to visualize such data by points in Euclidean space whose distance matrix approximates a given dissimilarity matrix as well as possible — this is known as multidimensional scaling. Alternatively, given two sets of data already represented by points in Euclidean space, one may ask how similar they are in shape, that is, how closely can they be related by a distance-preserving transformation — this is Procrustes analysis. Some of the distances may also be missing or come unlabelled (as an unordered set or multiset instead of a matrix), leading to more complex algorithmic tasks, such as the graph realization problem or the turnpike problem (for points on a line). Properties By the fact that Euclidean distance is a metric, the matrix A has the following properties. All elements on the diagonal of A are zero (i.e. it is a hollow matrix); hence the trace of A is zero. A is symmetric (i.e. a_ij = a_ji). √a_ij ≤ √a_ik + √a_kj (by the triangle inequality). In dimension k, a Euclidean distance matrix has rank less than or equal to k + 2. If the points x_1, x_2, ..., x_n are in general position, the rank is exactly min(n, k + 2). Distances can be shrunk by any power to obtain another Euclidean distance matrix. That is, if A = (a_ij) is a Euclidean distance matrix, then (a_ij^s) is a Euclidean distance matrix for every 0 < s ≤ 1. Relation to Gram matrix The Gram matrix of a sequence of points x_1, x_2, ..., x_n in k-dimensional space ℝ^k is the n×n matrix G = (g_ij) of their dot products (here a point x_i is thought of as a vector from 0 to that point): g_ij = x_i · x_j = ‖x_i‖ ‖x_j‖ cos θ, where θ is the angle between the vectors x_i and x_j. In particular g_ii = ‖x_i‖² is the square of the distance of x_i from 0. Thus the Gram matrix describes norms and angles of the vectors (from 0 to) x_1, x_2, ..., x_n. Let X be the k×n matrix containing x_1, x_2, ..., x_n as columns. Then G = XᵀX, because g_ij = x_iᵀ x_j (seeing x_i as a column vector). Matrices that can be decomposed as XᵀX, that is, Gram matrices of some sequence of vectors (columns of X), are well understood — these are precisely positive semidefinite matrices. To relate the Euclidean distance matrix to the Gram matrix, observe that a_ij = ‖x_i − x_j‖² = ‖x_i‖² + ‖x_j‖² − 2 x_i · x_j = g_ii + g_jj − 2 g_ij. That is, the norms and angles determine the distances. Note that the Gram matrix contains additional information: distances from 0. Conversely, distances a_ij between pairs of points determine the dot products between the vectors x_i − x_1 (2 ≤ i, j ≤ n): g_ij = (x_i − x_1)·(x_j − x_1) = ½(a_1i + a_1j − a_ij) (this is known as the polarization identity). Characterizations For a matrix A, a sequence of points x_1, x_2, ..., x_n in k-dimensional Euclidean space ℝ^k is called a realization of A in ℝ^k if A is their Euclidean distance matrix.
One can assume without loss of generality that x_1 = 0 (because translating by −x_1 preserves distances). A symmetric hollow matrix A then admits a realization in ℝ^k if and only if the matrix G = (g_ij) defined by g_ij = ½(a_1i + a_1j − a_ij), for 2 ≤ i, j ≤ n, is positive semidefinite and has rank at most k. This follows from the previous discussion because G is positive semidefinite of rank at most k if and only if it can be decomposed as G = XᵀX where X is a k×(n−1) matrix. Moreover, the columns of X give a realization in ℝ^k. Therefore, any method to decompose G allows one to find a realization. The two main approaches are variants of Cholesky decomposition or using spectral decompositions to find the principal square root of G, see Definite matrix#Decomposition. The statement of this theorem distinguishes the first point x_1. A more symmetric variant of the same theorem states that A is a Euclidean distance matrix realizable in ℝ^k if and only if −½ JAJ is positive semidefinite and has rank at most k, where J = I − (1/n)·11ᵀ is the centering matrix. Other characterizations involve Cayley–Menger determinants. In particular, these allow one to show that a symmetric hollow matrix is realizable in ℝ^k if and only if every (k + 3) × (k + 3) principal submatrix is. In other words, a semimetric on finitely many points is embeddable isometrically in ℝ^k if and only if every k + 3 points are. In practice, the definiteness or rank conditions may fail due to numerical errors, noise in measurements, or due to the data not coming from actual Euclidean distances. Points that realize optimally similar distances can then be found by semidefinite approximation (and low rank approximation, if desired) using linear algebraic tools such as singular value decomposition or semidefinite programming. This is known as multidimensional scaling. Variants of these methods can also deal with incomplete distance data. Unlabeled data, that is, a set or multiset of distances not assigned to particular pairs, is much more difficult to deal with. Such data arises, for example, in DNA sequencing (specifically, genome recovery from partial digest) or phase retrieval. Two sets of points are called homometric if they have the same multiset of distances (but are not necessarily related by a rigid transformation). Deciding whether a given multiset of distances can be realized in a given dimension is strongly NP-hard. In one dimension this is known as the turnpike problem; it is an open question whether it can be solved in polynomial time. When the multiset of distances is given with error bars, even the one dimensional case is NP-hard. Nevertheless, practical algorithms exist for many cases, e.g. random points. Uniqueness of representations Given a Euclidean distance matrix, the sequence of points that realize it is unique up to rigid transformations – these are isometries of Euclidean space: rotations, reflections, translations, and their compositions. Rigid transformations preserve distances, so one direction is clear. Suppose the distances ‖x_i − x_j‖ and ‖y_i − y_j‖ are equal. Without loss of generality we can assume x_1 = y_1 = 0, by translating the points by −x_1 and −y_1, respectively. Then the Gram matrix of the remaining vectors x_i is identical to the Gram matrix of the vectors y_i (i = 2, ..., n). That is, XᵀX = YᵀY, where X and Y are the matrices containing the respective vectors as columns. This implies there exists an orthogonal matrix Q such that QX = Y, see Definite symmetric matrix#Uniqueness up to unitary transformations. Q describes an orthogonal transformation of ℝ^k (a composition of rotations and reflections, without translations) which maps x_i to y_i (and 0 to 0). The final rigid transformation is described by T(x) = Q(x − x_1) + y_1. In applications, when distances don't match exactly, Procrustes analysis aims to relate two point sets as close as possible via rigid transformations, usually using singular value decomposition.
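The decomposition route described above (fix the first point at the origin, form the Gram matrix via the polarization identity, then factor it) can be sketched in a few lines of NumPy. This is only a minimal illustration of the idea; production multidimensional-scaling code would handle noise and rank truncation more carefully.

```python
import numpy as np

def realize_edm(A: np.ndarray, dim: int) -> np.ndarray:
    """Recover points (as columns) in R^dim from a matrix A of squared
    Euclidean distances, placing the first point at the origin.
    Uses g_ij = (a_1i + a_1j - a_ij)/2 and an eigendecomposition of the
    resulting Gram matrix."""
    G = 0.5 * (A[0, :][None, :] + A[:, 0][:, None] - A)  # Gram matrix w.r.t. x_1 = 0
    w, V = np.linalg.eigh(G)
    w = np.clip(w[::-1], 0.0, None)[:dim]                 # largest eigenvalues, clipped at 0
    V = V[:, ::-1][:, :dim]
    return (V * np.sqrt(w)).T                             # dim x n coordinate matrix

# Round trip on random points: squared distances in, congruent points out.
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 5))
A = np.square(np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0))
Y = realize_edm(A, 2)
B = np.square(np.linalg.norm(Y[:, :, None] - Y[:, None, :], axis=0))
print(np.allclose(A, B))  # True: same distance matrix, up to a rigid transformation
```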
The ordinary Euclidean case is known as the orthogonal Procrustes problem or Wahba's problem (when observations are weighted to account for varying uncertainties). Examples of applications include determining orientations of satellites, comparing molecule structure (in cheminformatics), protein structure (structural alignment in bioinformatics), or bone structure (statistical shape analysis in biology). See also Adjacency matrix Coplanarity Distance geometry Hollow matrix Distance matrix Euclidean random matrix Classical multidimensional scaling, a visualization technique that approximates an arbitrary dissimilarity matrix by a Euclidean distance matrix Cayley–Menger determinant Semidefinite embedding Notes References Matrices Distance
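A bare-bones version of the orthogonal Procrustes step mentioned above — finding the rotation/reflection that best maps one centred point set onto another via a singular value decomposition — might look like the following sketch; it omits the scaling and weighting variants used in practice.

```python
import numpy as np

def procrustes_rotation(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal matrix Q minimizing ||Q @ X - Y||_F for point sets given as
    d x n matrices (assumed already centred), via the SVD of Y @ X.T."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 10))
X -= X.mean(axis=1, keepdims=True)       # centre the points
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
Y = R @ X                                # second point set: a rotated copy
Q = procrustes_rotation(X, Y)
print(np.allclose(Q, R))                 # True: the applied rotation is recovered
```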
Euclidean distance matrix
Physics,Mathematics
1,519
36,562,676
https://en.wikipedia.org/wiki/Miracast
Miracast is a wireless communications standard created by the Wi-Fi Alliance which is designed to transmit video and sound from devices (such as laptops or smartphones) to display receivers (such as TVs, monitors, or projectors). It uses Wi-Fi Direct to create an ad hoc encrypted wireless connection and can roughly be described as "HDMI over Wi-Fi", replacing cables in favor of wireless. Miracast is utilised in many devices and is used or branded under various names by different manufacturers, including Smart View (by Samsung), SmartShare (by LG), screen mirroring (by Sony), Cast (in Windows 11) and Connect (in Windows 10), wireless display and screen casting. A related enterprise protocol named Miracast over Infrastructure (MS-MICE) functions using a central local area network instead, and is supported in Microsoft Windows. Development The Wi-Fi Alliance launched the Miracast certification program at the end of 2012. Devices that are Miracast-certified can communicate with each other, regardless of manufacturer. Nvidia announced support in 2012 for their Tegra 3 platform, and Freescale Semiconductor, Texas Instruments, Qualcomm, Marvell Technology Group and other chip vendors have also announced their plans to support the Miracast standard. The Wi-Fi Alliance maintains a list of certified device models, which numbered over 13,200. Technical details Miracast is based on the peer-to-peer Wi-Fi Direct standard. It allows sending up to 1080p HD video (H.264 codec) and 5.1 surround sound (AAC and AC3 are optional codecs; the mandated codec is linear pulse-code modulation, 16 bits, 48 kHz, 2 channels). The connection is created via WPS and therefore is secured with WPA2. IPv4 is used on the Internet layer. On the transport layer, TCP or UDP are used. On the application layer, the stream is initiated and controlled via RTSP, with RTP used for the data transfer. Version history Functionality The technology was promoted to work across devices, regardless of brand. Miracast devices negotiate settings for each connection, which simplifies the process for the users. In particular, it obviates having to worry about format or codec details. Miracast is "effectively a wireless HDMI cable, copying everything from one screen to another using the H.264 codec and its own digital rights management (DRM) layer emulating the HDMI system". The Wi-Fi Alliance suggested that Miracast could also be used by a set-top box wanting to stream content to a TV or tablet. Both devices (the sender and the receiver) need to be Miracast certified for the technology to work. However, to stream music and movies to a non-certified device, Miracast adapters are available that plug into HDMI or USB ports. Certification does not mandate a maximum latency (i.e. the time between the display of pictures on the source and display of the mirrored image on the sink display). Even with certification, it is possible an underpowered device will be constrained in performance or bandwidth. Types of media streamed Miracast can stream videos that are in 1080p, media with DRM such as DVDs, as well as protected premium content streaming, enabling devices to stream feature films and other copy-protected materials. This is accomplished by using a Wi-Fi version of the same trusted content mechanisms used on cable-based HDMI and DisplayPort connections.
Display resolution 27 Consumer Electronics Association (CEA) formats, from 640 × 480 up to 4096 × 2160 pixels, and from 24 to 60 frames per second (fps) 34 Video Electronics Standards Association (VESA) formats, from 800 × 600 up to 2560 × 1600 pixels, and from 30 to 60 fps 12 handheld formats, from 640 × 360 up to 960 × 540 pixels, and from 30 to 60 fps Mandatory: 1280 × 720p30 (HD) Optional: 3840 × 2160p60 (4K Ultra HD) Video Mandatory: ITU-T H.264 (Advanced Video Coding [AVC]) for HD and Ultra HD video; supports several profiles in transcoding and non-transcoding modes, including Constrained Baseline Profile (CBP), at levels ranging from 3.1 to 5.2 Optional: ITU-T H.265 (High Efficiency Video Coding [HEVC]) for HD and Ultra HD video; supports several profiles in transcoding and non-transcoding modes, including Main Profile, Main 444, SCC-8 bit 444, Main 444 10, at levels ranging from 3.1 to 5.1 Audio Mandated audio codec: Linear Pulse-Code Modulation (LPCM) 16 bits, 48 kHz sampling, 2 channels Optional audio codecs, including: LPCM mode 16 bits, 44.1 kHz sampling, 2 channels Advanced Audio Coding (AAC) modes Dolby Advanced Codec 3 (AC3) modes E-AC-3 Dolby TrueHD, Dolby MAT modes DTS-HD mode MPEG-4 AAC and MPEG-H 3D Audio modes AAC-ELDv2 Hardware and software support A device's wireless network adapter must support Wi-Fi Direct and Virtual Wi-Fi for it to work with Miracast; generally most adapters built since 2013 should meet the criteria. In Windows computers this can be checked by looking at the adapter's NDIS version which must be 6.3 or above. However Miracast support also depends on the software implementation by manufacturers. Most modern devices support Miracast, with notable exceptions being products from Google and Apple. Windows and Linux PCs Microsoft also added support for Miracast in Windows 8.1 (announced in June 2013) and available on hardware with supported Miracast drivers from hardware (GPU) manufacturers. Windows 10 and Windows 11 support Miracast transmitting along with User Input Back Channel (UIBC) support to allow for human interface devices (touch screens, mouse, keyboard) abbreviated as HID, to also have wireless connectivity (provided the host hardware also supports this). The transmit feature is built-in from launch for all Miracast devices with no additional setup past using the WIN+K keystroke to pair with a compatible display sink (including Microsoft's own Wireless Display Adapter). Developers can also implement Miracast on top of the built-in Wi-Fi Direct support in Windows 7 and Windows 8. Windows 8.1 supports broadcasting/sending the screen via Miracast. Another way to support Miracast in Windows is with Intel's proprietary WiDi (v3.5 or higher). While Linux does not feature native support, several add-on software solutions exist. In the GNOME ecosystem, the GNOME Network Displays application has allowed for Miracast screen sharing. As part of the 2023 Google Summer of Code, an effort to integrate this as a feature in the GNOME Settings was announced, which would mean functionality would be had out of the box with that desktop environment. Windows Wireless Display Windows 11 and Windows 10 (since Windows 10 version 2004) also have the ability to use Miracast to make a monitor display (of a computer running Windows) act as a secondary screen of another device. This feature can be set up in the Projecting to this PC setting. 
It requires the downloading of the optional Wireless Display add-in feature in Windows, which adds the UWP-based Wireless Display app (known as Connect before Windows 11 version 22H2) and is launched on the receiving device. Android Miracast support was built into stock Android as of version 4.2 (Android Jelly Bean) - as of January 2013, the LG Nexus 4 and Sony's Xperia Z, ZL, T and V officially supported the function, as did HTC One, Motorola in their Droid Maxx and Droid Ultra flagships, and Samsung in its Galaxy S III and Galaxy Note II under the moniker AllShare Cast. The Galaxy S4 uses Samsung Link for its implementation. Some devices such as the Nexus 7 don't support it due to hardware limitations. Since Android 6.0 Marshmallow released in 2015, Google dropped Miracast support in favor of their own proprietary Google Cast protocol which was introduced with their Chromecast device. Despite this there are third-party Miracast apps for Android available. Many device manufacturers have retained Miracast support through their customized versions of Android (for example: Smart View on Samsung's One UI, Cast on Xiaomi's MIUI, Screencast on Oppo's ColorOS, Wireless Projection on Huawei's EMUI, HTC Sense, LG UX, Asus ZenUI, Sony Xperia devices, OnePlus's OxygenOS etc.). The performance and quality of the streamed video is dependent on the device's hardware. Nokia devices, which ran a near-stock version of Android, originally did not support Miracast. However, Nokia 7 Plus, 8, 8 Sirocco, and 8.1 smartphones that have been upgraded to Android 9 or 10 are able to support Miracast, after enabling Wireless Display Certification in Developer Options. Devices such as Nokia 2.3, 2.4, 3.4, 5.4, and 8.3 5G have Miracast support enabled by default. The same option is present to stock Android as well, with Google describing it as based on the "Wi-Fi Alliance Wi-Fi Display Specification", but it tends to be useless as Miracast code was removed. Televisions and dongles Samsung televisions support Miracast where it is named Smart View (including all models made since 2016). Miracast is also supported on LG smart TV models, some Toshiba TVs, Sharp, Philips (Wireless Screencasting), and Panasonic televisions and Blu-ray players. Sony Bravia models of televisions released between 2013 and 2020 normally have Miracast. The feature is named screen mirroring. Newer models with Android TV instead make use of the Google Cast protocol. On 23 September 2014, Microsoft announced the Microsoft Wireless Display Adaptor, a USB-powered HDMI dongle for high definition televisions. Simple dongles such as these can be used to provide Miracast to a television (or other display) that lacks the feature built-in. Miscellaneous Xbox One since 2019 using the optional downloadable Wireless Display app. Windows Phone 8.1. BlackBerry 10 devices since update 10.2.1 in 2013 (as of March 2015, the BlackBerry Q10, Q5, Z30, and later models support Miracast streaming). Ubuntu Touch-powered Meizu Pro 5 supported Miracast in OTA-11. The Roku streaming stick and Roku TV (starting October 2014). Most Amazon Fire TV models (except 2017 Fire TV with 4K Ultra HD and Alexa Voice Remote). HTC Vive ScreenBeam from Actiontec Electronics Miracast over Infrastructure Miracast over Infrastructure Connection Establishment Protocol (MS-MICE) allows the capabilities of Miracast but through a local network instead of directly. It has been supported in Microsoft Windows since Windows 10, version 1703. 
MS-MICE connects with computers that are connected to the network via secure Wi-Fi or through Ethernet. See also AirPlay Discovery and Launch (used by Netflix app) Digital Living Network Alliance (DLNA) WiDi version 3.5 to 6.0 supports Miracast; discontinued Google Cast Smart Display (codename Mira, early 2002 screencasting by Microsoft) Wireless HDMI References External links Wi-Fi Alliance list of Miracast certified devices Internet Standards Wi-Fi Wi-Fi Direct Wireless display technologies
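On Windows, one quick way to check whether the wireless adapter and driver advertise Miracast/wireless display support is the `netsh wlan show drivers` command, whose output on recent Windows 10/11 builds includes a "Wireless Display Supported" line; the small wrapper below simply scans for it. The exact wording of that line can vary between driver versions, so treat this as a heuristic sketch rather than an official API.

```python
import subprocess

def miracast_driver_support() -> bool:
    """Heuristic check on Windows: look for the 'Wireless Display Supported'
    line in the output of 'netsh wlan show drivers'."""
    out = subprocess.run(
        ["netsh", "wlan", "show", "drivers"],
        capture_output=True, text=True, check=False,
    ).stdout.lower()
    for line in out.splitlines():
        if "wireless display supported" in line:
            return "yes" in line.split(":", 1)[-1]
    return False  # line absent: very old driver, or not a Wi-Fi adapter

if __name__ == "__main__":
    print("Miracast-capable driver:", miracast_driver_support())
```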
Miracast
Technology
2,402
31,336,432
https://en.wikipedia.org/wiki/Dold%E2%80%93Thom%20theorem
In algebraic topology, the Dold-Thom theorem states that the homotopy groups of the infinite symmetric product of a connected CW complex are the same as its reduced homology groups. The most common version of its proof consists of showing that the composition of the homotopy group functors with the infinite symmetric product defines a reduced homology theory. One of the main tools used in doing so are quasifibrations. The theorem has been generalised in various ways, for example by the Almgren isomorphism theorem. There are several other theorems constituting relations between homotopy and homology, for example the Hurewicz theorem. Another approach is given by stable homotopy theory. Thanks to the Freudenthal suspension theorem, one can see that the latter actually defines a homology theory. Nevertheless, none of these allow one to directly reduce homology to homotopy. This advantage of the Dold-Thom theorem makes it particularly interesting for algebraic geometry. The theorem Dold-Thom theorem. For a connected CW complex X one has πnSP(X) ≅ H̃n(X), where H̃n denotes reduced homology and SP stands for the infinite symmetric product. It is also very useful that there exists an isomorphism φ : πnSP(X) → H̃n(X) which is compatible with the Hurewicz homomorphism h: πn(X) → H̃n(X), meaning that one has a commutative diagram where i* is the map induced by the inclusion i: X = SP1(X) → SP(X). The following example illustrates that the requirement of X being a CW complex cannot be dropped offhand: Let X = CH ∨ CH be the wedge sum of two copies of the cone over the Hawaiian earring. The common point of the two copies is supposed to be the point 0 ∈ H meeting every circle. On the one hand, H1(X) is an infinite group while H1(CH) is trivial. On the other hand, π1(SP(X)) ≅ π1(SP(CH)) × π1(SP(CH)) holds since φ : SP(X) × SP(Y) → SP(X ∨ Y) defined by φ([x1, ..., xn], [y1, ..., yn]) = ([x1, ..., xn, y1, ..., yn]) is a homeomorphism for compact X and Y. But this implies that either π1(SP(CH)) ≅ H1(CH) or π1(SP(X)) ≅ H1(X) does not hold. Sketch of the proof One wants to show that the family of functors hn = πn ∘ SP defines a homology theory. Dold and Thom chose in their initial proof a slight modification of the Eilenberg-Steenrod axioms, namely calling a family of functors (h̃n)n∈N0 from the category of basepointed, connected CW complexes to the category of abelian groups a reduced homology theory if they satisfy If f ≃ g: X → Y, then f* = g*: h̃n(X) → h̃n(Y), where ≃ denotes pointed homotopy equivalence. There are natural boundary homomorphisms ∂ : h̃n(X/A) → h̃n−1(A) for each pair (X, A) with X and A being connected, yielding an exact sequence where i: A → X is the inclusion and q: X → X/A is the quotient map. h̃n(S1) = 0 for n ≠ 1, where S1 denotes the circle. Let (Xλ) be the system of compact subspaces of a pointed space X containing the basepoint. Then (Xλ) is a direct system together with the inclusions. Denote by respectively the inclusion if Xλ ⊂ Xμ. h̃n(Xλ) is a direct system as well with the morphisms . Then the homomorphism induced by the is required to be an isomorphism. One can show that for a reduced homology theory (h̃n)n∈N0 there is a natural isomorphism h̃n(X) ≅ H̃n(X; G) with G = h̃1(S1). Clearly, hn is a functor fulfilling property 1 as SP is a homotopy functor. Moreover, the third property is clear since one has SP(S1) ≃ S1. So it only remains to verify the axioms 2 and 4. The crux of this undertaking will be the first point. 
This is where quasifibrations come into play: The goal is to prove that the map p*: SP(X) → SP(X/A) induced by the quotient map p: X → X/A is a quasifibration for a CW pair (X, A) consisting of connected complexes. First of all, as every CW complex is homotopy equivalent to a simplicial complex, X and A can be assumed to be simplicial complexes. Furthermore, X will be replaced by the mapping cylinder of the inclusion A → X. This will not change anything as SP is a homotopy functor. It suffices to prove by induction that p* : En → Bn is a quasifibration with Bn = SPn(X/A) and En = p*−1(Bn). For n = 0 this is trivially fulfilled. In the induction step, one decomposes Bn into an open neighbourhood of Bn−1 and Bn − Bn−1 and shows that these two sets are, together with their intersection, distinguished, i.e. that p restricted to each of the preimages of these three sets is a quasifibration. It can be shown that Bn is then already distinguished itself. Therefore, p* is indeed a quasifibration on the whole SP(X) and the long exact sequence of such a one implies that axiom 2 is satisfied as p*−1([e]) ≅ SP(A) holds. One may wonder whether p* is not even a fibration. However, that turns out not to be the case: Take an arbitrary path xt for t ∈ [0, 1) in X − A approaching some a ∈ A and interpret it as a path in X/A ⊂ SP(X/A). Then any lift of this path to SP(X) is of the form xtαt with αt ∈ A for every t. But this means that its endpoint aα1 is a multiple of a, hence different from the basepoint, so the Homotopy lifting property fails to be fulfilled. Verifying the fourth axiom can be done quite elementary, in contrast to the previous one. One should bear in mind that there is a variety of different proofs although this one is seemingly the most popular. For example, proofs have been established via factorisation homology or simplicial sets. One can also proof the theorem using other notions of a homology theory (the Eilenberg-Steenrod axioms e.g.). Compatibility with the Hurewicz homomorphism In order to verify the compatibility with the Hurewicz homomorphism, it suffices to show that the statement holds for X = Sn. This is because one then gets a prism for each Element [f] ∈ πn(X) represented by a map f: Sn → X. All sides except possibly the one at the bottom commute in this diagram. Therefore, one sees that the whole diagram commutes when considering where 1 ∈ πn(Sn) ≅ Z gets mapped to. However, by using the suspension isomorphisms for homotopy respectively homology groups, the task reduces to showing the assertion for S1. But in this case the inclusion SP1(S1) → SP(S1) is a homotopy equivalence. Applications Mayer-Vietoris sequence One direct consequence of the Dold-Thom theorem is a new way to derive the Mayer-Vietoris sequence. One gets the result by first forming the homotopy pushout square of the inclusions of the intersection A ∩ B of two subspaces A, B ⊂ X into A and B themselves. Then one applies SP to that square and finally π* to the resulting pullback square. A theorem of Moore Another application is a new proof of a theorem first stated by Moore. It basically predicates the following: Theorem. A path-connected, commutative and associative H-space X with a strict identity element has the weak homotopy type of a generalised Eilenberg-MacLane space. Note that SP(Y) has this property for every connected CW complex Y and that it therefore has the weak homotopy type of a generalised Eilenberg-MacLane space. 
The theorem amounts to saying that all k-invariants of a path-connected, commutative and associative H-space with strict unit vanish. Proof Let Gn = πn(X). Then there exist maps M(Gn, n) → X inducing an isomorphism on πn if n ≥ 2 and an isomorphism on H1 if n = 1 for a Moore space M(Gn, n). These give a map if one takes the maps to be basepoint-preserving. Then the special H-space structure of X yields a map given by summing up the images of the coordinates. But as there are natural homeomorphisms with Π denoting the weak product, f induces isomorphisms on πn for n ≥ 2. But as π1(X) → π1SP(X) = H1(X) induced by the inclusion X → SP(X) is the Hurewicz homomorphism and as H-spaces have abelian fundamental groups, f also induces isomorphisms on π1. Thanks to the Dold-Thom theorem, each SP(M(Gn, n)) is now an Eilenberg-MacLane space K(Gn, n). This also implies that the natural inclusion of the weak product Πn SP(M(Gn, n)) into the cartesian product is a weak homotopy equivalence. Therefore, X has the weak homotopy type of a generalised Eilenberg-MacLane space. Algebraic geometry What distinguishes the Dold-Thom theorem from other alternative foundations of homology like Cech or Alexander-Spanier cohomology is that it is of particular interest for algebraic geometry since it allows one to reformulate homology only using homotopy. Since applying methods from algebraic topology can be quite insightful in this field, one tries to transfer these to algebraic geometry. This could be achieved for homotopy theory, but for homology theory only in a rather limited way using a formulation via sheaves. So the Dold-Thom theorem yields a foundation of homology having an algebraic analogue. Notes References External links Why the Dold-Thom theorem? on MathOverflow The Dold-Thom theorem for infinity categories? on MathOverflow Group structure on Eilenberg-MacLane spaces on StackExchange Theorems in algebraic topology
Dold–Thom theorem
Mathematics
2,358
59,116,830
https://en.wikipedia.org/wiki/NGC%205529
NGC 5529 is an edge-on intermediate spiral galaxy in the constellation Boötes. It is located approximately 144 million light-years (44 megaparsecs) away and was discovered by William Herschel on May 1, 1785. It lies near the dwarf galaxies PGC 50952 and PGC 50925. Polycyclic aromatic hydrocarbons (PAHs) have been detected in the mid-infrared spectrum of NGC 5529; PAHs have been shown to appear only in galaxies with recent star formation. References External links 5529 Boötes Intermediate spiral galaxies 009127 050942
NGC 5529
Astronomy
137
30,012,394
https://en.wikipedia.org/wiki/Rule-based%20machine%20translation
Rule-based machine translation (RBMT; the "Classical Approach" of MT) refers to machine translation systems based on linguistic information about source and target languages, basically retrieved from (unilingual, bilingual or multilingual) dictionaries and grammars covering the main semantic, morphological, and syntactic regularities of each language respectively. Given input sentences (in some source language), an RBMT system transforms them into output sentences (in some target language) on the basis of morphological, syntactic, and semantic analysis of both the source and the target languages involved in a concrete translation task. RBMT has been progressively superseded by more efficient methods, particularly neural machine translation. History The first RBMT systems were developed in the early 1970s. The most important steps of this evolution were the emergence of the following RBMT systems: Systran Japanese MT systems Today, other common RBMT systems include: Apertium GramTrans Types of RBMT There are three different types of rule-based machine translation systems: Direct Systems (Dictionary Based Machine Translation) map input to output with basic rules. Transfer RBMT Systems (Transfer Based Machine Translation) employ morphological and syntactic analysis. Interlingual RBMT Systems (Interlingua) use an abstract intermediate meaning representation. RBMT systems can also be characterized as the systems opposite to Example-based Systems of Machine Translation (Example Based Machine Translation), whereas Hybrid Machine Translation Systems make use of many principles derived from RBMT. Basic principles The main approach of RBMT systems is based on linking the structure of the given input sentence with the structure of the demanded output sentence, necessarily preserving their unique meaning. The following example can illustrate the general frame of RBMT: A girl eats an apple. Source Language = English; Demanded Target Language = German Minimally, to get a German translation of this English sentence one needs: A dictionary that will map each English word to an appropriate German word. Rules representing regular English sentence structure. Rules representing regular German sentence structure. And finally, we need rules according to which one can relate these two structures together. Accordingly, we can state the following stages of translation: 1st: getting basic part-of-speech information of each source word: a = indef.article; girl = noun; eats = verb; an = indef.article; apple = noun 2nd: getting syntactic information about the verb "to eat": NP-eat-NP; here: eat – Present Simple, 3rd Person Singular, Active Voice 3rd: parsing the source sentence: (NP an apple) = the object of eat Often only partial parsing is sufficient to get to the syntactic structure of the source sentence and to map it onto the structure of the target sentence. 4th: translate English words into German a (category = indef.article) => ein (category = indef.article) girl (category = noun) => Mädchen (category = noun) eat (category = verb) => essen (category = verb) an (category = indef. article) => ein (category = indef.article) apple (category = noun) => Apfel (category = noun) 5th: Mapping dictionary entries into appropriate inflected forms (final generation): A girl eats an apple. => Ein Mädchen isst einen Apfel. Ontologies An ontology is a formal representation of knowledge that includes the concepts (such as objects, processes etc.) in a domain and some relations between them.
If the stored information is of linguistic nature, one can speak of a lexicon. In NLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, rule-based systems can be enabled to resolve many (especially lexical) ambiguities on their own. In the following classic examples, as humans, we are able to interpret the prepositional phrase according to the context because we use our world knowledge, stored in our lexicons:I saw a man/star/molecule with a microscope/telescope/binoculars.Since the syntax does not change, a traditional rule-based machine translation system may not be able to differentiate between the meanings. With a large enough ontology as a source of knowledge however, the possible interpretations of ambiguous words in a specific context can be reduced. Building ontologies The ontology generated for the PANGLOSS knowledge-based machine translation system in 1993 may serve as an example of how an ontology for NLP purposes can be compiled: A large-scale ontology is necessary to help parsing in the active modules of the machine translation system. In the PANGLOSS example, about 50,000 nodes were intended to be subsumed under the smaller, manually-built upper (abstract) region of the ontology. Because of its size, it had to be created automatically. The goal was to merge the two resources LDOCE online and WordNet to combine the benefits of both: concise definitions from Longman, and semantic relations allowing for semi-automatic taxonomization to the ontology from WordNet. A definition match algorithm was created to automatically merge the correct meanings of ambiguous words between the two online resources, based on the words that the definitions of those meanings have in common in LDOCE and WordNet. Using a similarity matrix, the algorithm delivered matches between meanings including a confidence factor. This algorithm alone, however, did not match all meanings correctly on its own. A second hierarchy match algorithm was therefore created which uses the taxonomic hierarchies found in WordNet (deep hierarchies) and partially in LDOCE (flat hierarchies). This works by first matching unambiguous meanings, then limiting the search space to only the respective ancestors and descendants of those matched meanings. Thus, the algorithm matched locally unambiguous meanings (for instance, while the word seal as such is ambiguous, there is only one meaning of seal in the animal subhierarchy). Both algorithms complemented each other and helped constructing a large-scale ontology for the machine translation system. The WordNet hierarchies, coupled with the matching definitions of LDOCE, were subordinated to the ontology's upper region. As a result, the PANGLOSS MT system was able to make use of this knowledge base, mainly in its generation element. 
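The five translation stages walked through in the "Basic principles" section above can be caricatured in a few lines of code. The tiny dictionary and agreement rules below are hard-coded just to reproduce the girl-eats-apple example; a real direct or transfer RBMT system would of course rely on full morphological analysis and far larger lexicons and rule sets.

```python
# Toy "direct" RBMT for the running example "A girl eats an apple."
LEXICON = {  # bilingual dictionary (stage 4)
    "a": "ein", "an": "ein", "girl": "Mädchen", "eats": "essen", "apple": "Apfel",
}
GENDER = {"Mädchen": "n", "Apfel": "m"}  # minimal target-language lexical information

def translate(sentence: str) -> str:
    words = sentence.rstrip(".").lower().split()
    subject, verb, obj = words[1], words[2], words[4]     # stages 1-3: trivial parse of Art-N-V-Art-N
    # stage 5: generate inflected German forms
    verb_de = "isst" if LEXICON[verb] == "essen" else LEXICON[verb]   # 3rd person singular present
    obj_de = LEXICON[obj]
    obj_art_de = "einen" if GENDER[obj_de] == "m" else "ein"          # accusative article agreement
    return f"Ein {LEXICON[subject]} {verb_de} {obj_art_de} {obj_de}."

print(translate("A girl eats an apple."))   # Ein Mädchen isst einen Apfel.
```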
Components The RBMT system contains: a SL morphological analyser - analyses a source language word and provides the morphological information; a SL parser - is a syntax analyser which analyses source language sentences; a translator - used to translate a source language word into the target language; a TL morphological generator - works as a generator of appropriate target language words for the given grammatica information; a TL parser - works as a composer of suitable target language sentences; Several dictionaries - more specifically a minimum of three dictionaries: a SL dictionary - needed by the source language morphological analyser for morphological analysis, a bilingual dictionary - used by the translator to translate source language words into target language words, a TL dictionary - needed by the target language morphological generator to generate target language words. The RBMT system makes use of the following: a Source Grammar for the input language which builds syntactic constructions from input sentences; a Source Lexicon which captures all of the allowable vocabulary in the domain; Source Mapping Rules which indicate how syntactic heads and grammatical functions in the source language are mapped onto domain concepts and semantic roles in the interlingua; a Domain Model/Ontology which defines the classes of domain concepts and restricts the fillers of semantic roles for each class; Target Mapping Rules which indicate how domain concepts and semantic roles in the interlingua are mapped onto syntactic heads and grammatical functions in the target language; a Target Lexicon which contains appropriate target lexemes for each domain concept; a Target Grammar for the target language which realizes target syntactic constructions as linearized output sentences. Advantages No bilingual texts are required. This makes it possible to create translation systems for languages that have no texts in common, or even no digitized data whatsoever. Domain independent. Rules are usually written in a domain independent manner, so the vast majority of rules will "just work" in every domain, and only a few specific cases per domain may need rules written for them. No quality ceiling. Every error can be corrected with a targeted rule, even if the trigger case is extremely rare. This is in contrast to statistical systems where infrequent forms will be washed away by default. Total control. Because all rules are hand-written, you can easily debug a rule-based system to see exactly where a given error enters the system, and why. Reusability. Because RBMT systems are generally built from a strong source language analysis that is fed to a transfer step and target language generator, the source language analysis and target language generation parts can be shared between multiple translation systems, requiring only the transfer step to be specialized. Additionally, source language analysis for one language can be reused to bootstrap a closely related language analysis. Shortcomings Insufficient amount of really good dictionaries. Building new dictionaries is expensive. Some linguistic information still needs to be set manually. It is hard to deal with rule interactions in big systems, ambiguity, and idiomatic expressions. Failure to adapt to new domains. Although RBMT systems usually provide a mechanism to create new rules and extend and adapt the lexicon, changes are usually very costly and the results, frequently, do not pay off. References Literature Arnold, D.J. et al. 
(1993): Machine Translation: an Introductory Guide Hutchins, W.J. (1986): Machine Translation: Past, Present, Future Links First International Workshop on Free/Open-Source Rule-Based Machine Translation https://web.archive.org/web/20120306014535/http://www.inf.ed.ac.uk/teaching/courses/mt/lectures/history.pdf https://web.archive.org/web/20150914205051/http://www.csse.unimelb.edu.au/research/lt/nlp06/materials/Bond/mt-intro.pdf Machine translation Machine translation, example-based
Rule-based machine translation
Technology
2,123
15,443,296
https://en.wikipedia.org/wiki/Ouvrage%20Janus
Ouvrage Janus is a work (gros ouvrage) of the Maginot Line's Alpine extension, the Alpine Line, located to the east of Briançon near the Col de Montgenèvre. The ouvrage consists of one entry block, two infantry blocks, two artillery blocks, two observation blocks and one combination block at an altitude of , the second highest fortification in the Alps in 1940. Built on the site of the old Fort Janus, it retained the old fort's 95mm naval guns and added two 75mm guns. Fort du Janus The location was known from the end of the 18th century as the Château Jouan, occupied by a Vauban-era round tower. In 1883 a Séré de Rivières system fortification was begun on the massif, called the Fort du Janus. Work continued until 1889 with a blockhouse on top of the position and a rock-cut battery in the face of the mountain, which housed four 95mm naval guns. In 1891-92 the blockhouse was expanded to two levels for a barracks, and from 1898 to 1906 a subterranean barracks was excavated. The whole was surrounded by a perimeter wall. The fort was armed with six guns on the ramparts in addition to the four naval guns in their unique casemate, which was added between 1898 and 1906. The garrison was 120 men. The perimeter was laid out with re-entrant angles to sweep the walls with fields of fire. The underground component comprised three large chambers, a cistern with a capacity of 100 cubic meters of water, a kitchen, a small magazine and a connection to the 95mm gun casemate. The gun positions were separated by prominent buttresses to prevent fragments from affecting the entire battery, and each gun was provided with an exhaust hood for gun fumes. The 95mm battery provided flanking fire to the Gondran line on the Montgenèvre massif. Ouvrage du Janus Given the existing facilities and the site's strategic importance, the site was selected for a Maginot ouvrage in 1926. Work began in 1931, abandoning some of the older work and creating new underground facilities to the Maginot Line standard. A gallery connected the new position to the Séré de Rivières works. Work stopped in July 1935 as a result of the Stresa Front agreement with Italy, but restarted in 1938 as relations with Italy and Germany deteriorated. The cost of the new work amounted to 10.3 million francs (not including armament). When the position was occupied in 1938, numerous deficiencies in heat, ventilation and optical sighting equipment were uncovered. The Maginot ouvrage incorporates subterranean elements of the old fort, particularly the entry and underground barracks. New galleries were extended to the ends of the rock fin to Blocks 4, 5, 6 and 7 on the north and Blocks 1 and 2 on the south. The massive Block 8 inherited from the original fort occupies the center of the ridge, facing southeast. Description Block 1 (entry): one machine gun embrasure and one heavy machine gun/47mm anti-tank gun embrasure. Block 2 (artillery): two heavy twin machine gun cloches and two 81mm mortar embrasures. Block 3 (artillery): two 75mm gun embrasures. Block 4 (observation): one observation cloche and one machine gun embrasure. Block 5 (observation): no armament. Block 6 (infantry): one heavy twin machine gun cloche. Block 7 (infantry): one heavy twin machine gun cloche. Block 8 (artillery): four 95mm naval guns, incorporated from the earlier fort. History See Fortified Sector of the Dauphiné for a broader discussion of the Dauphiné sector of the Alpine Line.
On 19 June 1940 the ouvrage was fired upon by the 149mm guns of the Italian Fort Chaberton, higher in altitude. The bombardment continued the next day and on the 21st, with significant damage to the surface installations and the 95mm gun embrasures. The 6th Battery of the 154th Régiment d'Artillerie de Position, armed with four 280mm mortars, established dispersed positions and opened fire on 21 June, guided by observers at Janus, silencing the Chaberton guns. On 23 and 24 June Janus fired on Italian positions, with continued heavy French mortar fire directed at Chaberton. Janus's commanding officer altered the guns' shields to open a broader field of fire against the Col de Montgenèvre. The armistice of 25 June brought fighting to an end. After the 1940 armistice, Italian forces occupied the Alpine ouvrages and disarmed them. In August 1943, southern France was occupied by the German 19th Army, which took over many of the Alpine positions that had been occupied by the Italians until Italy's withdrawal from the war in September 1943. Janus was recaptured by Free French forces on 4 September 1944. Immediately after the war, the Briançon region was regarded as an area of medium priority for restoration and reuse by the military. By the 1950s the positions in the Southeast of France were restored and operational again. However, by 1960, with France's acquisition of nuclear weapons, the cost and effectiveness of the Maginot system were called into question. Between 1964 and 1971 nearly all of the Maginot fortifications were deactivated. The site is presently owned by Montgenèvre and is under study for public access. The above-ground portion of the site is unsecured. Damage from the 1940 Italian bombardment has not been repaired. The Maginot positions are closed to access. See also List of Alpine Line ouvrages References Bibliography Allcorn, William. The Maginot Line 1928-45. Oxford: Osprey Publishing, 2003. Kaufmann, J.E. and Kaufmann, H.W. Fortress France: The Maginot Line and French Defenses in World War II, Stackpole Books, 2006. Kaufmann, J.E., Kaufmann, H.W., Jancovič-Potočnik, A. and Lang, P. The Maginot Line: History and Guide, Pen and Sword, 2011. Marquilie, Franck. Le fort du Janus, aboutissement de 250 ans de fortification dans le Briançonnais. Éditions Atelier Rankki, 200 p., 2012. Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 4 - La fortification alpine. Paris, Histoire & Collections, 2009. Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 5. Paris, Histoire & Collections, 2009. External links Janus (gros ouvrage du) at fortiff.be Janus (fort du) at fortiff.be Patrimoine XXeme, Forteresse du Janus Fort du Janus at Fortiff' Séré JANU Maginot Line Alpine Line Séré de Rivières system World War II museums in France Fortifications of Briançon
Ouvrage Janus
Engineering
1,480
7,599,559
https://en.wikipedia.org/wiki/Gear%20manufacturing
Gear manufacturing refers to the making of gears. Gears can be manufactured by a variety of processes, including casting, forging, extrusion, powder metallurgy, and blanking. As a general rule, however, machining is applied to achieve the final dimensions, shape and surface finish in the gear. The initial operations that produce a semifinished part ready for gear machining are referred to as blanking operations; the starting product in gear machining is called a gear blank. Selection of materials The gear material should have the following properties: High tensile strength to prevent failure against static loads High endurance strength to withstand dynamic loads Low coefficient of friction Good manufacturability Gear manufacturing processes There are multiple ways in which gear blanks can be shaped through the cutting and finishing processes. Gear forming In gear form cutting, the cutting edge of the cutting tool has a shape identical with the shape of the space between the gear teeth. Two machining operations, milling and broaching, can be employed to form cut gear teeth. Form milling In form milling, the cutter, called a form cutter, travels axially along the length of the gear tooth at the appropriate depth to produce the gear tooth. After each tooth is cut, the cutter is withdrawn, the gear blank is rotated, and the cutter proceeds to cut another tooth. The process continues until all teeth are cut. Broaching Broaching can also be used to produce gear teeth and is particularly applicable to internal teeth. The process is rapid and produces fine surface finish with high dimensional accuracy. However, because broaches are expensive and a separate broach is required for each size of gear, this method is suitable mainly for high-quality production. Gear generation In gear generation, the tooth flanks are obtained as an outline of the successive positions of the cutter, which resembles in shape the mating gear in the gear pair. There are two machining processes employed: shaping and milling. There are several modifications of these processes for the different cutting tools used. Gear hobbing Gear hobbing is a machining process in which gear teeth are progressively generated by a series of cuts with a helical cutting tool. All motions in hobbing are rotary, and the hob and gear blank rotate continuously as in two gears meshing until all teeth are cut. Finishing operations As produced by any of the processes described, the surface finish and dimensional accuracy may not be adequate for certain applications. Several finishing operations are available, including the conventional process of shaving, and a number of abrasive operations, including grinding, honing, and lapping. See also American Gear Manufacturers Association, standards organization for gears References Gears
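To make the form-milling description concrete, the short sketch below works out the basic blank and indexing figures for a spur gear. It is an illustrative example only: the function, the standard metric-module proportions it assumes (addendum = module, whole depth ≈ 2.25 × module) and the example numbers are not taken from the article.

```python
# Illustrative sketch (not from the article): blank and indexing figures for
# form milling a spur gear, assuming standard metric-module tooth proportions.

def form_milling_plan(module_mm: float, tooth_count: int):
    """Return pitch diameter, blank diameter, cutting depth and indexing angle."""
    pitch_diameter = module_mm * tooth_count            # d = m * z
    outside_diameter = pitch_diameter + 2 * module_mm   # blank diameter (d + 2 * addendum)
    whole_depth = 2.25 * module_mm                      # depth of cut per tooth space
    index_angle = 360.0 / tooth_count                   # blank rotation between cuts
    return pitch_diameter, outside_diameter, whole_depth, index_angle

if __name__ == "__main__":
    d, od, h, a = form_milling_plan(module_mm=2.0, tooth_count=24)
    print(f"pitch dia {d:.1f} mm, blank dia {od:.1f} mm, "
          f"depth of cut {h:.2f} mm, index {a:.1f} deg per tooth")
```

With these assumed values the blank would be rotated 15° after each of the 24 cuts, which is exactly the index-and-cut cycle the form-milling paragraph describes.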
Gear manufacturing
Engineering
534
8,970,987
https://en.wikipedia.org/wiki/Text%20Analysis%20Portal%20for%20Research
TAPoR (Text Analysis Portal for Research) is a gateway that highlights tools and code snippets usable for textual criticism of all types. The project is housed at the University of Alberta, and is currently led by Geoffrey Rockwell, Stéfan Sinclair, Kirsten C. Uszkalo, and Milena Radzikowska. Users of the portal explore tools to use in their research, and can rate, review, and comment on tools, browse curated lists of recommended tools, and add tags to tools. Tool pages on TAPoR consist of a short description, authorial information, a screenshot of the tool, tags, suggested related tools, and user ratings and comments. Code snippet pages also contain an excerpt of code and a link to the full code's location online. An earlier version of the portal was based at McMaster University, and consisted of a network of six leading Humanities computing centres in Canada: McMaster, University of Victoria (in collaboration with Malaspina UC), University of Alberta, University of Toronto, Université de Montréal (law) and University of New Brunswick. TAPoR developed a network of nodes at universities across Canada which would have servers and local labs where the best text tools, be they from industry or other sources, could be aggregated and made available. These would be supplemented by representative texts and special infrastructure ... This earlier version allowed researchers to experiment with text analysis tools by either using them without an account through the "TAPoR Tools" interface, or getting an account where they could define texts they wanted to operate on and create a list of favorite tools. TAPoR has also sponsored CaSTA (Canadian Symposium on Text Analysis) conferences including The Face of Text (CaSTA 2004) which focused on text visualization. Selected papers from "The Face of Text" were published by Text Technology, a journal of computer text processing. Citations External links TAPoR 3.0 TAPoR 1.0 on the Wayback Machine McMaster University Computational linguistics
Text Analysis Portal for Research
Technology
407
5,117,642
https://en.wikipedia.org/wiki/HD%20125288
HD 125288 is a single star in the southern constellation of Centaurus. It has the Bayer designation v Centauri (a lower-case v), while HD 125288 is the star's identifier in the Henry Draper catalogue. The object has a blue-white hue and is faintly visible to the naked eye with an apparent visual magnitude of 4.30. Based on spectroscopic measurements, it is located at a distance of approximately 1,270 light years from Earth. This is a candidate runaway star that is moving to the west and falling back into the Galactic plane. It has an absolute magnitude of −3.56. This massive B-type supergiant star has a stellar classification of B5Ib/II or B6Ib. It is around 29 million years old and has 9 times the mass of the Sun. The star has expanded to 21 times the girth of the Sun and is spinning with a projected rotational velocity of 23 km/s. It is radiating 12,600 times the luminosity of the Sun from its photosphere at an effective temperature of 13,700 K. In 2016, an asterism including HD 125288 (SAO 241641) was unofficially identified in honor of David Bowie. References B-type supergiants Centaurus Centauri, v Durchmusterung objects 125288 070069 5358
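As a rough illustration of how the quoted figures relate, the sketch below applies the standard distance-modulus relation M = m − 5·log10(d / 10 pc) to the article's apparent magnitude and distance. Interstellar extinction is ignored, so the result only approximates the quoted absolute magnitude; the code itself is illustrative and not taken from the article.

```python
# Rough check of the quoted absolute magnitude via the distance modulus,
# ignoring interstellar extinction (illustrative only).
import math

LY_PER_PARSEC = 3.2616            # light years per parsec

apparent_mag = 4.30               # m, from the article
distance_ly = 1270.0              # d, from the article
distance_pc = distance_ly / LY_PER_PARSEC

absolute_mag = apparent_mag - 5 * math.log10(distance_pc / 10.0)
print(f"M ~ {absolute_mag:.2f}")  # about -3.65, close to the quoted -3.56
```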
HD 125288
Astronomy
285
52,219,057
https://en.wikipedia.org/wiki/Luttinger%E2%80%93Ward%20functional
In solid state physics, the Luttinger–Ward functional, proposed by Joaquin Mazdak Luttinger and John Clive Ward in 1960, is a scalar functional of the bare electron-electron interaction and the renormalized one-particle propagator. In terms of Feynman diagrams, the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible diagrams, i.e., all diagrams without particles going in or out that do not fall apart if one removes two propagator lines. It is usually written as Φ[G] or Φ[G, U], where G is the one-particle Green's function and U is the bare interaction. The Luttinger–Ward functional has no direct physical meaning, but it is useful in proving conservation laws. The functional is closely related to the Baym–Kadanoff functional constructed independently by Gordon Baym and Leo Kadanoff in 1961. Some authors use the terms interchangeably; if a distinction is made, then the Baym–Kadanoff functional is identical to the two-particle irreducible effective action, which differs from the Luttinger–Ward functional by a trivial term. Construction Given a system characterized by an action S in terms of Grassmann fields ψ, the partition function Z[J] can be expressed as a path integral over these fields in the presence of a binary (two-point) source field J. By expansion in the Dyson series, one finds that Z[J] is the sum of all (possibly disconnected) closed Feynman diagrams. Z[J] in turn is the generating functional of the N-particle Green's functions. The linked-cluster theorem asserts that the effective action W[J] = ln Z[J] is the sum of all closed, connected, bare diagrams. W[J] in turn is the generating functional for the connected Green's functions. As an example, the two-particle connected Green's function is the full two-particle Green's function with the disconnected products of one-particle Green's functions subtracted. To pass to the two-particle irreducible (2PI) effective action, one performs a Legendre transform of W[J] with respect to the binary source field. One chooses an, at this point arbitrary, convex quantity as the source and obtains the 2PI functional, also known as the Baym–Kadanoff functional. Unlike the connected case, one more step is required to obtain a generating functional from the two-particle irreducible effective action because of the presence of a non-interacting part. By subtracting it, one obtains the Luttinger–Ward functional Φ[G]; the quantity Σ entering this subtraction is the self-energy. Along the lines of the proof of the linked-cluster theorem, one can show that Φ is the generating functional for the two-particle irreducible propagators. Properties Diagrammatically, the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible Feynman diagrams (also known as “skeleton” diagrams): The diagrams are closed as they do not have any external legs, i.e., no particles going in or out of the diagram. They are “bold” because they are formulated in terms of the interacting or bold propagator rather than the non-interacting one. They are two-particle irreducible since they do not become disconnected if we sever up to two fermionic lines. The Luttinger–Ward functional is related to the grand potential of a system, and it is a generating functional for irreducible vertex quantities: its first functional derivative with respect to G gives the self-energy, while its second derivative gives the partially two-particle irreducible four-point vertex (the standard relations are sketched below). While the Luttinger–Ward functional exists, it can be shown to be not unique for Hubbard-like models. In particular, the irreducible vertex functions show a set of divergencies, which causes the self-energy to bifurcate into a physical and an unphysical solution. 
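The relations referred to in the Properties section are usually written as follows. This is an illustrative sketch in one common convention; sign choices and factors of temperature differ between references and are not taken from the article.

```latex
% Standard Luttinger--Ward relations (one common convention; prefactors and signs
% vary between references). Arguments 1, 2, ... collect space-time and spin indices.
\begin{align}
  \Sigma(1,2) &= \frac{\delta \Phi[G]}{\delta G(2,1)}
      && \text{self-energy as first derivative} \\
  \Gamma^{\mathrm{2PI}}(12;34) &= \frac{\delta^{2} \Phi[G]}{\delta G(2,1)\,\delta G(4,3)}
      && \text{partially 2PI four-point vertex} \\
  \Omega &\;\propto\; \Phi[G] \;-\; \operatorname{Tr}\!\left(\Sigma G\right)
      \;+\; \operatorname{Tr}\ln(-G)
      && \text{relation to the grand potential}
\end{align}
```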
Baym and Kadanoff showed that the conservation laws can be satisfied for any functional Φ, thanks to Noether's theorem. This follows from the fact that the equation of motion of G in response to one-body external fields respects the space- and time-translational symmetries as well as the abelian gauge symmetry (phase symmetry), as long as the self-energy in the equation of motion is given as the derivative of Φ. Note that the reverse is also true. Based on the diagrammatic analysis, what Baym found is that the functional derivative of the self-energy with respect to the Green's function must be symmetric, δΣ(1,2)/δG(3,4) = δΣ(3,4)/δG(1,2), in order to satisfy the conservation laws. This is nothing but the complete-integrability condition, implying the existence of a functional Φ such that Σ = δΦ/δG (recall the integrability condition for an exact differential). Thus the remaining problem is how to determine Φ approximately. Such approximations are called conserving approximations. Some examples: The (fully self-consistent) GW approximation is equivalent to truncating Φ to the so-called ring diagrams (a ring diagram consists of polarisation bubbles connected by interaction lines). Dynamical mean field theory is equivalent to taking only purely local diagrams into account, so that Φ reduces to a sum of on-site contributions Φ[G_ii], where i are lattice site indices. See also Luttinger's theorem Ward identity References Condensed matter physics Fermions
Luttinger–Ward functional
Physics,Chemistry,Materials_science,Engineering
1,015
14,732,574
https://en.wikipedia.org/wiki/Golden%20Bed
The Golden Bed is a bed designed by the English architect and designer William Burges in 1879 for the guest bedroom of the home that he designed for himself in Holland Park, The Tower House. It is now in the collection of the Victoria & Albert Museum (V&A) in South Kensington. The bed was made by John Walden and carved by Thomas Nicholls. The painting in the central panel of the headboard was executed by Henry Holiday, and the motifs and figures on the bed painted by Fred Weekes. The bed is made from polished hardwood, mahogany and pine. The theme for the guest room has been variously described as 'The Earth and Her Productions' and 'Vita Nova' ('New Life'). The Golden Bed matched the rest of the furniture designed for the guest bedroom, in keeping with the room's decorative scheme. Design The Golden Bed is a large bed, measuring long, high and wide. It is made from wood, gilded in gold. The bed is decorated with carvings and 'fragments of illuminated manuscripts under glass and rock crystal'. Two mirrors are inset into the headboard, which features a painting by Thomas Weekes of the Judgement of Paris at its centre. The three gods in the 'Judgement of Paris' are wearing clothes of the 13th century, with Mercury standing to the left of Paris and Venus bowing to Paris on the right. The painting had previously been part of a larger painted panel at Burges' rooms in Buckingham Street, where he had lived before Tower House. The sideboards of the bed are ornamented with glass covering pieces of illuminated vellum and fragments of textiles. Grotesque figures of a female and a male feature in the side brackets at the head of the bed. The bed head and foot posts are surmounted by half orbs of rock crystal. The foot of the bed is inscribed with the Latin phrase 'VITA NOVA' ('New Life'), with the posts of the bed inscribed 'WILLIAM BURGES ME FIERI FECIT' ('William Burges had me made') on the right, and 'ANNO DOMINI MDCCCLXXIX' ('In the Year of Our Lord 1879') on the left. History The estimate book Burges used for Tower House records the bed on 12 March 1879 as costing £39 13s. Thomas Nicholls' carving for the bed is marked by a payment of £15 15s in June that year. From 1952 to 1953 the Exhibition of Victorian and Edwardian Decorative Arts was held at the V&A, at which the Golden Bed and an accompanying washstand, also from the guest bedroom at The Tower House, were lent for display. Oliver Poole, 1st Baron Poole was originally asked to lend the bed but Poole subsequently requested that Colonel T.H. Minshall D.S.O. be acknowledged as the owner. Minshall had owned Tower House in the 1920s. Poole and his mother, Mrs. Minshall, later agreed to donate the bed and washstand to the V&A in the name of Colonel Minshall. In 2002 the Golden Bed was lent to Knightshayes Court in Tiverton, Devon, by the V&A. Knightshayes Court had been built by Burges from 1867 to 1874. The bed joined a wardrobe designed by Burges on loan from Tower House in a newly created 'Burges room' at Knightshayes Court. References 1879 in art Beds Collection of the Victoria and Albert Museum William Burges furniture
Golden Bed
Biology
720
56,512,633
https://en.wikipedia.org/wiki/Misliya%20Cave
Misliya Cave, also known as the "Brotzen Cave" after Fritz Brotzen, who first described it in 1927, is a collapsed cave at Mount Carmel, Israel, containing archaeological layers from the Lower Paleolithic and Middle Paleolithic periods. The site is significant in paleoanthropology for the discovery of what were, from 2018 to 2019, considered to be the earliest known remains attributed to Homo sapiens outside Africa, dated to 185,000 years ago. Since the time of its discovery in 2011, Jebel Faya, in the United Arab Emirates, had been considered to be the oldest settlement of anatomically-modern humans outside Africa, with its deepest assemblage being dated to 125,000 years ago. Excavations Excavations by teams from the University of Haifa and Tel Aviv University were conducted in the 2000/1 season, yielding finds dated to between 300,000 and 150,000 years ago. Misliya-1 fossil Of special interest is the Misliya-1 fossil, an upper jawbone discovered in 2002, at first dated to "possibly 150,000 years ago" and classified as "early modern Homo sapiens" (EMHS). In January 2018, the date of the fossil was revised to between 177,000 and 194,000 years ago (95% CI). This qualifies Misliya-1 as one of the oldest known fossils of H. sapiens, of comparable age to the Omo remains (as well as those of Herto, identified as "archaic Homo sapiens", or Homo sapiens idaltu), and among the second-oldest modern human remains found outside Africa, the oldest being the Apidima 1 skull from the southwestern Peloponnese, dated to roughly 210,000 years ago. See also Anatomically modern humans List of human evolution fossils Northern Dispersal Recent African origin of modern humans References External links Mina Weinstein-Evron et al.: Introducing Misliya Cave, Mount Carmel, Israel: A new continuous Lower/Middle Paleolithic sequence in the Levant. In: Eurasian Prehistory. Band 1, Nr. 1, 2003, S. 31–55. Mina Weinstein-Evron et al.: A Window into Early Middle Paleolithic Human Occupational Layers: Misliya Cave, Mount Carmel, Israel. In: Paleo Anthropology. 2012: 202−228, doi:10.4207/PA.2012.ART75 Hélène Valladas, Norbert Mercier, Israel Hershkovitz et al.: Dating the Lower to Middle Paleolithic transition in the Levant: A view from Misliya Cave, Mount Carmel, Israel. In: Journal of Human Evolution. Band 65, Nr. 5, 2013, S. 585–593, doi:10.1016/j.jhevol.2013.07.005 Caves of Israel Prehistoric sites in Israel Recent African origin of modern humans Paleoanthropological sites Mount Carmel
Misliya Cave
Biology
617
30,321,101
https://en.wikipedia.org/wiki/Transmission-based%20precautions
Transmission-based precautions are infection-control precautions in health care, in addition to the so-called "standard precautions". They are the latest routine infection prevention and control practices applied for patients who are known or suspected to be infected or colonized with infectious agents, including certain epidemiologically important pathogens, which require additional control measures to effectively prevent transmission. Universal precautions are also important to address alongside transmission-based precautions: universal precautions is the practice of treating all bodily fluids as if they were infected with HIV, HBV, or other blood-borne pathogens. Transmission-based precautions build on the so-called "standard precautions" which institute common practices, such as hand hygiene, respiratory hygiene, personal protective equipment protocols, soiled equipment and injection handling, patient isolation controls and risk assessments to limit spread between patients. History The following table shows the history of guidelines for transmission-based precautions in U.S. hospitals as of 2007. Rationale for use in healthcare setting Communicable diseases occur as a result of the interaction between a source (or reservoir) of infectious agents, a mode of transmission for the agent, a susceptible host with a portal of entry receptive to the agent, and the environment. The control of communicable diseases may involve changing one or more of these components, the first three of which are influenced by the environment. These diseases can have a wide range of effects, varying from silent infection – with no signs or symptoms – to severe illness and death. According to its nature, a certain infectious agent may demonstrate one or more of the following modes of transmission: direct and indirect contact transmission, droplet transmission, and airborne transmission. Transmission-based precautions are used when the route(s) of transmission is (are) not completely interrupted using "standard precautions" alone. Standard precautions Standard precautions include: Hand hygiene or hand washing to prevent oneself from contracting an illness or disease and prevent the spread of pathogens (e.g. bacteria, viruses, parasites) to other people, thus reducing the potential for transmission. Hand hygiene can be accomplished with different modalities including alcohol-based hand sanitizers, soap and water, or antiseptic hand wash. There are techniques and benefits to using one modality over another. Utilization of alcohol-based hand sanitizer is generally recommended when the hands are not visibly soiled, and before and after contact with a person (e.g. a patient in a healthcare setting) or object. Soap and water, used with proper technique, are preferred for visibly soiled hands or in situations where pathogens on the hands cannot be killed with alcohol-based hand sanitizers (e.g. spore-producing organisms like Clostridioides difficile). Personal protective equipment (PPE) in cases of infectious material exposure; respiratory hygiene and cough etiquette principles; patient isolation controls; soiled equipment handling; and injection handling. Research Research studies in the form of randomized controlled trials and simulation studies are needed to determine the most effective types of personal protective equipment for preventing the transmission of infectious diseases to healthcare workers. There is low-quality evidence that supports making improvements or modifications to personal protective equipment to help decrease contamination. 
Examples of modifications include adding tabs to masks or gloves to ease removal and designing protective gowns so that gloves are removed at the same time. In addition, there is weak evidence that the following PPE approaches or techniques may lead to reduced contamination and improved compliance with PPE protocols: wearing double gloves, following specific doffing (removal) procedures such as those from the CDC, and providing people with spoken instructions while removing PPE. Definitions Three categories of transmission-based precautions have been designed with respect to the modes of transmission, namely contact precautions, droplet precautions, and airborne precautions. For some diseases with multiple routes of transmission, more than one transmission-based precautions category may be used. When used either singly or in combination, they are always used in addition to standard precautions. Contact precautions Contact precautions are intended to prevent transmission of infectious agents, including epidemiologically important microorganisms, which are spread by direct or indirect contact with the patient or the patient's environment. The specific agents and circumstances for which contact precautions are indicated are found in Appendix A of the 2007 CDC Guidance. The application of contact precautions for patients infected or colonized with multidrug-resistant organisms (MDROs) is described in the 2006 HICPAC/CDC MDRO guideline. Contact precautions also apply where the presence of excessive wound drainage, fecal incontinence, or other discharges from the body suggests an increased potential for extensive environmental contamination and risk of transmission. A single-patient room is preferred for patients who require contact precautions. When a single-patient room is not available, consultation with infection control personnel is recommended to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate). In multi-patient rooms, a spatial separation of more than 3 feet between beds is advised to reduce the opportunities for inadvertent sharing of items between the infected/colonized patient and other patients. Healthcare personnel caring for patients on contact precautions wear a gown and gloves for all interactions that may involve contact with the patient or potentially contaminated areas in the patient's environment. Donning PPE upon room entry and discarding before exiting the patient room is done to contain pathogens, especially those that have been implicated in transmission through environmental contamination (e.g., VRE, C. difficile, noroviruses and other intestinal tract pathogens; RSV). Droplet precautions As of 2020, the classification systems of routes of respiratory disease transmission are based on a conceptual division of large versus small droplets, as defined in the 1930s. Droplet precautions are intended to prevent transmission of certain pathogens spread through close respiratory or mucous membrane contact with respiratory secretions, namely respiratory droplets. Because certain pathogens do not remain infectious over long distances in a healthcare facility, special air handling and ventilation are not required to prevent droplet transmission. Infectious agents for which droplet precautions alone are indicated include B. pertussis, influenza virus, adenovirus, rhinovirus, N. meningitidis, and group A streptococcus (for the first 24 hours of antimicrobial therapy). 
A single-patient room is preferred for patients who require droplet precautions. When a single-patient room is not available, consultation with infection control personnel is recommended to assess the various risks associated with other patient placement options (e.g., cohorting, keeping the patient with an existing roommate). Spatial separation of more than 3 feet and drawing the curtain between patient beds are especially important for patients in multi-bed rooms with infections transmitted by the droplet route. Healthcare personnel wear a simple mask (a respirator is not necessary) for close contact with an infectious patient, which is generally donned upon room entry. Patients on droplet precautions who must be transported outside of the room should wear a mask if tolerated and follow respiratory hygiene/cough etiquette. Airborne precautions Airborne precautions prevent transmission of infectious agents that remain infectious over long distances when suspended in the air (e.g., rubeola virus [measles], varicella virus [chickenpox], M. tuberculosis, and possibly SARS-CoV). The preferred placement for patients who require airborne precautions is in an airborne infection isolation room (AIIR). An AIIR is a single-patient room that is equipped with special air handling and ventilation capacity that meet the American Institute of Architects/Facility Guidelines Institute (AIA/FGI) standards for AIIRs (i.e., monitored negative pressure relative to the surrounding area, 12 air changes per hour (ACH) for new construction and renovation and 6 air changes per hour for existing facilities, and air exhausted directly to the outside or recirculated through HEPA filtration before return). Airborne infection isolation rooms are designed to prevent the spread of airborne diseases. Their heating, ventilation and air conditioning (HVAC) criteria are given by the CDC, IDPH and ASHRAE Standard 170. The CDC regulations only specify 12 ACH and do not have any criteria on temperature or humidity. Meanwhile, IDPH/ASHRAE Standard 170 has more detailed design criteria for HVAC systems. According to those regulations, the isolation rooms must be able to maintain the room temperature at around 70°F to 75°F, while keeping the relative humidity (RH) at a minimum of 30% in winter and a maximum of 60% in summer. The specified airflow is 12 ACH total / 2 ACH of outdoor air (OA), and the pressure should be negative relative to the adjacent spaces. There are also architectural design requirements for the rooms: walls should run slab to slab, plaster or drywall ceilings and sliding self-closing doors are preferred, and all leakage paths should be sealed. The guidelines specified by the CDC and IDPH/ASHRAE Standard 170 focus on maintaining room temperatures within a specified range; one consideration is how relative humidity affects the cooling systems used to meet these stringent temperature requirements. Locations with low relative humidity can rely on the evaporative cooling systems used in HVAC plants, but as the relative humidity rises above roughly 60%, evaporative cooling systems become ineffective and have to be replaced with refrigerated cooling systems. 
This is done to limit the corrosive action of moisture on susceptible surfaces in isolation rooms: because evaporative cooling is slow at higher relative humidity, moisture remains in contact with those surfaces for longer. For example, during the annual monsoon season in Arizona, cooling performance is adversely affected by high relative humidity. Some states require the availability of such rooms in hospitals, emergency departments, and nursing homes that care for patients with M. tuberculosis. A respiratory protection program that includes education about use of respirators, fit-testing, and user seal checks is required in any facility with AIIRs. In settings where airborne precautions cannot be implemented due to limited engineering resources (e.g., physician offices), masking the patient, placing the patient in a private room (e.g., office examination room) with the door closed, and providing N95 or higher level respirators or masks if respirators are not available for healthcare personnel will reduce the likelihood of airborne transmission until the patient is either transferred to a facility with an AIIR or returned to the home environment, as deemed medically appropriate. Healthcare personnel caring for patients on airborne precautions wear a mask or respirator, depending on the disease-specific recommendations (Appendix A), that is donned prior to room entry. Whenever possible, non-immune HCWs should not care for patients with vaccine-preventable airborne diseases (e.g., measles, chickenpox, and smallpox). Syndromic and empirical use Since the infecting agent often is not known at the time of admission to a healthcare facility, transmission-based precautions are used empirically, according to the clinical syndrome and the likely etiologic agents at the time, and then modified when the pathogen is identified or a transmissible infectious etiology is ruled out. Diagnosis of many infections requires laboratory confirmation. Since laboratory tests, especially those that depend on culture techniques, often require two or more days for completion, transmission-based precautions must be implemented while test results are pending based on the clinical presentation and likely pathogens. Use of appropriate transmission-based precautions at the time a patient develops symptoms or signs of transmissible infection, or arrives at a healthcare facility for care, reduces transmission opportunities. While it is not possible to identify prospectively all patients needing transmission-based precautions, certain clinical syndromes and conditions carry a sufficiently high risk to warrant their use empirically while confirmatory tests are pending. ¹ Patients with the syndromes or conditions listed below may present with atypical signs or symptoms (e.g. neonates and adults with pertussis may not have paroxysmal or severe cough). The clinician's index of suspicion should be guided by the prevalence of specific conditions in the community, as well as clinical judgment. ² The organisms listed under the column "Potential pathogens" are not intended to represent the complete, or even most likely, diagnoses, but rather possible etiologic agents that require additional precautions beyond standard precautions until they can be ruled out. Recommendations for specific infections Following are recommendations for transmission-based precautions for specific infections per the US Healthcare Infection Control Practices Advisory Committee as of 2007. 
¹ Type of precautions: A, airborne; C, contact; D, droplet; S, standard; when A, C, and D are specified, also use S. ² Duration of precautions: CN, until off antimicrobial treatment and culture-negative; DI, duration of illness (with wound lesions, DI means until wounds stop draining); DE, until environment completely decontaminated; U, until time specified in hours (hrs) after initiation of effective therapy; Unknown: criteria for establishing eradication of the pathogen have not been determined. Discontinuation Transmission-based precautions remain in effect for limited periods of time (i.e., while the risk for transmission of the infectious agent persists or for the duration of the illness; Appendix A). For most infectious diseases, this duration reflects known patterns of persistence and shedding of infectious agents associated with the natural history of the infectious process and its treatment. For some diseases (e.g., pharyngeal or cutaneous diphtheria, RSV), transmission-based precautions remain in effect until culture or antigen-detection test results document eradication of the pathogen and, for RSV, symptomatic disease is resolved. For other diseases (e.g., M. tuberculosis), state laws and regulations, and healthcare facility policies, may dictate the duration of precautions. In immunocompromised patients, viral shedding can persist for prolonged periods of time (many weeks to months) and transmission to others may occur during that time; therefore, the duration of contact and/or droplet precautions may be prolonged for many weeks. The duration of contact precautions for patients who are colonized or infected with MDROs remains undefined. MRSA is the only MDRO for which effective decolonization regimens are available. However, carriers of MRSA who have negative nasal cultures after a course of systemic or topical therapy may resume shedding MRSA in the weeks that follow therapy. Although early guidelines for VRE suggested discontinuation of contact precautions after three stool cultures obtained at weekly intervals proved negative, subsequent experiences have indicated that such screening may fail to detect colonization that can persist for >1 year. Likewise, available data indicate that colonization with VRE, MRSA, and possibly MDR-GNB, can persist for many months, especially in the presence of severe underlying disease, invasive devices, and recurrent courses of antimicrobial agents. It may be prudent to assume that MDRO carriers are colonized permanently and manage them accordingly. Alternatively, an interval free of hospitalizations, antimicrobial therapy, and invasive devices (e.g., 6 or 12 months) before reculturing patients to document clearance of carriage may be used. Determination of the best strategy awaits the results of additional studies. See the 2006 HICPAC/CDC MDRO guideline for discussion of possible criteria to discontinue contact precautions for patients colonized or infected with MDROs. Application in ambulatory and home care settings Although transmission-based precautions generally apply in all healthcare settings, exceptions exist. For example, in home care, AIIRs are not available. Furthermore, family members already exposed to diseases such as varicella and tuberculosis would not use masks or respiratory protection, but visiting HCWs would need to use such protection. 
Similarly, management of patients colonized or infected with MDROs may necessitate contact precautions in acute care hospitals and in some LTCFs when there is continued transmission, but the risk of transmission in ambulatory care and home care has not been defined. Consistent use of standard precautions may suffice in these settings, but more information is needed. Patients requiring outpatient services with known airborne or droplet transmitted diseases should be scheduled at the end of the day to minimize exposure to other patients. These patients should also be educated on proper respiratory etiquette: coughing into their elbow and wearing a mask. Healthcare professionals should also wear proper PPE when anticipating contact with these patients. Patients with known contact-transmitted diseases coming into ambulatory clinics should be triaged quickly and placed in a private room. Items used in these rooms should not be taken out of the room unless properly sanitized. Healthcare workers must practice proper hand hygiene when exiting the private room. Residents of long-term care facilities should be placed in single rooms, have access to their own items or use disposable items, and should have limited contact with other residents, in order to reduce the spread of contact-transmitted diseases. Patients with airborne or droplet-transmitted diseases in long-term care facilities should wear masks when around other residents, and proper PPE and standard precautions should be maintained throughout facilities. In addition, residents of long-term care facilities who are identified as at-risk for these diseases should be immunized if possible. Side effects When transmission-based precautions are indicated, efforts must be made to counteract possible adverse effects on patients (i.e., anxiety, depression and other mood disturbances, perceptions of stigma, reduced contact with clinical staff, and increases in preventable adverse events) in order to improve acceptance by the patients and adherence by health care workers. References Epidemiology Medical hygiene Infection-control measures
Transmission-based precautions
Environmental_science
3,682
12,606,884
https://en.wikipedia.org/wiki/Carrier%20lifetime
In semiconductor physics, carrier lifetime is defined as the average time it takes for a minority carrier to recombine. The process through which this is done is typically known as minority carrier recombination. The energy released due to recombination can be either thermal, thereby heating up the semiconductor (thermal recombination or non-radiative recombination, one of the sources of waste heat in semiconductors), or released as photons (optical recombination, used in LEDs and semiconductor lasers). The carrier lifetime can vary significantly depending on the materials and construction of the semiconductor. Carrier lifetime plays an important role in bipolar transistors and solar cells. In indirect band gap semiconductors, the carrier lifetime strongly depends on the concentration of recombination centers. Gold atoms act as highly efficient recombination centers; silicon for some high-switching-speed diodes and transistors is therefore alloyed with a small amount of gold. Many other atoms, e.g. iron or nickel, have a similar effect. Overview In practical applications, the electronic band structure of a semiconductor is typically found in a non-equilibrium state. Therefore, processes that tend towards thermal equilibrium, namely mechanisms of carrier recombination, always play a role. Additionally, semiconductors used in devices are very rarely pure semiconductors. Oftentimes, a dopant is used, giving an excess of electrons (in so-called n-type doping) or holes (in so-called p-type doping) within the band structure. This introduces a majority carrier and a minority carrier. As a result of this, the carrier lifetime plays a vital role in many semiconductor devices that have dopants. Recombination mechanisms There are several mechanisms by which minority carriers can recombine, each of which subtracts from the carrier lifetime. The main mechanisms that play a role in modern devices are band-to-band recombination and stimulated emission, which are forms of radiative recombination, and Shockley-Read-Hall (SRH), Auger, Langevin, and surface recombination, which are forms of non-radiative recombination. Depending on the system, certain mechanisms may play a greater role than others. For example, surface recombination plays a significant role in solar cells, where much of the effort goes into passivating surfaces to minimize non-radiative recombination. By contrast, Langevin recombination plays a major role in organic solar cells, where the semiconductors are characterized by low mobility. In these systems, maximizing the carrier lifetime is synonymous with maximizing the efficiency of the device. Applications Solar cells A solar cell is an electrical device in which a semiconductor is exposed to light that is converted into electricity through the photovoltaic effect. Electrons are excited through the absorption of light and, if the band-gap energy of the material can be bridged, electron-hole pairs are created. Simultaneously, a voltage potential is created. The charge carriers within the solar cell move through the semiconductor in order to cancel said potential, which provides the drift force that moves the electrons. Electrons can also move by diffusion, from regions of higher to lower electron concentration. In order to maximize the efficiency of the solar cell, it is desirable to have as many charge carriers as possible collected at the electrodes of the solar cell. 
Thus, recombination of electrons (among other factors that influence efficiency) must be avoided. This corresponds to an increase in the carrier lifetime. Surface recombination occurs at the top of the solar cell, which makes it preferable to have layers of material with good surface passivation properties so as not to become affected by exposure to light over longer periods of time. Additionally, the same method of layering different semiconductor materials is used to reduce the capture probability of the electrons, which results in a decrease in trap-assisted SRH recombination, and an increase in carrier lifetime. Radiative (band-to-band) recombination is negligible in solar cells that use semiconductor materials with an indirect bandgap. Auger recombination becomes a limiting factor for solar cells when the concentration of excess electrons grows large at low doping rates. Otherwise, the doping-dependent SRH recombination is one of the primary mechanisms that reduces the electrons' carrier lifetime in solar cells. Bipolar junction transistors A bipolar junction transistor is a type of transistor that is able to use electrons and electron holes as charge carriers. A BJT uses a single crystal of material in its circuit that is divided into regions of two types of semiconductor, n-type and p-type. These two types of doped semiconductors are spread over three different regions in respective order: the emitter region, the base region and the collector region. The emitter region and collector region are doped to different levels, but are of the same type of doping and share a base region, which is why the system is different from two diodes connected in series with each other. For a PNP-transistor, these regions are, respectively, p-type, n-type and p-type, and for an NPN-transistor, these regions are, respectively, n-type, p-type and n-type. For NPN-transistors in typical forward-active operation, given an injection of charge carriers through the first junction from the emitter into the base region, electrons are the charge carriers that are transported diffusively through the base region towards the collector region. These are the minority carriers of the base region. Analogously, for PNP-transistors, electron holes are the minority carriers of the base region. The carrier lifetime of these minority carriers plays a crucial role in the charge flow of minority carriers in the base region, which is found between the two junctions. Depending on the BJT's mode of operation, recombination in the base region is either desirable or to be avoided. In particular, for the aforementioned forward-active mode of operation, recombination is not preferable. Thus, in order to get as many minority carriers as possible from the base region into the collector region before they recombine, the width of the base region must be small enough that the minority carriers can diffuse across it in less time than the semiconductor's minority carrier lifetime. Equivalently, the width of the base region must be smaller than the diffusion length, which is the average length a charge carrier travels before recombining. Additionally, in order to prevent high rates of recombination, the base is only lightly doped with respect to the emitter and collector regions. As a result of this, the charge carriers do not have a high probability of staying in the base region, which is their preferable region of occupation when recombining into a lower-energy state. 
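The base-width argument above can be made concrete with the diffusion length L = √(D·τ). The sketch below uses invented, order-of-magnitude values for the diffusion coefficient, lifetime and base width; none of the numbers come from the article.

```python
# Illustrative sketch (all values assumed): the diffusion length L = sqrt(D * tau)
# sets how thin the BJT base must be so that minority carriers cross it before
# they recombine.
import math

D_n = 25e-4          # electron diffusion coefficient in the base, m^2/s (~25 cm^2/s, assumed)
tau_n = 1e-6         # minority carrier lifetime, s (assumed)
base_width = 0.5e-6  # base width, m (assumed)

L_n = math.sqrt(D_n * tau_n)                 # diffusion length
transit_time = base_width**2 / (2 * D_n)     # base transit time, W^2 / (2D)

print(f"diffusion length  ~ {L_n * 1e6:.1f} um")        # ~ 50 um, far larger than the base
print(f"base transit time ~ {transit_time * 1e9:.3f} ns")  # << tau_n, so little recombination
```

With these assumed numbers the base is roughly a hundred times thinner than the diffusion length, so carriers cross it in a small fraction of the lifetime, which is exactly the condition described above for forward-active operation.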
For other modes of operation, like that of fast switching, a high recombination rate (and thus a short carrier lifetime) is desirable. The desired mode of operation and the associated properties of the doped base region must be considered in order to facilitate the appropriate carrier lifetime. Presently, silicon and silicon carbide are the materials used in most BJTs. The recombination mechanisms that must be considered in the base region are surface recombination near the base-emitter junction, as well as SRH and Auger recombination in the base region. Specifically, Auger recombination increases as the number of injected charge carriers grows, reducing the current gain at high injection levels. Semiconductor lasers In semiconductor lasers, the carrier lifetime is the time an electron survives before recombining via non-radiative processes in the laser cavity. In the frame of the rate equations model, carrier lifetime is used in the charge conservation equation as the time constant of the exponential decay of carriers. The dependence of carrier lifetime on the carrier density is expressed as 1/τ(N) = A + B·N + C·N², where A, B and C are the non-radiative, radiative and Auger recombination coefficients, N is the carrier density and τ(N) is the carrier lifetime. Measurement Because the efficiency of a semiconductor device generally depends on its carrier lifetime, it is important to be able to measure this quantity. The method by which this is done depends on the device, but is usually dependent on measuring the current and voltage. In solar cells, the carrier lifetime can be calculated by illuminating the surface of the cell, which induces carrier generation and increases the voltage until it reaches an equilibrium, and subsequently turning off the light source. This causes the voltage to decay at a consistent rate. The rate at which the voltage decays is determined by the amount of minority carriers that recombine per unit time, with a higher amount of recombining carriers resulting in a faster decay. Thus, a lower carrier lifetime results in a faster decay of the voltage. This means that the carrier lifetime of a solar cell can be calculated by studying its voltage decay rate. This carrier lifetime is generally expressed as τ = (k_B·T/q) / |dV_oc/dt|, where k_B is the Boltzmann constant, q is the elementary charge, T is the temperature, and dV_oc/dt is the time derivative of the open-circuit voltage. In bipolar junction transistors (BJTs), determining the carrier lifetime is rather more complicated. Namely, one must measure the output conductance and reverse transconductance, both of which are variables that depend on the voltage and flow of current through the BJT, and calculate the minority carrier transit time, which is determined by the width of the quasi-neutral base (QNB) of the BJT, and the diffusion coefficient, a constant that quantifies carrier diffusion within the BJT. This carrier lifetime is expressed in terms of the output conductance, the reverse transconductance, the width of the QNB and the diffusion coefficient. Current research Because a longer carrier lifetime is often synonymous with a more efficient device, research tends to focus on minimizing processes that contribute to the recombination of minority carriers. In practice, this generally implies reducing structural defects within the semiconductors, or introducing novel methods that do not suffer from the same recombination mechanisms. 
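As a rough numerical illustration of the two relations given above (the ABC lifetime model from the semiconductor-laser subsection and the open-circuit voltage-decay estimate from the measurement subsection), the sketch below uses invented parameter values; nothing in it is taken from the article.

```python
# Illustrative sketch with assumed parameter values: the ABC model for the
# density-dependent lifetime, and the open-circuit voltage-decay (OCVD) estimate.
import math

# ABC recombination coefficients (assumed, order-of-magnitude only)
A = 1e7        # non-radiative (SRH) coefficient, 1/s
B = 1e-16      # radiative coefficient, m^3/s
C = 1e-41      # Auger coefficient, m^6/s

def abc_lifetime(n):
    """Carrier lifetime tau(N) = 1 / (A + B*N + C*N^2) for carrier density n in 1/m^3."""
    return 1.0 / (A + B * n + C * n * n)

# Open-circuit voltage decay: tau ~ (k_B*T/q) / |dVoc/dt|
k_B = 1.380649e-23     # J/K
q = 1.602176634e-19    # C
T = 300.0              # K
dVoc_dt = -25.0        # V/s, assumed measured decay slope

tau_ocvd = (k_B * T / q) / abs(dVoc_dt)
print(f"ABC lifetime at N = 1e24 m^-3: {abc_lifetime(1e24) * 1e9:.1f} ns")
print(f"OCVD lifetime estimate:       {tau_ocvd * 1e6:.1f} us")
```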
In crystalline silicon solar cells, which are particularly common, an important limiting factor is the structural damage done to the cell when the transparent conducting film is applied. This is done with reactive plasma deposition, a form of sputter deposition. In the process of applying this film, defects appear in the silicon layer, which degrades the carrier lifetime. Reducing the amount of damage done during this process is therefore important to increase the efficiency of the solar cell, and a focus of current research. In addition to research that seeks to optimize currently favoured technologies, there is a great deal of research surrounding other, less-utilized technologies, like the perovskite solar cell (PSC). This solar cell is preferable due to its comparatively cheap and simple manufacturing process. Modern advancements suggest that there is still ample room to improve on the carrier lifetime of this solar cell, with most of the issues surrounding it being construction-related. In addition to solar cells, perovskites can be utilized to manufacture LEDs, lasers, and transistors. As a result of this, lead halide perovskites are of particular interest in modern research. Current problems include the structural defects that appear when semiconductor devices are manufactured with the material, as the dislocation density associated with the crystals is a detriment to their carrier lifetime. References External links Carrier Lifetime Charge carriers
Carrier lifetime
Physics,Materials_science
2,409
72,607,033
https://en.wikipedia.org/wiki/Stormvloedkering%20Hollandse%20IJssel
The Stormvloedkering Hollandse IJssel (English: Hollandse IJssel Storm Surge Barrier), also known as the Hollandse IJsselkering (Hollandse IJssel Barrier) or Algerakering (Algera Barrier), is a storm surge barrier located on the Hollandse IJssel, at the municipal boundary of Capelle aan den IJssel and Krimpen aan den IJssel, east of Rotterdam in The Netherlands. The construction of the works comprised the first project of the Delta Works, undertaken in response to the disastrous effects of the North Sea flood of 1953. Prior to 1954, an older spelling was used in the official name. The Hollandse IJssel is a low-lying river, and during the 1953 flood, the river dikes were exposed to dangerously high water levels, placing around 1.5 million people in the Randstad at risk from flooding. A dike at Ouderkerk aan den IJssel failed, and a dike in Nieuwerkerk aan den IJssel was almost breached, being sealed only after the local mayor ordered sailor Arie Evegroen to navigate his barge into the hole which had been formed in it. The body in charge of the Delta Works therefore prioritised the construction of a storm surge barrier, and in January 1954, less than a year after the flood, dredging works were undertaken to start the project. On 6 May 1958, the first sluice gate was lowered as a test, with the storm surge barrier made operational on 22 October 1958. The barrier is often referred to colloquially as the Algerakering, but has never been officially known by that name. The name arises as the adjacent bridge carrying the N210 road is officially named the Algerabrug (Algera Bridge), after Jacob Algera, who resigned as Minister of Transport and Water Management for health reasons on 10 October 1958, only twelve days before the opening of the project. The architect was J.A.G. van der Steur Jr., and the project was designed by a department of Rijkswaterstaat, with H.G. Kroon as construction engineer. The barrier is classified as a Rijksmonument. The four towers of the barrier are lit to act as aids to navigation, with blue lighting indicating an open barrier and red lighting indicating that the barrier is closed. Design and planning As the Hollandse IJssel is an important shipping route, the option of permanently closing the river with a dam was not taken forward. However, the potential advantages of a closure included increased supply security of drinking water for large parts of South Holland, by reducing the inflow of seawater into the polders. A storm surge barrier with two movable sluice gates, suspended between concrete towers, was chosen as the solution. The sluices are only closed at periods of very high water, with shipping able to sail underneath the raised gates at other times. A 23.9-metre-wide, 139-metre-long control lock located north-west of the barrier permits ships up to CEMT Class Va to navigate beyond the barrier at times of closure. A bascule bridge is located within the structure, having a closed clearance height of 7.11 metres. The project was designed such that the two gates could be operated independently, minimising the risks from failure and permitting regular maintenance. Due to budgetary constraints, the second sluice gate did not come into operation until 1976, by which time the barrier had operated with a single sluice gate for almost 18 years. The gates were designed for different load cases and water levels on both sides of the barrier. Construction and operation The towers are 45 metres high, and the barrier gates are 12 metres high, 80 metres in width, and 135 metres apart. The total weight of each gate is 480 tonnes. 
Construction of the foundations involved steel sheet piles being installed in pairs, with ground anchors welded to the bottom of the piles to transfer loads to the foundation concrete. The towers are constructed of reinforced concrete with precast concrete slabs forming two floors at the top of each structure, one containing the cable wheels for the sluice counterweight, and the other housing the mechanical parts of the lifting mechanism. Eight galvanised wire rope lifting and lowering cables with steel cores were used, guided over two 4.8 metre diameter cable wheels at the top of each tower, mounted on a forged steel axle with two self-aligning spherical roller bearings. The mechanism allowed the gates to be moved at a rate between 2 and 3 centimetres per second. The gates were fabricated and assembled in Dordrecht before being transported to site on barges, via Rotterdam, for installation. The total construction cost of the Hollandse IJssel Storm Surge Barrier amounted to approximately forty million guilders. The barrier is closed at an anticipated water level of 2.25 metres above Amsterdam Ordnance Datum (NAP). Closure of the barrier takes between 20 and 60 minutes. The barrier is closed four to five times per year on average, and is also closed once per month on a test basis during the storm season between October and April. The Algera Bridge The Algera Bridge, the first major fixed cross-river connection between Krimpenerwaard and the mainland of South Holland, consists of a fixed bridge over the river and a bascule bridge over the lock. The fixed bridge is supported on two piers, one of which forms the transition to a viaduct on the east side, with the other forming a support for the bascule bridge. West of the bridge, there is an underpass designed to allow north–south traffic to pass uninterrupted. The effects of the construction of the Maeslantkering The construction of the Maeslant barrier in 1997 resulted in a higher factor of safety from flooding for the areas behind it, including areas protected by the Hollandse IJsselkering. The Maeslant barrier is designed for very extreme floods (when the water level is predicted to be 3 metres or more above NAP in Rotterdam). On average, this occurs once every 10 years. When the Maeslant barrier is closed, the entrance to the old port of Rotterdam is closed to all shipping for at least 24 hours. The dikes behind the Hollandse IJsselkering can only cope with water levels up to 2.25 metres above NAP, and therefore it is closed more often than the Maeslant barrier - around 3 to 4 times per year. Although it would be technically possible to abandon the Hollandse IJsselkering and close the Maeslantkering more frequently, the undesirable economic effects arising from the associated shipping disruption mean that the Hollandse IJsselkering remains in operation. In addition, the operation to close the Maeslant barrier takes much longer than the Hollandse IJsselkering, which can be completely closed in around 4 hours. Closure of the Maeslantkering is therefore reserved only for extreme high water levels and the Hollandse IJsselkering remains essential to Rijkswaterstaat's overall flood management strategy. 
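As a quick back-of-the-envelope check of the figures quoted above, the sketch below converts the 12-metre gate height and the 2–3 cm/s hoisting speed into a travel time. It is purely illustrative: only the two input figures come from the article, and the comparison with the quoted 20–60 minute closure time (which covers the whole closure procedure, not just gate travel) is the only point being made.

```python
# Rough check: how long the 12 m gate travel takes at the quoted hoisting speeds.
gate_travel_m = 12.0
for speed_cm_s in (2.0, 3.0):
    travel_time_min = gate_travel_m / (speed_cm_s / 100.0) / 60.0
    print(f"at {speed_cm_s:.0f} cm/s: about {travel_time_min:.0f} min of gate travel")
# Gate travel alone is roughly 7-10 minutes; the quoted 20-60 minute closure time
# therefore includes the surrounding operational procedure as well.
```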
Media See also Delta Works Flood control in the Netherlands Rijkswaterstaat References External links Hollandse IJssel Storm Surge Barrier details at the Watersnoodmuseum Knowledge Centre Information on the Hollandse IJssel storm surge barrier from the official Watersnoodmuseum website Deltawerk Hollandse IJsselkering Official information page about the barrier on the Rijkswaterstaat website Dams in South Holland Dams completed in 1958 Delta Works
Stormvloedkering Hollandse IJssel
Physics
1,537
57,767,854
https://en.wikipedia.org/wiki/AN/ALR-20
AN/ALR-20(A) is an airborne wideband tuned radio frequency receiver providing a panoramic display of Radio Frequency (RF) spectrum on US Air Force B-52 Stratofortress aircraft. As a stand-alone system, it is used by the Electronics Warfare Officer (EWO) to evaluate and determine various classifications of threats to the aircraft, identifying various signals including search, acquisition, and tracking radars as well as communications. Because it allows a broad view of the RF spectrum, its situational awareness also provides for analysis of the efficacy of jamming techniques employed by the EWO using other systems. First manufactured in the late 1960s, the system is a passive Electronic Support Measures (ESM) tuned radio frequency receiver. It is the primary tool used by the EWO to evaluate threats. History First developed in the early 1960s, the ALR-20 began appearing on B-52D bombers (before 1967) and B-52Gs in 1967-1969. In accordance with the Joint Electronics Type Designation System (JETDS), the "AN/ALR-20" designation represents the 20th design of an Army-Navy electronic device for passive countermeasures signal receiver. The JETDS system also now is used to name Air Force systems. The ALR-20 did not undergo any significant upgrades or design changes until the 1980s when solid-state components were added to the system's tuners upgrading older tube-based technology. At the beginning of the 1990s, the outdated panoramic display (using old cathode ray tube technology) needed replacement due to the existing display becoming unsupportable. Until the late 1990s, the ALR-20's panoramic receiver display utilized Cathode-Ray Tube (CRT) technology. This replacement was delivered in the late 1990s. At that time, tuners and the power supply were determined to also need replacement for the same reasons. Today, deployed on B-52H bombers, the system still provides the EWO a display of six different RF bands, allowing for detection and identification of threat signals. Into the early 2000s, it was determined the system was "becoming unsupportable due to vanishing vendors and obsolete technology". Under the B-52 Situational Awareness Defensive Improvement (SADI) program, the ALR-20 is expected to be replaced with a defensive system upgrade. The upgrade is expected to create up to thirty-fold improvements in reliability. Efforts to replace the ALR-20 continued into the mid-2000s, while some work was done to continue maintaining line replaceable units (LRUs). In 1999, ninety-one LRU-1s, fifty-four LRU-3s, thirty-six LRU-8s, eighty-three LRU-9s were repaired at a total cost of over $315,000. According to the Air Force's Fiscal Year (FY) 2004/2005 budget estimates, SADI would cost just over $70.9 million. Electronic Warfare Officers undergo extensive training concerning the ALR-20 panoramic system. Technical description Features The ALR-20's panoramic display is the EWO's primary source for analysis of potential threats through a very wide part of the electromagnetic spectrum. The early cathode-ray tube for the display (seen in the image to the right) had an orange tint displaying six different horizontal lines that represented a part of the spectrum. The signals displayed on those lines may be quickly analyzed allowing the EWO to bring the proper countermeasures for multiple different threats at once. 
Components Receiving Set Controller - LRU-1 Panoramic Display Power Supply - LRU-3 Radio Frequency Tuner - LRU-8 Radio Frequency Tuner - LRU-9 Variants AN/ALR-20 AN/ALR-20A See also List of military electronics of the United States References External links AF.mil - B-52H Stratofortress Fact Sheet Electronic warfare equipment Military electronics of the United States Equipment of the United States Air Force Electronic countermeasures Electronic warfare Military equipment introduced in the 1960s Radar warning receivers Radiofrequency receivers
AN/ALR-20
Technology
857
47,570,772
https://en.wikipedia.org/wiki/NGC%20110
NGC 110 is an open star cluster located in the constellation Cassiopeia. It was discovered by the English astronomer John Herschel on October 29, 1831. It is unknown if the members are physically related, or if the cluster exists at all. It is barely visible against the background sky, and the two dozen member stars seem to be at various distances. If the cluster does exist, it is at least 2,000 light years away. References External links 0110 Cassiopeia (constellation) Astronomical objects discovered in 1831 Open clusters Discoveries by John Herschel
NGC 110
Astronomy
115
411,174
https://en.wikipedia.org/wiki/Brine%20shrimp
Artemia is a genus of aquatic crustaceans also known as brine shrimp or sea monkeys. It is the only genus in the family Artemiidae. The first historical record of the existence of Artemia dates back to the first half of the 10th century AD from Lake Urmia, Iran, with an example called by an Iranian geographer an "aquatic dog", although the first unambiguous record is the report and drawings made by Schlösser in 1757 of animals from Lymington, England. Artemia populations are found worldwide, typically in inland saltwater lakes, but occasionally in oceans. Artemia are able to avoid cohabiting with most types of predators, such as fish, by their ability to live in waters of very high salinity (up to 25%). The ability of the Artemia to produce dormant eggs, known as cysts, has led to extensive use of Artemia in aquaculture. The cysts may be stored indefinitely and hatched on demand to provide a convenient form of live feed for larval fish and crustaceans. Nauplii of the brine shrimp Artemia constitute the most widely used food item, and over of dry Artemia cysts are marketed worldwide annually with most of the cysts being harvested from the Great Salt Lake in Utah. In addition, the resilience of Artemia makes them ideal animals for running biological toxicity assays and it has become a model organism used to test the toxicity of chemicals. Breeds of Artemia are sold as novelty gifts under the marketing name Sea-Monkeys. Description The brine shrimp Artemia comprises a group of seven to nine species very likely to have diverged from an ancestral form living in the Mediterranean area about , around the time of the Messinian salinity crisis. The Laboratory of Aquaculture & Artemia Reference Center at Ghent University possesses the largest known Artemia cyst collection, a cyst bank containing over 1,700 Artemia population samples collected from different locations around the world. Artemia is a typical primitive arthropod with a segmented body to which is attached broad leaf-like appendages. The body usually consists of 19 segments, the first 11 of which have pairs of appendages, the next two which are often fused together carry the reproductive organs, and the last segments lead to the tail. The total length is usually about for the adult male and for the female, but the width of both sexes, including the legs, is about . The body of Artemia is divided into head, thorax, and abdomen. The entire body is covered with a thin, flexible exoskeleton of chitin to which muscles are attached internally and which is shed periodically. In female Artemia, a moult precedes every ovulation. For brine shrimp, many functions, including swimming, digestion and reproduction are not controlled through the brain; instead, local nervous system ganglia may control some regulation or synchronisation of these functions. Autotomy, the voluntary shedding or dropping of parts of the body for defence, is also controlled locally along the nervous system. Artemia have two types of eyes. They have two widely separated compound eyes mounted on flexible stalks. These compound eyes are the main optical sense organ in adult brine shrimps. The median eye, or the naupliar eye, is situated anteriorly in the centre of the head and is the only functional optical sense organ in the nauplii, which is functional until the adult stage. Ecology and behavior Brine shrimp can tolerate any levels of salinity from 25‰ to 250‰ (25–250 g/L), with an optimal range of 60‰–100‰, and occupy the ecological niche that can protect them from predators. 
Physiologically, optimal levels of salinity are about 30–35‰, but due to predators at these salt levels, brine shrimp seldom occur in natural habitats at salinities of less than 60–80‰. Locomotion is achieved by the rhythmic beating of the appendages acting in pairs. Respiration occurs on the surface of the legs through fibrous, feather-like plates (lamellar epipodites). Reproduction Males differ from females by having the second antennae markedly enlarged, and modified into clasping organs used in mating. Adult female brine shrimp ovulate approximately every 140 hours. In favourable conditions, the female brine shrimp can produce eggs that almost immediately hatch. While in extreme conditions, such as low oxygen level or salinity above 150‰, female brine shrimp produce eggs with a chorion coating which has a brown colour. These eggs, also known as cysts, are metabolically inactive and can remain in total stasis for two years while in dry oxygen-free conditions, even at temperatures below freezing. This characteristic is called cryptobiosis, meaning "hidden life". While in cryptobiosis, brine shrimp eggs can survive temperatures of liquid air () and a small percentage can survive above boiling temperature () for up to two hours. Once placed in briny (salt) water, the eggs hatch within a few hours. The nauplius larvae are less than 0.4 mm in length when they first hatch. Parthenogenesis Parthenogenesis is a natural form of reproduction in which growth and development of embryos occur without fertilisation. Thelytoky is a particular form of parthenogenesis in which the development of a female individual occurs from an unfertilised egg. Automixis is a form of thelytoky, but there are different kinds of automixis. The kind of automixis relevant here is one in which two haploid products from the same meiosis combine to form a diploid zygote. Diploid Artemia parthenogenetica reproduce by automictic parthenogenesis with central fusion (see diagram) and low but nonzero recombination. Central fusion of two of the haploid products of meiosis (see diagram) tends to maintain heterozygosity in transmission of the genome from mother to offspring, and to minimise inbreeding depression. Low crossover recombination during meiosis likely restrains the transition from heterozygosity to homozygosity over successive generations. Diet In their first stage of development, Artemia do not feed but consume their own energy reserves stored in the cyst. Wild brine shrimp eat microscopic planktonic algae. Cultured brine shrimp can also be fed particulate foods including yeast, wheat flour, soybean powder or egg yolk. Genetics, genomics and transcriptomics Artemia comprises sexually reproducing, diploid species and several obligate parthenogenetic Artemia populations consisting of different clones and ploidies (2n->5n). Several genetic maps have been published for Artemia. The past years, different transcriptomic studies have been performed to elucidate biological responses in Artemia, such as its response to salt stress, toxins, infection and diapause termination. These studies also led to various fully assembled Artemia transcriptomes. Recently, the Artemia genome was assembled and annotated, revealing a genome containing an unequaled 58% of repeats, genes with unusually long introns and adaptations unique to the extremophilic nature of Artemia in high salt and low oxygen environments. 
These adaptations include a unique energy-intensive endocytosis-based salt excretion strategy resembling salt excretion strategies of plants, as well as several survival strategies for extreme environments it has in common with the extremophilic tardigrade. Aquaculture Fish farm owners search for a cost-effective, easy to use, and available food that is preferred by the fish. From cysts, brine shrimp nauplii can readily be used to feed fish and crustacean larvae just after a one-day incubation. Instar I (the nauplii that just hatched and with large yolk reserves in their body) and instar II nauplii (the nauplii after first moult and with functional digestive tracts) are more widely used in aquaculture, because they are easy for operation, rich in nutrients, and small, which makes them suitable for feeding fish and crustacean larvae live or after drying. Toxicity test Artemia found favor as a model organism for use in toxicological assays, despite the recognition that it is too robust an organism to be a sensitive indicator species. In pollution research Artemia, the brine shrimp, has had extensive use as a test organism and in some circumstances is an acceptable alternative to the toxicity testing of mammals in the laboratory. The fact that millions of brine shrimp are so easily reared has been an important help in assessing the effects of a large number of environmental pollutants on the shrimps under well controlled experimental conditions. Conservation Overall, brine shrimp are abundant, but some populations and localized species do face threats, especially from habitat loss to introduced species. For example, A. franciscana of the Americas has been widely introduced to places outside its native range and is often able to outcompete local species, such as A. salina in the Mediterranean region. Among the highly localized species are A. urmiana from Lake Urmia in Iran. Once abundant, the species has drastically declined due to drought, leading to fears that it was almost extinct. However, a second population of this species has recently been discovered in the Koyashskoye Salt Lake, Ukraine. A. monica, the species commonly known as Mono Lake brine shrimp, can be found in Mono Lake, Mono County, California. In 1987, Dennis D. Murphy from Stanford University petitioned the United States Fish and Wildlife Service to add A. monica to the endangered species list under the Endangered Species Act (1973). The diversion of water by the Los Angeles Department of Water and Power resulted in rising salinity and concentration of sodium hydroxide in Mono Lake. Despite the presence of trillions of brine shrimp in the lake, the petition contended that the increase in pH would endanger them. The threat to the lake's water levels was addressed by a revision to California State Water Resources Control Board's policy, and the US Fish and Wildlife Service found on 7 September 1995 that the Mono Lake brine shrimp did not warrant listing. Space experiment Scientists have taken the eggs of brine shrimp to outer space to test the impact of radiation on life. Brine shrimp cysts were flown on the U.S. Biosatellite 2, Apollo 16, and Apollo 17 missions, and on the Russian Bion-3 (Cosmos 782), Bion-5 (Cosmos 1129), Foton 10, and Foton 11 flights. Some of the Russian flights carried European Space Agency experiments. On Apollo 16 and Apollo 17, the cysts traveled to the Moon and back. Cosmic rays that passed through an egg would be detected on the photographic film in its container. 
Some eggs were kept on Earth as experimental controls as part of the tests. Also, as the take-off in a spacecraft involves a lot of shaking and acceleration, one control group of egg cysts was accelerated to seven times the force of gravity and vibrated mechanically from side to side for several minutes so that they could experience the same violence of a rocket take-off. About 400 eggs were in each experimental group. All the egg cysts from the experiment were then placed in salt water to hatch under optimum conditions. The results showed A. salina eggs are highly sensitive to cosmic radiation; 90% of the embryos induced to develop from hit eggs died at different developmental stages. References External links Anostraca Space-flown life Taxa described in 1757
Brine shrimp
Biology
2,425
1,642,078
https://en.wikipedia.org/wiki/Paul%20Nem%C3%A9nyi
Paul Felix Neményi (June 5, 1895March 1, 1952) was a Hungarian mathematician and physicist who specialized in continuum mechanics. He was known for using what he called the inverse or semi-inverse approach, which applied vector field analysis, to obtain numerous exact solutions of the nonlinear equations of gas dynamics, many of them representing rotational flows of nonuniform total energy. His work applied geometrical solutions to fluid dynamics. In continuum mechanics, "Neményi's theorem" proves that, given any net of isothermal curves, there exists a five parameter family of plane stress systems for which these curves are stress trajectories. Neményi's five constant theory for the determination of stress trajectories in plane elastic systems was subsequently proven by later mathematicians. He was the father of the statistician Peter Nemenyi and the putative father of former World Chess Champion Bobby Fischer. Biography Family Neményi was born to a wealthy Hungarian-Jewish family on June 5, 1895, in Fiume (Rijeka) in the Kingdom of Hungary. His grandfather was Siegmund Neumann who magyarized his family to Neményi in 1871 and part of the family became Christians. Pauls father Dezső Neményi was one of the directors at Rijeka Refinery (now INA d.d.). His mother was Julianna Goldberger de Buda (or Buday= von Buda), born 1868 in Budapest, as at least, the fifth consecutive generation Goldberger to do so. Neményi attended elementary and high school in Fiume (Rijeka). He graduated from high school in Budapest. Neményi's uncle was Dr. Ambrus Neményi, born in Pécel, c. 20 km east of Budapest. Paul Neményi's aunt was Berta Koppély (whose parents were Adolf Koppély (18091883) and Rózsa von Hatvany-Deutsch). His family's art collection included works by Klimt, Kandinsky and Matisse. Hungary at the time, was producing a generation of geniuses in the exact sciences, who would be collectively known as Martians, that included Theodore von Kármán (b. 1881), George de Hevesy (b. 1885), Leó Szilárd (b. 1898), Dennis Gabor (b. 1900), Eugene Wigner (b. 1902), John von Neumann (b. 1903), Edward Teller (b. 1908), and Paul Erdős (b. 1913). Family tree Mathematical career A child prodigy in mathematics, at the age of 17, Neményi won the Hungarian national mathematics competition. Neményi obtained his doctorate in mathematics in Berlin in 1922 and was appointed a lecturer in fluid dynamics at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin). In the early 1930s, he published a textbook on mathematical mechanics that became required reading in German universities. Stripped of his position when the Nazis came to power, he also had to leave Hungary where anti-Semitic laws had been enacted, and found work for a time in Copenhagen. In Germany, Neményi belonged to a Socialist party called the ISK, which believed that truth could be arrived at through neo-Kantian Socratic principles. He was an animal-rights supporter and refused to wear anything made of wool. In 1930, Neményi entrusted his 3 year old first son, Peter Nemenyi, to be looked after by the socialist vegetarian community, visiting him once a year. He arrived in the US at the outbreak of World War II. He briefly held a number of teaching positions in succession and took part in hydraulic research at the State University of Iowa. In 1941 he was appointed instructor at the University of Colorado (other sources claim Colorado State University), and in 1944 at the State College of Washington. 
Theodore von Kármán wrote of Neményi: "When he came to this country, he went to scientific meetings in an open shirt without a tie and was very much disappointed as I advised him to dress as anyone else. He told me that he thought this was a country of freedom, and the man is only judged according to his internal values and not his external appearance." In 1947 Neményi was appointed a physicist with the Naval Ordnance Laboratory, White Oak, Maryland. He was head of the Theoretical Mechanics Section at the laboratory and one of the country's principal authorities on elasticity and fluid dynamics. At the US Navy Research Laboratory, Neményi became mentor to Jerald Ericksen, where he put Ericksen to work on the study of water bells. Neményi pioneered what he called the inverse or semi-inverse approach, which applied vector field analysis, to obtain numerous exact solutions of the nonlinear equations of gas dynamics, many of them representing rotational flows of nonuniform total energy. In continuum mechanics, "Neményi's theorem" proves that, given any net of isothermal curves, there exists a five parameter family of plane stress systems for which these curves are stress trajectories. In his exposition, The Main Concepts and Ideas of Fluid Dynamics in their Historical Development, Neményi was highly critical of Isaac Newton's inadequate understanding of fluid dynamics. I. Bernard Cohen argues that Neményi pays insufficient attention to Newton's empirical experiments. However, Cohen notes that Neményi provides the "most thorough and incisive analyses in print of Newton's work on fluids, written by an obvious master of science. For example, Neményi is the only author I have encountered who has shown the weakness of Newton's "proof" at the end of Book 2, that vortices contradict the laws of astronomy. Neményi's scientific knowledge extended well beyond the subjects of his researches. He has been described as having "extreme[ly] versatile interests and erudition". Neményi's interest and ability encompassed several nonscientific fields. He collected children's art and sometimes lectured upon it. In 1951, he published a critique of the entire Encyclopædia Britannica, and suggested improvements for such diverse sections as psychology and psychoanalysis. Neményi was also deeply interested in the philosophy of mathematics and mathematical education.Clifford Truesdell writes that it was Neményi who first taught him "that mechanics was something deep and beautiful, beyond the ken of schools of "applied mathematics" and "applied mechanics"". Paul Neményi died on March 1, 1952, at the age of 56. He was survived officially by one son: Peter Nemenyi, then a student of mathematics at Princeton University. Supposed fatherhood of Bobby Fischer In 2002 Neményi was identified as the probable biological father of world chess champion Bobby Fischer, not the man named on Fischer's birth certificate (Hans Gerhardt Fischer). Additional details on their relationship were reported in 2009. In A Psychobiography of Bobby Fischer, Joseph G. Ponterotto enumerates nine clusters of evidence that indicate that Neményi was Bobby Fischer's father: Regina Fischer and Hans Gerhardt Fischer had no confirmed contact after 1939. Paul Neményi was in contact with Regina Fischer both before and after Bobby's birth, and occasionally came to visit Bobby. Regina told Jewish Family Services that she gave birth to a boy by Neményi in 1943. 
Neményi told a social worker that they had agreed to put the child up for adoption, but that Regina had later refused. Paul Neményi used Jewish Family Services to deliver money to Regina and Bobby and told the agency that he was concerned for Bobby's welfare. In letter to the psychiatrist Harold Kline on March 13, 1952, Peter Nemenyi wrote, "I take it you know that Paul was Bobby Fischer’s father." After Paul Neményi's death, Regina Fischer wrote to Peter Nemenyi to ask whether Paul had left any money for Bobby. In a letter to Allen W. Dulles on May 22, 1959, J. Edgar Hoover wrote, "Investigation has established that Robert James Fischer’s father was one Paul Felix Nemenyi." A court document signed by Regina Fischer following Paul Neményi's death states that Bobby "was born to the decedent out of wedlock". Paul Neményi and Bobby Fischer physically resembled each other. Selected list of publications Posthumous publication, edited by Clifford Truesdell. Obituaries References External links 1890s births 1952 deaths 20th-century American Jews 20th-century American physicists 20th-century Hungarian physicists Academic staff of Technische Universität Berlin American people of Hungarian-Jewish descent Fluid dynamicists Hungarian emigrants to the United States Hungarian Jews Jewish American scientists Jewish emigrants from Nazi Germany to the United States Jewish physicists Scientists from Rijeka
Paul Neményi
Chemistry
1,823
1,290,265
https://en.wikipedia.org/wiki/Salix%20viminalis
Salix viminalis, the basket willow, common osier or osier, is a species of willow native to Europe, Western Asia, and the Himalayas. Description Salix viminalis is a multistemmed shrub growing to between (rarely to ) tall. It has long, erect, straight branches with greenish-grey bark. The leaves are long and slender, 10–25 cm long but only 0.5–2 cm broad; they are dark green above, with a silky grey-haired underside. The flowers are catkins, produced in early spring before the leaves; they are dioecious, with male and female catkins on separate plants. The male catkins are yellow and oval-shaped; the female catkins are longer and more cylindrical; they mature in early summer when the fruit capsules split open to release the numerous minute seeds. Distribution and habitat It is commonly found by streams and other wet places. The exact native range is uncertain due to extensive historical cultivation; it is certainly native from central Europe east to western Asia, but may also be native as far west as southeastern England. As a cultivated or naturalised plant, it is widespread throughout both Britain and Ireland, but only at lower altitudes. It is one of the least variable willows, but it will hybridise with several other species. Uses Along with other related willows, the flexible twigs (called withies) are commonly used in basketry, giving rise to its alternative common name of "basket willow". In his History of the Peloponnesian War, the ancient historian Thucydides describes using osier in 425 BCE to construct makeshift shields. Cultivation and use of the common osier was common in England in the 18th and 19th century, with osier beds lining many rivers and streams. Other uses occur in energy forestry, effluent treatment, wastewater gardens, and cadmium phytoremediation for water purification. Salix viminalis is a known hyperaccumulator of cadmium, chromium, lead, mercury, petroleum hydrocarbons, organic solvents, MTBE, TCE and byproducts, selenium, silver, uranium, and zinc, and as such is a prime candidate for phytoremediation. For more information, see the list of hyperaccumulators. Ecology Among the most common pathogens on S. viminalis are Melampsora spp. Female plants are more severely infected than male plants. References External links viminalis Flora of Europe Flora of temperate Asia Flora of tropical Asia Phytoremediation plants Plants described in 1753 Taxa named by Carl Linnaeus
Salix viminalis
Biology
536
2,421,141
https://en.wikipedia.org/wiki/Dimethylamine
Dimethylamine is an organic compound with the formula (CH3)2NH. This secondary amine is a colorless, flammable gas with an ammonia-like odor. Dimethylamine is commonly encountered commercially as a solution in water at concentrations up to around 40%. An estimated 270,000 tons were produced in 2005. Structure and synthesis The molecule consists of a nitrogen atom with two methyl substituents and one hydrogen. Dimethylamine is a weak base and the pKa of the ammonium ion (CH3)2NH2+ is 10.73, a value above methylamine (10.64) and trimethylamine (9.79). Dimethylamine reacts with acids to form salts, such as dimethylamine hydrochloride, an odorless white solid with a melting point of 171.5 °C. Dimethylamine is produced by catalytic reaction of methanol and ammonia at elevated temperatures and high pressure: 2 CH3OH + NH3 → (CH3)2NH + 2 H2O Natural occurrence Dimethylamine is found quite widely distributed in animals and plants, and is present in many foods at the level of a few mg/kg. Uses Dimethylamine is a precursor to several industrially significant compounds. It reacts with carbon disulfide to give dimethyl dithiocarbamate, a precursor to zinc bis(dimethyldithiocarbamate) and other chemicals used in the sulfur vulcanization of rubber. Dimethylaminoethoxyethanol is manufactured by reacting dimethylamine and ethylene oxide. Other methods are also available producing streams rich in the substance which then need to be further purified. The solvents dimethylformamide and dimethylacetamide are derived from dimethylamine. It is a raw material for the production of many agrichemicals and pharmaceuticals, such as dimefox and diphenhydramine, respectively. The chemical weapon tabun is derived from dimethylamine. The surfactant lauryl dimethylamine oxide is found in soaps and cleaning compounds. Unsymmetrical dimethylhydrazine, a rocket fuel, is prepared from dimethylamine. (CH3)2NH + NH2Cl → (CH3)2NNH2 ⋅ HCl It is an attractant for boll weevils. Reactions It is basic, in both the Lewis and Brønsted senses. It easily forms dimethylammonium salts upon treatment with acids. Deprotonation of dimethylamine can be effected with organolithium compounds. The resulting LiNMe2, which adopts a cluster-like structure, serves as a source of Me2N−. This lithium amide has been used to prepare volatile metal complexes such as tetrakis(dimethylamido)titanium and pentakis(dimethylamido)tantalum. It reacts with many carbonyl compounds. Aldehydes give aminals. For example, the reaction of dimethylamine and formaldehyde gives bis(dimethylamino)methane: 2 (CH3)2NH + CH2O → [(CH3)2N]2CH2 + H2O It converts esters to dimethylamides. Safety Dimethylamine is not very toxic with the following LD50 values: 736 mg/kg (mouse, i.p.); 316 mg/kg (mouse, p.o.); 698 mg/kg (rat, p.o.); 3900 mg/kg (rat, dermal); 240 mg/kg (guinea pig or rabbit, p.o.). Although not acutely toxic, dimethylamine undergoes nitrosation to give dimethylnitrosamine, a carcinogen. See also Methylamine Trimethylamine References External links Properties from Air Liquide MSDS at airliquide.com Alkylamines Insect pheromones Insect ecology Secondary amines
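The pKa value quoted above for the dimethylammonium ion lends itself to a short worked example. The sketch below is not part of the source article; it simply applies the Henderson–Hasselbalch relation to estimate the fraction of dimethylamine present in the protonated form at a few pH values chosen arbitrarily for illustration.

def protonated_fraction(pka, ph):
    """Fraction of the amine present as the ammonium (protonated) form at a given pH."""
    ratio = 10 ** (pka - ph)      # [BH+]/[B] from pH = pKa + log10([B]/[BH+])
    return ratio / (1.0 + ratio)

if __name__ == "__main__":
    pka = 10.73                   # dimethylammonium pKa quoted in the text
    for ph in (7.0, 10.73, 12.0):
        print(f"pH {ph:5.2f}: {protonated_fraction(pka, ph):.4f} protonated")

At neutral pH the amine is therefore almost entirely protonated, which is consistent with its behaviour as a weak base that readily forms dimethylammonium salts.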
Dimethylamine
Chemistry
828
46,597,469
https://en.wikipedia.org/wiki/Privacy%20engineering
Privacy engineering is an emerging field of engineering which aims to provide methodologies, tools, and techniques to ensure systems provide acceptable levels of privacy. Its focus lies in organizing and assessing methods to identify and tackle privacy concerns within the engineering of information systems. In the US, an acceptable level of privacy is defined in terms of compliance to the functional and non-functional requirements set out through a privacy policy, which is a contractual artifact displaying the data controlling entities compliance to legislation such as Fair Information Practices, health record security regulation and other privacy laws. In the EU, however, the General Data Protection Regulation (GDPR) sets the requirements that need to be fulfilled. In the rest of the world, the requirements change depending on local implementations of privacy and data protection laws. Definition and scope The definition of privacy engineering given by National Institute of Standards and Technology (NIST) is: While privacy has been developing as a legal domain, privacy engineering has only really come to the fore in recent years as the necessity of implementing said privacy laws in information systems has become a definite requirement to the deployment of such information systems. For example, IPEN outlines their position in this respect as: Privacy engineering involves aspects such as process management, security, ontology and software engineering. The actual application of these derives from necessary legal compliances, privacy policies and 'manifestos' such as Privacy-by-Design. Towards the more implementation levels, privacy engineering employs privacy enhancing technologies to enable anonymisation and de-identification of data. Privacy engineering requires suitable security engineering practices to be deployed, and some privacy aspects can be implemented using security techniques. A privacy impact assessment is another tool within this context and its use does not imply that privacy engineering is being practiced. One area of concern is the proper definition and application of terms such as personal data, personally identifiable information, anonymisation and pseudo-anonymisation which lack sufficient and detailed enough meanings when applied to software, information systems and data sets. Another facet of information system privacy has been the ethical use of such systems with particular concern on surveillance, big data collection, artificial intelligence etc. Some members of the privacy and privacy engineering community advocate for the idea of ethics engineering or reject the possibility of engineering privacy into systems intended for surveillance. Software engineers often encounter problems when interpreting legal norms into current technology. Legal requirements are by nature neutral to technology and will in case of legal conflict be interpreted by a court in the context of the current status of both technology and privacy practice. Core practices As this particular field is still in its infancy and somewhat dominated by the legal aspects, the following list just outlines the primary areas on which privacy engineering is based: Data flow modelling Development of suitable terminologies/ontologies for expressing types, usages, purposes etc. 
of information Privacy Impact Assessment (PIA) Privacy management and processes Requirements engineering Risk assessment Semantics Despite the lack of a cohesive development of the above areas, courses already exist for the training of privacy engineering. The International Workshop on Privacy Engineering co-located with IEEE Symposium on Security and Privacy provides a venue to address "the gap between research and practice in systematizing and evaluating approaches to capture and address privacy issues while engineering information systems". A number of approaches to privacy engineering exist. The LINDDUN methodology takes a risk-centric approach to privacy engineering where personal data flows at risk are identified and then secured with privacy controls. Guidance for interpretation of the GDPR has been provided in the GDPR recitals, which have been coded into a decision tool that maps GDPR into software engineering forces with the goal to identify suitable privacy design patterns. One further approach uses eight privacy design strategies - four technical and four administrative strategies - to protect data and to implement data subject rights. Aspects of information Privacy engineering is particularly concerned with the processing of information over the following aspects or ontologies and their relations to their implementation in software: Data Processing Ontologies Information Type Ontologies (as opposed to PII or machine types) Notions of controller and processor The notions of authority and identity (ostensibly of the source(s) of data) Provenance of information, including the notion of data subject Purpose of information, viz: primary vs secondary collection Semantics of information and data sets (see also noise and anonymisation) Usage of information Further to this how the above then affect the security classification, risk classification and thus the levels of protection and flow within a system can then the metricised or calculated. Definitions of privacy Privacy is an area dominated by legal aspects but requires implementation using, ostensibly, engineering techniques, disciplines and skills. Privacy Engineering as an overall discipline takes its basis from considering privacy not just as a legal aspect or engineering aspect and their unification but also utilizing the following areas: Privacy as a philosophical aspect Privacy as an economic aspect, particularly game theory Privacy as a sociological aspect Legal basis The impetus for technological progress in privacy engineering stems from general privacy laws and various particular legal acts: Children's Online Privacy Protection Act Driver's Privacy Protection Act Intimate Privacy Protection Act Online Privacy Protection Act Privacy Act of 1974 Privacy Protection Act of 1980 Telephone Records and Privacy Protection Act of 2006 Video Privacy Protection Act See also Data Protection Directive Information security Privacy software Risk management Free and open MOOC course module on privacy by design and management with Karlstad University's Privacy by Design on-line course. Carnegie Mellon University's Privacy Engineering Program - Offers a rich curriculum on the technical, legal, and policy aspects of privacy engineering, this program is known for its comprehensive approach. Additional insights into the program's impact, as well as students' projects and work, are available on their dedicated blog. Notes and references Security engineering Engineering
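One of the privacy-enhancing techniques mentioned above, pseudonymisation of direct identifiers, can be illustrated with a minimal sketch. The Python fragment below is illustrative only and not a recommended production design: it replaces an identifier with a keyed hash so that records can still be linked without storing the raw value. Key management, salting policy, and re-identification risk analysis are out of scope here, and the key shown is a placeholder.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"   # placeholder; keep real keys out of source control

def pseudonymise(identifier):
    """Return a stable pseudonym for a direct identifier such as an email address."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]               # truncated for readability

if __name__ == "__main__":
    print(pseudonymise("alice@example.org"))
    print(pseudonymise("alice@example.org"))     # same input -> same pseudonym

A keyed hash is one of many possible controls; whether it is adequate depends on the threat model and the applicable legal definition of personal data.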
Privacy engineering
Engineering
1,139
49,575,825
https://en.wikipedia.org/wiki/Al%20Baydha%20Project
The Al Baydha Project, in rural, western Saudi Arabia, is a land restoration, poverty-alleviation, and heritage preservation program, based on principles of permacultural and hydrological design. Located roughly south of Mecca, in Makkah Province, Al Baydha is an area characterized by the rocky, arid, foothills of the Hijaz Mountains. Arab tribes are the major residents of this region. Founded in 2009 by Princess Haifa Al-Faisal, Harvard ethicist Mona Hamdy, and Stanford permaculturist Neal Spackman, Al Baydha has begun to see practical and ecological results. Project goals Most notably, Al Baydha's emphasis is on creating an economy for the inhabitants of Al Baydha that is socially, culturally, environmentally, and economically sustainable. The project's main objective is to create financial and social independence for the inhabitants by training, educating and employing them in the infrastructure and capacity building activities undertaken by the Al Baydha Project. Al Baydha's environmental goal is the reversal of desertification. This is accomplished largely via rainwater harvesting, through utilization of rock terraces and gabions (or small check dams), as well as catchment of runoff into swale lines. These support afforestation of drought-resistant trees, such as date palms, in the natural landscape. Another focus of the program is on slowing down flash floods in the highlands, and, over time, converting them into seasonal streams or wadis. In the long-term future, Al Baydha hopes to transform the region into a savanna ecosystem, in part, by means of assisted natural regeneration, conservation grazing, and the effects of evapotranspiration and atmospheric moisture recycling. Site development after the end of artificial irrigation In 2016, the Al Baydha Project received a commendation from Prince Khaled Al Faisal for innovative work undertaken by the inhabitants of Al Baydha as a model of national excellence in humanitarianism, sustainability, and innovation. The same year funding stopped and Neal Spackman needed to shut off water to the irrigation pipes. The trees started to die. He told those involved in the project that the true test would be to see if the trees could live without being watered. Later that winter the trees survived and thrived, proving testament to the power of ancient terrace farming. In a 2020 documentary about the Al Baydha Project, Spackman has called the project "a testament to the potential of regenerative agricultures and a template for the reforestation of millions of desert landscapes in the Arabian peninsula and beyond." Similar projects A similar project, overseen by permaculturist Geoff Lawton (who advised on the design of Al Baydha), has already achieved success in Wadi Rum, in southern Jordan. References External links Neal Spackman's personal website and blog Interview with Neal Spackman (2014) Neal Spackman's Project updates on PermacultureNews.org 2009 establishments in Saudi Arabia Agricultural organisations based in Saudi Arabia Appropriate technology organizations Bedouins in Saudi Arabia Conservation projects Desert greening Ecological restoration Land reclamation Environmental organisations based in Saudi Arabia Mecca Province Permaculture organizations Projects established in 2009 Rural community development Science and technology in Saudi Arabia Sustainable agriculture Wadis of Saudi Arabia Water conservation Water in Saudi Arabia Water supply and sanitation in Saudi Arabia
Al Baydha Project
Chemistry,Engineering
673
68,495,847
https://en.wikipedia.org/wiki/Selenate%20selenite
A selenate selenite is a chemical compound or salt that contains selenite and selenate anions (SeO32- and SeO42-). These are mixed anion compounds. Some have third anions. Naming A selenate selenite compound may also be called a selenite selenate. Production One way to produce a selenate selenite compound is to evaporate a water solution of selenate and selenite compounds. Properties On heating, selenate selenites lose SeO2 and O2 and yield selenites, and ultimately metal oxides. Related Related to these are the sulfate sulfites and tellurate tellurites. They can be classed as mixed valent compounds. List References Selenates Selenites Mixed anion compounds
Selenate selenite
Physics,Chemistry
161
73,742,783
https://en.wikipedia.org/wiki/Cherry%20Hill%20%28model%20engineer%29
Cherry Mavis Hill, MBE (née Hinds, 16 November 1931 – 4 December 2024) was an English model engineer known for her detailed scale models of steam vehicles. Hill won the Duke of Edinburgh award nine times, the Bradbury Winter Memorial Trophy eight times, and was awarded an MBE (Member of the British Empire) by the Queen of England, and other awards. Life and work Cherry Hill was born in Malvern, Worcestershire, England, on 16 November 1931. Her father, George Hinds, was an agricultural machinery manufacturer who began mentoring her when she showed enthusiasm for metalworking. In the Hinds household workshop, she learned machining skills and built her first models, including a scooter, warships, and aircraft. In that phase, Cherry received special mention for her Sunderland flying boat model in a model-making contest. While continuing her development as a model maker, she completed a BSC in maths at the University of St Andrews. During her 60-year model engineering career, Hill built nearly 20 detailed scale models of steam vehicles, including Victorian models, which each took her approximately 7,000 hours to make. The parts in the models were all made from her metal stock, and the engines are fully operational. Additionally, her engines were made from scratch, every nut and bolt was made in her workshop, and a complete model took 7 years to make. Cherry Hill is considered to be one of the greatest model engineers ever due to her success in competitions. An article written about her by the Craftsmanship Museum says that, "The uncompromising craftsmanship exhibited in Cherry’s work is a result of her attitude. She never accepts anything less than perfection." As her career progressed past its early stages, Hill started building unusual models, many of which were insufficiently documented or had no existing original copies. Due to the scarce information frequently taken from patent applications, Hill often had to use her design skills to overcome missing information and shortcomings in the original designs in order to make fully functioning models. Her favorite model was the Blackburn agricultural engine of 1863. Hill needed to be resourceful and imaginative in various critical components, including the crankshaft, valve chest and eccentrics, boiler, steering, and front suspension. The models made at the beginning of Hill's career were given to family and friends, but she donated her more recent ones to the Institution of Mechanical Engineers. In addition to her acclaimed work as a model engineer, Cherry Hill also worked as a machinery designer for her family business McConnell-Hinds who made innovative hop-picking machinery. Cherry was also an inventor and had had several patents awarded to her, including the well known Crypton Synchro-check carburettor balancer (produced commercially by AC Delco), an air flowmeter device used for setting and balancing multiple carburettors on car and motorcycle engines. An expert on obscure 19th-century engineers, Hill explained that ‘Everyone has heard of Brunel and Stephenson, but there were a lot of very clever people in the background. I'm just interested in these people and how they thought about engineering.’ Models In the 1950s, Cherrie Hill began working on the Stuart Turner No 9 early 20th-century steam engine. She was ‘thrilled to bits’ after 18 months of work because she achieved her goal of getting the engine to work. The model won her a bronze medal at the International Model Engineering Exhibition. 
After that, she built an Allchin Royal Chester traction engine. Thanks to her father's suggestion, the model was built in a scale of 1:16, which was important because it became the scale she used in later models. The model won her a silver medal at the exhibition. However, Hill wasn’t satisfied and spent 7 years improving it after locating the full-size machine near Tonbridge, England. Later, she built a Stuart D10, a Burrell showman’s engine, and a red 1905 Merryweather fire engine, which got her on the cover of Model Engineer magazine along with the Allchin, increasing Hill’s recognition among model engineering enthusiasts. These were praised due to their obscurity, complexity, difficulty, and rarity. Cherry Hill’s Blackburn agricultural engine of 1857 was a model based on a design made when traction engines were in their early development, and many were impractical. The plans for the Blackburn were insufficient, so Hill had to create her own design to make it work. That model won a gold medal at the Bradbury Winter Memorial Trophy and a Duke of Edinburgh’s Award. Another exceptional model was the 1862 Gilletts & Allatt traction engine, partly because Hill designed and patented her own traction engine. It also won a gold medal, a Bradbury Winter Memorial Trophy, and The Duke of Edinburgh’s Award. A notable project of Cherry’s later years is the Nathaniel Grew ice locomotive, which was used in Russia to carry cargo across frozen lakes and rivers in the 1860s. It is made completely from steel, and the sled blades were constructed using conventional machining rather than CNC. Many of Cherry Hill’s award-winning models are exhibited at the Institution of Mechanical Engineers in London, England. Personal life Ivor Hill, also a model engineer, first saw Cherry on the cover of the 1968 Model Engineer magazine; he was so enamored that he declared, ‘I’m going to marry that girl’. He successfully courted her, and they were married and lived in the U.S. until his death. Cherry Hill died in Malvern on 4 December 2024, at the age of 93. She was survived by her sisters, Charmian and Rosalie, as well as their children. Awards Sir Henry Royce Trophy for the Pursuit of Excellence (1989 and 1995). MBE (Member of the British Empire) award (2000). Elected Companion of the Institution of Mechanical Engineers (2004). Honorary member of the Society of Model and Experimental Engineers (2004). Awarded nine different gold medals at the annual Model Engineer Exhibition in London. Awarded the Bradbury Winter Memorial Trophy eight times. Awarded the Aveling Barford Cup twice. Crebbin Memorial Trophy. Awarded the Championship Cup three times. Awarded The Duke of Edinburgh's Award nine times. Joe Martin Foundation Craftsman of the Year Award (2017). References External links The Remarkable Mechanical Models of Cherry Hill 1931 births 2024 deaths Alumni of the University of St Andrews Model makers Model engineers People from Malvern, Worcestershire Members of the Order of the British Empire
Cherry Hill (model engineer)
Physics
1,317
47,058,873
https://en.wikipedia.org/wiki/Neoboletus%20venenatus
Neoboletus venenatus, known until 2015 as Boletus venenatus, is a species of bolete fungus in the family Boletaceae native to Japan and China. It was transferred to the new genus Neoboletus by Chinese mycologists Gang Wu and Zhu L. Yang in 2015. Taxonomy Japanese mycologist Eiji Nagasawa described this species as Boletus venenatus in 1995. It is known in Japan as dokuyamadori or tahei-iguchi. Description The cap is dome-shaped initially, then convex to cushion-shaped, before flattening out in maturity, attaining diameters of , and can be various shades of yellow-grey, olive-brown or yellow-brown. The surface is dry and slightly furry when young, and the cap margin is curved inwards. The pale yellow flesh is thick under the cap and slowly turns pale blue on bruising. The pores are yellow to yellow-brown and stain dark blue quickly upon bruising. Covered in fine scales, the stipe is yellow-brown fading to pale yellow at the top, measuring tall by wide. It also stains pale blue on bruising. The mycelium is pale yellow. Distribution and habitat Neoboletus venenatus has been found in southwestern China, specifically Laojun Mountain in Yulong County in Yunnan province and Kangding County in Sichuan province, and Japan, specifically Hokkaido and central Honshu. It grows in subalpine regions, associated with conifers such as Abies, Picea and Tsuga. Toxicity Neoboletus venenatus is poisonous, causing severe gastrointestinal symptoms of nausea and recurrent vomiting, which can be severe enough to result in dehydration. Symptoms generally resolve in a few days. One toxic compound—bolevenine—was isolated and described by Matsuura and colleagues in 2007. References External links venenatus Fungi described in 1996 Fungi of China Fungi of Japan Poisonous fungi Fungus species
Neoboletus venenatus
Biology,Environmental_science
410
53,091,780
https://en.wikipedia.org/wiki/NGC%20398
NGC 398 is a lenticular galaxy located in the constellation Pisces. It was discovered on October 28, 1886, by Guillaume Bigourdan. It was described by Dreyer as "very faint, very small, stellar." References External links 0398 18861028 Pisces (constellation) 004090
NGC 398
Astronomy
68
36,932,167
https://en.wikipedia.org/wiki/Neutron%20star%20merger
A neutron star merger is the stellar collision of neutron stars. When two neutron stars fall into mutual orbit, they gradually spiral inward due to the loss of energy emitted as gravitational radiation. When they finally meet, their merger leads to the formation of either a more massive neutron star, or—if the mass of the remnant exceeds the Tolman–Oppenheimer–Volkoff limit—a black hole. The merger can create a magnetic field that is trillions of times stronger than that of Earth in a matter of one or two milliseconds. The immediate event creates a short gamma-ray burst visible over hundreds of millions, or even billions of light years. The merger of neutron stars momentarily creates an environment of such extreme neutron flux that the r-process can occur. This reaction accounts for the nucleosynthesis of around half of the isotopes in elements heavier than iron. The mergers also produce kilonovae, which are transient sources of isotropic longer wave electromagnetic radiation due to the radioactive decay of heavy r-process nuclei that are produced and ejected during the merger process. Kilonovae had been discussed as a possible r-process site since the reaction was first proposed in 1999, but the mechanism became widely accepted after multi-messenger event GW170817 was observed in 2017. Observed mergers On 17 August 2017, the LIGO and Virgo interferometers observed GW170817, a gravitational wave associated with the merger of a binary neutron star (BNS) system in NGC 4993, an elliptical galaxy in the constellation Hydra about 140 million light years away. GW170817 co-occurred with a short (roughly 2-second long) gamma-ray burst, GRB 170817A, first detected 1.7 seconds after the GW merger signal, and a visible light observational event first observed 11 hours afterwards, SSS17a. The co-occurrence of GW170817 with GRB 170817A in both space and time strongly implies that neutron star mergers create short gamma-ray bursts. The subsequent detection of Swope Supernova Survey event 2017a (SSS17a) in the area where GW170817 and GRB 170817A were known to have occurred—and its having the expected characteristics of a kilonova—strongly implies that neutron star mergers are responsible for kilonovae as well. In February 2018, the Zwicky Transient Facility began to track neutron star events via gravitational wave observation, as evidenced by "systematic samples of tidal disruption events". Later that year, astronomers reported that GRB 150101B, a gamma-ray burst event detected in 2015, may be directly related to GW170817 and associated with the merger of two neutron stars. The similarities between the two events, in terms of gamma ray, optical and x-ray emissions, as well as to the nature of the associated host galaxies, are "striking", suggesting the two separate events may both be the result of the merger of neutron stars, and both may be a kilonova, which may be more common in the universe than previously understood, according to the researchers. Also in October 2018, scientists presented a new way to use information from gravitational wave events (especially those involving the merger of neutron stars like GW170817) to determine the Hubble constant, which establishes the rate of expansion of the universe. The two earlier methods for finding the Hubble constant—one based on redshifts and another based on the cosmic distance ladder—disagree by about 10%. This difference, the Hubble tension, might be reconciled by using kilonovae as another type of standard candle.
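The "standard siren" idea referred to above can be sketched in a few lines: the gravitational-wave signal yields the luminosity distance to the source directly, the electromagnetic counterpart identifies the host galaxy, and the galaxy's recession velocity then gives a low-redshift Hubble-constant estimate H0 ≈ v / d. The Python fragment below is illustrative only; the distance uses the roughly 140 million light years quoted for NGC 4993, while the recession velocity is an assumed round number rather than a measured value.

MPC_PER_MLY = 1.0 / 3.2616          # megaparsecs per million light-years

def hubble_constant(velocity_km_s, distance_mpc):
    """Low-redshift Hubble-constant estimate in km/s/Mpc."""
    return velocity_km_s / distance_mpc

if __name__ == "__main__":
    distance_mpc = 140 * MPC_PER_MLY   # ~140 million light-years quoted for NGC 4993
    v_assumed = 3000.0                 # km/s, assumed Hubble-flow recession velocity
    print(f"H0 estimate: {hubble_constant(v_assumed, distance_mpc):.0f} km/s/Mpc")

With these assumed inputs the sketch returns a value near 70 km/s/Mpc; the published analyses additionally account for the source inclination, peculiar velocities, and measurement uncertainties.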
In April 2019, the LIGO and Virgo gravitational wave observatories announced the detection of GW190425, a candidate event that is, with a probability 99.94%, the merger of two neutron stars. Despite extensive follow-up observations, no electromagnetic counterpart could be identified. In December 2022, astronomers reported observing for 51 seconds, the first evidence of a long GRB associated with the merger of a "compact binary object", thus potentially including a BNS. Following this, (2019, 64s) and (2023, 35s) have been argued to belong to this emerging class of BNS as long GRB progenitor. The indirect reasoning includes co-observations of kilonovae, for example the detection of tellurium and lanthanide in the spectral aftermath of the 2023 event. XT2 (magnetar) In 2019, analysis of data from the Chandra X-ray Observatory revealed another binary neutron star merger at a distance of 6.6 billion light years, an x-ray signal called XT2. The merger produced a magnetar; its emissions could be detected for several hours. Effect on Earth Neutron star mergers emit an unusually diverse range of radiations which can be harmful to life on earth, including the initial short gamma-ray burst, emission from the radioactive decay of heavy elements scattered by the sGRB cocoon, the sGRB afterglow itself, and cosmic rays accelerated by the blast. In order of arrival, the sGRB and afterglow photons arrive first after the (harmless) gravitational waves, with the cosmic ray particles arriving hundreds to thousands of years later. The lethal zone of the highly directional sGRB component extends hundreds of parsecs along the focus of its beam. These high-energy gamma ray photons would extinguish life directly, through thermal stress, molecular breakdown, and terminal radiation damage to both plants and animals. Apart from an unlucky hit by a focused beam, any neutron star merger occurring within 10 parsecs of Earth would also result in conclusive human extinction. The ejected material sweeps up the interstellar medium and creates a supernova-remnant-like bubble holding a lethal dose of cosmic rays. If the Earth were to be engulfed by the remnant, these cosmic rays would destroy the ozone layer, exposing Earth's biome to fatal levels of UVB radiation from the Sun. They could also interact with the atmosphere, yielding weakly-interacting muons. The flux density of these generated particles would be sufficient to sterilize the planet, penetrating even deep into caves and underwater. The danger to life lies in the particles' ability to disrupt DNA, causing birth defects and mutations. Relative to supernovae, binary neutron star (BNS) mergers influence about the same volume of space, but are thought to be much rarer, and their most dangerous sGRB component requires that the beam be precisely oriented towards the Earth. Accordingly, the overall threat of a BNS event to human extinction is extremely low. Distribution of heavy metals Neutron star mergers are rare, so most stars will form out of gas clouds which have few r-process metals. Our own solar system, however, did form from a gas cloud enriched with heavy metals. This suggests that metals heavier than iron, such as the platinum group metals, the rare earth elements, and the radioactive elements will be rarer in most solar systems as compared to our own. See also Tolman–Oppenheimer–Volkoff limit References External links Related videos (): 2017 in science 2017 in outer space Impact events Merger Stellar astronomy
Neutron star merger
Astronomy
1,486
1,394,087
https://en.wikipedia.org/wiki/Glomus%20cell
Glomus cells are the cell type mainly located in the carotid bodies and aortic bodies. Glomus type I cells are peripheral chemoreceptors which sense the oxygen, carbon dioxide and pH levels of the blood. When there is a decrease in the blood's pH, a decrease in oxygen tension (pO2), or an increase in carbon dioxide tension (pCO2), the carotid bodies and the aortic bodies signal the dorsal respiratory group in the medulla oblongata to increase the volume and rate of breathing. The glomus cells have a high metabolic rate and good blood perfusion and are therefore sensitive to changes in arterial blood gas tension. Glomus type II cells are sustentacular cells with a supportive function similar to that of glial cells.

Structure

Signalling within the chemoreceptors is thought to be mediated by the release of neurotransmitters by the glomus cells, including dopamine, noradrenaline, acetylcholine, substance P, vasoactive intestinal peptide and enkephalins. Vasopressin has been found to inhibit the response of glomus cells to hypoxia, presumably because the usual response to hypoxia is vasodilation, which should be avoided in cases of hypovolemia. Furthermore, glomus cells are highly responsive to angiotensin II through AT1 receptors, providing information about the body's fluid and electrolyte status.

Function

Glomus type I cells are chemoreceptors which monitor arterial blood for the partial pressure of oxygen (pO2), the partial pressure of carbon dioxide (pCO2) and pH. They are secretory sensory neurons that release neurotransmitters in response to hypoxemia (low pO2), hypercapnia (high pCO2) or acidosis (low pH). Signals are transmitted to the afferent nerve fibers of the sinus nerve and may include dopamine, acetylcholine, and adenosine. This information is sent to the respiratory center and helps the brain to regulate breathing; the overall feedback logic is illustrated in the sketch following the Research section below.

Innervation

The glomus type I cells of the carotid body are innervated by sensory neurons found in the inferior ganglion of the glossopharyngeal nerve; the carotid sinus nerve is the branch of the glossopharyngeal nerve which innervates them. The glomus type I cells of the aortic body, by contrast, are innervated by sensory neurons found in the inferior ganglion of the vagus nerve. Centrally, the axons of the neurons which innervate glomus type I cells synapse in the caudal portion of the solitary nucleus in the medulla. Glomus type II cells are not innervated.

Development

Glomus type I cells are embryonically derived from the neural crest. In the carotid body, the respiratory chemoreceptors need a period of time postnatally to reach functional maturity; this maturation period is known as resetting. At birth the chemoreceptors express a low sensitivity to lack of oxygen, but this increases over the first few days or weeks of life. The mechanisms underlying the postnatal maturation of chemotransduction remain unclear.

Clinical significance

Clusters of glomus cells, of which the carotid bodies and aortic bodies are the most important, are called non-chromaffin or parasympathetic paraganglia. They are also present along the vagus nerve, in the inner ears, in the lungs, and at other sites. Neoplasms of glomus cells are known as paragangliomas, among other names; they are generally non-malignant.

Research

The autotransplantation of glomus cells of the carotid body into the striatum, a nucleus in the forebrain, has been investigated as a cell-based therapy for people with Parkinson's disease.
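As a purely illustrative aid, not part of the source article, the feedback logic described under Function above can be sketched in a few lines of Python. The thresholds, the linear gains, and the function name ventilatory_drive are invented for demonstration; real glomus-cell responses are nonlinear and interact with central chemoreceptors in the brainstem.

    # Illustrative sketch only: the drive to breathe rises when arterial O2 falls,
    # CO2 rises, or pH falls, mirroring the glomus type I cell stimuli described above.
    # The reference points (pO2 80 mmHg, pCO2 40 mmHg, pH 7.35) are typical normal values;
    # the linear gains are arbitrary and chosen purely for demonstration.

    def ventilatory_drive(pO2_mmHg: float, pCO2_mmHg: float, pH: float) -> float:
        """Return a dimensionless 'drive to breathe' relative to a resting baseline of 1.0."""
        drive = 1.0
        if pO2_mmHg < 80:            # hypoxemia increases the drive
            drive += (80 - pO2_mmHg) * 0.02
        if pCO2_mmHg > 40:           # hypercapnia increases the drive
            drive += (pCO2_mmHg - 40) * 0.05
        if pH < 7.35:                # acidosis increases the drive
            drive += (7.35 - pH) * 5.0
        return drive

    print(ventilatory_drive(60, 50, 7.30))   # hypoxic, hypercapnic, acidotic: drive well above baseline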
See also

List of distinct cell types in the adult human body

References

Histology
Neuroendocrine cells
Glomus cell
Chemistry
865