| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
1,601,887 | https://en.wikipedia.org/wiki/Mount%20Wachusett | Mount Wachusett is a mountain in Massachusetts. It straddles the towns of Princeton and Westminster, in Worcester County. It is the highest point in Massachusetts east of the Connecticut River. The mountain is named after a Native American term meaning "near the mountain" or "mountain place". The mountain is a popular hiking and skiing destination (see Wachusett Mountain Ski Area). An automobile road, open spring to fall, ascends to the summit. Views from the top of Mount Wachusett include Mount Monadnock to the north, Mount Greylock to the west, southern Vermont to the northwest, and Boston to the east. The mountain is traversed by the Midstate Trail. It is also home to the Wachusett Mountain State Reservation.
A band of old growth forest along rock ledges below the summit supports trees from 150 to 370 years old. Covering , it is the largest known old growth forest east of the Connecticut River in Massachusetts.
Geography
Mount Wachusett is a (formerly) glaciated monadnock: a single mountain rising from a relatively flat landscape. Glacial activity that shaped the mountain can be seen at Balance Rock on the northeast side of the mountain, where two large boulders were stacked one on top of the other by moving glaciers thousands of years ago.
Mount Wachusett is bordered to the south by Little Wachusett Mountain and Brown Hill, to the north by Church Rock, to the east by Pine Hill, and to the northeast by the Crow Hills. The nearest mountain of comparable size is Mount Watatic, 1,832 feet (558 m), to the north on the New Hampshire border in Ashburnham, Massachusetts.
The west side of Mount Wachusett drains into the east branch of the Ware River, thence into the Chicopee River, the Connecticut River, and Long Island Sound. The south side drains into the Quinapoxet River, the Nashua River, thence the Merrimack River and the Atlantic Ocean. The east side drains into the Stillwater River, thence the Nashua River. The north side drains into the Nashua River through a series of small reservoirs.
Ski area and recreation
Mount Wachusett is home to a 25-trail ski area serviced by 3 high-speed quads, 1 fixed-grip triple and 3 carpet lifts. It features approximately of vertical, a base lodge, 100% snowmaking and night skiing on 18 trails. The mountain also maintains a terrain park and a jump called the Main Event. Due to its location, it is a popular skiing destination for residents of nearby Worcester and Boston. The ski area is located within the boundaries of the Wachusett Mountain State Reservation on a lease parcel on the northern slopes of the mountain.
An annual 6.2-mile (10 km) road race, sponsored by the Central Mass Striders, is held each May.
Stands of old-growth hardwood forest on Mount Wachusett became the object of a 2003 court ruling in favor of the Commonwealth of Massachusetts, in joint contract with the ski area, regarding plans for a ski slope expansion into an environmental buffer zone around the old growth stand. The old growth forest contains trees over 350 years old; the buffer zone contained mature trees about half that age. The Sierra Club and other conservation organizations criticized the ruling, and two members of Earth First! staged a sit-in protest by climbing into the crowns of several of the trees in the area slated to be clear-cut. As of 2007, wording on the website of the Wachusett Mountain Ski Area included strong language prohibiting skiers and snowboarders from entering the old growth area: "Anyone found entering old growth areas will have their lift ticket revoked. Subsequent offenses will be subject to fines."
History
Before European colonialism, Wachusett was the home of the Nipmuc tribe. The tribe has been confined to a four-and-a-half acre reservation outside Grafton, Massachusetts, which began as a praying town in 1654. The general feeling towards the ski resort among Nipmucs is that it is an injustice. Some hope to someday use the mountain as a proper Nipmuc cultural center. During King Philip's War in 1676, Native Americans brought their captive, Mary Rowlandson, to Wachusett to release her to the colonists at Redemption Rock.
See also
The name Wachusett has been adopted for the names of institutions, businesses, structures, geographic features, and other miscellaneous uses:
Wachusett Reservoir
Wachusett Mountain State Reservation
Wachusett Brewing Company
Wachusett (MBTA station)
Mount Wachusett Community College
Wachusett Regional High School
USS Wachusett
Wachusett Road in the Town of Woodway, Washington
Wachusett Potato Chips Company
It is also the title of Henry David Thoreau's A Walk to Wachusett, which describes the transcendentalist author's experiences during his four-day walk from Concord to the mountain and back.
References
External links
Massachusetts DCR Wachusett Mountain State Reservation page
Wachusett Brewing Company
The view from Mount Wachusett
Princeton history of the Mountain
People of the Wachusett: Greater New England in History and Memory, 1630-1860
Wachusett, Mount
Mountains of Worcester County, Massachusetts
Princeton, Massachusetts
Westminster, Massachusetts
Old-growth forests | Mount Wachusett | [
"Biology"
] | 1,068 | [
"Old-growth forests",
"Ecosystems"
] |
1,601,956 | https://en.wikipedia.org/wiki/Gay%20agenda | "Gay agenda" or "homosexual agenda" is a pejorative term used by sectors of the Christian religious right as a disparaging way to describe the advocacy of cultural acceptance and normalization of non-heterosexual sexual orientations and relationships. The term originated among social conservatives in the United States and has been adopted in nations with active anti-LGBT movements such as Hungary, Uganda, Russia and Turkey.
The term refers to efforts to change government policies and laws on LGBT rights–related issues. Additionally, it has been used by social conservatives and others to describe alleged goals of LGBT rights activists, such as recruiting heterosexuals into what conservatives term a "homosexual lifestyle".
Origins and usage
Origins
In the United States, the phrase "gay agenda" was popularized by a video series produced by a California evangelical religious group called Springs of Life Ministries. The first video of the series, The Gay Agenda, was released in 1992 and distributed to hundreds of Christian right organizations.
Tens of thousands of copies of the video were sold, it was distributed to the United States Congress, and Commandant of the Marine Corps Carl Mundy Jr. gave it to the other members of the Joint Chiefs of Staff.
In 1992, the Oregon Citizens Alliance (OCA) used the video in their campaign for Oregon Ballot Measure 9, opposing what the OCA called "special rights" for gays, lesbians, and bisexuals.
The Gay Agenda was followed by three other video productions made available through Christian right organizations and containing interviews with opponents of LGBT rights, intended to expose the lesbian and gay movement's secret plans for America: The Gay Agenda in Public Education (1993), The Gay Agenda: March on Washington (1993), and a feature-length follow-up to the original, Stonewall: 25 Years of Deception (1994).
Usage in the United States
The term "gay agenda" or "radical gay agenda" has been used by members of the Christian right to refer to efforts to change government policies and laws on lesbian, gay, bisexual, and transgender (LGBTQ) issues, for example, same-sex marriage and civil unions, LGBT adoption, recognizing sexual orientation as a protected civil rights minority classification, LGBT military participation, inclusion of LGBT history and themes in public education, introduction of anti-bullying legislation to protect LGBT minors—as well as non-governmental campaigns and individual actions that increase visibility and cultural acceptance of LGBT people, relationships, and identities. The term has also been used by some social conservatives to describe alleged goals of LGBT rights activists, such as supposed recruitment of heterosexuals into a "homosexual lifestyle".
Columnist James Kirchick writes that the idea of a "homosexual agenda" to subvert American cultural and family institutions largely replaced earlier panic over the "Homintern", an alleged gay conspiracy to undermine the U.S. government.
The term has been used in response to efforts to include protections for LGBT people under local and state anti-discrimination laws, as well as U.S. Supreme Court cases that granted new rights to LGBT individuals, such as Lawrence v. Texas and Obergefell v. Hodges, which respectively held that private acts of consensual sex between same-sex couples and the right of same-sex couples to marry were fundamental rights guaranteed under the Equal Protection Clause of the U.S. Constitution.
In his 2003 dissent in Lawrence, U.S. Supreme Court Justice Antonin Scalia said the court had become embroiled in a culture war by seeking to protect homosexuals from discrimination, writing that the decision reflected a "law-profession culture, that has largely signed on to the so-called homosexual agenda".
Conservative Christian groups such as the Alliance Defending Freedom (ADF), the Catholic Family & Human Rights Institute (C-Fam), and the World Congress of Families (WCF) have used the term in their literature.
According to its website, ADF has litigated numerous anti–gay rights cases in countries outside the US, in order to combat the "homosexual agenda" which it claims will "destroy marriage and undermine religious freedom".
ADF president Alan Sears published a book in 2003 titled The Homosexual Agenda: Exposing the Principal Threat to Religious Freedom Today, which argues that overturning anti-sodomy laws would lead to the legalization of pedophilia, incest, polygamy, and bestiality.
American conservative Christian groups such as the Family Research Council (FRC) have cited fears of a "homosexual agenda" in lobbying against extending hate-crime legislation to cover acts motivated by bias against a person's sexual orientation or gender identity,
as well as public-school curricula about homosexuality introduced in an effort to reduce bullying.
American conservative Christian organizations have continued public screenings of videos alleging a homosexual agenda as of 2022.
Usage outside the United States
Africa
American Christian right organizations that are losing acceptance among Americans have had more success promoting the notion of a gay agenda in Africa. Examples include Human Life International, American Center for Law & Justice and Family Watch International. Zambian scholar Kapya John Kaoma considers these organizations colonial powers, working to expand American dominance of Africa. In Africa, fear of a "Western gay agenda" is frequently used by opponents of LGBT rights.
The concept was used in a series of talks in 2009 by American evangelical Christians in Kampala. A speaker at one such workshop said the government "feels it is necessary to draft a new law that deals comprehensively with the issue of homosexuality and [...] takes into account the international gay agenda." The eventual result of this campaign was the Anti-Homosexuality Bill of 2009 (nicknamed the "Kill the Gays Bill"), which imposed the death penalty for homosexual behavior; this was altered to life imprisonment after other countries, including the U.S., threatened to withdraw foreign aid, and the law was later ruled invalid by the Constitutional Court of Uganda.
In 2021, the Ghana Catholic Bishops' Conference called for LGBT rights organizations to be kicked out of their office space in Accra because of the belief that they promote the homosexual agenda.
Europe
In Hungary, László Toroczkai, former vice president of the far-right political party Jobbik, has complained of the perceived "homosexual agenda".
Toroczkai introduced a law banning public displays of affection by gay people in 2017.
Central America
Before the decriminalization of homosexuality in Belize, the LGBT and anti-AIDS organization United Belize Advocacy Movement (UNIBAM) was lambasted in the Amandala newspaper and by American evangelicals who accused the group of trying to bring the "gay agenda" to the country.
International organizations
In 2019, two prominent Roman Catholic cardinals – Raymond Leo Burke and Walter Brandmuller – wrote an open letter to Pope Francis calling for an end of "the plague of the homosexual agenda" to which they in part attributed the sexual abuse crisis engulfing the Catholic Church. They claimed the agenda was spread by "organized networks" protected by a "conspiracy of silence".
Speakers from many nations inveigh against the perceived homosexual agenda at the World Congress of Families annual summit, a focal point of the worldwide "pro-family" movement.
Responses
The Gay & Lesbian Alliance Against Defamation (GLAAD) describes the terms "gay agenda" and "homosexual agenda" as a "rhetorical invention of anti-gay extremists seeking to create a climate of fear by portraying the pursuit of civil rights for LGBT people as sinister".
Some writers have described the term as pejorative.
Commentators have remarked on a lack of realism and veracity to the idea of a gay agenda per se.
Such campaigns based on a presumed "gay agenda" have been described as anti-gay propaganda by researchers and critics.
At a press conference on December 22, 2010, U.S. Representative Barney Frank said that the "gay agenda" is
Satire
A satirical 1987 essay by Michael Swift entitled "" appeared in Gay Community News, describing a scenario in which homosexual men dominate American society and suppress all things heterosexual. The opening line, which read "This essay is an outré, madness, a tragic, cruel fantasy, an eruption of inner rage, on how the oppressed desperately dream of being the oppressor", was omitted when the essay was reprinted in Congressional Record and cited by later religious right publications.
The essay has often been cited by conservative Christian authors as proof of a secretive conspiracy to corrupt American youth and subvert the nuclear family.
The term is sometimes used satirically as a counterfoil by people who would normally find the term offensive, such as the spoof agenda found on the Betty Bowers website, and as the name of a stand-up comedy show in Prague that is a fundraiser for AIDS relief efforts.
On a 2007 episode of The Daily Show, Jon Stewart defined the gay agenda as "gay marriage, civil rights protection, Fleet Week expanded to Fleet Year, Federal Emergency Management Agency (FEMA) assistance for when it's raining men, Kathy Griffin to host everything and a nationwide ban on pleated pants".
Reappropriation
Some LGBT activists seek to reappropriate the term "gay agenda" for their own use.
In 2008, openly gay Bishop Gene Robinson declared that "Jesus is the agenda, the homosexual agenda in the Episcopal Church" and that the "homosexual agenda [...] is Jesus".
A political action committee (PAC) named Agenda PAC was inspired by the notion of the gay agenda. The PAC is led by LGBT politicians including Malcolm Kenyatta and Megan Hunt, and advocates for greater LGBT political representation.
American rapper Lil Nas X thanked the "gay agenda" in his acceptance speech at the 2021 MTV Video Music Awards.
See also
References
Further reading
External links
Historical text cited by social conservatives as evidence of a "gay agenda"
Political pejoratives
Homophobia
LGBTQ rights movement
Political terminology of the United States
Conspiracy theories in the United States
LGBTQ-related conspiracy theories
Harassment and bullying
Discrimination against LGBTQ people in the United States
2020s anti-LGBTQ movement in the United States
Social conservatism in the United States | Gay agenda | [
"Biology"
] | 2,052 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
1,601,998 | https://en.wikipedia.org/wiki/Organic%20field-effect%20transistor | An organic field-effect transistor (OFET) is a field-effect transistor using an organic semiconductor in its channel. OFETs can be prepared either by vacuum evaporation of small molecules, by solution-casting of polymers or small molecules, or by mechanical transfer of a peeled single-crystalline organic layer onto a substrate. These devices have been developed to realize low-cost, large-area electronic products and biodegradable electronics. OFETs have been fabricated with various device geometries. The most commonly used device geometry is bottom gate with top drain and source electrodes, because this geometry is similar to that of the silicon thin-film transistor (TFT), which uses thermally grown SiO2 as the gate dielectric. Organic polymers, such as poly(methyl-methacrylate) (PMMA), can also be used as the dielectric. One of the benefits of OFETs, especially compared with inorganic TFTs, is their unprecedented physical flexibility, which leads to biocompatible applications, for instance in the future health care industry of personalized biomedicines and bioelectronics.
In May 2007, Sony reported the first full-color, video-rate, flexible, all plastic display, in which both the thin-film transistors and the light-emitting pixels were made of organic materials.
History
The concept of a field-effect transistor (FET) was first proposed by Julius Edgar Lilienfeld, who received a patent for his idea in 1930. He proposed that a field-effect transistor behaves as a capacitor with a conducting channel between a source and a drain electrode. Applied voltage on the gate electrode controls the amount of charge carriers flowing through the system.
The first insulated-gate field-effect transistor was designed and prepared by Frosch and Derrick in 1957, who, using masking and predeposition, were able to manufacture silicon dioxide transistors and showed that silicon dioxide insulated and protected silicon wafers and prevented dopants from diffusing into the wafer. Later, following this research, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959 and successfully demonstrated a working MOS device with their Bell Labs team in 1960. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D’Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. Also known as the MOS transistor, the MOSFET is the most widely manufactured device in the world.
The concept of a thin-film transistor (TFT) was first proposed by John Wallmark who in 1957 filed a patent for a thin film MOSFET in which germanium monoxide was used as a gate dielectric. Thin-film transistor was developed in 1962 by Paul K. Weimer who implemented Wallmark's ideas. The TFT is a special type of MOSFET.
Rising costs of materials and manufacturing, as well as public interest in more environmentally friendly electronics materials, have supported the development of organic-based electronics in more recent years. In 1986, Mitsubishi Electric researchers H. Koezuka, A. Tsumura and Tsuneya Ando reported the first organic field-effect transistor, based on a polymer of thiophene molecules. The thiophene polymer is a type of conjugated polymer that is able to conduct charge, eliminating the need to use expensive metal oxide semiconductors. Additionally, other conjugated polymers have been shown to have semiconducting properties. OFET design has also improved in the past few decades. Many OFETs are now designed based on the thin-film transistor (TFT) model, which allows the devices to use less conductive materials in their design. Improvements made to these models in the past few years have targeted field-effect mobility and on–off current ratios.
Materials
One common feature of OFET materials is the inclusion of an aromatic or otherwise conjugated π-electron system, facilitating the delocalization of orbital wavefunctions. Electron withdrawing groups or donating groups can be attached that facilitate hole or electron transport.
OFETs employing many aromatic and conjugated materials as the active semiconducting layer have been reported, including small molecules such as rubrene, tetracene, pentacene, diindenoperylene, perylenediimides, tetracyanoquinodimethane (TCNQ), and polymers such as polythiophenes (especially poly(3-hexylthiophene) (P3HT)), polyfluorene, polydiacetylene, poly(2,5-thienylene vinylene), poly(p-phenylene vinylene) (PPV).
The field is very active, with newly synthesized and tested compounds reported weekly in prominent research journals. Many review articles exist documenting the development of these materials.
Rubrene-based OFETs show the highest carrier mobility, 20–40 cm2/(V·s). Another popular OFET material is pentacene, which has been used since the 1980s, but with mobilities 10 to 100 times lower (depending on the substrate) than rubrene. The major problem with pentacene, as well as many other organic conductors, is its rapid oxidation in air to form pentacene-quinone. However, if the pentacene is preoxidized, and the pentacene-quinone thus formed is used as the gate insulator, then the mobility can approach the rubrene values. This pentacene oxidation technique is akin to the silicon oxidation used in silicon electronics.
Polycrystalline tetrathiafulvalene and its analogues result in mobilities in the range 0.1–1.4 cm2/(V·s). However, the mobility exceeds 10 cm2/(V·s) in solution-grown or vapor-transport-grown single-crystalline hexamethylene-tetrathiafulvalene (HMTTF). The ON/OFF voltage is different for devices grown by those two techniques, presumably due to the higher processing temperatures used in vapor-transport growth.
All the above-mentioned devices are based on p-type conductivity. N-type OFETs are still poorly developed. They are usually based on perylenediimides or fullerenes or their derivatives, and show electron mobilities below 2 cm2/(V·s).
Device design of organic field-effect transistors
Three essential components of field-effect transistors are the source, the drain and the gate. Field-effect transistors usually operate as a capacitor composed of two plates. One plate works as a conducting channel between two ohmic contacts, which are called the source and the drain contacts. The other plate works to control the charge induced into the channel, and it is called the gate. The carriers in the channel move from the source to the drain; hence the relationship between these three components is that the gate controls the carrier movement from the source to the drain.
When this capacitor concept is applied to the device design, various devices can be built up based on the difference in the controller – i.e. the gate. This can be the gate material, the location of the gate with respect to the channel, how the gate is isolated from the channel, and what type of carrier is induced by the gate voltage into channel (such as electrons in an n-channel device, holes in a p-channel device, and both electrons and holes in a double injection device).
Classified by the properties of the carrier, three types of FETs are shown schematically in Figure 1. They are MOSFET (metal–oxide–semiconductor field-effect transistor), MESFET (metal–semiconductor field-effect transistor) and TFT (thin-film transistor).
MOSFET
The most prominent and widely used FET in modern microelectronics is the MOSFET (metal–oxide–semiconductor FET). There are different kinds in this category, such as MISFET (metal–insulator–semiconductor field-effect transistor) and IGFET (insulated-gate FET). A schematic of a MISFET is shown in Figure 1a. The source and the drain are connected by a semiconductor and the gate is separated from the channel by a layer of insulator. If no bias (potential difference) is applied to the gate, band bending is induced due to the energy difference between the conduction band of the metal and the Fermi level of the semiconductor. Therefore, a higher concentration of holes is formed at the interface of the semiconductor and the insulator. When a sufficiently positive bias is applied to the gate contact, the bent band becomes flat. If a larger positive bias is applied, band bending in the opposite direction occurs and the region close to the insulator–semiconductor interface becomes depleted of holes, forming a depletion region. At an even larger positive bias, the band bending becomes so large that the Fermi level at the interface of the semiconductor and the insulator comes closer to the bottom of the conduction band than to the top of the valence band; this forms an inversion layer of electrons, providing the conducting channel. Finally, it turns the device on.
MESFET
The second type of device is described in Figure 1b. The only difference from the MISFET is that the n-type source and drain are connected by an n-type region. In this case, the depletion region extends over the whole n-type channel at zero gate voltage in a normally "off" device (this is similar to the larger positive bias in the MISFET case). In a normally "on" device, a portion of the channel is not depleted, which leads to passage of a current at zero gate voltage.
TFT
A thin-film transistor (TFT) is illustrated in Figure 1c. Here the source and drain electrodes are deposited directly onto the conducting channel (a thin layer of semiconductor); then a thin film of insulator is deposited between the semiconductor and the metal gate contact. This structure suggests that there is no depletion region to separate the device from the substrate. At zero bias, electrons are expelled from the surface due to the Fermi-level energy difference between the semiconductor and the metal, which leads to band bending in the semiconductor. In this case, there is no carrier movement between the source and drain. When a positive gate bias is applied, the accumulation of electrons at the interface bends the bands in the opposite direction and lowers the conduction band with respect to the Fermi level of the semiconductor. A highly conductive channel then forms at the interface (shown in Figure 2).
OFET
OFETs adopt the architecture of TFT. With the development of the conducting polymer, the semiconducting properties of small conjugated molecules have been recognized. The interest in OFETs has grown enormously in the past ten years. The reasons for this surge of interest are manifold. The performance of OFETs, which can compete with that of amorphous silicon (a-Si) TFTs with field-effect mobilities of 0.5–1 cm2 V−1 s−1 and ON/OFF current ratios (which indicate the ability of the device to shut down) of 10^6–10^8, has improved significantly. Currently, thin-film OFET mobility values of 5 cm2 V−1 s−1 in the case of vacuum-deposited small molecules
and 0.6 cm2 V−1 s−1 for solution-processed polymers have been reported. As a result, there is now a greater industrial interest in using OFETs for applications that are currently incompatible with the use of a-Si or other inorganic transistor technologies. One of their main technological attractions is that all the layers of an OFET can be deposited and patterned at room temperature by a combination of low-cost solution-processing and direct-write printing, which makes them ideally suited for realization of low-cost, large-area electronic functions on flexible substrates.
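The mobility and ON/OFF figures quoted above are typically extracted from measured transfer characteristics. The sketch below is illustrative only: it assumes the standard gradual-channel saturation-regime relation I_D = (W·C_i/2L)·μ·(V_G − V_T)^2, which is not stated in this article, and all device dimensions and data in it are invented for the example.

```python
import numpy as np

def saturation_mobility(vg, id_sat, width, length, c_i):
    """Extract field-effect mobility and threshold voltage from a
    saturation-regime transfer curve, assuming
        I_D = (W * C_i / (2 * L)) * mu * (V_G - V_T)**2,
    i.e. a straight-line fit of sqrt(I_D) against V_G."""
    slope, intercept = np.polyfit(vg, np.sqrt(id_sat), 1)
    mu = 2 * length * slope**2 / (width * c_i)  # cm^2/(V*s) if lengths in cm, c_i in F/cm^2
    v_t = -intercept / slope                    # threshold voltage in V
    return mu, v_t

# Hypothetical device: 1 mm wide, 50 um long channel, ~100 nm SiO2 gate dielectric
W, L, C_i = 0.1, 5e-3, 3.45e-8               # cm, cm, F/cm^2
vg = np.linspace(-10.0, -40.0, 16)           # gate voltages for a p-type device, V

# Synthetic "measured" data generated from the same model (mu = 1 cm^2/(V*s), V_T = -5 V)
id_meas = (W * C_i / (2 * L)) * 1.0 * (vg - (-5.0)) ** 2

mu, v_t = saturation_mobility(vg, id_meas, W, L, C_i)
print(f"mobility = {mu:.2f} cm^2/(V s), V_T = {v_t:.1f} V")  # recovers 1.00 and -5.0
```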
Device preparation
Thermally oxidized silicon is a traditional substrate for OFETs where the silicon dioxide serves as the gate insulator. The active FET layer is usually deposited onto this substrate using either (i) thermal evaporation, (ii) coating from organic solution, or (iii) electrostatic lamination. The first two techniques result in polycrystalline active layers; they are much easier to produce, but result in relatively poor transistor performance. Numerous variations of the solution coating technique (ii) are known, including dip-coating, spin-coating, inkjet printing and screen printing. The electrostatic lamination technique is based on manual peeling of a thin layer off a single organic crystal; it results in a superior single-crystalline active layer, yet it is more tedious. The thickness of the gate oxide and the active layer is below one micrometer.
Carrier transport
The carrier transport in OFET is specific for two-dimensional (2D) carrier propagation through the device. Various experimental techniques were used for this study, such as the Haynes–Shockley experiment on the transit times of injected carriers, the time-of-flight (TOF) experiment for the determination of carrier mobility, the pressure-wave propagation experiment for probing electric-field distribution in insulators, the organic monolayer experiment for probing orientational dipolar changes, optical time-resolved second harmonic generation (TRM-SHG), etc. Whereas carriers propagate through polycrystalline OFETs in a diffusion-like (trap-limited) manner, they move through the conduction band in the best single-crystalline OFETs.
The most important parameter of OFET carrier transport is carrier mobility. Its evolution over the years of OFET research is shown in the graph for polycrystalline and single crystalline OFETs. The horizontal lines indicate the comparison guides to the main OFET competitors – amorphous (a-Si) and polycrystalline silicon. The graph reveals that the mobility in polycrystalline OFETs is comparable to that of a-Si whereas mobility in rubrene-based OFETs (20–40 cm2/(V·s)) approaches that of best poly-silicon devices.
Development of accurate models of charge carrier mobility in OFETs is an active field of research. Fishchuk et al. have developed an analytical model of carrier mobility in OFETs that accounts for carrier density and the polaron effect.
While average carrier density is typically calculated as function of gate voltage when used as an input for carrier mobility models, modulated amplitude reflectance spectroscopy (MARS) has been shown to provide a spatial map of carrier density across an OFET channel.
Light-emitting OFETs
Because an electric current flows through such a transistor, it can be used as a light-emitting device, thus integrating current modulation and light emission. In 2003, a German group reported the first organic light-emitting field-effect transistor (OLET). The device structure comprises interdigitated gold source- and drain electrodes and a polycrystalline tetracene thin film. Both positive charges (holes) as well as negative charges (electrons) are injected from the gold contacts into this layer leading to electroluminescence from the tetracene.
See also
Charge modulation spectroscopy
OLED
Organic electronics
Oxide thin-film transistor
Thin film transistor
References
Molecular electronics
Organic electronics
Flexible displays | Organic field-effect transistor | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 3,283 | [
"Molecular physics",
"Molecular electronics",
"Flexible displays",
"Nanotechnology",
"Planes (geometry)",
"Thin films"
] |
1,602,252 | https://en.wikipedia.org/wiki/Source%20counts | The source counts distribution of radio-sources from a radio-astronomical survey is the cumulative distribution of the number of sources (N) brighter than a given flux density (S). As it is usually plotted on a log-log scale its distribution is known as the log N – log S plot. It is one of several cosmological tests that were conceived in the 1930s to check the viability of and compare new cosmological models.
Early work to catalogue radio sources aimed to determine the source count distribution as a discriminating test of different cosmological models. For example, a uniform distribution of radio sources at low redshift, such as might be found in a 'steady-state Euclidean universe,' would produce a slope of −1.5 in the cumulative distribution of log(N) versus log(S).
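The −1.5 slope can be recovered with a short calculation. A sketch, assuming for illustration a population of sources of identical luminosity L spread uniformly (number density n_0) through static Euclidean space:

```latex
% Inverse-square law: flux received from a source at distance r
S = \frac{L}{4\pi r^2}
  \quad\Longrightarrow\quad
  r = \left(\frac{L}{4\pi S}\right)^{1/2}

% Number of sources closer than r (i.e. brighter than S) for uniform density n_0
N(>S) = \frac{4}{3}\pi r^3 n_0 \;\propto\; S^{-3/2}

% so on a log-log plot
\frac{d\log N}{d\log S} = -\frac{3}{2}
```

Counts steeper than this, as claimed for the 2C and later Cambridge surveys, therefore indicate more faint (distant) sources than such a uniform, non-evolving population would supply.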
Data from the early Cambridge 2C survey (published 1955) apparently implied a (log(N), log(S)) slope of nearly −3.0. This appeared to invalidate the steady state theory of Fred Hoyle, Hermann Bondi and Thomas Gold. Unfortunately many of these weaker sources were subsequently found to be due to 'confusion' (the blending of several weak sources in the side-lobes of the interferometer, producing a stronger response).
By contrast, analysis from the contemporaneous Mills Cross data (by Slee and Mills) were consistent with an index of −1.5.
Later and more accurate surveys from Cambridge, 3C, 3CR, and 4C, also showed source count slopes steeper than −1.5, though by a smaller margin than 2C. This convinced some cosmologists that the steady state theory was wrong, although residual problems with confusion provided some defense for Hoyle and his colleagues.
The immediate interest in testing the steady-state theory through source-counts was reduced by the discovery of the 3K microwave background radiation in the mid-1960s, which essentially confirmed the Big-Bang model.
Later radio survey data have shown a complex picture — the 3C and 4C claims appear to hold up, while at fainter levels the source counts flatten substantially below a slope of −1.5. This is now understood to reflect the effects of both density and luminosity evolution of the principal radio sources over cosmic timescales.
See also
Tolman surface brightness test
References
Physical cosmology | Source counts | [
"Physics",
"Astronomy"
] | 488 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
1,602,490 | https://en.wikipedia.org/wiki/Extensive-form%20game | In game theory, an extensive-form game is a specification of a game allowing (as the name suggests) for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the (possibly imperfect) information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix.
Finite extensive-form games
Some authors, particularly in introductory textbooks, initially define the extensive-form game as being just a game tree with payoffs (no imperfect or incomplete information), and add the other elements in subsequent chapters as refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present upfront the finite extensive-form games as (ultimately) constructed here. This general definition was introduced by Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation from , an n-player extensive-form game thus consists of the following:
A finite set of n (rational) players
A rooted tree, called the game tree
Each terminal (leaf) node of the game tree has an n-tuple of payoffs, meaning there is one payoff for each player at the end of every possible play
A partition of the non-terminal nodes of the game tree into n+1 subsets, one for each (rational) player, and with a special subset for a fictitious player called Chance (or Nature). Each player's subset of nodes is referred to as the "nodes of the player". (A game of complete information thus has an empty set of Chance nodes.)
Each node of the Chance player has a probability distribution over its outgoing edges.
Each set of nodes of a rational player is further partitioned into information sets, which make certain choices indistinguishable for the player when making a move, in the sense that:
there is a one-to-one correspondence between outgoing edges of any two nodes of the same information set—thus the set of all outgoing edges of an information set is partitioned into equivalence classes, each class representing a possible choice for a player's move at some point—and
every (directed) path in the tree from the root to a terminal node can cross each information set at most once
the complete description of the game specified by the above parameters is common knowledge among the players
A play is thus a path through the tree from the root to a terminal node. At any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. At any rational player's node, the player must choose one of the equivalence classes for the edges, which determines precisely one outgoing edge except (in general) the player doesn't know which one is being followed. (An outside observer knowing every other player's choices up to that point, and the realization of Nature's moves, can determine the edge precisely.) A pure strategy for a player thus consists of a selection—choosing precisely one class of outgoing edges for every information set (of his). In a game of perfect information, the information sets are singletons. It's less evident how payoffs should be interpreted in games with Chance nodes. It is assumed that each player has a von Neumann–Morgenstern utility function defined for every game outcome; this assumption entails that every rational player will evaluate an a priori random outcome by its expected utility.
The above presentation, while precisely defining the mathematical structure over which the game is played, elides however the more technical discussion of formalizing statements about how the game is played like "a player cannot distinguish between nodes in the same information set when making a decision". These can be made precise using epistemic modal logic; see for details.
A perfect-information two-player game over a game tree (as defined in combinatorial game theory and artificial intelligence) can be represented as an extensive-form game with outcomes (i.e. win, lose, or draw). Examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectiminimax tree, like that of backgammon, has no imperfect information (all information sets are singletons) but has moves of chance. For example, poker has both moves of chance (the cards being dealt) and imperfect information (the cards secretly held by other players).
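As a minimal sketch of how such a tree is evaluated when all information sets are singletons, the code below runs expectiminimax over a toy tree containing a chance node; the tree, its probabilities, and its payoffs are invented purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                      # "max", "min", "chance", or "leaf"
    value: float = 0.0             # payoff at a leaf (to the maximising player)
    children: List["Node"] = field(default_factory=list)
    probs: List[float] = field(default_factory=list)  # used only by chance nodes

def expectiminimax(node: Node) -> float:
    if node.kind == "leaf":
        return node.value
    values = [expectiminimax(c) for c in node.children]
    if node.kind == "max":
        return max(values)
    if node.kind == "min":
        return min(values)
    # chance node: expected value over its outgoing edges
    return sum(p * v for p, v in zip(node.probs, values))

# A toy tree: the maximiser moves, then a chance move, then the minimiser.
tree = Node("max", children=[
    Node("chance", probs=[0.5, 0.5], children=[
        Node("min", children=[Node("leaf", value=3), Node("leaf", value=7)]),
        Node("leaf", value=1),
    ]),
    Node("leaf", value=2),
])
print(expectiminimax(tree))  # 0.5*min(3,7) + 0.5*1 = 2.0 -> max(2.0, 2) = 2.0
```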
Perfect and complete information
A complete extensive-form representation specifies:
the players of a game
for every player every opportunity they have to move
what each player can do at each of their moves
what each player knows for every move
the payoffs received by every player for every possible combination of moves
The game on the right has two players: 1 and 2. The numbers by every non-terminal node indicate to which player that decision node belongs. The numbers by every terminal node represent the payoffs to the players (e.g. 2,1 represents a payoff of 2 to player 1 and a payoff of 1 to player 2). The labels by every edge of the graph are the name of the action that edge represents.
The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U' and D' . The payoffs are as specified in the tree. There are four outcomes represented by the four terminal nodes of the tree: (U,U'), (U,D'), (D,U') and (D,D'). The payoffs associated with each outcome respectively are as follows (0,0), (2,1), (1,2) and (3,1).
If player 1 plays D, player 2 will play U' to maximise their payoff and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises their payoff by playing D' and player 1 receives 2. Player 1 prefers 2 to 1 and so will play U and player 2 will play D' . This is the subgame perfect equilibrium.
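This backward-induction argument can be written out mechanically. A minimal sketch in code, using exactly the four payoff pairs of this tree and the move names from the figure:

```python
# Payoffs (player 1, player 2) for each pair of moves in the tree above.
payoffs = {
    ("U", "U'"): (0, 0),
    ("U", "D'"): (2, 1),
    ("D", "U'"): (1, 2),
    ("D", "D'"): (3, 1),
}

# Step 1: at each of player 2's decision nodes, player 2 picks the reply
# maximising their own payoff, having observed player 1's move.
best_reply = {
    m1: max(("U'", "D'"), key=lambda m2: payoffs[(m1, m2)][1])
    for m1 in ("U", "D")
}

# Step 2: player 1 anticipates these replies and maximises their own payoff.
m1_star = max(("U", "D"), key=lambda m1: payoffs[(m1, best_reply[m1])][0])
m2_star = best_reply[m1_star]

print(best_reply)                   # {'U': "D'", 'D': "U'"}
print(m1_star, m2_star)             # U D'
print(payoffs[(m1_star, m2_star)])  # (2, 1)
```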
Imperfect information
An advantage of representing the game in this way is that it is clear what the order of play is. The tree shows clearly that player 1 moves first and player 2 observes this move. However, in some games play does not occur like this. One player does not always observe the choice of another (for example, moves may be simultaneous or a move may be hidden). An information set is a set of decision nodes such that:
Every node in the set belongs to one player.
When the game reaches the information set, the player who is about to move cannot differentiate between nodes within the information set; i.e. if the information set contains more than one node, the player to whom that set belongs does not know which node in the set has been reached.
In extensive form, an information set is indicated by a dotted line connecting all nodes in that set or sometimes by a loop drawn around all the nodes in that set.
If a game has an information set with more than one member that game is said to have imperfect information. A game with perfect information is such that at any stage of the game, every player knows exactly what has taken place earlier in the game; i.e. every information set is a singleton set. Any game without perfect information has imperfect information.
The game on the right is the same as the above game except that player 2 does not know what player 1 does when they come to play. The first game described has perfect information; the game on the right does not. If both players are rational and both know that both players are rational and everything that is known by any player is known to be known by every player (i.e. player 1 knows player 2 knows that player 1 is rational and player 2 knows this, etc. ad infinitum), play in the first game will be as follows: player 1 knows that if they play U, player 2 will play D' (because for player 2 a payoff of 1 is preferable to a payoff of 0) and so player 1 will receive 2. However, if player 1 plays D, player 2 will play U' (because to player 2 a payoff of 2 is better than a payoff of 1) and player 1 will receive 1. Hence, in the first game, the equilibrium will be (U, D' ) because player 1 prefers to receive 2 to 1 and so will play U and so player 2 will play D' .
In the second game it is less clear: player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into thinking they have played U when they have actually played D so that player 2 will play D' and player 1 will receive 3. In fact in the second game there is a perfect Bayesian equilibrium where player 1 plays D and player 2 plays U' and player 2 holds the belief that player 1 will definitely play D. In this equilibrium, every strategy is rational given the beliefs held and every belief is consistent with the strategies played. Notice how the imperfection of information changes the outcome of the game.
To more easily solve this game for the Nash equilibrium, it can be converted to the normal form. Given this is a simultaneous/sequential game, player one and player two each have two strategies.
Player 1's Strategies: {U , D}
Player 2's Strategies: {U’ , D’}
We will have a two by two matrix with a unique payoff for each combination of moves. Using the normal form game, it is now possible to solve the game and identify dominant strategies for both players.
If player 1 plays Up (U), player 2 prefers to play Down (D’) (Payoff 1>0)
If player 1 plays Down (D), player 2 prefers to play Up (U’) (Payoff 2>1)
If player 2 plays Up (U’), player 1 prefers to play Down (D) (Payoff 1>0)
If player 2 plays Down (D’), player 1 prefers to play Down (D) (3>2)
These preferences can be marked within the matrix, and any box where both players have a preference provides a Nash equilibrium. This particular game has a single solution of (D,U') with a payoff of (1,2).
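The same best-response check can be automated. A small sketch that enumerates the pure-strategy Nash equilibria of this 2×2 normal form, using the payoffs read off the tree:

```python
from itertools import product

# payoffs[(s1, s2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("U", "U'"): (0, 0),
    ("U", "D'"): (2, 1),
    ("D", "U'"): (1, 2),
    ("D", "D'"): (3, 1),
}
S1, S2 = ("U", "D"), ("U'", "D'")

def is_nash(s1, s2):
    u1, u2 = payoffs[(s1, s2)]
    no_dev_1 = all(payoffs[(t1, s2)][0] <= u1 for t1 in S1)  # player 1 cannot gain by deviating
    no_dev_2 = all(payoffs[(s1, t2)][1] <= u2 for t2 in S2)  # player 2 cannot gain by deviating
    return no_dev_1 and no_dev_2

equilibria = [(s1, s2) for s1, s2 in product(S1, S2) if is_nash(s1, s2)]
print(equilibria)            # [('D', "U'")]
print(payoffs[("D", "U'")])  # (1, 2)
```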
In games with infinite action spaces and imperfect information, non-singleton information sets are represented, if necessary, by inserting a dotted line connecting the (non-nodal) endpoints behind the arc described above or by dashing the arc itself. In the Stackelberg competition described above, if the second player had not observed the first player's move the game would no longer fit the Stackelberg model; it would be Cournot competition.
Incomplete information
It may be the case that a player does not know exactly what the payoffs of the game are or of what type their opponents are. This sort of game has incomplete information. In extensive form it is represented as a game with complete but imperfect information using the so-called Harsanyi transformation. This transformation introduces to the game the notion of nature's choice or God's choice. Consider a game consisting of an employer considering whether to hire a job applicant. The job applicant's ability might be one of two things: high or low. Their ability level is random; they either have low ability with probability 1/3 or high ability with probability 2/3. In this case, it is convenient to model nature as another player of sorts who chooses the applicant's ability according to those probabilities. Nature however does not have any payoffs. Nature's choice is represented in the game tree by a non-filled node. Edges coming from a nature's choice node are labelled with the probability of the event it represents occurring.
The game on the left is one of complete information (all the players and payoffs are known to everyone) but of imperfect information (the employer doesn't know what nature's move was.) The initial node is in the centre and it is not filled, so nature moves first. Nature selects with the same probability the type of player 1 (which in this game is tantamount to selecting the payoffs in the subgame played), either t1 or t2. Player 1 has distinct information sets for these; i.e. player 1 knows what type they are (this need not be the case). However, player 2 does not observe nature's choice. They do not know the type of player 1; however, in this game they do observe player 1's actions; i.e. there is perfect information. Indeed, it is now appropriate to alter the above definition of complete information: at every stage in the game, every player knows what has been played by the other players. In the case of private information, every player knows what has been played by nature. Information sets are represented as before by broken lines.
In this game, if nature selects t1 as player 1's type, the game played will be like the very first game described, except that player 2 does not know it (and the very fact that this cuts through their information sets disqualify it from subgame status). There is one separating perfect Bayesian equilibrium; i.e. an equilibrium in which different types do different things.
If both types play the same action (pooling), an equilibrium cannot be sustained. If both play D, player 2 can only form the belief that they are on either node in the information set with probability 1/2 (because this is the chance of seeing either type). Player 2 maximises their payoff by playing D' . However, if they play D' , type 2 would prefer to play U. This cannot be an equilibrium. If both types play U, player 2 again forms the belief that they are at either node with probability 1/2. In this case player 2 plays D' , but then type 1 prefers to play D.
If type 1 plays U and type 2 plays D, player 2 will play D' whatever action they observe, but then type 1 prefers D. The only equilibrium hence is with type 1 playing D, type 2 playing U and player 2 playing U' if they observe D and randomising if they observe U. Through their actions, player 1 has signalled their type to player 2.
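The beliefs invoked in these pooling and separating arguments come from Bayes' rule. A small illustrative sketch: the type labels, the uniform prior from nature's move, and the candidate strategy profiles follow the discussion above, while the helper function itself is hypothetical.

```python
def posterior(prior, play_prob, observed):
    """Player 2's belief over player 1's types after observing an action.

    prior:     dict type -> prior probability (nature's move)
    play_prob: dict type -> dict action -> probability that this type plays the action
    observed:  the action player 2 observes
    """
    joint = {t: prior[t] * play_prob[t].get(observed, 0.0) for t in prior}
    total = sum(joint.values())
    if total == 0:                  # off-path action: Bayes' rule places no restriction
        return None
    return {t: p / total for t, p in joint.items()}

prior = {"t1": 0.5, "t2": 0.5}

# Pooling on D: both types play D, so seeing D tells player 2 nothing.
pooling = {"t1": {"D": 1.0}, "t2": {"D": 1.0}}
print(posterior(prior, pooling, "D"))      # {'t1': 0.5, 't2': 0.5}

# Separating profile from the text: type 1 plays D, type 2 plays U.
separating = {"t1": {"D": 1.0}, "t2": {"U": 1.0}}
print(posterior(prior, separating, "D"))   # {'t1': 1.0, 't2': 0.0}
print(posterior(prior, separating, "U"))   # {'t1': 0.0, 't2': 1.0}
```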
Formal definition
Formally, a finite game in extensive form is a structure $\Gamma = \langle \mathcal{K}, \mathbf{H}, \mathbf{A}, a, \rho, u \rangle$
where:
$\mathcal{K} = \langle V, v^0, T, p \rangle$ is a finite tree with a set of nodes $V$, a unique initial node $v^0 \in V$, a set of terminal nodes $T \subset V$ (let $D = V \setminus T$ be the set of decision nodes) and an immediate predecessor function $p : V \setminus \{v^0\} \to D$ on which the rules of the game are represented,
$\mathbf{H}$ is a partition of $D$ called an information partition,
$\mathbf{A}(H)$ is a set of actions available for each information set $H \in \mathbf{H}$, which forms a partition of the set of all actions $\mathbf{A} = \bigcup_{H \in \mathbf{H}} \mathbf{A}(H)$.
$a : V \setminus \{v^0\} \to \mathbf{A}$ is an action partition associating each node $v$ to a single action $a(v)$, fulfilling:
for every $H \in \mathbf{H}$ and every $v \in H$, the restriction $a_v : s(v) \to \mathbf{A}(H)$ of $a$ to $s(v)$ is a bijection, with $s(v)$ the set of successor nodes of $v$.
$I = \{0, 1, \ldots, n\}$ is a finite set of players, $0$ is (a special player called) nature, and $\iota : \mathbf{H} \to I$ is a player partition of the information sets. Let $\iota(v) = \iota(H)$ be the single player that makes a move at node $v \in H$.
$\rho : H_0 \times \mathbf{A} \to [0, 1]$ is a family of probabilities of the actions of nature (with $H_0$ the information sets of nature), and
$u : T \to \mathbb{R}^{I}$ is a payoff profile function.
Infinite action space
It may be that a player has an infinite number of possible actions to choose from at a particular decision node. The device used to represent this is an arc joining two edges protruding from the decision node in question. If the action space is a continuum between two numbers, the lower and upper delimiting numbers are placed at the bottom and top of the arc respectively, usually with a variable that is used to express the payoffs. The infinite number of decision nodes that could result are represented by a single node placed in the centre of the arc. A similar device is used to represent action spaces that, whilst not infinite, are large enough to prove impractical to represent with an edge for each action.
The tree on the left represents such a game, either with infinite action spaces (any real number between 0 and 5000) or with very large action spaces (perhaps any integer between 0 and 5000). This would be specified elsewhere. Here, it will be supposed that it is the former and, for concreteness, it will be supposed it represents two firms engaged in Stackelberg competition. The payoffs to the firms are represented on the left, with q1 and q2 as the strategies they adopt and c1 and c2 as some constants (here marginal costs to each firm). The subgame perfect Nash equilibria of this game can be found by taking the first partial derivative of each payoff function with respect to the follower's (firm 2) strategy variable (q2) and finding its best response function, q2*(q1). The same process can be done for the leader except that in calculating its profit, it knows that firm 2 will play the above response, and so this can be substituted into its maximisation problem. It can then solve for q1 by taking the first derivative, yielding the leader's equilibrium quantity q1*. Feeding this into firm 2's best response function gives q2*, and (q1*, q2*) is the subgame perfect Nash equilibrium.
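A sketch of the computation just described using symbolic differentiation. Since the payoff expressions from the figure are not reproduced in the text, the code assumes a linear inverse demand P = 5000 − q1 − q2 (the 5000 matching the delimiters on the arc) and constant marginal costs c1 and c2, so that firm i earns q_i·(5000 − q1 − q2 − c_i); the resulting formulas hold only under that assumption.

```python
import sympy as sp

q1, q2, c1, c2 = sp.symbols("q1 q2 c1 c2", positive=True)

# Assumed payoffs: pi_i = q_i * (5000 - q1 - q2 - c_i)  (linear inverse demand)
pi1 = q1 * (5000 - q1 - q2 - c1)
pi2 = q2 * (5000 - q1 - q2 - c2)

# Follower: best response of firm 2 to any q1 (first-order condition of pi2 in q2)
q2_br = sp.solve(sp.diff(pi2, q2), q2)[0]

# Leader: substitute the follower's reaction into firm 1's profit and maximise in q1
q1_star = sp.solve(sp.diff(pi1.subs(q2, q2_br), q1), q1)[0]
q2_star = sp.simplify(q2_br.subs(q1, q1_star))

print(sp.simplify(q1_star))  # equals (5000 - 2*c1 + c2)/2 under the assumed demand
print(q2_star)               # equals (5000 + 2*c1 - 3*c2)/4 under the assumed demand
```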
See also
Axiom of determinacy
Perfect information
Combinatorial game theory
Self-confirming equilibrium
Sequential game
Signalling
Solution concept
References
Dresher M. (1961). The mathematics of games of strategy: theory and applications (Ch4: Games in extensive form, pp74–78). Rand Corp.
Fudenberg D and Tirole J. (1991) Game theory (Ch3 Extensive form games, pp67–106). MIT press.
. An 88-page mathematical introduction; see Chapters 4 and 5. Free online at many universities.
Luce R.D. and Raiffa H. (1957). Games and decisions: introduction and critical survey. (Ch3: Extensive and Normal Forms, pp39–55). Wiley New York.
Osborne MJ and Rubinstein A. 1994. A course in game theory (Ch6 Extensive game with perfect information, pp. 89–115). MIT press.
. A comprehensive reference from a computational perspective; see Chapter 5. Downloadable free online.
Further reading
, 6.1, "Disasters in Game Theory" and 7.2 "Measurability (The Axiom of Determinateness)", discusses problems in extending the finite-case definition to infinite number of options (or moves)
Historical papers
contains Kuhn's lectures at Princeton from 1952 (officially unpublished previously, but in circulation as photocopies)
Game theory game classes | Extensive-form game | [
"Mathematics"
] | 3,891 | [
"Game theory game classes",
"Game theory"
] |
1,602,534 | https://en.wikipedia.org/wiki/New%20Caledonian%20barrier%20reef | The New Caledonian barrier reef is a barrier reef located in New Caledonia in the South Pacific, being the longest continuous barrier reef in the world and the third largest after the Great Barrier Reef of Australia and the Mesoamerican Barrier Reef.
The New Caledonian barrier reef surrounds Grande Terre, New Caledonia's largest island, as well as the Ile des Pins and several smaller islands, reaching a length of . The reef encloses a lagoon of , which has an average depth of . The reefs lie up to from the shore, but extend almost to the Entrecasteaux reefs in the northwest. This northwestern extension encloses the Belep Islands and other sand cays. Several natural passages open out to the ocean. The Boulari passage, which leads to Nouméa, the capital and chief port of New Caledonia, is marked by the Amédée lighthouse.
In 2008, the barrier reef and its enclosing lagoon was inscribed on the UNESCO World Heritage List for its outstanding beauty, its unique geography as a reef entirely encircling Grande Terre, and its exceptional marine diversity (in particular its coral diversity).
Ecology
The reef systems of New Caledonia are considered to be the second largest in the world after the Great Barrier Reef of Australia; the barrier reef is the longest continuous barrier reef in the world, with a length of 1,600 km, and its lagoon, with an area of 24,000 square kilometers, is the largest in the world. This ecosystem hosts, along with Fiji, the world's most diverse concentration of reef structures (146 types based on a global classification system), and they equal or even surpass the much larger Great Barrier Reef in coral and fish diversity.
The reef has great species diversity with a high level of endemism. In total, 2,328 fish species belonging to 248 families have been observed on the reef. In addition, the reef is home to the third-largest population of endangered dugongs (Dugong dugon) on Earth, and is an important nesting site for green sea turtles (Chelonia mydas).
In the lagoons of New Caledonia there are many other marine species, including over 2000 species of molluscs and a thriving population of humpback whales.
Environmental threats
Most of the reefs are generally thought to be in good health. Some of the eastern reefs have been damaged by effluent from nickel mining on Grande Terre. Sedimentation from mining, agriculture, and grazing has affected reefs near river mouths, which has been worsened by the destruction of mangrove forests, which help to retain sediment. Some reefs have been buried under several metres of silt. In 2008, an assessment of northwest near-shore reefs concluded that many would be dead within years, and at best decades, if present trends relating to mining sediment and silt run-off continued.
In January 2002, the French government proposed listing New Caledonia's reefs as a UNESCO World Heritage Site. UNESCO listed New Caledonia barrier reef on the World Heritage List under the name The Lagoons of New Caledonia: Reef Diversity and Associated Ecosystems on 7 July 2008.
There are 13 local management committees, composed of tourist operators, fishermen, politicians and chiefs of local tribes which work with the community to monitor the health of the lagoons.
Human use
Scuba diving is common, with several dive sites in the lagoon and around the reef. These include the Prony needle, the Shark Pit and the Cathedral.
See also
List of reefs
References
External links
New Caledonia Barrier Reef (World Wildlife Fund)
New Caledonia reefs (French Coral Reef Society)
Biodiversity and Nickel Mining in Kanaky/New Caledonia (Mines and Communities Website)
Landforms of New Caledonia
Marine ecoregions
Coral reefs
World Heritage Sites in Oceania
World Heritage Sites in France
Reefs of Oceania
Ecoregions of New Caledonia
Central Indo-Pacific | New Caledonian barrier reef | [
"Biology"
] | 763 | [
"Biogeomorphology",
"Coral reefs"
] |
1,602,970 | https://en.wikipedia.org/wiki/Information%20set%20%28game%20theory%29 | The information set is the basis for decision making in a game: it reflects the actions available to each player and the payoffs of each action. The information set is an important concept in games of imperfect information. In game theory, an information set represents all the possible points (or decision nodes) in a game that a given player might be at during their turn, based on their current knowledge and observations. These nodes are indistinguishable to the player because of incomplete information about previous actions or the state of the game; an information set therefore groups together all decision points where the player, given what they know, cannot tell which specific point they are currently at. For a better idea of decision vertices, refer to Figure 1. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game, since each player knows the exact mix of chance moves and player strategies up to the current point in the game. Otherwise, some players cannot be sure what the game state is; for instance, they may not know exactly what happened in the past or what should be done right now.
Information sets are used in extensive form games and are often depicted in game trees. Game trees show the path from the start of a game and the subsequent paths that can be taken depending on each player's next move. In games of imperfect information there is hidden information: each player does not have complete knowledge of the opponent's information, such as cards that have not been revealed in a poker game. In such games it can be challenging for a player to determine their exact location within the tree solely from their own knowledge and observations, because they may lack complete information about the actions or strategies of their opponents. As a result, a player may only be certain that they are at one of several possible nodes. The collection of these indistinguishable nodes at a given point is called the 'information set'. Information sets are depicted in game trees, typically using dotted lines, circles, or simply labels on the vertices, to show which decision nodes a particular player cannot distinguish at the current stage of the game, as shown in Figure 1.
More specifically, in the extensive form, an information set is a set of decision nodes such that:
Every node in the set belongs to one player.
When the game reaches the information set, the player with the move cannot differentiate between nodes within the information set; i.e. if an information set contains multiple nodes, the player who moves at that information set does not know which of its nodes has been reached.
Games in extensive form often involve each player making multiple moves, which results in the formation of multiple information sets as well. A player makes a choice at each of these vertices based on the options in the corresponding information set. The complete specification of these choices is the player's strategy, and it determines the player's path from the start of the game to the end, which is also known as the play of the game. If no chance moves are involved, the play, and hence the outcome, is fully determined by the players' strategies. Not all plays are determined by strategy alone, however, as games can also involve chance moves. When chance moves are involved, a vector of strategies results in a probability distribution over the possible outcomes of the game, since the chance moves are likely to differ each time; depending on the strength of the strategies, some outcomes have higher probabilities than others.
Assuming that there are multiple information sets in a game, the game transforms from a static game into a dynamic game. The key to solving a dynamic game is to work out each player's information sets and the decisions made at each stage. For example, when player A chooses first, player B will make the best decision available given A's choice. Player A, in turn, can predict B's reaction and make a choice in his own favour. The notion of an information set was introduced by John von Neumann, motivated by his study of the game of poker.
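The grouping of indistinguishable decision nodes can be made concrete with a small data structure. The following Python sketch is illustrative only (the class names and fields are not from any standard game-theory library): it represents decision nodes of an extensive-form game and collects player 2's two nodes into a single information set, mirroring the dotted-line notation used in game trees.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A decision node in an extensive-form game."""
    player: int       # the player who moves at this node
    history: tuple    # actions taken so far to reach this node
    actions: tuple    # actions available at this node

@dataclass
class InformationSet:
    """A set of nodes a player cannot distinguish between when moving."""
    player: int
    nodes: list = field(default_factory=list)

    def add(self, node: Node) -> None:
        # Every node in the set must belong to the same player and offer the
        # same actions; otherwise the player could tell the nodes apart just
        # by looking at the moves available to them.
        assert node.player == self.player
        assert all(n.actions == node.actions for n in self.nodes)
        self.nodes.append(node)

# Player 1 moves first (opera or football); player 2 then moves without
# observing that choice, so player 2's two decision nodes form one set.
after_opera = Node(player=2, history=("opera",), actions=("opera", "football"))
after_football = Node(player=2, history=("football",), actions=("opera", "football"))

h2 = InformationSet(player=2)
h2.add(after_opera)
h2.add(after_football)
print(len(h2.nodes))  # 2 -- player 2 cannot tell which history has occurred
```

With perfect information each such set would contain exactly one node.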
Example
At the right are two versions of the battle of the sexes game, shown in extensive form. Below, the normal form for both of these games is shown as well.
The first game is simply sequential: when player 2 makes a choice, both parties are already aware of whether player 1 has chosen opera or football.
The second game is also sequential, but the dotted line shows player 2's information set. This is the common way to show that when player 2 moves, he or she is not aware of what player 1 did.
This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand. They know that they can safely choose opera because once player 2 knows that player 1 has chosen opera, player 2 would rather go along for opera and get 2 than choose football and get 0. Formally, that's applying subgame perfection to solve the game.
In the second game, player 2 can't observe what player 1 did, so it might as well be a simultaneous game. So subgame perfection doesn't get us anything that Nash equilibrium can't get us, and we have the standard 3 possible equilibria:
Both choose opera
both choose football
or both use a mixed strategy, with player 1 choosing O(pera) 3/5 of the time, and player 2 choosing O(pera) 2/5 of the time (a numerical check of this mixed equilibrium is sketched below)
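The check below is a minimal Python sketch. The payoff numbers (3,2) when both choose opera, (2,3) when both choose football and (0,0) otherwise are the standard battle-of-the-sexes values consistent with the payoffs of 2 and 0 quoted above; they are an assumption, since the original figures are not reproduced here.

```python
from fractions import Fraction as F

# Payoffs indexed by (player 1's action, player 2's action); assumed standard
# battle-of-the-sexes numbers: (3,2) on (O,O), (2,3) on (F,F), (0,0) otherwise.
U1 = {("O", "O"): 3, ("O", "F"): 0, ("F", "O"): 0, ("F", "F"): 2}
U2 = {("O", "O"): 2, ("O", "F"): 0, ("F", "O"): 0, ("F", "F"): 3}

p1_opera = F(3, 5)  # candidate mixing probability of player 1 on opera
p2_opera = F(2, 5)  # candidate mixing probability of player 2 on opera

def expected(U, own, opp_opera, own_is_player1):
    """Expected payoff of the pure action `own` against an opponent
    who plays opera with probability `opp_opera`."""
    total = F(0)
    for opp, prob in (("O", opp_opera), ("F", 1 - opp_opera)):
        key = (own, opp) if own_is_player1 else (opp, own)
        total += prob * U[key]
    return total

# In a mixed equilibrium each player is indifferent between their pure actions.
print(expected(U1, "O", p2_opera, True), expected(U1, "F", p2_opera, True))    # 6/5 6/5
print(expected(U2, "O", p1_opera, False), expected(U2, "F", p1_opera, False))  # 6/5 6/5
```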
See also
Self-confirming equilibrium
References
Game theory | Information set (game theory) | [
"Mathematics"
] | 1,150 | [
"Game theory"
] |
1,603,001 | https://en.wikipedia.org/wiki/Complete%20information | In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions (including risk aversion), payoffs, strategies and "types" of players are thus common knowledge. Complete information is the concept that each player in the game is aware of the sequence, strategies, and payoffs throughout gameplay. Given this information, the players have the ability to plan accordingly based on the information to maximize their own strategies and utility at the end of the game. A typical example is the prisoner's dilemma.
Inversely, in a game with incomplete information, players do not possess full information about their opponents. Some players possess private information, a fact that the others should take into account when forming expectations about how those players will behave. A typical example is an auction: each player knows their own utility function (valuation for the item), but does not know the utility function of the other players.
Applications
Games of incomplete information arise frequently in social science. For instance, John Harsanyi was motivated by consideration of arms control negotiations, where the players may be uncertain both of the capabilities of their opponents and of their desires and beliefs.
It is often assumed that the players have some statistical information about the other players, e.g. in an auction, each player knows that the valuations of the other players are drawn from some probability distribution. In this case, the game is called a Bayesian game.
In games that have a varying degree of complete information and game type, there are different methods available to the player to solve the game based on this information. In games with static, complete information, the approach to solve is to use Nash equilibrium to find viable strategies. In dynamic games with complete information, backward induction is the solution concept, which eliminates non-credible threats as potential strategies for players.
A classic example of a dynamic game with complete information is Stackelberg's (1934) sequential-move version of Cournot duopoly. Other examples include Leontief's (1946) monopoly-union model and Rubinstein's bargaining model.
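The backward-induction logic of Stackelberg's sequential-move duopoly can be reproduced symbolically. The sketch below assumes, purely for illustration, a linear inverse demand P = a - (q1 + q2) and a common constant marginal cost c (neither functional form is given in the text), and it assumes the sympy library is available.

```python
import sympy as sp

a, c, q1, q2 = sp.symbols("a c q1 q2", positive=True)

# Profits under the assumed linear inverse demand and constant marginal cost.
profit1 = (a - q1 - q2 - c) * q1   # leader (moves first)
profit2 = (a - q1 - q2 - c) * q2   # follower (observes q1, then moves)

# Step 1 (backward induction): the follower best-responds to any observed q1.
br2 = sp.solve(sp.diff(profit2, q2), q2)[0]             # (a - c - q1)/2

# Step 2: the leader anticipates that reaction and maximizes its own profit.
leader_profit = profit1.subs(q2, br2)
q1_star = sp.solve(sp.diff(leader_profit, q1), q1)[0]   # (a - c)/2
q2_star = sp.simplify(br2.subs(q1, q1_star))            # (a - c)/4

print(q1_star, q2_star)
```

Under these assumptions the leader produces twice the follower's quantity, the familiar first-mover advantage of the Stackelberg outcome.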
Lastly, when complete information is unavailable (incomplete information games), these solutions turn towards Bayesian Nash equilibria, since games with incomplete information become Bayesian games. In a game of complete information, the players' payoff functions are common knowledge, whereas in a game of incomplete information at least one player is uncertain about another player's payoff function.
Extensive form
The extensive form can be used to visualize the concept of complete information. By definition, players know where they are as depicted by the nodes, and the final outcomes as illustrated by the utility payoffs. The players also understand the potential strategies of each player and as a result their own best course of action to maximize their payoffs.
Complete versus perfect information
Complete information is importantly different from perfect information.
In a game of complete information, the structure of the game and the payoff functions of the players are commonly known but players may not see all of the moves made by other players (for instance, the initial placement of ships in Battleship); there may also be a chance element (as in most card games). Conversely, in games of perfect information, every player observes other players' moves, but may lack some information on others' payoffs, or on the structure of the game. A game with complete information may or may not have perfect information, and vice versa.
Examples of games with imperfect but complete information are card games, where each player's cards are hidden from other players but objectives are known, as in contract bridge and poker, if the outcomes are assumed to be binary (players can only win or lose in a zero-sum game). Games with complete information generally require one player to outwit the other by forcing them to make risky assumptions.
Examples of games with incomplete but perfect information are conceptually more difficult to imagine, such as a Bayesian game. A game of chess is a commonly given example to illustrate how the lack of certain information influences the game, without chess itself being such a game. One can readily observe all of the opponent's moves and viable strategies available to them but never ascertain which one the opponent is following until this might prove disastrous for one. Games with perfect information generally require one player to outwit the other by making them misinterpret one's decisions.
See also
Bayesian game
Handicap principle
Market impact
Screening game
Signaling game
Small talk
Trash-talk
References
Bibliography
Watson, J. (2015) Strategy: An Introduction to Game Theory. Volume 139. New York, WW Norton
Fudenberg, D. and Tirole, J. (1993) Game Theory. MIT Press. (see Chapter 6, sect 1)
Gibbons, R. (1992) A primer in game theory. Harvester-Wheatsheaf. (see Chapter 3)
Ian Frank, David Basin (1997), Artificial Intelligence 100 (1998) 87-123. "Search in games with incomplete information: a case study using Bridge card play".
Game theory
Perfect competition | Complete information | [
"Mathematics"
] | 1,044 | [
"Game theory"
] |
1,603,024 | https://en.wikipedia.org/wiki/Marie-Jean-L%C3%A9on%2C%20Marquis%20d%27Hervey%20de%20Saint%20Denys | Marie-Jean-Léon Lecoq, Baron d'Hervey de Juchereau, Marquis d'Hervey de Saint-Denys (6 May 1822 – 2 November 1892), son of Pierre Marin Alexandre Le Coq or Lecoq, Baron d'Hervey (1780-1858), and Marie Louise Josephine Mélanie Juchereau de Saint-Denys (1789-1844), was a French sinologist also known for his research on dreams.
Contributions to Sinology
Hervey de Saint-Denys made an intense study of Chinese, and in 1851 he published his Recherches sur l'agriculture et l'horticulture des Chinois (Transl: Research on the agriculture and horticulture of the Chinese), in which he dealt with the plants and animals that might potentially be acclimatized to and introduced in Western countries. He also translated Chinese texts as well as some Chinese stories that are not of classical interest but are valuable for the light they throw on Chinese culture and customs.
He was a man of letters too: for example, he translated some Spanish-language works and wrote a history of Spanish drama.
D'Hervey also created a literary translation theory, paraphrased by Joshua A. Fogel, the author of a book review on De l'un au multiple: Traductions du chinois vers les langues européenes, as "empowering the translator to use his own creative talents to embellish wherever necessary—not a completely free hand, but some leeway to avoid the pitfall of becoming too leaden."
By adoption by his uncle Amédée Louis Vincent Juchereau (1782-1858) he became Marquis de Saint-Denys.
At the Paris Exhibition of 1867, Hervey de Saint-Denys acted as commissioner for the Chinese exhibits and was decorated with the Legion of Honour. On 11 June 1868 the Marquis married the 19-year-old Austrian orphan Louise de Ward.
In 1874 he succeeded Stanislas Julien in the chair of Chinese at the Collège de France, while in 1878 he was elected a member of the Académie des Inscriptions et de Belles-Lettres. D'Hervey died in his hotel at Paris on 2 November 1892.
Contributions to Oneirology
More recently Hervey de Saint-Denys has become known for his introspective studies on dreams. D'Hervey was also one of the earliest oneirologists (specialists in the study of dreams), and is nowadays regarded as "The Father" of modern lucid dreaming. In 1867 a book entitled Les rêves et les moyens de les diriger; observations pratiques (Translation: Dreams and the Ways to Direct Them: Practical Observations) appeared as an anonymous publication. In a footnote on page 1 of the 1878 edition of Alfred Maury's work Le sommeil et les rêves, D'Hervey de Saint-Denys was identified as its author. Writers such as Havelock Ellis (1911), Johann Stärcke (1912) and A. Breton (1955) mention that the original anonymous publication was hard to obtain, as copies were scarce because the publisher, Amyot, went bankrupt shortly after publication. Sigmund Freud (Die Traumdeutung. Wien: Deuticke, 1900), for example, states: "Maury, le sommeil et les rêves, Paris, 1878, p. 19, polemisiert lebhaft gegen d'Hervey, dessen Schrift ich mir trotz aller Bemühung nicht verschaffen konnte" (Transl.: Maury, Sleep and Dreams, Paris, 1878, p. 19, argues strenuously against d'Hervey, whose book I could not lay hands on in spite of all my efforts).
D'Hervey started recording his dreams on a daily basis from the age of 13 (on page 4 of his work Les Rêves et les Moyens de Les Diriger the author stated that he was in his fourteenth year when he started his dreamwork). In this book, the author proposed a theoretical framework and techniques to control dreams, and he described dreams in which the "dreamer is perfectly aware he is dreaming". Recently the question has been raised of who first coined the term 'lucid dreaming'. It is generally attributed to Frederik van Eeden, but some scientists question whether his use of the term was inspired by Saint-Denys, who describes his own lucid dreams with sentences like 'I was aware of my situation'. It is erroneous, however, to state that Saint-Denys's book deals mainly with lucid dreams: it is focused on the development of dreams in general, not specifically on lucid dreams.
It is only in recent years that Saint-Denys was rediscovered for his oneirology work. In an article, Den Blanken & Meijer noted how little biographical information was available on such an erudite person as Saint-Denys, and presented some.
In 1964 the publisher Tchou reprinted Les Rêves Et Les Moyens de Les Diriger, but the 1867 appendix, entitled 'Un rêve après avoir pris du hatchich' (Transl.: A dream after I took hashish), had, because of its contents, been left out without any indication. In 1982 an abbreviated English edition appeared, which was based on the Tchou edition, and consequently it did not contain the appendix either, nor did it refer to it. The Den Blanken & Meijer article revealed this fact, and the authors presented the first English translation of the appendix. Stimulated by the above, the French dream society 'Oniros' held a commemoration of Saint-Denys in Paris in 1992. Leading dream specialists Carolus den Blanken, Celia Green, Paul Tholey (1937-1998) and Oniros president-elect Roger Ripert paid their respects. In 1995 Oniros published an integral French version of Saint-Denys' book on dreams, and Italian, Dutch and Japanese translations appeared. Recently several French editions of Les Rêves have been published; it is not always evident whether these (e)books are integral versions or based on the 1964 Tchou edition.
In 2016 an integral English version (including the original front page, back cover and frontispiece) appeared as a free-of-charge E-book with the title 'Dreams and the Ways to Direct Them: Practical Observations', edited by Drs. Carolus den Blanken & Drs. Eli Meijer. In this translation, the designer of the front cover of the 1867 original is revealed: Henri Alfred Darjou (1832-1875), French painter and draughtsman. This edition was not without flaws, and an enhanced version appeared in 2020.
Bibliography
Sinology
Hervey de Saint-Denys (1850). Recherches sur l'agriculture et l'horticulture des Chinois et sur les végétaux, les animaux et les procédés agricoles que l'on pourrait introduire... dans l'Europe occidentale et le nord de l'Afrique. (Studies on the agriculture and horticulture of the Chinese). Allouard et Kaeppelin. Paris.text on line
Hervey de Saint-Denys (1859). La Chine devant l’Europe. Amyot/Paris.text on line
Hervey de Saint-Denys (1862). Poésies de l'époque des T'ang. Étude sur l’art poétique en Chine (Poems of the Tang dynasty). Paris: Amyot.
Hervey de Saint-Denys (1869). Recueil de textes faciles et gradués en chinois moderne, avec un tableau des 214 clefs chinoises et un vocabulaire de tous les mots compris dans les exercices, publié à l'usage des élèves de l'École spéciale des langues orientales.
Hervey de Saint-Denys (1870). Le Li-sao, poéme du IIIe siècle avant notre ére, traduit du chinois (The Li Sao, a poem of the 3rd century BC, translated from Chinese). Paris: Maisonneuve.
Hervey de Saint-Denys (1872). Mémoire sur l'histoire ancienne du Japon d'après le Ouen Hien Tong Kao de Ma-Touan-Lin. Imprimerie Nationale. Paris.Text on line
Hervey de Saint-Denys (1873). Mémoire sur l'ethnographie de la Chine centrale et méridionale, d'après un ensemble de documents inédits tirés des anciens écrivains chinois. In-8°, paginé 109–134. Extrait des "Mémoires de la Société d'ethnographie". XII. text on line
Hervey de Saint-Denys (1873-1880). Ban Zai Sau, pour servir à la connaissance de l'Extrême-Orient, 4 vol.
Hervey de Saint-Denys (1875). Sur le pays connu des anciens Chinois sous le nom de Fou-sang, et de quelques documents inédits pouvant servir à l'identifier. Comptes rendus des séances de l'Académie des Inscriptions et Belles-Lettres, Vol. 19, Issue 4, pages 319–335. Text on line
Hervey de Saint-Denys (1876). Mémoire sur le pays connu sous le nom de Fou-Sang.
Hervey de Saint-Denys (1876–1883). Ethnographie des peuples étrangers de la Chine (Ethnography of people abroad in China), translated from Ma Duanlin.H. Georg, 2 Vol.4. Paris. - London H. Georg. - E. Leroux. - Trübner. text on line
Hervey de Saint-Denys (1879). Sur une notice de M. August Strindberg concernant les relations de la Suède avec la Chine et les pays tartares, depuis le milieu du XVIIe siècle jusqu'à nos jours. Comptes rendus des séances de l'Académie des Inscriptions et Belles-Lettres, Vol 23, Issue 2, pages 137–140.
Hervey de Saint-Denys (1885). Trois nouvelles chinoises. Translation of selections from Jingu qiguan 今古奇觀.Ernest Leroux éditeur, « Bibliothèque Orientale Elzévirienne », vol. XLV, Paris. Text on line
Hervey de Saint-Denys (1886). L’Annam et la Cochinchine. Imprimerie Nationale. Paris.
Hervey de Saint-Denys (1887). Mémoires sur les doctrines religieuses; de Confucius et de l'école des lettres (Dissertations on religious doctrines; from Confucius to the school of letters).
Hervey de Saint-Denys (1889). La tunique de perles. Une serviteur méritant et Tant le Kiaï-Youen, trois nouvelles chinoises. E. Dentu, Paris. Reprint in Six nouvelles chinoises, Éditions Bleu de Chine, Paris, 1999.
Hervey de Saint-Denys (1892). Six nouvelles nouvelles, traduites pour la première fois du chinois par le Marquis d’Hervey-Saint-Denys. Éditions J. Maisonneuve, Collection Les Littératures Populaires, t. XXX, Paris. Reprint in Six nouvelles chinoises, Éditions Bleu de Chine, Paris, 1999.
Hervey de Saint-Denys (2004). Écoutez là-bas, sous les rayons de la lune, traduzione di Li Bai e note del marchese d'Hervey Saint-Denis, Redaction Céline Pillon.
Oneirology
Hervey de Saint-Denys (1867). Les Rêves et les moyens de les diriger; Observations pratiques. (Transl.: Dream and the Ways to Direct Them: Practical Observations). Paris: Librairie d'Amyot, Éditeur, 8, Rue de la Paix.(Originally published anonymous). Text on line
Henri Cordier (1892). Necrologie: Le Marquis d'Hervey Saint Denys . T'oung Pao- International Journal of Chinese Studies. Vol. 3 No. 5, pag. 517–520. Publisher E.J. Brill/Leiden/The Netherlands.Text on line
Alexandre Bertrand (1892). Annonce du décès de M. le marquis Léon d'Hervey de Saint-Denys, membre de l'Académie.(Transl.: Announcement of the death of Marquis d'Hervey de Saint-Denys, member of the Academie). Comptes rendus des séances de l'Académie des Inscriptions et Belles-Lettres, Vol. 36, Issue 6, page 377.Text on line
Alexandre Bertrand (1892). Paroles prononcées par le Président de l'Académie à l'occasion de la mort de M. le marquis d'Hervey-Saint-Denys. (Transl.: Words spoken by the president of the Academy on the occasion of the death of marquis d'Hervey de Saint-Denys). Comptes rendus des séances de l'Académie des Inscriptions et Belles-Lettres, Vol. 36, Issue 6, pages 392–397. Text on line
Hervey de Saint-Denys (1964). Les Rêves et les moyens de les diriger. Paris: Tchou/Bibliothèque du Merveilleux.Preface by Robert Desoille. Edited by Jacques Donnars. This edition does not contain 'The Appendix' from the 1867-book. Text on line
B. Schwartz (1972). Hervey de Saint-Denys: Sa vie, ses recherches et ses découvertes sur le sommeil et les reves. Hommage à l'occasion du 150e anniversaire de sa naissance (Transl.: Hervey de Saint-Denys: His life, his investigations and his discoveries about the sleep and the dreams. Tribute on the 150th anniversary of his birth). Revue d'Electroencéphalographie et de Neurophysiologie Clinique, Vol 2, Issue 2, April–June 1972, pages 131–139.
Hervey de Saint-Denys (1977). Les Rêves et les moyens de les diriger. Plan de la Tour: Editions d'Aujourd'hui.(Facsimile reprint of the Tchou-Edition).
Hervey de Saint-Denys (1982). Dreams and how to guide them. Translated by N.Fry and edited by Morton Schatzman. London. Gerald Duckworth. . (abbreviated version)
C.M. den Blanken & E.J.G. Meijer (1988/1991). An Historical View of "Dreams and the Ways to Direct Them; Practical Observations" by Marie-Jean-Léon LeCoq, le Marquis d'Hervey-Saint-Denys. Lucidity Letter, December, 1988, Vol.7, No.2, p. 67-78. Revised Edition:Lucidity,1991, Vol.10 No.1&2, p. 311-322. This article contains an English translation of the forgotten Appendix from the 1867-book.
Hervey de Saint-Denys (1991). Les Reves et les Moyens de les Diriger. Editor D'Aujourd'hui. . No illustrations.
B. Schwartz (1992). Ce qu'on a du savoir, cru savoir, pu savoir sur la vie du marquis d'Hervey de Saint-Denys.(Transl.: What was supposed to be known and what was believed to be known). Oniros no. 37/38, pag. 4–8. Soc. Oniros/Paris.
R. Ripert (1992). Découverte et réhabilitation d'Hervey de Saint-Denys.(Transl.: The discovery and rehabilitation of d'Hervey de Saint-Denys). Oniros no.37/38 pag. 20–21. Soc. Oniros/Paris.
Hervey de Saint-Denys (1995). Les Reves et Les Moyens de Les Diriger:Observations Pratiques. . Soc.Oniros/Paris.
O. de Luppé, A. Pino, R. Ripert & B. Schwartz (1995). (Transl.: D'Hervey de Saint-Denys 1822-1892; Biography, Family correspondence, oneirological and sinological works; Tributes to the author on the centenary of his death and artistic exposition regarding his dreams) Oniros, BP 30, 93451 Ile Saint-Denis cedex. The tributes are by Carolus den Blanken, Celia Green, Roger Ripert and Paul Tholey.
Hervey de Saint-Denys (2000). I sogni e il modo di dirigerli. (Transl.: The dream and the way to direct it). Translation by C.M. Carbone, Il Minotauro, Phoenix. .
Hervey de Saint-Denys (2013). Dromen: Praktische Observaties. (Transl.: Dreams: Practical Observations - Integral Dutch Translation of Les Rêves et les moyens de les diriger: Observations Pratiques)(E-book) . Editor and Translator Drs. Carolus M. den Blanken.
Hervey de Saint-Denys (2007). Les Rêves et les moyens de les diriger. No illustrations. Broché. Editions Cartouche/Paris. Also in E-book format.
Hervey de Saint-Denys, Marie Jean Leon (2008). Les Reves et les Moyens de les diriger. No illustrations. No Appendix. . Paperback, Buenos Books International/Paris.
Hervey de Saint-Denys, Marie Jean Leon (2012). Yume no sojuho. . Editor Takashi Tachiki. Publ. Kokushokankokai/Tokyo;2012.
Jacqueline Carroy (2013), La force et la couleur des rêves selon Hervey de Saint-Denys , Rives méditerranéennes, 44 | 2013, 53–68.
Hervey de Saint-Denys, Marie Jean Leon (2013). Les Reves et les Moyens de les diriger. No illustrations. No Appendix. E-Pub Edition, Buenos Books America LLC. .
Hervey de Saint-Denys (2016). Dreams and the Ways to Direct Them: Practical Observations. Published by Carolus den Blanken/Utrecht(E-book) . Editors: Drs. Carolus den Blanken and Drs. Eli Meijer. English Translator: Drs. Carolus den Blanken. Translator Greek & Latin Sentences: Prof. Dr. Jan van Gijn. Integral Edition.
Hervey de Saint-Denys (2020). Dreams and the Ways to Direct Them: Practical Observations, Including an appendix with a record of a dream after taking hashish. Inner Garden Press/Utrecht(E-book) . Editor: Derekh Moreh. (Enhanced edition of the Den Blanken translation)
Hervey de Saint-Denys (2021). Dreams and how to direct them: Practical Observations. Ouroboros Publishing (paperback) . Editor: Ouroboros Publishing. Translator: D. Bernardo.
Hervey de Saint-Denys (2021). Los sueños y como dirigirlos: Observaciones prácticas. Published by Abraxas Editores. Editor: Abraxas Editores. Translator: D. Bernardo.
References
Further reading
genealogy on geneanet pierfit (after log in)
Hervey de Saint-Denys (1849). Insurrection de Naples en 1647. Amyot. Paris. Text on line
Hervey de Saint-Denys (1856). Histoire de la révolution dans les Deux-Siciles depuis 1793.
Hervey de Saint-Denys (1875). Examen des faits mensongers contenus dans un libelle publié sous le faux nom de Léon Bertin avec le jugement du tribunal correctionel de Versailles.
Hervey de Saint-Denys (1878-1889). Collection of 6 autograph letters signed to unknown recipients. Chateau du Breau, (Seine-et-Marne), and 9 Av. Bosquet, 24 June 1878- 9 June 1889. In French. The letters are written to colleagues and friends and mainly concern sinological matters.
Truchelut & Valkman (1884). Marie Jean Léon d'Hervey de Saint-Denys. Bibliothèque nationale de France, Département Société de Géographie, SG PORTRAIT-1182. 1 photogr. + notice et letters. Text on line
Pino, Angel and Rabut, Isabelle (1999). "Le marquis d'Hervey-Saint-Denys et les traductions littéraires: À propos d'un texte traduit par lui et retraduit par d'autres." (Archive), English title: "The marquis D’Hervey-Saint-Denys and literary translations", In: Alleton, Vivianne and Michael Lackner (editors). De l'un au multiple: traductions du chinois vers les langues européennes Translations from Chinese into European Languages. Fondation Maison des sciences de l'homme, Paris, p. 114-142. . English abstract available
Léon d’Hervey de Saint-Denys (1822-O1878-1892). Entre science et rêve, un patrimoine révélé. Journée du Patrimoine, 16 septembre 2012.Text on line
Jacqueline Carroy (2013). La force et la couleur des rêves selon Hervey de Saint-Denys. Rives Méditerranéennes, 44, p. 53-68. Référence électronique (2013). Rives Méditerranéennes 44. Text on line
1822 births
1892 deaths
French orientalists
French sinologists
Sleep researchers
Oneirologists
Lucid dreams
Chinese–French translators
Translators to French
Spanish–French translators
Members of the Académie des Inscriptions et Belles-Lettres
19th-century French translators
Academic staff of the Collège de France | Marie-Jean-Léon, Marquis d'Hervey de Saint Denys | [
"Biology"
] | 4,897 | [
"Sleep researchers",
"Behavior",
"Sleep"
] |
1,603,459 | https://en.wikipedia.org/wiki/Mittag-Leffler%20function | In mathematics, the Mittag-Leffler functions are a family of special functions. They are complex-valued functions of a complex argument z, and moreover depend on one or two complex parameters.
The one-parameter Mittag-Leffler function, introduced by Gösta Mittag-Leffler in 1903,
can be defined by the Maclaurin series
\(E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)},\)
where \(\Gamma\) is the gamma function, and \(\alpha\) is a complex parameter with \(\operatorname{Re}(\alpha) > 0\).
The two-parameter Mittag-Leffler function, introduced by Wiman in 1905, is occasionally called the generalized Mittag-Leffler function. It has an additional complex parameter \(\beta\), and may be defined by the series
\(E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)}.\)
When \(\beta = 1\), the one-parameter function \(E_\alpha(z) = E_{\alpha,1}(z)\) is recovered.
In the case where \(\alpha\) and \(\beta\) are real and positive, the series converges for all values of the argument \(z\), so the Mittag-Leffler function is an entire function. This class of functions is important in the theory of the fractional calculus.
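Since the defining series converges for every z, the function can be evaluated by straightforward truncation. The Python sketch below is a naive illustration only: it is adequate for small and moderate |z|, and the number of terms is deliberately capped because math.gamma overflows for large arguments; dedicated algorithms (such as those in the MittagLeffleR package listed under Notes) are used in practice.

```python
from math import gamma, exp, cosh, sqrt

def mittag_leffler(z, alpha, beta=1.0, terms=40):
    """Naive truncated series E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta).
    Only suitable for moderate |z|; a large alpha*terms overflows math.gamma."""
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against classical special cases:
print(mittag_leffler(1.3, 1.0), exp(1.3))          # E_1(z) = e^z
print(mittag_leffler(2.0, 2.0), cosh(sqrt(2.0)))   # E_2(z) = cosh(sqrt(z)), here z = 2
```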
See below for three-parameter generalizations.
Some basic properties
For , the Mittag-Leffler function is an entire function of order , and type for any value of . In some sense, the Mittag-Leffler function is the simplest entire function of its order. The indicator function of is
This result actually holds for as well with some restrictions on when .
The Mittag-Leffler function satisfies the recurrence property (Theorem 5.1 of )
from which the following asymptotic expansion holds : for and real such that
then for all , we can show the following asymptotic expansions (Section 6. of ):
-as :
,
-and as :
.
A simpler estimate that can often be useful is given, thanks to the fact that the order and type of is and , respectively:
for any positive and any .
Special cases
For \(\alpha = 0\), the series above equals the Taylor expansion of the geometric series, and consequently \(E_0(z) = \frac{1}{1-z}\) (for \(|z| < 1\)).
For \(\alpha = 1/2, 1, 2\) we find: (Section 2 of )
Error function: \(E_{1/2}(z) = e^{z^2}\operatorname{erfc}(-z)\)
Exponential function: \(E_{1}(z) = e^{z}\)
Hyperbolic cosine: \(E_{2}(z) = \cosh(\sqrt{z})\)
For , we have
For , the integral
gives, respectively: , , .
Mittag-Leffler's integral representation
The integral representation of the Mittag-Leffler function is (Section 6 of )
where the contour starts and ends at and circles around the singularities and branch points of the integrand.
Related to the Laplace transform and Mittag-Leffler summation is the expression (Eq (7.5) of with )
Three-parameter generalizations
One generalization, characterized by three parameters, is
where and are complex parameters and .
Another generalization is the Prabhakar function
\(E_{\alpha,\beta}^{\gamma}(z) = \sum_{k=0}^{\infty} \frac{(\gamma)_k}{k!\,\Gamma(\alpha k + \beta)} z^k,\)
where \((\gamma)_k\) is the Pochhammer symbol.
Applications of Mittag-Leffler function
One of the applications of the Mittag-Leffler function is in modeling fractional order viscoelastic materials. Experimental investigations into the time-dependent relaxation behavior of viscoelastic materials are characterized by a very fast decrease of the stress at the beginning of the relaxation process and an extremely slow decay for large times. It can even take a long time before a constant asymptotic value is reached. Therefore, a lot of Maxwell elements are required to describe relaxation behavior with sufficient accuracy. This ends in a difficult optimization problem in order to identify a large number of material parameters. On the other hand, over the years, the concept of fractional derivatives has been introduced to the theory of viscoelasticity. Among these models, the fractional Zener model was found to be very effective to predict the dynamic nature of rubber-like materials with only a small number of material parameters. The solution of the corresponding constitutive equation leads to a relaxation function of the Mittag-Leffler type. It is defined by the power series with negative arguments. This function represents all essential properties of the relaxation process under the influence of an arbitrary and continuous signal with a jump at the origin.
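As a rough illustration of this relaxation behaviour, the sketch below evaluates a Mittag-Leffler relaxation function of the commonly used form E_alpha(-(t/tau)^alpha). The exponent alpha = 0.6 and time constant tau = 1 are arbitrary illustrative values, and the constitutive constants of the fractional Zener model itself are not given in the text; the truncated series from the earlier sketch is repeated here so the example is self-contained.

```python
from math import gamma

def ml(z, alpha, terms=60):
    # truncated one-parameter Mittag-Leffler series (naive, moderate |z| only)
    return sum(z**k / gamma(alpha * k + 1) for k in range(terms))

def relaxation(t, alpha=0.6, tau=1.0):
    """Mittag-Leffler relaxation E_alpha(-(t/tau)**alpha): very fast initial
    decay followed by an extremely slow, power-law-like decay at large times."""
    return ml(-((t / tau) ** alpha), alpha)

for t in (0.01, 0.1, 1.0, 5.0):
    print(t, round(relaxation(t), 4))
```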
See also
Mittag-Leffler summation
Mittag-Leffler distribution
Notes
R Package 'MittagLeffleR' by Gurtek Gill, Peter Straka. Implements the Mittag-Leffler function, distribution, random variate generation, and estimation.
References
Gorenflo R., Kilbas A.A., Mainardi F., Rogosin S.V., Mittag-Leffler Functions, Related Topics and Applications (Springer, New York, 2014) 443 pages
External links
Mittag-Leffler function: MATLAB code
Mittag-Leffler and stable random numbers: Continuous-time random walks and stochastic solution of space-time fractional diffusion equations
Special functions
Analytic functions | Mittag-Leffler function | [
"Mathematics"
] | 952 | [
"Special functions",
"Combinatorics"
] |
1,603,488 | https://en.wikipedia.org/wiki/Elevator%20pitch | An elevator pitch, elevator speech, lift speech, or elevator statement is a short description of an idea, product, or company that explains the concept in a way such that any listener can understand it in a short period of time. This description typically explains who the thing is for, what it does, why it is needed, and how it will get done. When explaining an individual person, the description generally explains one's skills and goals, and why they would be a productive and beneficial person to have on a team or within a company or project. An elevator pitch does not have to include all of these components, but it usually does at least explain what the idea, product, company, or person is and their value.
Unlike a sales pitch, an elevator pitch can be used in a variety of ways, and may not have a clear buyer-seller relationship. The goal is simply to convey the overall concept or topic being pitched in an exciting way.
The name—elevator pitch—reflects the idea that it should be possible to deliver the summary in the time span of an elevator ride, or approximately thirty seconds to two minutes.
Background information and history
There are many origin stories for the elevator pitch. One commonly known origin story is that of Ilene Rosenzweig and Michael Caruso, two former journalists active in the 1990s. According to Rosenzweig, Caruso was a senior editor at Vanity Fair and was continuously attempting to pitch story ideas to the Editor-In-Chief at the time, but could never pin her down long enough to do so simply because she was always on the move. So, in order to pitch his ideas to her, Caruso would join her during the short free periods of time she had, such as on an elevator ride. Thus, according to Rosenzweig, the concept of the elevator pitch was created.
However, another potential origin story predates that of Rosenzweig and Caruso. Philip Crosby, author of The Art of Getting Your Own Sweet Way (1972) and Quality Is Still Free (1996), suggested that individuals should have a pre-prepared speech that can deliver information about themselves, or about a quality they can provide, within a short period of time, namely the length of an elevator ride, in case they find themselves in an elevator with a prominent figure. Essentially, an elevator pitch is meant to allow an individual, with very limited time, to pitch themselves or an idea to a person who is high up in a company.
Crosby, who worked as a quality test technician, and then later as the Director of Quality at International Telephone and Telegraph, recounted how an elevator pitch could be used to push for change within the company. He planned a speech regarding the change he wanted to see and waited at the elevator at ITT headquarters. Crosby stepped onto an elevator with the CEO of the company to deliver his speech. Once they reached the floor where the CEO was getting off, Crosby was asked to deliver a full presentation on the topic at a meeting for all of the general managers.
Aspects
An elevator pitch is meant to last the duration of an elevator ride, which can vary in length from approximately thirty seconds to two minutes. Therefore, the main focus of an elevator pitch should be making it short and direct. According to the Idaho Business Review, the first two sentences of any elevator pitch are the most important, and should hook or grab the attention of the listener. Information in an elevator pitch, due to the limited amount of time, should be condensed to express the most important ideas or concepts within the allotted time.
The Idaho Business Review also suggests individuals who use an elevator pitch deliver it using simple language, avoiding statistics or other language that may disrupt the focus of the listener. Bloomberg Businessweek suggests that an important lesson to think about when giving an elevator pitch is to "adjust the pitch to the person who is listening, and refine it as you and your business continue to grow and change." When delivering an elevator pitch, individuals are encouraged to remain flexible and adaptable, and to be able to deliver the pitch in a genuine and fluent fashion. By doing so, the intended audience of the pitch will likely be able to follow the information, and will not consider it as being scripted.
Advantages
Advantages to conducting an elevator pitch include convenience and simplicity. For instance, elevator pitches can be given on short notice and without much preparation, making the listener more comfortable. Furthermore, elevator pitches allow the individual who is giving the pitch the ability to simplify the content and deliver it in a less complicated manner by providing the information in a cut-down fashion that gets right to the point.
See also
Business plan
High concept
Lightning talk
Mission statement
SWOT analysis
Vision statement
References
Further reading
Business terms
Elevators
Rhetorical techniques
Selling techniques
Statements | Elevator pitch | [
"Engineering"
] | 961 | [
"Building engineering",
"Elevators"
] |
1,603,508 | https://en.wikipedia.org/wiki/Perfect%20Bayesian%20equilibrium | In game theory, a Perfect Bayesian Equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian Equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games.
Any perfect Bayesian equilibrium has two components -- strategies and beliefs:
The strategy of a player in a given information set specifies his choice of action in that information set, which may depend on the history (on actions taken previously in the game). This is similar to a sequential game.
The belief of a player in a given information set determines what node in that information set he believes the game has reached. The belief may be a probability distribution over the nodes in the information set, and is typically a probability distribution over the possible types of the other players. Formally, a belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1.
The strategies and beliefs also must satisfy the following conditions:
Sequential rationality: each strategy should be optimal in expectation, given the beliefs.
Consistency: each belief should be updated according to the equilibrium strategies, the observed actions, and Bayes' rule on every path reached in equilibrium with positive probability. On paths of zero probability, known as off-equilibrium paths, the beliefs must be specified but can be arbitrary.
A perfect Bayesian equilibrium is always a Nash equilibrium.
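The consistency requirement is simply Bayes' rule applied along the equilibrium path. The Python sketch below is illustrative (the prior and the strategy probabilities are arbitrary numbers, and the "friend"/"enemy" labels anticipate the gift game in the next section): it updates a receiver's belief about the sender's type after observing that a gift was given.

```python
def posterior_friend(prior_friend, give_prob_friend, give_prob_enemy):
    """Bayes' rule: Prob(type = friend | action = give)."""
    num = prior_friend * give_prob_friend
    den = num + (1 - prior_friend) * give_prob_enemy
    if den == 0:
        # "Give" has probability zero on the equilibrium path: Bayes' rule
        # puts no restriction, so the off-path belief may be chosen freely.
        return None
    return num / den

# Separating behaviour: only friends give, so a gift fully reveals the type.
print(posterior_friend(0.3, 1.0, 0.0))  # 1.0
# Pooling behaviour: both types give, so the action is uninformative.
print(posterior_friend(0.3, 1.0, 1.0))  # 0.3 (the prior)
# Neither type gives: the observation is off the equilibrium path.
print(posterior_friend(0.3, 0.0, 0.0))  # None (belief unrestricted)
```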
Examples of perfect Bayesian equilibria
Gift game 1
Consider the following game:
The sender has two possible types: either a "friend" (with probability p) or an "enemy" (with probability 1 - p). Each type has two strategies: either give a gift, or not give.
The receiver has only one type, and two strategies: either accept the gift, or reject it.
The sender's utility is 1 if his gift is accepted, -1 if his gift is rejected, and 0 if he does not give any gift.
The receiver's utility depends on who gives the gift:
If the sender is a friend, then the receiver's utility is 1 (if he accepts) or 0 (if he rejects).
If the sender is an enemy, then the receiver's utility is -1 (if he accepts) or 0 (if he rejects).
For any value of p, Equilibrium 1 exists, a pooling equilibrium in which both types of sender choose the same action:
Equilibrium 1. Sender: Not give, whether they are the friend type or the enemy type. Receiver: Do not accept, with the beliefs that Prob(Friend|Not Give) = p and Prob(Friend|Give) = x, choosing a value x ≤ 1/2.
The sender prefers the payoff of 0 from not giving to the payoff of -1 from sending and not being accepted. Thus, Give has zero probability in equilibrium and Bayes's Rule does not restrict the belief Prob(Friend|Give) at all. That belief must be pessimistic enough that the receiver prefers the payoff of 0 from rejecting a gift to the expected payoff of 2x - 1 from accepting, so the requirement that the receiver's strategy maximize his expected payoff given his beliefs necessitates that Prob(Friend|Give) = x ≤ 1/2. On the other hand, Prob(Friend|Not give) = p is required by Bayes's Rule, since both types take that action and it is uninformative about the sender's type.
If p ≥ 1/2, a second pooling equilibrium exists as well as Equilibrium 1, based on different beliefs:
Equilibrium 2. Sender: Give, whether they are the friend type or the enemy type. Receiver: Accept, with the beliefs that Prob(Friend|Give) = p and Prob(Friend|Not give) = x, choosing any value for x in [0, 1].
The sender prefers the payoff of 1 from giving to the payoff of 0 from not giving, expecting that his gift will be accepted. In equilibrium, Bayes's Rule requires the receiver to have the belief Prob(Friend|Give) = p, since both types take that action and it is uninformative about the sender's type in this equilibrium. The out-of-equilibrium belief does not matter, since the sender would not want to deviate to Not give no matter what response the receiver would have.
Equilibrium 1 is perverse if p is high. The game could have p close to 1, so the sender is very likely a friend, but the receiver still would refuse any gift because he thinks enemies are much more likely than friends to give gifts. This shows how pessimistic beliefs can result in an equilibrium bad for both players, one that is not Pareto efficient. These beliefs seem unrealistic, though, and game theorists are often willing to reject some perfect Bayesian equilibria as implausible.
Equilibria 1 and 2 are the only equilibria that might exist, but we can also check for the two potential separating equilibria, in which the two types of sender choose different actions, and see why they do not exist as perfect Bayesian equilibria:
Suppose the sender's strategy is: Give if a friend, Do not give if an enemy. The receiver's beliefs are updated accordingly: if he receives a gift, he believes the sender is a friend; otherwise, he believes the sender is an enemy. Thus, the receiver will respond with Accept. If the receiver chooses Accept, though, the enemy sender will deviate to Give, to increase his payoff from 0 to 1, so this cannot be an equilibrium.
Suppose the sender's strategy is: Do not give if a friend, Give if an enemy. The receiver's beliefs are updated accordingly: if he receives a gift, he believes the sender is an enemy; otherwise, he believes the sender is a friend. The receiver's best-response strategy is Reject. If the receiver chooses Reject, though, the enemy sender will deviate to Do not give, to increase his payoff from -1 to 0, so this cannot be an equilibrium.
We conclude that in this game, there is no separating equilibrium.
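The deviation arguments above can be reproduced mechanically. The sketch below is only a partial check, not a full equilibrium solver: it takes the receiver's response to a gift as given and asks whether either sender type could gain by switching its action, using the sender payoffs stated earlier (1 if a gift is accepted, -1 if it is rejected, 0 if no gift is given).

```python
def sender_payoff(action, receiver_accepts):
    """Sender payoffs from the text: 1 accepted, -1 rejected, 0 if no gift."""
    if action == "not give":
        return 0
    return 1 if receiver_accepts else -1

def profitable_deviation(friend_action, enemy_action, receiver_accepts):
    """Return the first sender type that gains by switching its action,
    holding the receiver's response to a gift fixed; None if neither does."""
    for sender_type, action in (("friend", friend_action), ("enemy", enemy_action)):
        other = "give" if action == "not give" else "not give"
        if sender_payoff(other, receiver_accepts) > sender_payoff(action, receiver_accepts):
            return (sender_type, other)
    return None

# The two candidate separating profiles discussed above:
print(profitable_deviation("give", "not give", True))    # ('enemy', 'give')      -> breaks down
print(profitable_deviation("not give", "give", False))   # ('enemy', 'not give')  -> breaks down
# The pooling profile of Equilibrium 1 (receiver rejects any gift):
print(profitable_deviation("not give", "not give", False))  # None
```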
Gift game 2
In the following example, the set of PBEs is strictly smaller than the set of SPEs and BNEs. It is a variant of the above gift-game, with the following change to the receiver's utility:
If the sender is a friend, then the receiver's utility is 1 (if they accept) or 0 (if they reject).
If the sender is an enemy, then the receiver's utility is 0 (if they accept) or -1 (if they reject).
Note that in this variant, accepting is a weakly dominant strategy for the receiver.
Similarly to example 1, there is no separating equilibrium. Let's look at the following potential pooling equilibria:
The sender's strategy is: always give. The receiver's beliefs are not updated: they still believe in the a-priori probability, that the sender is a friend with probability p and an enemy with probability 1 - p. Their payoff from accepting is always higher than from rejecting, so they accept (regardless of the value of p). This is a PBE - it is a best-response for both sender and receiver.
The sender's strategy is: never give. Suppose the receiver's belief upon receiving a gift is that the sender is a friend with probability x, where x is any number in [0, 1]. Regardless of x, the receiver's optimal strategy is: accept. This is NOT a PBE, since the sender can improve their payoff from 0 to 1 by giving a gift.
The sender's strategy is: never give, and the receiver's strategy is: reject. This is NOT a PBE, since for any belief of the receiver, rejecting is not a best-response.
Note that option 3 is a Nash equilibrium. If we ignore beliefs, then rejecting can be considered a best-response for the receiver, since it does not affect their payoff (since there is no gift anyway). Moreover, option 3 is even a SPE, since the only subgame here is the entire game. Such implausible equilibria might arise also in games with complete information, but they may be eliminated by applying subgame perfect Nash equilibrium. However, Bayesian games often contain non-singleton information sets and since subgames must contain complete information sets, sometimes there is only one subgame—the entire game—and so every Nash equilibrium is trivially subgame perfect. Even if a game does have more than one subgame, the inability of subgame perfection to cut through information sets can result in implausible equilibria not being eliminated.
To summarize: in this variant of the gift game, there are two SPEs: either the sender always gives and the receiver always accepts, or the sender always does not give and the receiver always rejects. From these, only the first one is a PBE; the other is not a PBE since it cannot be supported by any belief-system.
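The key step in this variant, that rejecting cannot be a best response under any belief, can be checked directly. The sketch below evaluates the receiver's expected payoff from accepting and from rejecting over a grid of beliefs; the grid values are arbitrary and the comparison uses the receiver payoffs given above for the second variant.

```python
# Receiver payoffs in gift game 2: accept -> 1 vs friend, 0 vs enemy;
#                                  reject -> 0 vs friend, -1 vs enemy.
def receiver_expected(action, belief_friend):
    if action == "accept":
        return belief_friend * 1 + (1 - belief_friend) * 0
    return belief_friend * 0 + (1 - belief_friend) * (-1)

# Accepting weakly dominates rejecting for every belief, so "reject" cannot be
# supported by any belief system, even though ("never give", "reject") is a
# (trivially subgame perfect) Nash equilibrium of this game.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, receiver_expected("accept", x) >= receiver_expected("reject", x))  # always True
```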
More examples
For further examples, see signaling game#Examples. See also for more examples. There is a recent application of this concept in Poker, by Loriente and Diez (2023).
PBE in multi-stage games
A multi-stage game is a sequence of simultaneous games played one after the other. These games may be identical (as in repeated games) or different.
Repeated public-good game
The following game is a simple representation of the free-rider problem. There are two players, each of whom can either build a public good or not build. Each player gains 1 if the public good is built and 0 if not; in addition, if player builds the public good, they have to pay a cost of . The costs are private information - each player knows their own cost but not the other's cost. It is only known that each cost is drawn independently at random from some probability distribution. This makes this game a Bayesian game.
In the one-stage game, each player builds if-and-only-if their cost is smaller than their expected gain from building. The expected gain from building is exactly 1 times the probability that the other player does NOT build. In equilibrium, each player has a threshold cost such that they contribute if-and-only-if their cost is less than that threshold. This threshold cost can be calculated based on the probability distribution of the players' costs. For example, if the costs are distributed uniformly on [0, 2], then there is a symmetric equilibrium in which the threshold cost of both players is 2/3. This means that a player whose cost is between 2/3 and 1 will not contribute, even though their cost is below the benefit, because of the possibility that the other player will contribute.
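A minimal sketch of this threshold calculation in Python, assuming (consistently with the 2/3 value above) that costs are uniform on [0, 2]: the symmetric threshold is the fixed point of the condition that the threshold cost equals the probability that the other player does not build.

```python
def one_stage_threshold(cdf, iterations=100):
    """Symmetric threshold c solving c = 1 - F(c): the expected gain from
    building (1 times the chance the other player does not build) equals
    the cost of the marginal, indifferent player."""
    c = 0.5                      # arbitrary starting guess
    for _ in range(iterations):
        c = 1 - cdf(c)           # fixed-point iteration (a contraction here)
    return c

# CDF of the uniform distribution on [0, 2] (assumed, as discussed above).
uniform_0_2_cdf = lambda x: min(max(x / 2, 0.0), 1.0)
print(one_stage_threshold(uniform_0_2_cdf))   # 0.666... = 2/3
```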
Now, suppose that this game is repeated two times. The two plays are independent, i.e., each day the players decide simultaneously whether to build a public good in that day, get a payoff of 1 if the good is built in that day, and pay their cost if they built in that day. The only connection between the games is that, by playing in the first day, the players may reveal some information about their costs, and this information might affect the play in the second day.
We are looking for a symmetric PBE. Consider the threshold cost of both players in day 1 (so in day 1, each player builds if-and-only-if their cost is at most this day-1 threshold). To calculate this threshold, we work backwards and analyze the players' actions in day 2. Their actions depend on the history (= the two actions in day 1), and there are three options:
In day 1, no player built. So now both players know that their opponent's cost is above the day-1 threshold. They update their belief accordingly, and conclude that there is a smaller chance that their opponent will build in day 2. Therefore, they increase their threshold cost, so the threshold cost in day 2 is higher than in day 1.
In day 1, both players built. So now both players know that their opponent's cost is below the day-1 threshold. They update their belief accordingly, and conclude that there is a larger chance that their opponent will build in day 2. Therefore, they decrease their threshold cost, so the threshold cost in day 2 is lower than in day 1.
In day 1, exactly one player built; suppose it is player 1. So now, it is known that the cost of player 1 is below the day-1 threshold and the cost of player 2 is above it. There is an equilibrium in which the actions in day 2 are identical to the actions in day 1 - player 1 builds and player 2 does not build.
It is possible to calculate the expected payoff of the "threshold player" (a player whose cost is exactly at the day-1 threshold) in each of these situations. Since the threshold player should be indifferent between contributing and not contributing, it is possible to calculate the day-1 threshold cost. It turns out that this threshold is lower than the threshold of the one-stage game. This means that, in a two-stage game, the players are less willing to build than in the one-stage game. Intuitively, the reason is that, when a player does not contribute in the first day, they make the other player believe their cost is high, and this makes the other player more willing to contribute in the second day.
Jump-bidding
In an open-outcry English auction, the bidders can raise the current price in small steps (e.g. in $1 increments). However, jump bidding often occurs: some bidders raise the current price by much more than the minimal increment. One explanation of this is that it serves as a signal to the other bidders. There is a PBE in which each bidder jumps if-and-only-if their value is above a certain threshold. See Jump bidding#signaling.
See also
Sequential equilibrium - a refinement of PBE, that restricts the beliefs that can be assigned to off-equilibrium information sets to "reasonable" ones.
Intuitive criterion and Divine equilibrium - other refinements of PBE, specific to signaling games.
References
Game theory equilibrium concepts
Non-cooperative games | Perfect Bayesian equilibrium | [
"Mathematics"
] | 3,057 | [
"Game theory",
"Non-cooperative games",
"Game theory equilibrium concepts"
] |
1,603,557 | https://en.wikipedia.org/wiki/Cadmium%20poisoning | Cadmium is a naturally occurring toxic metal with common exposure in industrial workplaces, plant soils, and from smoking. Due to its low permissible exposure in humans, overexposure may occur even in situations where only trace quantities of cadmium are found. Cadmium is used extensively in electroplating, although the nature of the operation does not generally lead to overexposure. Cadmium is also found in some industrial paints and may represent a hazard when sprayed. Operations involving removal of cadmium paints by scraping or blasting may pose a significant hazard. The primary use of cadmium is in the manufacturing of NiCd rechargeable batteries. The primary source for cadmium is as a byproduct of refining zinc metal. Exposures to cadmium are addressed in specific standards for the general industry, shipyard employment, the construction industry, and the agricultural industry.
Signs and symptoms
Acute
Acute exposure to cadmium fumes may cause flu-like symptoms including chills, fever, and muscle ache sometimes referred to as "the cadmium blues." Symptoms may resolve after a week if there is no respiratory damage. More severe exposures can cause tracheobronchitis, pneumonitis, and pulmonary edema. Symptoms of inflammation may start hours after the exposure and include cough, dryness and irritation of the nose and throat, headache, dizziness, weakness, fever, chills, and chest pain.
Chronic
Complications of cadmium poisoning include cough, anemia, and kidney failure (possibly leading to death). Cadmium exposure increases one's chances of developing cancer. Similar to zinc, long-term exposure to cadmium fumes can cause lifelong anosmia.
Bone and joints
One of the main effects of cadmium poisoning is weak and brittle bones. The bones become soft (osteomalacia), lose bone mineral density (osteoporosis), and become weaker. This results in joint and back pain, and increases the risk of fractures. Spinal and leg pain is common, and a waddling gait often develops due to bone deformities caused by the long-term cadmium exposure. The pain eventually becomes debilitating, with fractures becoming more common as the bone weakens. Permanent deformation in bones can occur. In extreme cases of cadmium poisoning, mere body weight causes a fracture.
Renal
The kidney damage inflicted by cadmium poisoning is irreversible. The kidneys can shrink up to 30 percent. The kidneys lose their function to remove acids from the blood in proximal renal tubular dysfunction. The proximal renal tubular dysfunction causes hypophosphatemia, leading to muscle weakness and sometimes coma. Hyperchloremia also occurs. Kidney dysfunction also causes gout, a form of arthritis due to the accumulation of uric acid crystals in the joints because of high acidity of the blood (hyperuricemia). Cadmium exposure is also associated with the development of kidney stones.
Sources of exposure
Smoking is a significant source of cadmium exposure. Even small amounts of cadmium from smoking are highly toxic to humans, as the lungs absorb cadmium more efficiently than the stomach. Cadmium is emitted in electronic cigarette (EC) aerosol but, on currently available data, the calculated lifetime cancer risk (LCR) does not exceed the acceptable risk limit.
Environmental
Buildup of cadmium levels in the water, air, and soil has been occurring particularly in industrial areas. Environmental exposure to cadmium has been particularly problematic in Japan where many people have consumed rice that was grown in cadmium-contaminated irrigation water. This phenomenon is known as itai-itai disease.
People who live near hazardous waste sites or factories that release cadmium into the air have the potential for exposure to cadmium in air. However, numerous state and federal regulations in the United States control the amount of cadmium that can be released to the air from waste sites and incinerators so that properly regulated sites are not hazardous. The general population and people living near hazardous waste sites may be exposed to cadmium in contaminated food, dust, or water from unregulated or accidental releases. Numerous regulations and use of pollution controls are enforced to prevent such releases.
Some sources of phosphate in fertilizers contain cadmium in amounts of up to 100 mg/kg, which can lead to an increase in the concentration of cadmium in soil (for example in New Zealand).
Food
Food is another source of cadmium. Plants may contain small or moderate amounts in non-industrial areas, but high levels may be found in the liver and kidneys of adult animals. The daily intake of cadmium through food varies by geographic region. Intake is reported to be approximately 8 to 30 μg in Europe and the United States versus 59 to 113 μg in various areas of Japan. A small study of premium dark chocolate samples found 48% had high levels of cadmium, the source commonly being cadmium in the soil in which the cacao was grown.
Occupational exposure
In the 1950s and 1960s industrial exposure to cadmium was high, but as the toxic effects of cadmium became apparent, industrial limits on cadmium exposure have been reduced in most industrialized nations and many policy makers agree on the need to reduce exposure further. While working with cadmium it is important to do so under a fume hood to protect against dangerous fumes. Brazing fillers which contain cadmium should be handled with care. Serious toxicity problems have resulted from long-term exposure to cadmium plating baths.
Workers can be exposed to cadmium in air from the smelting and refining of metals, or from the air in plants that make cadmium products such as batteries, coatings, or plastics. Workers can also be exposed when soldering or welding metal that contains cadmium. Approximately 512,000 workers in the United States are in environments each year where cadmium exposure may occur. Regulations that set permissible levels of exposure, however, are enforced to protect workers and to make sure that levels of cadmium in the air are considerably below levels thought to result in harmful effects.
Artists who work with cadmium pigments, which are commonly used in strong oranges, reds, and yellows, can easily accidentally ingest dangerous amounts, particularly if they use the pigments in dry form, as with chalk pastels, or in mixing their own paints.
Consumer products
Cadmium is used in nickel-cadmium batteries; these are some of the most popular and most common cadmium-based products.
In February 2010, cadmium was found in an entire line of Wal-Mart exclusive Miley Cyrus jewelry. The charms were tested at the behest of the Associated Press and were found to contain high levels of cadmium. Wal-Mart did not stop selling the jewelry until May 12 because "it would be too difficult to test products already on its shelves".
On June 4, 2010, cadmium was detected in the paint used on promotional drinking glasses for the movie Shrek Forever After, sold by McDonald's Restaurants, triggering a recall of 12 million glasses.
Toxicology
Cadmium is an extremely toxic industrial and environmental pollutant classified as a human carcinogen: Group 1 according to the International Agency for Research on Cancer; Group 2a according to the United States Environmental Protection Agency (EPA); and a 1B carcinogen as classified by the European Chemicals Agency.
Toxicodynamics
Cellular toxicology
Inside cells, cadmium ions act as a catalytic hydrogen peroxide generator. This sudden surge of cytosolic hydrogen peroxide causes increased lipid peroxidation and additionally depletes ascorbate and glutathione stores. Hydrogen peroxide can also convert thiol groups on proteins into nonfunctional sulfonic acids and is also capable of directly attacking nuclear DNA. This oxidative stress causes the afflicted cell to manufacture large amounts of inflammatory cytokines.
Toxicokinetics
Inhaling cadmium-laden dust quickly leads to respiratory tract and kidney problems which can be fatal (often from kidney failure). Ingestion of any significant amount of cadmium causes immediate poisoning and damage to the liver and the kidneys. Compounds containing cadmium are also carcinogenic.
Diagnosis
Biomarkers of excessive exposure
Increased concentrations of urinary beta-2 microglobulin can be an early indicator of kidney dysfunction in persons chronically exposed to low but excessive levels of environmental cadmium. The urinary beta-2 microglobulin test is an indirect method of measuring cadmium exposure. Under some circumstances, the Occupational Safety and Health Administration requires screening for kidney damage in workers with long-term exposure to high levels of cadmium. Blood or urine cadmium concentrations provide a better index of excessive exposure in industrial situations or following acute poisoning, whereas organ tissue (lung, liver, kidney) cadmium concentrations may be useful in fatalities resulting from either acute or chronic poisoning. Cadmium concentrations in healthy persons without excessive cadmium exposure are generally less than 1 μg/L in either blood or urine. The ACGIH biological exposure indices for blood and urine cadmium levels are 5 μg/L and 5 μg/g creatinine, respectively, in random specimens. Persons who have sustained kidney damage due to chronic cadmium exposure often have blood or urine cadmium levels in a range of 25–50 μg/L or 25–75 μg/g creatinine, respectively. These ranges are usually 1000–3000 μg/L and 100–400 μg/g, respectively, in survivors of acute poisoning and may be substantially higher in fatal cases.
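As a rough illustration of how the reference concentrations quoted above compare with a measured value, the following minimal Python sketch bands a blood cadmium result against the figures in the text; the thresholds restate those figures, the banding between them is an assumption made purely for illustration, and this is not a diagnostic tool.

def band_blood_cadmium(ug_per_L):
    # Rough banding against the reference figures quoted in the text above.
    # Illustrative only; not medical advice.
    if ug_per_L < 1:
        return "below 1 ug/L: typical of persons without excessive exposure"
    if ug_per_L <= 5:
        return "at or below the ACGIH biological exposure index of 5 ug/L for blood"
    if ug_per_L <= 50:
        return "in the range often reported with chronic exposure and kidney damage (25-50 ug/L for blood)"
    return "far above occupational reference values; 1000-3000 ug/L is reported in survivors of acute poisoning"

print(band_blood_cadmium(0.4))   # healthy reference range
print(band_blood_cadmium(30))    # chronic-exposure range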
Treatment
A person with cadmium poisoning should seek immediate medical attention, both for treatment and supportive care.
For a non-chronic ingestive exposure, emetics or gastric lavage shortly after exposure can help decontaminate the gastrointestinal system. The benefit of activated charcoal is unproven. The US CDC does not recommend chelation therapy, in part because chelation may accentuate kidney damage.
For long-term exposure, considerable evidence indicates that the traditional chelator EDTA can reduce the body's overall cadmium load. Co-administered antioxidants, including the nephroprotective agent glutathione, appear to improve efficacy. For patients with extremely fragile kidneys, limited evidence suggests that sauna-induced sweating may provide an alternative route for excreting the metal.
Epidemiology
In a mass cadmium poisoning in Japan, a marked prevalence of skeletal complications was noted among older, postmenopausal women; however, the cause of this phenomenon is not fully understood and is under investigation. Cadmium poisoning in postmenopausal women may result in an increased risk of osteoporosis. Current research points to general malnourishment, as well as poor calcium metabolism related to the women's age. Studies also point to cadmium damage to the mitochondria of kidney cells as a key factor in the disease.
History
An experiment during the early 1960s involving the spraying of cadmium over Norwich was declassified in 2005 by the UK government, as documented in a BBC News article.
See also
Cobalt poisoning
Itai-itai disease
Citations
General and cited references
Shannon M. "Heavy Metal Poisoning", in Haddad LM, Shannon M, Winchester JF (editors): Clinical Management of Poisoning and Drug Overdose, Third Edition, 1998.
External links
ATSDR Case Studies in Environmental Medicine: Cadmium Toxicity U.S. Department of Health and Human Services
CDC - Cadmium - NIOSH Workplace Safety and Health Topic U.S. Department of Health and Human Services
National Pollutant Inventory - Cadmium and compounds
http://www.canoshweb.org/odp/html/cadmium.htm
After ‘Cadmium Rice,’ now ‘Lead’ and ‘Arsenic Rice’, New York Times
Poisoning
Contaminated farmland
Toxic effects of metals | Cadmium poisoning | [
"Chemistry",
"Environmental_science"
] | 2,431 | [
"Contaminated farmland",
"Water pollution"
] |
1,603,731 | https://en.wikipedia.org/wiki/Labdanum | Labdanum, also called ladanum, ladan, or ladanon, is a sticky brown resin obtained from the shrubs Cistus ladanifer (western Mediterranean) and Cistus creticus (eastern Mediterranean), species of rockrose. It was historically used in herbal medicine and is still used in the preparation of some perfumes and vermouths.
History
In ancient times, labdanum was collected by combing the beards and thighs of goats and sheep that had grazed on the cistus shrubs. The wooden instruments used were referred to in 19th-century Crete as ergastiri; a lambadistrion ("labdanum-gatherer") was a kind of rake to which a double row of leather thongs was fixed instead of teeth. These were used to sweep the shrubs and collect the resin, which was later extracted. It was collected by the shepherds and sold to coastal traders. The resin was used as an ingredient for incense, and medicinally to treat colds, coughs, menstrual problems and rheumatism.
Labdanum was produced on the banks of the Mediterranean in antiquity. The Book of Genesis contains two mentions of labdanum being carried to Egypt from Canaan. The word lot (לט "resin") in these two passages is usually interpreted as referring to labdanum on the basis of Semitic cognates.
Percy Newberry, a specialist on ancient Egypt, speculated that the false beard worn by Osiris and pharaohs may have originally represented a "labdanum-laden goat's beard". He also argued that the scepter of Osiris, which is usually interpreted as either a flail or a flabellum, was more likely an instrument for collecting labdanum similar to that used in nineteenth-century Crete.
Some scholars, such as Samuel Bochart, H.J. Abrahams, and Rabbi Saʻadiah ben Yosef Gaon (Saadya), 882–942, state that the mysterious שחלת (onycha), an ingredient in the holy incense (ketoret) mentioned in the Torah (Exodus 30: 34), was actually labdanum.
Modern uses
Labdanum is produced today mainly for the perfume industry. The raw resin is usually extracted by boiling the leaves and twigs. An absolute is also obtained by solvent extraction. An essential oil is produced by steam distillation. The raw gum is a black or sometimes dark brown, fragrant mass containing up to 20% or more of water. It is plastic but not pourable, and becomes brittle with age. The absolute is dark amber-green and very thick at room temperature. The fragrance is more refined than that of the raw resin. The odour is very rich, complex and tenacious. Labdanum is much valued in perfumery because of its resemblance to ambergris, which has been banned from use in many countries because it originates from the sperm whale, an endangered species. Labdanum is the main ingredient used when making the scent of amber in perfumery. Labdanum's odour is variously described as amber, sweet, woody, powdery, fruity, animalic, ambergris, dry musk, or leathery.
See also
Labdane
References
Resins
Perfume ingredients
Incense material | Labdanum | [
"Physics"
] | 689 | [
"Resins",
"Unsolved problems in physics",
"Incense material",
"Materials",
"Amorphous solids",
"Matter"
] |
1,603,783 | https://en.wikipedia.org/wiki/Index%20%28economics%29 | In statistics, economics, and finance, an index is a statistical measure of change in a representative group of individual data points. These data may be derived from any number of sources, including company performance, prices, productivity, and employment. Economic indices track economic health from different perspectives. Examples include the consumer price index, which measures changes in retail prices paid by consumers, and the cost-of-living index (COLI), which measures the relative cost of living over time.
Influential global financial indices such as the Global Dow and the NASDAQ Composite track the performance of selected large and powerful companies in order to evaluate and predict economic trends.
The Dow Jones Industrial Average and the S&P 500 primarily track U.S. markets, though some legacy international companies are included. The consumer price index tracks the variation in prices for different consumer goods and services over time in a constant geographical location and is integral to calculations used to adjust salaries, bond interest rates, and tax thresholds for inflation.
The GDP deflator, the ratio of nominal GDP to real GDP, measures the level of prices of all new, domestically produced, final goods and services in an economy. Market performance indices include the labour market index/job index and proprietary stock market index investment instruments offered by brokerage houses.
Some indices display market variations. For example, The Economist provides a Big Mac Index that expresses the adjusted cost of a globally ubiquitous Big Mac as a percentage over or under the cost of a Big Mac in the U.S. in USD. Such indices can be used to help forecast currency values.
Index numbers
An index number is an economic data figure reflecting price or quantity compared with a standard or base value. The base usually equals 100 and the index number is usually expressed as 100 times the ratio to the base value. For example, if a commodity costs twice as much in 1970 as it did in 1960, its index number would be 200 relative to 1960. Index numbers are used especially to compare business activity, the cost of living, and employment. They enable economists to reduce unwieldy business data into easily understood terms.
In contrast to a cost-of-living index based on the true but unknown utility function, a superlative index number is an index number that can be calculated. Thus, superlative index numbers are used to provide a fairly close approximation to the underlying cost-of-living index number in a wide range of circumstances.
Some indexes are not time series. Spatial indexes summarize real estate prices, or toxins in the environment, or availability of services, across geographic locations. Indexes may also be used to summarize comparisons between distributions of data within categories. For example, purchasing power parity comparisons of currencies are often constructed with indexes.
There is a substantial body of economic analysis concerning the construction of index numbers, desirable properties of index numbers and the relationship between index numbers and economic theory. A number indicating a change in magnitude, as of price, wage, employment, or production shifts, relative to the magnitude at a specified point usually taken as 100.
Index number problem
The index number problem is the term used by economists to describe the limitation of statistical indexing, when used as a measurement for cost-of-living increases.
For example, in the Consumer Price Index, the reference (base) year's "market basket" is assigned an index number of 100. If the reference year is 2019 and the market basket price is 55, and the basket price doubles to 110 the following year, then the 2020 index rises to 200. The index is calculated by dividing the new year's market basket price by the reference year's price and multiplying the quotient by 100.
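The calculation just described can be sketched in a few lines of Python; the basket prices below are the purely illustrative figures from the example, not official CPI data.

def index_number(price, base_price):
    # Index number relative to a base value of 100.
    return 100.0 * price / base_price

basket_prices = {2019: 55.0, 2020: 110.0}  # illustrative values from the example above
base_year = 2019

for year, price in basket_prices.items():
    print(year, index_number(price, basket_prices[base_year]))
# 2019 -> 100.0 (reference year), 2020 -> 200.0 (basket price doubled)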
While the CPI is a conventional method to measure inflation, it doesn't express how price changes directly affect all consumer purchases of goods and services. It either understates or overstates cost-of-living increases. This is the limitation of the CPI that is described as the index number problem.
There is no theoretically ideal solution to this problem. In practice for retail price indices, the "basket of goods" is updated incrementally every few years to reflect changes. Nevertheless, the fact remains that many economic indices taken over the long term are not really like-for-like comparisons and this is an issue taken into account by researchers in economic history.
See also
Stock market index
List of stock market indices
Producer price index
Price index
Chemical plant cost indexes
Bureau of Labor Statistics
Dow Jones Indexes
Indexation
Economic indicator
References
Further reading
Robin Marris, Economic Arithmetic, (1958).
External links
Humboldt Economic Index
S&P Indices
Lars Kroijer
SG Index
Business terms
Economic growth
Economic indicators
Mathematical and quantitative methods (economics) | Index (economics) | [
"Mathematics"
] | 963 | [
"Index numbers",
"Mathematical objects",
"Numbers"
] |
1,604,592 | https://en.wikipedia.org/wiki/Structured%20cabling | In telecommunications, structured cabling is building or campus cabling infrastructure that consists of a number of standardized smaller elements (hence structured) called subsystems. Structured cabling components include twisted pair and optical cabling, patch panels and patch cables.
Overview
Structured cabling is the design and installation of a cabling system that will support multiple hardware uses and be suitable for today's needs and those of the future. With a correctly installed system, current and future requirements can be met, and hardware that is added in the future will be supported.
Structured cabling design and installation is governed by a set of standards that specify wiring data centers, offices, and apartment buildings for data or voice communications using various kinds of cable, most commonly Category 5e (Cat 5e), Category 6 (Cat 6), and fiber-optic cabling and modular connectors. These standards define how to lay the cabling in various topologies in order to meet the needs of the customer, typically using a central patch panel (which is often mounted in a 19-inch rack), from where each modular connection can be used as needed. Each outlet is then patched into a network switch (normally also rack-mounted) for network use or into an IP or PBX (private branch exchange) telephone system patch panel.
Lines patched as data ports into a network switch require simple straight-through patch cables at each end to connect a computer. Voice patches to PBXs in most countries require an adapter at the remote end to translate the configuration on 8P8C modular connectors into the local standard telephone wall socket. In North America no adapter is needed for certain uses: with ports wired in the preferred standard T568A pattern, the 6P2C plugs most commonly used for single-line phone equipment (e.g. RJ11) and the 6P4C plugs used for two-line phones without power (e.g. RJ14) or single-line phones with power (again RJ11) are physically and electrically compatible with the larger 8P8C socket. With ports wired as T568B, which is common but often in violation of the standard, only the first pair (line 1) works. RJ25 and RJ61 connections are physically but not electrically compatible, and cannot be used. In the United Kingdom, an adapter must be present at the remote end, as the 6-pin BT socket is physically incompatible with 8P8C.
It is common to color-code patch panel cables to identify the type of connection, though structured cabling standards do not require it except in the demarcation wall field.
Cabling standards require that all eight conductors in Cat 5e/6/6A cable be connected.
IP phone systems can run the telephone and the computer on the same wires, eliminating the need for separate phone wiring.
Regardless of copper cable type (Cat 5e/6/6A), the maximum distance is 90 m for the permanent link installation, plus an allowance for a combined 10 m of patch cords at the ends.
Cat 5e and Cat 6 can both effectively run power over Ethernet (PoE) applications up to 100 m. However, due to greater power dissipation in Cat 5e cable, performance and power efficiency are higher when Cat 6A cabling is used to power and connect to PoE devices.
Subsystems
Structured cabling consists of six subsystems:
Entrance facilities are the point where the telephone company network ends and connects with the on-premises wiring belonging to the customer.
Equipment rooms house equipment and wiring consolidation points that serve the users inside the building or campus.
Backbone cabling is the inter-building and intra-building cable connections in structured cabling between entrance facilities, equipment rooms and telecommunications closets. Backbone cabling consists of the transmission media, main and intermediate cross-connects and terminations at these locations. This system is mostly used in data centers.
Horizontal cabling wiring can be standard inside wiring (IW) or plenum cabling and connects telecommunications rooms to individual outlets or work areas on the floor, usually through the wireways, conduits or ceiling spaces of each floor. A horizontal cross-connect is where the horizontal cabling connects to a patch panel or punch-down block, which is connected by backbone cabling to the main distribution facility.
Telecommunications rooms or telecommunications enclosure connects between the backbone cabling and horizontal cabling.
Work-area components connect end-user equipment to outlets of the horizontal cabling system.
Standards
Network cabling standards are used internationally and are published by ISO/IEC, CENELEC and the Telecommunications Industry Association (TIA). Most European countries use CENELEC, International Electrotechnical Commission (IEC) or International Organization for Standardization (ISO) standards. The main CENELEC document is EN 50173, which introduces contextual links to the full suite of CENELEC documents. ISO/IEC 11801 heads the ISO/IEC documentation. In the US, the Telecommunications Industry Association issues the ANSI/TIA-568 standards for telecommunications cabling in commercial premises.
See also
110 block
American National Standards Institute (ANSI)
Registered jack, a set of standards for telecommunications cabling termination (including RJ11, RJ15, and RJ45)
Telecommunication cabling
Notes
References
External links
Fiber Optics Tech Consortium (FOTC)
Networking standards
Signal cables | Structured cabling | [
"Technology",
"Engineering"
] | 1,108 | [
"Networking standards",
"Computer standards",
"Computer networks engineering"
] |
1,604,597 | https://en.wikipedia.org/wiki/Nortriptyline | Nortriptyline, sold under the brand name Aventyl, among others, is a tricyclic antidepressant. This medicine is also sometimes used for neuropathic pain, attention deficit hyperactivity disorder (ADHD), smoking cessation and anxiety. Its use for young people with depression and other psychiatric disorders may be limited due to increased suicidality in the 18–24 population initiating treatment. Nortriptyline is not a preferred treatment for attention deficit hyperactivity disorder or smoking cessation. It is taken by mouth.
Common side effects include dry mouth, constipation, blurry vision, sleepiness, low blood pressure with standing, and weakness. Serious side effects may include seizures, an increased risk of suicide in those less than 25 years of age, urinary retention, glaucoma, mania, and a number of heart issues. Nortriptyline may cause problems if taken during pregnancy. Use during breastfeeding appears to be relatively safe. It is a tricyclic antidepressant (TCA) and is believed to work by altering levels of serotonin and norepinephrine.
Nortriptyline was approved for medical use in the United States in 1964. It is available as a generic medication. In 2022, it was the 191st most commonly prescribed medication in the United States, with more than 2million prescriptions.
Medical uses
Nortriptyline is used to treat depression. A level between 50 and 150 ng/mL of nortriptyline in the blood generally corresponds with an antidepressant effect.
It is also used off-label for the treatment of panic disorder, ADHD, irritable bowel syndrome, tobacco-cessation, migraine prophylaxis and chronic pain or neuralgia modification, particularly temporomandibular joint disorder.
Irritable bowel syndrome
Nortriptyline has also been used as an off-label treatment for irritable bowel syndrome (IBS).
Contraindications
Nortriptyline should not be used in the acute recovery phase after myocardial infarction (heart attack). Use of tricyclic antidepressants along with a monoamine oxidase inhibitor (MAOI), linezolid, or IV methylene blue are contraindicated as it can cause an increased risk of developing serotonin syndrome.
Closer monitoring is required for those with a history of cardiovascular disease, stroke, glaucoma, or seizures, as well as in persons with hyperthyroidism or receiving thyroid hormones.
Side effects
The most common side effects include dry mouth, sedation, constipation, increased appetite, blurred vision and tinnitus. An occasional side effect is a rapid or irregular heartbeat. Alcohol may exacerbate some of its side effects.
Overdose
The symptoms and the treatment of an overdose are generally the same as for the other tricyclic antidepressants, including anticholinergic effects, serotonin syndrome and adverse cardiac effects. TCAs, particularly nortriptyline, have a relatively narrow therapeutic index, which increase the chance of an overdose (both accidental and intentional). Symptoms of overdose include: irregular heartbeat, seizures, coma, confusion, hallucination, widened pupils, drowsiness, agitation, fever, low body temperature, stiff muscles and vomiting.
Interactions
Excessive consumption of alcohol in combination with nortriptyline therapy may have a potentiating effect, which may lead to an increased danger of suicide attempts or overdose, especially in patients with histories of emotional disturbance or suicidal ideation.
It may interact with the following drugs:
heart rhythm medications such as flecainide (Tambocor), propafenone (Rhythmol), or quinidine (Cardioquin, Quinidex, Quinaglute)
cimetidine
guanethidine
reserpine
Pharmacology
Nortriptyline is a strong norepinephrine reuptake inhibitor and a moderate serotonin reuptake inhibitor. Its broader pharmacologic profile involves inhibition of, or antagonism at, a number of additional receptor and transporter sites.
Pharmacodynamics
Nortriptyline is an active metabolite of amitriptyline by demethylation in the liver. Chemically, it is a secondary amine dibenzocycloheptene and pharmacologically it is classed as a first-generation antidepressant.
Nortriptyline may also have a sleep-improving effect due to antagonism of the H1 and 5-HT2A receptors. In the short term, however, nortriptyline may disturb sleep due to its activating effect.
In one study, nortriptyline had the highest affinity for the dopamine transporter among the tricyclic antidepressants (KD = 1,140 nM) besides amineptine (a norepinephrine–dopamine reuptake inhibitor), although its affinity for this transporter was still 261- and 63-fold lower than for the norepinephrine and serotonin transporters (KD = 4.37 and 18 nM, respectively).
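The fold differences quoted above follow directly from the reported KD values; the short Python sketch below simply re-derives them (values in nM, with a lower KD meaning higher affinity).

# Re-deriving the fold-difference figures from the KD values quoted above (nM).
kd = {"DAT": 1140.0, "NET": 4.37, "SERT": 18.0}
for transporter in ("NET", "SERT"):
    fold = kd["DAT"] / kd[transporter]
    print(f"Affinity for DAT is about {fold:.0f}-fold lower than for {transporter}")
# prints roughly 261-fold (NET) and 63-fold (SERT), matching the text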
Pharmacogenetics
Nortriptyline is metabolized in the liver by the hepatic enzyme CYP2D6, and genetic variations within the gene coding for this enzyme can affect its metabolism, leading to changes in the concentrations of the drug in the body. Increased concentrations of nortriptyline may increase the risk for side effects, including anticholinergic and nervous system adverse effects, while decreased concentrations may reduce the drug's efficacy.
Individuals can be categorized into different types of CYP2D6 metabolizers depending on which genetic variations they carry. These metabolizer types include poor, intermediate, extensive, and ultrarapid metabolizers. Most individuals (about 77–92%) are extensive metabolizers, and have "normal" metabolism of nortriptyline. Poor and intermediate metabolizers have reduced metabolism of the drug as compared to extensive metabolizers; patients with these metabolizer types may have an increased probability of experiencing side effects. Ultrarapid metabolizers use nortriptyline much faster than extensive metabolizers; patients with this metabolizer type may have a greater chance of experiencing pharmacological failure.
The Clinical Pharmacogenetics Implementation Consortium recommends avoiding nortriptyline in persons who are CYP2D6 ultrarapid or poor metabolizers, due to the risk of a lack of efficacy and side effects, respectively. A reduction in starting dose is recommended for patients who are CYP2D6 intermediate metabolizers. If use of nortriptyline is warranted, therapeutic drug monitoring is recommended to guide dose adjustments. The Dutch Pharmacogenetics Working Group recommends reducing the dose of nortriptyline in CYP2D6 poor or intermediate metabolizers, and selecting an alternative drug or increasing the dose in ultrarapid metabolizers.
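The phenotype-dependent guidance summarized in the two paragraphs above can be condensed into a small lookup table. This is a minimal sketch: the phenotype labels and recommendation strings paraphrase the CPIC and Dutch Pharmacogenetics Working Group guidance as described here, and it is not a clinical decision tool.

# Paraphrase of the CYP2D6 guidance summarized above; illustrative only.
CYP2D6_NORTRIPTYLINE_GUIDANCE = {
    "poor": "avoid if possible; if required, reduce dose and use therapeutic drug monitoring",
    "intermediate": "reduce starting dose; consider therapeutic drug monitoring",
    "extensive": "standard dosing (normal metabolism)",
    "ultrarapid": "avoid if possible due to likely lack of efficacy; otherwise adjust dose with drug monitoring",
}

def nortriptyline_guidance(phenotype):
    return CYP2D6_NORTRIPTYLINE_GUIDANCE.get(phenotype.lower(), "unknown phenotype; consult the published guidelines")

print(nortriptyline_guidance("intermediate"))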
Chemistry
Nortriptyline is a tricyclic compound, specifically a dibenzocycloheptadiene, and possesses three rings fused together with a side chain attached in its chemical structure. Other dibenzocycloheptadiene tricyclic antidepressants include amitriptyline (N-methylnortriptyline), protriptyline, and butriptyline. Nortriptyline is a secondary amine tricyclic antidepressant, with its N-methylated parent amitriptyline being a tertiary amine. Other secondary amine tricyclic antidepressants include desipramine and protriptyline. The chemical name of nortriptyline is 3-(10,11-dihydro-5H-dibenzo[a,d]cyclohepten-5-ylidene)-N-methyl-1-propanamine and its free base form has a chemical formula of C19H21N with a molecular weight of 263.384 g/mol. The drug is used commercially mostly as the hydrochloride salt; the free base form is used rarely. The CAS Registry Number of the free base is 72-69-5 and of the hydrochloride is 894-71-3.
History
Nortriptyline was developed by Geigy. It first appeared in the literature in 1962 and was patented the same year. The drug was first introduced for the treatment of depression in 1963.
Society and culture
Generic names
Nortriptyline is the generic name of the drug, while nortriptyline hydrochloride is the name of its hydrochloride salt. Its generic name in Spanish and Italian is nortriptilina, in German nortriptylin, and in Latin nortriptylinum.
Brand names
Brand names of nortriptyline include Allegron, Aventyl, Noritren, Norpress, Nortrilen, Norventyl, Norzepine, Pamelor, and Sensival, among many others.
Research
Although not approved by the US Food and Drug Administration (FDA) for neuropathic pain, randomized controlled trials have demonstrated the effectiveness of tricyclic antidepressants for the treatment of this condition in both depressed and non-depressed individuals. In 2010, an evidence-based guideline sponsored by the International Association for the Study of Pain recommended nortriptyline as a first-line medication for neuropathic pain. However, in a 2015 Cochrane systematic review the authors did not recommend nortriptyline as a first-line agent for neuropathic pain.
It may also be effective as an aid to tobacco cessation.
References
Alpha-1 blockers
Antihistamines
Dibenzocycloheptenes
Human drug metabolites
M1 receptor antagonists
M2 receptor antagonists
M3 receptor antagonists
M4 receptor antagonists
M5 receptor antagonists
Secondary amines
Serotonin receptor antagonists
Serotonin–norepinephrine reuptake inhibitors
Sodium channel blockers
Tricyclic antidepressants
Wikipedia medicine articles ready to translate | Nortriptyline | [
"Chemistry"
] | 2,125 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
1,604,604 | https://en.wikipedia.org/wiki/Adrien%20Douady | Adrien Douady (; 25 September 1935 – 2 November 2006) was a French mathematician born in La Tronche, Isère. He was the son of Daniel Douady and Guilhen Douady.
Douady was a student of Henri Cartan at the École normale supérieure, and initially worked in homological algebra. His thesis concerned deformations of complex analytic spaces. Subsequently, he became more interested in the work of Pierre Fatou and Gaston Julia and made significant contributions to the fields of analytic geometry and dynamical systems. Together with his former student John H. Hubbard, he launched a new subject, and a new school, studying properties of iterated quadratic complex mappings. They made important mathematical contributions in this field of complex dynamics, including a study of the Mandelbrot set. One of their most fundamental results is that the Mandelbrot set is connected; perhaps most important is their theory of renormalization of (polynomial-like) maps. The Douady rabbit, a quadratic filled Julia set, is named after him.
Douady taught at the University of Nice and was a professor at the Paris-Sud 11 University, Orsay. He was a member of Bourbaki and an invited speaker at the International Congress of Mathematicians in 1966 at Moscow and again in 1986 in Berkeley.
He was elected to the Académie des Sciences in 1997, and was featured in the French animation project Dimensions.
He died after diving into the cold Mediterranean from a favourite spot near his vacation home in the Var.
His son, Raphael Douady, is also a noted mathematician and an economist.
See also
Beltrami equation
Jessen's icosahedron
Kuiper's theorem
References
External links
http://picard.ups-tlse.fr/~cheritat/Adrien70/index.php
http://www.math.jacobs-university.de/adrien with a guest book to share your memories
Pictures with Adrien Douady
1935 births
2006 deaths
20th-century French mathematicians
Dynamical systems theorists
Academic staff of Côte d'Azur University
Academic staff of Paris-Sud University
École Normale Supérieure alumni
Members of the French Academy of Sciences
Nicolas Bourbaki
People from La Tronche | Adrien Douady | [
"Mathematics"
] | 470 | [
"Dynamical systems theorists",
"Dynamical systems"
] |
1,604,631 | https://en.wikipedia.org/wiki/Joseph%20Oesterl%C3%A9 | Joseph Oesterlé (born 1954) is a French mathematician who, along with David Masser, formulated the abc conjecture which has been called "the most important unsolved problem in diophantine analysis".
He is a member of Bourbaki.
References
External links
The ABC conjecture
Oesterlé on the origin of the abc Conjecture
1954 births
Living people
People from Alsace
École Normale Supérieure alumni
20th-century French mathematicians
University of Paris alumni
French number theorists
Nicolas Bourbaki
Abc conjecture | Joseph Oesterlé | [
"Mathematics"
] | 105 | [
"Abc conjecture",
"Number theory"
] |
1,604,706 | https://en.wikipedia.org/wiki/John%20B.%20Goodenough | John Bannister Goodenough ( ; July 25, 1922 – June 25, 2023) was an American materials scientist, a solid-state physicist, and a Nobel laureate in chemistry. From 1986 he was a professor of Materials Science, Electrical Engineering and Mechanical Engineering, at the University of Texas at Austin. He is credited with
identifying the Goodenough–Kanamori rules of the sign of the magnetic superexchange in materials, with developing materials for computer random-access memory and with inventing cathode materials for lithium-ion batteries.
Goodenough was born in Jena, Germany, to American parents. During and after graduating from Yale University, Goodenough served as a U.S. military meteorologist in World War II. He went on to obtain his Ph.D. in physics at the University of Chicago, became a researcher at MIT Lincoln Laboratory, and later the head of the Inorganic Chemistry Laboratory at the University of Oxford.
Goodenough was awarded the National Medal of Science, the Copley Medal, the Fermi Award, the Draper Prize, and the Japan Prize. The John B. Goodenough Award in materials science is named for him. In 2019, he was awarded the Nobel Prize in Chemistry alongside M. Stanley Whittingham and Akira Yoshino; at 97 years old, he became the oldest Nobel laureate in history. From August 27, 2021, until his death, he was the oldest living Nobel Prize laureate.
Personal life and education
John Goodenough was born in Jena, Germany, on July 25, 1922, to American parents, Erwin Ramsdell Goodenough (1893–1965) and Helen Miriam (Lewis) Goodenough. He came from an academic family. His father, a graduate student at Oxford when John was born, eventually became a professor of religious history at Yale. His brother Ward became an anthropology professor at the University of Pennsylvania. John also had two half-siblings from his father's second marriage: Ursula Goodenough, emeritus professor of biology at Washington University in St. Louis; and Daniel Goodenough, emeritus professor of biology at Harvard Medical School.
In his school years Goodenough suffered from dyslexia. At the time, dyslexia was poorly understood by the medical community, and Goodenough's condition went undiagnosed and untreated. Although his primary schools considered him "a backward student," he taught himself to write so that he could take the entrance exam for Groton School, the boarding school where his older brother was studying at the time. He was awarded a full scholarship. At Groton, his grades improved and he eventually graduated at the top of his class in 1940. He also developed an interest in exploring nature, plants, and animals. Although he was raised an atheist, he converted to Protestant Christianity in high school.
After Groton, Goodenough graduated summa cum laude from Yale, where he was a member of Skull and Bones. He completed his coursework in early 1943 (after just two and a half years) and received his degree in 1944, covering his expenses by tutoring and grading exams. He had initially sought to enlist in the military following the Japanese attack on Pearl Harbor, but his mathematics professor convinced him to stay at Yale for another year so that he could finish his coursework, which qualified him to join the U.S. Army Air Corps' meteorology department.
After World War II ended, Goodenough obtained a master's degree and a Ph.D. in physics from the University of Chicago, the latter in 1952. His doctoral supervisor was Clarence Zener, a theorist in electrical breakdown; he also worked and studied with physicists, including Enrico Fermi and John A. Simpson. While at Chicago, he met Canadian history graduate student Irene Wiseman. They married in 1951. The couple had no children. Irene died in 2016.
Goodenough turned 100 on July 25, 2022. He died at an assisted living facility in Austin, Texas, on June 25, 2023, one month shy of what would have been his 101st birthday.
Career and research
Over his career, Goodenough authored more than 550 articles, 85 book chapters and reviews, and five books, including two seminal works, Magnetism and the Chemical Bond (1963) and Les oxydes des metaux de transition (1973).
MIT Lincoln Laboratory
After his studies, Goodenough was a research scientist and team leader at the MIT Lincoln Laboratory for 24 years. At MIT, he was part of an interdisciplinary team responsible for developing random access magnetic memory. His research focused on magnetism and on the metal–insulator transition behavior in transition-metal oxides. His research efforts on RAM led him to develop the concepts of cooperative orbital ordering, also known as a cooperative Jahn–Teller distortion, in oxide materials. They subsequently led him to develop (with Junjiro Kanamori) the Goodenough–Kanamori rules, a set of semi-empirical rules to predict the sign of the magnetic superexchange in materials; superexchange is a core property for high-temperature superconductivity.
University of Oxford
The U.S. government eventually terminated Goodenough's research funding, so during the late 1970s and early 1980s, he left the United States and continued his career as head of the Inorganic Chemistry Laboratory at the University of Oxford. Among the highlights of his work at Oxford, Goodenough is credited with significant research essential to the development of commercial lithium-ion rechargeable batteries. Goodenough was able to expand upon previous work from M. Stanley Whittingham on battery materials, and found in 1980 that by using LixCoO2 as a lightweight, high energy density cathode material, he could double the capacity of lithium-ion batteries.
Although Goodenough saw the commercial potential of batteries with his LiCoO2 and LiNiO2 cathodes and approached the University of Oxford with a request to patent this invention, it refused. Unable to afford the patenting expenses on his academic salary, Goodenough turned to the UK's Atomic Energy Research Establishment (AERE) in Harwell, which accepted his offer, but under terms that provided no royalty payments to the inventors, John B. Goodenough and Koichi Mizushima. In 1990, the AERE licensed Goodenough's patents to Sony Corporation, which was followed by other battery manufacturers. It has been estimated that the AERE made over £10 million from this licensing.
The work at Sony on further improvements to Goodenough's invention was led by Akira Yoshino, who developed a scaled-up design of the battery and its manufacturing process. Goodenough received the Japan Prize in 2001 for his discoveries of the materials critical to the development of lightweight high energy density rechargeable lithium batteries, and he, Whittingham, and Yoshino shared the 2019 Nobel Prize in Chemistry for their research in lithium-ion batteries.
University of Texas
From 1986, Goodenough was a professor at The University of Texas at Austin in the Cockrell School of Engineering departments of Mechanical Engineering and Electrical Engineering. During his tenure there, he continued his research on ionic conducting solids and electrochemical devices; he continued to study improved materials for batteries, aiming to promote the development of electric vehicles and to help reduce human dependency on fossil fuels. Arumugam Manthiram and Goodenough discovered the polyanion class of cathodes. They showed that positive electrodes containing polyanions, e.g., sulfates, produce higher voltages than oxides due to the inductive effect of the polyanion. The polyanion class includes materials such as lithium-iron phosphates that are used for smaller devices like power tools. His group also identified various promising electrode and electrolyte materials for solid oxide fuel cells. He held the Virginia H. Cockrell Centennial Chair in Engineering.
Goodenough still worked at the university at age 98 as of 2021, hoping to find another breakthrough in battery technology.
On February 28, 2017, Goodenough and his team at the University of Texas published a paper in the journal Energy and Environmental Science on their demonstration of a glass battery, a low-cost all-solid-state battery that is noncombustible and has a long cycle life with a high volumetric energy density, and fast rates of charge and discharge. Instead of liquid electrolytes, the battery uses glass electrolytes that enable the use of an alkali-metal anode without the formation of dendrites. However, this paper was met with widespread skepticism by the battery research community and remains controversial after several follow-up works. The work was criticized for a lack of comprehensive data, spurious interpretations of the data obtained, and that the proposed mechanism of battery operation would violate the first law of thermodynamics.
In April 2020, a patent was filed for the glass battery on behalf of Portugal's National Laboratory of Energy and Geology (LNEG), the University of Porto, Portugal, and the University of Texas.
Advisory work
In 2010, Goodenough joined the technical advisory board of Enevate, a silicon-dominant Li-ion battery technology startup based in Irvine, California. Goodenough also served as an adviser to the Joint Center for Energy Storage Research (JCESR), a collaboration led by Argonne National Laboratory and funded by the Department of Energy. From 2016, Goodenough also worked as an adviser for Battery500, a national consortium led by Pacific Northwest National Laboratory (PNNL) and partially funded by the U.S. Department of Energy.
Distinctions and awards
Goodenough was elected a member of the National Academy of Engineering in 1976 for his work designing materials for electronic components and clarifying the relationships between the properties, structures, and chemistry of substances. He was also a member of the American National Academy of Sciences and its French, Spanish, and Indian counterparts. In 2010, he was elected a Foreign Member of the Royal Society. The Royal Society of Chemistry grants a John B. Goodenough Award in his honor. The Electrochemical Society awards a biannual John B. Goodenough Award of The Electrochemical Society.
Goodenough received the following awards:
Fermi Award (2009), alongside metallurgist Siegfried Hecker
National Medal of Science (2013), presented by U.S. President Barack Obama
Draper Prize in engineering (2014).
Welch Award in Chemistry (2017)
C.K. Prahalad Award (2017)
Copley Medal of the Royal Society (2019)
Nobel Prize in Chemistry (2019), alongside M. Stanley Whittingham and Akira Yoshino
Goodenough was 97 when he received the Nobel Prize. He remains the oldest person ever to have been awarded the prize.
Works
Selected articles
Lightfoot, P.; Pei, S. Y.; Jorgensen, J. D.; Manthiram, A.; Tang, X. X. & J. B. Goodenough. "Excess Oxygen Defects in Layered Cuprates", Argonne National Laboratory, The University of Texas-Austin, Materials Science Laboratory United States Department of Energy, National Science Foundation, (September 1990).
Argyriou, D. N.; Mitchell, J. F.; Chmaissem, O.; Short, S.; Jorgensen, J. D. & J. B. Goodenough. "Sign Reversal of the Mn-O Bond Compressibility in La1.2Sr1.8Mn2O7 Below TC: Exchange Striction in the Ferromagnetic State", Argonne National Laboratory, The University of Texas-Austin, Center for Material Science and Engineering United States Department of Energy, National Science Foundation, Welch Foundation, (March 1997).
Goodenough, J. B.; Abruna, H. D. & M. V. Buchanan. "Basic Research Needs for Electrical Energy Storage. Report of the Basic Energy Sciences Workshop on Electrical Energy Storage, April 2–4, 2007", United States Department of Energy, (April 4, 2007).
Selected books
See also
Junjiro Kanamori
Koichi Mizushima (scientist)
Rachid Yazami
References
Further reading
External links
Faculty Directory at University of Texas at Austin
Array of Contemporary American Physicists
History of the lithium-ion battery, Physics Today, Sept. 2016
by The Electrochemical Society, October 5, 2016
Are Solid State Batteries about to change the world?, Joe Scott, November 2018, Goodenough and team research on more energy dense solid state Li-ion chemistry featured 3:35–12:45.
Pr John Goodenough's interview GOODENOUGH John B., 2001–05 – Sciences : histoire orale on École supérieure de physique et de chimie industrielles de la ville de Paris history of science website
including the Nobel Lecture, "Designing Lithium-ion Battery Cathodes" (December 8, 2019)
1922 births
2023 deaths
20th-century American physicists
21st-century American physicists
American men centenarians
American Christians
Inventors from Texas
American materials scientists
American Nobel laureates
Draper Prize winners
Enrico Fermi Award recipients
Fellows of the American Physical Society
Foreign members of the Royal Society
Groton School alumni
Massachusetts Institute of Technology faculty
Members of the French Academy of Sciences
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
National Medal of Science laureates
Nobel laureates in Chemistry
Recipients of the Copley Medal
Scientists with dyslexia
Skull and Bones Society
Solid state chemists
United States Army personnel of World War II
University of Chicago alumni
University of Texas at Austin faculty
Yale University alumni
Members of Skull and Bones
Benjamin Franklin Medal (Franklin Institute) laureates | John B. Goodenough | [
"Chemistry"
] | 2,833 | [
"Solid state chemists"
] |
1,604,816 | https://en.wikipedia.org/wiki/Steric%20factor | The steric factor, usually denoted ρ, is a quantity used in collision theory.
Also called the probability factor, the steric factor is defined as the ratio between the experimental value of the rate constant and the one predicted by collision theory. It can also be defined as the ratio between the pre-exponential factor and the collision frequency, and it is most often less than unity. Physically, the steric factor can be interpreted as the ratio of the cross section for reactive collisions to the total collision cross section.
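As a minimal numerical sketch of this definition, the steric factor can be estimated as the ratio of the experimental Arrhenius pre-exponential factor to the collision frequency factor predicted by simple collision theory; all input values below (cross section, reduced mass, temperature, pre-exponential factor) are assumed purely for illustration.

import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro constant, 1/mol

def collision_frequency_factor(sigma_m2, mu_kg, T):
    # Collision frequency factor Z (m^3 mol^-1 s^-1) for an A + B reaction,
    # from the collision cross section sigma, reduced mass mu and temperature T.
    mean_relative_speed = math.sqrt(8.0 * k_B * T / (math.pi * mu_kg))
    return N_A * sigma_m2 * mean_relative_speed

# Assumed, illustrative values for a small-molecule gas-phase reaction:
sigma = 4.0e-19     # collision cross section, m^2
mu = 3.0e-26        # reduced mass, kg
T = 300.0           # temperature, K
A_exp = 1.0e8       # experimental pre-exponential factor, m^3 mol^-1 s^-1

Z = collision_frequency_factor(sigma, mu, T)
rho = A_exp / Z
print(f"Z = {Z:.2e} m^3 mol^-1 s^-1, steric factor rho = {rho:.2f}")  # rho < 1 for these values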
Usually, the more complex the reactant molecules, the lower the steric factors. Nevertheless, some reactions exhibit steric factors greater than unity: the harpoon reactions, which involve atoms that exchange electrons, producing ions. The deviation from unity can have different causes: the molecules are not spherical, so different geometries are possible; not all the kinetic energy is delivered into the right spot; the presence of a solvent (when applied to solutions); and so on.
When collision theory is applied to reactions in solution, the solvent cage has an effect on the reactant molecules, as several collisions can take place in a single encounter, which leads to predicted preexponential factors being too large. ρ values greater than unity can be attributed to favorable entropic contributions.
Usually there is no simple way to accurately estimate steric factors without performing trajectory or scattering calculations. The pre-exponential factor to which the steric factor is compared is also commonly known as the frequency factor.
Notes
Chemical kinetics
Physical chemistry | Steric factor | [
"Physics",
"Chemistry"
] | 296 | [
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"nan",
"Chemical kinetics",
"Physical chemistry",
"Physical chemistry stubs"
] |
1,604,856 | https://en.wikipedia.org/wiki/Wallpaper%20%28magazine%29 | Wallpaper, stylized Wallpaper*, is a publication focusing on design and architecture, fashion, travel, art, and lifestyle. The magazine was launched in London in 1996 by Canadian journalist Tyler Brûlé and Austrian journalist Alexander Geringer. It is now owned by Future plc after its acquisition of TI Media.
History
Brûlé sold the magazine to Time Warner in 1997 and stayed on as editorial director until 2002, when he was replaced by Jeremy Langmead. In 2003 Langmead appointed Tony Chambers as Creative Director. Chambers, a self-styled "visual journalist", replaced Langmead as editor-in-chief in April 2007. In September 2017, Chambers was succeeded by the publication's creative director, Sarah Douglas. Douglas had worked at the magazine for over a decade, joining as Art Editor in 2007 before being appointed to Creative Director in 2012. Chambers, in turn, took on the role of Wallpaper* brand and content director. In 2023, Bill Prince was appointed editor-in-chief.
Apart from publishing the monthly magazine and website, under the banner of its Bespoke business, Wallpaper* also creates content, curates exhibitions and designs events for third party clients. Wallpaper* published over 100 travel city guide books in partnership with Phaidon Press.
Since 2015, Wallpaper* has offered an e-commerce platform, the WallpaperSTORE*, offering the opportunity to purchase a carefully curated selection of the products seen on the pages of the magazine.
Wallpaper.com covers breaking news across design, interior, art, architecture, fashion, travel, beauty, wellness and technology. It also publishes exclusive online features, interviews, galleries and a daily newsletter. According to the magazine's media pack, the website has on average over 1,700,000 unique users per month.
In 2007, to celebrate its 100th issue and reflect its multi-platform status, the logo's asterisk acquired a cursor in place of one of its arms. From 2007, the magazine invited selected guest editors from the world of art, design, architecture, and fashion to edit its October issue. Guest editors have included Jeff Koons (2007), Zaha Hadid (2008), Karl Lagerfeld (2009), David Lynch (2010), Kraftwerk (2011), Ole Scheeren (2012), Elmgreen & Dragset (2013), Frank Gehry (2014), William Wegman (2015), Jenny Holzer (2019), Giorgio Armani (2022) and Yayoi Kusama (2023).
In August 2008, Wallpaper launched the Wallpaper* Selects website in collaboration with contemporary online art retailer Eyestorm. Wallpaper Selects sells a selection of limited-edition photographs from the Wallpaper archive, signed by the photographer.
In July 2011, the magazine launched an iPad edition. In October 2015, Wallpaper celebrated its 200th issue since starting its print edition in 1996. During the COVID-19 pandemic, the monthly magazine was made available as a free digital download.
The annual Wallpaper* Design Awards were launched in January 2005. From 2009, Wallpaper* released four annual special editions focused on the creative culture of the BRIC nations, 'Made in China' (June 2009), 'Born in Brazil' (June 2010), 'Reborn in India' (June 2011) and 'Reigning in Russia' (November 2012). In 2017, Wallpaper* launched a Chinese-language edition of the magazine, published by Huasheng Media.
From 2010 to 2019, the Wallpaper* Handmade exhibition was held annually at the Milan Salone del Mobile, ushering in an era of collaboration between the magazine's editorial team, established manufacturers and emerging designers across a number of fields.
In 2024, Wallpaper* partnered with Rolex to publish the official history of the Rolex Submariner, the first in an ongoing series of specialist publications about the watchmaker's iconic models.
Controversies
In September 2005, the magazine published an article about the Afrikaans Language Monument by Bronwyn Davies, an English-speaking South African, that described Afrikaans as "one of the world's ugliest languages." South African billionaire Johann Rupert (chairman of the Richemont Group), responded by withdrawing advertising for all Richemont Group brands, such as Cartier, Van Cleef & Arpels, Montblanc and Alfred Dunhill, from the magazine.
References
External links
Future plc
Visual arts magazines published in the United Kingdom
Cultural magazines published in the United Kingdom
Design magazines
Magazines published in London
Magazines established in 1996
Monthly magazines published in the United Kingdom | Wallpaper (magazine) | [
"Engineering"
] | 951 | [
"Design magazines",
"Design"
] |
1,605,200 | https://en.wikipedia.org/wiki/Salt | In common usage, salt is a mineral composed primarily of sodium chloride (NaCl). When used in food, especially in granulated form, it is more formally called table salt. In the form of a natural crystalline mineral, salt is also known as rock salt or halite. Salt is essential for life in general (being the source of the essential dietary minerals sodium and chlorine), and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and is known to uniformly improve the taste perception of food, including otherwise unpalatable food. Salting, brining, and pickling are ancient and important methods of food preservation.
Some of the earliest evidence of salt processing dates to around 6000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt works in China dates to approximately the same period. Salt was prized by the ancient Hebrews, Greeks, Romans, Byzantines, Hittites, Egyptians, and Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance.
Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. The greatest single use for salt (sodium chloride) is as a feedstock for the production of chemicals. It is used to produce caustic soda and chlorine, and in the manufacture of products such as polyvinyl chloride, plastics, and paper pulp. Of the annual global production of around three hundred million tonnes, only a small percentage is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea salt and table salt, the latter of which usually contains an anti-caking agent and may be iodised to prevent iodine deficiency. As well as its use in cooking and at the table, salt is present in many processed foods.
Sodium is an essential element for human health via its role as an electrolyte and osmotic solute. However, excessive salt consumption increases the risk of cardiovascular diseases such as hypertension. Such health effects of salt have long been studied. Accordingly, numerous world health associations and experts in developed countries recommend reducing consumption of popular salty foods. The World Health Organization recommends that adults consume less than 2,000 mg of sodium, equivalent to 5 grams of salt, per day.
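As a rough arithmetic check of the figure above, sodium intake can be converted to its salt (sodium chloride) equivalent using the standard approximation that sodium makes up about 40% of the mass of NaCl, giving a conversion factor of roughly 2.5; this factor is a general approximation, not a value taken from this article.

def sodium_to_salt_grams(sodium_mg):
    # Convert a sodium intake in mg to the approximate equivalent mass of salt in g,
    # using the ~2.5 conversion factor (sodium is roughly 40% of NaCl by mass).
    return sodium_mg * 2.5 / 1000.0

print(sodium_to_salt_grams(2000))  # ~5 g of salt, consistent with the WHO guidance above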
History
All through history, the availability of salt has been pivotal to civilization. What is now thought to have been the first city in Europe, Solnitsata in Bulgaria, was a salt-mining town that provided the area now known as the Balkans with salt from 5400 BC. Salt was the best-known food preservative, especially for meat, for many thousands of years. A very ancient salt-works operation has been discovered at the Poiana Slatinei archaeological site next to a salt spring in Lunca, Neamț County, Romania. Evidence indicates that Neolithic people of the Precucuteni Culture were boiling the salt-laden spring water through the process of briquetage to extract salt as far back as 6050 BC. The salt extracted from this operation may have directly correlated with the rapid growth of this society's population soon after production began. The harvest of salt from the surface of Xiechi Lake near Yuncheng in Shanxi, China, dates back to at least 6000 BC, making it one of the oldest verifiable saltworks.
There is more salt in animal tissues, such as meat, blood, and milk, than in plant tissues. Nomads who subsist on their flocks and herds do not eat salt with their food, but agriculturalists, feeding mainly on cereals and vegetable matter, need to supplement their diet with salt. With the spread of civilization, salt became one of the world's main trading commodities. It was of high value to the ancient Hebrews, the Greeks, the Romans, the Byzantines, the Hittites and other peoples of antiquity. In the Middle East, salt was used to seal an agreement ceremonially, and the ancient Hebrews made a "covenant of salt" with God and sprinkled salt on their offerings to show their trust in him. An ancient practice in time of war was salting the earth: scattering salt around in a defeated city to symbolically prevent plant growth. The Bible tells the story of King Abimelech, who destroyed the city of Shechem and is said to have sown it with salt, and various texts claim that the Roman general Scipio Aemilianus Africanus ploughed over and sowed the city of Carthage with salt after it was defeated in the Third Punic War (146 BC), although this story is now considered to be entirely apocryphal.
Salt may have been used for barter in connection with the obsidian trade in Anatolia in the Neolithic Era. Salt was included among funeral offerings found in ancient Egyptian tombs from the third millennium BC, as were salted birds and salt fish. From about 2800 BC, the Egyptians began exporting salt fish to the Phoenicians in return for Lebanon cedar, glass, and the dye Tyrian purple; the Phoenicians traded Egyptian salted fish and salt from North Africa throughout their Mediterranean trade empire. Herodotus described salt trading routes across Libya in the 5th century BC. In the early years of the Roman Empire, roads were built to transport salt imported at Ostia to the capital.
In Africa, salt was used as currency south of the Sahara, and slabs of rock salt were used as coins in Abyssinia. The Tuareg have traditionally maintained routes across the Sahara especially for the transportation of salt by Azalai (salt caravans). The caravans still cross the desert from southern Niger to Bilma, although much of the trade now takes place by truck. Each camel takes two bales of fodder and two of trade goods northwards and returns laden with salt pillars and dates. In Gabon, before the arrival of Europeans, the coast people carried on a remunerative trade with those of the interior by the medium of sea salt. This was gradually displaced by the salt that Europeans brought in sacks, so that the coast natives lost their previous profits; as of the late 1950s, sea salt was still the currency best appreciated in the interior.
Salzburg, Hallstatt, and Hallein lie close to one another on the river Salzach in central Austria, in an area with extensive salt deposits. Salzach means "salt river" and Salzburg means "salt castle", both taking their names from the German word Salz, meaning salt. Hallstatt was the site of the world's first salt mine. The town gave its name to the Hallstatt culture, which began mining for salt in the area in about 800 BC. Around 400 BC, the townsfolk, who had previously used pickaxes and shovels, began open pan salt making. During the first millennium BC, Celtic communities grew rich trading salt and salted meat to Ancient Greece and Ancient Rome in exchange for wine and other luxuries.
The word salary comes from the Latin word for salt. The reason for this is unknown; a persistent modern claim that the Roman Legions were sometimes paid in salt is baseless. The word salad literally means "salted", and comes from the ancient Roman practice of salting leaf vegetables.
Wars have been fought over salt. Venice won a war with Genoa over the product, and it played a role in the American Revolution. Cities on overland trade routes grew rich by levying duties, and towns like Liverpool flourished on the export of salt extracted from the salt mines of Cheshire. Various governments have at different times imposed salt taxes on their peoples. The voyages of Christopher Columbus are said to have been financed from salt production in southern Spain, and the oppressive salt tax in France was one of the causes of the French Revolution. After being repealed, this tax was reimposed by Napoleon when he became emperor to pay for his foreign wars, and was not finally abolished until 1946. In 1930, Mahatma Gandhi led a crowd of 100,000 protestors on the "Dandi March" or "Salt Satyagraha", during which they made their own salt from the sea as a demonstration of their opposition to the colonial salt tax. This act of civil disobedience inspired numerous Indians and transformed the Indian independence movement into a national struggle.
Physical properties
Salt is mostly sodium chloride (NaCl). Sea salt and mined salt may contain trace elements, and mined salt is often refined. Salt crystals are translucent and cubic in shape; they normally appear white, but impurities may give them a blue or purple tinge. When dissolved in water, sodium chloride separates into Na+ and Cl− ions, and its solubility is 359 grams per litre. From cold solutions, salt crystallises as the dihydrate NaCl·2H2O. Solutions of sodium chloride have very different properties from those of pure water; the freezing point is −21.12 °C (−6.02 °F) for 23.31 wt% of salt, and the boiling point of a saturated salt solution is around 108.7 °C (227.7 °F).
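As a rough illustration of these figures, the short sketch below converts the quoted solubility into a molar concentration; it is only a back-of-the-envelope check, and the standard atomic masses are the only assumed inputs.

```python
# Rough check: convert the quoted solubility of NaCl into molar terms.
M_NA, M_CL = 22.99, 35.45          # atomic masses, g/mol
M_NACL = M_NA + M_CL               # about 58.44 g/mol
solubility_g_per_l = 359.0         # g/L, as quoted above

molarity = solubility_g_per_l / M_NACL
print(f"Saturated NaCl solution: about {molarity:.1f} mol per litre")  # ~6.1
```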
Edible salt
Salt is essential to the health of humans and other animals, and it is one of the five basic taste sensations. Salt is used in many cuisines and is often found in salt shakers on dining tables for diners' personal use on food. It is also an ingredient in many manufactured foodstuffs. Table salt is a refined salt containing about 97 to 99 percent sodium chloride. Usually, anticaking agents such as sodium aluminosilicate or magnesium carbonate are added to make it free-flowing. Iodized salt, containing potassium iodide, is widely available. Some people put a desiccant, such as a few grains of uncooked rice or a saltine cracker, in their salt shakers to absorb extra moisture and help break up clumps that may otherwise form.
Fortified table salt
Some table salt sold for consumption contains additives that address a variety of health concerns, especially in the developing world. The identities and amounts of additives vary from country to country. Iodine is an important micronutrient for humans, and a deficiency of the element can cause lowered production of thyroxine (hypothyroidism) and enlargement of the thyroid gland (endemic goitre) in adults or cretinism in children. Iodized salt has been used to correct these conditions since 1924 and consists of table salt mixed with a minute amount of potassium iodide, sodium iodide, or sodium iodate. A small amount of dextrose may be added to stabilize the iodine. Iodine deficiency affects about two billion people around the world and is the leading preventable cause of intellectual disabilities. Iodized table salt has significantly reduced disorders of iodine deficiency in countries where it is used.
The amount of iodine and the specific iodine compound added to salt varies. In the United States, the Food and Drug Administration (FDA) recommends 150 micrograms of iodine per day for both men and women. US iodized salt contains 46–77 ppm (parts per million), whereas in the UK the recommended iodine content of iodized salt is 10–22 ppm.
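As a quick arithmetic illustration of these fortification levels, the sketch below converts parts per million into micrograms of iodine supplied by a 5 g daily salt intake; the 5 g figure is taken from the WHO recommendation quoted earlier and is the only assumed input.

```python
# Convert the quoted fortification levels (ppm) into daily iodine intake.
salt_g_per_day = 5.0                  # WHO-recommended maximum salt intake (see above)
for ppm in (46, 77):                  # quoted US iodized-salt range
    iodine_ug = salt_g_per_day * ppm  # 1 ppm = 1 microgram of iodine per gram of salt
    print(f"{ppm} ppm -> about {iodine_ug:.0f} micrograms of iodine per day")
# Compare with the 150 micrograms per day recommendation quoted above.
```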
Sodium ferrocyanide, yellow prussiate of soda, is sometimes added to salt as an anticaking agent. Such anticaking agents have been added since at least 1911 when magnesium carbonate was first added to salt to make it flow more freely. The safety of sodium ferrocyanide as a food additive was found to be provisionally acceptable by the Committee on Toxicity in 1988. Other anticaking agents sometimes used include tricalcium phosphate, calcium or magnesium carbonates, fatty acid salts (acid salts), magnesium oxide, silicon dioxide, calcium silicate, sodium aluminosilicate and calcium aluminosilicate. Both the European Union and the United States Food and Drug Administration permitted the use of aluminium in the latter two compounds.
In "doubly fortified salt", both iodide and iron salts are added. The latter alleviates iron deficiency anaemia, which interferes with the mental development of an estimated 40% of infants in the developing world. A typical iron source is ferrous fumarate. Another additive, especially important for pregnant women, is folic acid (vitamin B9), which gives the table salt a yellow colour. Folic acid helps prevent neural tube defects and anaemia, which affect young mothers, especially in developing countries.
A lack of fluoride in the diet is the cause of a greatly increased incidence of dental caries. Fluoride salts can be added to table salt with the goal of reducing tooth decay, especially in countries that have not benefited from fluoridated toothpastes and fluoridated water. The practice is more common in some European countries where water fluoridation is not carried out. In France, 35% of the table salt sold contains added sodium fluoride.
Other kinds
Unrefined sea salt contains small amounts of magnesium and calcium halides and sulfates, traces of algal products, salt-resistant bacteria and sediment particles. The calcium and magnesium salts confer a faintly bitter overtone, and they make unrefined sea salt hygroscopic (i.e., it gradually absorbs moisture from air if stored uncovered). Algal products contribute a mildly "fishy" or "sea-air" odour, the latter from organobromine compounds. Sediments, the proportion of which varies with the source, give the salt a dull grey appearance. Since taste and aroma compounds are often detectable by humans in minute concentrations, sea salt may have a more complex flavour than pure sodium chloride when sprinkled on top of food. When salt is added during cooking however, these flavours would likely be overwhelmed by those of the food ingredients. The refined salt industry cites scientific studies saying that raw sea and rock salts do not contain enough iodine salts to prevent iodine deficiency diseases.
Salts have diverse mineralities depending on their source, giving each a unique flavour. Fleur de sel, a natural sea salt from the surface of evaporating brine in salt pans, has a distinctive flavour varying with its source. In traditional Korean cuisine, so-called "bamboo salt" is prepared by roasting salt in a bamboo container plugged with mud at both ends. This product absorbs minerals from the bamboo and the mud, and has been claimed to increase the anticlastogenic and antimutagenic properties of doenjang (a fermented bean paste). Kosher or kitchen salt has a larger grain size than table salt and is used in cooking. It can be useful for brining, in bread or pretzel making, and as a scrubbing agent when combined with oil.
Salt in food
Salt is present in most foods, but in naturally occurring foodstuffs such as meats, vegetables and fruit, it is present in very small quantities. It is often added to processed foods (such as canned foods and especially salted foods, pickled foods, and snack foods or other convenience foods), where it functions as both a preservative and a flavouring. Dairy salt is used in the preparation of butter and cheese products. As a flavouring, salt enhances the taste of other foods by suppressing the bitterness of those foods making them more palatable and relatively sweeter.
Before the advent of electrically powered refrigeration, salting was one of the main methods of food preservation. Thus, herring contains 67 mg sodium per 100 g, while kipper, its preserved form, contains 990 mg. Similarly, pork typically contains 63 mg while bacon contains 1,480 mg, and potatoes contain 7 mg but potato crisps 800 mg per 100 g. Salt is used extensively in cooking as a flavouring, and in cooking techniques such as with salt crusts and brining. The main sources of salt in the Western diet, apart from direct use, are bread and cereals, meat, and dairy products.
In many East Asian cultures, salt is not traditionally used as a condiment. In its place, condiments such as soy sauce, fish sauce and oyster sauce tend to have a high sodium content and fill a similar role to table salt in western cultures. They are most often used for cooking rather than as table condiments.
Biology of salt taste
Human salt taste is detected by sodium taste receptors present in taste bud cells on the tongue. Human sensory taste testing studies have shown that proteolyzed forms of epithelial sodium channel (ENaC) function as the human salt taste receptor.
Sodium consumption and health
Table salt is just under 40% sodium by weight, so a 6 g serving (about one teaspoon) contains roughly 2,400 mg of sodium. Sodium serves a vital purpose in the human body: via its role as an electrolyte, it helps nerves and muscles to function correctly, and it is one factor involved in the osmotic regulation of water content in body organs (fluid balance). Most of the sodium in the Western diet comes from salt. The habitual salt intake in many Western countries is about 10 g per day, and intakes are higher still in many countries in Eastern Europe and Asia. The high level of sodium in many processed foods has a major impact on the total amount consumed. In the United States, 75% of the sodium eaten comes from processed and restaurant foods, 11% from cooking and table use, and the rest from what is found naturally in foodstuffs.
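As a check of the arithmetic above, the sketch below derives the sodium mass fraction of NaCl from standard atomic masses and applies it to a 6 g serving; the atomic masses are the only assumed inputs.

```python
# Derive the sodium fraction of table salt and the sodium in a 6 g serving.
M_NA, M_CL = 22.99, 35.45            # atomic masses, g/mol
na_fraction = M_NA / (M_NA + M_CL)   # just under 40% by weight

serving_g = 6.0                      # about one teaspoon
sodium_mg = serving_g * na_fraction * 1000
print(f"Sodium fraction of NaCl: {na_fraction:.1%}")                 # ~39.3%
print(f"Sodium in a {serving_g:g} g serving: ~{sodium_mg:.0f} mg")   # ~2360 mg
```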
Because consuming too much sodium increases risk of cardiovascular diseases, health organizations generally recommend that people reduce their dietary intake of salt. High sodium intake is associated with a greater risk of stroke, total cardiovascular disease and kidney disease. A reduction in sodium intake by 1,000 mg per day may reduce cardiovascular disease by about 30 percent. In adults and children with no acute illness, a decrease in the intake of sodium from the typical high levels reduces blood pressure. A low sodium diet results in a greater improvement in blood pressure in people with hypertension.
The World Health Organization recommends that adults should consume less than 2,000 mg of sodium (which is contained in 5g of salt) per day. Guidelines by the United States recommend that people with hypertension, African Americans, and middle-aged and older adults should limit consumption to no more than 1,500 mg of sodium per day and meet the potassium recommendation of 4,700 mg/day with a healthy diet of fruits and vegetables.
While reduction of sodium intake to less than 2,300 mg per day is recommended by developed countries, one review recommended that sodium intake be reduced to at least 1,200 mg (contained in 3g of salt) per day, as a further reduction in salt intake led to a greater fall in systolic blood pressure for all age groups and ethnicities. Another review indicated that there is inconsistent/insufficient evidence to conclude that reducing sodium intake to lower than 2,300 mg per day is either beneficial or harmful.
Evidence shows a more complicated relationship between salt and cardiovascular disease. "The association between sodium consumption and cardiovascular disease or mortality is U-shaped, with increased risk at both high and low sodium intake." The findings showed that increased mortality from excessive salt intake was primarily associated with individuals with hypertension. The levels of increased mortality among those with restricted salt intake appeared to be similar regardless of blood pressure. This evidence shows that while those with hypertension should primarily focus on reducing sodium to recommended levels, all groups should seek to maintain a healthy level of sodium intake of between 4 and 5 grams (equivalent to 10-13 g salt) a day.
Diets high in sodium are one of the two most prominent dietary risks for disability in the world.
Non-dietary uses
Only a small percentage of the salt manufactured in the world is used in food. The remainder is used in agriculture, water treatment, chemical production, de-icing, and other industrial applications. Sodium chloride is one of the largest-volume inorganic raw materials. It is a feedstock in the production of caustic soda and chlorine, which are used in the manufacture of PVC, paper pulp and many other inorganic and organic compounds. Salt is used as a flux in the production of aluminium: a layer of melted salt floats on top of the molten metal and removes iron and other metal contaminants. It is used in the manufacture of soaps and glycerine, where it is added to precipitate out the saponified products. As an emulsifier, salt is used in the manufacture of synthetic rubber, and another use is in the firing of pottery, when salt added to the kiln vaporises before condensing onto the surface of the ceramic material, forming a strong glaze.
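The chlor-alkali route mentioned above can be summarised by the overall electrolysis reaction; the equation below is the standard textbook form of brine electrolysis and is included only as an illustration.

```latex
% Overall chlor-alkali (brine electrolysis) reaction
\[
2\,\mathrm{NaCl} + 2\,\mathrm{H_2O}
  \;\longrightarrow\;
2\,\mathrm{NaOH} + \mathrm{Cl_2}\uparrow + \mathrm{H_2}\uparrow
\]
```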
When drilling through loose materials such as sand or gravel, salt may be added to the drilling fluid to provide a stable "wall" to prevent the hole collapsing. There are many other processes in which salt is involved. These include its use as a mordant in textile dying, to regenerate resins in water softening, for the tanning of hides, the preservation of meat and fish and the canning of meat and vegetables.
Production
Food-grade salt accounts for only a small part of salt production in industrialized countries (7% in Europe), although worldwide, food uses account for 17.5% of total production.
In 2018, total world production of salt was 300 million tonnes, the top six producers being China (68 million), the United States (42 million), India (29 million), Germany (13 million), Canada (13 million) and Australia (12 million).
The manufacture of salt is one of the oldest chemical industries. A major source of salt is seawater, which has a salinity of approximately 3.5%. This means that there are about 35 grams of dissolved salts, predominantly sodium (Na+) and chloride (Cl−) ions, in each kilogram (2.2 lbs) of water. The world's oceans are a virtually inexhaustible source of salt, and this abundance of supply means that reserves have not been calculated. The evaporation of seawater is the production method of choice in marine countries with high evaporation and low precipitation rates. Salt evaporation ponds are filled from the ocean and salt crystals can be harvested as the water dries up. Sometimes these ponds have vivid colours, as some species of algae and other micro-organisms thrive in conditions of high salinity.
Away from the sea, salt is extracted from the vast sedimentary deposits which have been laid down over the millennia from the evaporation of seas and lakes. These sources are either mined directly, producing rock salt, or are extracted by pumping water into the deposit. In either case, the salt may be purified by mechanical evaporation of brine. Traditionally, purification was achieved in shallow open pans that were heated to accelerate evaporation. Vacuum-based methods are also employed. The raw salt is refined by treatment with chemicals that precipitate most impurities (largely magnesium and calcium salts). Multiple stages of evaporation are then applied. Some salt is produced using the Alberger process, which involves vacuum pan evaporation combined with the seeding of the solution with cubic crystals, and produces a grainy-type flake. The Ayoreo, an indigenous group from the Paraguayan Chaco, obtain their salt from the ash produced by burning the timber of the Indian salt tree (Maytenus vitis-idaea) and other trees.
The world's largest underground salt mine is the Sifto mine in Goderich, Ontario, Canada, which lies largely under Lake Huron at a depth of about 550 metres. About seven million tons of salt are extracted from it annually. The Khewra Salt Mine in Pakistan has nineteen storeys, eleven of which are underground, with an extensive network of passages. The salt is dug out by the room and pillar method, in which about half the material is left in place to support the upper levels. Extraction of Himalayan salt is expected to last 350 years at the present rate of around 385,000 tons per annum. The mine is also a major tourist attraction, receiving around a quarter of a million visitors a year.
In religion
Salt has long held an important place in religion and culture. At the time of Brahmanic sacrifices, in Hittite rituals and during festivals held by Semites and Greeks at the time of the new moon, salt was thrown into a fire where it produced crackling noises. The ancient Egyptians, Greeks and Romans invoked their gods with offerings of salt and water, and some people think this to be the origin of holy water in the Christian faith. In Judaism, it is recommended to use salted bread, or to add salt to the bread if it is unsalted, when performing Kiddush for Shabbat. It is customary to spread some salt over the bread or to dip the bread in a little salt when passing the bread around the table after the Kiddush. To preserve the covenant between their people and God, Jews dip the Sabbath bread in salt. Salt plays a role within different Christian traditions. It is mandatory in the rite of the Tridentine Mass. Salt is used in the third item (which includes an Exorcism) of the Celtic Consecration (cf. Gallican Rite) that is employed in the consecration of a church, and it is permitted to be added to the water "where it is customary" in the Roman Catholic rite of holy water. The Bible makes multiple mentions of salt, both of the mineral itself and as a metaphor. Uses include the tale of how Lot's wife is turned into a pillar of salt when looking back at the cities of Sodom and Gomorrah as they are destroyed. In the New Testament, Jesus refers to his followers as the "salt of the earth".
In Aztec mythology, Huixtocihuatl was a fertility goddess who presided over salt and salt water. Salt is an auspicious substance in Hinduism and is used in ceremonies such as house-warmings and weddings. In Jainism, devotees lay an offering of raw rice with a pinch of salt before a deity to signify their devotion, and salt is sprinkled on a person's cremated remains before the ashes are buried. Salt is believed to ward off evil spirits in Mahayana Buddhist tradition, and when returning home from a funeral, a pinch of salt is thrown over the left shoulder to prevent evil spirits from entering the house. In Shinto, salt is used for ritual purification of locations and people (harae, specifically shubatsu), and small piles of salt are placed in dishes by entrances to ward off evil and to attract patrons.
References
Sources
Food additives
Sodium minerals
Objects believed to protect from evil
Food powders | Salt | [
"Chemistry"
] | 5,642 | [
"Edible salt",
"Salts"
] |
1,605,201 | https://en.wikipedia.org/wiki/Solubility%20pump | In oceanic biogeochemistry, the solubility pump is a physico-chemical process that transports carbon as dissolved inorganic carbon (DIC) from the ocean's surface to its interior.
Overview
The solubility pump is driven by the coincidence of two processes in the ocean:
The solubility of carbon dioxide is a strong inverse function of seawater temperature (i.e. solubility is greater in cooler water)
The thermohaline circulation is driven by the formation of deep water at high latitudes where seawater is usually cooler and denser
Since deep water (that is, seawater in the ocean's interior) is formed under the same surface conditions that promote carbon dioxide solubility, it contains a higher concentration of dissolved inorganic carbon than might be expected from average surface concentrations. Consequently, these two processes act together to pump carbon from the atmosphere into the ocean's interior.
One consequence of this is that when deep water upwells in warmer, equatorial latitudes, it strongly outgasses carbon dioxide to the atmosphere because of the reduced solubility of the gas.
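To make the temperature dependence concrete, the sketch below applies a van 't Hoff-style correction to a Henry's-law solubility constant for CO2. The constants used, 0.034 mol/(L·atm) at 25 °C and a 2400 K temperature coefficient, are nominal literature values assumed here purely for illustration.

```python
import math

# Henry's-law solubility of CO2 with a simple van 't Hoff temperature correction.
KH_298 = 0.034        # mol/(L*atm) at 298.15 K -- assumed nominal literature value
VANT_HOFF_K = 2400.0  # K, temperature coefficient d(ln kH)/d(1/T) -- assumed nominal value

def kh_co2(temp_c: float) -> float:
    """Approximate Henry's-law constant for CO2 at the given temperature (deg C)."""
    t = temp_c + 273.15
    return KH_298 * math.exp(VANT_HOFF_K * (1.0 / t - 1.0 / 298.15))

for t in (0, 10, 25):
    print(f"{t:>2} deg C: ~{kh_co2(t):.3f} mol CO2 per litre per atm")
# Water at 0 deg C holds roughly twice as much dissolved CO2 (per unit partial
# pressure) as water at 25 deg C -- the physical driver of the solubility pump.
```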
The solubility pump has a biological counterpart known as the biological pump. For an overview of both pumps, see Raven & Falkowski (1999).
Carbon dioxide solubility
Carbon dioxide, like other gases, is soluble in water. However, unlike many other gases (oxygen for instance), it reacts with water and forms a balance of several ionic and non-ionic species (collectively known as dissolved inorganic carbon, or DIC). These are dissolved free carbon dioxide (CO2 (aq)), carbonic acid (H2CO3), bicarbonate (HCO3−) and carbonate (CO32−), and they interact with water as follows:

CO2 (aq) + H2O ⇌ H2CO3 ⇌ H+ + HCO3− ⇌ 2 H+ + CO32−
The balance of these carbonate species (which ultimately affects the solubility of carbon dioxide) is dependent on factors such as pH, as shown in a Bjerrum plot. In seawater this is regulated by the charge balance of a number of positive (e.g. Na+, K+, Mg2+, Ca2+) and negative (e.g. CO32− itself, Cl−, SO42−, Br−) ions. Normally, the balance of these species leaves a net positive charge. With respect to the carbonate system, this excess positive charge shifts the balance of carbonate species towards negative ions to compensate. The result is a reduced concentration of the free carbon dioxide and carbonic acid species, which in turn leads to an oceanic uptake of carbon dioxide from the atmosphere to restore balance. Thus, the greater the positive charge imbalance, the greater the solubility of carbon dioxide. In carbonate chemistry terms, this imbalance is referred to as alkalinity.
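A minimal sketch of the pH dependence described above: it computes the equilibrium fractions of CO2(aq)+H2CO3, bicarbonate, and carbonate as a function of pH from the two dissociation constants, reproducing the curves of a Bjerrum plot. The pK values used (roughly 5.9 and 8.9) are assumed, illustrative numbers for warm surface seawater rather than fitted constants.

```python
# Fractions of dissolved inorganic carbon as a function of pH (Bjerrum-plot curves).
# pK1 and pK2 are assumed, illustrative values for warm surface seawater.
PK1, PK2 = 5.9, 8.9
K1, K2 = 10.0 ** -PK1, 10.0 ** -PK2

def dic_fractions(ph: float):
    """Return the fractions of (CO2(aq)+H2CO3, HCO3-, CO3--) at a given pH."""
    h = 10.0 ** -ph
    denom = h * h + K1 * h + K1 * K2
    return h * h / denom, K1 * h / denom, K1 * K2 / denom

for ph in (6.0, 7.0, 8.1, 9.0):
    co2, hco3, co3 = dic_fractions(ph)
    print(f"pH {ph}: CO2 {co2:.2f}  HCO3- {hco3:.2f}  CO3-- {co3:.2f}")
# At a typical surface-seawater pH of about 8.1, bicarbonate dominates and free
# CO2 is only a small fraction of the DIC pool.
```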
In terms of measurement, four basic parameters are of key importance: total inorganic carbon (TIC or CT), total alkalinity (TALK or AT), pH, and pCO2. Measuring any two of these parameters allows a wide range of pH-dependent species (including the above-mentioned species) to be determined. This balance can be changed by a number of processes, for example the air-sea flux of CO2, the dissolution or precipitation of CaCO3, or biological activity such as photosynthesis and respiration. Each of these has different effects on each of the four basic parameters, and together they exert strong influences on global cycles. The net and local charge of the oceans remains neutral during any chemical process.
Anthropogenic changes
The combustion of fossil fuels, land-use changes, and the production of cement have led to a flux of CO2 to the atmosphere. Presently, about one third (approximately 2 gigatons of carbon per year) of anthropogenic emissions of CO2 are believed to be entering the ocean. The solubility pump is the primary mechanism driving this flux, with the consequence that anthropogenic CO2 is reaching the ocean interior via high-latitude sites of deep water formation (particularly the North Atlantic). Ultimately, most of the CO2 emitted by human activities will dissolve in the ocean; however, the rate at which the ocean will take it up in the future is less certain.
In a study of the carbon cycle up to the end of the 21st century, Cox et al. (2000) predicted that the rate of CO2 uptake would begin to saturate at a maximum rate of 5 gigatons of carbon per year by 2100. This was partially due to non-linearities in the seawater carbonate system, but also due to climate change. Ocean warming decreases the solubility of CO2 in seawater, slowing the ocean's response to emissions. Warming also acts to increase ocean stratification, isolating the surface ocean from deeper waters. Additionally, changes in the ocean's thermohaline circulation (specifically slowing) may act to decrease transport of dissolved CO2 into the deep ocean. However, the magnitude of these processes is still uncertain, preventing good long-term estimates of the fate of the solubility pump.
While ocean absorption of anthropogenic CO2 from the atmosphere acts to mitigate climate change, it causes ocean acidification, which is believed to have negative consequences for marine ecosystems.
See also
Alkalinity
Biological pump
Continental shelf pump
Ocean acidification
Thermohaline circulation
Total inorganic carbon
References
Aquatic ecology
Carbon cycle
Chemical oceanography
Geochemistry | Solubility pump | [
"Chemistry",
"Biology"
] | 1,117 | [
"Chemical oceanography",
"Aquatic ecology",
"Ecosystems",
"nan"
] |
1,605,292 | https://en.wikipedia.org/wiki/Data%20Transformation%20Services | Data Transformation Services (DTS) is a Microsoft database tool with a set of objects and utilities to allow the automation of extract, transform and load operations to or from a database. The objects are DTS packages and their components, and the utilities are called DTS tools. DTS was included with earlier versions of Microsoft SQL Server, and was almost always used with SQL Server databases, although it could be used independently with other databases.
DTS allows data to be transformed and loaded from heterogeneous sources using OLE DB, ODBC, or text-only files, into any supported database. DTS can also allow automation of data import or transformation on a scheduled basis, and can perform additional functions such as FTPing files and executing external programs. In addition, DTS provides an alternative method of version control and backup for packages when used in conjunction with a version control system, such as Microsoft Visual SourceSafe.
DTS has been superseded by SQL Server Integration Services in later releases of Microsoft SQL Server though there was some backwards compatibility and ability to run DTS packages in the new SSIS for a time.
History
In SQL Server versions 6.5 and earlier, database administrators (DBAs) used SQL Server Transfer Manager and Bulk Copy Program, included with SQL Server, to transfer data. These tools had significant shortcomings, and many DBAs used third-party tools such as Pervasive Data Integrator to transfer data more flexibly and easily. With the release of SQL Server 7 in 1998, "Data Transformation Services" was packaged with it to replace all these tools. The concept, design, and implementation of the Data Transformation Services was led by Stewart P. MacLeod (SQL Server Development Group Program Manager), Vij Rajarajan (SQL Server Lead Developer), and Ted Hart (SQL Server Lead Developer). The goal was to make it easier to import, export, and transform heterogeneous data and simplify the creation of data warehouses from operational data sources.
SQL Server 2000 expanded DTS functionality in several ways. It introduced new types of tasks, including the ability to FTP files, move databases or database components, and add messages into Microsoft Message Queue. DTS packages can be saved as a Visual Basic file in SQL Server 2000, and this can be expanded to save into any COM-compliant language. Microsoft also integrated packages into Windows 2000 security and made DTS tools more user-friendly; tasks can accept input and output parameters.
DTS comes with all editions of SQL Server 7 and 2000, but was superseded by SQL Server Integration Services in the Microsoft SQL Server 2005 release in 2005.
DTS packages
The DTS package is the fundamental logical component of DTS; every DTS object is a child component of the package. Packages are used whenever one modifies data using DTS. All the metadata about the data transformation is contained within the package. Packages can be saved directly in a SQL Server, or can be saved in the Microsoft Repository or in COM files. SQL Server 2000 also allows a programmer to save packages in a Visual Basic or other language file (when stored to a VB file, the package is actually scripted—that is, a VB script is executed to dynamically create the package objects and its component objects).
A package can contain any number of connection objects, but does not have to contain any. These allow the package to read data from any OLE DB-compliant data source, and can be expanded to handle other sorts of data. The functionality of a package is organized into tasks and steps.
A DTS Task is a discrete set of functionalities executed as a single step in a DTS package. Each task defines a work item to be performed as part of the data movement and data transformation process or as a job to be executed.
Data Transformation Services supplies a number of tasks that are part of the DTS object model and that can be accessed graphically through the DTS Designer or accessed programmatically. These tasks, which can be configured individually, cover a wide variety of data copying, data transformation and notification situations. For example, the following types of tasks represent some actions that you can perform by using DTS: executing a single SQL statement, sending an email, and transferring a file with FTP.
A step within a DTS package defines the order in which tasks run and the precedence constraints that determine what to do in the case of success or failure. Steps can be executed sequentially or in parallel.
Packages can also contain global variables which can be used throughout the package. SQL Server 2000 allows input and output parameters for tasks, greatly expanding the usefulness of global variables. DTS packages can be edited, password protected, scheduled for execution, and retrieved by version.
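As a rough illustration of driving the package object model from script, the sketch below uses Python with pywin32 to load a package stored in SQL Server and execute it. The server and package names are placeholders, and the LoadFromSQLServer argument order and the trusted-connection flag value (256) follow the documented DTS object model but should be verified against the SQL Server 2000 documentation; this is an assumption-laden sketch rather than a vetted implementation.

```python
# Sketch: execute a DTS package stored in SQL Server via the COM object model.
# Requires pywin32 and the SQL Server 2000 DTS runtime on the client machine.
import win32com.client

DTSSQLStgFlag_UseTrustedConnection = 256  # assumed value from the DTS type library

pkg = win32com.client.Dispatch("DTS.Package")
pkg.LoadFromSQLServer(
    "MYSERVER",          # server name (placeholder)
    "", "",              # user name / password (unused with a trusted connection)
    DTSSQLStgFlag_UseTrustedConnection,
    "", "", "",          # package password, GUID and version GUID (defaults)
    "NightlyLoad",       # package name (placeholder)
)
try:
    pkg.Execute()
    # Inspect per-step outcomes (DTS collections are 1-indexed).
    for i in range(1, pkg.Steps.Count + 1):
        step = pkg.Steps.Item(i)
        print(step.Name, step.ExecutionResult)
finally:
    pkg.UnInitialize()
```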
DTS tools
DTS tools packaged with SQL Server include the DTS wizards, DTS Designer, and DTS Programming Interfaces.
DTS wizards
The DTS wizards can be used to perform simple or common DTS tasks. These include the Import/Export Wizard and the Copy of Database Wizard. They provide the simplest method of copying data between OLE DB data sources. There is a great deal of functionality that is not available by merely using a wizard. However, a package created with a wizard can be saved and later altered with one of the other DTS tools.
A Create Publishing Wizard is also available to schedule packages to run at certain times. This only works if SQL Server Agent is running; otherwise the package will be scheduled, but will not be executed.
DTS Designer
The DTS Designer is a graphical tool used to build complex DTS Packages with workflows and event-driven logic. DTS Designer can also be used to edit and customize DTS Packages created with the DTS wizard.
Each connection and task in DTS Designer is shown with a specific icon. These icons are joined with precedence constraints, which specify the order and requirements for tasks to be run. One task may run, for instance, only if another task succeeds (or fails). Other tasks may run concurrently.
The DTS Designer has been criticized for having unusual quirks and limitations, such as the inability to visually copy and paste multiple tasks at one time. Many of these shortcomings have been overcome in SQL Server Integration Services, DTS's successor.
DTS Query Designer
A graphical tool used to build queries in DTS.
DTS Run Utility
DTS Packages can be run from the command line using the DTSRUN Utility.
The utility is invoked using the following syntax:
dtsrun
[
  [/S server_name[\instance_name]
    { {/[~]U user_name [/[~]P password]} | /E }
  ]
  {
      {/[~]N package_name}
    | {/[~]G package_guid_string}
    | {/[~]V package_version_guid_string}
  }
  [/[~]M package_password]
  [/[~]F filename]
  [/[~]R repository_database_name]
  [/A global_variable_name:typeid=value]
  [/L log_file_name]
  [/W NT_event_log_completion_status]
  [/Z] [/!X] [/!D] [/!Y] [/!C]
]
When passing in parameters that are mapped to global variables, you are required to include the typeid; this detail is rather difficult to find on the Microsoft site. A sample invocation illustrating the typeid syntax is shown below.
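For illustration, a hypothetical invocation passing a string global variable might look like the following line. The server name, package name, variable name and value are placeholders, and the typeid of 8 (the OLE Automation code for a string) is an assumption that should be checked against the DTS documentation for the variable's actual data type.

dtsrun /S MYSERVER /E /N NightlyLoad /A CustomerName:8=Contoso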
See also
OLAP
Data warehouse
Data mining
SQL Server Integration Services
Meta Data Services
References
External links
Microsoft SQL Server: Data Transformation Services (DTS)
SQL DTS unique DTS information resource
Understanding Microsoft Repository
DTS Videos & Training
Data Strategy
Microsoft database software
Data management
Extract, transform, load tools
Microsoft server technology | Data Transformation Services | [
"Technology"
] | 1,595 | [
"Data management",
"Data"
] |
1,605,325 | https://en.wikipedia.org/wiki/C.%20N.%20R.%20Rao | Chintamani Nagesa Ramachandra Rao, (born 30 June 1934), is an Indian chemist who has worked mainly in solid-state and structural chemistry. He has honorary doctorates from 86 universities from around the world and has authored around 1,800 research publications and 58 books. He is described as a scientist who had won all possible awards in his field except the Nobel Prize.
Rao completed BSc from Mysore University at age seventeen, and MSc from Banaras Hindu University at age nineteen. He earned a PhD from Purdue University at the age of twenty-four. He was the youngest lecturer when he joined the Indian Institute of Science in 1959. After a transfer to Indian Institute of Technology Kanpur, he returned to IISc, eventually becoming its director from 1984 to 1994. He was chair of the Scientific Advisory Council to the Prime Minister of India from 1985 to 1989 and from 2005 to 2014. He founded and works in Jawaharlal Nehru Centre for Advanced Scientific Research and International Centre for Materials Science.
Rao received scientific awards and honours including the Marlow Medal, Shanti Swarup Bhatnagar Prize for Science and Technology, Hughes Medal, India Science Award, Dan David Prize, Royal Medal, Von Hippel Award, and ENI award. He also received Padma Shri and Padma Vibhushan from the Government of India. On 16 November 2013, the Government of India selected him for Bharat Ratna, the highest civilian award in India, making him the third scientist after C.V. Raman and A. P. J. Abdul Kalam to receive the award. He received the award on 4 February 2014 from President Pranab Mukherjee at the Rashtrapati Bhavan.
Early life and education
C.N.R. Rao was born into a Kannada Deshastha Brahmin family in Bangalore to Hanumantha Nagesa Rao and Nagamma Nagesa Rao. His father was an Inspector of Schools. He was an only child, and his learned parents created an academic environment at home. From an early age he was well versed in Hindu literature, through his mother, and in English, through his father. He did not attend elementary school but was tutored at home by his mother, who was particularly skilled in arithmetic and Hindu literature. He entered middle school in 1940, at age six. Although he was the youngest in his class, he used to tutor his classmates in mathematics and English. He passed the lower secondary examination (class VII) in the first class in 1944. He was ten years old, and his father rewarded him with four annas (twenty-five paisa). He attended Acharya Patashala high school in Basavanagudi, which had a lasting influence on his interest in chemistry. His father enrolled him in a Kannada-medium course to encourage his mother tongue, but used English for all conversation at home. He completed his secondary school leaving certificate in the first class in 1947. He studied for his BSc at Central College, Bangalore, where he developed communication skills in English and also learned Sanskrit.
He obtained his bachelor's degree from Mysore University in 1951, in first class, at the age of seventeen. He initially thought of joining Indian Institute of Science (IISc) for a diploma or a postgraduate degree in chemical engineering, but a teacher persuaded him to attend Banaras Hindu University. He obtained a master's in chemistry from BHU two years later.
In 1953, he was granted a scholarship for a PhD at the Indian Institute of Technology Kharagpur, but four foreign universities, MIT, Penn State, Columbia and Purdue, also offered him financial support. He chose Purdue. His first research paper was published in the Agra University Journal of Research in 1954. He completed his PhD in 1958, after only two years and nine months.
Career
After completing his graduate studies, Rao returned to Bangalore in 1959 to take up a lecturing position, joining IISc and embarking on an independent research programme. The facilities at the time were so meagre that he described them by saying, "You would get string and sealing wax and that's about it." In 1963 he accepted a permanent position in the Department of Chemistry at the Indian Institute of Technology Kanpur. He was elected Fellow of the Indian Academy of Sciences in 1964. He returned to IISc in 1976 to establish a solid state and structural chemistry unit, and served as director of the IISc from 1984 to 1994. At various points in his career Rao has taken appointments as a visiting professor at Purdue University, the University of Oxford, the University of Cambridge and the University of California, Santa Barbara. He was the Jawaharlal Nehru Professor at the University of Cambridge and a Professorial Fellow at King's College, Cambridge, during 1983–1984.
Rao has been working as the National Research Professor holding the positions Linus Pauling Research Professor and Honorary President of Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore, which he founded in 1989. He had served as chair of the Scientific Advisory Council to the Indian Prime Minister for two terms, from 1985 to 1989 and from 2005 to 2014. He is also the director of the International Centre for Materials Science (ICMS), which he founded in 2010, and serves on the board of the Science Initiative Group.
Scientific contribution
Rao is one of the world's foremost solid state and materials chemists. He has contributed to the development of the field over five decades. His work on transition metal oxides has led to basic understanding of novel phenomena and the relationship between materials properties and the structural chemistry of these materials.
Rao was one of the earliest to synthesise two-dimensional oxide materials such as La2CuO4. He was one of the first to synthesise 123 cuprates, the first liquid-nitrogen-temperature superconductors, in 1987. He was also the first to synthesise Y-junction carbon nanotubes, in the mid-1990s. His work has led to a systematic study of compositionally controlled metal-insulator transitions. Such studies have had a profound impact in application fields such as colossal magnetoresistance and high-temperature superconductivity. Oxide semiconductors also hold unusual promise. He has made immense contributions to nanomaterials over the last two decades, besides his work on hybrid materials.
He shares co-authorship of more than 1800 research papers and has co-authored or edited more than 58 books.
Awards and recognition
Fellowships and memberships of academic societies
Fellow of the Indian Academy of Sciences (FASc, 1965)
Fellow of the Indian National Science Academy (FNA, 1974)
Fellow of the Royal Society (FRS, 1982)
Founding Fellow of The World Academy of Sciences (FTWAS, 1983)
Honorary Fellow of the Royal Society of Chemistry (Hon. FRSC, 1989)
Foreign Member of the Academia Europaea (MAE, 1997)
Honorary Fellow of the Institute of Physics (Hon.FInstP, 2007)
Member of many of the world's scientific associations, including the National Academy of Sciences, American Academy of Arts and Sciences, Royal Society of Canada, French Academy, Japanese Academy, Serbian Academy of Sciences and Arts and Polish Academy of Sciences, Czechoslovak Academy of Sciences, Serbian Academy of Sciences, Slovenian Academy of Sciences, Brazilian Academy of Sciences, Spanish Royal Academy of Sciences, National Academy of Sciences of Korea, African Academy of Sciences, and the American Philosophical Society. He is also a member of the Pontifical Academy.
Honorary doctorates
International:
Professor C.N.R. Rao has received numerous honorary degrees from universities worldwide in recognition of his contributions to science. In Africa, he was awarded a D.Sc. Honoris Causa by Stellenbosch University, South Africa, in 2007. Australia honored him with a D.Sc. Honoris Causa from the Australian National University in Canberra in 2015. In the United Kingdom, he received honorary degrees from the University of Wales (Cardiff), Liverpool, Oxford (2007), and St. Andrews University (2013).
In France, he was awarded Honoris Causa doctorates by the University of Bordeaux in 1983, the University of Caen in 2000, and Université Joseph Fourier in Grenoble in 2011. Other recognitions include a D.Sc. Honoris Causa from Wroclaw University in Poland (1989) and from Novosibirsk University and the Russian Academy of Sciences (Siberian Branch) in Russia (1999). Additionally, Sweden awarded him an Honoris Causa Doctorate from Uppsala University in 2000, and Sudan awarded a D.Sc. Honoris Causa from the University of Khartoum in 2002.
In the United States, he has received honorary doctorates from several universities, including Colorado, Northwestern, Notre Dame, Purdue, Temple, and others.
National
In India, Professor Rao’s contributions have been acknowledged by a wide range of institutions. He has received honorary doctorates from major universities, including Aligarh University, Banaras Hindu University, Bangalore University, Calcutta University, Delhi University, Hyderabad University, Jawaharlal Nehru Technological University, Kanpur University, Mangalore University, Panjab University, and Roorkee University. Additionally, he has been honored by Visvesvaraya Technological University, Indian Institutes of Technology (IITs) at Bombay, Kharagpur, Kanpur, New Delhi, and Guwahati, and the Indian Institutes of Science Education and Research (IISERs) in Bhopal, Kolkata, Mohali, and Pune. Notable recognitions also include an LL.D. (Honoris Causa) from Sri Venkateswara University, a D.Litt. from Guwahati University, the “Desikottama” award from Visva-Bharati University and The Assam Royal Global University, Guwahati 2022.
Major scientific awards
1967: Marlow Medal by the Faraday Society of England
1968: Shanti Swarup Bhatnagar Prize for Science and Technology in Chemical Science
2000: Centenary Medal of the Royal Society of Chemistry, London
2000: Hughes Medal by the Royal Society
2004: India Science Award
2005: Dan David Prize from Tel Aviv University shared with George Whitesides and Robert Langer.
2008: Abdus Salam Medal by The World Academy of Sciences (TWAS)
2009: Royal Medal by the Royal Society
2010: August-Wilhelm-von-Hofmann Medal by the German Chemical Society
2017: The Von Hippel Award by the Materials Research Society
2021: International ENI award 2020 for research in renewable energy sources and energy storage, also called the Energy Frontier award
Scientific awards
1961: DSc from Mysore University.
1973: Yedanapalli Medal and Prize
1975: C. V. Raman Award in Physical Science by the University Grants Commission of India
1980: S. N. Bose Medal by the Indian National Science Academy
1981: Royal Society of Chemistry (London) Medal
1981: Founding member of the World Cultural Council
1989: Heyrovsky Gold Medal of the Czechoslovak Academy of Sciences
1990: Meghnad Saha Medal of the Indian National Science Academy
1996: Einstein Gold Medal of UNESCO
2004: Doctor of Science from University of Calcutta.
2004: Somiya Award of the International Union of Materials Research.
2008: Nikkei Asia Prize for Science, Technology and Innovation, by Nihon Keizai Shimbun, Inc., Japan.
2008: Khwarizmi International Award 2008 for Innovation along with Ajayan Vinu
2011: Ernesto Illy Trieste Science Prize for materials research
2013: 2012 Award for International Scientific Cooperation from the Chinese Academy of Sciences
2013: Elected honorary foreign member of Chinese Academy of Sciences
2013: Distinguished Academician Award from IIT Patna
2018: Platinum Medal from Indian Association of Nanoscience and Nanotechnology
2019: The first Sheikh Saud International Prize for Materials Research from the Center for Advanced Materials of the United Arab Emirates
Foreign fellow of Bangladesh Academy of Sciences
Indian governmental honours
1974 – Padma Shri, India's fourth-highest civilian award.
Padma Vibhushan in 1985
Karnataka Ratna by the Karnataka State Government in 2000
Bharat Ratna in 2014
Foreign honours
: Great Cross of the National Order of Scientific Merit (2002)
: Chevalier de la Légion d'honneur (2005)
: Gold and Silver Star of the Order of the Rising Sun (2015)
: Order of Friendship (2009)
Legacy
Rao with his wife established the CNR Rao Education Foundation using the Dan David Prize money. The foundation is based in Jawaharlal Nehru Centre for Advanced Scientific Research and offers Best Science Teacher Award to pre-university and high school science teachers.
Rao established the International Centre for Materials Science (ICMS) which offers the C N R Rao Prize Lecture in Advanced Materials since 2010.
The World Academy of Sciences instituted the TWAS-C.N.R. Rao Award for Scientific Research since 2006 for scientists in the least developed countries.
The Shanmugha Arts, Science, Technology & Research Academy has created the SASTRA-CNR Rao Award for Chemistry and Material Science in 2014.
Personal life
Rao married Indumati Rao in 1960. They have two children, Sanjay and Suchitra. Sanjay works as a science populariser in schools around Bangalore. Suchitra is married to Krishna N. Ganesh, the director of the Indian Institute of Science Education and Research (IISER) at Pune, Maharashtra. Rao is technophobic and never checks his email himself. He has also said that he uses his mobile phone only to talk to his wife.
Controversies
In 1987, Rao and his team published a series of four papers, of which three were in the Proceedings of the Indian Academy of Sciences (Chemical Science), Pramana, and Current Science, all published by the Indian Academy of Sciences. A report was submitted to the Society for Scientific Values that the three papers had no mention of the dates of receipt, which were normally explicitly mentioned in those journals. Upon inquiry, it was found that the paper manuscripts were actually received after the date of publication, indicating that they were backdated. The society declared the case as "Use of Wrong Means to Claim Priority."
Rao has been the subject of plagiarism allegations. Rao and Saluru Baba Krupanidhi at the Indian Institute of Science in Bangalore, with their students Basant Chitara and L. S. Panchakarla, published a paper, "Infrared photodetectors based on reduced graphene oxide and graphene nanoribbons", in the journal Advanced Materials in 2011. After publication, the journal editors found sentences in the introduction and methodology copied verbatim from a paper published in Applied Physics Letters in 2010. According to a Nature report, it was Basant Chitara, a PhD student at IISc, who wrote the text. An apology was issued by the authors later in the same journal. Rao said that he did read the manuscript and that it was an oversight on his part, as he focused mainly on the results and discussion.
Scientists such as Rahul Siddharthan (Institute of Mathematical Sciences, Chennai), Y.B. Srinivas (Institute of Wood Science and Technology), and D.P. Sengupta (former professor at IISc) agreed that the plagiarised portion has no bearing on the findings, yet Siddharthan opined that the reactions of Rao and Krupanidhi went overboard. Rao and Krupanidhi publicly blamed Chitara and denied that the publication amounted to plagiarism. Rao commented, "This should not be really considered as plagiarism, but an instance of copying of a few sentences in the text." He even extended the blame to Krupanidhi, asserting that he himself had no role in it as it was written by Krupanidhi without his knowledge. His claims were undermined by the fact that he was the senior scientist and corresponding author of the publication.
More instances of alleged plagiarism in articles co-authored by Rao have been reported. Written with S. Venkataprasad Bhat and Krupanidhi, Rao's 2010 paper on the effect of nanoparticles on solar cells in Applied Physics Express contains text that is very similar to that of a 2008 paper by Matheu et al. in Applied Physics Letters, which it did not even cite. Rao had stated, referring to the 2011 incident, that "[If] I have ever stolen an idea or a result (in) my entire life, (then) hang me." Yet Rao's article reports a similar study to, and duplicates figures from, that of Matheu et al. An article in the Journal of Luminescence in 2011, written with Chitara, Nidhi Lal and Krupanidhi, contains 20 unattributed lines which appear to be copied from articles by Itskos et al. in Nanotechnology (June 2009 issue) and Heliotis et al. in Advanced Materials (January 2006 issue). Another article in Nanotechnology, also written with Chitara and Krupanidhi, uses six lines from a 1995 article by Huang et al. in Applied Physics Letters.
Rao was given the Bharat Ratna by the Government of India in spite of the controversy and remained active as a professor at the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR). In December 2013, brother and sister Tanaya Thakur, a law student, and Aditya Thakur, a class XII student, filed a public interest litigation in the Allahabad High Court, Lucknow Bench, challenging Rao's Bharat Ratna. They asserted that "a scientist with proven cases of plagiarism shall not be presented the highest civilian award." The court dismissed them as "filing pleas for publicity." There was another plea to revoke the award in 2015, but the Central Information Commission dismissed the petition.
On 17 November 2013, at a press conference following the announcement of his Bharat Ratna, he called the Indian politicians "idiots" which caused a national outrage. He said, "Why the hell have these idiots [politicians] given so little to us despite what we have done? For the money that the government has given us we [scientists] have done much more." In his defence Rao insisted that he merely talked about the "idiotic" way the politicians ignore investments for research funding in science.
References
Further reading
C.N.R. Rao (2010). Climbing the Limitless Ladder: A Life in Chemistry. World Scientific Publishing Co. Pte. Ltd., Singapore.
External links
Academic profile at the Pontifical Academy of Sciences
Dan David Prize laureate 2005
Prof. CNR Rao @ JNCASR
Jawaharlal Nehru Centre for Advanced Scientific Research
C.N.R. Rao Hall of Science
Solid State and Structural Chemistry Unit
1934 births
Living people
Recipients of the Karnataka Ratna
Banaras Hindu University alumni
Foreign associates of the National Academy of Sciences
Members of the Serbian Academy of Sciences and Arts
Foreign members of the USSR Academy of Sciences
Foreign members of the Russian Academy of Sciences
Fellows of the Royal Society
Fellows of the Royal Society of Canada
Fellows of Bangladesh Academy of Sciences
Fellows of the Indian National Science Academy
Founding members of the World Cultural Council
Members of the Pontifical Academy of Sciences
Foreign members of the Chinese Academy of Sciences
Scientists from Bengaluru
Indian institute directors
Kannada people
Purdue University alumni
Academic staff of the Indian Institute of Science
University of Mysore alumni
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
Knights of the Legion of Honour
Recipients of the Bharat Ratna
Recipients of the Padma Vibhushan in science & engineering
Recipients of the Padma Shri in science & engineering
Academic staff of IIT Kanpur
20th-century Indian chemists
Fellows of the Australian Academy of Science
Winners of the Nikkei Asia Prize
Members of the American Philosophical Society
Solid state chemists
Associate fellows of the African Academy of Sciences
Foreign members of the Serbian Academy of Sciences and Arts | C. N. R. Rao | [
"Chemistry"
] | 4,018 | [
"Solid state chemists"
] |
1,605,389 | https://en.wikipedia.org/wiki/Infinity%20symbol | The infinity symbol () is a mathematical symbol representing the concept of infinity. This symbol is also called a lemniscate, after the lemniscate curves of a similar shape studied in algebraic geometry, or "lazy eight", in the terminology of livestock branding.
This symbol was first used mathematically by John Wallis in the 17th century, although it has a longer history of other uses. In mathematics, it often refers to infinite processes (potential infinity) rather than infinite values (actual infinity). It has other related technical meanings, such as the use of long-lasting paper in bookbinding, and has been used for its symbolic value of the infinite in modern mysticism and literature. It is a common element of graphic design, for instance in corporate logos as well as in older designs such as the Métis flag.
The infinity symbol and several variations of the symbol are available in various character encodings.
History
The lemniscate has been a common decorative motif since ancient times; for instance, it is commonly seen on Viking Age combs.
The English mathematician John Wallis is credited with introducing the infinity symbol with its mathematical meaning in 1655, in his De sectionibus conicis. Wallis did not explain his choice of this symbol. It has been conjectured to be a variant form of a Roman numeral, but which Roman numeral is unclear. One theory proposes that the infinity symbol was based on the numeral for 100 million, which resembled the same symbol enclosed within a rectangular frame. Another proposes instead that it was based on the notation CIↃ used to represent 1,000. Instead of a Roman numeral, it may alternatively be derived from a variant of the lower-case form of omega, the last letter in the Greek alphabet.
Perhaps in some cases because of typographic limitations, other symbols resembling the infinity sign have been used for the same meaning. One paper by Leonhard Euler was typeset with an open letterform more closely resembling a reflected and sideways S than a lemniscate, and such variant forms have even been used as stand-ins for the infinity symbol itself.
Usage
Mathematics
In mathematics, the infinity symbol is typically used to represent a potential infinity. For instance, in mathematical expressions with summations and limits such as

∑_{i=0}^{∞} 1/2^i = 1 + 1/2 + 1/4 + ⋯ = 2,

the infinity sign is conventionally interpreted as meaning that the variable grows arbitrarily large towards infinity, rather than actually taking an infinite value, although other interpretations are possible.
When quantifying actual infinity, infinite entities taken as objects per se, other notations are typically used. For example, ℵ₀ (aleph-nought) denotes the smallest infinite cardinal number (representing the size of the set of natural numbers), and ω (omega) denotes the smallest infinite ordinal number.
The infinity symbol may also be used to represent a point at infinity, especially when there is only one such point under consideration. This usage includes, in particular, the infinite point of a projective line, and the point added to a topological space to form its one-point compactification.
Other technical uses
In areas other than mathematics, the infinity symbol may take on other related meanings. For instance, it has been used in bookbinding to indicate that a book is printed on acid-free paper and will therefore be long-lasting. On cameras and their lenses, the infinity symbol indicates that the lens is focused at an infinite distance, and is "probably one of the oldest symbols to be used on cameras".
Symbolism and literary uses
In modern mysticism, the infinity symbol has become identified with a variation of the ouroboros, an ancient image of a snake eating its own tail that has also come to symbolize the infinite, and the ouroboros is sometimes drawn in figure-eight form to reflect this identification—rather than in its more traditional circular form.
In the works of Vladimir Nabokov, including The Gift and Pale Fire, the figure-eight shape is used symbolically to refer to the Möbius strip and the infinite, as is the case in these books' descriptions of the shapes of bicycle tire tracks and of the outlines of half-remembered people. Nabokov's poem after which he entitled Pale Fire explicitly refers to "the miracle of the lemniscate". Other authors whose works use this shape with its symbolic meaning of the infinite include James Joyce, in Ulysses, and David Foster Wallace, in Infinite Jest.
Graphic design
The well-known shape and meaning of the infinity symbol have made it a common typographic element of graphic design. For instance, the Métis flag, used by the Canadian Métis people since the early 19th century, is based around this symbol. Different theories have been put forward for the meaning of the symbol on this flag, including the hope for an infinite future for Métis culture and its mix of European and First Nations traditions, but also evoking the geometric shapes of Métis dances, Celtic knots, or Plains First Nations Sign Language.
A rainbow-coloured infinity symbol is also used by the autism rights movement, as a way to symbolize the infinite variation of the people in the movement and of human cognition. The Bakelite company took up this symbol in its corporate logo to refer to the wide range of varied applications of the synthetic material they produced. Versions of this symbol have been used in other trademarks, corporate logos, and emblems including those of Fujitsu, Cell Press, and the 2022 FIFA World Cup.
Encoding
The symbol is encoded in Unicode at U+221E (∞) and in LaTeX as \infty. An encircled version is encoded for use as a symbol for acid-free paper.
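As a small illustrative snippet (not part of the original article), the code point can be confirmed programmatically in Python:

```python
# The infinity symbol is Unicode code point U+221E ("INFINITY").
symbol = chr(0x221E)
print(symbol)                   # ∞
print(f"U+{ord(symbol):04X}")   # U+221E
print(float("inf"))             # IEEE 754 infinity prints as "inf", not the symbol
```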
The Unicode set of symbols also includes, in the block Miscellaneous Mathematical Symbols-B, several variant forms of the infinity symbol that are less frequently available in fonts.
See also
Aleph number
History of mathematical notation
Lazy Eight (disambiguation)
References
Mathematical symbols
Infinity | Infinity symbol | [
"Mathematics"
] | 1,193 | [
"Symbols",
"Mathematical objects",
"Mathematical symbols",
"Infinity"
] |
1,605,404 | https://en.wikipedia.org/wiki/Longhua%20Temple | The Longhua Temple (; Shanghainese: Lon-ngu-zy; alternatively Lunghwa Temple; literally "Dragon Flower Temple") is a Buddhist temple dedicated to the Maitreya Buddha in Shanghai. Although most of the present day buildings date from later reconstructions, the temple preserves the architectural design of a Song dynasty (960–1279) monastery of the Chan School. It is the largest ancient temple complex in the city of Shanghai.
History
The temple was first built in 242 AD, during the Three Kingdoms period (220–280). According to a legend, Sun Quan, King of the Kingdom of Wu (222–280), had obtained Sharira relics, which are cremated remains of the Buddha. To house these precious relics, the king ordered the construction of 13 pagodas. Longhua Pagoda (), part of the Longhua temple complex, is said to have been one of them. Like the function of the pagoda, the name of the temple also has its origin in a local legend according to which a dragon once appeared on the site.
The temple was destroyed by war towards the end of the Tang dynasty (618–907) and rebuilt in 977 AD, under the autonomous Kingdom of Wuyue during the Northern Song dynasty period (960–1127). (According to another version of the story, as contained in Song (960–1279) and Yuan dynasty (1271–1368) local histories, the temple was first built by the King of Wuyue.) Later in the Song dynasty, in 1064, it was renamed "Kongxiang Temple" (), but the original name "Longhua Temple" was restored in the Ming dynasty (1368–1644) during the reign of the Wanli Emperor (1573–1620).
The present architectural design follows the Song dynasty (960–1279) original. However, whereas the core of the present Longhua Pagoda survives from that period, most buildings in the temple proper were rebuilt during the reigns of the Tongzhi Emperor (1862–1874) and the Guangxu Emperor (1875–1908) in the Qing dynasty (1644–1911). A modern restoration of the entire temple complex was carried out in 1954.
The temple and monastery were originally surrounded by extensive gardens and orchards. Viewing of the peach blossom in the Longhua gardens was an annual attraction for people in surrounding cities.
The temple grounds have been used as a site for internment as well as for executions. Public executions were held on the site in the 20th century. In 1927, the Kuomintang (國民黨) carried out a purge of suspected communists in Shanghai. Thousands of victims of this purge were brought to the temple grounds to be executed. They are commemorated today by the Longhua Martyrs Cemetery behind the temple. During the Second Sino-Japanese War, the Japanese operated their largest civilian internment camp in the area, where American, British, as well as nationals of other allied countries were held under poor conditions.
The temple's extensive gardens have since been almost entirely absorbed into the neighboring Longhua Martyrs' Cemetery and have been extensively reconstructed in a contemporary monumental style. A small traditional garden remains immediately adjacent to the temple buildings.
Architectural design and artwork
The Longhua Temple occupies an area of more than and the main axis of the compound is long. The tallest structure is the Longhua Pagoda which stands high.
The layout of the temple is that of a Song dynasty monastery of the Buddhist Chan sect, known as the Sangharama Five-Hall Style. Five main halls are arranged along a central north–south pointing axis. From the entrance or Shanmen, the buildings are:
Maitreya Hall
The Maitreya Hall housing a statue of Maitreya buddha and another in his manifestation as "Budai", or Cloth bag monk.
Four Heavenly Kings Hall
The Four Heavenly Kings Hall housing statues of the Four Heavenly Kings.
Mahavira Hall
The Mahavira Hall is the main hall, housing statues of the historical Buddha (Shakyamuni) and two disciples. At the back of the hall is a bas-relief carving, including a depiction of Guanyin, or the Bodhisattva Avalokiteśvara in his female manifestation. Around the front portion are arranged the twenty Guardians of Buddhist Law, and around the back the sixteen principal arhats. The hall also features an ancient bell cast in 1586, during the Wanli era of the Ming dynasty.
Three Sages Hall
The Three Sages Hall () houses statues of the Amitabha buddha, and the Bodhisattvas Avalokiteśvara (male form) and Mahāsthāmaprāpta.
Abbot's Hall
The Abbot's Hall () is a place for lectures and formal meetings.
Bell tower and Drum tower
A Bell Tower and a Drum Tower are arranged off the central axis. The Bell Tower houses a copper bell cast in 1382, the bell is tall, has a maximum diameter of , and weighs . The bell is used in the Evening Bell-Striking Ceremony conducted on New Year's Eve. Also situated off the main axis is a shrine to Ksitigarbha (Dizang the King Bodhisattva).
Buddhist Texts Library
The Buddhist Texts Library houses various versions of sutras and other Buddhist works, as well as ceremonial instruments, antiques, and artifacts.
Artworks in the temple include statues of the Maitreya Buddha in his Bodhisattva form and in his Cloth Bag Monk incarnation, statues of the Eighteen Arhats and 20 Guardians of Buddhist Law, as well as statues of the 500 arhats.
Longhua Pagoda
The Longhua Pagoda is the best known of the 16 historic pagodas that still stand within the Shanghai municipality. It has an octagonal floor layout. The size of the seven stories decreases from bottom to top. The pagoda consists of a hollow, tube-like brick core surrounded by a wooden staircase. On the outside, it is decorated with balconies, banisters, and upturned eaves. These outer decorations have been reconstructed in keeping with the original style.
Although previous pagodas existed on the same site, the current brick base and body of the pagoda were built in 977 under the Wuyue kingdom (907–978), with continuous renovations of its more fragile wooden components on the exterior. Because of its age, the pagoda is fragile and is not open to the public.
Temple fair
The Longhua Temple Fair has been held since the Ming dynasty annually on the third day of the third month of the Lunar Calendar, when - according to local legend - the dragons visit the temple to help grant the people's wishes. It coincides with the blossoming of the peach trees in Longhua Park. Since its inception, the fair has been an annual event interrupted only by the Cultural Revolution and the SARS outbreak.
Location
The Longhua Temple is located in the Longhua area (formerly Longhua township) of Shanghai, which is named after the temple. Its street address is No. 2853 Longhua Road (Longhua Lu). It is open to the public for a fee (10 RMB), which includes incense. The temple can be reached by subway on Line 12, followed by a short walk or bike ride.
In popular culture
J. G. Ballard in his World War II-era autobiographical novel Empire of the Sun describes the Japanese military use of the Longhua pagoda as a flak cannon tower. In Steven Spielberg's film adaptation of the book, the pagoda is clearly visible above the prison camp.
References
D.C. Burn, A Guide to Lunghwa Temple, Shanghai: Kelly & Walsh (1926).
Eric N. Danielson, Discover Shanghai, Singapore: Marshall Cavendish (2010). [pp. 73–81 on Longhua, and pp. 98–100 on Shanghai's 16 historic pagodas.]
Eric N. Danielson, “How Old is Shanghai's Longhua Temple?” Hong Kong: Journal of the Hong Kong Branch of the Royal Asiatic Society, Vol. 43, 2003 (2006). [pp. 15–28]
Longhua Zhen Zhi, Shanghai (1996).
Pan Mingquan, Shanghai Fo Si, Dao Guan, Shanghai: Shanghai Ci Shu Chubanshe (2003).
Zhang Qinghua and Zhu Baikui, Longhua, Yangzhou: Guanglin Shu She (2003).
Bibliography
External links
How Old is Shanghai's Longhua Temple?
Journal of the Royal Asiatic Society Hong Kong Branch
Chinatravel1.com
China Daily
short article in "The Economist" (site history)
Longhua Monastery, Architectura Sinica Site Archive
10th-century Buddhist temples
977
Architectural history
Buddhist temples in Shanghai
Architecture in China
Landmarks in Shanghai
Song dynasty
Xuhui District
Major National Historical and Cultural Sites in Shanghai
Linji school temples | Longhua Temple | [
"Engineering"
] | 1,805 | [
"Architectural history",
"Architecture"
] |
1,605,422 | https://en.wikipedia.org/wiki/Bart%20Kosko | Bart Andrew Kosko (born February 7, 1960) is a writer and professor of electrical engineering and law at the University of Southern California (USC). He is a researcher and popularizer of fuzzy logic, neural networks, and noise, and the author of several trade books and textbooks on these and related subjects of machine intelligence. He was awarded the 2022 Donald O. Hebb Award for neural learning by the International Neural Network Society.
Personal background
Kosko holds bachelor's degrees in philosophy and in economics from USC (1982), a master's degree in applied mathematics from UC San Diego (1983), a PhD in electrical engineering from UC Irvine (1987) under Allen Stubberud, and a J.D. from Concord Law School. He is an attorney licensed in California and federal court, and worked part-time as a law clerk for the Los Angeles District Attorney's Office.
Kosko is a political and religious skeptic. He is a contributing editor of the libertarian periodical Liberty, where he has published essays on "Palestinian vouchers".
Writing
Kosko's most popular book to date was the international best-seller Fuzzy Thinking, about man and machines thinking in shades of gray, and his most recent book was Noise. He has also published short fiction and the cyber-thriller novel Nanotime, about a possible World War III that takes place in two days of the year 2030. The novel's title coins the term "nanotime" to describe the time speed-up that occurs when fast computer chips, rather than slow brains, house minds.
Kosko has a minimalist prose style, not even using commas in his book Noise.
Research
Kosko's technical contributions have been in three main areas: fuzzy logic, neural networks, and noise.
In fuzzy logic, he introduced fuzzy cognitive maps, fuzzy subsethood, additive fuzzy systems, fuzzy approximation theorems, optimal fuzzy rules, fuzzy associative memories, various neural-based adaptive fuzzy systems, ratio measures of fuzziness, the shape of fuzzy sets, the conditional variance of fuzzy systems, and the geometric view of (finite) fuzzy sets as points in hypercubes and its relationship to the ongoing debate of fuzziness versus probability.
In neural networks, Kosko introduced the unsupervised technique of differential Hebbian learning, sometimes called the "differential synapse," and most famously the BAM or bidirectional associative memory family of feedback neural architectures, with corresponding global stability theorems.
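A minimal sketch of BAM-style recall (illustrative only; the two pattern pairs are arbitrary, and this simplified bipolar version omits the stability analysis mentioned above): the weight matrix is the sum of outer products of associated bipolar patterns, and recall iterates a threshold rule back and forth between the two layers until the pair stabilizes.

```python
import numpy as np

def train_bam(pairs):
    """Weight matrix as the sum of outer products of associated bipolar pattern pairs."""
    return sum(np.outer(a, b) for a, b in pairs)

def recall(W, a):
    """Recover an associated pair from an A-side cue by bidirectional iteration."""
    b = np.sign(a @ W)
    while True:
        a_next = np.sign(b @ W.T)
        b_next = np.sign(a_next @ W)
        if np.array_equal(a_next, a) and np.array_equal(b_next, b):
            return a_next, b_next
        a, b = a_next, b_next

# Two associated bipolar (+1/-1) pattern pairs, chosen arbitrarily for the demo.
pairs = [
    (np.array([ 1, -1,  1, -1,  1, -1]), np.array([ 1,  1, -1, -1])),
    (np.array([-1, -1,  1,  1, -1,  1]), np.array([-1,  1,  1, -1])),
]
W = train_bam(pairs)

noisy_cue = np.array([1, -1, 1, -1, -1, -1])   # first A pattern with one bit flipped
_, b_recovered = recall(W, noisy_cue)
print("recovered B pattern:", b_recovered)      # [ 1  1 -1 -1]
```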
In noise, Kosko introduced the concept of adaptive stochastic resonance, using neural-like learning algorithms to find the optimal level of noise to add to many nonlinear systems to improve their performance. He proved many versions of the so-called "forbidden interval theorem," which guarantees that noise will benefit a system if the average level of noise does not fall in an interval of values. He also showed that noise can speed up the convergence of Markov chains to equilibrium.
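A rough sketch of the stochastic-resonance effect itself (the signal amplitude, threshold, and noise levels below are arbitrary assumptions, and this is not Kosko's adaptive learning algorithm): a weak, subthreshold signal is invisible to a hard threshold detector until a moderate amount of noise is added, while too much noise washes the signal out again.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_correlation(noise_std, n=20000, threshold=1.0):
    """Correlation between a weak subthreshold signal and a hard threshold
    detector's output, as a function of the added noise level."""
    t = np.arange(n)
    signal = 0.4 * np.sin(2 * np.pi * t / 200)        # peak 0.4 < threshold 1.0
    output = (signal + rng.normal(0.0, noise_std, n) > threshold).astype(float)
    if output.std() == 0.0:                           # detector never fires
        return 0.0
    return float(np.corrcoef(signal, output)[0, 1])

for sigma in (0.05, 0.3, 0.6, 1.0, 3.0):
    print(f"noise std {sigma:4.2f} -> signal/output correlation {detection_correlation(sigma):.3f}")
```

The correlation is near zero with almost no noise, peaks at an intermediate noise level, and falls again as the noise dominates, which is the inverted-U signature of stochastic resonance.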
Books
Nonfiction
(with coauthor Simon Haykin)
Fiction
References
External links
Official site
1960 births
Living people
American libertarians
University of California, San Diego alumni
University of Southern California alumni
University of Southern California faculty
American probability theorists
Applied mathematics
American male novelists | Bart Kosko | [
"Mathematics"
] | 659 | [
"Applied mathematics"
] |
1,605,425 | https://en.wikipedia.org/wiki/Axial%20compressor | An axial compressor is a gas compressor that can continuously pressurize gases. It is a rotating, airfoil-based compressor in which the gas or working fluid principally flows parallel to the axis of rotation, or axially. This differs from other rotating compressors such as centrifugal compressor, axi-centrifugal compressors and mixed-flow compressors where the fluid flow will include a "radial component" through the compressor.
The energy level of the fluid increases as it flows through the compressor due to the action of the rotor blades which exert a torque on the fluid. The stationary blades slow the fluid, converting the circumferential component of flow into pressure. Compressors are typically driven by an electric motor or a steam or a gas turbine.
Axial flow compressors produce a continuous flow of compressed gas, and have the benefits of high efficiency and large mass flow rate, particularly in relation to their size and cross-section. They do, however, require several rows of airfoils to achieve a large pressure rise, making them complex and expensive relative to other designs (e.g. centrifugal compressors).
Axial compressors are integral to the design of large gas turbines such as jet engines, high speed ship engines, and small scale power stations. They are also used in industrial applications such as large volume air separation plants, blast furnace air, fluid catalytic cracking air, and propane dehydrogenation. Due to high performance, high reliability and flexible operation during the flight envelope, they are also used in aerospace rocket engines, as fuel pumps and in other critical high volume applications.
Description
Axial compressors consist of rotating and stationary components. A shaft drives a central drum which is retained by bearings inside of a stationary tubular casing. Between the drum and the casing are rows of airfoils, each row connected to either the drum or the casing in an alternating manner. A pair of one row of rotating airfoils and the next row of stationary airfoils is called a stage. The rotating airfoils, also known as blades or rotors, accelerate the fluid in both the axial and circumferential directions. The stationary airfoils, also known as vanes or stators, convert the increased kinetic energy into static pressure through diffusion and redirect the flow direction of the fluid to prepare it for the rotor blades of the next stage. The cross-sectional area between rotor drum and casing is reduced in the flow direction to maintain an optimum Mach number axial velocity as the fluid is compressed.
Working
As the fluid enters and leaves in the axial direction, the centrifugal component in the energy equation does not come into play. Here the compression is fully based on diffusing action of the passages. The diffusing action in the stator converts the absolute kinetic head of the fluid into a rise in pressure. The relative kinetic head in the energy equation is a term that exists only because of the rotation of the rotor. The rotor reduces the relative kinetic head of the fluid and adds it to the absolute kinetic head of the fluid i.e., the impact of the rotor on the fluid particles increases their velocity (absolute) and thereby reduces the relative velocity between the fluid and the rotor. In short, the rotor increases the absolute velocity of the fluid and the stator converts this into pressure rise. Designing the rotor passage with a diffusing capability can produce a pressure rise in addition to its normal functioning. This produces greater pressure rise per stage which constitutes a stator and a rotor together. This is the reaction principle in turbomachines. If 50% of the pressure rise in a stage is obtained at the rotor section, it is said to have a 50% reaction.
Design
The increase in pressure produced by a single stage is limited by the relative velocity between the rotor and the fluid, and the turning and diffusion capabilities of the airfoils. A typical stage in a commercial compressor will produce a pressure increase of between 15% and 60% (pressure ratios of 1.15–1.6) at design conditions with a polytropic efficiency in the region of 90–95%. To achieve different pressure ratios, axial compressors are designed with different numbers of stages and rotational speeds. As a rule of thumb, each stage in a given compressor can be assumed to produce the same stagnation temperature rise (ΔT). The entry temperature to each stage (T_stage) therefore increases progressively through the compressor, so the ratio ΔT/T_stage at entry decreases, implying a progressive reduction in stage pressure ratio through the unit. Hence the rear stage develops a significantly lower pressure ratio than the first stage. Higher stage pressure ratios are also possible if the relative velocity between fluid and rotors is supersonic, but this is achieved at the expense of efficiency and operability. Such compressors, with stage pressure ratios of over 2, are only used where minimizing the compressor size, weight or complexity is critical, such as in military jets.
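A rough numerical sketch of this rule of thumb (the stage count, temperature rise per stage, inlet temperature, and polytropic efficiency below are assumed values, not figures for any particular engine):

```python
# Assumed values for illustration: 10 stages, 40 K stagnation temperature rise
# per stage, sea-level inlet temperature, polytropic efficiency 0.90, gamma 1.4.
n_stages, dT, T_in = 10, 40.0, 288.15
eta_poly, gamma = 0.90, 1.4
exponent = eta_poly * gamma / (gamma - 1.0)

overall_pr, T = 1.0, T_in
for stage in range(1, n_stages + 1):
    stage_pr = ((T + dT) / T) ** exponent      # polytropic pressure-temperature relation
    overall_pr *= stage_pr
    print(f"stage {stage:2d}: entry T = {T:6.1f} K, stage pressure ratio = {stage_pr:.2f}")
    T += dT
print(f"overall pressure ratio ~ {overall_pr:.1f}")
```

With these assumptions the front stage produces a ratio of roughly 1.5 while the last stage produces only about 1.2, even though every stage does the same work.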
The airfoil profiles are optimized and matched for specific velocities and turning. Although compressors can be run at other conditions with different flows, speeds, or pressure ratios, this can result in an efficiency penalty or even a partial or complete breakdown in flow (known as compressor stall and pressure surge respectively). Thus, a practical limit on the number of stages, and the overall pressure ratio, comes from the interaction of the different stages when required to work away from the design conditions. These "off-design" conditions can be mitigated to a certain extent by providing some flexibility in the compressor. This is achieved normally through the use of adjustable stators or with valves that can bleed fluid from the main flow between stages (inter-stage bleed). Modern jet engines use a series of compressors, running at different speeds, to supply air at around a 40:1 pressure ratio for combustion with sufficient flexibility for all flight conditions.
Kinetics and energy equations
The law of moment of momentum states that the sum of the moments of external forces acting on a fluid which is temporarily occupying the control volume is equal to the net change of angular momentum flux through the control volume.
The swirling fluid enters the control volume at radius $r_1$ with tangential velocity $C_{w1}$, and leaves at radius $r_2$ with tangential velocity $C_{w2}$.
$C_1$ and $C_2$ are the absolute velocities at the inlet and outlet respectively.
$C_{a1}$ and $C_{a2}$ are the axial flow velocities at the inlet and outlet respectively.
$C_{w1}$ and $C_{w2}$ are the swirl velocities at the inlet and outlet respectively.
$V_1$ and $V_2$ are the blade-relative velocities at the inlet and outlet respectively.
$U$ is the linear velocity of the blade.
$\alpha$ is the guide vane angle and $\beta$ is the blade angle.
Rate of change of momentum, $F$, is given by the equation
$$F = \dot{m}\,(C_{w2} - C_{w1})$$
(from the velocity triangle).
Power consumed by an ideal moving blade, $P$, is given by the equation
$$P = \dot{m}\,U\,(C_{w2} - C_{w1})$$
Change in stagnation enthalpy of the fluid in the moving blades:
$$\Delta h_0 = h_{02} - h_{01}$$
Therefore,
$$\dot{m}\,(h_{02} - h_{01}) = \dot{m}\,U\,(C_{w2} - C_{w1})$$
which implies
$$h_{02} - h_{01} = c_p\,(T_{02} - T_{01}) = U\,(C_{w2} - C_{w1})$$
Isentropic compression in the rotor blade relates the stagnation pressure rise to this temperature rise,
$$\frac{p_{02}}{p_{01}} = \left(\frac{T_{02}}{T_{01}}\right)^{\gamma/(\gamma-1)}$$
Therefore,
$$\frac{p_{02}}{p_{01}} = \left(1 + \frac{U\,(C_{w2} - C_{w1})}{c_p\,T_{01}}\right)^{\gamma/(\gamma-1)}$$
which implies that the stage pressure ratio is fixed by the blade speed and the change in swirl velocity.
Degree of Reaction, $R$:
The pressure difference between the entry and exit of the rotor blade is called the reaction pressure. The change in pressure energy is calculated through the degree of reaction, the fraction of the stage's static enthalpy rise that occurs in the rotor.
Therefore,
$$R = \frac{\text{static enthalpy rise in the rotor}}{\text{static enthalpy rise in the stage}} = \frac{h_2 - h_1}{h_3 - h_1}$$
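A short numerical sketch of these relations (the blade speed, axial velocity, and flow angles are arbitrary assumed values chosen only to exercise the algebra; the constant-axial-velocity expression for the degree of reaction is the standard textbook form rather than a quotation from this article):

```python
import math

# Assumed stage data for illustration only.
U      = 250.0                  # blade speed, m/s
Ca     = 150.0                  # axial velocity, m/s (taken constant through the stage)
alpha1 = math.radians(15.0)     # absolute flow angle at rotor inlet
beta2  = math.radians(30.0)     # relative flow angle at rotor exit
cp     = 1005.0                 # J/(kg K) for air

Cw1 = Ca * math.tan(alpha1)         # inlet swirl velocity
Cw2 = U - Ca * math.tan(beta2)      # exit swirl velocity from the velocity triangle

work_per_kg = U * (Cw2 - Cw1)       # Euler work input, J/kg
dT0 = work_per_kg / cp              # stagnation temperature rise, K

# Degree of reaction for constant axial velocity: R = 1 - (Cw1 + Cw2) / (2 U)
R = 1.0 - (Cw1 + Cw2) / (2.0 * U)

print(f"Cw1 = {Cw1:5.1f} m/s, Cw2 = {Cw2:5.1f} m/s")
print(f"work input = {work_per_kg / 1000:.1f} kJ/kg, stagnation temperature rise = {dT0:.1f} K")
print(f"degree of reaction R = {R:.2f}")
```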
Performance characteristics
Instabilities
Greitzer used a Helmholtz resonator type of compression system model to predict the transient response of a compression system after a small perturbation superimposed on a steady operating condition. He found a non-dimensional parameter which predicted which mode of compressor instability, rotating stall or surge, would result. The parameter used the rotor speed, Helmholtz resonator frequency of the system and an "effective length" of the compressor duct. It had a critical value which predicted either rotating stall or surge where the slope of pressure ratio against flow changed from negative to positive.
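The non-dimensional parameter is conventionally written as Greitzer's B parameter; the form below is the standard one from the turbomachinery literature rather than a quotation from this article:

```latex
% U: mean rotor speed, a: speed of sound, V_p: plenum volume,
% A_c and L_c: compressor-duct flow area and effective length,
% \omega_H: Helmholtz resonator frequency of the compression system.
\[
  B = \frac{U}{2\,\omega_H L_c}
    = \frac{U}{2a}\sqrt{\frac{V_p}{A_c L_c}},
  \qquad
  \omega_H = a\sqrt{\frac{A_c}{V_p L_c}} .
\]
```

Large values of B favour surge and small values favour rotating stall, with the critical value marking the change of instability mode.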
Steady-state performance
Axial compressor performance is shown on a compressor map, also known as a characteristic, by plotting pressure ratio and efficiency against corrected mass flow at different values of corrected compressor speed.
Axial compressors, particularly near their design point are usually amenable to analytical treatment, and a good estimate of their performance can be made before they are first run on a rig. The compressor map shows the complete running range, i.e. off-design, of the compressor from ground idle to its highest corrected rotor speed, which for a civil engine may occur at top-of-climb, or, for a military combat engine, at take-off on a cold day. Not shown is the sub-idle performance region needed for analyzing normal ground and in-flight windmill start behaviour.
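A small sketch of the correction itself (the reference conditions are the usual sea-level standard day, and the sample operating point is an assumed value used only for illustration):

```python
# Sea-level standard-day reference conditions.
T_REF, P_REF = 288.15, 101_325.0        # K, Pa

def corrected_flow_and_speed(m_dot, n_rpm, t0, p0):
    """Corrected (referred) mass flow and rotor speed as plotted on compressor maps."""
    theta = t0 / T_REF                  # inlet stagnation temperature ratio
    delta = p0 / P_REF                  # inlet stagnation pressure ratio
    return m_dot * theta ** 0.5 / delta, n_rpm / theta ** 0.5

# Example: a cruise-like inlet condition (assumed numbers).
m_corr, n_corr = corrected_flow_and_speed(m_dot=60.0, n_rpm=9500.0, t0=250.0, p0=45_000.0)
print(f"corrected mass flow ~ {m_corr:.0f} kg/s, corrected speed ~ {n_corr:.0f} rpm")
```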
The performance of a single compressor stage may be shown by plotting the stage loading coefficient ($\psi$) as a function of the flow coefficient ($\phi$).
Stage pressure ratio against flow rate is lower than for a no-loss stage as shown. Losses are due to blade friction, flow separation, unsteady flow and vane-blade spacing.
Off-design operation
The performance of a compressor is defined according to its design. But in actual practice, the operating point of the compressor deviates from the design point, which is known as off-design operation.
from equation (1) and (2)
The value of doesn't change for a wide range of operating points till stalling. Also because of minor change in air angle at rotor and stator, where is diffuser blade angle.
is constant
Representing design values with (')
for off-design operations (from ):
for positive values of J, slope of the curve is negative and vice versa.
Surging
In the plot of pressure against flow rate, the line separating the stable and unstable regions of the graph is known as the surge line. This line is formed by joining the surge points at different rpm. Unstable flow in axial compressors due to complete breakdown of the steady through-flow is termed surging. This phenomenon affects the performance of the compressor and is undesirable.
Surge cycle
The following explanation for surging refers to running a compressor at a constant speed on a rig and gradually reducing the exit area by closing a valve. What happens, i.e. crossing the surge line, is caused by the compressor trying to deliver air, still running at the same speed, to a higher exit pressure. When the compressor is operating as part of a complete gas turbine engine, as opposed to on a test rig, a higher delivery pressure at a particular speed can be caused momentarily by burning too-great a step-jump in fuel which causes a momentary blockage until the compressor increases to the speed which goes with the new fuel flow and the surging stops.
Suppose the initial operating point D () at some rpm N. On decreasing the flow rate at the same rpm along the characteristic curve by partially closing the valve, the pressure in the pipe increases, which is accommodated by an increase in input pressure at the compressor. The pressure can rise further until point P (the surge point), where the compressor pressure reaches its maximum. Moving further to the left while keeping the rpm constant, the pressure in the pipe increases but the compressor delivery pressure decreases, leading to a back flow of air towards the compressor. Because of this back flow, the pressure in the pipe decreases, since this unequal pressure condition cannot persist for long. Although the valve position is set for a lower flow rate, say point G, the compressor operates according to a normal stable operating point, say E, so the path E-F-P-G-E is followed, leading to a breakdown of flow, and the pressure in the compressor falls further to point H (). This rise and fall of pressure repeats in the pipe and compressor following the cycle E-F-P-G-H-E, also known as the surge cycle.
This phenomenon causes vibrations in the whole machine and may lead to mechanical failure. That is why the portion of the curve to the left of the surge point is called the unstable region and may cause damage to the machine. The recommended operating range is therefore on the right side of the surge line.
Stalling
Stalling is an important phenomenon that affects the performance of the compressor. An analysis can be made of rotating stall in compressors of many stages, finding the conditions under which a flow distortion can occur that is steady in a travelling reference frame, even though the upstream total and downstream static pressures are constant. In the compressor, a pressure-rise hysteresis is assumed. Stall is a situation of separation of the air flow at the airfoil blades of the compressor. This phenomenon, depending upon the blade profile, leads to reduced compression and a drop in engine power.
Positive stalling: flow separation occurs on the suction side of the blade.
Negative stalling: flow separation occurs on the pressure side of the blade.
Negative stall is negligible compared to positive stall because flow separation is least likely to occur on the pressure side of the blade.
In a multi-stage compressor, at the high-pressure stages, the axial velocity is very small. The margin to stall decreases with a small deviation from the design point, causing stall near the hub and tip regions, whose size increases with decreasing flow rate. The stalled regions grow larger at very low flow rates and can affect the entire blade height. Delivery pressure drops significantly with large stalling, which can lead to flow reversal. The stage efficiency drops with the higher losses.
Rotating stalling
Non-uniformity of air flow in the rotor blades may disturb local air flow in the compressor without upsetting it. The compressor continues to work normally but with reduced compression. Thus, rotating stall decreases the effectiveness of the compressor.
Consider a rotor with blades moving, say, towards the right. If some blades receive flow at higher incidence, those blades will stall positively. They create an obstruction in the passage between each stalled blade and the blade to its left. Thus the blade to the left receives the flow at higher incidence and the blade to the right at decreased incidence: the blade to the left experiences more stall while the blade to the right experiences less. Stalling therefore decreases towards the right and increases towards the left, so the stall cells appear to move relative to the blades. The movement of the rotating stall can be observed depending upon the chosen reference frame.
Effects
Rotating stall reduces the efficiency of the compressor.
It causes forced vibrations in the blades as they pass through the stalled region.
These forced vibrations may match the natural frequency of the blades, causing resonance and hence failure of the blade.
Development
From an energy exchange point of view axial compressors are reversed turbines. Steam-turbine designer Charles Algernon Parsons, for example, recognized that a turbine which produced work by virtue of a fluid's static pressure (i.e. a reaction turbine) could have its action reversed to act as an air compressor, calling it a turbo compressor or pump. His rotor and stator blading described in one of his patents had little or no camber although in some cases the blade design was based on propeller theory. The machines, driven by steam turbines, were used for industrial purposes such as supplying air to blast furnaces. Parsons supplied the first commercial axial flow compressor for use in a lead smelter in 1901. Parsons' machines had low efficiencies, later attributed to blade stall, and were soon replaced with more efficient centrifugal compressors. Brown Boveri & Cie produced "reversed turbine" compressors, driven by gas turbines, with blading derived from aerodynamic research which were more efficient than centrifugal types when pumping large flow rates of 40,000 cu.ft. per minute at pressures up to 45 p.s.i.
Because early axial compressors were not efficient enough a number of papers in the early 1920s claimed that a practical axial-flow turbojet engine would be impossible to construct. Things changed after A. A. Griffith published a seminal paper in 1926, noting that the reason for the poor performance was that existing compressors used flat blades and were essentially "flying stalled". He showed that the use of airfoils instead of the flat blades would increase efficiency to the point where a practical jet engine was a real possibility. He concluded the paper with a basic diagram of such an engine, which included a second turbine that was used to power a propeller.
Although Griffith was well known due to his earlier work on metal fatigue and stress measurement, little work appears to have started as a direct result of his paper. The only obvious effort was a test-bed compressor built by Hayne Constant, Griffith's colleague at the Royal Aircraft Establishment. Other early jet efforts, notably those of Frank Whittle and Hans von Ohain, were based on the more robust and better understood centrifugal compressor which was widely used in superchargers. Griffith had seen Whittle's work in 1929 and dismissed it, noting a mathematical error, and going on to claim that the frontal size of the engine would make it useless on a high-speed aircraft.
Real work on axial-flow engines started in the late 1930s, in several efforts that all started at about the same time. In England, Hayne Constant reached an agreement with the steam turbine company Metropolitan-Vickers (Metrovick) in 1937, starting their turboprop effort based on the Griffith design in 1938. In 1940, after the successful run of Whittle's centrifugal-flow design, their effort was re-designed as a pure jet, the Metrovick F.2. In Germany, von Ohain had produced several working centrifugal engines, some of which had flown including the world's first jet aircraft (He 178), but development efforts had moved on to Junkers (Jumo 004) and BMW (BMW 003), which used axial-flow designs in the world's first jet fighter (Messerschmitt Me 262) and jet bomber (Arado Ar 234). In the United States, both Lockheed and General Electric were awarded contracts in 1941 to develop axial-flow engines, the former a pure jet, the latter a turboprop. Northrop also started their own project to develop a turboprop, which the US Navy eventually contracted in 1943. Westinghouse also entered the race in 1942, their project proving to be the only successful one of the US efforts, later becoming the J30.
As Griffith had originally noted in 1929, the large frontal size of the centrifugal compressor caused it to have higher drag than the narrower axial-flow type. Additionally the axial-flow design could improve its compression ratio simply by adding additional stages and making the engine slightly longer. In the centrifugal-flow design the compressor itself had to be larger in diameter, which was much more difficult to fit properly into a thin and aerodynamic aircraft fuselage (although not dissimilar to the profile of radial engines already in widespread use). On the other hand, centrifugal-flow designs remained much less complex (the major reason they "won" in the race to flying examples) and therefore have a role in places where size and streamlining are not so important.
Axial-flow jet engines
In the jet engine application, the compressor faces a wide variety of operating conditions. On the ground at takeoff the inlet pressure is high, inlet speed zero, and the compressor spun at a variety of speeds as the power is applied. Once in flight the inlet pressure drops, but the inlet speed increases (due to the forward motion of the aircraft) to recover some of this pressure, and the compressor tends to run at a single speed for long periods of time.
There is simply no "perfect" compressor for this wide range of operating conditions. Fixed geometry compressors, like those used on early jet engines, are limited to a design pressure ratio of about 4 or 5:1. As with any heat engine, fuel efficiency is strongly related to the compression ratio, so there is very strong financial need to improve the compressor stages beyond these sorts of ratios.
Additionally the compressor may stall if the inlet conditions change abruptly, a common problem on early engines. In some cases, if the stall occurs near the front of the engine, all of the stages from that point on will stop compressing the air. In this situation the energy required to run the compressor drops suddenly, and the remaining hot air in the rear of the engine allows the turbine to speed up the whole engine dramatically. This condition, known as surging, was a major problem on early engines and often led to the turbine or compressor breaking and shedding blades.
For all of these reasons, axial compressors on modern jet engines are considerably more complex than those on earlier designs.
Spools
All compressors have an optimum point relating rotational speed and pressure, with higher compressions requiring higher speeds. Early engines were designed for simplicity, and used a single large compressor spinning at a single speed. Later designs added a second turbine and divided the compressor into low-pressure and high-pressure sections, the latter spinning faster. This two-spool design, pioneered on the Bristol Olympus, resulted in increased efficiency. Further increases in efficiency may be realised by adding a third spool, but in practice the added complexity increases maintenance costs to the point of negating any economic benefit. That said, there are several three-spool engines in use, perhaps the most famous being the Rolls-Royce RB211, used on a wide variety of commercial aircraft.
Bleed air, variable stators
As an aircraft changes speed or altitude, the pressure of the air at the inlet to the compressor will vary. In order to "tune" the compressor for these changing conditions, designs starting in the 1950s would "bleed" air out of the middle of the compressor in order to avoid trying to compress too much air in the final stages. This was also used to help start the engine, allowing it to be spun up without compressing much air by bleeding off as much as possible. Bleed systems were already commonly used anyway, to provide airflow into the turbine stage where it was used to cool the turbine blades, as well as provide pressurized air for the air conditioning systems inside the aircraft.
A more advanced design, the variable stator, used blades that can be individually rotated around their axis, as opposed to the power axis of the engine. For startup they are rotated to "closed", reducing compression, and then are rotated back into the airflow as the external conditions require. The General Electric J79 was the first major example of a variable stator design, and today it is a common feature of most military engines.
Closing the variable stators progressively, as compressor speed falls, reduces the slope of the surge (or stall) line on the operating characteristic (or map), improving the surge margin of the installed unit. By incorporating variable stators in the first five stages, General Electric Aircraft Engines has developed a ten-stage axial compressor capable of operating at a 23:1 design pressure ratio.
Design notes
Energy exchange between rotor and fluid
The relative motion of the blades to the fluid adds velocity or pressure or both to the fluid as it passes through the rotor. The fluid velocity is increased through the rotor, and the stator converts kinetic energy to pressure energy. Some diffusion also occurs in the rotor in most practical designs.
The increase in velocity of the fluid is primarily in the tangential direction (swirl) and the stator removes this angular momentum.
The pressure rise results in a stagnation temperature rise. For a given geometry the temperature rise depends on the square of the tangential Mach number of the rotor row. Current turbofan engines have fans that operate at Mach 1.7 or more, and require significant containment and noise suppression structures to reduce blade loss damage and noise.
Compressor maps
A map shows the performance of a compressor and allows determination of optimal operating conditions. It shows the mass flow along the horizontal axis, typically as a percentage of the design mass flow rate, or in actual units. The pressure rise is indicated on the vertical axis as a ratio between inlet and exit stagnation pressures.
A surge or stall line identifies the boundary to the left of which the compressor performance rapidly degrades and identifies the maximum pressure ratio that can be achieved for a given mass flow. Contours of efficiency are drawn as well as performance lines for operation at particular rotational speeds.
Compression stability
Operating efficiency is highest close to the stall line. If the downstream pressure is increased beyond the maximum possible the compressor will stall and become unstable.
Typically the instability will be at the Helmholtz frequency of the system, taking the downstream plenum into account.
See also
References
Bibliography
Treager, Irwin E. 'Aircraft Gas Turbine Engine Technology', 3rd edn, McGraw-Hill Book Company, 1995.
Hill, Philip and Carl Peterson. 'Mechanics and Thermodynamics of Propulsion', 2nd edn, Prentice Hall, 1991.
Kerrebrock, Jack L. 'Aircraft Engines and Gas Turbines', 2nd edn, Cambridge, Massachusetts: The MIT Press, 1992.
Rangwalla, Abdulla S. 'Turbo-Machinery Dynamics: Design and Operation', New York: McGraw-Hill, 2005.
Wilson, David Gordon and Theodosios Korakianitis. 'The Design of High-Efficiency Turbomachinery and Turbines', 2nd edn, Prentice Hall, 1998.
Gas compressors | Axial compressor | [
"Chemistry"
] | 5,093 | [
"Gas compressors",
"Turbomachinery"
] |
5,640,193 | https://en.wikipedia.org/wiki/Astro-G | ASTRO-G (also known as VSOP-2, and very rarely called VSOP-B) was a planned radio telescope satellite by JAXA. It was expected to be launched into elliptic orbit around Earth (apogee height 25,000 km, perigee height 1,000 km).
History
Astro-G was selected in February 2006 against the competition of a proposed new X-Ray astronomy mission (NeXT) and a proposed solar sail mission to Jupiter.
Funding started from FY 2007 with a budget of 12 billion yen, around 100 million US dollars.
It was planned to be launched in 2012, but technical difficulty with the dish antenna as well as budget constraints led to development being put on hold for fiscal year 2010. The project was eventually cancelled in 2011 because of the increased cost and the difficulty of achieving its science goals.
It was planned to feature a 9 m diameter dish antenna to observe in 8, 22 and 43 GHz bands, and was to be used in combination with ground radio telescopes to perform Very Long Baseline Interferometry. It was expected to achieve ten times higher resolution and ten times higher sensitivity than its predecessor HALCA.
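As a rough sense of scale (a back-of-the-envelope calculation using the orbit and highest frequency quoted above, not a mission specification), an interferometer's diffraction-limited resolution is approximately the observing wavelength divided by the baseline length:

```python
import math

C = 299_792_458.0                  # speed of light, m/s
freq_hz = 43e9                     # highest observing band quoted above
baseline_m = 25_000e3 + 6_371e3    # apogee plus roughly one Earth radius, a crude maximum baseline

wavelength = C / freq_hz
resolution_rad = wavelength / baseline_m
resolution_uas = math.degrees(resolution_rad) * 3600e6    # micro-arcseconds

print(f"wavelength ~ {wavelength * 1000:.1f} mm")
print(f"angular resolution ~ {resolution_uas:.0f} micro-arcseconds")
```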
Science targets
Key science :
Jet structure, collimation and acceleration regions
Structure of accretion disks around AGN
Structure of magnetic fields in protostars
Other science targets:
Galactic masers in star-forming region
Extragalactic Megamasers
Radio quiet quasars
X-ray binaries, SNR, gravitational lenses etc.
See also
HALCA
Spektr-R
References
External links
ASTRO-G - Japan Aerospace Exploration Agency - Institute of Space and Astronautical Science
VSOP-2 Project
- National Astronomical Observatory of Japan
Satellites of Japan
Cancelled spacecraft
Radio telescopes
Space telescopes | Astro-G | [
"Astronomy"
] | 345 | [
"Space telescopes"
] |
5,640,236 | https://en.wikipedia.org/wiki/Write%20barrier | In operating systems, write barrier is a mechanism for enforcing a particular ordering in a sequence of writes to a storage system in a computer system. For example, a write barrier in a file system is a mechanism (program logic) that ensures that in-memory file system state is written out to persistent storage in the correct order.
In garbage collection
A write barrier in a garbage collector is a fragment of code emitted by the compiler immediately before every store operation to ensure that (e.g.) generational invariants are maintained.
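A toy sketch of one common form, a card-marking write barrier for a generational collector (illustrative only: a real collector emits the barrier as a few inline machine instructions, and all names here are invented for the example):

```python
CARD_SIZE = 512                       # bytes of heap covered by one card (assumed)
card_table = bytearray(1 << 20)       # one "dirty" byte per card

class ObjRef:
    """Stand-in for a heap object: an address, a generation flag, and named fields."""
    def __init__(self, address, is_old):
        self.address = address
        self.is_old = is_old
        self.fields = {}

def write_field(holder, name, value):
    """Store `value` into `holder.fields[name]`, running the write barrier first."""
    # Barrier: if an old-generation object gains a reference to a young object,
    # mark the holder's card so the next minor collection scans it as a root.
    if holder.is_old and isinstance(value, ObjRef) and not value.is_old:
        card_table[holder.address // CARD_SIZE] = 1
    holder.fields[name] = value

old_obj = ObjRef(address=0x4000, is_old=True)
young_obj = ObjRef(address=0x80, is_old=False)
write_field(old_obj, "child", young_obj)
print("card dirty:", card_table[old_obj.address // CARD_SIZE])   # 1
```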
In computer storage
A write barrier in a memory system, also known as a memory barrier, is a hardware-specific compiler intrinsic that ensures that all preceding memory operations "happen before" all subsequent ones.
See also
Native Command Queuing
References
External links
Barriers and journaling filesystems (LWN.net, May 21, 2008)
Compilers
Memory management | Write barrier | [
"Technology"
] | 179 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
5,640,397 | https://en.wikipedia.org/wiki/International%20Academy%20of%20Astronautics | The International Academy of Astronautics (IAA) is a Paris-based non-government association for the field of astronautics. It was founded in Stockholm, Sweden) on August 16, 1960, by Dr. Theodore von Kármán. It was recognised by the United Nations in 1996.
The stated purpose of the IAA is:
Recognize the accomplishments of their peers
Explore and discuss cutting-edge issues in space research and technology
Provide direction and guidance in the non-military uses of space and the ongoing exploration of the solar system
Among the activities the academy is involved, there are:
Organizes annual conferences, symposia, and gatherings covering topics such as space sciences, space life sciences, space technology and system development, space systems operations and utilization, space policy, law, economy, space and society, culture, and education.
Publishes cosmic studies concerning space exploration, space debris, small satellites, space traffic management, natural disasters, climate change
Publishes the journal of the International Academy of Astronautics, Acta Astronautica.
Publishes dictionaries in 24 languages
Publishes book series on subjects such as small satellites, conference proceedings, remote sensing, and history.
IAA Mission
According to the Academy's mission statement, the fundamental purposes of the IAA, are to:
Foster the development of astronautics for peaceful purposes
Recognize individuals who have distinguished themselves in a branch of science or technology related to astronautics
Provide a program through which the membership can contribute to international endeavors
Promote international cooperation in the advancement of aerospace science.
Cooperation with other organizations
The IAA has established cooperation with: Royal Swedish Academy of Sciences (since 1985), Austrian Academy of Sciences (since 1986), French Academy of Sciences (since 1988), English Royal Society (since 1988), Academy of Finland (since 1988), Indian Academy of Sciences (since 1990), Royal Spanish Academy of Sciences (since 1989), German Academy of Sciences (since 1990), Kingdom of Netherlands (since 1990), Academies of Arts, Humanities & Sciences of Canada also known as Royal Society of Canada (since 1991), U.S. National Academy of Sciences (since 1992), U.S. National Academy of Engineering (since 1992), Israel Academy of Sciences and Humanities (since 1994), Norwegian Academy of Science and Letters (since 1995), Chinese Academy of Sciences (since 1996), Royal Academy of Sciences of Turin (since 1997), Australian Academy of Science (since 1998), Australian Academy of Technological Science and Engineering (since 1998), Royal Netherlands Academy of Arts and Sciences (since 1999), Brazilian Academy of Sciences (since 2000), U.S. Institute of Medicine (since 2002), Academy of Sciences of Ukraine (since 2010), Academy of Sciences of South Africa (since 2011), Royal Society of South Africa (since 2011) and Pontifical Academy of Sciences (since 2012).
Presidents
The Academy's first president was Theodore von Kármán. Edward C. Stone held the post of President of the International Academy of Astronautics until October 2009. G. Madhavan Nair, the chairman of the Indian Space Research Organization, was president of the International Academy of Astronautics from August 2009 until 2015. He was the only Indian and the first non-American to head the IAA.
Journal
The IAA sponsors the monthly journal Acta Astronautica, published by Elsevier Press, which "covers developments in space science technology in relation to peaceful scientific exploration of space and its exploitation for human welfare and progress, the conception, design, development and operation of space-borne and Earth-based systems.” In collaboration with the International Astronautical Federation the IAA launched a review journal, REACH-Reviews in Human Space Exploration, in 2016 that focuses on aspects of human space exploration.
References
External links
IAA Acta Astronautica Journal
Space advocacy organizations
Organizations based in Stockholm
International academies | International Academy of Astronautics | [
"Astronomy"
] | 776 | [
"Space advocacy organizations",
"Astronomy organizations"
] |
5,640,638 | https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%2010 | Bone morphogenetic protein 10 (BMP10) is a protein that in humans is encoded by the BMP10 gene.
BMP10 is a polypeptide belonging to the TGF-β superfamily of proteins. It is a novel protein that, unlike most other BMPs, is likely to be involved in the trabeculation of the heart. Bone morphogenetic proteins are known for their ability to induce bone and cartilage development. BMP10 is categorized as a BMP since it shares a large sequence homology with other BMPs in the TGF-β superfamily.
Further reading
References
External links
Developmental genes and proteins
Bone morphogenetic protein
TGFβ domain | Bone morphogenetic protein 10 | [
"Biology"
] | 149 | [
"Induced stem cells",
"Developmental genes and proteins"
] |
5,640,964 | https://en.wikipedia.org/wiki/Edgewood%20Arsenal%20human%20experiments | From 1948 to 1975, the U.S. Army Chemical Corps conducted classified human subject research at the Edgewood Arsenal facility in Maryland. These experiments began after the conclusion of World War II, and continued until the public became aware of the experiments, resulting in significant outcry. The purpose was to evaluate the impact of low-dose chemical warfare agents on military personnel and to test protective clothing, pharmaceuticals, and vaccines. A small portion of these studies were directed at psychochemical warfare; grouped under the title "Medical Research Volunteer Program" (1956–1975), driven by intelligence requirements and the need for new and more effective interrogation techniques.
Overall, about 6,720 soldiers took part in these experiments that involved exposures to more than 250 different chemicals, according to the Department of Defense (DoD). Some of the volunteers exhibited symptoms at the time of exposure to these agents but long-term follow-up was not planned as part of the DoD studies. The experiments were abruptly terminated by the Army in late 1975 amidst an atmosphere of scandal and recrimination as lawmakers accused researchers of questionable ethics. Many official government reports and civilian lawsuits followed in the wake of the controversy.
The chemical agents tested on volunteers included chemical warfare agents and other related agents:
Anticholinesterase nerve agents (VX, sarin) and common organophosphorus (OP) and carbamate pesticides
Mustard agents
Nerve agent antidotes including atropine and scopolamine
Nerve agent reactivators, e.g. the common OP antidote 2-PAM chloride
Psychoactive agents including LSD, PCP, cannabinoids, and BZ
Irritants and riot control agents
Alcohol and caffeine
History
Background and rationale
After the conclusion of World War II, U.S. military researchers obtained formulas for the three nerve gases developed by the Nazis—tabun, soman, and sarin.
In 1947, the first steps of planning began when Dr. Alsoph H. Corwin, a professor of chemistry at Johns Hopkins University, wrote to the Chemical Corps Technical Command positing the potential for the use of specialized enzymes as so-called "toxicological warfare agents". He went on to suggest that with intensive research, substances that depleted certain necessary nutrients could be found, which would, when administered on the battlefield, incapacitate enemy combatants.
In 1948, the US Army Edgewood Chemical Biological Center began conducting research using the aforementioned nerve gases. These studies included a classified human subjects component at least as early as 1948, when "psychological reactions" were documented in Edgewood technicians. Initially, such studies focused solely on the lethality of the gases and its treatment and prevention.
A classified report entitled "Psychochemical Warfare: A New Concept of War" was produced in 1949 by Luther Wilson Greene, Technical Director of the Chemical and Radiological Laboratories at Edgewood. Greene called for a search for novel psychoactive compounds that would create the same debilitating mental side effects as those produced by nerve gases, but without their lethal effect. In his words,Throughout recorded history, wars have been characterized by death, human misery, and the destruction of property; each major conflict being more catastrophic than the one preceding it ... I am convinced that it is possible, by means of the techniques of psychochemical warfare, to conquer an enemy without the wholesale killing of his people or the mass destruction of his property.
In the late 1940s and early '50s, the U.S. Army worked with Harvard anesthesiologist Henry K. Beecher at its interrogation center at Camp King in Germany on the use of psychoactive compounds (mescaline, LSD), including human subject experiments and the debriefing of former Nazi physicians and scientists who had worked along similar lines before the end of the war. In the 1950s, some officials in the U.S. Department of Defense publicly asserted that many "forms of chemical and allied warfare as more 'humane' than existing weapons. For example, certain types of 'psychochemicals' would make it possible to paralyze temporarily entire population centers without damage to homes and other structures." Soviet advances in the same field were cited as a special incentive giving impetus to research efforts in this area, according to testimony by Maj. Gen. Marshall Stubbs, the Army's chief chemical officer.
In June 1955, the United States Department of Defense appointed a so-called Ad Hoc Study Group on Psychochemical Agents, which seems to have acted as a central authority on the research of psychochemicals at Edgewood Arsenal and other installations where such experimentation occurred.
General William M. Creasy, former chief chemical officer, U.S. Army, testified to the U.S. House of Representatives in 1959 that "provided sufficient emphasis is put behind it, I think the future lies in the psychochemicals." This was alarming enough to a Harvard psychiatrist, E. James Lieberman, that he published an article entitled "Psychochemicals as Weapons" in The Bulletin of the Atomic Scientists in 1962. Lieberman, while acknowledging that "most of the military data" on the research ongoing at the Army Chemical Center was "secret and unpublished", asserted that "There are moral imponderables, such as whether insanity, temporary or permanent, is a more 'humane' military threat than the usual afflictions of war."
The experiments
The Edgewood Arsenal human experiments took place from approximately 1948 to 1975 at the Medical Research Laboratories—which is now known as the U.S. Army Medical Research Institute of Chemical Defense (USAMRICD)—at the Edgewood Area, Aberdeen Proving Ground, Maryland. The experiments involved at least 254 chemical substances, but focused mainly on midspectrum incapacitants, such as LSD, THC derivatives, benzodiazepines, and BZ. Around 7,000 US military personnel and 1,000 civilians were test subjects over almost three decades. A result of these experiments was that BZ was weaponized, although never deployed.
According to a DOD FAQ, the Edgewood Arsenal experiments involved the following "rough breakout of volunteer hours against various experimental categories":
Acetylcholine related experiments
Much of the experimentation at Edgewood Arsenal surrounded the modulation of acetylcholine or acetylcholinesterase, or the deactivation and reactivation of substances which did the same. These experiments represented a significant enough proportion of the total experimentation to earn a dedicated volume in the main experimental documentation. Much of the follow-up data on the acetylcholine-related experiments is lacking or entirely missing, due to a combination of remaining classification and failures on the part of the United States Department of Veterans Affairs and United States Department of Defense to follow the subjects of the experimentation.
Anticholinesterase experiments
Anticholinesterases are substances that interfere with the central nervous system and the peripheral nervous system by inhibiting acetylcholinesterase and therefore sustaining the effect of acetylcholine or butyrylcholine within the chemical synapses, resulting in a cholinergic crisis, and possibly death if untreated.
Long-term side effects of exposure to anticholinesterases, including exposure at levels below the threshold for profound illness and death, can include paralysis and peripheral neuropathy, sleep disturbance, genetic mutation, and cancer. In total, 1,406 subjects were tested with 16 agents, some of which included reactivating agents and protective agents.
Anticholinergic experiments
Anticholinergics are substances that interfere with the central nervous system and the peripheral nervous system by inhibiting acetylcholine, resulting in what is essentially the opposite effect of an cholinesterase inhibitor to the extreme. This can result in anticholinergic syndrome, and possibly death if untreated.
Available data from both experiment patients and pharmaceutical research indicate that short-term exposure to anticholinergic compounds, especially the extremely limited exposures described in the documentation, is associated with no long-term effects. Note, however, that in the decades since their introduction to medical use, research has begun to suggest a causal relationship between long-term anticholinergic drug use and the later development or worsening of dementia. In total, 1,752 subjects were tested with 21 agents, some of whom received exposure to more than one chemical agent.
Cholinesterase reactivator experiments
Cholinesterase reactivators are substances which reactivate acetylcholinesterase which has been inactivated by an anticholinesterase. This action can be precipitated through a variety of mechanisms, including directly binding and deactivating the anticholinesterase itself, blocking the reaction between the anticholinesterase and the acetylcholinesterase, changing the release of acetylcholine, blocking acetylcholine's cholinolytic effect, or by increasing the excretion of the anticholinesterase.
Available data from the experiments and from prescribing information from modern marketing of these substances concludes that little risk exists of long term effects from exposure. It is noted, however, in both the prescribing information for modern variants and in toxicological research on the subject that it has been the subject of insufficient research to conclude this beyond a reasonable doubt. In total, 219 subjects were tested with 4 agents.
Psychochemical related experiments
The 1976 report on the matter identifies the sole objective of the psychochemical experiments as determining the impact such agents would have on the morale and efficacy of military units. It appears that these experiments were first called for in 1954, after the attendees of the First Psychochemical Conference informed the Department of Defense that human trials were indicated. In 1957, the first report of such trials was received, detailing a four-person experiment in which subjects attempted to decontaminate themselves of a mock agent while under the influence of LSD.
LSD experiments
The LSD experiments are perhaps the best documented of the psychochemical experiments of the time, garnering at least two significant independent reports. LSD is a psychedelic drug that acts as a serotonin and dopamine receptor agonist, producing hallucinations, euphoria, and a wide variety of physiological symptoms.
Available data describe a wide range of doses used in the experiments, from approximately 2 μg/kg to 16 μg/kg. A typical dose for recreational use is around 100 μg, or about 1.1 μg/kg for the average adult male in the U.S., meaning the lowest dose used in experimentation was almost twice the typical recreational dose and the highest dose was nearly fifteen times the typical recreational dose. Because of limited documentation, it is difficult to ascertain which experiments occurred at which installations, but available documentation describes several general types of experiments, which included presenting individuals with radar symbols for interpretation, having them track a simulated aircraft, having them read a map, having them interpret meteorological data, and having them attempt to defend an installation against a simulated hostile aircraft attack with 40-mm antiaircraft automatic weapons. Results varied between experiments, but typically showed significant impairment at all doses, with impairment increasing as dose did.
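The dose comparison above can be checked with a few lines of arithmetic. The body-mass and recreational-dose figures below are the illustrative assumptions already stated in the text (roughly 90 kg and 100 μg), not values taken from the experimental reports:

typical_per_kg = 100 / 90                     # ~1.1 ug/kg for a ~90 kg adult
low_dose, high_dose = 2.0, 16.0               # experimental dose range, ug/kg
print(round(low_dose / typical_per_kg, 1))    # ~1.8, i.e. almost twice the typical dose
print(round(high_dose / typical_per_kg, 1))   # ~14.4, i.e. nearly fifteen times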
Available data concluded that long-term effects of LSD exposure, in the Edgewood Arsenal experiments as well as in the other associated experiments conducted concurrently by the Army Chemical Corps, were minimal. The exceptions were a possible small increase in congenital heart disease among offspring of the experimental subjects, and neuropsychological abnormalities in 9% of the participants that could not be explained by causes other than LSD exposure, most of which were considered mild. All testing of LSD at Edgewood Arsenal, and on behalf of the Army Chemical Corps generally, is reported to have been abandoned on or around April 1963.
BZ experiments
3-Quinuclidinyl benzilate, or BZ, is a substance that interferes with the central nervous system and the peripheral nervous system by blocking the action of acetylcholine. Existing documentation admits only that the substance was tested at Edgewood Arsenal; all other data, including the medical records of the subjects, are missing. Because of the extremely limited data, it is impossible to draw conclusions about possible side effects from exposure.
Scandal and termination
In September 1975, the Medical Research Volunteer Program was discontinued and all resident volunteers were removed from the Edgewood installation. The founder and director of the program, Van Murray Sim, was called before Congress and chastised by outraged lawmakers, who questioned the absence of follow-up care for the human volunteers. An Army investigation subsequently found no evidence of serious injuries or deaths associated with the MRVP, but deplored both the recruiting process and the informed consent approach, which they characterized as "suggest[ing] possible coercion".
Aftermath
Government reports
1982-85 IOM report
The Institute of Medicine (IOM) published a three-volume report on the Edgewood research in 1982–1985, Possible Long-Term Health Effects of Short-Term Exposure to Chemical Agents.
The three volumes were:
Vol. 1, "Anticholinesterases and Anticholinergics" (1982).
Vol. 2, "Cholinesterase Reactivators, Psychochemicals and Irritants and Vesicants" (1984)
Vol. 3, "Final Report: Current Health Status of Test Subjects" (1985)
The National Academy of Sciences, which oversees the IOM, sent a questionnaire to all of the former volunteers that could be located, approximately 60% of the total. The lack of a detailed record hampered the investigation. The study could not rule out long-term health effects related to exposure to the nerve agents. It concluded that "Whether the subjects at Edgewood incurred these changes [depression, cognitive deficits, tendency to suicide] and to what extent they might now show these effects are not known". With regard specifically to BZ and related compounds, the IOM study concluded that "available data suggest that long-term toxic effects and/or delayed sequellae are unlikely".
2004 GAO report
A General Accounting Office (GAO) report of May 2004, Chemical and Biological Defense: DOD Needs to Continue to Collect and Provide Information on Tests and Potentially Exposed Personnel (pp. 1, 24), stated:
[In 1993 and 1994] we [...] reported that the Army Chemical Corps conducted a classified medical research program for developing incapacitating agents. This program involved testing nerve agents, nerve agent antidotes, psycho chemicals, and irritants. The chemicals were given to volunteer service members at Edgewood Arsenal, Maryland; Dugway Proving Ground, Utah; and Forts Benning, Bragg, and McClellan. In total, Army documents identified 7,120 Army and Air Force personnel who participated in these tests. Further, GAO concluded that precise information on the scope and the magnitude of tests involving human subjects was not available, and the exact number of human subjects might never be known.
Safety debates
The official position of the Department of Defense, based on the three-volume set of studies by the Institute of Medicine mentioned above, is that the studies "did not detect any significant long-term health effects on the Edgewood Arsenal volunteers". The safety record of the Edgewood Arsenal experiments was also defended in the memoirs of psychiatrist and retired colonel James Ketchum, a key scientist in the program.
As late as 2014, information was incomplete; IOM could not conduct adequate medical studies related to similar former US biowarfare programs, because relevant classified documents had not been declassified and released.
Even a book critical of the program, written by Lynn C. Klotz and Edward J. Sylvester, acknowledges that:
Unlike the CIA program, research subjects [at Edgewood] all signed informed consent forms, both a general one and another related to any experiment they were to participate in. Experiments were carried out with safety of subjects a principal focus. [...] At Edgewood, even at the highest doses it often took an hour or more for incapacitating effects to show, and the end-effects usually did not include full incapacitation, let alone unconsciousness. After all, the Edgewood experimenters were focused on disabling soldiers in combat, where there would be tactical value simply in disabling the enemy.
Lawsuits
The U.S. Army believed that legal liability could be avoided by concealing the experiments. However, once the experiments were uncovered, the U.S. Senate also concluded that they were of questionable legality and strongly condemned them.
In the 1990s, the law firm Morrison & Foerster agreed to take on a class-action lawsuit against the government related to the Edgewood volunteers. The plaintiffs collectively referred to themselves as the "Test Vets".
In 2009 a lawsuit was filed by veterans rights organizations Vietnam Veterans of America, and Swords to Plowshares, and eight Edgewood veterans or their families against CIA, the U.S. Army, and other agencies. The complaint asked the court to determine that defendants' actions were illegal and that the defendants have a duty to notify all victims and to provide them with health care. In the suit, Vietnam Veterans of America, et al. v. Central Intelligence Agency, et al. Case No. CV-09-0037-CW, U.S.D.C. (N.D. Cal. 2009), the plaintiffs did not seek monetary damages. Instead, they sought only declaratory and injunctive relief and redress for what they claimed was several decades of neglect and the U.S. government's use of them as human guinea pigs in chemical and biological agent testing experiments.
The plaintiffs cited:
The use of troops to test nerve gas, psychochemicals, and thousands of other toxic chemical or biological substances.
A failure to secure informed consent and other widespread failures to follow the precepts of U.S. and international law regarding the use of human subjects, including the 1953 Wilson Directive and the Nuremberg Code.
A refusal to satisfy their legal and moral obligations to locate the victims of experiments or to provide health care or compensation to them
A deliberate destruction of evidence and files documenting their illegal actions, actions which were punctuated by fraud, deception, and a callous disregard for the value of human life.
On July 24, 2013, United States District Court Judge Claudia Wilken issued an order granting in part and denying in part plaintiffs' motion for summary judgment and granting in part and denying in part defendants' motion for summary judgment. The court resolved all of the remaining claims in the case and vacated trial. The court granted the plaintiffs partial summary judgment concerning the notice claim: summarily adjudicating in plaintiffs' favor, finding that "the Army has an ongoing duty to warn" and ordering "the Army, through the DVA or otherwise, to provide test subjects with newly acquired information that may affect their well-being that it has learned since its original notification, now and in the future as it becomes available". The court granted the defendants' motion for summary judgment with respect to the other claims.
On appeal in Vietnam Veterans of America v. Central Intelligence Agency, a panel majority held in July 2015 that Army Regulation 70-25 (AR 70-25) created an independent duty to provide ongoing medical care to veterans who participated in U.S. chemical and biological testing programs. The prior finding held that the Army has an ongoing duty to seek out and provide "notice" to former test participants of any new information that could potentially affect their health.
List of notable EA (Edgewood Arsenal) numbered chemicals
EA 1152 - Diisopropyl fluorophosphate (DFP)
EA 1205 - Tabun (GA)
EA 1208 - Sarin (GB)
EA 1210 - Soman (GD)
EA 1212 - Cyclosarin (GF)
EA 1285 - Tetraethyl pyrophosphate (TEPP)
EA 1298 - Methylenedioxyamphetamine (MDA), an analogue and active metabolite of MDMA
EA 1508 - VG
EA 1517 - VE
EA 1653 - LSD in tartrate form
EA 1664 - Edemo (VM)
EA 1701 - VX
EA 1729 - LSD in free base form
EA 1779 - CS gas
EA 2092 - Benactyzine
EA 2148-A - Phencyclidine (PCP)
EA 2233 - A dimethylheptylpyran variant
Eight individual isomers numbered EA-2233-1 through EA-2233-8
EA 2277 - BZ ("Substance 78" to Soviets)
EA 3148 - A "V-series" nerve agent, Cyclopentyl S-2-diethylaminoethyl methylphosphonothiolate ("Substance 100A" to Soviets)
EA 3167 - A BZ variant
EA 3443 - A BZ variant
EA 3528 - LSD in maleate form
EA 3580 - A BZ variant
EA 3834 - A BZ variant
EA-4929 - An enantiomer of the drug Dexetimide, also known as benzetimide
EA 4942 - Etonitazene in free base form
EA 5365 - GV
EA 5823 - Sarin (GB) as a binary agent from mixing OPA (isopropyl alcohol+isopropyl amine) + DF
List of notable CS (Chemical Structure) and CAS (Chemical Abstracts Service) numbered chemicals used in the Edgewood Arsenal Experiments
The following chemicals were identified by the National Academies of Sciences, Engineering, and Medicine as having been used in the Edgewood Arsenal Experiments, though they did not receive an EA number designation.
CS 12602 - Tacrine an acetylcholinesterase inhibitor and indirect cholinergic agonist (parasympathomimetic)
CS 58525 - Eserine a highly toxic parasympathomimetic alkaloid, specifically, a reversible cholinesterase inhibitor
CAS 59-99-4 - Neostigmine an acetylcholinesterase inhibitor
CAS 317–52–2 - Hexafluronium bromide a nicotinic acetylcholine receptor antagonist related to Curare
CAS 155–97–5 - Pyridostigmine bromide an acetylcholinesterase inhibitor implicated in Gulf War syndrome
CAS 121–75–5 - Malathion an acetylcholinesterase inhibitor and organophosphate pesticide
CAS 62–51–1 - Methacholine, a synthetic choline ester that acts as a non-selective muscarinic receptor agonist
CAS 674-38-4 - Bethanechol, a parasympathomimetic choline carbamate
CAS 51–34–3 - Scopolamine an anticholinergic drug studied by the CIA as a truth serum
CAS 8015–54–1 - Ditran an anticholinergic drug mixture, related to the chemical warfare agent 3-Quinuclidinyl benzilate
CAS 101-31-5 - Hyoscyamine a levorotary isomer of atropine
CAS 155–41–9 - Methylscopolamine bromide a muscarinic antagonist scopolamine derivative
CAS 31610-87-4 - Methylatropine a belladonna derivative
See also
THC-O-acetate
CB military symbol
United States chemical weapons program
Edgewood Chemical Biological Center
Human experimentation in the United States
Swords to Plowshares
United States v. Stanley
References
General sources
Two autobiographical books from psychiatrists conducting human experiments at Edgewood have been self-published:
Men and Poisons: The Edgewood Volunteers and the Army Chemical Warfare Research Program (2005), Xlibris Corporation, 140 pp, was written by Malcolm Baker Bowers Jr, who went on to become a professor of psychiatry at Yale. Bowers' book is a "fictionalized" account with names changed.
Chemical Warfare Secrets Almost Forgotten, A Personal Story of Medical Testing of Army Volunteers with Incapacitating Chemical Agents During the Cold War (1955–1975) (2006, 2nd edition 2007), foreword by Alexander Shulgin, ChemBook, Inc., 360 pp, was written by Ketchum who was a key player after 1960 and went on to become a professor at the University of California, Los Angeles.
The Vanderbilt University Television News Archive has two videos about the experiments, both from a July 1975 NBC Evening News segment.
NBC newsman John Chancellor reported on how Norman Augustine, then-acting Secretary of Army, ordered a probe of Army use of LSD in soldier and civilian experiments.
Correspondent Tom Pettit reported on Major General Lloyd Fellenz, from Edgewood Arsenal, who explained how the experiments there were about searching for humane weapons, adding that the use of LSD was unacceptable.
Journalist Linda Hunt, citing records from the U.S. National Archives, revealed that eight German scientists worked at Edgewood, under Project Paperclip. Hunt used this finding to assert that in this collaboration, US and former Nazi scientists "used Nazi science as a basis for Dachau-like experiments on over 7,000 U.S. soldiers".
A The Washington Post article, dated July 23, 1975, by Bill Richards ("6,940 Took Drugs") reported that a top civilian drug researcher for the Army said a total of 6,940 servicemen had been involved in Army chemical and drug experiments, and that, furthermore, the tests were proceeding at Edgewood Arsenal as of the date of the article.
Two TV documentaries, with different content but confusingly similar titles were broadcast:
Bad Trip to Edgewood (1993) on ITV Yorkshire
Bad Trip to Edgewood (1994) on A&E Investigative Reports.
In 2012, the Edgewood/Aberdeen experiments were featured on CNN and in The New Yorker magazine.
Citations
External links
Edgewood Test Vets: Vietnam Veterans of America, et al. v. Central Intelligence Agency, et al. Case No. CV-09-0037-CW, U.S.D.C. (N.D. Cal. 2009), Morrison & Foerster LLP, August 7, 2013
Hunt, Secret Agenda: The U.S. Government, Nazi Scientists and Project Paperclip 1945-1991.
Secrets of Edgewood, The New Yorker, December 26, 2012
Edgewood/Aberdeen Experiments, U.S. Department of Veterans Affairs
David S. Martin, Vets feel abandoned after secret drug experiments, CNN, March 1, 2012
Tom Bowman, Former sergeant seeks compensation for LSD testing at Edgewood Arsenal, July 11, 1991, The Baltimore Sun
Central Intelligence Agency operations
Chemical warfare
History of the government of the United States
Human subject research in the United States
20th-century military history of the United States
Military psychiatry
Mind control
Psychedelic drug research
Cannabis research
Cannabis and the United States military
Articles containing video clips | Edgewood Arsenal human experiments | [
"Chemistry"
] | 5,498 | [
"nan"
] |
5,640,993 | https://en.wikipedia.org/wiki/Fermentation%20crock | A fermentation crock, also known as a gärtopf crock or Harsch crock, is a crock for fermentation. It has a gutter in the rim which is then filled with water so that when the top is put on an airlock is created, which prevents the food within from spoiling due to the development of surface molds. Ceramic weights may also be used to keep the fermenting food inside submerged. A fermentation tamper, a wide stick or dowel usually made of wood, may be used to pack food into the crock to keep it below the surface of the brine.
See also
Sauerkraut
External links
Image with cross-section through crock
Cooking vessels
Crock | Fermentation crock | [
"Chemistry",
"Biology"
] | 155 | [
"Biochemistry",
"Cellular respiration",
"Fermentation"
] |
5,641,106 | https://en.wikipedia.org/wiki/Breast%20MRI | One alternative to mammography, breast MRI or contrast-enhanced magnetic resonance imaging (MRI), has shown substantial progress in the detection of breast cancer.
Uses
Some of the uses of MRI of the breasts are: screening for malignancy in women with a greater than 20% lifetime risk of breast cancer (especially those with high-risk genes such as BRCA1 and BRCA2), evaluating breast implants for rupture, screening the opposite (contralateral) breast for malignancy in women with known breast malignancy on one side, determining the extent of disease and the presence of multifocality and multicentricity in patients with invasive carcinoma and ductal carcinoma in situ (DCIS), and evaluating response to neoadjuvant chemotherapy.
Breast MRI has the highest sensitivity for detecting breast cancer when compared with other imaging modalities such as breast ultrasound or mammography. In screening high-risk women for breast cancer, the sensitivity of MRI ranges from 83% to 94%, while specificity (the proportion of those without cancer correctly identified as negative) ranges from 75.2% to 100%.
Nephrogenic systemic fibrosis
The systemic disease nephrogenic systemic fibrosis (NSF), caused by exposure to gadolinium in MRI contrast agents, resembles scleromyxedema and to some extent scleroderma. It may occur months after contrast has been injected. Patients with poorer kidney function are more at risk for NSF, with dialysis patients being more at risk than patients with chronic kidney disease. After several years of controversy, during which up to 100 Danish patients were poisoned by gadolinium (and some died) after use of the contrast agent Omniscan, the Norwegian medical company Nycomed admitted that it was aware of some dangers of using gadolinium-based agents in its product. At present, NSF has been linked to the use of four gadolinium-containing MRI contrast agents.
References
Magnetic resonance imaging
Breast imaging | Breast MRI | [
"Chemistry"
] | 412 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
5,641,588 | https://en.wikipedia.org/wiki/Naphthalenesulfonate | Naphthalenesulfonates are derivatives of sulfonic acid which contain a naphthalene functional unit. A related family of compounds are the aminonaphthalenesulfonic acids. Of commercial importance are the alkylnaphthalene sulfonates, which are used as superplasticizers in concrete. They are produced on a large scale by condensation of naphthalenesulfonate or alkylnaphthalenesulfonates with formaldehyde.
Examples include:
amaranth dye
amido black
Armstrong's acid
Congo red
Evans blue
suramin
trypan blue
References
External links
Naphthalenesulfonates | Naphthalenesulfonate | [
"Chemistry"
] | 139 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
5,641,711 | https://en.wikipedia.org/wiki/European%20Northern%20Observatory | The European Northern Observatory (ENO) is the name by which the Instituto de Astrofísica de Canarias and its observatories (the Teide Observatory on Tenerife and the Roque de los Muchachos Observatory on La Palma) are collectively known. Its name is a word play on the successful collaboration of the member countries in the European Southern Observatory organisation.
See also
European Southern Observatory
References
External links
ENO
Astronomical observatories in the Canary Islands
Astronomy in Europe
Astronomy institutes and departments | European Northern Observatory | [
"Astronomy"
] | 106 | [
"Astronomy organizations",
"Astronomy institutes and departments"
] |
5,642,287 | https://en.wikipedia.org/wiki/Diamine | A diamine is an amine with exactly two amino groups. Diamines are used as monomers to prepare polyamides, polyimides, and polyureas. The term diamine refers mostly to primary diamines, as those are the most reactive.
In terms of quantities produced, 1,6-diaminohexane (a precursor to Nylon 6-6) is most important, followed by ethylenediamine. Vicinal diamines (1,2-diamines) are a structural motif in many biological compounds and are used as ligands in coordination chemistry.
Geminal diamines (1,1-diamines) are usually reactive intermediates in transimination reactions and the reduction of amidines, in aqueous conditions they preferentially eliminate the less basic amine to leave an iminium ion. Some stable geminal diamines have been isolated.
Aliphatic diamines
Linear
1 carbon: methylenediamine (diaminomethane) of theoretical interest only, but its hydrochloride can be used in the synthesis of amides.
2 carbon backbone: ethylenediamine (1,2-diaminoethane). Related derivatives include the N-alkylated compounds, 1,1-dimethylethylenediamine, 1,2-dimethylethylenediamine, ethambutol, tetrakis(dimethylamino)ethylene, TMEDA.
3 carbon backbone: 1,3-diaminopropane (propane-1,3-diamine)
4 carbon backbone: putrescine (butane-1,4-diamine)
5 carbon backbone: cadaverine (pentane-1,5-diamine)
6 carbon backbone: hexamethylenediamine (hexane-1,6-diamine)
Branched
Derivatives of ethylenediamine are prominent:
1,2-diaminopropane, which is chiral.
2,3-Butanediamine, two diastereomers, one of which is C2-symmetric.
Diphenylethylenediamine, two diastereomers, one of which is C2-symmetric.
1,2-Diaminocyclohexane, two diastereomers, one of which is C2-symmetric.
trimethylhexamethylenediamine, several isomers
Cyclic
1,4-Diazacycloheptane
Xylylenediamines
Xylylenediamines are classified as alkylamines since the amine is not directly attached to an aromatic ring.
o-xylylenediamine or OXD
m-xylylenediamine or MXD
p-xylylenediamine or PXD
Aromatic diamines
Three phenylenediamines are known:
o-phenylenediamine or OPD
m-phenylenediamine or MPD
p-phenylenediamine or PPD. 2,5-diaminotoluene is related to PPD but contains a methyl group on the ring.
Various N-methylated derivatives of the phenylenediamines are known:
dimethyl-4-phenylenediamine, a reagent.
N,N'-di-2-butyl-1,4-phenylenediamine, an antioxidant.
Examples with two aromatic rings include derivatives of biphenyl and naphthalene:
4,4'-diaminobiphenyl
1,8-diaminonaphthalene
References
External links
Synthesis of diamines
Monomers | Diamine | [
"Chemistry",
"Materials_science"
] | 767 | [
"Monomers",
"Polymer chemistry"
] |
5,642,452 | https://en.wikipedia.org/wiki/History%20of%20information%20theory | The decisive event which established the discipline of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948.
In this revolutionary and groundbreaking paper, work which Shannon had substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that
"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the ideas of
the information entropy and redundancy of a source, and its relevance through the source coding theorem;
the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; and of course
the bit - a new way of seeing the most fundamental unit of information.
Before 1948
Early telecommunications
Some of the oldest methods of telecommunications implicitly use many of the ideas that would later be quantified in information theory. Modern telegraphy, starting in the 1830s, used Morse code, in which more common letters (like "E", which is expressed as one "dot") are transmitted more quickly than less common letters (like "J", which is expressed by one "dot" followed by three "dashes"). The idea of encoding information in this manner is the cornerstone of lossless data compression. A hundred years later, frequency modulation illustrated that bandwidth can be considered merely another degree of freedom. The vocoder, now largely looked at as an audio engineering curiosity, was originally designed in 1939 to use less bandwidth than that of an original message, in much the same way that mobile phones now trade off voice quality with bandwidth.
Quantitative ideas of information
The most direct antecedents of Shannon's work were two papers published in the 1920s by Harry Nyquist and Ralph Hartley, who were both still research leaders at Bell Labs when Shannon arrived in the early 1940s.
Nyquist's 1924 paper, "Certain Factors Affecting Telegraph Speed", is mostly concerned with some detailed engineering aspects of telegraph signals. But a more theoretical section discusses quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation
W = K log m
where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant.
Hartley's 1928 paper, called simply "Transmission of Information", went further by using the word information (in a technical sense), and making explicitly clear that information in this context was a measurable quantity, reflecting only the receiver's ability to distinguish that one sequence of symbols had been intended by the sender rather than any other—quite regardless of any associated meaning or other psychological or semantic aspect the symbols might represent. This amount of information he quantified as
H = n log S
where S was the number of possible symbols, and n the number of symbols in a transmission. The natural unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit or scale or measure of information. The Hartley information, H0, is still used as a quantity for the logarithm of the total number of possibilities.
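A short numerical illustration of Hartley's measure follows; the symbol counts are arbitrary example values, not figures from Hartley's paper:

import math

S, n = 10, 3                    # alphabet of 10 equally likely symbols, message of 3 symbols
H0 = n * math.log10(S)          # 3.0 hartleys (decimal digits)
H0_bits = n * math.log2(S)      # ~9.97 bits, the same quantity in Shannon's later unit
print(H0, H0_bits)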
A similar unit of log10 probability, the ban, and its derived unit the deciban (one tenth of a ban), were introduced by Alan Turing in 1940 as part of the statistical analysis of the breaking of the German second world war Enigma cyphers. The decibannage represented the reduction in (the logarithm of) the total number of possibilities (similar to the change in the Hartley information); and also the log-likelihood ratio (or change in the weight of evidence) that could be inferred for one hypothesis over another from a set of observations. The expected change in the weight of evidence is equivalent to what was later called the Kullback discrimination information.
But underlying this notion was still the idea of equal a-priori probabilities, rather than the information content of events of unequal probability; nor yet any underlying picture of questions regarding the communication of such varied outcomes.
In a 1939 letter to Vannevar Bush, Shannon had already outlined some of his initial ideas of information theory.
Entropy in statistical mechanics
One area where unequal probabilities were indeed well known was statistical mechanics, where Ludwig Boltzmann had, in the context of his H-theorem of 1872, first introduced the quantity
H = −∑ f_i log f_i
as a measure of the breadth of the spread of states available to a single particle in a gas of like particles, where f represented the relative frequency distribution of each possible state. Boltzmann argued mathematically that the effect of collisions between the particles would cause the H-function to inevitably increase from any initial configuration until equilibrium was reached; and further identified it as an underlying microscopic rationale for the macroscopic thermodynamic entropy of Clausius.
Boltzmann's definition was soon reworked by the American mathematical physicist J. Willard Gibbs into a general formula for statistical-mechanical entropy, no longer requiring identical and non-interacting particles, but instead based on the probability distribution pi for the complete microstate i of the total system:
S = −k_B ∑_i p_i ln p_i
This (Gibbs) entropy, from statistical mechanics, can be found to correspond directly to Clausius's classical thermodynamic definition.
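As a concrete sketch of the discrete formula (computed in bits rather than the thermodynamic units above, and for arbitrary example distributions), assuming only the Python standard library:

import math

def shannon_entropy(p, base=2):
    """Entropy of a discrete distribution p, ignoring zero-probability outcomes."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: two equally likely outcomes
print(shannon_entropy([0.9, 0.1]))    # ~0.47 bits: an unequal distribution carries less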
Shannon himself was apparently not particularly aware of the close similarity between his new measure and earlier work in thermodynamics, but John von Neumann was. It is said that, when Shannon was deciding what to call his new measure and fearing the term 'information' was already over-used, von Neumann told him firmly: "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage."
(Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored further in the article Entropy in thermodynamics and information theory).
Development since 1948
The publication of Shannon's 1948 paper, "A Mathematical Theory of Communication", in the Bell System Technical Journal was the founding of information theory as we know it today. Many developments and applications of the theory have taken place since then, which have made many modern devices for data communication and storage such as CD-ROMs and mobile phones possible.
Notable later developments are listed in a timeline of information theory, including:
The 1951 invention of Huffman encoding, a method of finding optimal prefix codes for lossless data compression (a minimal construction is sketched after this timeline).
Irving S. Reed and David E. Muller proposing Reed–Muller codes in 1954.
The 1960 proposal of Reed–Solomon codes.
In 1966, Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) develop linear predictive coding (LPC), a form of speech coding.
In 1968, Elwyn Berlekamp invents the Berlekamp–Massey algorithm; its application to decoding BCH and Reed–Solomon codes is pointed out by James L. Massey the following year.
In 1972, Nasir Ahmed proposes the discrete cosine transform (DCT). It later becomes the most widely used lossy compression algorithm, and the basis for digital media compression standards from 1988 onwards, including H.26x (since H.261) and MPEG video coding standards, JPEG image compression, MP3 audio compression, and Advanced Audio Coding (AAC).
In 1976, Gottfried Ungerboeck gives the first paper on trellis modulation; a more detailed exposition in 1982 leads to a raising of analogue modem POTS speeds from 9.6 kbit/s to 33.6 kbit/s
In 1977, Abraham Lempel and Jacob Ziv develop Lempel–Ziv compression (LZ77)
In the early 1980s, Renuka P. Jindal at Bell Labs improves the noise performance of metal–oxide–semiconductor (MOS) devices, resolving issues that limited their receiver sensitivity and data rates. This leads to the wide adoption of MOS technology in laser lightwave systems and wireless terminal applications, enabling Edholm's law.
In 1989, Phil Katz publishes the .zip format including DEFLATE (LZ77 + Huffman coding); later to become the most widely used archive container.
In 1995, Benjamin Schumacher coins the term qubit and proves the quantum noiseless coding theorem.
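As referenced in the 1951 entry above, the following is a minimal Python sketch of a Huffman-code construction. It illustrates the idea (greedily merging the two lightest subtrees) rather than any particular historical implementation; the sample text is arbitrary:

import heapq
from collections import Counter

def huffman_code(text):
    # Each heap entry is [weight, tie_breaker, {symbol: code_so_far}].
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)            # lightest subtree
        hi = heapq.heappop(heap)            # next lightest
        for sym in lo[2]:
            lo[2][sym] = "0" + lo[2][sym]   # prepend a bit as the subtrees are merged
        for sym in hi[2]:
            hi[2][sym] = "1" + hi[2][sym]
        heapq.heappush(heap, [lo[0] + hi[0], tie, {**lo[2], **hi[2]}])
        tie += 1
    return heap[0][2]

codes = huffman_code("this is an example of a huffman tree")
# Frequent symbols (such as the space) receive short codewords; rare symbols receive long ones.

Because no codeword is a prefix of another, the encoded bit stream can be decoded unambiguously, which is what makes such a code usable for lossless compression.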
See also
Timeline of information theory
Claude Shannon
Ralph Hartley
H-theorem
References
Information theory
Information Theory | History of information theory | [
"Mathematics",
"Technology",
"Engineering"
] | 1,831 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
5,642,519 | https://en.wikipedia.org/wiki/Soil%20zoology | Soil zoology or pedozoology is the study of animals living fully or partially in the soil (soil fauna). The field of study was developed in the 1940s by Mercury Ghilarov in Russia. Ghilarov noted inverse relationships between size and numbers of soil organisms. He also suggested that soil included water, air and solid phases and that soil may have provided the transitional environment between aquatic and terrestrial life. The phrase was apparently first used in the English speaking world at a conference of soil zoologists presenting their research at the University of Nottingham, UK, in 1955.
See also
Biogeochemical cycle
Soil ecology
Zoology
References
Bibliography
Safwat H. Shakir Hanna, ed, 2004, Soil Zoology For Sustainable Development In The 21st century: A Festschrift in Honour of Prof. Samir I. Ghabbour on the Occasion of His 70th Birthday, Cairo, .
External links
D. Keith McE. Kevan, Ethnoentomologist, Cultural Entomology Digest 3
Soil biology
Edaphology
Soil science | Soil zoology | [
"Biology"
] | 211 | [
"Soil biology"
] |
5,642,583 | https://en.wikipedia.org/wiki/Lorenz%20system | The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. The term "butterfly effect" in popular media may stem from the real-world implications of the Lorenz attractor, namely that tiny changes in initial conditions evolve to completely different trajectories. This underscores that chaotic systems can be completely deterministic and yet still be inherently impractical or even impossible to predict over longer periods of time. For example, even the small flap of a butterfly's wings could set the earth's atmosphere on a vastly different trajectory, in which for example a hurricane occurs where it otherwise would have not (see Saddle points). The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
Overview
In 1963, Edward Lorenz, with the help of Ellen Fetter, who was responsible for the numerical simulations and figures, and Margaret Hamilton, who helped in the initial numerical computations leading up to the findings of the Lorenz model, developed a simplified mathematical model for atmospheric convection. The model is a system of three ordinary differential equations now known as the Lorenz equations:
dx/dt = σ(y − x),
dy/dt = x(ρ − z) − y,
dz/dt = xy − βz.
The equations relate the properties of a two-dimensional fluid layer uniformly warmed from below and cooled from above. In particular, the equations describe the rate of change of three quantities with respect to time: is proportional to the rate of convection, to the horizontal temperature variation, and to the vertical temperature variation. The constants , , and are system parameters proportional to the Prandtl number, Rayleigh number, and certain physical dimensions of the layer itself.
The Lorenz equations can arise in simplified models for lasers, dynamos, thermosyphons, brushless DC motors, electric circuits, chemical reactions and forward osmosis. The same Lorenz equations were also derived in 1963 by Sauermann and Haken for a single-mode laser. In 1975, Haken realized that their equations derived in 1963 were mathematically equivalent to the original Lorenz equations. Haken's paper thus started a new field called laser chaos or optical chaos. The Lorenz equations are often called the Lorenz–Haken equations in the optics literature. Later, the complex version of the Lorenz equations was also shown to have equivalent laser equations.
The Lorenz equations are also the governing equations in Fourier space for the Malkus waterwheel. The Malkus waterwheel exhibits chaotic motion where instead of spinning in one direction at a constant speed, its rotation will speed up, slow down, stop, change directions, and oscillate back and forth between combinations of such behaviors in an unpredictable manner.
From a technical standpoint, the Lorenz system is nonlinear, aperiodic, three-dimensional and deterministic. The Lorenz equations have been the subject of hundreds of research articles, and at least one book-length study.
Analysis
One normally assumes that the parameters σ, ρ, and β are positive. Lorenz used the values σ = 10, ρ = 28, and β = 8/3. The system exhibits chaotic behavior for these (and nearby) values.
If ρ < 1 then there is only one equilibrium point, which is at the origin. This point corresponds to no convection. All orbits converge to the origin, which is a global attractor, when ρ < 1.
A pitchfork bifurcation occurs at ρ = 1, and for ρ > 1 two additional critical points appear at
(x, y, z) = (±√(β(ρ − 1)), ±√(β(ρ − 1)), ρ − 1).
These correspond to steady convection. This pair of equilibrium points is stable only if
ρ < σ(σ + β + 3)/(σ − β − 1),
which can hold only for positive ρ if σ > β + 1. At the critical value, both equilibrium points lose stability through a subcritical Hopf bifurcation.
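A minimal numerical check of the expressions above, evaluated at Lorenz's parameter values (the variable names are ad hoc):

import math

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0

c = math.sqrt(beta * (rho - 1))          # coordinate of the two convective fixed points
C_plus, C_minus = (c, c, rho - 1), (-c, -c, rho - 1)

rho_H = sigma * (sigma + beta + 3) / (sigma - beta - 1)   # Hopf instability threshold
print(C_plus, C_minus, rho_H)            # rho_H ~ 24.74

Since ρ = 28 exceeds the threshold of roughly 24.74, both convective equilibria are unstable for Lorenz's choice of parameters, consistent with the chaotic behavior described below.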
When σ = 10, β = 8/3, and ρ = 28, the Lorenz system has chaotic solutions (but not all solutions are chaotic). Almost all initial points will tend to an invariant set, the Lorenz attractor, which is a strange attractor, a fractal, and a self-excited attractor with respect to all three equilibria. Its Hausdorff dimension is estimated from above by the Lyapunov dimension (Kaplan–Yorke dimension) as 2.06 ± 0.01, and the correlation dimension is estimated to be 2.05 ± 0.01. The exact Lyapunov dimension formula of the global attractor can be found analytically under classical restrictions on the parameters:
D_L = 3 − 2(σ + β + 1)/(σ + 1 + √((σ − 1)² + 4ρσ)).
The Lorenz attractor is difficult to analyze, but the action of the differential equation on the attractor is described by a fairly simple geometric model. Proving that this is indeed the case is the fourteenth problem on the list of Smale's problems. This problem was the first one to be resolved, by Warwick Tucker in 2002.
For other values of ρ, the system displays knotted periodic orbits. For example, with ρ = 99.96 it becomes a T(3,2) torus knot.
Connection to tent map
In Figure 4 of his paper, Lorenz plotted the relative maximum value in the z direction achieved by the system against the previous relative maximum in the z direction. This procedure later became known as a Lorenz map (not to be confused with a Poincaré plot, which plots the intersections of a trajectory with a prescribed surface). The resulting plot has a shape very similar to the tent map. Lorenz also found that when the maximum value is above a certain cut-off, the system will switch to the next lobe. Combining this with the chaos known to be exhibited by the tent map, he showed that the system switches between the two lobes chaotically.
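The construction can be reproduced with a few lines of Python. The sketch below uses a simple fixed-step Euler integration, adequate for illustration but not for precise work; the step size, run length, and initial condition are arbitrary choices:

import numpy as np

def lorenz(v, s=10.0, r=28.0, b=8.0/3.0):
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

dt, steps = 0.005, 200_000
v = np.array([1.0, 1.0, 1.0])
zs = np.empty(steps)
for i in range(steps):
    v = v + dt * lorenz(v)                 # Euler step
    zs[i] = v[2]

# Relative maxima of z: samples larger than both neighbours.
peaks = zs[1:-1][(zs[1:-1] > zs[:-2]) & (zs[1:-1] > zs[2:])]
pairs = np.column_stack([peaks[:-1], peaks[1:]])   # (z_n, z_{n+1}) pairs

Plotting the second column of pairs against the first reproduces the tent-shaped Lorenz map described above.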
A Generalized Lorenz System
Over the past several years, a series of papers on high-dimensional Lorenz models has yielded a generalized Lorenz model, which can be simplified into the classical Lorenz model for three state variables, or into a five-dimensional Lorenz model for five state variables.
A choice of the additional parameter value has been applied to be consistent with the choice of the other parameters; see the cited studies for details.
Simulations
Julia simulation
using Plots
# define the Lorenz attractor
@kwdef mutable struct Lorenz
dt::Float64 = 0.02
σ::Float64 = 10
ρ::Float64 = 28
β::Float64 = 8/3
x::Float64 = 2
y::Float64 = 1
z::Float64 = 1
end
function step!(l::Lorenz)
dx = l.σ * (l.y - l.x)
dy = l.x * (l.ρ - l.z) - l.y
dz = l.x * l.y - l.β * l.z
l.x += l.dt * dx
l.y += l.dt * dy
l.z += l.dt * dz
end
attractor = Lorenz()
# initialize a 3D plot with 1 empty series
plt = plot3d(
1,
xlim = (-30, 30),
ylim = (-30, 30),
zlim = (0, 60),
title = "Lorenz Attractor",
marker = 2,
)
# build an animated gif by pushing new points to the plot, saving every 10th frame
@gif for i=1:1500
step!(attractor)
push!(plt, attractor.x, attractor.y, attractor.z)
end every 10
Maple simulation
deq := [diff(x(t), t) = 10*(y(t) - x(t)), diff(y(t), t) = 28*x(t) - y(t) - x(t)*z(t), diff(z(t), t) = x(t)*y(t) - 8/3*z(t)]:
with(DEtools):
DEplot3d(deq, {x(t), y(t), z(t)}, t = 0 .. 100, [[x(0) = 10, y(0) = 10, z(0) = 10]], stepsize = 0.01, x = -20 .. 20, y = -25 .. 25, z = 0 .. 50, linecolour = sin(t*Pi/3), thickness = 1, orientation = [-40, 80], title = `Lorenz Chaotic Attractor`);
Maxima simulation
[sigma, rho, beta]: [10, 28, 8/3]$
eq: [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]$
sol: rk(eq, [x, y, z], [1, 0, 0], [t, 0, 50, 1/100])$
len: length(sol)$
x: makelist(sol[k][2], k, len)$
y: makelist(sol[k][3], k, len)$
z: makelist(sol[k][4], k, len)$
draw3d(points_joined=true, point_type=-1, points(x, y, z), proportional_axes=xyz)$
MATLAB simulation
% Solve over time interval [0,100] with initial conditions [1,1,1]
% ''f'' is set of differential equations
% ''a'' is array containing x, y, and z variables
% ''t'' is time variable
sigma = 10;
beta = 8/3;
rho = 28;
f = @(t,a) [-sigma*a(1) + sigma*a(2); rho*a(1) - a(2) - a(1)*a(3); -beta*a(3) + a(1)*a(2)];
[t,a] = ode45(f,[0 100],[1 1 1]); % Runge-Kutta 4th/5th order ODE solver
plot3(a(:,1),a(:,2),a(:,3))
Mathematica simulation
Standard way:
tend = 50;
eq = {x'[t] == σ (y[t] - x[t]),
y'[t] == x[t] (ρ - z[t]) - y[t],
z'[t] == x[t] y[t] - β z[t]};
init = {x[0] == 10, y[0] == 10, z[0] == 10};
pars = {σ->10, ρ->28, β->8/3};
{xs, ys, zs} =
NDSolveValue[{eq /. pars, init}, {x, y, z}, {t, 0, tend}];
ParametricPlot3D[{xs[t], ys[t], zs[t]}, {t, 0, tend}]
Less verbose:
lorenz = NonlinearStateSpaceModel[{{σ (y - x), x (ρ - z) - y, x y - β z}, {}}, {x, y, z}, {σ, ρ, β}];
soln[t_] = StateResponse[{lorenz, {10, 10, 10}}, {10, 28, 8/3}, {t, 0, 50}];
ParametricPlot3D[soln[t], {t, 0, 50}]
Python simulation
import matplotlib.pyplot as plt
import numpy as np
def lorenz(xyz, *, s=10, r=28, b=2.667):
    """
    Parameters
    ----------
    xyz : array-like, shape (3,)
        Point of interest in three-dimensional space.
    s, r, b : float
        Parameters defining the Lorenz attractor.

    Returns
    -------
    xyz_dot : array, shape (3,)
        Values of the Lorenz attractor's partial derivatives at *xyz*.
    """
    x, y, z = xyz
    x_dot = s*(y - x)
    y_dot = r*x - y - x*z
    z_dot = x*y - b*z
    return np.array([x_dot, y_dot, z_dot])
dt = 0.01
num_steps = 10000
xyzs = np.empty((num_steps + 1, 3)) # Need one more for the initial values
xyzs[0] = (0., 1., 1.05) # Set initial values
# Step through "time", calculating the partial derivatives at the current point
# and using them to estimate the next point
for i in range(num_steps):
    xyzs[i + 1] = xyzs[i] + lorenz(xyzs[i]) * dt
# Plot
ax = plt.figure().add_subplot(projection='3d')
ax.plot(*xyzs.T, lw=0.6)
ax.set_xlabel("X Axis")
ax.set_ylabel("Y Axis")
ax.set_zlabel("Z Axis")
ax.set_title("Lorenz Attractor")
plt.show()
R simulation
library(deSolve)
library(plotly)
# parameters
prm <- list(sigma = 10, rho = 28, beta = 8/3)
# initial values
varini <- c(
X = 1,
Y = 1,
Z = 1
)
Lorenz <- function (t, vars, prm) {
with(as.list(vars), {
dX <- prm$sigma*(Y - X)
dY <- X*(prm$rho - Z) - Y
dZ <- X*Y - prm$beta*Z
return(list(c(dX, dY, dZ)))
})
}
times <- seq(from = 0, to = 100, by = 0.01)
# call ode solver
out <- ode(y = varini, times = times, func = Lorenz,
parms = prm)
# to assign color to points
gfill <- function (repArr, long) {
rep(repArr, ceiling(long/length(repArr)))[1:long]
}
dout <- as.data.frame(out)
dout$color <- gfill(rainbow(10), nrow(dout))
# Graphics production with Plotly:
plot_ly(
data=dout, x = ~X, y = ~Y, z = ~Z,
type = 'scatter3d', mode = 'lines',
opacity = 1, line = list(width = 6, color = ~color, reverscale = FALSE)
)
Applications
Model for atmospheric convection
As shown in Lorenz's original paper, the Lorenz system is a reduced version of a larger system studied earlier by Barry Saltzman. The Lorenz equations are derived from the Oberbeck–Boussinesq approximation to the equations describing fluid circulation in a shallow layer of fluid, heated uniformly from below and cooled uniformly from above. This fluid circulation is known as Rayleigh–Bénard convection. The fluid is assumed to circulate in two dimensions (vertical and horizontal) with periodic rectangular boundary conditions.
The partial differential equations modeling the system's stream function and temperature are subjected to a spectral Galerkin approximation: the hydrodynamic fields are expanded in Fourier series, which are then severely truncated to a single term for the stream function and two terms for the temperature. This reduces the model equations to a set of three coupled, nonlinear ordinary differential equations. A detailed derivation may be found, for example, in standard nonlinear dynamics texts or in Shen (2016), Supplementary Materials.
Model for the nature of chaos and order in the atmosphere
The scientific community accepts that the chaotic features found in low-dimensional Lorenz models could represent features of the Earth's atmosphere, yielding the statement that "weather is chaotic." By comparison, based on the concept of attractor coexistence within the generalized Lorenz model and the original Lorenz model, Shen and his co-authors proposed a revised view that "weather possesses both chaos and order with distinct predictability". The revised view, which builds on the conventional view, suggests that "the chaotic and regular features found in theoretical Lorenz models could better represent features of the Earth's atmosphere".
Resolution of Smale's 14th problem
Smale's 14th problem asks, 'Do the properties of the Lorenz attractor exhibit that of a strange attractor?'. The problem was answered affirmatively by Warwick Tucker in 2002. To prove this result, Tucker used rigorous numerical methods such as interval arithmetic and normal forms. First, Tucker defined a cross section Σ that is cut transversely by the flow trajectories. From this, one can define the first-return map P, which assigns to each point x in Σ the point P(x) where the trajectory of x first intersects Σ.
Then the proof is split in three main points that are proved and imply the existence of a strange attractor. The three points are:
There exists a region N invariant under the first-return map, meaning P(N) ⊂ N.
The return map admits a forward invariant cone field.
Vectors inside this invariant cone field are uniformly expanded by the derivative of the return map.
To prove the first point, we notice that the cross section Σ is cut by two arcs. Tucker covers the location of these two arcs by small rectangles Ri; the union of these rectangles gives N. Now, the goal is to prove that for all points in N, the flow will bring the points back into N, in Σ. To do that, we take a plane Σ′ below Σ at a small distance h; then, by taking the center ci of Ri and using the Euler integration method, one can estimate where the flow will bring ci in Σ′, which gives us a new point c′i. Then, one can estimate where the points in Ri will be mapped in Σ′ using a Taylor expansion; this gives us a new rectangle R′i centered on c′i. Thus we know that all points in Ri will be mapped into R′i. The goal is to apply this method recursively until the flow comes back to Σ and we obtain a rectangle Rf(i) in Σ such that we know that P(Ri) ⊂ Rf(i). The problem is that our estimate may become imprecise after several iterations; thus what Tucker does is to split Ri into smaller rectangles and then apply the process recursively.
Another problem is that as we are applying this algorithm, the flow becomes more 'horizontal', leading to a dramatic increase in imprecision. To prevent this, the algorithm changes the orientation of the cross sections, becoming either horizontal or vertical.
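To give a flavor of the interval-arithmetic bookkeeping involved, the toy Python sketch below propagates a box (a product of intervals) through one Euler step of the Lorenz flow. It is only an illustration: a genuinely rigorous computation such as Tucker's must also bound the integration error, use directed rounding, and subdivide boxes as described above. All names here are ad hoc:

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))
def iscale(k, a): return (k*a[0], k*a[1]) if k >= 0 else (k*a[1], k*a[0])

def euler_box(x, y, z, sigma=10.0, rho=28.0, beta=8.0/3.0, dt=0.001):
    # Interval enclosures of the Lorenz right-hand side on the box, then one Euler step.
    dx = iscale(sigma, isub(y, x))
    dy = isub(imul(x, isub((rho, rho), z)), y)
    dz = isub(imul(x, y), iscale(beta, z))
    return (iadd(x, iscale(dt, dx)),
            iadd(y, iscale(dt, dy)),
            iadd(z, iscale(dt, dz)))

box = ((1.0, 1.01), (1.0, 1.01), (1.0, 1.01))   # a small rectangle of initial conditions
box = euler_box(*box)                            # an enclosure of the box after one step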
Gallery
See also
Eden's conjecture on the Lyapunov dimension
Lorenz 96 model
List of chaotic maps
Takens' theorem
Notes
References
Shen, B.-W. (2015-12-21). "Nonlinear feedback in a six-dimensional Lorenz model: impact of an additional heating term". Nonlinear Processes in Geophysics. 22 (6): 749–764. doi:10.5194/npg-22-749-2015. ISSN 1607-7946.
Further reading
External links
Lorenz attractor by Rob Morris, Wolfram Demonstrations Project.
Lorenz equation on planetmath.org
Synchronized Chaos and Private Communications, with Kevin Cuomo. The implementation of Lorenz attractor in an electronic circuit.
Lorenz attractor interactive animation (you need the Adobe Shockwave plugin)
3D Attractors: Mac program to visualize and explore the Lorenz attractor in 3 dimensions
Lorenz Attractor implemented in analog electronic
Lorenz Attractor interactive animation (implemented in Ada with GTK+. Sources & executable)
Interactive web based Lorenz Attractor made with Iodide
Chaotic maps
Articles containing video clips
Articles with example Python (programming language) code
Articles with example MATLAB/Octave code
Articles with example Julia code | Lorenz system | [
"Mathematics"
] | 4,281 | [
"Functions and mappings",
"Mathematical objects",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
5,642,622 | https://en.wikipedia.org/wiki/Leopold%20matrix | The Leopold matrix is a qualitative environmental impact assessment method developed in 1971 by Luna Leopold and collaborators for the USGS. It is used to identify and assign numerical weightings to potential environmental impacts of proposed projects on the environment. It came as a response to the National Environmental Policy Act of 1969 which was criticized for lacking adequate guidance for government agencies on how to properly predict potential environmental impacts and consequently prepare impact reports.
The system consists of a grid listing 100 possible project activities along one axis and 88 environmental factors along the other, for a total of 8800 possible interactions. In practice, only a select few (25-50) of these interactions are likely to be of sufficient importance to be thoroughly considered. Where an impact is expected, the appropriate cell of the matrix is split diagonally from the top right corner to the bottom left corner in order for the magnitude and importance of each interaction to be recorded. The magnitude (from -10 to +10) is inserted on the top-left diagonal and the importance (from 1 to 10) is inserted on the bottom-right diagonal. Measurements of magnitude and importance tend to be related, but do not necessarily directly correlate. Magnitude can be measured more tangibly in terms of how much area is affected by the development or how severely, whereas importance is a more subjective measurement. While a proposed development may have a large impact in terms of magnitude, the effects it causes may not actually significantly affect the environment as a whole. The example given by author Luna Leopold is of a stream that significantly alters the erosion patterns in a specific area, which may be scored highly in terms of magnitude but may not necessarily be significant, provided the stream in question is swift-moving and transports large amounts of sediment regardless. In this case, an impact of significant magnitude may not actually be important to the environment in question.
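A minimal sketch of how such a matrix might be represented programmatically is shown below. The activity names, factor names, and scores are invented for illustration, and the magnitude-times-importance summary at the end is one possible convention rather than part of Leopold's method:

from dataclasses import dataclass

@dataclass
class Interaction:
    magnitude: int    # -10 .. +10; sign distinguishes beneficial from adverse effects
    importance: int   # 1 .. 10

# Only interactions judged relevant are recorded, so the grid is stored sparsely.
matrix = {
    ("highway construction", "erosion"): Interaction(magnitude=-6, importance=7),
    ("highway construction", "surface water quality"): Interaction(magnitude=-4, importance=5),
    ("blasting", "noise"): Interaction(magnitude=-3, importance=4),
}

# Example summary: an aggregate score per project activity.
for activity in {a for a, _ in matrix}:
    score = sum(i.magnitude * i.importance for (a, _), i in matrix.items() if a == activity)
    print(activity, score)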
Strengths
As outlined by the original authors, the matrix provides a structured framework for practitioners of environmental impact assessment to systematically rank potential significant environmental cause-and-effect relationships. A structured approach avoids the downsides of less organized ad hoc approaches to impact prediction in which impacts can be either underestimated or completely overlooked. Additionally, the grid format allows for a visual display of results that can be easily understood by policymakers and the public. The matrix is also capable of expanding and contracting based on the scope and environmental context of any given undertaking, rendering it functional for both large and small-scale projects. Finally, it is beneficial to practitioners that the tool can be applied at various temporal stages of the environmental impact assessment process.
Criticisms
One of the fundamental shortcomings of the method is the lack of criteria or standard methods for assigning magnitude and significance values, which may lead to subjective judgements. In the same vein, the method has also been identified as lacking the ability to facilitate any degree of public involvement, primarily due to the subjective value judgements of the user. Another potential pitfall is the sheer size of the matrix, with a total of 17,600 items of information potentially being analyzed. The size of the matrix has also been criticized as being too detailed for some projects while simultaneously being too imprecise for others. In terms of direct content, the chance of double-counting certain impacts is also present. The matrix has further been identified as being highly biased toward biophysical impacts, making the social impacts of a given project difficult to assess. Of the impacts that are covered, the matrix is seldom capable of taking into consideration secondary or cumulative impacts, which are often significant considerations in environmental impact assessment. Another area where the method can be deficient is the lack of a mechanism capable of distinguishing between long-term impacts and short-term impacts. Due to the presentation of completed matrices, the method has also been identified as treating interactions as though they are certain to occur, with no consideration of probability.
Examples of Leopold Matrix Implementation
Gonabad landfill; a study conducted to evaluate the environmental effects of a municipal waste landfill site.
Vojvodina ecological network; an assessment of the influences of anthropogenic factors on an ecological network (salt steppes, marshes, etc.).
Karbala water projects; a study on seven drinking water treatment facilities based on physio-chemical properties.
Binh Thuan desertification; an assessment of the potential desertification effects and subsequent impacts on socio-economic conditions and water availability.
See also
References
Environmental science
Environmental impact assessment | Leopold matrix | [
"Environmental_science"
] | 892 | [
"nan"
] |
5,642,715 | https://en.wikipedia.org/wiki/Medusa%20Nebula | The Medusa Nebula is a planetary nebula in the constellation of Gemini. It is also known as Abell 21 and Sharpless 2-274. It was originally discovered in 1955 by University of California, Los Angeles astronomer George O. Abell, who classified it as an old planetary nebula. With the computation of expansion velocities and the thermal character of the radio emission, Soviet astronomers in 1971 concluded that it was most likely a planetary nebula. As the nebula is so large, its surface brightness is very low, with surface magnitudes of between +15.99 and +25 reported.
The central star of the planetary nebula is a PG 1159 star.
See also
Abell Catalog of Planetary Nebulae
Geminga, Gemini gamma-ray source
Gemini in Chinese astronomy
IC 444, reflection nebula
Messier 35 open cluster
Cancer Minor (constellation) - Obsolete constellation inside modern Gemini
References
External links
The Sharpless Catalog: Sharpless 274
APOD picture: The Medusa Nebula
Planetary nebulae
Gemini (constellation)
Sharpless objects
21
Astronomical objects discovered in 1955 | Medusa Nebula | [
"Astronomy"
] | 218 | [
"Nebula stubs",
"Astronomy stubs",
"Constellations",
"Gemini (constellation)"
] |
5,642,731 | https://en.wikipedia.org/wiki/Information%20theory%20and%20measure%20theory | This article discusses how information theory (a branch of mathematics studying the transmission, processing and storage of information) is related to measure theory (a branch of mathematics related to integration and probability).
Measures in information theory
Many of the concepts in information theory have separate definitions and formulas for continuous and discrete cases. For example, entropy is usually defined for discrete random variables, whereas for continuous random variables the related concept of differential entropy, written h(X), is used (see Cover and Thomas, 2006, chapter 8). Both these concepts are mathematical expectations, but the expectation is defined with an integral for the continuous case, and a sum for the discrete case.
These separate definitions can be more closely related in terms of measure theory. For discrete random variables, probability mass functions can be considered density functions with respect to the counting measure. Thinking of both the integral and the sum as integration on a measure space allows for a unified treatment.
Consider the formula for the differential entropy of a continuous random variable X with range 𝒳 and probability density function f(x):
h(X) = −∫_𝒳 f(x) log f(x) dx.
This can usually be interpreted as the following Riemann–Stieltjes integral:
h(X) = −∫_𝒳 f(x) log f(x) dμ(x),
where μ is the Lebesgue measure.
If instead, X is discrete, with range Ω a finite set, f is a probability mass function on Ω, and ν is the counting measure on Ω, we can write:
H(X) = −∑_{x∈Ω} f(x) log f(x) = −∫_Ω f(x) log f(x) dν(x).
The integral expression and the general concept are identical to those of the continuous case; the only difference is the measure used. In both cases the probability density function is the Radon–Nikodym derivative of the probability measure with respect to the measure against which the integral is taken.
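The unified view can be checked numerically. The sketch below (plain Python, no external libraries) computes a discrete entropy as a sum against the counting measure and approximates a differential entropy as a Riemann sum against the Lebesgue measure; the specific distributions are chosen only for illustration.

```python
import math

# Both entropies are "integrals" of -f*log2(f); only the measure changes
# (counting measure for the discrete case, Lebesgue measure for the continuous one).

# Discrete case: probability mass function summed against the counting measure.
pmf = {"a": 0.5, "b": 0.25, "c": 0.25}
H = -sum(p * math.log2(p) for p in pmf.values())   # = 1.5 bits
print(f"discrete entropy H = {H} bits")

# Continuous case: density of Uniform(0, 4) integrated against Lebesgue measure
# by a simple Riemann sum; the exact differential entropy is log2(4) = 2 bits.
f = lambda x: 0.25 if 0.0 <= x <= 4.0 else 0.0
dx = 1e-4
h = -sum(f(x * dx) * math.log2(f(x * dx)) * dx
         for x in range(int(4.0 / dx)) if f(x * dx) > 0)
print(f"differential entropy h ≈ {h:.3f} bits")
```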
If P is the probability measure induced by X, then the integral can also be taken directly with respect to P:
h(X) = −∫_𝒳 log (dP/dμ) dP.
If instead of the underlying measure μ we take another probability measure ν, we are led to the Kullback–Leibler divergence: let μ and ν be probability measures over the same space. Then if μ is absolutely continuous with respect to ν, written μ ≪ ν, the Radon–Nikodym derivative dμ/dν exists and the Kullback–Leibler divergence can be expressed in its full generality:
D_KL(μ ‖ ν) = ∫_supp(μ) (dμ/dν) log (dμ/dν) dν = ∫_supp(μ) log (dμ/dν) dμ,
where the integral runs over the support of μ. Note that we have dropped the negative sign: the Kullback–Leibler divergence is always non-negative due to Gibbs' inequality.
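In the discrete special case, where both measures are probability mass functions on the same finite set, the Radon–Nikodym derivative reduces to the ratio of the two mass functions. A minimal numerical sketch with illustrative distributions:

```python
import math

# Discrete Kullback-Leibler divergence: D_KL(mu || nu) = sum_x mu(x) * log2(mu(x)/nu(x)),
# where mu(x)/nu(x) plays the role of the Radon-Nikodym derivative d(mu)/d(nu).
mu = {"a": 0.5, "b": 0.25, "c": 0.25}
nu = {"a": 0.25, "b": 0.25, "c": 0.5}

D_kl = sum(mu[x] * math.log2(mu[x] / nu[x]) for x in mu if mu[x] > 0)
print(f"D_KL(mu || nu) = {D_kl:.4f} bits")   # non-negative, by Gibbs' inequality
```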
Entropy as a "measure"
There is an analogy between Shannon's basic "measures" of the information content of random variables and a measure over sets. Namely the joint entropy, conditional entropy, and mutual information can be considered as the measure of a set union, set difference, and set intersection, respectively (Reza pp. 106–108).
If we associate the existence of abstract sets X̃ and Ỹ to arbitrary discrete random variables X and Y, somehow representing the information borne by X and Y, respectively, such that:
μ*(X̃ ∩ Ỹ) = 0 whenever X and Y are unconditionally independent, and
X̃ = Ỹ whenever X and Y are such that either one is completely determined by the other (i.e. by a bijection);
where μ* is a signed measure over these sets, and we set:
H(X) = μ*(X̃), H(Y) = μ*(Ỹ), H(X,Y) = μ*(X̃ ∪ Ỹ), H(X|Y) = μ*(X̃ \ Ỹ), I(X;Y) = μ*(X̃ ∩ Ỹ);
we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal signed measure over sets, as commonly illustrated in an information diagram. This allows the sum of two measures to be written:
μ*(X̃) + μ*(Ỹ) = μ*(X̃ ∪ Ỹ) + μ*(X̃ ∩ Ỹ), that is, H(X) + H(Y) = H(X,Y) + I(X;Y);
and the analog of Bayes' theorem (H(X) + H(Y|X) = H(Y) + H(X|Y)) allows the difference of two measures to be written:
μ*(X̃ \ Ỹ) = μ*(X̃ ∪ Ỹ) − μ*(Ỹ), that is, H(X|Y) = H(X,Y) − H(Y) = H(X) + H(Y|X) − H(Y).
This can be a handy mnemonic device in some situations, e.g.
Note that measures (expectation values of the logarithm) of true probabilities are called "entropy" and generally represented by the letter H, while other measures are often referred to as "information" or "correlation" and generally represented by the letter I. For notational simplicity, the letter I is sometimes used for all measures.
Multivariate mutual information
Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated to three or more arbitrary random variables. (See Reza pp. 106–108 for an informal but rather complete discussion.) Namely H(X1, …, Xn) needs to be defined in the obvious way as the entropy of a joint distribution, and a multivariate mutual information I(X1; …; Xn) defined in a suitable manner so that we can set:
H(X1, …, Xn) = μ*(X̃1 ∪ … ∪ X̃n) and I(X1; …; Xn) = μ*(X̃1 ∩ … ∩ X̃n),
in order to define the (signed) measure over the whole σ-algebra. There is no single universally accepted definition for the multivariate mutual information, but the one that corresponds here to the measure of a set intersection is due to Fano (1966: pp. 57–59). The definition is recursive. As a base case the mutual information of a single random variable is defined to be its entropy: I(X) = H(X). Then for n ≥ 2 we set
I(X1; …; Xn) = I(X1; …; Xn−1) − I(X1; …; Xn−1 | Xn),
where the conditional mutual information is defined as
I(X1; …; Xn−1 | Xn) = E_Xn[ I(X1; …; Xn−1) | Xn ].
The first step in the recursion yields Shannon's definition I(X1; X2) = H(X1) − H(X1 | X2). The multivariate mutual information (same as interaction information but for a change in sign) of three or more random variables can be negative as well as positive: Let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then I(X; Y; Z) = −1 bit.
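The exclusive-or example can be verified directly by enumerating the joint distribution. The sketch below computes I(X;Y;Z) = I(X;Y) − I(X;Y|Z) from joint entropies; the variable and function names are ad hoc.

```python
import itertools, math
from collections import Counter

# Numerical check of the XOR example above: X, Y independent fair coins,
# Z = X xor Y, and the multivariate mutual information I(X;Y;Z) comes out
# to -1 bit via I(X;Y;Z) = I(X;Y) - I(X;Y|Z).

outcomes = [(x, y, x ^ y) for x, y in itertools.product((0, 1), repeat=2)]
p = {o: 0.25 for o in outcomes}            # joint distribution of (X, Y, Z)

def H(indices):
    """Joint entropy in bits of the variables selected by index positions."""
    marg = Counter()
    for o, prob in p.items():
        marg[tuple(o[i] for i in indices)] += prob
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

I_xy = H([0]) + H([1]) - H([0, 1])                            # = 0
I_xy_given_z = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])  # = 1
print("I(X;Y;Z) =", I_xy - I_xy_given_z, "bit")               # -> -1.0
```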
Many other variations are possible for three or more random variables: for example, I(X,Y; Z) is the mutual information of the joint distribution of X and Y relative to Z, and can be interpreted as μ*((X̃ ∪ Ỹ) ∩ Z̃). Many more complicated expressions can be built this way, and still have meaning.
References
Thomas M. Cover and Joy A. Thomas. Elements of Information Theory, second edition, 2006. New Jersey: Wiley and Sons. .
Fazlollah M. Reza. An Introduction to Information Theory. New York: McGraw–Hill 1961. New York: Dover 1994.
R. W. Yeung, "On entropy, information inequalities, and Groups." PS
See also
Information theory
Measure theory
Set theory
Information theory
Measure theory | Information theory and measure theory | [
"Mathematics",
"Technology",
"Engineering"
] | 1,165 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
5,642,853 | https://en.wikipedia.org/wiki/Gambling%20and%20information%20theory | Statistical inference might be thought of as gambling theory applied to the world around us. The myriad applications for logarithmic information measures tell us precisely how to take the best guess in the face of partial information. In that sense, information theory might be considered a formal expression of the theory of gambling. It is no surprise, therefore, that information theory has applications to games of chance.
Kelly Betting
Kelly betting or proportional betting is an application of information theory to investing and gambling. Its discoverer was John Larry Kelly, Jr.
Part of Kelly's insight was to have the gambler maximize the expectation of the logarithm of his capital, rather than the expected profit from each bet. This is important, since in the latter case, one would be led to gamble all he had when presented with a favorable bet, and if he lost, would have no capital with which to place subsequent bets. Kelly realized that it was the logarithm of the gambler's capital which is additive in sequential bets, and "to which the law of large numbers applies."
Side information
A bit is the amount of entropy in a bettable event with two possible outcomes and even odds. Obviously we could double our money if we knew beforehand what the outcome of that event would be. Kelly's insight was that no matter how complicated the betting scenario is, we can use an optimum betting strategy, called the Kelly criterion, to make our money grow exponentially with whatever side information we are able to obtain. The value of this "illicit" side information is measured as mutual information relative to the outcome of the bettable event:
where Y is the side information, X is the outcome of the bettable event, and I is the state of the bookmaker's knowledge. This is the average Kullback–Leibler divergence, or information gain, of the a posteriori probability distribution of X given the value of Y relative to the a priori distribution, or stated odds, on X. Notice that the expectation is taken over Y rather than X: we need to evaluate how accurate, in the long term, our side information Y is before we start betting real money on X. This is a straightforward application of Bayesian inference. Note that the side information Y might affect not just our knowledge of the event X but also the event itself. For example, Y might be a horse that had too many oats or not enough water. The same mathematics applies in this case, because from the bookmaker's point of view, the occasional race fixing is already taken into account when he makes his odds.
The nature of side information is extremely finicky. We have already seen that it can affect the actual event as well as our knowledge of the outcome. Suppose we have an informer, who tells us that a certain horse is going to win. We certainly do not want to bet all our money on that horse just upon a rumor: that informer may be betting on another horse, and may be spreading rumors just so he can get better odds himself. Instead, as we have indicated, we need to evaluate our side information in the long term to see how it correlates with the outcomes of the races. This way we can determine exactly how reliable our informer is, and place our bets precisely to maximize the expected logarithm of our capital according to the Kelly criterion. Even if our informer is lying to us, we can still profit from his lies if we can find some reverse correlation between his tips and the actual race results.
Doubling rate
Doubling rate in gambling on a horse race is
W(b, p) = Σ_i p_i log2(b_i o_i),
where there are m horses, the probability of the ith horse winning being p_i, the proportion of wealth bet on the horse being b_i, and the odds (payoff) being o_i (e.g., o_i = 2 if the ith horse winning pays double the amount bet). This quantity is maximized by proportional (Kelly) gambling:
b_i = p_i,
for which
W* = Σ_i p_i log2 o_i − H(p),
where H(p) is information entropy.
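A short numerical sketch of these formulas, with made-up probabilities and payoff odds, confirms that the doubling rate of proportional betting agrees with the closed form Σ p_i log2 o_i − H(p):

```python
import math

# Sketch of the doubling-rate computation above; the probabilities and payoff
# odds are illustrative only.
p = [0.5, 0.3, 0.2]          # bettor's estimate of each horse's winning probability
o = [2.5, 3.0, 6.0]          # payoff odds (amount returned per unit bet)

def doubling_rate(b, p, o):
    """W(b, p) = sum_i p_i * log2(b_i * o_i)."""
    return sum(pi * math.log2(bi * oi) for pi, bi, oi in zip(p, b, o))

# Kelly (proportional) gambling: bet the fraction b_i = p_i of the bankroll on horse i.
W_kelly = doubling_rate(p, p, o)
H_p = -sum(pi * math.log2(pi) for pi in p)
W_closed_form = sum(pi * math.log2(oi) for pi, oi in zip(p, o)) - H_p

print(f"W* evaluated at b = p:      {W_kelly:.4f} bits per race")
print(f"W* = sum p log2 o - H(p):   {W_closed_form:.4f} bits per race")
```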
Expected gains
An important but simple relation exists between the amount of side information a gambler obtains and the expected exponential growth of his capital (Kelly):
E[log2 K_t] = log2 K_0 + Σ_{i=1..t} H_i
for an optimal betting strategy, where K_0 is the initial capital, K_t is the capital after the tth bet, and H_i is the amount of side information obtained concerning the ith bet (in particular, the mutual information relative to the outcome of each bettable event).
This equation applies in the absence of any transaction costs or minimum bets. When these constraints apply (as they invariably do in real life), another important gambling concept comes into play: in a game with negative expected value, the gambler (or unscrupulous investor) must face a certain probability of ultimate ruin, which is known as the gambler's ruin scenario. Note that even food, clothing, and shelter can be considered fixed transaction costs and thus contribute to the gambler's probability of ultimate ruin.
This equation was the first application of Shannon's theory of information outside its prevailing paradigm of data communications (Pierce).
Applications for self-information
The logarithmic probability measure self-information or surprisal, whose average is information entropy/uncertainty and whose average difference is KL-divergence, has applications to odds-analysis all by itself. Its two primary strengths are that surprisals: (i) reduce minuscule probabilities to numbers of manageable size, and (ii) add whenever probabilities multiply.
For example, one might say that "the number of states equals two to the number of bits" i.e. #states = 2^#bits. Here the quantity that's measured in bits is the logarithmic information measure mentioned above. Hence there are N bits of surprisal in landing all heads on one's first toss of N coins.
The additive nature of surprisals, and one's ability to get a feel for their meaning with a handful of coins, can help one put improbable events (like winning the lottery, or having an accident) into context. For example if one out of 17 million tickets is a winner, then the surprisal of winning from a single random selection is about 24 bits. Tossing 24 coins a few times might give you a feel for the surprisal of getting all heads on the first try.
The additive nature of this measure also comes in handy when weighing alternatives. For example, imagine that the surprisal of harm from a vaccination is 20 bits. If the surprisal of catching a disease without it is 16 bits, but the surprisal of harm from the disease if you catch it is 2 bits, then the surprisal of harm from NOT getting the vaccination is only 16+2=18 bits. Whether or not you decide to get the vaccination (e.g. the monetary cost of paying for it is not included in this discussion), you can in that way at least take responsibility for a decision informed to the fact that not getting the vaccination involves more than one bit of additional risk.
More generally, one can relate probability p to bits of surprisal sbits as probability = 1/2^sbits. As suggested above, this is mainly useful with small probabilities. However, Jaynes pointed out that with true-false assertions one can also define bits of evidence ebits as the surprisal against minus the surprisal for. This evidence in bits relates simply to the odds ratio = p/(1-p) = 2^ebits, and has advantages similar to those of self-information itself.
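The sketch below re-derives the figures used above (assuming a 1-in-17-million lottery) with two small helper functions for surprisal and evidence bits; the function names are arbitrary.

```python
import math

# Helpers for the surprisal ("sbits") and evidence ("ebits") measures discussed above.

def surprisal_bits(p):
    """sbits = -log2(p), so probability = 1 / 2**sbits."""
    return -math.log2(p)

def evidence_bits(p):
    """ebits = log2(odds) = log2(p / (1 - p)): surprisal against minus surprisal for."""
    return math.log2(p / (1.0 - p))

print(f"winning a 1-in-17-million lottery: {surprisal_bits(1/17_000_000):.1f} sbits")
print(f"all heads on 24 coin tosses:       {surprisal_bits(0.5**24):.1f} sbits")
print(f"evidence for a 90%-probable claim: {evidence_bits(0.9):.2f} ebits")
```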
Applications in games of chance
Information theory can be thought of as a way of quantifying information so as to make the best decision in the face of imperfect information. That is, how to make the best decision using only the information you have available. The point of betting is to rationally assess all relevant variables of an uncertain game/race/match, then compare them to the bookmaker's assessments, which usually comes in the form of odds or spreads and
place the proper bet if the assessments differ sufficiently. The area of gambling where this has the most use is sports betting. Sports handicapping lends itself to information theory extremely well because of the availability of statistics. For many years noted economists have tested different mathematical theories using sports as their laboratory, with vastly differing results.
One theory regarding sports betting is that it is a random walk. Random walk is a scenario where new information, prices and returns will fluctuate by chance, this is part of the efficient-market hypothesis. The underlying belief of the efficient market hypothesis is that the market will always make adjustments for any new information. Therefore no one can beat the market because they are trading on the same information from which the market adjusted. However, according to Fama, to have an efficient market three qualities need to be met:
There are no transaction costs in trading securities
All available information is costlessly available to all market participants
All agree on the implications of the current information for the current price and distributions of future prices of each security
Statisticians have shown that it's the third condition which allows for information theory to be useful in sports handicapping. When everyone doesn't agree on how information will affect the outcome of the event, we get differing opinions.
See also
Principle of indifference
Statistical association football predictions
Advanced NFL Stats
References
External links
Statistical analysis in sports handicapping models
DVOA as an explanatory variable
Gambling mathematics
Wagering
Information theory
Statistical inference | Gambling and information theory | [
"Mathematics",
"Technology",
"Engineering"
] | 1,939 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
5,643,045 | https://en.wikipedia.org/wiki/Yitzhak%20Katznelson | Yitzhak Katznelson (; born 1934) is an Israeli mathematician.
Katznelson was born in Jerusalem. He received his doctoral degree from the University of Paris in 1956. He is a professor of mathematics at Stanford University.
He is the author of An Introduction to Harmonic Analysis, which won the Steele Prize for Mathematical Exposition in 2002.
In 2012 he became a fellow of the American Mathematical Society.
References
External links
An Introduction to Harmonic Analysis
1934 births
Living people
Israeli mathematicians
Jewish American scientists
Mathematical analysts
Stanford University Department of Mathematics faculty
Fellows of the American Mathematical Society
University of Paris alumni | Yitzhak Katznelson | [
"Mathematics"
] | 118 | [
"Mathematical analysis",
"Mathematical analysts"
] |
5,643,254 | https://en.wikipedia.org/wiki/Shemen%20Afarsimon | Shemen afarsimon ( šemen ʾăp̄arsəmōn) was a prized oil used in antiquity. The ancient Jewish community of Ein Gedi was known for its cultivation of the afarsimon.
Balsam and afarsimon in Judaism
The Hebrew Bible does not mention persimmons, but in the Talmud and Midrash the Hebrew term may also stand for balsam, which occurs once in the Hebrew Bible as Hebrew besami (בְּשָׂמִי) "my spice" in Song of Songs 5:1, which is indirect evidence of the form basam (בָּשָׂם).
In modern Hebrew, the word afarsimon is translated as persimmon. However, some doubt that persimmons would have been known to the peoples of the Bible, although being a traditional Jewish New Year's food in the Diaspora.
According to Adin Steinsaltz, the afarsimon of the Talmud was considered very valuable, and worth its weight in gold.
Identification
It is not known exactly what plant was used to produce the biblical oil. According to one theory, it is the plant Commiphora opobalsamum - a small shrub, 10 to 12 feet high, with wandlike, spreading branches. The oil extracted from the seeds or branches of this plant has been used as a medicine, but more commonly as incense or perfumed oil.
Qumran jug
In April 1988, archeologists working with the former Baptist minister Vendyl Jones discovered a small jug of oil in the Qumran region that Jones announced was the oil used in the Temple. The find was announced by the New York Times on February 15, 1989, and a feature article was published in National Geographic Magazine in October of that year. After testing by the Pharmaceutical Department of the Hebrew University of Jerusalem (the results of which were never detailed or revealed), the substance inside the juglet was claimed by Jones to be the shemen afarsimon hinted at in Psalm 133. According to Jones, it was the first artifact discovered from the First Temple Period, and one of the treasures listed in the Copper Scroll. However, this identification remains controversial.
See also
Balm of Gilead
Balsam of Mecca
Commiphora gileadensis
Holy anointing oil
Perfume
References
Tabernacle and Temples in Jerusalem
Oils
Perfumes
Essential oils
Biblical archaeology | Shemen Afarsimon | [
"Chemistry"
] | 481 | [
"Essential oils",
"Oils",
"Carbohydrates",
"Natural products"
] |
5,643,301 | https://en.wikipedia.org/wiki/Substructure%20search | Substructure search (SSS) is a method to retrieve from a database only those chemicals matching a pattern of atoms and bonds which a user specifies. It is an application of graph theory, specifically subgraph matching in which the query is a hydrogen-depleted molecular graph. The mathematical foundations for the method were laid in the 1870s, when it was suggested that chemical structure drawings were equivalent to graphs with atoms as vertices and bonds as edges. SSS is now a standard part of cheminformatics and is widely used by pharmaceutical chemists in drug discovery.
There are many commercial systems that provide SSS, typically having a graphical user interface and chemical drawing software. Large publicly-available databases like PubChem and ChemSpider can be searched this way, as can Wikipedia's articles describing individual chemicals.
Definitions
Substructure search is used to retrieve from a database of chemicals those which contain the pattern of atoms and bonds specified by a user. It is implemented using a specialist type of query language and in real-world applications the search may be further constrained using logical operators on additional data held in the database. Thus "return all carboxylic acids where a sample of >1 g is available". One definition of "substructure" was provided in 2008: "given two chemical structures A and B, if structure A is fully contained in structure B, then A is a substructure of B, while B is a superstructure of A."
In this definition, the word "structure" is not synonymous with "compound". If it were, the structure for ethanol, CH3–CH2–OH, would not be a substructure of propanol, CH3–CH2–CH2–OH, since the terminal CH3 of ethanol is not fully contained in the propanol chain two atoms away from the OH group. Instead the query structure is, formally, a hydrogen-depleted molecular graph. The search is thus for substances which contain three atoms and two single bonds connected as C–C–O. Propanol is a "hit", as is diethyl ether, with C–C–O–C–C. If a user wished to limit the hits to alcohols, then the query structure would have to be drawn with an "explicit hydrogen", as C–C–O–H, and ether would no longer match. In mathematical terms, finding substructures is an application of graph theory, specifically subgraph matching.
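The C–C–O example can be reproduced with the open-source RDKit toolkit, one of many cheminformatics libraries that implement substructure matching; this sketch assumes RDKit's default SMARTS semantics and is not meant to represent any particular commercial system discussed in this article.

```python
# Minimal sketch of the C-C-O example above using RDKit (requires the rdkit package).
from rdkit import Chem

targets = {
    "ethanol":       Chem.MolFromSmiles("CCO"),
    "1-propanol":    Chem.MolFromSmiles("CCCO"),
    "diethyl ether": Chem.MolFromSmiles("CCOCC"),
}

query_any     = Chem.MolFromSmarts("CCO")        # C-C-O: matches alcohols and ethers
query_alcohol = Chem.MolFromSmarts("CC[OX2H]")   # C-C-O-H: oxygen must carry a hydrogen

for name, mol in targets.items():
    print(f"{name:>14}: C-C-O hit={mol.HasSubstructMatch(query_any)}, "
          f"alcohol hit={mol.HasSubstructMatch(query_alcohol)}")
```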
Examples
Standard conventions used when chemists draw chemical structures need to be considered when implementing substructure search. Historically, the representation of tautomer forms and stereochemistry has posed difficulties. This can be illustrated using histidine.
The top row shows the standard two-dimensional chemical drawing for (S)-histidine (the natural isomer of this amino acid), its enantiomer (R)-histidine and a drawing which conventionally indicates the racemic mixture of equal amounts of the R and S forms. The bottom row shows the same three compounds with the imidazole ring drawn in its alternative tautomer form. For histidine, it has been experimentally determined by ¹⁵N NMR spectroscopy that the 1-H tautomer is preferred over the 3-H form in samples. Choice of representation for storage in a database can influence substructure searches. All six drawings are hits for a propanol substructure C–C–C–O, as shown in red. However, only the top row would, apparently, be a hit for the blue substructure of 1-H imidazole-4-methyl, as this is not fully contained in the other three compounds. In fact, each vertical pair is the same chemical substance: tautomers in general cannot be isolated as separate samples. In modern databases, substances are held in a single canonical form, with checks made for uniqueness. The InChIKey provides one way to do this. (S)-Histidine's standard key is HNDVDQJCIGZPNO-YFKPBYRVSA-N, (R)-histidine's key is HNDVDQJCIGZPNO-RXMQYKEDSA-N and (RS)-histidine's is HNDVDQJCIGZPNO-UHFFFAOYSA-N. The first block of 14 letters is identical for all these substances, as it encodes the molecular graph.
Query interfaces and search algorithms
Most substructure search systems present the user with a graphical user interface with a chemical structure drawing component. Query structures may contain bonding patterns such as "single/aromatic" or "any" to provide flexibility. Similarly, the vertices which in an actual compound would be a specific atom may be replaced with an atom list in the query. Cis–trans isomerism at double bonds is catered for by giving a choice of retrieving only the E form, the Z form, or both.
The algorithms for searching are computationally intensive, often of O(n^3) or O(n^4) time complexity (where n is the number of atoms involved), but the problem is known to be NP-complete. Speedups are achieved using fragment screening as a first step. This pre-computation typically involves creation of bitstrings representing presence or absence of molecular fragments. Target compounds that do not possess the fragments present in the query cannot be hits and are eliminated. Atom-by-atom-searching, in which a mapping of the query's atoms and bonds with the target molecule is sought, is usually done with a variant of the Ullman algorithm.
Implementations
, substructure search is a standard feature in chemical databases accessible via the web. Large databases such as PubChem, maintained by the National Center for Biotechnology Information and ChemSpider, maintained by the Royal Society of Chemistry have graphical interfaces for search. The Chemical Abstracts Service, a division of the American Chemical Society, provides tools to search the chemical literature and Reaxys supplied by Elsevier covers both chemicals and reaction information, including that originally held in the Beilstein database. PATENTSCOPE maintained by the World Intellectual Property Organization makes chemical patents accessible by substructure and Wikipedia's articles describing individual chemicals can also be searched that way.
Suppliers of chemicals as synthesis intermediates or for high-throughput screening routinely provide search interfaces. Currently, the largest database that can be freely searched by the public is the ZINC database, which is claimed to contain over 37 billion commercially available molecules.
History
The idea that chemical structures as depicted using drawings of the type introduced by Kekulé were related to what is now called graph theory was suggested by the mathematician J. J. Sylvester in 1878. He was the first to use the word "graph" in the sense of a network. Arthur Cayley had already, in 1874, considered how to enumerate chemical isomers, in what was an early approach to molecular graphs, where atoms are at vertices and bonds correspond to edges.
In the 20th century, chemists developed standard ways to show structural formula, especially for individual organic compounds that were increasingly being synthesized and tested as potential drugs or agrochemicals, By the 1950s, as the number of compounds made and tested grew, the first attempts to create chemical databases were made and the sub-discipline of cheminformatics was established. As stated in 2012, "searching for substructures in molecules belongs to the most elementary tasks in cheminformatics and is nowadays part of virtually every cheminformatics software".
The first suggested use for substructure search was in 1957, to reduce the workload of patent examiners. They have to search published literature to decide whether an invention is novel, which for chemical patents often means finding known examples within the generic claims of a Markush structure. Before this could become a reality, a number of developments were required. Importantly, the existing literature had to be made searchable and a way to input a chemical structure query and return the matching results had to be devised. These requirements had been partially met as early as 1881 when Friedrich Konrad Beilstein introduced the Handbuch der organischen Chemie (Handbook of Organic Chemistry) which carefully classified known chemicals in a very systematic manner so that all examples containing a given heterocycle would be located together.
In 1907, the American Chemical Society set up the Chemical Abstracts Service (CAS). This weekly subscription service included a printed publication with summaries of articles in thousands of scholarly journals and claims in worldwide patents. This had a chemical substance index that, in principle, allowed searching by chemical name or formula. However, it was only when the CAS records had been fully converted into machine-readable form and the internet was available to connect its database to end-users that comprehensive searching became possible. CAS provided various specialist search services from the 1980s but it was not until 2008 that its "SciFinder" system became available via the web.
By the 1960s, companies synthesizing and testing new chemicals made significant progress in creating in-house databases. Imperial Chemical Industries stored chemical structures encoded as text strings, using Wiswesser line notation. Its associated CROSSBOW software allowed substructure search using key-based searches followed by more processor-intensive atom-by-atom search. It was recognised that research chemists wanted not only to search company collections for existing inventory but also to search third-party databases supplied by vendors of small-molecule intermediates. The latter application evolved as a collaboration involving six companies with pharmaceutical interests and their commercial suppliers.
By the 1980s, other line notations were used for commercially-available substructure search systems. SMILES encoding, together with its SMARTS query language, and SYBYL line notation are examples. A comprehensive survey of then-available chemical information systems was produced for NASA in 1985.
The need to combine chemistry search with biological data produced by screening compounds at ever-larger scales led to implementation of systems such as MACCS. This commercial system from MDL Information Systems made use of an algorithm specifically designed for storage and search within groups of chemicals that differed only in their stereochemistry. A review of the many systems available by the mid-1980s pointed out that "most in-house developed systems have been replaced with commercially available standardised software for managing chemical structure databases." The MDL Molfile is now an open file format for storing single-molecule data in the form of a connection table.
By the 2000s, personal computers had become powerful enough that storage and search of chemistry within office software such as Microsoft Excel was possible.
Subsequent developments involved the use of new techniques to allow efficient searches over very large databases and, importantly, the use of a standardised International Chemical Identifier, a type of line notation, to uniquely define a chemical substance.
See also
Molecule mining
References
External links
Wikipedia Chemical Structure Explorer to search Wikipedia chemistry articles by substructure
Search PubChem
Search ChemSpider
Search ZINC-22, a database of over 50 billion molecules
NP-complete problems
Graph algorithms
Computational problems in graph theory
Computational chemistry
Cheminformatics | Substructure search | [
"Chemistry",
"Mathematics"
] | 2,273 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Theoretical chemistry",
"Computational chemistry",
"Mathematical relations",
"nan",
"Cheminformatics",
"Mathematical problems",
"NP-complete problems"
] |
5,643,773 | https://en.wikipedia.org/wiki/Emde%20degradation | The Emde degradation (also called Emde-reaction or Emde-reduction) is a method for the reduction of a quaternary ammonium cation to a tertiary amine with sodium amalgam:
This organic reaction was first described in 1909 by the German chemist Hermann Emde and was for a long time of great importance in structure elucidation of many alkaloids, for example that of ephedrine.
Alternative reducing agents exist for this reaction; for instance, lithium aluminium hydride.
See also
Related reactions are the Hofmann elimination and the von Braun reaction
References
Organic redox reactions
Name reactions
Degradation reactions | Emde degradation | [
"Chemistry"
] | 130 | [
"Name reactions",
"Degradation reactions",
"Organic redox reactions",
"Organic reactions"
] |
5,643,937 | https://en.wikipedia.org/wiki/Music%20and%20mathematics | Music theory analyzes the pitch, timing, and structure of music. It uses mathematics to study elements of music such as tempo, chord progression, form, and meter. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory.
While music theory has no axiomatic foundation in modern mathematics, the basis of musical sound can be described mathematically (using acoustics) and exhibits "a remarkable array of number properties".
History
Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of numbers".
From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1,2,3,4 as the source of all perfection.
Time, rhythm, and meter
Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics.
The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3).
Musical form
Musical form is the plan by which a short piece of music is extended. The term "plan" is also used in architecture, to which musical form is often compared. Like the architect, the composer must take into account the function for which the work is intended and the means available, practicing economy and making use of repetition and order. The common types of form known as binary and ternary ("twofold" and "threefold") once again demonstrate the importance of small integral values to the intelligibility and appeal of music.
Frequency and harmony
A musical scale is a discrete set of pitches used in making or describing music. The most important scale in the Western tradition is the diatonic scale but many others have been used and proposed in various historical eras and parts of the world. Each pitch corresponds to a particular frequency, expressed in hertz (Hz), sometimes referred to as cycles per second (c.p.s.). A scale has an interval of repetition, normally the octave. The octave of any pitch refers to a frequency exactly twice that of the given pitch.
Succeeding superoctaves are pitches found at frequencies four, eight, sixteen times, and so on, of the fundamental frequency. Pitches at frequencies of half, a quarter, an eighth and so on of the fundamental are called suboctaves. There is no case in musical harmony where, if a given pitch is considered accordant, its octaves are considered otherwise. Therefore, any note and its octaves will generally be found similarly named in musical systems (e.g. all will be called doh or A or Sa, as the case may be).
When expressed as a frequency bandwidth an octave A2–A3 spans from 110 Hz to 220 Hz (span=110 Hz). The next octave will span from 220 Hz to 440 Hz (span=220 Hz). The third octave spans from 440 Hz to 880 Hz (span=440 Hz) and so on. Each successive octave spans twice the frequency range of the previous octave.
Because we are often interested in the relations or ratios between the pitches (known as intervals) rather than the precise pitches themselves in describing a scale, it is usual to refer to all the scale pitches in terms of their ratio from a particular pitch, which is given the value of one (often written 1/1), generally a note which functions as the tonic of the scale. For interval size comparison, cents are often used.
{|class="wikitable"
!Commonterm
!Examplename
!Hz
!Multiple offundamental
!Ratio ofwithin octave
!Centswithin octave
|-
|
|style="text-align:center;"|A2
|110
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|rowspan=2 style="text-align:center;" |Octave
|rowspan=2 style="text-align:center;" |A3
|rowspan=2 |220
|rowspan=2 style="text-align:center;" |
|style="text-align:center;"|
|
|-
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|E4
|330
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|rowspan=2 style="text-align:center;" |Octave
|rowspan=2 style="text-align:center;" |A4
|rowspan=2 |440
|rowspan=2 style="text-align:center;" |
|style="text-align:center;"|
|
|-
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|C5
|550
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|E5
|660
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|
|style="text-align:center;"|G5
|770
|style="text-align:center;"|
|style="text-align:center;"|
|
|-
|rowspan=2 style="text-align:center;" |Octave
|rowspan=2 style="text-align:center;" |A5
|rowspan=2 |880
|rowspan=2 style="text-align:center;" |
|style="text-align:center;"|
|
|-
|style="text-align:center;"|
|
|}
Tuning systems
There are two main families of tuning systems: equal temperament and just tuning. Equal temperament scales are built by dividing an octave into intervals which are equal on a logarithmic scale, which results in perfectly evenly divided scales, but with ratios of frequencies which are irrational numbers. Just scales are built by multiplying frequencies by rational numbers, which results in simple ratios between frequencies, but with scale divisions that are uneven.
One major difference between equal temperament tunings and just tunings is differences in acoustical beat when two notes are sounded together, which affects the subjective experience of consonance and dissonance. Both of these systems, and the vast majority of music in general, have scales that repeat on the interval of every octave, which is defined as frequency ratio of 2:1. In other words, every time the frequency is doubled, the given scale repeats.
Below are Ogg Vorbis files demonstrating the difference between just intonation and equal temperament. You might need to play the samples several times before you can detect the difference.
– this sample has a half-step at 550 Hz (C♯ in the just intonation scale), followed by a half-step at 554.37 Hz (C♯ in the equal temperament scale).
– this sample consists of a "dyad". The lower note is a constant A (440 Hz in either scale), the upper note is a C♯ in the equal-tempered scale for the first 1", and a C♯ in the just intonation scale for the last 1". Phase differences make it easier to detect the transition than in the previous sample.
Just tunings
5-limit tuning, the most common form of just intonation, is a system of tuning using tones that are regular number harmonics of a single fundamental frequency. This was one of the scales Johannes Kepler presented in his Harmonices Mundi (1619) in connection with planetary motion. The same scale was given in transposed form by Scottish mathematician and musical theorist, Alexander Malcolm, in 1721 in his 'Treatise of Musick: Speculative, Practical and Historical', and by theorist Jose Wuerschmidt in the 20th century. A form of it is used in the music of northern India.
American composer Terry Riley also made use of the inverted form of it in his "Harp of New Albion". Just intonation gives superior results when there is little or no chord progression: voices and other instruments gravitate to just intonation whenever possible. However, it gives two different whole tone intervals (9:8 and 10:9) because a fixed tuned instrument, such as a piano, cannot change key. To calculate the frequency of a note in a scale given in terms of ratios, the frequency ratio is multiplied by the tonic frequency. For instance, with a tonic of A4 (A natural above middle C), the frequency is 440 Hz, and a justly tuned fifth above it (E5) is simply 440×(3:2) = 660 Hz.
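The same arithmetic extends to a whole scale: each degree's frequency is the tonic multiplied by its ratio. Below is a minimal sketch using one conventional set of 5-limit major-scale ratios from a tonic of A4 = 440 Hz (the particular ratio set is an illustrative choice, not the only one in use).

```python
# Just-intonation frequencies: scale degrees given as ratios from the tonic.
tonic = 440.0                      # A4 in Hz
just_ratios = {
    "unison":         (1, 1),
    "major second":   (9, 8),
    "major third":    (5, 4),
    "perfect fourth": (4, 3),
    "perfect fifth":  (3, 2),
    "major sixth":    (5, 3),
    "major seventh":  (15, 8),
    "octave":         (2, 1),
}
for name, (num, den) in just_ratios.items():
    print(f"{name:>15}: {tonic * num / den:7.2f} Hz  ({num}:{den})")
```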
Pythagorean tuning is tuning based only on the perfect consonances, the (perfect) octave, perfect fifth, and perfect fourth. Thus the major third is considered not a third but a ditone, literally "two tones", and is (9:8)^2 = 81:64, rather than the independent and harmonic just 5:4 = 80:64 directly below. A whole tone is a secondary interval, being derived from two perfect fifths minus an octave, (3:2)^2/2 = 9:8.
The just major third, 5:4 and minor third, 6:5, are a syntonic comma, 81:80, apart from their Pythagorean equivalents 81:64 and 32:27 respectively. According to Carl Dahlhaus, "the dependent third conforms to the Pythagorean, the independent third to the harmonic tuning of intervals."
Western common practice music usually cannot be played in just intonation but requires a systematically tempered scale. The tempering can involve either the irregularities of well temperament or be constructed as a regular temperament, either some form of equal temperament or some other regular meantone, but in all cases will involve the fundamental features of meantone temperament. For example, the root of chord ii, if tuned to a fifth above the dominant, would be a major whole tone (9:8) above the tonic. If tuned a just minor third (6:5) below a just subdominant degree of 4:3, however, the interval from the tonic would equal a minor whole tone (10:9). Meantone temperament reduces the difference between 9:8 and 10:9. Their ratio, (9:8)/(10:9) = 81:80, is treated as a unison. The interval 81:80, called the syntonic comma or comma of Didymus, is the key comma of meantone temperament.
Equal temperament tunings
In equal temperament, the octave is divided into equal parts on the logarithmic scale. While it is possible to construct equal temperament scale with any number of notes (for example, the 24-tone Arab tone system), the most common number is 12, which makes up the equal-temperament chromatic scale. In western music, a division into twelve intervals is commonly assumed unless it is specified otherwise.
For the chromatic scale, the octave is divided into twelve equal parts, each semitone (half-step) is an interval of the twelfth root of two so that twelve of these equal half steps add up to exactly an octave. With fretted instruments it is very useful to use equal temperament so that the frets align evenly across the strings. In the European music tradition, equal temperament was used for lute and guitar music far earlier than for other instruments, such as musical keyboards. Because of this historical force, twelve-tone equal temperament is now the dominant intonation system in the Western, and much of the non-Western, world.
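The sketch below compares the two systems numerically from A4 = 440 Hz: each equal-tempered semitone multiplies the frequency by the twelfth root of two, and the deviation from the corresponding just ratio is expressed in cents. The note choices echo the audio samples described earlier.

```python
import math

# 12-tone equal temperament vs just intonation, starting from A4 = 440 Hz.
A4 = 440.0
semitone = 2 ** (1 / 12)                 # frequency ratio of one equal-tempered半 semitone

cs5_equal = A4 * semitone ** 4           # C#5, four semitones above A4
cs5_just  = A4 * 5 / 4                   # just major third above A4 (550 Hz)
e5_equal  = A4 * semitone ** 7           # E5, seven semitones above A4
e5_just   = A4 * 3 / 2                   # just perfect fifth above A4 (660 Hz)

cents = lambda f_ref, f: 1200 * math.log2(f / f_ref)   # interval size in cents
print(f"C#5: equal {cs5_equal:.2f} Hz vs just {cs5_just:.2f} Hz "
      f"({cents(cs5_just, cs5_equal):+.1f} cents)")
print(f"E5 : equal {e5_equal:.2f} Hz vs just {e5_just:.2f} Hz "
      f"({cents(e5_just, e5_equal):+.1f} cents)")
```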
Equally tempered scales have been used and instruments built using various other numbers of equal intervals. The 19 equal temperament, first proposed and used by Guillaume Costeley in the 16th century, uses 19 equally spaced tones, offering better major thirds and far better minor thirds than normal 12-semitone equal temperament at the cost of a flatter fifth. The overall effect is one of greater consonance. Twenty-four equal temperament, with twenty-four equally spaced tones, is widespread in the pedagogy and notation of Arabic music. However, in theory and practice, the intonation of Arabic music conforms to rational ratios, as opposed to the irrational ratios of equally tempered systems.
While any analog to the equally tempered quarter tone is entirely absent from Arabic intonation systems, analogs to a three-quarter tone, or neutral second, frequently occur. These neutral seconds, however, vary slightly in their ratios dependent on maqam, as well as geography. Indeed, Arabic music historian Habib Hassan Touma has written that "the breadth of deviation of this musical step is a crucial ingredient in the peculiar flavor of Arabian music. To temper the scale by dividing the octave into twenty-four quarter-tones of equal size would be to surrender one of the most characteristic elements of this musical culture."
53 equal temperament arises from the near equality of 53 perfect fifths with 31 octaves, and was noted by Jing Fang and Nicholas Mercator.
Connections to mathematics
Set theory
Musical set theory uses the language of mathematical set theory in an elementary way to organize musical objects and describe their relationships. To analyze the structure of a piece of (typically atonal) music using musical set theory, one usually starts with a set of tones, which could form motives or chords. By applying simple operations such as transposition and inversion, one can discover deep structures in the music. Operations such as transposition and inversion are called isometries because they preserve the intervals between tones in a set.
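A small sketch of these operations on pitch classes modulo 12 shows that transposition and inversion leave the interval content of a set unchanged; the triad chosen is just an example.

```python
from itertools import combinations
from collections import Counter

# Pitch classes are integers mod 12; transposition T_n adds n, inversion I maps
# x to -x. Both preserve the multiset of intervals between tones (isometries).

def T(n, pcs):
    return {(p + n) % 12 for p in pcs}

def I(pcs):
    return {(-p) % 12 for p in pcs}

def interval_content(pcs):
    """Multiset of interval classes (1..6) between all pairs of tones."""
    return Counter(min((a - b) % 12, (b - a) % 12) for a, b in combinations(pcs, 2))

c_major_triad = {0, 4, 7}                      # C, E, G
print(interval_content(c_major_triad))
print(interval_content(T(5, c_major_triad)))   # same intervals, transposed
print(interval_content(I(c_major_triad)))      # same intervals, inverted (a minor triad)
```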
Abstract algebra
Expanding on the methods of musical set theory, some theorists have used abstract algebra to analyze music. For example, the pitch classes in an equally tempered octave form an abelian group with 12 elements. It is possible to describe just intonation in terms of a free abelian group.
Transformational theory is a branch of music theory developed by David Lewin. The theory allows for great generality because it emphasizes transformations between musical objects, rather than the musical objects themselves.
Theorists have also proposed musical applications of more sophisticated algebraic concepts. The theory of regular temperaments has been extensively developed with a wide range of sophisticated mathematics, for example by associating each regular temperament with a rational point on a Grassmannian.
The chromatic scale has a free and transitive action of the cyclic group ℤ/12ℤ, with the action being defined via transposition of notes. So the chromatic scale can be thought of as a torsor for the group ℤ/12ℤ.
Numbers and series
Some composers have incorporated the golden ratio and Fibonacci numbers into their work.
Category theory
The mathematician and musicologist Guerino Mazzola has used category theory (topos theory) for a basis of music theory, which includes using topology as a basis for a theory of rhythm and motives, and differential geometry as a basis for a theory of musical phrasing, tempo, and intonation.
Musicians who were or are also notable mathematicians
Albert Einstein – Accomplished pianist and violinist.
Art Garfunkel (Simon & Garfunkel) – Masters in Mathematics Education, Columbia University
Brian Cox – Professor of particle physics in the School of Physics and Astronomy at the University of Manchester.
Brian May (Queen) – BSc (Hons) in Mathematics and Physics, PhD in Astrophysics, both from Imperial College London.
Brian Wecht (Ninja Sex Party) – PhD in particle physics, University of California, San Diego
Dan Snaith – PhD Mathematics, Imperial College London
Delia Derbyshire – BA in mathematics and music from Cambridge.
Donald Knuth – Knuth is an organist and a composer. In 2016 he completed a musical piece for organ titled "Fantasia Apocalyptica". It was premièred in Sweden on January 10, 2018
Ethan Port (Savage Republic) – PhD Mathematics, University of Southern California
Gregg Turner (Angry Samoans) – PhD Mathematics, Claremont Graduate University
Jerome Hines – Five articles published in Mathematics Magazine 1951–1956.
Jonny Buckland (Coldplay) – Studied astronomy and mathematics at University College London.
Kit Armstrong – Degree in music and MSc in mathematics.
Manjul Bhargava – Plays the tabla, won the Fields Medal in 2014.
Phil Alvin (The Blasters) – Mathematics, University of California, Los Angeles
Philip Glass – Studied mathematics and philosophy at the University of Chicago.
Robert Schneider (The Apples in Stereo) – PhD Mathematics, Emory University
Tom Lehrer – BA mathematics from Harvard University.
William Herschel – Astronomer and played the oboe, violin, harpsichord and organ. He composed 24 symphonies and many concertos, as well as some church music.
See also
Computational musicology
Equal temperament
Euclidean rhythms (traditional musical rhythms that are generated by Euclid's algorithm)
Harmony search
Interval (music)
List of music software
Mathematics and art
Musical tuning
Non-Pythagorean scale
Piano key frequencies
Rhythm
The Glass Bead Game
3rd bridge (harmonic resonance based on equal string divisions)
Tonality diamond
Tonnetz
Utonality and otonality
References
Ivor Grattan-Guinness (1995) "Mozart 18, Beethoven 32: Hidden shadows of integers in classical music", pages 29 to 47 in History of Mathematics: States of the Art, Joseph W. Dauben, Menso Folkerts, Eberhard Knobloch and Hans Wussing editors, Academic Press
Further reading
Cool math for hot music - A first introduction to mathematics for music theorists by Guerino Mazzola, Maria Mannone, Yan Pang, Springer, 2016,
Music: A Mathematical Offering by Dave Benson, Cambridge University Press, 2006,
External links
Axiomatic Music Theory by S.M. Nemati
Music and Math by Thomas E. Fiore
Twelve-Tone Musical Scale.
Sonantometry or music as math discipline.
Music: A Mathematical Offering by Dave Benson.
Nicolaus Mercator use of Ratio Theory in Music at Convergence
The Glass Bead Game Hermann Hesse gave music and mathematics a crucial role in the development of his Glass Bead Game.
Harmony and Proportion. Pythagoras, Music and Space.
"Linear Algebra and Music"
Notefreqs — A complete table of note frequencies and ratios for midi, piano, guitar, bass, and violin. Includes fret measurements (in cm and inches) for building instruments.
Mathematics & Music, BBC Radio 4 discussion with Marcus du Sautoy, Robin Wilson & Ruth Tatlow (In Our Time, May 25, 2006)
Measuring note similarity with positive definite kernels, Measuring note similarity with positive definite kernels
Mathematics and art
Mathematics and culture | Music and mathematics | [
"Mathematics"
] | 4,031 | [
"Applied mathematics",
"Mathematics of music"
] |
5,644,032 | https://en.wikipedia.org/wiki/Earth-centered%2C%20Earth-fixed%20coordinate%20system | The Earth-centered, Earth-fixed coordinate system (acronym ECEF), also known as the geocentric coordinate system, is a cartesian spatial reference system that represents locations in the vicinity of the Earth (including its surface, interior, atmosphere, and surrounding outer space) as X, Y, and Z measurements from its center of mass. Its most common use is in tracking the orbits of satellites and in satellite navigation systems for measuring locations on the surface of the Earth, but it is also used in applications such as tracking crustal motion.
The distance from a given point of interest to the center of Earth is called the geocentric distance, a generalization of the geocentric radius that is not restricted to points on the reference ellipsoid surface.
The geocentric altitude is a type of altitude defined as the difference between the two aforementioned quantities (the geocentric distance minus the geocentric radius); it is not to be confused with the geodetic altitude.
Conversions between ECEF and geodetic coordinates (latitude and longitude) are discussed at geographic coordinate conversion.
Structure
As with any spatial reference system, ECEF consists of an abstract coordinate system (in this case, a conventional three-dimensional right-handed system), and a geodetic datum that binds the coordinate system to actual locations on the Earth. The ECEF that is used for the Global Positioning System (GPS) is the geocentric WGS 84, which currently includes its own ellipsoid definition. Other local datums such as NAD 83 may also be used. Due to differences between datums, the ECEF coordinates for a location will be different for different datums, although the differences between most modern datums is relatively small, within a few meters.
The ECEF coordinate system has the following parameters:
The origin at the center of the chosen ellipsoid. In WGS 84, this is center of mass of the Earth.
The Z axis is the line between the North and South Poles, with positive values increasing northward. In WGS 84, this is the international reference pole (IRP), which does not exactly coincide with the Earth's rotational axis. The slight "wobbling" of the rotational axis is known as polar motion, and can actually be measured against an ECEF.
The X axis is in the plane of the equator, passing through the origin and extending from 180° longitude (negative) to the prime meridian (positive); in WGS 84, this is the IERS Reference Meridian.
The Y axis is also in the plane of the equator, passing through the origin and extending from 90°W longitude (negative) to 90°E longitude (positive).
An example is the NGS data for a brass disk near Donner Summit, in California. Given the dimensions of the ellipsoid, the conversion from lat/lon/height-above-ellipsoid coordinates to X-Y-Z is straightforward—calculate the X-Y-Z for the given lat-lon on the surface of the ellipsoid and add the X-Y-Z vector that is perpendicular to the ellipsoid there and has length equal to the point's height above the ellipsoid. The reverse conversion is harder: given X-Y-Z can immediately get longitude, but no closed formula for latitude and height exists. See "Geodetic system." Using Bowring's formula in 1976 Survey Review the first iteration gives latitude correct within 10 degree as long as the point is within 10,000 meters above or 5,000 meters below the ellipsoid.
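A minimal sketch of the forward (geodetic to ECEF) conversion using the WGS 84 constants is given below; the test coordinates only approximate the Donner Summit area and are not the NGS values for the brass disk mentioned above.

```python
import math

# Geodetic (lat, lon, height above ellipsoid) -> ECEF (X, Y, Z), WGS 84 ellipsoid.
a = 6378137.0                # semi-major axis (m)
f = 1 / 298.257223563        # flattening
e2 = f * (2 - f)             # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1 - e2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(39.32, -120.33, 2200.0))   # roughly the Donner Summit area
```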
In astronomy
Geocentric coordinates can be used for locating astronomical objects in the Solar System in three dimensions along the Cartesian X, Y, and Z axes. They are differentiated from topocentric coordinates, which use the observer's location as the reference point for bearings in altitude and azimuth.
For nearby stars, astronomers use heliocentric coordinates, with the center of the Sun as the origin. The plane of reference can be aligned with the Earth's celestial equator, the ecliptic, or the Milky Way's galactic equator. These 3D celestial coordinate systems add actual distance as the Z axis to the equatorial, ecliptic, and galactic coordinate systems used in spherical astronomy.
See also
Earth-centered inertial (ECI)
Geodetic system
International Terrestrial Reference System and Frame (ITRS)
Orbital state vectors
Planetary coordinate system
References
External links
ECEF datum transformation Notes on converting ECEF coordinates to WGS-84 datum
Datum Transformations of GPS Positions Application Note Clearer notes on converting ECEF coordinates to WGS-84 datum
geodetic datum overview orientation of the coordinate system and additional information
GeographicLib includes a utility CartConvert which converts between geodetic and geocentric (ECEF) or local Cartesian (ENU) coordinates. This provides accurate results for all inputs including points close to the center of the Earth.
EPSG:4978
Global Positioning System
Astronomical coordinate systems | Earth-centered, Earth-fixed coordinate system | [
"Astronomy",
"Mathematics",
"Technology",
"Engineering"
] | 1,016 | [
"Wireless locating",
"Aerospace engineering",
"Astronomical coordinate systems",
"Aircraft instruments",
"Coordinate systems",
"Global Positioning System"
] |
5,644,212 | https://en.wikipedia.org/wiki/Local%20tangent%20plane%20coordinates | Local tangent plane coordinates (LTP) are part of a spatial reference system based on the tangent plane defined by the local vertical direction and the Earth's axis of rotation.
They are also known as local ellipsoidal system, local geodetic coordinate system, local vertical, local horizontal coordinates (LVLH), or topocentric coordinates.
It consists of three coordinates: one represents the position along the northern axis, one along the local eastern axis, and one represents the vertical position.
Two right-handed variants exist: east, north, up (ENU) coordinates and north, east, down (NED) coordinates.
They serve for representing state vectors that are commonly used in aviation and marine cybernetics.
Axes
These frames are location dependent. For movements around the globe, like air or sea navigation, the frames are defined as tangent to the lines of geographical coordinates:
East–west tangent to parallels,
North–south tangent to meridians, and
Up–down in the direction normal to the oblate spheroid used as Earth's ellipsoid, which does not generally pass through the center of Earth.
Local east, north, up (ENU) coordinates
In many targeting and tracking applications the local East, North, Up (ENU) Cartesian coordinate system is far more intuitive and practical than ECEF or Geodetic coordinates. The local ENU coordinates are formed from a plane tangent to the Earth's surface fixed to a specific location and hence it is sometimes known as a "Local Tangent" or "local geodetic" plane. By convention the east axis is labeled x, the north y and the up z.
Local north, east, down (NED) coordinates
In an airplane, most objects of interest are below the aircraft, so it is sensible to define down as a positive number. The North, East, Down (NED) coordinates allow this as an alternative to the ENU. By convention, the north axis is labeled x′, the east y′, and the down z′. To avoid confusion between x and x′, etc., in this article we will restrict the local coordinate frame to ENU.
The origin of this coordinate system is usually chosen to be a fixed point on the surface of the geoid below the aircraft's center of gravity. When that is the case, the coordinate system is sometimes referred as a "local-North-East-Down Coordinate System".
NED coordinates are similar to ECEF in that they are Cartesian; however, they can be more convenient due to the relatively small numbers involved, and also because of the intuitive axes. NED and ECEF coordinates can be related with the following formula:
$p_{NED} = R\,(p_{ECEF} - p_{Ref})$
where $p_{NED}$ is a 3D position in a NED system, $p_{ECEF}$ is the corresponding ECEF position, $p_{Ref}$ is the reference ECEF position (where the local tangent plane originates), and $R$ is a rotation matrix whose rows are the north, east, and down axes. $R$ may be defined conveniently from the latitude $\varphi$ and longitude $\lambda$ corresponding to $p_{Ref}$:
$R = \begin{pmatrix} -\sin\varphi\cos\lambda & -\sin\varphi\sin\lambda & \cos\varphi \\ -\sin\lambda & \cos\lambda & 0 \\ -\cos\varphi\cos\lambda & -\cos\varphi\sin\lambda & -\sin\varphi \end{pmatrix}$
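A minimal Python sketch of this relation, with an assumed function name and made-up sample positions:

```python
import math

def ecef_to_ned(p_ecef, p_ref, lat_deg, lon_deg):
    """Rotate an ECEF position into local NED coordinates about a reference
    ECEF point located at the given geodetic latitude and longitude."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # rows of R are the local north, east and down unit vectors expressed in ECEF
    r = [
        [-math.sin(lat) * math.cos(lon), -math.sin(lat) * math.sin(lon),  math.cos(lat)],
        [-math.sin(lon),                  math.cos(lon),                  0.0],
        [-math.cos(lat) * math.cos(lon), -math.cos(lat) * math.sin(lon), -math.sin(lat)],
    ]
    d = [p - q for p, q in zip(p_ecef, p_ref)]          # p_ECEF - p_Ref
    return [sum(r[i][j] * d[j] for j in range(3)) for i in range(3)]

# Made-up ECEF coordinates (meters) close to a reference point at 37.4 N, 122.1 W
print(ecef_to_ned([-2700000.0, -4292000.0, 3855000.0],
                  [-2700050.0, -4292060.0, 3854920.0], 37.4, -122.1))
```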
See also
Axes conventions
Figure of Earth
Horizontal coordinate system
Geodetic coordinates
Geodetic system
Grid reference system
Local coordinates
References
Aerospace
Geographic coordinate systems | Local tangent plane coordinates | [
"Physics",
"Mathematics"
] | 626 | [
"Aerospace",
"Geographic coordinate systems",
"Space",
"Coordinate systems",
"Spacetime"
] |
5,644,238 | https://en.wikipedia.org/wiki/Female%20sabotage | Female sabotage is an evolutionary theory regarding the propensity of certain females to select "burdened" males of their species for mating.
History
Soon after Charles Darwin published his theory of natural selection, he was faced with a puzzle. If natural selection suggests "survival of the fittest," then there is a question as to why some males have traits that detract from their survival.
Darwin knew that there was more to natural selection than simple fitness. An equally important part of the struggle of life regards reproduction. In this case, the question becomes, "Why would a female burden her offspring with dangerous traits by mating with a similarly burdened male?"
Noting that the males with burdensome traits are almost entirely those in polygamous species, where a minority of males generally mate with many females, Darwin had an insight. He realized that if females found these male burdens more "attractive", and if that attractiveness resulted in more matings by burdened males, then the increase in matings of a few sons might offset the death of many other sons as a result of the burden. In effect, if the success of the surviving males produced enough offspring to cover more than the loss of potential offspring from their lost brothers, then the female who mated with a burdened male had chosen correctly.
Female sabotage theory
In 1996, however, Joe Abraham presented a re-interpretation of the problem. In polygamous species, males generally contribute nothing to the nurturing of offspring, but nevertheless continue to consume finite resources. In such situations, males effectively become competitors with females and young once they are finished mating. This gives females a reason to sabotage males, and mating gives them an opportunity to do so. By choosing to mate exclusively with males who are unlikely to survive because of their burdens, the females ensure that as the males die, more food and other resources will remain for females and their young. Because females are the limiting resource in most species, as their numbers increase, population fitness will also increase.
Just as a given amount of land can only produce a finite amount of grazing, and a limited amount of grazing can only support a limited number of grazing animals, so a given number of grazing animals can only sustain a limited number of predators. Similar limitations apply to all living things, and are known as the carrying capacity of a physical area. If males' burdens are more likely to draw the interest of local predators, then such males effectively shift predation away from females and their young. In this case, the females and young will gain an added benefit from decreased predation, and enjoy even higher rates of survivability.
Abraham's explanation reunites the major split in sexual selection, intrasexual competition (male combat) and intersexual selection (female choice), under one rubric. Under female sabotage, the increase in resources becomes the critical factor, and the cause of increased male mortality is secondary. The theory also offers new, feminist approaches to leks, harems, resource guarding and mate location.
Perhaps the most attractive aspect of Abraham's explanation, however, is that it can easily work with any of the many current theories of sexual selection, and must play some role in them. An increase in resources and a decrease in predation for females and their young is an inevitable result of increased male mortality, regardless of what mechanism drives females to mate with males carrying burdensome traits.
References
Abraham, J.N. 1998. "La Saboteuse: An Ecological Theory of Sexual Dimorphism in Animals." Acta Biotheoretica 46:23-35.
Evolutionary biology | Female sabotage | [
"Biology"
] | 737 | [
"Evolutionary biology"
] |
5,644,561 | https://en.wikipedia.org/wiki/Marchenko%20equation | In mathematical physics, more specifically the one-dimensional inverse scattering problem, the Marchenko equation (or Gelfand–Levitan–Marchenko equation or GLM equation), named after Israel Gelfand, Boris Levitan and Vladimir Marchenko, is derived by computing the Fourier transform of the scattering relation:
$$K(x,y) + F(x+y) + \int_x^{\infty} K(x,z)\, F(z+y)\, dz = 0,$$
where $F$ is a symmetric kernel computed from the scattering data. Solving the Marchenko equation, one obtains the kernel $K$ of the transformation operator, from which the potential can be read off. This equation is derived from the Gelfand–Levitan integral equation, using the Povzner–Levitan representation.
Application to scattering theory
Suppose that for a potential $u(x)$ of the Schrödinger operator $H = -\frac{d^2}{dx^2} + u(x)$, one has the scattering data $\left(R(k), \{\chi_n, \beta_n\}_{n=1}^{N}\right)$, where $R(k)$ is the reflection coefficient from continuous scattering, given as a function $R : \mathbb{R} \to \mathbb{C}$, and the real parameters $\chi_1, \dots, \chi_N > 0$ are from the discrete bound spectrum.
Then defining
$$F(x) = \sum_{n=1}^{N} \beta_n e^{-\chi_n x} + \frac{1}{2\pi} \int_{-\infty}^{\infty} R(k)\, e^{ikx}\, dk,$$
where the $\beta_n$ are non-zero constants, solving the GLM equation
$$K(x,y) + F(x+y) + \int_x^{\infty} K(x,z)\, F(z+y)\, dz = 0$$
for $K(x,y)$ allows the potential $u(x)$ to be recovered using the formula
$$u(x) = -2\, \frac{d}{dx} K(x,x).$$
See also
Lax pair
Notes
References
Eponymous equations of physics
Integral equations
Scattering theory | Marchenko equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 219 | [
"Scattering theory",
"Equations of physics",
"Integral equations",
"Scattering stubs",
"Eponymous equations of physics",
"Mathematical objects",
"Equations",
"Scattering"
] |
5,644,796 | https://en.wikipedia.org/wiki/Nautical%20wheelers | Nautical wheelers refers to shipbuilders who specialize in fabricating the hulls of ships. The technique, called wheeling, is used to form the metal panels that make up ships' hulls.
Nautical Wheelers is the name of a song by Jimmy Buffett, originally released on the 1974 album A1A.
Shipbuilding | Nautical wheelers | [
"Engineering"
] | 68 | [
"Shipbuilding",
"Marine engineering"
] |
5,645,320 | https://en.wikipedia.org/wiki/Ecalene | Ecalene is a trademarked mixture of alcohols, which may be used as fuel or as a fuel additive. The typical composition of Ecalene is as follows:
References
Power Energy Fuels, Inc.
PEFI Alcohol Process Development & Demonstration
Fuels
Alcohols
Fuel additives | Ecalene | [
"Chemistry"
] | 57 | [
"Fuels",
"Chemical energy sources"
] |
5,645,353 | https://en.wikipedia.org/wiki/National%20Romantic%20style | The National Romantic style was a Nordic architectural style that was part of the National Romantic movement during the late 19th and early 20th centuries. It is often considered to be a form of Art Nouveau.
The National Romantic style spread across Denmark, Norway, Sweden, Finland, Estonia, and Latvia, as well as Russia, where it also appeared as Russian Revival architecture. Unlike some nostalgic Gothic Revival style architecture in some countries, Romantic architecture often expressed progressive social and political ideals, through reformed domestic architecture.
Nordic designers turned to early medieval architecture and even prehistoric precedents to construct a style appropriate to the perceived character of people. The style can be seen as a reaction to industrialism and an expression of the same "Dream of the North" Romantic nationalism that gave impetus to renewed interest in the study of the history of Scandinavia, along with the rediscovery of the eddas and sagas of Nordic mythology.
Examples
Bergen Station (Bergen stasjon) (1913, Norway)
Copenhagen City Hall (Københavns Rådhus) (1905, Denmark)
(Königlich-Sächsisches Landgericht) (1902, Germany)
Finnish National Theatre (Suomen Kansallisteatteri) (1902, Finland)
Frogner Church (Frogner kirke) (1907, Norway)
Holdre Manor (Holdre mõis) (1910, Estonia)
National Museum of Finland (Suomen Kansallismuseo) (1905, Finland)
Norwegian Institute of Technology (Norges tekniske høgskole) (1910, Norway)
Pohjola Insurance building (1901, Finland)
Polytechnic Students' Union or Sampo Building (1903, Finland)
Röhss Museum (Röhsska konstslöjdsmuseet) (1916, Sweden)
Stockholm City Hall (Stockholms stadshus) (1923, Sweden)
Stockholm Court House (Stockholms Rådhus) (1915, Sweden)
Taagepera Castle (Taagepera mõis) (1912, Estonia)
Tarvaspää, (1913, Finland) the house and studio built for himself by Finnish painter Akseli Gallen-Kallela
Tolstoy House (Толстовский дом) (1912, Russia)
Church of the Epiphany (Uppenbarelsekyrkan) (1913, Sweden)
Vålerenga Church (Vålerenga kirke) (1902, Norway)
Saint Thérèse of the Child Jesus Church () (1932, Barcelona)
Sweden
Finland
Estonia
Denmark
Russia
See also
List of architectural styles
References
State archives: Swedish National Romantic architecture
External links
Scandinavian architecture
Architecture in Denmark
Architecture in Finland
Architecture in Norway
Architecture in Sweden
Art Nouveau architecture
Architectural history
19th-century architectural styles
20th-century architectural styles | National Romantic style | [
"Engineering"
] | 568 | [
"Architectural history",
"Architecture"
] |
5,645,363 | https://en.wikipedia.org/wiki/Clean-in-place | Clean-in-place (CIP) is an automated method of cleaning the interior surfaces of pipes, vessels, equipment, filters and associated fittings, without major disassembly. CIP is commonly used for equipment such as piping, tanks, and fillers. CIP employs turbulent flow through piping, and/or spray balls for tanks or vessels. In some cases, CIP can also be accomplished with fill, soak and agitate.
Up to the 1950s, closed systems were disassembled and cleaned manually. The advent of CIP was a boon to industries that needed frequent internal cleaning of their processes. Industries that rely heavily on CIP are those requiring high levels of hygiene, and include: dairy, beverage, brewing, processed foods, pharmaceutical, and cosmetics. A well designed CIP system is needed to accomplish required results from CIP.
The benefit to industries that use CIP is that the cleaning is faster, less labor-intensive and more repeatable, and poses less of a chemical exposure risk. CIP started as a manual practice involving a balance tank, centrifugal pump, and connection to the system being cleaned. Since the 1950s, CIP has evolved to include fully automated systems with programmable logic controllers, multiple balance tanks, sensors, valves, heat exchangers, data acquisition and specially designed spray nozzle systems. Simple, manually operated CIP systems can still be found in use today. However, fully automated CIP systems are in demand because they avoid human error and deliver consistent results with fewer resources.
Depending on soil load and process geometry, the CIP design principles are as follows:
deliver a highly turbulent, high flow-rate solution to effect good cleaning (applies to pipe circuits and some filled equipment). The required flow rate can be calculated by assuming a minimum fluid velocity of 1.5 m/s (see the sketch after this list).
deliver solution as a low-energy spray to fully wet the surface (applies to lightly soiled vessels where a static spray ball may be used).
deliver a high energy impinging spray (applies to highly soiled or large diameter vessels where a dynamic spray device may be used).
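A minimal sketch of the flow-rate calculation mentioned in the first design principle, assuming the 1.5 m/s minimum velocity quoted above; the function name and example diameter are illustrative only.

```python
import math

MIN_VELOCITY = 1.5  # m/s, assumed minimum fluid velocity for turbulent CIP flow

def min_cip_flow(pipe_inner_diameter_m):
    """Minimum volumetric flow (m^3/h) that keeps the fluid at or above
    MIN_VELOCITY in a circular pipe of the given inner diameter (m)."""
    area = math.pi * (pipe_inner_diameter_m / 2) ** 2   # cross-section, m^2
    return MIN_VELOCITY * area * 3600                    # m^3/s converted to m^3/h

# Example: a line with a 50 mm inner diameter needs roughly 10.6 m^3/h
print(round(min_cip_flow(0.05), 1))
```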
Factors affecting the effectiveness of the cleaning agents
Temperature of the cleaning solution. Elevating the temperature of a cleaning solution increases its dirt removal efficiency. Molecules with high kinetic energy dislodge dirt faster than the slow moving molecules of a cold solution.
Concentration of the cleaning agent. A concentrated cleaning solution will clean a dirty surface much better than a dilute one due to the increased surface binding capacity.
Contact time of the cleaning solution. The longer the detergent contact period, the higher the cleaning efficiency. After some time, the detergent eventually dissolves the hard stains/soil from the dirty surface.
Pressure exerted by the cleaning solution (or turbulence). The turbulence creates an abrasive force that dislodges stubborn soil from the dirty surface.
Groundwater sources
Originally developed for cleaning closed systems as described above, CIP has more recently been applied to groundwater source boreholes used for high end-uses such as natural mineral/spring waters, food production and carbonated soft drinks (CSD).
Boreholes that are open to the atmosphere are prone to a number of chemical and microbiological problems, so sources for high end-use are often sealed at the surface (headworks). An air filter is built into the headworks to permit the borehole to inhale and exhale when the water level rises and falls quickly (usually due to the pump being turned on and off) without drawing in airborne particles or contaminants (spores, molds, fungi, bacteria, etc.).
In addition, CIP systems can be built into the borehole headworks to permit the injection of cleaning solutions (such as sodium hypochlorite or other sanitizers) and the subsequent recirculation of the mix of these chemicals and the groundwater. This process cleans the borehole interior and equipment without any invasive maintenance being required.
Biomanufacturing Equipment
CIP is commonly used for cleaning bioreactors, fermenters, mix vessels, and other equipment used in biotech manufacturing, pharmaceutical manufacturing and food and beverage manufacturing. CIP is performed to remove or obliterate previous mammalian cell culture batch components. It is used to remove in-process residues, control bioburden, and reduce endotoxin levels within processing equipment and systems. Residue removal is accomplished during CIP with a combination of heat, chemical action, and turbulent flow.
The U.S. Food and Drug Administration published a CIP regulation in 1978 applicable to pharmaceutical manufacturing. The regulation states, "Equipment and utensils shall be cleaned, maintained, and sanitized at appropriate intervals to prevent malfunctions or contamination that would alter the safety, identity, strength, quality or purity of the drug product beyond the official or other established requirements."
Repeatable, reliable, and effective cleaning is of the utmost importance in a manufacturing facility. Cleaning procedures are validated to demonstrate that they are effective, reproducible, and under control. In order to adequately clean processing equipment, the equipment must be designed with smooth stainless steel surfaces and interconnecting piping that has cleanable joints. The chemical properties of the cleaning agents must properly interact with the chemical and physical properties of the residues being removed.
A typical CIP cycle consists of many steps which often include (in order):
Pre-rinse with WFI (water for injection) or PW (purified water) which is performed to wet the interior surface of the tank and remove residue. It also provides a non-chemical pressure test of the CIP flow path.
Caustic solution single pass flush through the vessel to drain. Caustic is the main cleaning solution.
Caustic solution re-circulation through the vessel.
Intermediate WFI or PW rinse
Acid solution wash – used to remove mineral precipitates and protein residues.
Final rinse with WFI or PW – rinses to flush out residual cleaning agents.
Disinfectant solution wash or hot water circulation to kill all microbes.
Final air blow – used to remove moisture remaining after CIP cycle.
Critical parameters must be met and remain within the specification for the duration of the cycle. If the specification is not reached or maintained, cleaning will not be ensured and will have to be repeated. Critical parameters include temperature, flow rate/supply pressure, chemical concentration, chemical contact time, and final rinse conductivity (which shows that all cleaning chemicals have been removed).
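A hedged sketch of how such a specification check might look; the parameter names and limits below are illustrative placeholders, not values from any validated protocol.

```python
# (low, high) limits; None means that side is unconstrained
SPEC = {
    "temperature_c": (75.0, 85.0),
    "flow_rate_m3_per_h": (10.0, None),
    "caustic_concentration_percent": (1.0, 2.0),
    "final_rinse_conductivity_uS_cm": (None, 5.0),
}

def cycle_within_spec(samples):
    """samples: list of dicts of values logged during the cycle.
    Returns True only if every logged value stays inside its (low, high) range."""
    for sample in samples:
        for name, (low, high) in SPEC.items():
            value = sample[name]
            if (low is not None and value < low) or (high is not None and value > high):
                return False
    return True

print(cycle_within_spec([{"temperature_c": 80.0, "flow_rate_m3_per_h": 11.2,
                          "caustic_concentration_percent": 1.5,
                          "final_rinse_conductivity_uS_cm": 3.1}]))
```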
See also
Effluent guidelines (U.S. wastewater regulations)
Effluent limitation
Good manufacturing practice
Ice pigging
Washdown
Wastewater
References
Cleaning methods
Environmental engineering | Clean-in-place | [
"Chemistry",
"Engineering"
] | 1,368 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
5,645,543 | https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%208B | Bone morphogenetic protein 8B is a protein that in humans is encoded by the BMP8B gene.
The protein encoded by this gene is a member of the TGF-β superfamily. It has close sequence homology to BMP7 and BMP5 and is believed to play a role in bone and cartilage development. It has been shown to be expressed in the hippocampus of murine embryos.
The bone morphogenetic proteins (BMPs) are a family of secreted signaling molecules that can induce ectopic bone growth. Many BMPs are part of the transforming growth factor-beta (TGFB) superfamily. BMPs were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. Based on its expression early in embryogenesis, the BMP encoded by this gene has a proposed role in early development. In addition, the fact that this BMP is closely related to BMP5 and BMP7 has led to speculation of possible bone inductive activity.
References
Further reading
External links
Bone morphogenetic protein
Developmental genes and proteins
TGFβ domain | Bone morphogenetic protein 8B | [
"Biology"
] | 248 | [
"Induced stem cells",
"Developmental genes and proteins"
] |
5,645,622 | https://en.wikipedia.org/wiki/Sound%20recording%20copyright%20symbol | The sound recording copyright symbol or phonogram symbol, ℗ (the letter P in a circle), is the copyright symbol used to provide notice of copyright in a sound recording (phonogram) embodied in a phonorecord (LPs, audiotapes, cassette tapes, compact discs, etc.). It was first introduced in the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations in 1961. The United States added it to its copyright law as part of its adherence to the Geneva Phonograms Convention in 17 U.S.C. § 402, the codification of the Copyright Act of 1976.
The letter P in ℗ stands for phonogram, the legal term used in most English-speaking countries to refer to works known in U.S. copyright law as "sound recordings".
A sound recording has a separate copyright that is distinct from that of the underlying work (usually a musical work, expressible in musical notation and written lyrics), if any. The sound recording copyright notice extends to a copyright for just the sound itself and will not apply to any other rendition or version, even if performed by the same artist(s).
International treaties
The symbol first appeared in the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations, a multilateral treaty relating to copyright, in 1961. Article 11 of the Rome Convention provided:
When the Geneva Phonograms Convention, another multilateral copyright treaty, was signed in 1971, it included a similar provision in its Article 5:
United States law
The symbol was introduced into United States copyright law in 1971, when the US extended limited copyright protection to sound recordings. The United States anticipated signing onto the Geneva Phonograms Convention, which it had helped draft. On October 15, 1971, Congress enacted the Sound Recording Act of 1971, also known as the Sound Recording Amendment of 1971, which amended the 1909 Copyright Act by adding protection for sound recordings and prescribed a copyright notice for sound recordings. The Sound Recording Act added a copyright notice provision specific to sound recordings, which incorporated the symbol prescribed in the Geneva Convention, to the end of section 19 of the 1909 Copyright Act:
The designation of the symbol continues in § 402 of the current Copyright Act of 1976. That section provides for a non-mandatory copyright notice on sound recordings:
If a notice appears on the phonorecords, it shall consist of the following three elements:
(1) the symbol ℗ (the letter P in a circle); and
(2) the year of first publication of the sound recording; and
(3) the name of the owner of copyright in the sound recording, or an abbreviation by which the name can be recognized, or a generally known alternative designation of the owner; if the producer of the sound recording is named on the phonorecord labels or containers, and if no other name appears in conjunction with the notice, the producer’s name shall be considered a part of the notice.
Encoding
The symbol has a code point in Unicode at U+2117 (SOUND RECORDING COPYRIGHT), with the supplementary Unicode character property names "published" and "phonorecord sign".
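A trivial check of the code point, using Python's standard unicodedata module (the property aliases quoted above are informal names, not the formal character name):

```python
import unicodedata

symbol = "\u2117"                 # U+2117
print(symbol)                     # ℗
print(hex(ord(symbol)))           # 0x2117
print(unicodedata.name(symbol))   # SOUND RECORDING COPYRIGHT
```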
See also
Copyright symbol
Enclosed Alphanumerics
References
Copyright law
Typographical symbols
United States copyright law
Symbols introduced in 1971 | Sound recording copyright symbol | [
"Mathematics"
] | 668 | [
"Symbols",
"Typographical symbols"
] |
5,645,755 | https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%206 | Bone morphogenetic protein 6 is a protein that in humans is encoded by the BMP6 gene.
The protein encoded by this gene is a member of the TGFβ superfamily. Bone morphogenetic proteins are known for their ability to induce the growth of bone and cartilage. BMP6 is able to induce all osteogenic markers in mesenchymal stem cells.
The bone morphogenetic proteins (BMPs) are a family of secreted signaling molecules that can induce ectopic bone growth. BMPs are part of the transforming growth factor-beta (TGFB) superfamily. BMPs were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. Based on its expression early in embryogenesis, the BMP encoded by this gene has a proposed role in early development. In addition, the fact that this BMP is closely related to BMP5 and BMP7 has led to speculation of possible bone inductive activity.
As of April 2009, an additional function of BMP6 has been identified, as described in Nature Genetics 41(4):386–388 (April 2009): BMP6 is the key regulator of hepcidin, the small peptide secreted by the liver which is the major regulator of iron metabolism in mammals.
References
Further reading
External links
Bone morphogenetic protein
Developmental genes and proteins
TGFβ domain | Bone morphogenetic protein 6 | [
"Biology"
] | 302 | [
"Induced stem cells",
"Developmental genes and proteins"
] |
5,646,030 | https://en.wikipedia.org/wiki/Lute%20of%20Pythagoras | The lute of Pythagoras is a self-similar geometric figure made from a sequence of pentagrams.
Constructions
The lute may be drawn from a sequence of pentagrams.
The centers of the pentagrams lie on a line and (except for the first and largest of them) each shares two vertices with the next larger one in the sequence.
An alternative construction is based on the golden triangle, an isosceles triangle with base angles of 72° and apex angle 36°. Two smaller copies of the same triangle may be drawn inside the given triangle, having the base of the triangle as one of their sides. The two new edges of these two smaller triangles, together with the base of the original golden triangle, form three of the five edges of the polygon. Adding a segment between the endpoints of these two new edges cuts off a smaller golden triangle, within which the construction can be repeated.
Some sources add another pentagram, inscribed within the inner pentagon of the largest pentagram of the figure. The other pentagons of the figure do not have inscribed pentagrams.
Properties
The convex hull of the lute is a kite shape with three 108° angles and one 36° angle. The sizes of any two consecutive pentagrams in the sequence are in the golden ratio to each other, and many other instances of the golden ratio appear within the lute.
History
The lute is named after the ancient Greek mathematician Pythagoras, but its origins are unclear. An early reference to it is in a 1990 book on the golden ratio by Boles and Newman.
See also
Spidron
References
Fractals
Golden ratio | Lute of Pythagoras | [
"Mathematics"
] | 337 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Golden ratio",
"Fractals",
"Mathematical relations"
] |
5,646,252 | https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein%205 | Bone morphogenetic protein 5 is a protein that in humans is encoded by the BMP5 gene.
The protein encoded by this gene is a member of the TGFβ superfamily. Bone morphogenetic proteins are known for their ability to induce bone and cartilage development. BMP5 may play a role in certain cancers. Like other BMPs, BMP5 is inhibited by chordin and noggin. It is expressed in the trabecular meshwork and optic nerve head and may have a role in their development and normal function. It is also expressed in the lung and liver.
This gene encodes a member of the bone morphogenetic protein family which is part of the transforming growth factor-beta superfamily. The superfamily includes large families of growth and differentiation factors. Bone morphogenetic proteins were originally identified by an ability of demineralized bone extract to induce endochondral osteogenesis in vivo in an extraskeletal site. These proteins are synthesized as prepropeptides, cleaved, and then processed into dimeric proteins. This protein may act as an important signaling molecule within the trabecular meshwork and optic nerve head, and may play a potential role in glaucoma pathogenesis. This gene is differentially regulated during the formation of various tumors.
References
External links
Further reading
Bone morphogenetic protein
Developmental genes and proteins
TGFβ domain | Bone morphogenetic protein 5 | [
"Biology"
] | 292 | [
"Induced stem cells",
"Developmental genes and proteins"
] |
5,646,277 | https://en.wikipedia.org/wiki/List%20of%20limits | This is a list of limits for common functions such as elementary functions. In this article, the terms a, b and c are constants with respect to x.
Limits for general functions
Definitions of limits and related concepts
$\lim_{x \to c} f(x) = L$ if and only if for every $\varepsilon > 0$ there exists $\delta > 0$ such that $0 < |x - c| < \delta$ implies $|f(x) - L| < \varepsilon$. This is the (ε, δ)-definition of limit.
The limit superior and limit inferior of a sequence are defined as $\limsup_{n \to \infty} x_n = \lim_{n \to \infty} \left( \sup_{m \ge n} x_m \right)$ and $\liminf_{n \to \infty} x_n = \lim_{n \to \infty} \left( \inf_{m \ge n} x_m \right)$.
A function, $f$, is said to be continuous at a point, c, if $\lim_{x \to c} f(x) = f(c)$.
Operations on a single known limit
If then:
if L is not equal to 0.
if n is a positive integer
if n is a positive integer, and if n is even, then L > 0.
In general, if g(x) is continuous at L and then
Operations on two known limits
If and then:
Limits involving derivatives or infinitesimal changes
In these limits, the infinitesimal change is often denoted $\Delta x$ or $h$. If $f$ is differentiable at $x$,
$\lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = f'(x)$. This is the definition of the derivative. All differentiation rules can also be reframed as rules involving limits. For example, if g(x) is differentiable at x,
$\lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h} = f'(g(x))\, g'(x)$, provided that $f$ is differentiable at $g(x)$. This is the chain rule.
$\lim_{h \to 0} \frac{f(x+h)\,g(x+h) - f(x)\,g(x)}{h} = f'(x)\,g(x) + f(x)\,g'(x)$. This is the product rule.
If $f$ and $g$ are differentiable on an open interval containing c, except possibly at c itself, and $\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0$ or $\pm\infty$ (with $g'(x) \neq 0$ near c), L'Hôpital's rule can be used: $\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}$.
Inequalities
If $f(x) \le g(x)$ for all x in an interval that contains c, except possibly c itself, and the limits of $f$ and $g$ both exist at c, then $\lim_{x \to c} f(x) \le \lim_{x \to c} g(x)$.
If $g(x) \le f(x) \le h(x)$ for all x in an open interval that contains c, except possibly c itself, and $\lim_{x \to c} g(x) = \lim_{x \to c} h(x) = L$, then $\lim_{x \to c} f(x) = L$.
This is known as the squeeze theorem. This applies even in the cases that the functions take on different values at c, or are discontinuous at c. (A numerical illustration follows.)
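A quick numerical illustration (a sketch, not part of the original list): for f(x) = x² sin(1/x), the bounds −x² ≤ f(x) ≤ x² squeeze the limit at 0 to 0 even though sin(1/x) oscillates without settling.

```python
import math

for x in [0.1, 0.01, 0.001, 0.0001]:
    f = x ** 2 * math.sin(1 / x)
    # the value stays trapped between -x^2 and x^2, so it is forced toward 0
    print(x, -x ** 2 <= f <= x ** 2, f)
```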
Polynomials and functions of the form xa
Polynomials in x
if n is a positive integer
In general, if $p(x)$ is a polynomial then, by the continuity of polynomials, $\lim_{x \to c} p(x) = p(c)$. This is also true for rational functions, as they are continuous on their domains.
Functions of the form xa
In particular,
. In particular,
Exponential functions
Functions of the form ag(x)
, due to the continuity of
Functions of the form xg(x)
Functions of the form f(x)g(x)
. This limit can be derived from this limit.
Sums, products and composites
for all positive a.
Logarithmic functions
Natural logarithms
, due to the continuity of . In particular,
. This limit follows from L'Hôpital's rule.
, hence
Logarithms to arbitrary bases
For b > 1,
For b < 1,
Both cases can be generalized to:
where and is the Heaviside step function
Trigonometric functions
If is expressed in radians:
These limits both follow from the continuity of sin and cos.
. Or, in general,
, for a not equal to 0.
, for b not equal to 0.
, for integer n.
. Or, in general,
, for a not equal to 0.
, for b not equal to 0.
, where x0 is an arbitrary real number.
, where d is the Dottie number. x0 can be any arbitrary real number.
Sums
In general, any infinite series is the limit of its partial sums. For example, an analytic function is the limit of its Taylor series, within its radius of convergence.
$\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{k} = \infty$. This is known as the harmonic series.
$\lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} - \ln n \right) = \gamma$. This is the Euler–Mascheroni constant.
Notable special limits
. This can be proven by considering the inequality at .
. This can be derived from Viète's formula for .
Limiting behavior
Asymptotic equivalences
Asymptotic equivalences, $f(x) \sim g(x)$, are true if $\lim_{x \to \infty} \frac{f(x)}{g(x)} = 1$. Therefore, they can also be reframed as limits. Some notable asymptotic equivalences include
$\pi(x) \sim \frac{x}{\ln x}$, due to the prime number theorem, where $\pi(x)$ is the prime counting function.
$n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$, due to Stirling's approximation.
Big O notation
The behaviour of functions described by Big O notation can also be described by limits. For example, $f(x) = O(g(x))$ as $x \to \infty$ if $\limsup_{x \to \infty} \frac{|f(x)|}{|g(x)|} < \infty$.
References
Limits (mathematics)
Limits
Functions and mappings | List of limits | [
"Mathematics"
] | 827 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
5,646,338 | https://en.wikipedia.org/wiki/Savart | The savart is a unit of measurement for musical pitch intervals. One savart is equal to one thousandth of a decade (a 10/1 ratio, 3,986.313714 cents), i.e. about 3.9863 cents. Musically, in just intonation, the interval of a decade is precisely a just major twenty-fourth, or, in other words, three octaves and a just major third. Today, musical use of the savart has largely been replaced by the cent and the millioctave. The savart is practically the same as the earlier heptameride (eptameride), one seventh of a meride. One tenth of an heptameride is a decameride and a hundredth of an heptameride (thousandth of a decade) is approximately one jot.
Definition
If $r$ is the ratio of frequencies of a given interval, the corresponding measure in savarts is given by:
$s = 1000 \log_{10} r$
or
$r = 10^{s/1000}.$
Like the more common cent, the savart is a logarithmic measure, and thus intervals can be added by simply adding their savart values, instead of multiplying them as you would frequencies. The number of savarts in an octave is 1000 times the base-10 logarithm of 2, or nearly 301.03. Sometimes this is rounded to 300, which makes the unit more useful for equal temperament.
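A minimal sketch of these conversions (the function names are illustrative):

```python
import math

def interval_in_savarts(f2, f1):
    """Size in savarts of the interval between frequencies f2 and f1."""
    return 1000 * math.log10(f2 / f1)

def savarts_to_cents(s):
    """One octave is 1200 cents and about 301.03 savarts, so scale by their ratio."""
    return s * 1200 / (1000 * math.log10(2))

octave = interval_in_savarts(2.0, 1.0)
print(round(octave, 2))                     # about 301.03 savarts
print(round(savarts_to_cents(1.0), 4))      # about 3.9863 cents per savart
```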
Conversion
The conversion from savarts into cents, millioctaves or millidecades is:
1 savart = 0.001 decade = 1 millidecade ≈ 3.9863 cents ≈ 3.3219 millioctaves
History
The savart is named after the French physicist and doctor Félix Savart (1791–1841) who advocated the earlier similar interval of the French acoustician Joseph Sauveur (1653–1716). Sauveur proposed the méride, eptaméride (or heptaméride), and decaméride. In English these are meride, heptameride, and decameride respectively. The octave is divided into 43 merides, the meride is divided into seven heptamerides, and the heptameride is divided into ten decamerides. There are thus 301 heptamerides in an octave. The attraction of this scheme to Sauveur was that log10(2) is very close to .301, and thus the number of heptamerides in a given ratio is found to a high degree of accuracy from simply its log times 1000. This is equivalent to assuming 1000 heptamerides in a decade rather than 301 in an octave, the same as Savart's definition. The unit was given the name savart sometime in the 20th century. A disadvantage of this scheme is that there are not an exact number of heptamerides/savarts in an equal tempered semitone. For this reason Alexander Wood used a modified definition of the savart, with 300 savarts in an octave, and hence 25 savarts in a semitone.
A related unit is the jot, of which there are 30103 in an octave, or approximately 100,000 in a decade. The jot is defined in a similar way to the savart, but has a more accurate rounding of log10(2) because more digits are used. There are approximately 100 jots in a savart. The jot was first described by Augustus De Morgan (1806–1871), who called it an atom. The name jot was coined by John Curwen (1816–1880) at the suggestion of Hermann von Helmholtz.
Comparison
Other uses
The unit is used for acoustical engineering analysis, especially in underwater acoustics, where it is known as a millidecade.
See also
Decidecade
Musical tuning
Notes
Equal temperaments
Intervals (music)
Units of level
1000 (number) | Savart | [
"Physics",
"Mathematics"
] | 776 | [
"Units of measurement",
"Physical quantities",
"Musical symmetry",
"Units of level",
"Quantity",
"Logarithmic scales of measurement",
"Equal temperaments",
"Symmetry"
] |
5,646,487 | https://en.wikipedia.org/wiki/Object%20model | In computing, object model has two related but distinct meanings:
The properties of objects in general in a specific computer programming language, technology, notation or methodology that uses them. Examples are the object models of Java, the Component Object Model (COM), or Object-Modeling Technique (OMT). Such object models are usually defined using concepts such as class, generic function, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages.
A collection of objects or classes through which a program can examine and manipulate some specific parts of its world. In other words, the object-oriented interface to some service or system. Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope.
An object model consists of the following important features (a brief code sketch follows the list):
Object reference Objects can be accessed via object references. To invoke a method in an object, the object reference and method name are given, together with any arguments.
Interfaces An interface provides a definition of the signature of a set of methods without specifying their implementation. An object will provide a particular interface if its class contains code that implement the method of that interface. An interface also defines types that can be used to declare the type of variables or parameters and return values of methods.
Actions An action in object-oriented programming (OOP) is initiated by an object invoking a method in another object. An invocation can include additional information needed to carry out the method. The receiver executes the appropriate method and then returns control to the invoking object, sometimes supplying a result.
Exceptions Programs can encounter various errors and unexpected conditions of varying seriousness. During the execution of the method many different problems may be discovered. Exceptions provide a clean way to deal with error conditions without complicating the code. A block of code may be defined to throw an exception whenever particular unexpected conditions or errors arise. This means that control passes to another block of code that catches the exception.
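A minimal Python sketch of those four features; the class names and values are purely illustrative:

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):                       # an interface: a method signature without implementation
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):                    # a class that provides the Shape interface
    def __init__(self, radius: float):
        if radius <= 0:
            raise ValueError("radius must be positive")   # an exception for an error condition
        self.radius = radius

    def area(self) -> float:            # the implementing method
        return math.pi * self.radius ** 2

c = Circle(2.0)                         # 'c' is an object reference
print(c.area())                         # an action: invoking a method via the reference

try:
    Circle(-1.0)
except ValueError as err:               # control passes to the block that catches the exception
    print("caught:", err)
```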
See also
Object-oriented programming
Object-oriented analysis and design
Object database
Object Management Group
Domain-driven design
Eigenclass model
Literature
External links
Document Object Model (DOM) The official W3C definition of the DOM.
"The Java Object Model"
The Ruby Object Model: Data Structure in Detail
Object Membership: The core structure of object-oriented programming
Object Model Features Matrix A "representative sample of the design space of object models" (sense 1).
ASCOM Standards web site
Object-oriented programming | Object model | [
"Engineering"
] | 578 | [
"Software engineering",
"Software engineering stubs"
] |
5,647,048 | https://en.wikipedia.org/wiki/Karl%20K%C3%BCpfm%C3%BCller | Karl Küpfmüller (6 October 1897 – 26 December 1977) was a German electrical engineer, who was prolific in the areas of communications technology, measurement and control engineering, acoustics, communication theory, and theoretical electro-technology.
Biography
Küpfmüller was born in Nuremberg, where he studied at the Ohm-Polytechnikum. After returning from military service in World War I, he worked at the telegraph research division of the German Post in Berlin as a co-worker of Karl Willy Wagner, and, from 1921, he was lead engineer at the central laboratory of Siemens & Halske AG in the same city.
In 1928 he became full professor of general and theoretical electrical engineering at the Technische Hochschule in Danzig, and later held the same position in Berlin. Küpfmüller joined the National Socialist Motor Corps in 1933. In the following year he also joined the SA. In 1937 Küpfmüller joined the NSDAP and became a member of the SS, where he reached the rank of Obersturmbannführer.
In 1937 Küpfmüller was appointed director of communication technology research and development at the Siemens-Wernerwerk for telegraphy. From 1941 to 1945 he was director of the central R&D division at Siemens & Halske.
From 1952 until his retirement in 1963, he held the chair for general communications engineering at Technische Hochschule Darmstadt.
Later he was honorary professor at the Technische Hochschule Berlin. In 1968, he received the Werner von Siemens Ring for his contributions to the theory of telecommunications and other electro-technology.
He died at Darmstadt.
Studies in communication theory
About 1928, he did the same analysis that Harry Nyquist did, to show that not more than 2B independent pulses per second could be put through a channel of bandwidth B. He did this by quantifying the time-bandwidth product k of various communication signal types, and showing that k could never be less than 1/2. From his 1931 paper (rough translation from Swedish):
"The time law allows comparison of the capacity of each transfer method with various known methods. On the other hand it indicates the limits that the development of technology must stay within. One interesting question for example is where the lower limit for k lies. The answer is acquired by at least one power change being needed to achieve one signal. So the frequency range must be at least so wide that the settling time becomes less than the duration of a signal, and from this comes k=1/2. So we can never get below this value, no matter how technology develops."
Textbooks by Küpfmüller
K. Küpfmüller, Einführung in die theoretische Elektrotechnik [Introduction to the theory of electrical engineering]. Berlin: Julius Springer, 1932.
K. Küpfmüller (revised and extended by W. Mathis and A. Reibiger), Theoretische Elektrotechnik: Eine Einführung [Theory of electrical engineering: An introduction], 19th ed. New York: Springer-Verlag, 2013.
K. Küpfmüller "Die Systemtheorie der elektrischen Nachrichtenübertragung" S. Hirzel; 4., berichtigte Aufl edition (1974)
References
Further reading
Bissell, C.C. (translator, 2005) "On the Dynamics of Automatic Gain Controllers", K. Küpfmüller, Elektrische Nachrichtentechnik, Vol. 5, No. 11, 1928, pp. 459–467.
Bissell, C.C. (2006) Karl Küpfmüller, 1928: An early time-domain, closed-loop, stability criterion. Historic Perspective. IEEE Control Systems Magazine, 26 (3). 115-116, 126. ISSN 0272-1708
Küpfmüller biography at the University of Hannover (German)
1897 births
1977 deaths
Engineers from Nuremberg
Information theory
Sturmabteilung personnel
SS-Obersturmbannführer
German electrical engineers
Werner von Siemens Ring laureates
Communication theorists
Recipients of the Knights Cross of the War Merit Cross
Academic staff of Technische Universität Darmstadt
Academic staff of Technische Universität Berlin | Karl Küpfmüller | [
"Mathematics",
"Technology",
"Engineering"
] | 876 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
5,647,547 | https://en.wikipedia.org/wiki/Latimer%E2%80%93MacDuffee%20theorem | The Latimer–MacDuffee theorem is a theorem in abstract algebra, a branch of mathematics.
It is named after Claiborne Latimer and Cyrus Colton MacDuffee, who published it in 1933. Significant contributions to its theory were made later by Olga Taussky-Todd.
Let $f(x)$ be a monic, irreducible polynomial of degree $n$ with integer coefficients. The Latimer–MacDuffee theorem gives a one-to-one correspondence between $\mathbb{Z}$-similarity classes of $n \times n$ integer matrices with characteristic polynomial $f(x)$ and the ideal classes in the order
$\mathbb{Z}[x]/(f(x)),$
where ideals are considered equivalent if they are equal up to an overall (nonzero) rational scalar multiple. (Note that this order need not be the full ring of integers, so nonzero ideals need not be invertible.) Since an order in a number field has only finitely many ideal classes (even if it is not the maximal order, and we mean here ideal classes for all nonzero ideals, not just the invertible ones), it follows that there are only finitely many conjugacy classes of matrices over the integers with characteristic polynomial $f(x)$.
References
Theorems in abstract algebra | Latimer–MacDuffee theorem | [
"Mathematics"
] | 236 | [
"Theorems in algebra",
"Algebra stubs",
"Algebra",
"Theorems in abstract algebra"
] |
5,647,555 | https://en.wikipedia.org/wiki/Receptor%20tyrosine%20kinase | Receptor tyrosine kinases (RTKs) are the high-affinity cell surface receptors for many polypeptide growth factors, cytokines, and hormones. Of the 90 unique tyrosine kinase genes identified in the human genome, 58 encode receptor tyrosine kinase proteins.
Receptor tyrosine kinases have been shown not only to be key regulators of normal cellular processes but also to have a critical role in the development and progression of many types of cancer. Mutations in receptor tyrosine kinases lead to activation of a series of signalling cascades which have numerous effects on protein expression. The receptors are generally activated by dimerization and substrate presentation. Receptor tyrosine kinases are part of the larger family of protein tyrosine kinases, encompassing the receptor tyrosine kinase proteins which contain a transmembrane domain, as well as the non-receptor tyrosine kinases which do not possess transmembrane domains.
History
The first RTKs to be discovered were the EGF and NGF receptors in the 1960s, but the classification of receptor tyrosine kinases was not developed until the 1970s.
Classes
Approximately 20 different RTK classes have been identified.
RTK class I (EGF receptor family) (ErbB family)
RTK class II (Insulin receptor family)
RTK class III (PDGF receptor family)
RTK class IV (VEGF receptors family)
RTK class V (FGF receptor family)
RTK class VI (CCK receptor family)
RTK class VII (NGF receptor family)
RTK class VIII (HGF receptor family)
RTK class IX (Eph receptor family)
RTK class X (AXL receptor family)
RTK class XI (TIE receptor family)
RTK class XII (RYK receptor family)
RTK class XIII (DDR receptor family)
RTK class XIV (RET receptor family)
RTK class XV (ROS receptor family)
RTK class XVI (LTK receptor family)
RTK class XVII (ROR receptor family)
RTK class XVIII (MuSK receptor family)
RTK class XIX (LMR receptor)
RTK class XX (Undetermined)
Structure
Most RTKs are single subunit receptors but some exist as multimeric complexes, e.g., the insulin receptor that forms disulfide linked dimers in the presence of hormone (insulin); moreover, ligand binding to the extracellular domain induces formation of receptor dimers. Each monomer has a single hydrophobic transmembrane-spanning domain composed of 25 to 38 amino acids, an extracellular N terminal region, and an intracellular C terminal region. The extracellular N terminal region exhibits a variety of conserved elements including immunoglobulin (Ig)-like or epidermal growth factor (EGF)-like domains, fibronectin type III repeats, or cysteine-rich regions that are characteristic for each subfamily of RTKs; these domains contain primarily a ligand-binding site, which binds extracellular ligands, e.g., a particular growth factor or hormone. The intracellular C terminal region displays the highest level of conservation and comprises catalytic domains responsible for the kinase activity of these receptors, which catalyses receptor autophosphorylation and tyrosine phosphorylation of RTK substrates.
Kinase activity
A kinase is a type of enzyme that transfers phosphate groups (see below) from high-energy donor molecules, such as ATP (see below) to specific target molecules (substrates); the process is termed phosphorylation. The opposite, an enzyme that removes phosphate groups from targets, is known as a phosphatase. Kinase enzymes that specifically phosphorylate tyrosine amino acids are termed tyrosine kinases.
When a growth factor binds to the extracellular domain of a RTK, its dimerization is triggered with other adjacent RTKs. Dimerization leads to a rapid activation of the protein's cytoplasmic kinase domains, the first substrate for these domains being the receptor itself. The activated receptor as a result then becomes autophosphorylated on multiple specific intracellular tyrosine residues.
Signal transduction
Through diverse means, extracellular ligand binding will typically cause or stabilize receptor dimerization. This allows a tyrosine in the cytoplasmic portion of each receptor monomer to be trans-phosphorylated by its partner receptor, propagating a signal through the plasma membrane. The phosphorylation of specific tyrosine residues within the activated receptor creates binding sites for Src homology 2 (SH2) domain- and phosphotyrosine binding (PTB) domain-containing proteins.
Specific proteins containing these domains include Src and phospholipase Cγ. Phosphorylation and activation of these two proteins on receptor binding lead to the initiation of signal transduction pathways. Other proteins that interact with the activated receptor act as adaptor proteins and have no intrinsic enzymatic activity of their own. These adaptor proteins link RTK activation to downstream signal transduction pathways, such as the MAP kinase signalling cascade. An example of a vital signal transduction pathway involves the tyrosine kinase receptor, c-met, which is required for the survival and proliferation of migrating myoblasts during myogenesis. A lack of c-met disrupts secondary myogenesis and—as in LBX1—prevents the formation of limb musculature. This local action of FGFs (Fibroblast Growth Factors) with their RTK receptors is classified as paracrine signalling. As RTK receptors phosphorylate multiple tyrosine residues, they can activate multiple signal transduction pathways.
Families
Epidermal growth factor receptor family
The ErbB protein family or epidermal growth factor receptor (EGFR) family is a family of four structurally related receptor tyrosine kinases. Insufficient ErbB signaling in humans is associated with the development of neurodegenerative diseases, such as multiple sclerosis and Alzheimer's disease.
In mice, loss of signaling by any member of the ErbB family results in embryonic lethality with defects in organs including the lungs, skin, heart, and brain. Excessive ErbB signaling is associated with the development of a wide variety of types of solid tumor. ErbB-1 and ErbB-2 are found in many human cancers and their excessive signaling may be critical factors in the development and malignancy of these tumors.
Fibroblast growth factor receptor (FGFR) family
Fibroblast growth factors comprise the largest family of growth factor ligands at 23 members. The natural alternate splicing of four fibroblast growth factor receptor (FGFR) genes results in the production of over 48 different isoforms of FGFR.
These isoforms vary in their ligand binding properties and kinase domains; however, all share a common extracellular region composed of three immunoglobulin (Ig)-like domains (D1-D3), and thus belong to the immunoglobulin superfamily.
Interactions with FGFs occur via FGFR domains D2 and D3. Each receptor can be activated by several FGFs. In many cases, the FGFs themselves can also activate more than one receptor. This is not the case with FGF-7, however, which can activate only FGFR2b.
A gene for a fifth FGFR protein, FGFR5, has also been identified. In contrast to FGFRs 1-4, it lacks a cytoplasmic tyrosine kinase domain, and one isoform, FGFR5γ, only contains the extracellular domains D1 and D2.
Vascular endothelial growth factor receptor (VEGFR) family
Vascular endothelial growth factor (VEGF) is one of the main inducers of endothelial cell proliferation and permeability of blood vessels. Two RTKs bind to VEGF at the cell surface, VEGFR-1 (Flt-1) and VEGFR-2 (KDR/Flk-1).
The VEGF receptors have an extracellular portion consisting of seven Ig-like domains so, like FGFRs, belong to the immunoglobulin superfamily. They also possess a single transmembrane spanning region and an intracellular portion containing a split tyrosine-kinase domain. VEGF-A binds to VEGFR-1 (Flt-1) and VEGFR-2 (KDR/Flk-1). VEGFR-2 appears to mediate almost all of the known cellular responses to VEGF. The function of VEGFR-1 is less well defined, although it is thought to modulate VEGFR-2 signaling. Another function of VEGFR-1 may be to act as a dummy/decoy receptor, sequestering VEGF from VEGFR-2 binding (this appears to be particularly important during vasculogenesis in the embryo). A third receptor has been discovered (VEGFR-3); however, VEGF-A is not a ligand for this receptor. VEGFR-3 mediates lymphangiogenesis in response to VEGF-C and VEGF-D.
RET receptor family
The natural alternate splicing of the RET gene results in the production of 3 different isoforms of the protein RET. RET51, RET43, and RET9 contain 51, 43, and 9 amino acids in their C-terminal tail, respectively. The biological roles of isoforms RET51 and RET9 are the most well studied in vivo, as these are the most common isoforms in which RET occurs.
RET is the receptor for members of the glial cell line-derived neurotrophic factor (GDNF) family of extracellular signalling molecules or ligands (GFLs).
In order to activate RET, first GFLs must form a complex with a glycosylphosphatidylinositol (GPI)-anchored co-receptor. The co-receptors themselves are classified as members of the GDNF receptor-α (GFRα) protein family. Different members of the GFRα family (GFRα1-GFRα4) exhibit a specific binding activity for a specific GFLs.
Upon GFL-GFRα complex formation, the complex then brings together two molecules of RET, triggering trans-autophosphorylation of specific tyrosine residues within the tyrosine kinase domain of each RET molecule. Phosphorylation of these tyrosines then initiates intracellular signal transduction processes.
Eph receptor family
Ephrin receptors are the largest subfamily of RTKs.
Discoidin domain receptor (DDR) family
The DDRs are unique RTKs in that they bind to collagens rather than soluble growth factors.
Regulation
The receptor tyrosine kinase (RTK) pathway is carefully regulated by a variety of positive and negative feedback loops. Because RTKs coordinate a wide variety of cellular functions such as cell proliferation and differentiation, they must be regulated to prevent severe abnormalities in cellular functioning such as cancer and fibrosis.
Protein tyrosine phosphatases
Protein Tyrosine Phosphatase (PTPs) are a group of enzymes that possess a catalytic domain with phosphotyrosine-specific phosphohydrolase activity. PTPs are capable of modifying the activity of receptor tyrosine kinases in both a positive and negative manner. PTPs can dephosphorylate the activated phosphorylated tyrosine residues on the RTKs which virtually leads to termination of the signal. Studies involving PTP1B, a widely known PTP involved in the regulation of the cell cycle and cytokine receptor signaling, has shown to dephosphorylate the epidermal growth factor receptor and the insulin receptor. Some PTPs, on the other hand, are cell surface receptors that play a positive role in cell signaling proliferation. Cd45, a cell surface glycoprotein, plays a critical role in antigen-stimulated dephosphorylation of specific phosphotyrosines that inhibit the Src pathway.
Herstatin
Herstatin is an autoinhibitor of the ErbB family, which binds to RTKs and blocks receptor dimerization and tyrosine phosphorylation. CHO cells transfected with herstatin resulted in reduced receptor oligomerization, clonal growth and receptor tyrosine phosphorylation in response to EGF.
Receptor endocytosis
Activated RTKs can undergo endocytosis resulting in down regulation of the receptor and eventually the signaling cascade. The molecular mechanism involves the engulfing of the RTK by a clathrin-mediated endocytosis, leading to intracellular degradation.
Drug therapy
RTKs have become an attractive target for drug therapy due to their implication in a variety of cellular abnormalities such as cancer, degenerative diseases and cardiovascular diseases. The United States Food and Drug Administration (FDA) has approved several anti-cancer drugs caused by activated RTKs. Drugs have been developed to target the extracellular domain or the catalytic domain, thus inhibiting ligand binding, receptor oligomerization. Herceptin, a monoclonal antibody that is capable of binding to the extracellular domain of RTKs, has been used to treat HER2 overexpression in breast cancer.
Table adapted from "Cell signalling by receptor tyrosine kinases," by Lemmon and Schlessinger, 2010, Cell, 141, pp. 1117–1134.
See also
Tyrosine kinase
Insulin receptor
Enzyme-linked receptor
Tyrphostins
Bcr-Abl tyrosine kinase inhibitors
References
External links
Tyrosine kinase receptors
Single-pass transmembrane proteins
EC 2.7.10 | Receptor tyrosine kinase | [
"Chemistry"
] | 2,906 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
5,647,595 | https://en.wikipedia.org/wiki/Ephrin%20receptor | Eph receptors (Ephs, after erythropoietin-producing human hepatocellular receptors) are a group of receptors that are activated in response to binding with Eph receptor-interacting proteins (Ephrins). Ephs form the largest known subfamily of receptor tyrosine kinases (RTKs). Both Eph receptors and their corresponding ephrin ligands are membrane-bound proteins that require direct cell-cell interactions for Eph receptor activation. Eph/ephrin signaling has been implicated in the regulation of a host of processes critical to embryonic development including axon guidance, formation of tissue boundaries, cell migration, and segmentation. Additionally, Eph/ephrin signaling has been identified to play a critical role in the maintenance of several processes during adulthood including long-term potentiation, angiogenesis, and stem cell differentiation and cancer.
Subclasses
Ephs can be divided into two subclasses, EphAs and EphBs (encoded by the genetic loci designated EPHA and EPHB respectively), based on sequence similarity and on their binding affinity for either the glycosylphosphatidylinositol-linked ephrin-A ligands or the transmembrane-bound ephrin-B ligands. Of the 16 Eph receptors that have been identified in animals (listed below), humans are known to express nine EphAs (EphA1-8 and EphA10) and five EphBs (EphB1-4 and EphB6). In general, Ephs of a particular subclass bind preferentially to all ephrins of the corresponding subclass, but have little to no cross-binding to ephrins of the opposing subclass. It has recently been proposed that the intrasubclass specificity of Eph/ephrin binding could be partially attributed to the different binding mechanisms used by EphAs and EphBs. There are exceptions to the intrasubclass binding specificity observed in Ephs, however, as it has recently been shown that ephrin-B3 can bind to and activate EphA4 and that ephrin-A5 can bind to and activate EphB2. EphA/ephrin-A interactions typically occur with higher affinity than EphB/ephrin-B interactions, which can partially be attributed to the fact that ephrin-As bind via a "lock-and-key" mechanism that requires little conformational change of the EphAs, in contrast to EphBs, which utilize an "induced fit" mechanism that requires a greater amount of energy to alter the conformation of EphBs to bind to ephrin-Bs.
16 Ephs have been identified in animals and are listed below:
EPHA1, EPHA2, EPHA3, EPHA4, EPHA5, EPHA6, EPHA7, EPHA8, EPHA9, EPHA10
EPHB1, EPHB2, EPHB3, EPHB4, EPHB5, EPHB6
Activation
The extracellular domain of Eph receptors is composed of a highly conserved globular ephrin ligand-binding domain, a cysteine-rich region and two fibronectin type III domains. The cytoplasmic domain of Eph receptors is composed of a juxtamembrane region with two conserved tyrosine residues, a tyrosine kinase domain, a sterile alpha motif (SAM), and a PDZ-binding motif. Following binding of an ephrin ligand to the extracellular globular domain of an Eph receptor, tyrosine and serine residues in the juxtamembrane region of the Eph become phosphorylated allowing the intracellular tyrosine kinase to convert into its active form and subsequently activate or repress downstream signaling cascades. The structure of the trans-autophosphorylation of the juxtamembrane region of EPHA2 has been observed within a crystal of EPHA2.
Function
The ability of Ephs and ephrins to mediate a variety of cell-cell interactions places the Eph/ephrin system in an ideal position to regulate a variety of different biological processes during embryonic development.
Bi-directional signaling
Unlike most other RTKs, Ephs have a unique capacity to initiate an intercellular signal in both the receptor-bearing cell ("forward" signaling) and the opposing ephrin-bearing cell ("reverse" signaling) following cell-cell contact, a process known as bi-directional signaling. Although the functional consequences of Eph/ephrin bi-directional signaling have not been completely elucidated, it is clear that such a unique signaling process allows Ephs and ephrins to have opposing effects on growth cone survival and allows for the segregation of Eph-expressing cells from ephrin-expressing cells.
Segmentation
Segmentation is a basic process of embryogenesis occurring in most invertebrates and all vertebrates by which the body is initially divided into functional units. In the segmented regions of the embryo, cells begin to present biochemical and morphological boundaries at which cell behavior is drastically different – vital for future differentiation and function. In the hindbrain, segmentation is a precisely defined process. In the paraxial mesoderm, however, development is a dynamic and adaptive process that adjusts according to posterior body growth. Various Eph receptors and ephrins are expressed in these regions, and, through functional analysis, it has been determined that Eph signaling is crucial for the proper development and maintenance of these segment boundaries. Similar studies conducted in zebrafish have shown similar segmenting processes within the somites containing a striped expression pattern of Eph receptors and their ligands, which is vital to proper segmentation - the disruption of expression resulting in misplaced or even absent boundaries.
Axon guidance
As the nervous system develops, the patterning of neuronal connections is established by molecular guides that direct axons (axon guidance) along pathways by target and pathway derived signals. Eph/ephrin signaling regulates the migration of axons to their target destinations largely by decreasing the survival of axonal growth cones and repelling the migrating axon away from the site of Eph/ephrin activation. This mechanism of repelling migrating axons through decreased growth cone survival depends on relative levels of Eph and ephrin expression and allows gradients of Eph and ephrin expression in target cells to direct the migration of axon growth cones based on their own relative levels of Eph and ephrin expression. Typically, forward signaling by both EphA and EphB receptors mediates growth cone collapse while reverse signaling via ephrin-A and ephrin-B induces growth cone survival.
The ability of Eph/ephrin signaling to direct migrating axons along Eph/ephrin expression gradients is evidenced in the formation of the retinotopic map in the visual system, with graded expression levels of both Eph receptors and ephrin ligands leading to the development of a resolved neuronal map (for a more detailed description of Eph/ephrin signaling see "Formation of the Retinotopic Map" in ephrin). Further studies have shown a role for Ephs in topographic mapping in other regions of the central nervous system, such as in learning and memory via the formation of projections between the septum and hippocampus.
In addition to the formation of topographic maps, Eph/ephrin signaling has been implicated in the proper guidance of motor neuron axons in the spinal cord. Although several members of Ephs and ephrins contribute to motor neuron guidance, ephrin-A5 reverse signaling has been shown to play a critical role in the survival of motor neuron growth cones and to mediate growth cone migration by initiating repellence in EphA-expressing migrating axons.
Cell migration
Beyond axon guidance, Ephs have been implicated in the migration of neural crest cells during gastrulation. In the chick and rat embryo trunk, the migration of crest cells is partially mediated by EphB receptors. Similar mechanisms have been shown to control crest movement in the hindbrain within rhombomeres 4, 5, and 7, which distribute crest cells to branchial arches 2, 3, and 4 respectively. In C. elegans, knockout of the vab-1 gene, known to encode an Eph receptor, and of its ephrin ligand vab-2 affects two cell migratory processes.
Angiogenesis
Eph receptors are expressed at high levels during vasculogenesis, angiogenesis, and other early stages of circulatory system development, and this development is disturbed in their absence. Eph/ephrin signaling is thought to distinguish arterial from venous endothelium, to stimulate the production of capillary sprouts, and to drive the differentiation of mesenchyme into perivascular support cells.
The construction of blood vessels requires the coordination of endothelial and supportive mesenchymal cells through multiple phases to develop the intricate networks required for a fully functional circulatory system. The dynamic nature and expression patterns of the Ephs therefore make them ideal candidates for roles in angiogenesis. Mouse embryonic models show expression of EphA1 in mesoderm and pre-endocardial cells, later spreading into the dorsal aorta and then the primary head vein, intersomitic vessels, and limb bud vasculature, consistent with a role in angiogenesis. Other class A Eph receptors have also been detected in the lining of the aorta, branchial arch arteries, umbilical vein, and endocardium. Complementary expression of ephrin-B2 was detected in developing arterial endothelial cells and of EphB4 in venous endothelial cells. Expression of EphB2 and ephrin-B2 was also detected on supportive mesenchymal cells, suggesting a role in vessel wall development through the mediation of endothelial-mesenchymal interactions. Blood vessel formation during embryogenesis consists of vasculogenesis, the formation of a primary capillary network, followed by remodeling and restructuring into a finer tertiary network; studies using ephrin-B2-deficient mice showed a disruption of the embryonic vasculature resulting from a failure to restructure the primary network. Functional analysis of other mutant mice has led to the hypothesis that Ephs and ephrins contribute to vascular development by restricting arterial and venous endothelial mixing, stimulating the production of capillary sprouts, and promoting the differentiation of mesenchyme into perivascular support cells; this remains an ongoing area of research.
Limb development
While there is currently little evidence to support this role (and mounting evidence to refute it), some early studies suggested that Ephs play a part in the signaling of limb development. In chicks, EphA4 is expressed in the developing wing and leg buds, as well as in the feather and scale primordia. This expression is seen in the distal end of the limb buds, where cells are still undifferentiated and dividing, and appears to be under the regulation of retinoic acid, FGF2, FGF4, and BMP-2, which are known to regulate limb patterning. EphA4-deficient mice do not present abnormalities in limb morphogenesis (personal communication between Andrew Boyd and Nigel Holder), so it is possible that these expression patterns are related to neuronal guidance or vascularisation of the limb; further studies are required to confirm or rule out a role for Ephs in limb development.
Cancer
Given that Ephs are members of the RTK family and have such diverse roles, it is not surprising that they have been implicated in several aspects of cancer. While expressed extensively during development, Ephs are rarely detected in adult tissues. Elevated levels of expression and activity have been correlated with the growth of solid tumors, with Eph receptors of both classes A and B being overexpressed in a wide range of cancers, including melanoma, breast, prostate, pancreatic, gastric, esophageal, and colon cancers, as well as hematopoietic tumors. Increased expression has also been correlated with more malignant and metastatic tumors, consistent with the role of Ephs in governing cell movement.
It is possible that the increased expression of Ephs in cancer plays several roles. First, Ephs may act as survival factors or as promoters of abnormal growth; the angiogenic properties of the Eph system may also increase the vascularisation, and thus the growth capacity, of tumors. Second, elevated Eph levels may disrupt cell-cell adhesion via cadherins, which are known to alter the expression and localisation of Eph receptors and ephrins and thereby further disrupt cellular adhesion, a key feature of metastatic cancers. Third, Eph activity may alter cell-matrix interactions via integrins, by sequestering signaling molecules following Eph receptor activation, as well as by providing potential adherence via ephrin ligand binding following metastasis.
Discovery and history
The Eph receptors were initially identified in 1987 following a search for tyrosine kinases with possible roles in cancer, earning their name from the erythropoietin-producing hepatocellular carcinoma cell line from which their cDNA was obtained. These transmembranous receptors were initially classed as orphan receptors with no known ligands or functions, and it was some time before possible functions of the receptors were known.
When it was shown that almost all Eph receptors are expressed during various well-defined stages of development in assorted locations and concentrations, a role in cell positioning was proposed, initiating research that revealed the Eph/ephrin families as a principal cell guidance system during vertebrate and invertebrate development.
References
External links
Tyrosine kinase receptors | Ephrin receptor | [
"Chemistry"
] | 2,912 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
5,647,658 | https://en.wikipedia.org/wiki/Cognitive%20imitation | Cognitive imitation is a form of social learning, and a subtype of imitation. Cognitive imitation is contrasted with motor and vocal or oral imitation. As with all forms of imitation, cognitive imitation involves learning and copying specific rules or responses done by another. The principal difference between motor and cognitive imitation is the type of rule (and stimulus) that is learned and copied by the observer. So, whereas in the typical imitation learning experiment subjects must copy novel actions on objects or novel sequences of specific actions (novel motor imitation), in a novel cognitive imitation paradigm subjects have to copy novel rules, independently of specific actions or movement patterns.
The following example illustrates the difference between cognitive and motor-spatial imitation: Imagine someone looking over another person's shoulder and stealing their automated teller machine (ATM) password. As with all forms of imitation, the individual learns and successfully reproduces the observed sequence. The observer in our example, like most of us, presumably knows how to operate an ATM (namely, that you have to push a given number of buttons on the ATM screen in a specific sequence), so the specific motor response of touching the screen is not what the thief is learning. Instead, the thief could learn two types of abstract rules. On the one hand, the thief can learn a spatial rule: touch the item in the top right, followed by the item in the top left, then the item in the middle of the screen, and finally the one in the lower right. This would be an example of motor-spatial imitation because the thief's response is guided by an abstract motor-spatial rule. On the other hand, the thief could ignore the spatial patterning of the observed responses and instead focus on the particular items that were touched, generating an abstract numerical rule independent of where the items are in space: 3-1-5-9. This would constitute an example of cognitive imitation because the individual is copying an abstract serial rule without copying specific motor responses. In this example, the thief's responses match those he observed only because the numbers are in the same location. If the numbers were in a different location (that is, if the numbers on the ATM's keypad were scrambled with every attempt to enter a password), the thief would nonetheless reproduce the target password, because they learned a cognitive rule (i.e., an abstract, item-specific serial rule) rather than a spatial rule (i.e., an observable motor-spatial pattern).
In rhesus monkeys
The term "cognitive imitation" was first introduced by Subiaul and his colleagues (Subiaul, Cantlon, et al., 2004), defining it as "a type of observational learning in which a naïve student copies an expert's use of a rule". To isolate cognitive from motor imitation, Subiaul and colleagues trained two rhesus macaques to respond, in a prescribed order, to different sets of photographs that were displayed simultaneously on a touch-sensitive monitor. Because the position of the photographs varied randomly from trial to trial, sequences could not be learned by motor imitation. Both monkeys learned new sequences more rapidly after observing an expert execute those sequences than when they had to learn new sequences entirely by trial and error. A mircro-analysis of each monkeys' performance showed that each monkey learned the order of two of the four photographs faster than baseline levels. A second experiment ruled out social facilitation as an explanation for this result. A third experiment, however, demonstrated that monkeys did not learn when the computer highlighted each picture in the correct sequence in the absence of a monkey ("ghost control").
Dissociating cognitive and motor-spatial imitation
Subiaul and colleagues, using two computerized tasks that measure the learning of two abstract rules, cognitive (item-based) rules (e.g., apple-boy-cat) and motor-spatial rules (e.g., up-down-right), have shown that there are important dissociations between the imitation of these two types of rules. Specifically, results have shown that while 3-year-olds successfully imitate item-specific rules (i.e., cognitive imitation), these same 3-year-olds fail to imitate motor-spatial rules (i.e., motor-spatial imitation). This dissociation is not because spatial rules are inherently harder to learn than cognitive rules. Follow-up studies have shown that 3-year-olds easily learn new spatial rules by trial and error, correctly recalling such rules after a 30-second delay (Exp. 2). This result excludes the possibility that 3-year-olds' motor-spatial imitation problems are due to difficulty learning (i.e., encoding and recalling) novel spatial rules in general. In another study, 3-year-olds observed a model correctly touch the first item in the sequence (e.g., the top-right picture), but then skip the middle item (e.g., the top-left picture) and instead touch the last item in the sequence (e.g., the bottom-left picture), resulting in an error marked as unintentional by the model, who said, "Whoops! That's not right!". This is a goal emulation learning condition, as the child had to copy the model's intended goal (top-right, top-left, bottom-left) rather than the observed, incorrect response (top-right, bottom-left), similar to Meltzoff's "re-enactment" paradigm. When given an opportunity to respond, 3-year-olds generated the intended (i.e., correct) sequence (Exp. 3). 3-year-olds' success in the goal emulation condition excludes the possibility that their motor-spatial imitation problem is due to difficulty vicariously learning a novel spatial rule from a model (i.e., because of a lack of interest, failure to attend, problems inferring goals, etc.). Children's success in the goal emulation condition shows that social learning may be achieved by social reasoning (inferring goals) and causal inferences (error detection), independently of any domain-specific imitation learning mechanism.
To further explore this dissociation between cognitive and motor-spatial imitation, Subiaul and colleagues conducted a large-scale cross-sectional, within-subject study with preschoolers (2–6 years) using the same two tasks: cognitive (item-specific) and motor-spatial (spatial-specific). Results showed that children's cognitive imitation performance did not predict their motor-spatial imitation learning, and vice versa. Importantly, while age predicted improved cognitive and motor-spatial imitation performance, children's ability to individually learn each type of rule via trial and error did not predict their ability to imitate those same rules.
Subiaul and colleagues have argued that these results are consistent with the hypothesis that imitation learning is domain-specific, not domain-general. A critical caveat may be that the imitation of NOVEL rules and responses is domain-specific while the imitation of FAMILIAR responses is likely to be mediated by domain-general, non-specialized mechanisms, as Heyes and others have argued.
See also
Imitation
Mimic octopus
Modelling (psychology)
Monkey see, monkey do
References
Notes
Subiaul, F., Cantlon, J., Holloway, R. L., & Terrace, H. S. (2004). Cognitive imitation in rhesus macaques. Science, 305(5682), 407–410.
Subiaul, F., et al. (2015). "Becoming a high-fidelity - super - imitator: what are the contributions of social and individual learning?" Dev Sci.
Subiaul, F., et al. (2012). "Multiple imitation mechanisms in children." Dev Psychol 48(4): 1165-1179.
Social learning theory | Cognitive imitation | [
"Biology"
] | 1,646 | [
"Behavior",
"Social learning theory"
] |
5,647,670 | https://en.wikipedia.org/wiki/Empirical%20measure | In probability theory, an empirical measure is a random measure arising from a particular realization of a (usually finite) sequence of random variables. The precise definition is found below. Empirical measures are relevant to mathematical statistics.
The motivation for studying empirical measures is that it is often impossible to know the true underlying probability measure P. We collect observations X1, X2, …, Xn and compute relative frequencies. We can estimate P, or a related distribution function F, by means of the empirical measure or empirical distribution function, respectively. These are uniformly good estimates under certain conditions. Theorems in the area of empirical processes provide rates of this convergence.
Definition
Let X1, X2, …, Xn, … be a sequence of independent identically distributed random variables with values in the state space S with probability distribution P.
Definition
The empirical measure Pn is defined for measurable subsets A of S and given by
Pn(A) = (1/n) ∑ δXi(A) = (1/n) ∑ IA(Xi), the sums running over i = 1, …, n,
where IA is the indicator function of the set A and δX is the Dirac measure concentrated at the point X.
Properties
For a fixed measurable set A, nPn(A) is a binomial random variable with mean nP(A) and variance nP(A)(1 − P(A)).
In particular, Pn(A) is an unbiased estimator of P(A).
For a fixed partition A1, …, Ak of S, the random variables nPn(A1), …, nPn(Ak) form a multinomial distribution with event probabilities P(A1), …, P(Ak).
The covariance matrix of this multinomial distribution has entries Cov(nPn(Ai), nPn(Aj)) = nP(Ai)(δij − P(Aj)).
Definition
Pn(A), A ∈ C, is the empirical measure indexed by C, a collection of measurable subsets of S.
To generalize this notion further, observe that the empirical measure maps measurable functions f to their empirical mean,
Pn f = ∫ f dPn = (1/n) ∑ f(Xi), the sum running over i = 1, …, n.
In particular, the empirical measure of A is simply the empirical mean of the indicator function, Pn(A) = Pn IA.
For a fixed measurable function f, Pn f is a random variable with mean Pf = ∫ f dP and variance (1/n)(P(f²) − (Pf)²).
By the strong law of large numbers, Pn(A) converges to P(A) almost surely for fixed A. Similarly, Pn f converges to Pf almost surely for a fixed measurable function f. The problem of uniform convergence of Pn to P was open until Vapnik and Chervonenkis solved it in 1968.
If the class C (or a class F of measurable functions) is Glivenko–Cantelli with respect to P, then Pn converges to P uniformly over C (or F). In other words, with probability 1 we have
sup over A in C of |Pn(A) − P(A)| → 0, and correspondingly sup over f in F of |Pn f − Pf| → 0.
Empirical distribution function
The empirical distribution function provides an example of empirical measures. For real-valued iid random variables X1, …, Xn it is given by
Fn(x) = Pn((−∞, x]) = (1/n) ∑ I(−∞, x](Xi), the sum running over i = 1, …, n.
In this case, empirical measures are indexed by the class C = {(−∞, x] : x ∈ R}. It has been shown that C is a uniform Glivenko–Cantelli class; in particular,
sup over x of |Fn(x) − F(x)| → 0
with probability 1.
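As an informal illustration (not part of the original text), the following Python sketch, assuming NumPy is available, computes the empirical distribution function of samples drawn from a standard normal distribution and estimates the supremum distance to the true distribution function over a finite grid; the sample sizes, the grid, and the choice of the standard normal as the underlying P are arbitrary choices made only for this example.

import numpy as np
from math import erf, sqrt

def ecdf(sample):
    # Empirical distribution function of a one-dimensional sample:
    # Fn(x) = Pn((-inf, x]) = (1/n) * #{i : Xi <= x}.
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    return lambda x: np.searchsorted(xs, x, side="right") / n

def normal_cdf(x):
    # Distribution function of the standard normal, playing the role of the true P.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(0)
grid = np.linspace(-4.0, 4.0, 801)

for n in (100, 10_000, 1_000_000):
    F_n = ecdf(rng.standard_normal(n))
    sup_dist = max(abs(F_n(x) - normal_cdf(x)) for x in grid)
    print(f"n = {n:>9,}: sup_x |Fn(x) - F(x)| ~ {sup_dist:.4f}")

As the sample size grows, the printed distance shrinks toward zero, in line with the Glivenko–Cantelli theorem (the maximum over the grid only approximates the true supremum).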
See also
Empirical risk minimization
Poisson random measure
References
Further reading
Measures (measure theory)
Empirical process | Empirical measure | [
"Physics",
"Mathematics"
] | 542 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
5,647,807 | https://en.wikipedia.org/wiki/Conditional%20operator | The conditional operator is supported in many programming languages. This term usually refers to ?: as in C, C++, C#, and JavaScript. However, in Java, this term can also refer to && and ||.
&& and ||
In some programming languages, e.g. Java, the term conditional operator refers to short circuit boolean operators && and ||. The second expression is evaluated only when the first expression is not sufficient to determine the value of the whole expression.
Difference from bitwise operator
& and | are bitwise operators that occur in many programming languages. The major difference is that bitwise operations operate on the individual bits of a binary numeral, whereas conditional operators operate on logical operations. Additionally, expressions before and after a bitwise operator are always evaluated.
With if (expression1 || expression2 || expression3), if expression1 is true, expressions 2 and 3 are not checked.
With if (expression1 | expression2 | expression3), expressions 2 and 3 are checked even if expression1 is true.
Short-circuit operators can reduce run times by avoiding unnecessary calculations. They can also avoid null pointer exceptions when expression1 first checks whether an object is valid.
Usage in Java
class ConditionalDemo1 {
    public static void main(String[] args) {
        int value1 = 1;
        int value2 = 2;
        if ((value1 == 1) && (value2 == 2))
            System.out.println("value1 is 1 AND value2 is 2");
        if ((value1 == 1) || (value2 == 1))
            System.out.println("value1 is 1 OR value2 is 1");
    }
}
"?:"
In most programming languages, ?: is called the conditional operator. It is a type of ternary operator. However, ternary operator in most situations refers specifically to ?: because it is the only operator that takes three operands.
Regular usage of "?:"
?: is used in conditional expressions. Programmers can rewrite an if-then-else expression in a more concise way by using the conditional operator.
Syntax
condition ? expression1 : expression2
condition: an expression that is evaluated as a boolean value.
expression1, expression2: expressions with values of any type.
If the condition evaluates to true, expression1 will be evaluated; if the condition evaluates to false, expression2 will be evaluated.
When the result is assigned to a variable, as in result = condition ? expression1 : expression2, it should be read as: "If condition is true, assign the value of expression1 to result. Otherwise, assign the value of expression2 to result."
Association property
The conditional operator is right-associative, meaning that operations are grouped from right to left. For example, an expression of the form a ? b : c ? d : e is evaluated as a ? b : (c ? d : e).
Examples by languages
Java
class ConditionalDemo2 {
    public static void main(String[] args) {
        int value1 = 1;
        int value2 = 2;
        int result;
        boolean someCondition = true;
        result = someCondition ? value1 : value2;

        System.out.println(result);
    }
}
In this example, because someCondition is true, this program prints "1" to the screen. Use the ?: operator instead of an if-then-else statement if it makes your code more readable; for example, when the expressions are compact and without side effects (such as assignments).
C++
#include <iostream>
int main() {
int x = 1;
int y = 2;
std::cout << ( x > y ? x : y ) << " is the greater of the two." << std::endl;
}There are several rules that apply to the second and third operands in C++:
If both operands are of the same type, the result is of that type
If both operands are of arithmetic or enumeration types, the usual arithmetic conversions (covered in Standard Conversions) are performed to convert them to a common type
If both operands are of pointer types or if one is a pointer type and the other is a constant expression that evaluates to 0, pointer conversions are performed to convert them to a common type
If both operands are of reference types, reference conversions are performed to convert them to a common type
If both operands are of type void, the common type is type void
If both operands are of the same user-defined type, the common type is that type.
C#
// condition ? first_expression : second_expression;
static double sinc(double x)
{
    return x != 0.0 ? Math.Sin(x) / x : 1.0;
}
There are several rules that apply to the second and third operands x and y in C#:
If x has type X and y has type Y:
If an implicit conversion exists from X to Y but not from Y to X, Y is the type of the conditional expression.
If an implicit conversion exists from Y to X but not from X to Y, X is the type of the conditional expression.
Otherwise, no expression type can be determined, and a compile-time error occurs.
If only one of x and y has a type, and both x and y are implicitly convertible to that type, that type is the type of the conditional expression.
Otherwise, no expression type can be determined, and a compile-time error occurs.
JavaScript
var age = 26;
var beverage = (age >= 21) ? "Beer" : "Juice";
console.log(beverage); // "Beer"
The conditional operator of JavaScript is compatible with the following browsers:
Chrome, Edge, Firefox (1), Internet Explorer, Opera, Safari, Android webview, Chrome for Android, Edge Mobile, Firefox for Android (4), Opera for Android, Safari on iOS, Samsung Internet, and Node.js.
Special usage in conditional chain
The ternary operator is right-associative, which means it can be "chained" in the following way, similar to an if ... else if ... else if ... else chain.
Examples by languages
JavaScript
function example(…) {
    return condition1 ? value1
         : condition2 ? value2
         : condition3 ? value3
         : value4;
}

// Equivalent to:

function example(…) {
    if (condition1) { return value1; }
    else if (condition2) { return value2; }
    else if (condition3) { return value3; }
    else { return value4; }
}
C/C++
const double a =
    expression1 ? a1
    : expression2 ? a2
    : expression3 ? a3
    : /*otherwise*/ a4;

// Equivalent to:

double a;
if (expression1)
    a = a1;
else if (expression2)
    a = a2;
else if (expression3)
    a = a3;
else /*otherwise*/
    a = a4;
Special usage in assignment expression
The conditional operator can yield an lvalue in C/C++, which can be assigned another value, but the vast majority of programmers consider this extremely poor style, if only because of the technique's obscurity.
C/C++
((foo) ? bar : baz) = frink;

// equivalent to:
if (foo)
    bar = frink;
else
    baz = frink;
See also
?:, a conditional operator in computer programming
Ternary operation
Bitwise operators
short circuit boolean operators
Operator (programming)
References
Computer programming
Operators (programming)
Articles with example Java code | Conditional operator | [
"Technology",
"Engineering"
] | 1,606 | [
"Software engineering",
"Computer programming",
"Computers"
] |
5,648,111 | https://en.wikipedia.org/wiki/Polyglycerol%20polyricinoleate | Polyglycerol polyricinoleate (PGPR), E476, is an emulsifier made from glycerol and fatty acids (usually from castor bean, but also from soybean oil). In chocolate, compound chocolate and similar coatings, PGPR is mainly used with another substance like lecithin to reduce viscosity. It is used at low levels (below 0.5%), and works by decreasing the friction between the solid particles (e.g. cacao, sugar, milk) in molten chocolate, reducing the yield stress so that it flows more easily, approaching the behaviour of a Newtonian fluid. It can also be used as an emulsifier in spreads and in salad dressings, or to improve the texture of baked goods. It is made up of a short chain of glycerol molecules connected by ether bonds, with ricinoleic acid side chains connected by ester bonds.
PGPR is a yellowish, viscous liquid, and is strongly lipophilic: it is soluble in fats and oils and insoluble in water and ethanol.
Manufacture
Glycerol is heated to above 200 °C in a reactor in the presence of an alkaline catalyst to create polyglycerol. Castor oil fatty acids are separately heated to above 200 °C, to create interesterified ricinoleic fatty acids. The polyglycerol and the interesterified ricinoleic fatty acids are then mixed to create PGPR.
Use in chocolate
As PGPR improves the flow characteristics of chocolate and compound chocolate, especially near the melting point, it can improve the efficiency of chocolate coating processes: chocolate coatings with PGPR flow better around shapes of enrobed and dipped products, and it also improves the performance of equipment used to produce solid molded products: the chocolate flows better into the mold, and surrounds inclusions and releases trapped air more easily. PGPR can also be used to reduce the quantity of cocoa butter needed in chocolate formulations: the solid particles in chocolate are suspended in the cocoa butter, and by reducing the viscosity of the chocolate, less cocoa butter is required, which saves costs, because cocoa butter is an expensive ingredient, and also leads to a lower-fat product.
Safety
The FDA has deemed PGPR to be generally recognized as safe for human consumption, and the Joint FAO/WHO Expert Committee on Food Additives (JECFA) has also deemed it safe. Both of these organisations set the acceptable daily intake at 7.5 milligrams per kilogram of body weight. In 2017, a panel from the European Food Safety Authority recommended an increased acceptable daily intake of 25 milligrams per kilogram of body weight based on a new chronic toxicity and carcinogenicity study. In Europe, PGPR is allowed in chocolate up to a level of 0.5%.
In a 1998 review funded by Unilever of safety evaluations from the late 1950s and early 1960s, "PGPR was found to be 98% digested by rats and utilized as a source of energy superior to starch and nearly equivalent to peanut oil." Additionally, no evidence was found of interference with normal fat metabolism, nor with growth, reproduction, and maintenance of tissue. Overall, it did not "constitute a human health hazard".
A study published in the European Food Safety Authority in 2017 re-evaluated the safety of the additive and recommended to revise the acceptable daily intake and increase it to 25 milligrams per kilogram of body weight.
References
Food additives
Non-ionic surfactants
Organic polymers
Fatty acid esters
E-number additives | Polyglycerol polyricinoleate | [
"Chemistry"
] | 748 | [
"Organic compounds",
"Organic polymers"
] |
5,648,227 | https://en.wikipedia.org/wiki/Home%20tour%20event | A home tour event or housing show, sometimes branded as Parade of Homes, Street of Dreams, Tour of Homes or Homearama, is a building industry showcase of homes, typically new builds, held in several regions throughout the United States. The events date to the late 1940s.
Background
In the years following World War II, the United States experienced housing shortages. Builders entered the market to create mass housing developments such as Levittown; by 1952, such builders had produced six million new housing units. The market for starter homes was saturated, and builders needed to create a new market for move-up homes that had additional space and included more upscale features. According to House & Home magazine, the building boom meant builders had "in a very literal sense...built themselves out of their easy market" and said "from now on they must sell their new houses to people already well-housed -- at least by yesterday's standards". According to the Washington Post in 1953, home builders were being confronted for the first time since before the war with a buyer's market.
Aspirational standards of living increased as Americans were exposed to bigger homes via movies, television, and advertising; by the mid-1950s 18 million families were financially able to consider moving from starter homes to move-up homes. In general the period was one of increased consumer spending.
History
The date of the first such event is disputed. According to the National Association of Home Builders (NAHB), the Salt Lake Home Builders Association held the first Parade of Homes in the United States in 1946. According to Samuel Dodd, writing in the Journal of Design History, the first housing show was organized in 1948 by the NAHB under the brand Parade of Homes. The nation's first home-tour organization was established in Minnesota in 1948 by the Builders Association of the Twin Cities (now known as Housing First Minnesota).
Early events featured crowd-drawing publicity stunts such as beauty contests and other sexualized components; a Hartford, Connecticut show featured a "girl in a bathtub" promotion -- a female employee sitting in a bathtub answering questions. Other publicity stunts included a 1956 Kansas City show that featured a team assembling and furnishing a prefabricated home in eight hours.
According to Dodd, by the mid-1950s such shows had become "the highlight of the annual homebuilding economy".
Events
Housing shows use a model home merchandising strategy to turn show attendees into buyers by staging the houses to allow attendees to picture themselves in the space. Beatrice West was a prominent designer and stager in the 1950s.
Some cold war era events featured homes with a bomb shelter.
In some locales, the events are presented by the local Home Builders Association or Building Industry Association. The events showcase homes, both new construction and remodeled homes. Homes generally include single-family homes, condominiums, duplexes, and/or townhomes. Sometimes entire new developments or streets within existing developments are developed for such events. Sometimes event homes are located in multiple neighborhoods in a city.
The Parade of Homes presented by Housing First Minnesota is the largest housing show in the nation. The event runs twice a year, once in the spring and once in the fall, with participation peaking at 1,259 home entries in a single event in 2006. The first United States Trademark for "Parade of Homes" was registered by the Home Builders Association of Fort Wayne, Inc. and was transferred to Housing First Minnesota in 2010.
The Home Builders Association event in the Portland metropolitan area of Oregon features high-end homes designed to showcase new designs and amenities. The event started in 1976; a previous incarnation was known as the Parade of Homes and later as Street of Dreams.
Some events allow tours for free; others require an admission ticket to view the homes. Some events offer awards in various categories. Some events feature remodeled homes.
References
Housing in the United States
Promotional events | Home tour event | [
"Engineering"
] | 794 | [
"Architecture stubs",
"Architecture"
] |
5,648,302 | https://en.wikipedia.org/wiki/Expansion%20joint | A expansion joint, or movement joint, is an assembly designed to hold parts together while safely absorbing temperature-induced expansion and contraction of building materials. They are commonly found between sections of buildings, bridges, sidewalks, railway tracks, piping systems, ships, and other structures.
Building faces, concrete slabs, and pipelines expand and contract due to warming and cooling from seasonal variation, or due to other heat sources. Before expansion joint gaps were built into these structures, they would crack under the induced stress.
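As a rough, illustrative calculation (not taken from this article), the following Python sketch estimates the unrestrained length change of a concrete section using the linear thermal expansion relation delta_L = alpha * L * delta_T; the expansion coefficient is a typical textbook value for concrete, and the span and temperature swing are assumptions chosen only for illustration.

# Unrestrained linear thermal expansion: delta_L = alpha * L * delta_T.
# alpha is an approximate textbook value for concrete; the span and the
# temperature swing are assumptions chosen only for this example.
ALPHA_CONCRETE = 10e-6   # per degree Celsius (approximate)
span_m = 30.0            # length of the section in metres (assumed)
delta_t = 40.0           # seasonal temperature swing in degrees Celsius (assumed)

delta_l_mm = ALPHA_CONCRETE * span_m * delta_t * 1000.0
print(f"A {span_m:.0f} m section changes length by about {delta_l_mm:.0f} mm "
      f"over a {delta_t:.0f} degree swing")

Under these assumptions, even a modest temperature swing produces a length change of roughly a centimetre over a 30 m span, which a joint must absorb if cracking is to be avoided.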
Bridge expansion joints
Bridge expansion joints are designed to allow for continuous traffic between structures while accommodating movement, shrinkage, and temperature variations on reinforced and prestressed concrete, composite, and steel structures. They stop the bridge from bending out of place in extreme conditions, and also allow enough vertical movement to permit bearing replacement without the need to dismantle the bridge expansion joint. There are various types, covering a wide range of movements, including joints for small movement (EMSEAL BEJS, XJS, JEP, WR, WOSd, and Granor AC-AR), medium movement (ETIC EJ, Wd), and large movement (WP, ETIC EJF/Granor SFEJ).
Modular expansion joints are used when the movements of a bridge exceed the capacity of a single gap joint or a finger-type joint. Modular multiple-gap expansion joints can accommodate movements in all directions and rotations about every axis. They can be used for longitudinal movements of as little as 160 mm, or for very large movements of over 3000 mm. The total movement of the bridge deck is divided among a number of individual gaps which are created by horizontal surface beams. The individual gaps are sealed by watertight elastomeric profiles, and surface beam movements are regulated by an elastic control system. The drainage of the joint is via the drainage system of the bridge deck. Certain joints feature so-called "sinus plates" on their surface, which reduce noise from over-passing traffic by up to 80%.
Masonry control joints are also sometimes used in bridge slabs.
Masonry
Clay bricks expand as they absorb heat and moisture. This places compression stress on the bricks and mortar, encouraging bulging or flaking. A joint replacing mortar with elastomeric sealant will absorb the compressive forces without damage. Concrete decking (most typically in sidewalks) can suffer similar horizontal issues, which is usually relieved by adding a wooden spacer between the slabs. The wooden expansion joint compresses as the concrete expands. Dry, rot-resistant cedar is typically used, with a row of nails protruding out that will embed into the concrete and hold the spacer in place.
Comparison to control joints
Control joints, or contraction joints, are sometimes confused with expansion joints, but have a different purpose and function. Concrete and asphalt have relatively weak tensile strength, and typically form random cracks as they age, shrink, and are exposed to environmental stresses (including stresses of thermal expansion and contraction). Control joints attempt to attenuate cracking by designating lines for stress relief. They are cut into pavement at regular intervals. Cracks tend to form along the cuts, rather than in random fashion elsewhere. This is primarily an aesthetic issue; the appearance of even, regular cracking, which may be hidden in the joint’s crevice, is often preferred over random cracking.
Thus, expansion joints reduce cracks, including in the overall structure, while control joints manage cracks, primarily along the visual surface.
Roadway control joints may be sealed with hot tar, cold sealant (such as silicone), or compression sealant (such as rubber or polymers based crossed linked foams). Mortar with a breakaway bond may be used to fill some control joints.
Control joints must have adequate depth and not exceed maximum spacing for them to be effective. Typical specifications for a four-inch-thick slab are:
25% depth of material
spacing at 24× to 36× of slab depth (some specification call for a maximum of 30×)
special care for inside corners
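For example, under these guidelines a four-inch (roughly 100 mm) slab would call for cuts about one inch (25 mm) deep, spaced roughly 8 to 12 feet (2.4 to 3.7 m) apart.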
Tile and stone flooring movement joints
Movement joints are designed to absorb the movement of the subfloor and the tiles themselves due to thermal expansion and contraction, moisture variations, and structural shifts. These joints are essentially gaps, typically filled with a flexible material like silicone or rubber, that separate tiles and allow for movement without causing the tiles to crack, buckle, or become disjointed.
Railway expansion joints
If a railway track runs over a bridge which has expansion joints that move more than a few millimeters, the track must be able to compensate this longer expansion or contraction. On the other hand, the track must always provide a continuous surface for the wheels traveling over it. These conflicting requirements are served by special expansion joints, where two rails glide along with each other at a very acute angle during expansion or contraction. They are typically seen near one or both ends of large steel bridges. Such an expansion joint looks somewhat like the tongue of a railroad switch, but with a different purpose and operation.
Ducted air systems
Expansion joints are required in large ducted air systems to allow fixed pieces of piping to be largely free of stress as thermal expansion occurs. Bends in elbows also can accommodate this. Expansion joints also isolate pieces of equipment such as fans from the rigid ductwork, thereby reducing vibration to the ductwork as well as allowing the fan to “grow” as it comes up to the operating air system temperature without placing stress on the fan or the fixed portions of ductwork.
An expansion joint is designed to allow deflection in the axial (compressive), lateral (shear), or angular (bending) directions. Expansion joints can be non-metallic or metallic (often called bellows type). A non-metallic joint can be a single ply of rubberized material or a composite made of multiple layers of heat- and erosion-resistant flexible material. Typical layers are: an outer cover that acts as a gas seal, a corrosion-resistant material such as Teflon, a layer of fiberglass that acts as an insulator and adds durability, several layers of insulation to ensure that the heat transfer from the flue gas is reduced to the required temperature, and an inside layer.
A bellows is made up of a series of one or more convolutions of metal to allow the axial, lateral, or angular deflection.
Pipe expansion joints
Pipe expansion joints are necessary in systems that convey high temperature substances such as steam or exhaust gases, or to absorb movement and vibration. A typical joint is a bellows of metal (most commonly stainless steel), plastic (such as PTFE), fabric (such as glass fibre) or an elastomer such as rubber.
A bellows is made up of a series of convolutions, with the shape of the convolution designed to withstand the internal pressures of the pipe, but flexible enough to accept axial, lateral, and angular deflections. Expansion joints are also designed for other criteria, such as noise absorption, anti-vibration, earthquake movement, and building settlement. Metal expansion joints have to be designed according to rules laid out by EJMA, for fabric expansion joints there are guidelines and a state-of-the-art description by the Quality Association for Fabric Expansion Joints. Pipe expansion joints are also known as "compensators", as they compensate for the thermal movement.
Pressure balanced expansion joints
Expansion joints are often included in industrial piping systems to accommodate movement due to thermal and mechanical changes in the system. When the process requires large changes in temperature, metal components change size. Expansion joints with metal bellows are designed to accommodate certain movements while minimizing the transfer of forces to sensitive components in the system.
Pressure created by pumps or gravity is used to move fluids through the piping system. Fluids under pressure occupy the volume of their container. The unique concept of pressure balanced expansion joints is that they are designed to maintain a constant volume, with balancing bellows compensating for volume changes in the line bellows, which are moved by the pipe. An early name for these devices was "pressure-volumetric compensator".
Manufacturing of rubber expansion joints
Wrapping fabric reinforced rubber sheets
Rubber expansion joints are mainly manufactured by manual wrapping of rubber sheets and fabric reinforced rubber sheets around a bellows-shaped product mandrel. Besides rubber and fabric, reinforced rubber and/or steel wires or metal rings are added for additional reinforcement. After the entire product is built up on the mandrel, it is covered with a winding of (nylon) peel ply to pressurize all layers together. Because of the labor-intensive production process, a large part of the production has moved to eastern Europe and Asian countries.
Molded rubber expansion joints
Some types of rubber expansion joints are made with a molding process. Typical joints that are molded are medium-sized expansion joints with bead rings, which are produced in large quantities. These rubber expansion joints are manufactured on a cylindrical mandrel, which is wrapped with bias cut fabric ply. At the end the bead rings are positioned and the end sections are folded inwards over the bead rings. This part is finally placed in a mold and molded into shape and vulcanized. This is a highly automated solution for large quantities of the same type of joint.
Automated winding of rubber expansion joints
New technology has been developed to wind rubber and reinforcement layers on the (cylindrical or bellows-shaped) mandrel automatically using industrial robots instead of manual wrapping. This is fast and accurate and provides repeatable high quality. Another aspect of using industrial robots for the production of rubber expansion joints is the possibility to apply an individual reinforcement layer instead of using pre-woven fabric. The fabric reinforcement is pre-woven and cut at the preferred bias angle. With individual reinforcement it is possible to add more or less fiber material at different sections of the product by changing the fiber angles over the length of the product.
Expansion joint accessories
Liners
Internal liners can be used to either protect the metallic bellows from erosion or reduce turbulence across the bellows. They must be used when purge connectors are included in the design. In order to provide enough clearance in the liner design, appropriate lateral and angular movements must be specified by the designer. When designing an expansion joint with combination ends, flow direction must be specified as well.
Covers
External covers or shrouds should be used to protect the internal bellows from being damaged. They also serve a purpose as insulation of the bellows. Covers can either be designed as removable or permanent accessories.
Particulate barriers/purge connectors
In systems that have a media with significant particulate content (i.e. flash or catalyst), a barrier of ceramic fiber can be utilized to prevent corrosion and restricted bellows flexibility resulting from the accumulation of the particulate. Purge connectors may also be utilized to perform this same function. Internal liners must also be included in the design if the expansion joint includes purge connectors or particulate barriers.
Limit rods
Limit rods may be used in an expansion joint design to limit the axial compression or expansion. They allow the expansion joint to move over a range according to where the nut stops are placed along the rods. Limit rods are used to prevent bellows over-extension while restraining the full pressure thrust of the system.
Failure modes
Expansion joint failure can occur for various reasons, but experience shows that failures fall into several distinct categories. This list includes, but is not limited to: shipping and handling damage, improper installation/insufficient protection, during/after installation, improper anchoring, guiding, and supporting of the system, anchor failure in service, corrosion, system over-pressure, excessive bellows deflection, torsion, bellows erosion, and particulate matter in bellows convolutions restricting proper movement.
There are various actions that can be taken to prevent and minimize expansion joint failure. During installation, prevent any damage to the bellows by carefully following the instructions furnished by the manufacturer. After installation, carefully inspect the entire piping system to see if any damage occurred during installation, if the expansion joint is in the proper location, and if the expansion joint flow direction and positioning is correct. Also, periodically inspect the expansion joint throughout the operating life of the system in order to check for external corrosion, loosening of threaded fasteners and deterioration of anchors, guides, and other hardware.
Other expansion joint types
Other types of expansion joints can include: fabric expansion joint, metal expansion joint (Pressure balanced expansion joints are a type of Metal expansion joints), toroidal expansion joint, gimbal expansion joint, universal expansion joint, in-line expansion joint, refractory lined expansion joint, hinged expansion joint, reinforced expansion joint and more.
Copper expansion joints are excellent materials designed to accommodate the movement of building components due to temperature, loads, and settlement. Copper is easy to form and lasts a long time. Details regarding roof conditions, roof edges, and floors are available.
See also
Breather switch
Copper expansion joints for buildings
Expansion Joint Manufacturers Association
Metal expansion joint
Reinforced rubber
Slide plate
Toroidal expansion joint
References
External links
Quality Association for Fabric Expansion Joints
Structural engineering
Road hazards
Piping
Heating, ventilation, and air conditioning
Mechanical engineering
| Expansion joint | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,690 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Building engineering",
"Chemical engineering",
"Road hazards",
"Construction",
"Civil engineering",
"Mechanical engineering",
"Piping"
] |
13,574,643 | https://en.wikipedia.org/wiki/Operating%20System%20Projects | OSP, an Environment for Operating System Projects, is a teaching operating system designed to provide an environment for an introductory course in operating systems. By selectively omitting specific modules of the operating system and having the students re-implement the missing functionality, an instructor can generate projects that require students to understand fundamental operating system concepts.
The distribution includes the OSP project generator, which can be used to package a project and produce stubs (files that are empty except for required components, and that can be compiled) for the files that the students must implement. OSP includes a simulator that the student code runs on.
See also
Mobile operating system
Network operating system
References
OSP: An Environment for Operating System Projects by Michael Kifer and Scott A. Smolka, Addison Wesley, 1991, 86 pages (2nd printing in 1992).
External links
1992 paper (ACM portal)
1996 paper
Discontinued operating systems | Operating System Projects | [
"Technology"
] | 184 | [
"Operating system stubs",
"Computing stubs"
] |
13,574,754 | https://en.wikipedia.org/wiki/Immunoconjugate | Immunoconjugates are antibodies conjugated (joined) to a second molecule, usually a toxin, radioisotope or label.
These conjugates are used in immunotherapy and to develop monoclonal antibody therapy as a targeted form of chemotherapy, in which case they are often known as antibody-drug conjugates.
When the conjugates include a radioisotope, see radioimmunotherapy.
When the conjugates include a toxin, see immunotoxin.
References
Further reading
Technology Insight: cytotoxic drug immunoconjugates for cancer therapy (2007).
Targeted Therapy of Cancer: New Prospects for Antibodies and Immunoconjugates (2006).
Arming antibodies: prospects and challenges for immunoconjugates (2005).
Immunology
Antineoplastic drugs | Immunoconjugate | [
"Biology"
] | 192 | [
"Immunology"
] |
13,574,846 | https://en.wikipedia.org/wiki/Automated%20quality%20control%20of%20meteorological%20observations | A meteorological observation at a given place can be inaccurate for a variety of reasons, such as a hardware defect. Quality control can help spot which meteorological observations are inaccurate.
One of the main automated quality control programs used today in the area of meteorological observations is the meteorological assimilation data ingest system (MADIS).
History
Weather observation quality control systems check probability, history, and trends. One of the main and simplest forms of quality control is the probability check, which throws out impossible observations, such as a dew point higher than the temperature, or data outside acceptable ranges, such as temperatures over 200 degrees Fahrenheit. Another basic quality control check compares the data to preset geographic extremes, perhaps combined with diurnal variations; however, this only flags the data as uncertain, because the station could be reporting correctly and there is no way to know. A better approach is to correlate the observation with previous observations in addition to the simple checks. This method uses one-hour persistence to check the quality of the current observation, and it improves the continuity of observations, since the system is able to make better judgments about whether the current observations are bad or not.
Current
Systems such as MADIS use a three-pronged approach to quality control. This approach performs better mainly because it has more information against which to compare the current observation. The first part of the process is the limit check: as already described, the program checks whether the observation lies within predetermined limits set according to what can physically exist. The second part is the spatial check, which compares the station to its closest surrounding stations. The third part is internal, temporal checking, which compares the observation to previous ones from the same station and assesses whether it makes sense. It also takes into account present weather conditions so that the data are not considered bad just because the system is set for fair weather.
MADIS's quality control checks are organized into three levels. Level one comprises the validity tests, level two the internal checks and statistical spatial tests, and level three the spatial test. The level-two statistical spatial test determines whether the station has failed any quality control check more than 75% of the time during the previous seven days; once this has happened, the station continues to be failed until it improves to failing only 25% of the time. The spatial check in the MADIS program also uses a reanalysis procedure: if there is a large difference between the station being checked and the station it is being checked against, then one of them is wrong. Instead of assuming that the station being checked is wrong, the program moves on to other nearby stations. If the station being checked still differs greatly from most of the surrounding stations, it is flagged as bad; however, if the station agrees with all of the others except one, that one is assumed to be bad.
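The kinds of checks described above can be sketched in a few lines of Python. The sketch below is purely illustrative and is not MADIS code; the limits, thresholds, and observation values are invented for the example.

# Illustrative quality-control checks; NOT MADIS code. Limits, thresholds,
# and observation values below are invented for this example.

def limit_check(temp_f, low=-80.0, high=135.0):
    # Validity test: reject values outside a physically plausible range.
    return low <= temp_f <= high

def persistence_check(current, previous, max_change=20.0):
    # Temporal/internal check: flag an implausible jump from the value
    # observed one hour earlier (skipped if no previous value exists).
    return previous is None or abs(current - previous) <= max_change

def spatial_check(current, neighbors, max_diff=15.0):
    # Spatial check: flag the observation only if it disagrees with the
    # majority of nearby stations, not merely with a single neighbor.
    if not neighbors:
        return True
    disagreements = sum(abs(current - t) > max_diff for t in neighbors)
    return disagreements <= len(neighbors) / 2

observation, previous, neighbors = 72.0, 70.0, [71.0, 73.5, 69.0]
accepted = (limit_check(observation)
            and persistence_check(observation, previous)
            and spatial_check(observation, neighbors))
print("observation accepted" if accepted else "observation flagged")

An observation is accepted here only if it passes all three checks, mirroring the layered structure of the real system at a much-simplified scale.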
References
Meteorological data and networks
Sensors
Error detection and correction | Automated quality control of meteorological observations | [
"Technology",
"Engineering"
] | 602 | [
"Sensors",
"Error detection and correction",
"Reliability engineering",
"Measuring instruments"
] |
13,575,052 | https://en.wikipedia.org/wiki/Stimulus%20control | In behavioral psychology, stimulus control is a phenomenon in operant conditioning that occurs when an organism behaves in one way in the presence of a given stimulus and another way in its absence. A stimulus that modifies behavior in this manner is either a discriminative stimulus or stimulus delta. For example, the presence of a stop sign at a traffic intersection alerts the driver to stop driving and increases the probability that braking behavior occurs. Stimulus control does not force behavior to occur, as it is a direct result of historical reinforcement contingencies, as opposed to reflexive behavior elicited through classical conditioning.
Some theorists believe that all behavior is under some form of stimulus control. For example, in the analysis of B. F. Skinner, verbal behavior is a complicated assortment of behaviors with a variety of controlling stimuli.
Characteristics
The controlling effects of stimuli are seen in quite diverse situations and in many aspects of behavior. For example, a stimulus presented at one time may control responses emitted immediately or at a later time; two stimuli may control the same behavior; a single stimulus may trigger behavior A at one time and behavior B at another; a stimulus may control behavior only in the presence of another stimulus, and so on. These sorts of control are brought about by a variety of methods and they can explain many aspects of behavioral processes.
In simple, practical situations, for example if one were training a dog using operant conditioning, optimal stimulus control might be described as follows:
The behavior occurs immediately when the discriminative stimulus is given.
The behavior never occurs in the absence of the stimulus.
The behavior never occurs in response to some other stimulus.
No other behavior occurs in response to this stimulus.
Establishing stimulus control through operant conditioning
Discrimination training
Operant stimulus control is typically established by discrimination training. For example, to make a light control a pigeon's pecks on a button, reinforcement only occurs following a peck to the button. Over a series of trials the pecking response becomes more probable in the presence of the light and less probable in its absence, and the light is said to become a discriminative stimulus or SD. Virtually any stimulus that the animal can perceive may become a discriminative stimulus, and many different schedules of reinforcement may be used to establish stimulus control. For example, a green light might be associated with a VR 10 schedule and a red light associated with a FI 20-sec schedule, in which case the green light will control a higher rate of response than the red light.
Generalization
After a discriminative stimulus is established, similar stimuli are found to evoke the controlled response. This is called stimulus generalization. As the stimulus becomes less and less similar to the original discriminative stimulus, response strength declines; measurements of the response thus describe a generalization gradient.
An experiment by Hanson (1959) provides an early, influential example of the many experiments that have explored the generalization phenomenon. First a group of pigeons was reinforced for pecking a disc illuminated by a light of 550 nm wavelength, and never reinforced otherwise. Reinforcement was then stopped, and a series of different wavelength lights was presented one at a time. The results showed a generalization gradient: the more the wavelength differed from the trained stimulus, the fewer responses were produced.
Many factors modulate the generalization process. One is illustrated by the remainder of Hanson's study, which examined the effects of discrimination training on the shape of the generalization gradient. Birds were reinforced for pecking at a 550 nm light, which looks yellowish-green to human observers. The birds were not reinforced when they saw a wavelength more toward the red end of the spectrum. Each of four groups saw a single unreinforced wavelength, either 555, 560, 570, or 590 nm, in addition to the reinforced 550 wavelength. The birds were then tested as before, with a range of unreinforced wavelengths. This procedure yielded sharper generalization gradients than did the simple generalization procedure used in the first part of the study. In addition, however, Hanson's experiment showed a new phenomenon, called the "peak shift". That is, the peak of the test gradients shifted away from the SD, such that the birds responded more often to a wavelength they had never seen before than to the reinforced SD. An earlier theory involving inhibitory and excitatory gradients partially explained the results; a more detailed quantitative model of the effect was proposed by Blough (1975). Other theories have been proposed, including the idea that the peak shift is an example of relational control; that is, the discrimination was perceived as a choice between the "greener" of two stimuli, and when a still greener stimulus was offered the pigeons responded even more rapidly to that than to the originally reinforced stimulus.
Matching to sample
In a typical matching-to-sample task, a stimulus is presented in one location (the "sample"), and the subject chooses a stimulus in another location that matches the sample in some way (e.g., shape or color). In the related "oddity" matching procedure, the subject responds to a comparison stimulus that does not match the sample. These are called "conditional" discrimination tasks because which stimulus is responded to depends or is "conditional" on the sample stimulus.
The matching-to-sample procedure has been used to study a very wide range of problems. Of particular note is the "delayed matching to sample" variation, which has often been used to study short-term memory in animals. In this variation, the subject is exposed to the sample stimulus, and then the sample is removed and a time interval, the "delay", elapses before the choice stimuli appear. To make a correct choice the subject has to retain information about the sample across the delay. The length of the delay, the nature of the stimuli, events during the delay, and many other factors have been found to influence performance on this task.
Cannabinoids
Psychoactive cannabinoids produce discriminative stimulus effects by stimulation of CB1 receptors in the brain.
See also
Behavior therapy
Behaviorism
Motivating operation
Quantitative analysis of behavior
Signal detection
Self-control
References
Further reading
Staddon, J. E. R. (2001). Adaptive dynamics – The theoretical analysis of behavior. The MIT Press. London, England.
Experimental psychology
Behaviorism
Behavioral concepts | Stimulus control | [
"Biology"
] | 1,293 | [
"Behavior",
"Behavioral concepts",
"Behaviorism"
] |
13,575,397 | https://en.wikipedia.org/wiki/CG%20suppression | CG suppression is a term for the phenomenon that CG dinucleotides are very uncommon in most portions of vertebrate genomes.
In adult somatic tissues, cytosine residues may be methylated, and this occurs almost exclusively within a symmetric CpG context. Methylated C residues spontaneously deaminate to form T residues; hence CpG dinucleotides steadily mutate to TpG dinucleotides, which gives rise to the under-representation of CpG dinucleotides in the human genome (they occur at only 21% of the expected frequency). (On the other hand, spontaneous deamination of unmethylated C residues gives rise to U residues, a mutation that is quickly recognized and repaired by the cell).
In human and mouse, CG is the least frequent dinucleotide, making up less than 1% of all dinucleotides. GC is the second least frequent, making up more than 4% of all dinucleotides, so CG is more than fourfold less frequent than any other dinucleotide.
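The degree of CpG suppression in a sequence is commonly expressed as the ratio of observed to expected CpG dinucleotides, where the expected count is derived from the C and G content. Below is a minimal sketch of that calculation; the example sequence is invented for illustration.

```python
# Observed/expected CpG ratio; the expected count assumes C and G pair at random.

def cpg_obs_exp_ratio(seq):
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    observed = seq.count("CG")              # CpG dinucleotides actually present
    expected = (c * g) / n if n else 0.0    # expected CpG count for this length
    return observed / expected if expected else float("nan")

print(round(cpg_obs_exp_ratio("ATGCGATCCTAGGCATTTACG"), 2))
```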
See also
CpG island
References
Genomics techniques | CG suppression | [
"Chemistry",
"Biology"
] | 243 | [
"Genetics techniques",
"Genomics techniques",
"Molecular biology techniques"
] |
13,575,792 | https://en.wikipedia.org/wiki/HAT-P-4b | HAT-P-4b is a confirmed extrasolar planet orbiting the star HAT-P-4, over 1000 light years away in the constellation Boötes. It was discovered on October 2, 2007, by the transit method, which looks for the slight dimming of a star caused by a planet passing in front of it. It is the fourth planet discovered by the HATNet Project. It is also called BD+36 2593b, TYC 2569-01599-1b, 2MASS J15195792+3613467b and SAO 64638b.
It is a hot Jupiter with 68% the mass and 127% the radius of Jupiter, and 41% the density of water (31% of Jupiter). Since the inclination is known by transit observation, the true mass is known. The secondary eclipse where the planet passes behind the star was detected by the Spitzer Space Telescope leading to the discovery that the planet has inefficient heat transfer from its day to night side.
A 2012 study utilizing the Rossiter–McLaughlin effect determined that the planetary orbit is probably aligned with the rotational axis of the star, with a misalignment of −4.9°.
References
External links
Boötes
Transiting exoplanets
Hot Jupiters
Giant planets
Exoplanets discovered in 2007
Exoplanets discovered by HATNet | HAT-P-4b | [
"Astronomy"
] | 277 | [
"Boötes",
"Constellations"
] |
13,575,891 | https://en.wikipedia.org/wiki/Extracellular%20polymeric%20substance | Extracellular polymeric substances (EPSs) are natural polymers of high molecular weight secreted by microorganisms into their environment. EPSs establish the functional and structural integrity of biofilms, and are considered the fundamental component that determines the physicochemical properties of a biofilm. EPS in the matrix of biofilms provides compositional support and protection of microbial communities from the harsh environments. Components of EPS can be of different classes of polysaccharides, lipids, nucleic acids, proteins, lipopolysaccharides, and minerals.
Components
EPSs are mostly composed of polysaccharides (exopolysaccharides) and proteins, but include other macromolecules such as DNA, lipids and humic substances. EPSs are the construction material of bacterial settlements and either remain attached to the cell's outer surface, or are secreted into its growth medium. These compounds are important in biofilm formation and cells' attachment to surfaces. EPSs constitute 50% to 90% of a biofilm's total organic matter.
Exopolysaccharides
Exopolysaccharides (also sometimes abbreviated EPSs; EPS sugars hereafter) are the sugar-based parts of EPS. Microorganisms synthesize a wide spectrum of multifunctional polysaccharides including intracellular polysaccharides, structural polysaccharides and extracellular polysaccharides or exopolysaccharides. Exopolysaccharides generally consist of monosaccharides and some non-carbohydrate substituents (such as acetate, pyruvate, succinate, and phosphate).
Exopolysaccharides are secreted from microorganisms including microalgae into the surrounding environment during their growth or propagation. They can either be loosely attached to the cell wall or excreted into the environment. Many microalgae, especially a variety of red algae and cyanobacteria, are producers of structurally diverse exopolysaccharides. Additionally, exopolysaccharides are involved in cell-to-cell interactions, adhesion, and biofilm formation.
Exopolysaccharides are widely used in the food industry as thickeners and gelling additives, which improve food quality and texture. Currently, exopolysaccharides have received much attention for their antibacterial, anti-oxidative, and anticancer properties, which lead to the development of promising pharmaceutical candidates. Since exopolysaccharides are released into the culture medium, they can be easily recovered and purified. Different strategies used for the economical extraction and other downstream processing were discussed in a chapter of the referenced book.
The minerals, which result from biomineralization processes regulated by the environment or the bacteria, are also essential components of the exopolysaccharides. They provide structural integrity to the biofilm matrix and act as a scaffold to protect bacterial cells from shear forces and antimicrobial chemicals. The minerals in EPS were found to contribute to the morphogenesis of bacteria and the structural integrity of the matrix. For example, in Bacillus subtilis, Mycobacterium smegmatis, and Pseudomonas aeruginosa biofilms, calcite (CaCO3) contributes to the integrity of the matrix. The minerals are also associated with medical conditions. In the biofilms of Proteus mirabilis, Proteus vulgaris, and Providencia rettgeri, the minerals calcium and magnesium cause catheter encrustation.
Constituents
A 2013 review described sulfated polysaccharides synthesized by 120 marine microalgae, most of which are EPS. These heteropolymers consist mainly of galactose, glucose, and xylose in different proportions, except those from Gyrodinium impudicum, which are homopolymers. Most EPS from cyanobacteria are also complex anionic heteropolymers containing six to ten different monosaccharides, one or more uronic acids, and various functional substituents such as methyl, acetate, pyruvate, sulfate groups, and proteins. For instance, the EPS from Arthrospira platensis is a heteropolymer with protein (55%) moieties and a complex polysaccharide composition, containing seven neutral sugars: glucose, rhamnose, fucose, galactose, xylose, arabinose, and mannose, as well as two uronic acids, galacturonic acid and glucuronic acid.
Dunaliella salina is a unicellular green alga of outstanding halotolerance. Salt stress induces the secretion of extracellular polymeric substances from D. salina. It is speculated that the release of complex mixtures of macromolecular polyelectrolytes with high polysaccharide content contributes to the survival strategy of D. salina in varying salt concentrations. Four monosaccharides (galactose, glucose, xylose, and fructose) were detected in the hydrolysate of EPS from D. salina under salt stress. In contrast, the water-soluble polysaccharides released by Chlorella pyrenoidosa contain galactose, arabinose, mannose, ribose, xylose, fucose, and rhamnose; their release depends on the cell photosynthetic activity and reproductive state.
Strategies for increasing EPS yield
Although the EPS from microalgae have many potential applications, their low yield is one of the major limitations for scale-up in industry. The type and amount of EPS obtained from a certain microalgae-culture depends on several factors, such as culture system design, nutritional and culture conditions, as well as the recovery and purification process. Therefore, the configuration and optimization of production systems are critical for the further development of applications.
Examples of successful increase of EPS yield include
an optimized medium (for Chlamydomonas reinhardtii),
an examination of the nutritional conditions including higher salinity and nitrogen concentration (for Botryococcus braunii),
the addition of sulfate and magnesium salts in the culture medium (P. cruentum),
a co-culturing of Chlorella and Spirulina with the Basidiomycete Trametes versicolor,
and a novel mutagenesis tool (atmospheric and room temperature plasma, ARTP), leading to an increase of EPS production of up to 34% (volumetric yield of 1.02 g/L).
It was suggested that co-cultures of microalgae and other microorganisms can be used more universally as a technology to increase the production of EPS, since microorganisms may respond to the interaction partners by secreting EPS as a strategy during unfavorable conditions.
List of Exopolysaccharides (EPSes)
acetan (Acetobacter xylinum)
alginate (Azotobacter vinelandii, Pseudomonas spp.)
cellulose (Acetobacter xylinum)
chitosan (Mucorales spp.)
curdlan (Alcaligenes faecalis var. myxogenes)
cyclosophorans (Agrobacterium spp., Rhizobium spp. and Xanthomonas spp.)
dextran (Leuconostoc mesenteroides, Leuconostoc dextranicum and Lactobacillus hilgardii)
emulsan (Acinetobacter calcoaceticus)
galactoglucopolysaccharides (Achromobacter spp., Agrobacterium radiobacter, Pseudomonas marginalis, Rhizobium spp. and Zooglea spp.)
galactosaminogalactan (Aspergillus spp.)
gellan (Aureomonas elodea and Sphingomonas paucimobilis)
glucuronan (Sinorhizobium meliloti)
N-acetylglucosamine (Staphylococcus epidermidis)
N-acetyl-heparosan (Escherichia coli)
hyaluronic acid (Streptococcus equi)
indican (Beijerinckia indica)
kefiran (Lactobacillus hilgardii)
lentinan (Lentinus elodes)
levan (Alcaligenes viscosus, Zymomonas mobilis, Bacillus subtilis)
pullulan (Aureobasidium pullulans)
scleroglucan (Sclerotium rolfsii, Sclerotium delfinii and Sclerotium glucanicum)
schizophyllan (Schizophyllum commune)
stewartan (Pantoea stewartii subsp. stewartii)
succinoglycan (Alcaligenes faecalis var. myxogenes, Sinorhizobium meliloti)
xanthan (Xanthomonas campestris)
welan (Alcaligenes spp.)
Exoenzymes
Exoenzymes are enzymes secreted by microorganisms, such as bacteria and fungi, to function outside their cells. These enzymes are crucial for breaking down large molecules in the environment into smaller ones that the microorganisms can absorb (transport into their cells) and use for growth and energy.
Several studies have demonstrated that the activity of extracellular enzymes in aquatic microbial ecology is of algal origin. These exoenzymes released from microalgae include alkaline phosphatases, chitinases, β-d-glucosidases, proteases etc. and can influence the growth of microorganisms, chemical signaling, and biogeochemical cycling in ecosystems. The study of these exoenzymes may help to optimize the nutrient supplement strategy in aquaculture. Nevertheless, only a few of the enzymes were isolated and purified. Selected prominent enzyme classes are highlighted in the cited literature.
Extracellular Proteases
The green microalgae Chlamydomonas coccoides and Dunaliella sp., as well as Chlorella sphaerkii (a unicellular marine chlorophyte), were found to produce extracellular proteases. The diatom Chaetoceros didymus releases substantial amounts of proteases into the medium; this production is induced by the presence of the lytic bacterium Kordia algicida and is connected to the resistance of this alga against the effects of this bacterium. Some proteases are of functional importance in viral life cycles, thus being attractive targets for drug development.
Phycoerythrin-like Proteins
Phycobiliproteins are water soluble light-capturing proteins, produced by cyanobacteria, and several algae. These pigments have been explored as fluorescent tags, food coloring agents, cosmetics, and immunological diagnostic agents. Most of these pigments are synthesized and accumulated intracellularly. As an exception, the cyanobacteria Oscillatoria and Scytonema sp. release an extracellular phycoerythrin-like 250 kDa protein. This pigment inhibits the growth of the green algae Chlorella fusca and Chlamydomonas and can be potentially used as an algicide.
Extracellular Phenoloxidases
Phenols are an important group of ecotoxins due to their toxicity and persistence. Many microorganisms can degrade aromatic pollutants and use them as a source of energy, and the ability of microalgae to degrade a multitude of aromatic compounds including phenolic compounds is increasingly recognized. Some microalgae including Chlamydomonas sp., Chlorella sp., Scenedesmus sp. and Anabaena sp. are able to degrade various phenols such as pentachlorophenol, p-nitrophenol, and naphthalenesulphonic acids. Though the metabolic degradation pathways are not fully understood, enzymes including phenoloxidase laccase (EC 1.10.3.2) and laccase-like enzymes are involved in the oxidation of aromatic substrates. These exoenzymes can be potentially applied in the environmental degradation of phenolic pollutants.
Protease Inhibitors
Protease inhibitors are a class of compounds that inhibit the activity of proteases (enzymes responsible for cleaving peptide bonds in proteins). These inhibitors are crucial in various biological processes and therapeutic applications, as proteases play key roles in numerous physiological functions, including digestion, immune response, blood coagulation, and cell signaling.
An extracellular cysteine protease inhibitor, ECPI-2, was purified from the culture medium of Chlorella sp. The inhibitor had an inhibitory effect against the proteolytic activity of papain, ficin, and chymopapain. ECPI-2 contains 33.6% carbohydrate residues that may be responsible for the stability of the enzyme under neutral or acidic conditions. These inhibitor proteins from Chlorella may be synthesized to protect cells from attacks by e.g., viruses or herbivores. Compared to organic compounds, peptide drugs are of relatively low toxicity to the human body. The development of peptide inhibitors as drugs is thus an attractive research topic in current medicinal chemistry. Protease inhibitors are attractive agents in the treatment of specific diseases; for instance, elastase is of critical importance in diseases like lung emphysema, which motivates further investigation on microalgal protease inhibitors as valuable lead-structures in pharmaceutical development.
Biofilm
Biofilm formation
The first step in the formation of biofilms is adhesion. The initial bacterial adhesion to surfaces involves the adhesin–receptor interactions. Certain polysaccharides, lipids and proteins in the matrix function as the adhesive agents. EPS also promotes cell–cell cohesion (including interspecies recognition) to facilitate microbial aggregation and biofilm formation. In general, the EPS-based matrix mediates biofilm assembly as follows. First, EPS formation takes place at the site of adhesion; the EPS is either produced on the bacterial surface or secreted onto the surface of attachment, and forms an initial polymeric matrix promoting microbial colonization and cell clustering. Next, continuous production of EPS further expands the matrix in three dimensions while forming a core of bacterial cells. The bacterial core provides a supporting framework, and facilitates the development of 3D clusters and the aggregation of microcolonies. Studies on P. aeruginosa, B. subtilis, V. cholerae, and S. mutans suggested that the transition from initial cell clustering to microcolony appears to be conserved among different biofilm-forming model organisms. As an example, S. mutans produces exoenzymes called glucosyltransferases (Gtfs), which synthesize glucans in situ using host dietary sugars as substrates. Gtfs even bind to bacteria that do not synthesize Gtfs, and therefore facilitate interspecies and interkingdom coadhesion.
Significance in biofilms
Afterwards, as biofilm becomes established, EPS provides physical stability and resistance to mechanical removal, antimicrobials, and host immunity. Exopolysaccharides and environmental DNA (eDNA) contribute to viscoelasticity of mature biofilms so that detachment of biofilm from the substratum will be challenging even under sustained fluid shear stress or high mechanical pressure. In addition to mechanical resistance, EPS also promotes protection against antimicrobials and enhanced drug tolerance. Antimicrobials cannot diffuse through the EPS barrier, resulting in limited drug access into the deeper layers of the biofilm. Moreover, positively charged agents will bind to negatively charged EPS contributing to the antimicrobial tolerance of biofilms, and enabling inactivation or degradation of antimicrobials by enzymes present in biofilm matrix. EPS also functions as local nutrient reservoir of various biomolecules, such as fermentable polysaccharides. A study on V. cholerae in 2017 suggested that due to osmotic pressure differences in V. cholerae biofilms, the microbial colonies physically swell, therefore maximizing their contact with nutritious surfaces and thus, nutrient uptake.
In microalgal biofilms
EPS is found in the matrix of other microbial biofilms such as microalgal biofilms. The formation of the biofilm and the structure of the EPS share a lot of similarities with bacterial ones. The formation of the biofilm starts with reversible adsorption of floating cells to the surface. Following the production of EPS, the adsorption becomes irreversible. EPS binds the cells to the surface through hydrogen bonding. Replication of the early colonizers is facilitated by the presence of organic molecules in the matrix, which provide nutrients to the algal cells. As the colonizers reproduce, the biofilm grows and becomes a 3-dimensional structure. Microalgal biofilms consist of 90% EPS and 10% algal cells. Algal EPS has similar components to the bacterial one; it is made up of proteins, phospholipids, polysaccharides, nucleic acids, humic substances, uronic acids and some functional groups, such as phosphoric, carboxylic, hydroxyl and amino groups. Algal cells consume EPS as their source of energy and carbon. Furthermore, EPS protects them from dehydration and reinforces the adhesion of the cells to the surface. In algal biofilms, EPS has two sub-categories: soluble EPS (sEPS) and bound EPS (bEPS), with the former being distributed in the medium and the latter attached to the algal cells. Bound EPS can be further subdivided into tightly bound EPS (TB-EPS) and loosely bound EPS (LB-EPS). Several factors contribute to the composition of EPS, including species, substrate type, nutrient availability, temperature, pH and light intensity.
Ecology
Exopolysaccharides can facilitate the attachment of nitrogen-fixing bacteria to plant roots and soil particles, which mediates a symbiotic relationship. This is important for colonization of roots and the rhizosphere, which is a key component of soil food webs and nutrient cycling in ecosystems. It also allows for successful invasion and infection of the host plant. Bacterial extracellular polymeric substances can aid in bioremediation of heavy metals as they have the capacity to adsorb metal cations, among other dissolved substances. This can be useful in the treatment of wastewater systems, as biofilms are able to bind to and remove metals such as copper, lead, nickel, and cadmium. The binding affinity and metal specificity of EPSs varies, depending on polymer composition as well as factors such as concentration and pH. In a geomicrobiological context, EPSs have been observed to affect precipitation of minerals, particularly carbonates. EPS may also bind to and trap particles in biofilm suspensions, which can restrict dispersion and element cycling. Sediment stability can be increased by EPS, as it influences cohesion, permeability, and erosion of the sediment. There is evidence that the adhesion and metal-binding ability of EPS affects mineral leaching rates in both environmental and industrial contexts. These interactions between EPS and the abiotic environment allow for EPS to have a large impact on biogeochemical cycling. Predator-prey interactions between biofilms and bacterivores, such as the soil-dwelling nematode Caenorhabditis elegans, had been extensively studied. Via the production of sticky matrix and formation of aggregates, Yersinia pestis biofilms can prevent feeding by obstructing the mouth of C. elegans. Moreover, Pseudomonas aeruginosa biofilms can impede the slithering motility of C. elegans, termed as 'quagmire phenotype', resulting in trapping of C. elegans within the biofilms and preventing the exploration of nematodes to feed on susceptible biofilms. This significantly reduced the ability of predator to feed and reproduce, thereby promoting the survival of biofilms.
Capsular exopolysaccharides can protect pathogenic bacteria against desiccation and predation, and contribute to their pathogenicity. Sessile bacteria fixed and aggregated in biofilms are less vulnerable compared to drifting planktonic bacteria, as the EPS matrix is able to act as a protective diffusion barrier. The physical and chemical characteristics of bacterial cells can be affected by EPS composition, influencing factors such as cellular recognition, aggregation, and adhesion in their natural environments.
Use
So far, biomass-based production of industrial microalgae has been widely applied in the fields from food and feed to high-value chemicals for pharmaceutical and ecological applications.
Although the commercial cultivation of microalgae became increasingly popular, only algal biomass is processed to current products, while huge volumes of algae-free media are unexploited in flow through cultures and after biomass harvesting of batch cultures. Medium recycling to save culturing costs faces the big risk of growth inhibition. High volumes of spent media give rise to environmental pollution and cost of water and nutrition supply in cultivation when the media are discarded directly to the environment. Therefore the application of recycling methods motivated by the simultaneous generation of high value products from spent medium bears potential in commercial and environmental perspectives.
Cosmetics and medicine
In nutraceutical industries, Arthrospira (Spirulina) and Chlorella are the most important species in commercialization as health foods and nutrition supplements, with various health benefits including enhanced immune system activity, anti-tumor effects, and animal growth promotion, owing to their abundant proteins, vitamins, active polysaccharides, and other important compounds. Microalgal carotenoids, such as β-carotene from Dunaliella and astaxanthin from Haematococcus, are commercially produced in large-scale processes. Microalgae-derived products are currently being successfully developed for use in cosmetics and pharmaceutical products. Examples include the polysaccharides from cyanobacteria used in personal skin care products and extracts of Chlorella sp. which contain oligopeptides that can promote firmness of the skin. In the pharmaceutical industry, drug candidates with anti-inflammatory, anticancer, and anti-infective activities have been identified. For instance, adenosine from Phaeodactylum tricornutum can act as an anti-arrhythmic agent for the treatment of tachycardia, and the green algal metabolite caulerpin is featured in studies of anti-tuberculosis activities.
Moreover, some extracellular polysaccharides from microalgae have various bioactivities involving antitumor, anti-inflammatory, and antiviral activity, providing promising prospects for pharmaceutical applications.
Food and feed
Microalgae such as Isochrysis galbana, Nannochloropsis oculata, Chaetoceros muelleri, Chaetoceros gracilis and P. tricornutum have long been utilized in aquaculture as direct or indirect feed sources in hatcheries to provide excellent nutritional conditions for early juveniles of farmed fish, shellfish, and shrimp.
Furthermore, the EPS layer acts as a nutrient trap, facilitating bacterial growth. The exopolysaccharides of some strains of lactic acid bacteria, e.g., Lactococcus lactis subsp. cremoris, contribute a gelatinous texture to fermented milk products (e.g., Viili), and these polysaccharides are also digestible. An example of the industrial use of exopolysaccharides is the application of dextran in panettone and other breads in the bakery industry.
B. subtilis has gained interest for its probiotic properties due to its biofilm which allows it to effectively maintain a favorable microenvironment in the gastrointestinal tract. In order to survive the passage through the upper gastrointestinal tract, B. subtilis produces an extracellular matrix that protects it from stressful environments such as the highly acidic environment in the stomach.
Energy
Oleaginous microalgae are becoming attractive as alternative sources of biofuels with the potential to meet global demand for renewable bioenergy. Enhanced oil recovery (EOR) using extracellular biopolymers from microalgae may be an upcoming field of application.
In recent years, EPS sugars from marine bacteria have been found to speed up the cleanup of oil spills. During the Deepwater Horizon oil spill in 2010, these EPS-producing bacteria were able to grow and multiply rapidly. It was later found that their EPS sugars dissolved the oil and formed oil aggregates on the ocean surface, which sped up the cleaning process. These oil aggregates also provided a valuable source of nutrients for other marine microbial communities. This let scientists modify and optimize the use of EPS sugars to clean up oil spills.
Agriculture and decontamination
During the growth, microalgae produce and secrete metabolites such as acetate or glycerol into the medium. Extracellular metabolites (EM) from microalgae have important ecological significances. For instance, marine microalgae release a large amount of dissolved organic substances (DOS), which serve as energy sources for heterotrophs in algal-bacterial symbiotic interactions. Excretions into the pericellular space determine, to a great degree, the course of allelopathic interactions between microalgae and other microorganisms. Some allelopathic compounds from microalgae are realized as environment-friendly herbicides or biocontrol agents with direct perspectives for their biotechnological use.
In B. subtilis, the protein matrix component, TasA, and the exopolysaccharide have both been shown to be essential for effective plant-root colonization in Arabidopsis and tomato plants. It was also suggested that TasA plays an important role in mediating interspecies aggregation with streptococci.
Due to the growing need to find a more efficient and environmentally friendly alternative to conventional waste removal methods, industries are paying more attention to the function of bacteria and their EPS sugars in bioremediation.
Researchers found that adding EPS sugars from cyanobacteria to wastewaters removes heavy metals such as copper, cadmium and lead. EPS sugars alone can physically interact with these heavy metals and take them in through biosorption. The efficiency of removal can be optimized by treating the EPS sugars with different acids or bases before adding them to wastewater. Some contaminated soils contain high levels of polycyclic aromatic hydrocarbons (PAHs); EPSs from the bacterium Zoogloea sp. and the fungus Aspergillus niger, are efficient at removing these toxic compounds. EPSs contain enzymes such as oxidoreductase and hydrolase, which are capable of degrading PAHs. The amount of PAH degradation depends on the concentration of EPSs added to the soil. This method proves to be low cost and highly efficient.
New approaches to target biofilms
The application of nanoparticles (NPs) is one of the novel promising techniques for targeting biofilms, due to their high surface-area-to-volume ratio, their ability to penetrate the deeper layers of biofilms and their capacity to release antimicrobial agents in a controlled way. Studying NP–EPS interactions could provide a deeper understanding of how to develop more effective nanoparticles. "Smart release" nanocarriers that can penetrate biofilms and be triggered by pathogenic microenvironments to deliver drugs or multifunctional compounds (ranging from catalytic nanoparticles to aptamers, dendrimers, and bioactive peptides) have been developed to disrupt the EPS and the viability or metabolic activity of the embedded bacteria. Factors that alter the potential of the NPs to transport antimicrobial agents into the biofilm include the physicochemical interactions of the NPs with EPS components, the characteristics of the water spaces (pores) within the EPS matrix, and the EPS matrix viscosity. Size and surface properties (charge and functional groups) of the NPs are the major determinants of penetration into, and interaction with, the EPS. Another potential antibiofilm strategy is phage therapy. Bacteriophages, viruses that invade specific bacterial host cells, have been suggested to be effective agents for penetrating biofilms. In order to reach maximum efficacy in eradicating biofilms, therapeutic strategies need to target both the biofilm matrix components and the microorganisms embedded within the complex biofilm microenvironment.
See also
Extracellular matrix in multi-cellular organisms
Exopolymer
Integrin
Sea snot
References
External links
EPS, BioMineWiki
Microbiology terms
Bacteria
Bacteriology
Environmental soil science
Membrane biology
Biological matter
Microbiology
Biomolecules
Polymers
Water treatment | Extracellular polymeric substance | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology",
"Environmental_science"
] | 6,207 | [
"Prokaryotes",
"Environmental engineering",
"Polymers",
"Water treatment",
"Polymer chemistry",
"Microbiology terms",
"Water technology",
"Environmental soil science",
"Natural products",
"Membrane biology",
"Water pollution",
"Biomolecules",
"Microscopy",
"Molecular biology",
"Microbiol... |
13,576,258 | https://en.wikipedia.org/wiki/Transmission%20delay | In a network based on packet switching, transmission delay (or store-and-forward delay, also known as packetization delay or serialization delay) is the amount of time required to push all the packet's bits into the wire. In other words, this is the delay caused by the data-rate of the link.
Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. This delay is proportional to the packet's length in bits. It is given by the following formula:
D_T = N / R seconds
where:
D_T is the transmission delay in seconds;
N is the number of bits;
R is the rate of transmission (say, in bits per second).
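As a quick illustration of the formula, the following sketch computes the transmission delay for a 1,500-byte packet on a 100 Mbit/s link; the packet size and link rate are chosen only as example values.

```python
# Transmission delay D_T = N / R for example values of N and R.

packet_bits = 1500 * 8      # N: packet length in bits (a common Ethernet frame size)
link_rate = 100e6           # R: transmission rate in bits per second (100 Mbit/s)

transmission_delay = packet_bits / link_rate
print(f"{transmission_delay * 1e6:.1f} microseconds")   # 120.0 microseconds
```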
Most packet switched networks use store-and-forward transmission at the input of the link. A switch using store-and-forward transmission will receive (save) the entire packet to the buffer and check it for CRC errors or other problems before sending the first bit of the packet into the outbound link. Thus, store-and-forward packet switches introduce a store-and-forward delay at the input to each link along the packet's route.
See also
End-to-end delay
Processing delay
Queuing delay
Propagation delay
Network delay
Round-trip delay
References
External links
Java Applet
Network Delays
de:Laufzeit (Elektrotechnik)
Computer networks engineering | Transmission delay | [
"Technology",
"Engineering"
] | 280 | [
"Computing stubs",
"Computer networks engineering",
"Computer engineering",
"Computer network stubs"
] |
13,576,575 | https://en.wikipedia.org/wiki/Isoionic%20point | The isoionic point is the pH value at which a zwitterion molecule has an equal number of positive and negative charges and no adherent ionic species. It was first defined by S.P.L. Sørensen, Kaj Ulrik Linderstrøm-Lang and Ellen Lund in 1926 and is mainly a term used in protein sciences.
It is different from the isoelectric point (pI) in that pI is the pH value at which the net charge of the molecule, including bound ions is zero. Whereas the isoionic point is at net charge zero in a deionized solution. Thus, the isoelectric and isoionic points are equal when the concentration of charged species is zero.
For a diprotic acid, the hydrogen ion concentration can be found at the isoionic point using the following equation
[H+] = sqrt( (K1·K2·C + K1·Kw) / (K1 + C) )
where:
[H+] is the hydrogen ion concentration
K1 is the first acid dissociation constant
K2 is the second acid dissociation constant
Kw is the dissociation constant for water
C is the concentration of the acid
Note that if C >> K1 then K1 + C ≈ C, and if K1·K2·C >> K1·Kw then the term in Kw can be neglected. Therefore, under these conditions, the equation simplifies to
[H+] ≈ sqrt( K1·K2 )
The equation can be further simplified to calculate the pH by taking the negative logarithm of both sides to yield
pH ≈ (pK1 + pK2) / 2
which shows that under certain conditions, the isoionic and isoelectric point are similar.
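A short numerical sketch of the full expression is given below, using pK values roughly corresponding to glycine (pK1 = 2.34, pK2 = 9.60) at an assumed concentration of 0.1 M; these numbers are illustrative assumptions, not data from the source.

```python
# Isoionic pH from the full diprotic-acid expression above; constants are
# illustrative (roughly glycine) and not taken from the article.
import math

K1, K2 = 10**-2.34, 10**-9.60     # first and second acid dissociation constants
Kw, C = 1.0e-14, 0.1              # water dissociation constant and acid concentration

h = math.sqrt((K1 * K2 * C + K1 * Kw) / (K1 + C))
print(f"isoionic pH = {-math.log10(h):.2f}")   # close to (pK1 + pK2) / 2 = 5.97
```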
References
Zwitterions | Isoionic point | [
"Physics",
"Chemistry"
] | 267 | [
"Ions",
"Zwitterions",
"Matter"
] |
13,577,532 | https://en.wikipedia.org/wiki/Tunnel%20finisher | A tunnel finisher is a machine that removes wrinkles from garments and is often used in the textile industry. As with other industrial pressing equipment, this machine is employed to improve the quality and look of a textile product. It has a chamber called a "tunnel" and includes a conveyor fed unit through which the garments are steamed and dried. The machine also features hook systems; air curtain entrance to eliminate moisture or condensation; cotton care and roller units; exhaust steam, and a preconditioning module.
Process
Most garments are shipped by sea freight from the country of production. They become very wrinkled because of the box packing used. In the receiving country, they are unpacked and put on clothes hangers. Those hangers are sent via automated transport through the tunnel at a rate of up to 3,000 garments per hour. These garments are then sent to a room to be steamed and dried.
The machine processes each garment through several stages. First, the garment passes through a steam chamber to make the fabric moldable. Then wrinkles are removed by a strong hot air flow alongside the garments. Finally, the garment is dried by cooler air before it leaves the tunnel finisher. In the case of garments, smaller areas such as collars require further pressing using other equipment such as steam iron for a better finish.
The tunnel finisher is also used in laundries and dry cleaners to remove wrinkles from garments after washing or dry cleaning.
Classifications
Tunnel finishers can be grouped into two classifications: "wide body" and "narrow body". "Wide body" machines are designed for high-production finishing of blended garments wet-to-dry, damp-to-dry and/or dry-to-dry. "Narrow body" machines are designed for shoulder-to-shoulder processing and are best suited for the dry-to-dry finishing of garments. However, they are capable of damp-to-dry finishing at slower production speeds. These units are ideal for dry cleaners, hotel laundries, institutional laundries and other on-premises laundry applications. The smaller capacity version of the tunnel finisher is called a "cabinet tunnel" and is typically capable of automated processing of separate batches of 4 or 5 garments at the same time. The production capacity of this smaller equipment is 10 percent of that of the full-size tunnel finisher.
References
Machines
Clothing industry | Tunnel finisher | [
"Physics",
"Technology",
"Engineering"
] | 481 | [
"Physical systems",
"Machines",
"Mechanical engineering"
] |
13,577,623 | https://en.wikipedia.org/wiki/Seven%20segment%20display%20character%20representations | The various shapes of numerical digits, letters, and punctuation on seven-segment displays are not standardized by any relevant entity (e.g. ISO, IEEE or IEC). Unicode 13.0 provides codepoints for segmented digits in the Symbols for Legacy Computing block.
Digit
Two basic conventions are in common use for some Arabic numerals: display segment A is optional for digit 6, segment F for 7, and segment D for 9. Although the segment combination EF could also be used to represent digit 1, this seems to be rarely done if ever. The segment combination CDEG is occasionally encountered on older calculators to represent 0.
In Unicode 13.0, ten codepoints (U+1FBF0–U+1FBF9) were assigned to segmented digits 0–9 in the Symbols for Legacy Computing block.
Alphabet
In addition to the ten digits, seven-segment displays can be used to show most letters of the Latin, Cyrillic and Greek alphabets including punctuation.
One such special case is the display of the letters A–F when denoting the hexadecimal values (digits) 10–15. These are needed on some scientific calculators, and are used with some testing displays on electronic equipment. Although there is no official standard, today most devices displaying hex digits use the same unique forms: uppercase A, lowercase b, uppercase C, lowercase d, uppercase E and F. To avoid ambiguity between the digit 6 and the letter b, the digit 6 is displayed with segment A lit.
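The hexadecimal convention just described can be written out as segment sets, as in the sketch below. Segments are labeled A–G with A the top bar and G the middle bar; encoding each character as the set of lit segments is merely one convenient, non-standard way to record the patterns.

```python
# Common (non-standardized) seven-segment patterns for hex digits, written as the
# set of lit segments; A is the top bar, G the middle bar.

SEGMENTS = {
    "0": "ABCDEF",  "1": "BC",     "2": "ABDEG",  "3": "ABCDG",
    "4": "BCFG",    "5": "ACDFG",  "6": "ACDEFG", "7": "ABC",
    "8": "ABCDEFG", "9": "ABCDFG",
    "A": "ABCEFG",  "B": "CDEFG",  # rendered as lowercase b
    "C": "ADEF",    "D": "BCDEG",  # rendered as lowercase d
    "E": "ADEFG",   "F": "AEFG",
}

def lit_segments(hex_digit):
    """Return the set of segments lit for a single hexadecimal digit."""
    return set(SEGMENTS[hex_digit.upper()])

print(sorted(lit_segments("6")))   # ['A', 'C', 'D', 'E', 'F', 'G'] -- segment A lit
```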
However, this modern scheme was not always followed in the past, and various other schemes could be found as well:
The Texas Instruments seven-segment display decoder chips 7446/7447/7448/7449 and 74246/74247/74248/74249 and the Siemens FLH551-7448/555-8448 chips used truncated versions of "2", "3", "4", "5" and "6" for digits A–E. Digit F (1111 binary) was blank.
Soviet programmable calculators like the Б3-34 instead used the symbols "−", "L", "C", "Г", "E", and " " (space) to display hexadecimal numbers above nine. (The Б3-34 character set allowed for a cross-alphabet display of the English word "Error" as either EГГ0Г or 3ГГ0Г, depending on the error, in all-numeric form during error messages.)
Not all 7-segment decoders were suitable for displaying digits above nine at all. For comparison, the National Semiconductor MM74C912 displayed "o" for A and B, "−" for C, D and E, and blank for F.
The CD4511 even just displayed blanks.
The Magic Black Box, an electronic version of the Magic 8 Ball toy, used a ROM to generate 64 different 16-character alphanumeric messages on a LED display. It could not generate K, M, V, W, and X but it could generate a question mark.
For the remainder of characters, ad hoc and corporate solutions dominate the field of using seven-segment displays to show general words and phrases. Such applications of seven-segment displays are usually not considered essential and are only used for basic notifications on consumer electronics appliances (as is the case of this article's example phrases), and as internal test messages on equipment under development. Certain letters (M, V, W, X in the Latin alphabet) cannot be expressed unambiguously at all due to either diagonal strokes, more than two vertical strokes, or inability to distinguish them from other letters, while others can only be expressed in either capital form or lowercase form but not both. The Nine-segment display, fourteen-segment display, sixteen-segment display or dot matrix display are more commonly used for hardware that requires the display of messages that are more than trivial.
Examples
The following phrases come from a portable media player's seven-segment display. They give a good illustration of an application where a seven-segment display may be sufficient for displaying letters, since the relevant messages are neither critical nor in any significant risk of being misunderstood, much due to the limited number and rigid domain specificity of the messages. As such, there is no direct need for a more expressive display, in this case, although even a slightly wider repertoire of messages would require at least a 14-segment display or a dot matrix one.
See also
Calculator spelling
New Alphabet
References
External links
Seven Segment Optical Character Recognition
Display technology
Digital typography
Writing systems introduced in the 1900s | Seven-segment display character representations | [
"Engineering"
] | 974 | [
"Electronic engineering",
"Display technology"
] |
13,577,914 | https://en.wikipedia.org/wiki/Footspeed | Footspeed, or sprint speed, is the maximum speed at which a human can run. It is affected by many factors, varies greatly throughout the population, and is important in athletics and many sports, such as association football, Australian rules football, American football, track and field, field hockey, tennis, baseball, and basketball.
Factors in speed
The key determinant of footspeed in sprinting is the predominance of one distinct type of muscle fibre over another, specifically the ratio of fast-twitch muscles to slow-twitch muscles in a sprinter's physical makeup. Though fast-twitch muscles produce no more energy than slow-twitch muscles when they contract, they do so more rapidly through a process of anaerobic metabolism, though at the cost of inferior efficiency over longer periods of firing. The average human has an almost-equal ratio of fast-twitch to slow-twitch fibers, but top sprinters may have as much as 80% fast-twitch fibers, while top long-distance runners may have only 20%. This ratio is believed to have genetic origins, though some assert that it can be adjusted by muscle training. "Speed camps" and "Speed Training Manuals", which purport to provide fractional increases in maximum footspeed, are popular among budding professional athletes, and some sources estimate that 17–19% of speed can be trained.
Though good running form is useful in increasing speed, fast and slow runners have been shown to move their legs at nearly the same rate – it is the force exerted by the leg on the ground that separates fast sprinters from slow. Top short-distance runners exert as much as four times their body weight in pressure on the running surface. For this reason, muscle mass in the legs, relative to total body weight, is a key factor in maximizing footspeed.
Limits of speed
The record is 44.72 km/h (27.78 mph), measured between meter 60 and meter 80 of the 100 meters sprint at the 2009 World Championships in Athletics by Usain Bolt. (Bolt's average speed over the course of this race was 37.578 km/h or 23.35 mph.)
Compared to quadrupedal animals, humans are exceptionally capable of endurance, but incapable of great speed. Examples of animals with higher sprinting speeds include cheetahs which can attain short bursts of speed well over 100 km/h (62 mph), the American quarter horse has topped 88 km/h (55 mph), greyhounds can reach 70 km/h (43 mph), and the Mongolian wild ass has been measured at 64 km/h (40 mph). Even the domestic cat may reach 48 km/h (30 mph).
In the 2023 Chicago Marathon, Kelvin Kiptum set a time of 2:00:35. That equates to an average speed above 20 km/h (12.47 mph) for two hours.
See also
Walking speed, the normal pace humans walk.
Notes
Sport of athletics terminology
Running
Velocity | Footspeed | [
"Physics"
] | 624 | [
"Physical phenomena",
"Physical quantities",
"Motion (physics)",
"Vector physical quantities",
"Velocity",
"Wikipedia categories named after physical quantities"
] |
13,578,015 | https://en.wikipedia.org/wiki/MPLAB | MPLAB is a proprietary freeware integrated development environment for the development of embedded applications on PIC and dsPIC microcontrollers, and is developed by Microchip Technology.
MPLAB Extensions for Visual Studio Code and MPLAB X for NetBeans platform are the latest editions of MPLAB, including support for Microsoft Windows, macOS and Linux operating systems.
MPLAB and MPLAB X support project management, code editing, debugging and programming of Microchip 8-bit PIC and AVR (including ATMEGA) microcontrollers, 16-bit PIC24 and dsPIC microcontrollers, as well as 32-bit SAM and PIC32 microcontrollers by Microchip Technology.
MPLAB X
MPLAB X is the latest version of the MPLAB IDE built by Microchip Technology, and is based on the open-source NetBeans platform. It replaced the older MPLAB 8.x series, which had its final release (version 8.92) on July 23, 2013.
MPLAB X is the first version of the IDE to include cross-platform support for macOS and Linux operating systems, in addition to Microsoft Windows. It supports editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers, although the debugger has known bugs (see below). It supports automatic code generation with the MPLAB Code Configurator and the MPLAB Harmony Configurator plugins.
MPLAB X supports the following compilers:
MPLAB XC8 — C compiler for 8-bit PIC and AVR devices
MPLAB XC16 — C compiler for 16-bit PIC devices
MPLAB XC-DSC - C compiler for dsPIC family of devices
MPLAB XC32 — C/C++ compiler for 32-bit MIPS-based PIC32 and ARM-based SAM devices
HI-TECH C — C compiler for 8-bit PIC devices (discontinued)
SDCC — open-source 8-bit C compiler
Debugger bugs:
Memory view crashes the whole IDE when searching for an address
Step over sometimes steps in and step out doesn't work
Disassembler view is buggy showing incorrect instructions
Phantom breakpoints that can't be cleared
Automatic firmware updates sometimes fail, requiring a full erase of the SNAP
MPLAB 8.x
MPLAB 8.x is the discontinued version of the legacy MPLAB IDE technology, custom built by Microchip Technology in Microsoft Visual C++. MPLAB supports project management, editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers. MPLAB only works on Microsoft Windows. MPLAB is still available from Microchip's archives, but is not recommended for new projects. It is designed to work with MPLAB-certified devices such as the MPLAB ICD 3 and MPLAB REAL ICE, for programming and debugging PIC microcontrollers using a personal computer. PICKit programmers are also supported by MPLAB.
MPLAB supports the following compilers:
MPLAB MPASM Assembler
MPLAB ASM30 Assembler
MPLAB C Compiler for PIC18
MPLAB C Compiler for PIC24 and dsPIC DSCs
MPLAB C Compiler for PIC32
HI-TECH C
References
External links
Microchip MPLAB Website
Embedded systems | MPLAB | [
"Technology",
"Engineering"
] | 693 | [
"Embedded systems",
"Computer science",
"Computer engineering",
"Computer systems"
] |
13,578,381 | https://en.wikipedia.org/wiki/HD%20113766 | HD 113766 is a binary star system located 424 light years from Earth in the direction of the constellation Centaurus. The star system is approximately 10 million years old and both stars are slightly more massive than the Sun. The two are separated by an angle of 1.3 arcseconds, which, at the distance of this system, corresponds to a projected separation of at least 170 AU.
What makes HD 113766 special is the presence of a large belt of warm (~440 K) dust surrounding the star HD 113766 A. The dense dust belt, more than 100 times more massive than the Solar System's asteroid belt, is thought to be collapsing to form a rocky planet, which when it has formed will lie within the star's terrestrial habitable zone where liquid water can exist on its surface. HD 113766 represents the most well understood system in a growing class of objects that should provide more clues to how rocky planets like the Earth formed.
HD 113766 A
Rocky accretion belt
The dusty material in the system was analyzed in 2007 by a group led by Dr. Carey Lisse, of the Johns Hopkins University Applied Physics Laboratory in Laurel, MD, USA. Observations were made using the infrared spectrometer on board the Spitzer Space Telescope, and interpreted using the results of the NASA Deep Impact and STARDUST missions. Analysis of the atomic and mineral composition, dust temperature, and dust mass shows a huge amount of warm material similar to metal-rich S-type asteroids in a narrow belt at 1.8 ± 0.2 AU from HD 113766 A. The group found at least a Mars' mass worth of warm dust in particles of size 10 μm or less, and very likely as much as a few Earth masses of dust if one adds in the contribution of material in bodies up to 1 km in radius, which are currently thought to be the basic building blocks of rocky planet formation. Comparison with current planetary formation theories suggests that the disk is in the early stages of terrestrial (rocky) planet formation. This can also be inferred from the presence of metals in the rocky material making up the disk. If planets had already formed, the high-density metals should have sunk to their cores during the molten stage of planet formation; a process known as planetary differentiation.
Icy accretion belts
While no water gas was found to be associated with the warm dust belt, two concentrations of icy material were found in the system. The first belt lies between 4 and 9 AU, and is at the equivalent position of the solar system's asteroid belt, while the second belt is even farther out between 30 and 80 AU, where the solar system's Kuiper Belt would lie. This material may be the source of future water for the rocky planet at 1.8 AU if and when it completes its formation.
There may also be gas giant planets in this system, already formed (in the first 1-5 Myrs) before the current era of rocky planet formation. While none have been detected to date, by analogy with the Solar System, their presence is likely, since evidence for analogues of the Solar System's asteroid belt, Kuiper belt, and terrestrial planets have been found.
HD 113766 B
The star system was first identified as being potentially interesting by Backman et al. using observations made by the Infrared Astronomical Satellite (IRAS) in 1983. Later measurements in 2001 by a team led by Meyer et al. determined that the system was actually a close binary, with the second star in the system, HD 113766 B, a near twin of HD 113766 A orbiting approximately 170 AU from the A star where the terrestrial planet is forming. Located at more than 4 times the distance of Pluto from the Sun, HD 113766 B has almost no effect on the material orbiting close to HD 113766 A.
Similar star systems
Binary star systems are common, found more frequently than single star systems like the Sun's. The arrangement of HD 113766, a binary star system with a protoplanetary disk around one star, is somewhat similar to the one-half of the system HD 98800, which has been reported to have a large amount of warm dust mass at the equivalent distance of the Solar System's asteroid belt. It is not currently known why both of these star systems should have such configurations; i.e. a protoplanetary disk around part of the system while other stars in the system lack one.
References
Further reading
Full paper at arXiv.org
Durchmusterung objects
113766
Centaurus
Binary stars
Circumstellar disks
F-type main-sequence stars
063975 | HD 113766 | [
"Astronomy"
] | 951 | [
"Centaurus",
"Constellations"
] |
13,578,878 | https://en.wikipedia.org/wiki/Impulse%20excitation%20technique | The impulse excitation technique (IET) is a non-destructive material characterization technique to determine the elastic properties and internal friction of a material of interest. It measures the resonant frequencies in order to calculate the Young's modulus, shear modulus, Poisson's ratio and internal friction of predefined shapes like rectangular bars, cylindrical rods and disc shaped samples. The measurements can be performed at room temperature or at elevated temperatures (up to 1700 °C) under different atmospheres.
The measurement principle is based on tapping the sample with a small projectile and recording the induced vibration signal with a piezoelectric sensor, microphone, laser vibrometer or accelerometer. To optimize the results, a microphone or a laser vibrometer can be used, as there is then no contact between the test-piece and the sensor. Laser vibrometers are preferred for measuring signals in vacuum. Afterwards, the acquired vibration signal in the time domain is converted to the frequency domain by a fast Fourier transformation. Dedicated software determines the resonant frequency with high accuracy and calculates the elastic properties based on classical beam theory.
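As an illustration of this processing step, the following minimal Python sketch (not taken from any particular IET package) extracts the dominant resonant frequency of a synthetic damped vibration signal with a fast Fourier transform; the sampling rate, resonant frequency and decay rate are illustrative assumptions.

```python
import numpy as np

# Synthetic "recorded" signal: a single exponentially damped sinusoid.
fs = 50_000                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)     # 1 s record
f_res, k = 2340.0, 12.0           # resonant frequency (Hz) and decay rate (1/s), assumed
signal = np.exp(-k * t) * np.sin(2 * np.pi * f_res * t)

# Convert to the frequency domain and take the dominant peak as the resonant frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(f"Detected resonant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")
```

Dedicated IET software refines the peak position further, for example by interpolating around the peak or by fitting the damped sinusoid directly in the time domain.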
Elastic properties
Different resonant frequencies can be excited depending on the position of the support wires, the location of the mechanical impulse and the position of the microphone. The two most important resonant frequencies are the flexural one, which is controlled by the Young's modulus of the sample, and the torsional one, which is controlled by the shear modulus (for isotropic materials).
For predefined shapes like rectangular bars, discs, rods and grinding wheels, dedicated software calculates the sample's elastic properties using the sample dimensions, weight and resonant frequency (ASTM E1876-15).
Flexure mode
The first figure gives an example of a test-piece vibrating in the flexure mode. This induced vibration is also referred to as the out-of-plane vibration mode. The in-plane vibration mode can be excited by turning the sample 90° on the axis parallel to its length. The natural frequency of this flexural vibration mode is characteristic of the dynamic Young's modulus.
To minimize the damping of the test-piece, it has to be supported at the nodes where the vibration amplitude is zero. The test-piece is mechanically excited at one of the anti-nodes to cause maximum vibration.
Torsion mode
The second figure gives an example of a test-piece vibrating in the torsion mode. The natural frequency of this vibration is characteristic for the shear modulus.
To minimize the damping of the test-piece, it has to be supported at the center of both axes. The mechanical excitation has to be performed in one corner in order to twist the beam rather than flexing it.
Poisson's ratio
Poisson's ratio is a measure of how much a material tends to expand in directions perpendicular to the direction of compression. After measuring the Young's modulus and the shear modulus, dedicated software determines the Poisson's ratio using Hooke's law, which, according to the different standards, can only be applied to isotropic materials.
Internal friction / Damping
Material damping or internal friction is characterized by the decay of the vibration amplitude of the sample in free vibration, expressed as the logarithmic decrement. The damping behaviour originates from anelastic processes occurring in a strained solid, e.g. thermoelastic damping, magnetic damping, viscous damping and defect damping. For example, different material defects (dislocations, vacancies, etc.) can contribute to an increase in the internal friction between the vibrating defects and the neighboring regions.
Dynamic vs. static methods
Considering the importance of elastic properties for design and engineering applications, a number of experimental techniques have been developed; these can be classified into two groups: static and dynamic methods. Static methods (like the four-point bending test and nanoindentation) are based on direct measurements of stresses and strains during mechanical tests. Dynamic methods (like ultrasound spectroscopy and the impulse excitation technique) provide an advantage over static methods because the measurements are relatively quick and simple and involve only small elastic strains. IET is therefore very suitable for porous and brittle materials like ceramics and refractories. The technique can also easily be modified for high-temperature experiments, and only a small amount of material needs to be available.
Accuracy and uncertainty
The most important parameters defining the measurement uncertainty are the mass and dimensions of the sample. Each parameter therefore has to be measured (and the sample prepared) to an accuracy of 0.1%. The sample thickness is the most critical, as it enters the equation for Young's modulus to the third power. With this level of care, an overall accuracy of 1% can be obtained in most practical applications.
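Assuming the flexural relation for a rectangular bar discussed below, E ∝ m f_f² L³ / (b t³), a first-order propagation of the relative errors gives:

$$\frac{\Delta E}{E} \approx \frac{\Delta m}{m} + 2\,\frac{\Delta f_f}{f_f} + \frac{\Delta b}{b} + 3\,\frac{\Delta L}{L} + 3\,\frac{\Delta t}{t}$$

With 0.1% accuracy on the mass and on each dimension, and a high-precision frequency reading, the thickness and length terms dominate and the total remains of the order of 1%, consistent with the figure quoted above.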
Applications
The impulse excitation technique can be used in a wide range of applications. Nowadays, IET equipment can perform measurements between −50 °C and 1700 °C in different atmospheres (air, inert, vacuum). IET is mostly used in research and as a quality control tool to study transitions as a function of time and temperature.
A detailed insight into the material's crystal structure can be obtained by studying the elastic and damping properties. For example, the interaction of dislocations and point defects in carbon steels can be studied. The material damage accumulated by refractory materials during thermal shock treatment can also be determined. This can be an advantage in understanding the physical properties of certain materials.
Finally, the technique can be used to check the quality of systems. In this case, a reference piece is required to obtain a reference frequency spectrum. Engine blocks for example can be tested by tapping them and comparing the recorded signal with a pre-recorded signal of a reference engine block.
By using simple cluster analysis algorithms or principal component analysis, pattern recognition of samples is also achievable with a set of pre-recorded signals.
Experimental correlations
Rectangular bar
Young's modulus
with
E the Young's modulus
m the mass
ff the flexural frequency
b the width
L the length
t the thickness
T the correction factor
The correction factor can only be used if L/t ≥ 20!
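The flexural formula itself is not reproduced in this text; the commonly used ASTM E1876 expression for a rectangular bar (reconstructed here for reference, to be checked against the standard) is:

$$E = 0.9465\,\left(\frac{m\,f_f^{2}}{b}\right)\left(\frac{L^{3}}{t^{3}}\right)T$$

For L/t ≥ 20 the correction factor reduces to approximately T ≈ 1 + 6.585 (t/L)².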
Shear modulus
with
Note that we assume that b≥t
G the shear modulus
ft the torsional frequency
m the mass
b the width
L the length
t the thickness
R the correction factor
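The torsional formula is likewise missing from this text; the corresponding ASTM E1876-style expression (a reconstruction for reference) is:

$$G = \frac{4\,L\,m\,f_t^{2}}{b\,t}\,R$$

where the correction factor R depends only on the width-to-thickness ratio b/t.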
Cylindrical rod
Young's modulus
with
E the Young's modulus
m the mass
ff the flexural frequency
d the diameter
L the length
T the correction factor
The correction factor can only be used if L/d ≥ 20!
Shear modulus
with
ft the torsional frequency
m the mass
d the diameter
L the length
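Neither the flexural nor the torsional formula for the cylindrical rod is reproduced in this text; the commonly used ASTM E1876 expressions (reconstructed here for reference) are:

$$E = 1.6067\,\left(\frac{L^{3}}{d^{4}}\right) m\,f_f^{2}\,T \qquad\text{and}\qquad G = \frac{16\,m\,f_t^{2}\,L}{\pi\,d^{2}}$$

with the flexural correction factor reducing to approximately T ≈ 1 + 4.939 (d/L)² for L/d ≥ 20.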
Poisson ratio
If the Young's modulus and shear modulus are known, the Poisson's ratio can be calculated according to:
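For an isotropic material this is the standard relation from Hooke's law:

$$\nu = \frac{E}{2G} - 1$$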
Damping coefficient
The induced vibration signal (in the time domain) is fitted as a sum of exponentially damped sinusoidal functions according to:
with
f the natural frequency
δ = k/f the logarithmic decrement (the decay per vibration period)
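The fitted function is not reproduced in this text; a standard form of such a sum of exponentially damped sinusoids is:

$$x(t) = \sum_i A_i\, e^{-k_i t}\,\sin\!\left(2\pi f_i t + \varphi_i\right)$$

where the amplitudes A_i and phases φ_i are additional fit parameters not listed above.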
In this case, the damping parameter Q−1 can be defined as:
with W the energy of the system
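A common definition consistent with this description (valid for light damping) is:

$$Q^{-1} = \frac{\delta}{\pi} = \frac{\Delta W}{2\pi W}$$

where ΔW, not defined in the original text, is the energy dissipated per vibration cycle.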
Extended IET applications: the Resonalyser Method
Isotropic versus orthotropic material behaviour
Isotropic elastic properties can be found by IET using the above-described empirical formulas for the Young's modulus E, the shear modulus G and Poisson's ratio v. For isotropic materials the relation between strains and stresses at any point of a flat sheet is given by the flexibility matrix [S] in the following expression:
In this expression, ε1 and ε2 are the normal strains in the 1- and 2-directions and γ12 is the shear strain. σ1 and σ2 are the normal stresses and τ12 is the shear stress. The orientation of the axes 1 and 2 in the above figure is arbitrary. This means that the values for E, G and v are the same in any material direction.
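The expression referred to above does not survive in this text; the standard plane-stress form of Hooke's law for an isotropic material, written with the symbols defined here, is:

$$\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \gamma_{12} \end{pmatrix} = \begin{pmatrix} 1/E & -\nu/E & 0 \\ -\nu/E & 1/E & 0 \\ 0 & 0 & 1/G \end{pmatrix} \begin{pmatrix} \sigma_1 \\ \sigma_2 \\ \tau_{12} \end{pmatrix}$$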
More complex material behaviour, like orthotropic material behaviour, can be identified by extended IET procedures. A material is called orthotropic when the elastic properties are symmetric with respect to a rectangular Cartesian system of axes. In the case of a two-dimensional state of stress, as in thin sheets, the stress-strain relations for an orthotropic material become:
E1 and E2 are the Young's moduli in the 1- and 2-direction and G12 is the in-plane shear modulus. v12 is the major Poisson's ratio and v21 is the minor Poisson's ratio. The flexibility matrix [S] is symmetric. The minor Poisson's ratio can hence be found if E1, E2 and v12 are known.
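The orthotropic stress-strain relation referred to above is likewise missing from this text; its standard plane-stress form is:

$$\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \gamma_{12} \end{pmatrix} = \begin{pmatrix} 1/E_1 & -\nu_{21}/E_2 & 0 \\ -\nu_{12}/E_1 & 1/E_2 & 0 \\ 0 & 0 & 1/G_{12} \end{pmatrix} \begin{pmatrix} \sigma_1 \\ \sigma_2 \\ \tau_{12} \end{pmatrix}$$

The symmetry of [S] implies ν12/E1 = ν21/E2, which is why the minor Poisson's ratio follows from E1, E2 and ν12.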
The figure above shows some examples of common orthotropic materials: layered uni-directionally reinforced composites with fiber directions parallel to the plate edges, layered bi-directionally reinforced composites, short-fiber-reinforced composites with preference directions (like wooden particle boards), plastics with preference orientation, rolled metal sheets, and many more.
Extended IET for orthotropic material behaviour
Standard methods for the identification of the two Young's moduli E1 and E2 require two tensile, bending or IET tests, one on a beam cut along the 1-direction and one on a beam cut along the 2-direction. The major and minor Poisson's ratios can be identified if the transverse strains are also measured during the tensile tests. The identification of the in-plane shear modulus requires an additional in-plane shearing test.
The "Resonalyser procedure" is an extension of the IET using an inverse method (also called "Mixed numerical experimental method"). The non destructive Resonalyser procedure allows a fast and accurate simultaneous identification of the 4 Engineering constants E1, E2, G12 and v12 for orthotropic materials. For the identification of the four orthotropic material constants, the first three natural frequencies of a rectangular test plate with constant thickness and the first natural frequency of two test beams with rectangular cross section must be measured. One test beam is cut along the longitudinal direction 1, the other one cut along the transversal direction 2 (see Figure on the right).
The Young's modulus of the test beams can be found using the bending IET formula for test beams with a rectangular cross section.
The ratio Width/Length of the test plate must be cut according to the following formula:
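The formula is not reproduced in this text; in the Resonalyser literature the Poisson-plate condition is usually written as

$$\frac{W}{L} = \sqrt[4]{\frac{E_2}{E_1}}$$

assuming the plate length L is measured along direction 1 and the width W along direction 2 (this orientation convention is an assumption here).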
This ratio yields a so-called "Poisson plate". The interesting property of a freely suspended Poisson plate is that the modal shapes associated with the first three resonance frequencies are fixed: the first resonance frequency is associated with a torsional modal shape, the second with a saddle modal shape and the third with a breathing modal shape.
So, without the need to investigate the nature of the modal shapes, IET on a Poisson plate directly reveals its vibrational behaviour.
The question is now how to extract the orthotropic engineering constants from the frequencies measured by IET on the beams and the Poisson plate. This problem can be solved by an inverse method (also called a "mixed numerical/experimental method") based on a finite element (FE) computer model of the Poisson plate. An FE model allows the resonance frequencies to be computed for a given set of material properties.
In an inverse method, the material properties in the finite element model are updated in such a way that the computed resonance frequencies match the measured resonance frequencies.
Problems with inverse methods are:
· The need for good starting values for the material properties
· Are the parameters converging to the correct physical solution?
· Is the solution unique?
The requirements to obtain good results are:
· The FE-model must be sufficiently accurate
· The IET measurements must be sufficiently accurate
· The starting values must be close enough to the final solution to avoid a local minimum (instead of a global minimum)
· The computed frequencies in the FE model of the Poisson plate must be sensitive to variations of all the material parameters
If the Young's moduli (obtained by IET) are fixed as non-variable parameters in the inverse procedure, and only the Poisson's ratio v12 and the in-plane shear modulus G12 are taken as variable parameters in the FE model, the Resonalyser procedure satisfies all the above requirements, as illustrated in the sketch after the list below.
Indeed,
IET yields very accurate resonance frequencies, even with non-expert equipment,
an FE model of a plate can be made very accurate by selecting a sufficiently fine element grid,
the knowledge of the modal shapes of a Poisson plate can be used to generate very good starting values using a virtual field method
and the first three natural frequencies of a Poisson plate are sensitive to variations of all the orthotropic engineering constants.
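As a concrete illustration of the update loop described above, the following minimal Python sketch (not taken from the Resonalyser software) fits v12 and G12 to three measured plate frequencies while keeping E1 and E2 fixed. The finite element solver is replaced by a toy surrogate function so that the example is self-contained, and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Held fixed (obtained beforehand from the beam IET tests); they would enter a
# real FE model but are unused by the toy surrogate below.
E1, E2 = 40e9, 10e9  # Pa

def plate_frequencies(v12, G12):
    """Stand-in for the FE model of the Poisson plate.

    A real implementation would solve the plate eigenvalue problem; here the
    three 'resonance frequencies' are simple monotonic functions of the
    variable parameters, chosen only to make the example run.
    """
    f_torsion = 100.0 * np.sqrt(G12 / 5e9)
    f_saddle = 150.0 * np.sqrt(G12 / 5e9) * (1.0 + 0.5 * v12)
    f_breathing = 200.0 * (1.0 + 0.8 * v12)
    return np.array([f_torsion, f_saddle, f_breathing])

f_measured = np.array([105.0, 170.0, 230.0])  # IET results for the plate (illustrative)

def residuals(x):
    v12, G12 = x
    return plate_frequencies(v12, G12) - f_measured

# Starting values must be reasonable to avoid converging to a local minimum.
result = least_squares(residuals, x0=[0.2, 4e9],
                       bounds=([0.0, 1e8], [0.5, 1e11]),
                       x_scale=[0.1, 1e9])
v12_est, G12_est = result.x
print(f"v12 = {v12_est:.3f}, G12 = {G12_est / 1e9:.2f} GPa")
```

In the real procedure each residual evaluation requires a full FE analysis, which is why good starting values and a well-conditioned choice of variable parameters matter.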
Standards
ASTM E1876 - 15 Standard Test Method for Dynamic Young's Modulus, Shear Modulus, and Poisson's Ratio by Impulse Excitation of Vibration. www.astm.org.
ISO 12680-1:2005 - Methods of test for refractory products -- Part 1: Determination of dynamic Young's modulus (MOE) by impulse excitation of vibration. ISO.
DIN EN 843-2:2007 Advanced technical ceramics - Mechanical properties of monolithic ceramics at room temperature. webstore.ansi.org.
References
Nondestructive testing
Quality control
Materials science
Continuum mechanics | Impulse excitation technique | [
"Physics",
"Materials_science",
"Engineering"
] | 2,804 | [
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"Nondestructive testing",
"Materials testing",
"nan"
] |
13,578,886 | https://en.wikipedia.org/wiki/Tuymaada | The International Olympiad "Tuymaada" is an annual competition for students under the age of 18, held in the Sakha Republic, Russia. The contestants compete individually, in four independent sections: computer science, mathematics, physics and chemistry. The participating teams (national and local teams) can have up to three students for one section (the total number of students that can participate for a team being twelve, plus two teachers: a leader and a deputy leader). The contest is held in July, for two days of competitions. The structure is being in conformity with the International Science Olympiads worldwide and all Russian National Science Olympiads.
History of the Olympiad
In 1993, the Ministry of Education decided to open a summer school to which pupils who had shown good results in various mathematics, physics and computing Olympiads were invited. The leading teachers of the republic taught classes, and a summer Olympiad was held. In that historically important year the team of the Buryatia republic was invited. In 1994, it was decided to hold the first international Olympiad for schoolchildren. The summer school for gifted children was later named the International Summer School "Tuymaada" and played a special role in bringing to life the presidential program "Gifted Children". The school gave a perfect opportunity for the children's personal development. Holding the international Olympiad "Tuymaada" became the main tradition of the School. E.N. Pavlova, head of the department of the Ministry of Education, has been in charge of the Olympiad and principal of the school throughout these years.
The organizers
The organizers of the Olympiads and the initiators of the foundation of specialized classes in the Republic were famous professors such as M.A. Alekseev, S.G. Dyrachov, V.N. Sophroneev, I.E. Serguchev, I.M. Yakoylev and others. The progress of gifted pupils would have been impossible without the great work of the staff of the Verchneviluysk physics and maths school: V. Dolgunov, N. Ivanov, A. Semenov, A. Machasynov. The names of other teachers are also very well known, such as M. Shergin, O. Sukneva, V. Ratchin, G. Isaev, P. Timopheev, J. Nogovichyna, L. Semenov, R. Mustakimov, T. Demidko, E. Kozlova, M. Sleptsova, Z. Chetvertakova, S. Egorova, and others.
The work of the staff of the physics and maths school (at present known as the Republican college), founded in 1977, must also be highlighted. Thanks to the enthusiasm and self-devotion of its first principal, I. Aliev, candidate of sciences and senior lecturer at Yakut State University, the school has become the leading institution in the republic working with gifted children.
Other important names that should be mentioned: A.M. Abramov, Doctor of Sciences, corresponding member of the Russian Academy of Education, Moscow; T.T. Timopheev, candidate of sciences, Novosibirsk; I.F. Sharygin, doctor of sciences, professor of Moscow State University; D.G. Von-Der Flaas, candidate of sciences, member of the all-Russian Olympiads jury; and many other Russian scientists and professors.
Early developments
Nowadays the Olympiads are held in various subjects, with special attention paid to the use of new information technologies. The coordination of distance intellectual studies is carried out by S.V. Popov, professor at Yakut State University and doctor of sciences, and by V.F. Potapov, honoured teacher of the Russian Federation. The achievements and successes of the children show that the chosen direction and approach are the right ones. In the opinion of independent experts, the level of the Olympiad's tasks satisfies the standards of the All-Russian and International Olympiads.
See also
International Science Olympiad
ACM International Collegiate Programming Contest
Central European Olympiad in Informatics
References
External links
http://www.guas.info/competit/tuyme.htm
http://www.wcpsd.org/posters/education/Grigoriev.pdf
http://www.mathlinks.ro
Competitions in Russia
Science competitions
Science events in Russia
Annual events in Russia | Tuymaada | [
"Technology"
] | 918 | [
"Science and technology awards",
"Science competitions"
] |
13,578,970 | https://en.wikipedia.org/wiki/Data%20Control%20Block | In IBM mainframe operating systems, such as OS/360, MVS, z/OS, a Data Control Block (DCB) is a description of a dataset in a program. A DCB is coded in Assembler programs using the DCB macro instruction (which expands into a large number of "define constant" instructions). High level language programmers use library routines containing DCBs.
A DCB is one of the many control blocks used in these operating systems. A control block is a data area with a predefined structure, very similar to a C struct, but typically related only to system functions. A DCB may be compared to a FILE structure in C, but it is much more complex, offering many more options for various access methods.
The control block acts as the application programming interface between logical IOCS and the application program and is usually defined within (and resides within) the application program itself. The addresses of the I/O subroutines are resolved during a link-edit phase after compilation or else dynamically inserted at OPEN time.
The equivalent control block for the IBM DOS/360, DOS/VSE and z/VSE operating systems is the "DTF" (Define The File).
Typical contents of a DCB
symbolic file name (to match a JCL statement for opening the file)
type of access (e.g. random, sequential, indexed)
physical characteristics (blocksize, logical record length)
number of I/O buffers to allocate for processing to permit overlap of I/O
address of I/O operating system library subroutines (e.g. read/write)
other variables as required by the subroutines according to type
Prototype DCBs
Many of the constants and variables contained within a DCB may be left blank (i.e., these default to zero).
The OPEN process results in a merge of the constants and variables specified in the DD JCL statement, and the dataset label for existing magnetic tape and direct-access datasets, into the DCB, replacing the zero values with actual, non-zero values.
A control block called the JFCB (Job File Control Block) initially holds the information extracted from the DD statement for the dataset. The results of the merge are stored in the JFCB which may also be written into the DSCB during the CLOSE process, thereby making the dataset definition permanent.
An example is the BLKSIZE= variable, which may be (and usually is) specified in the DCB as zero. In the DD statement, the BLKSIZE is specified as a non-zero value and this, then, results in a program-specified LRECL (logical record length) and a JCL-specified BLKSIZE (physical block size), with the merge of the two becoming the permanent definition of the dataset.
See also
Data Set Control Block (DSCB), a part of VTOC
Record-oriented filesystem
IBM mainframe operating systems
IBM file systems | Data Control Block | [
"Technology"
] | 626 | [
"Operating system stubs",
"Computing stubs"
] |
13,579,157 | https://en.wikipedia.org/wiki/Dimethyl%20pimelimidate | Dimethyl pimelimidate (DMP) is an organic chemical compound with two functional imidate groups. It is usually available as the more stable dihydrochloride salt. It binds free amino groups at pH range 7.0-10.0 to form amidine bonds.
Uses
DMP is used mainly as a bifunctional coupling reagent to link proteins. It is often used to prepare antibody affinity columns. The appropriate antibody is first incubated with Protein A- or Protein G-agarose and allowed to bind. DMP is then added to couple the molecules together.
Health effects
DMP is irritating to the eyes, skin, mucous membranes and upper respiratory tract. It can exert harmful effects by inhalation, ingestion, or skin absorption.
References
MSDS safety data, also available in other languages
Sigma-Aldrich product detail
MSDS datasheet
Carboximidates | Dimethyl pimelimidate | [
"Chemistry"
] | 191 | [
"Carboximidates",
"Functional groups"
] |
13,580,135 | https://en.wikipedia.org/wiki/Conditional%20short-circuit%20current | Conditional short-circuit current is the value of the alternating current component of a prospective current, which a switch without integral short-circuit protection, but protected by a suitable short circuit protective device (SCPD) in series, can withstand for the operating time of the current under specified test conditions. It may be understood to be the RMS value of the maximum permissible current over a specified time interval (t0,t1) and operating conditions.
The IEC definition is critiqued to be open to interpretation.
References
Electronic circuits | Conditional short-circuit current | [
"Engineering"
] | 106 | [
"Electronic engineering",
"Electronic circuits"
] |
13,581,128 | https://en.wikipedia.org/wiki/List%20of%20buildings%20designed%20by%20Talbot%20Hobbs | This is a list of buildings designed by Talbot Hobbs in Western Australia between 1887 and 1938.
See also
List of heritage buildings in Perth, Western Australia
List of heritage places in Fremantle
List of heritage places in York, Western Australia
References
Hobbs
Hobbs | List of buildings designed by Talbot Hobbs | [
"Engineering"
] | 49 | [
"Architecture stubs",
"Architecture"
] |
13,581,828 | https://en.wikipedia.org/wiki/Surface%20conductivity | Surface conductivity is an additional conductivity of an electrolyte in the vicinity of the charged interfaces. Surface and volume conductivity of liquids correspond to the electrically driven motion of ions in an electric field. A layer of counter ions of the opposite polarity to the surface charge exists close to the interface. It is formed due to attraction of counter-ions by the surface charges. This layer of higher ionic concentration is a part of the interfacial double layer. The concentration of the ions in this layer is higher as compared to the ionic strength of the liquid bulk. This leads to the higher electric conductivity of this layer.
Smoluchowski was the first to recognize the importance of surface conductivity at the beginning of the 20th century.
There is a detailed description of surface conductivity by Lyklema in "Fundamentals of Interface and Colloid Science".
The double layer (DL) has two regions, according to the well-established Gouy-Chapman-Stern model. The upper region, which is in contact with the bulk liquid, is the diffuse layer. The inner layer, which is in contact with the interface, is the Stern layer.
It is possible that the lateral motion of ions in both parts of the DL contributes to the surface conductivity.
The contribution of the Stern layer is less well described. It is often called "additional surface conductivity".
The theory of the surface conductivity of the diffuse part of the DL was developed by Bikerman. He derived a simple equation that links the surface conductivity κσ with the behaviour of ions at the interface. For a symmetrical electrolyte, and assuming identical ion diffusion coefficients D+ = D− = D, it is given in the reference in terms of the following quantities (a common form of the equation is reproduced after the list):
where
F is the Faraday constant
T is the absolute temperature
R is the gas constant
C is the ionic concentration in the bulk fluid
z is the ion valency
ζ is the electrokinetic potential
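The equation itself is not reproduced in this text; a commonly quoted form of Bikerman's result (with κ the reciprocal Debye length, a symbol not defined in the list above) is:

$$\kappa^{\sigma} = \frac{4\,F^{2}\,z^{2}\,C\,D}{R\,T\,\kappa}\left(1 + \frac{3m}{z^{2}}\right)\left[\cosh\!\left(\frac{zF\zeta}{2RT}\right) - 1\right]$$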
The parameter m characterizes the contribution of electro-osmosis to the motion of ions within the DL:
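The expression for m is likewise missing from this text; in the standard treatment it reads (with ε0εm the dielectric permittivity of the liquid and η its dynamic viscosity, symbols not defined above):

$$m = \frac{2\,\varepsilon_0\,\varepsilon_m\,R^{2}T^{2}}{3\,\eta\,F^{2}\,D}$$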
The Dukhin number is a dimensionless parameter that characterizes the contribution of the surface conductivity to a variety of electrokinetic phenomena, such as electrophoresis and electroacoustic phenomena. This parameter, and consequently the surface conductivity, can be calculated from the electrophoretic mobility using an appropriate theory. The electrophoretic instruments by Malvern and the electroacoustic instruments by Dispersion Technology contain software for performing such calculations.
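For reference, the Dukhin number is commonly defined as

$$Du = \frac{\kappa^{\sigma}}{K_m\,a}$$

where K_m is the bulk conductivity of the liquid and a is a characteristic particle size; both symbols are introduced here and do not appear in the original text.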
See also
Interface and Colloid Science
Surface Science
Surface conductivity may also refer to the electrical conduction across a solid surface as measured by surface probes. Experiments may be done to test this material property, for instance the n-type surface conductivity observed on p-type semiconductors. Additionally, surface conductivity is measured in coupled phenomena such as photoconductivity, for example for the metal oxide semiconductor ZnO. Surface conductivity differs from bulk conductivity for reasons analogous to the electrolyte solution case, where the charge carriers, holes (+1) and electrons (−1), play the role of ions in solution.
References
Chemical mixtures
Colloidal chemistry
Surface science
Matter
Soft matter | Surface conductivity | [
"Physics",
"Chemistry",
"Materials_science"
] | 629 | [
"Colloidal chemistry",
"Soft matter",
"Surface science",
"Colloids",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Matter"
] |
13,583,602 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA76 | In molecular biology, SNORA76 (also known as ACA62) is a non-coding RNA (ncRNA) which modifies other small nuclear RNAs (snRNAs). It is a member of the H/ACA class of small nucleolar RNA that guide the sites of modification of uridines to pseudouridines.
This snoRNA was identified by computational screening and its expression in mouse was experimentally verified by Northern blot and primer extension analysis. ACA62 is proposed to guide the pseudouridylation of 18S rRNA U34 and U105.
References
External links
Non-coding RNA | Small nucleolar RNA SNORA76 | [
"Chemistry"
] | 136 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |